A 2.5D convolutional neural network for HPV prediction in advanced oropharyngeal cancer

  • La Greca Saint-Esteven, A.
  • Bogowicz, M.
  • Konukoglu, E.
  • Riesterer, O.
  • Balermpas, P.
  • Guckenberger, M.
  • Tanadini-Lang, S.
  • van Timmeren, J. E.
Comput Biol Med 2022 Journal Article, cited 0 times
Website
BACKGROUND: Infection with human papilloma virus (HPV) is one of the most relevant prognostic factors in advanced oropharyngeal cancer (OPC) treatment. In this study we aimed to assess the diagnostic accuracy of a deep learning-based method for HPV status prediction in computed tomography (CT) images of advanced OPC. METHOD: An internal dataset and three public collections were employed (internal: n = 151, HNC1: n = 451; HNC2: n = 80; HNC3: n = 110). Internal and HNC1 datasets were used for training, whereas HNC2 and HNC3 collections were used as external test cohorts. All CT scans were resampled to a 2 mm³ resolution and a sub-volume of 72×72×72 pixels was cropped on each scan, centered around the tumor. Then, a 2.5D input of size 72×72×3 pixels was assembled by selecting the 2D slice containing the largest tumor area along the axial, sagittal and coronal planes, respectively. The convolutional neural network employed consisted of the first 5 modules of the Xception model and a small classification network. Ten-fold cross-validation was applied to evaluate training performance. At test time, soft majority voting was used to predict HPV status. RESULTS: A final training mean [range] area under the curve (AUC) of 0.84 [0.76-0.89], accuracy of 0.76 [0.64-0.83] and F1-score of 0.74 [0.62-0.83] were achieved. AUC/accuracy/F1-score values of 0.83/0.75/0.69 and 0.88/0.79/0.68 were achieved on the HNC2 and HNC3 test sets, respectively. CONCLUSION: Deep learning was successfully applied and validated in two external cohorts to predict HPV status in CT images of advanced OPC, proving its potential as a support tool in cancer precision medicine.
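
As an illustration of the 2.5D input assembly described in this abstract, here is a minimal sketch, assuming the scan has already been resampled and cropped to a 72×72×72 array centered on the tumor, with a matching binary tumor mask (both hypothetical inputs):

```python
import numpy as np

def assemble_25d_input(volume, tumor_mask):
    """Stack the axial, sagittal, and coronal slices containing the
    largest tumor cross-section into a 72x72x3 pseudo-RGB input."""
    channels = []
    for axis in range(3):
        # Tumor area of every slice along this axis.
        other_axes = tuple(a for a in range(3) if a != axis)
        areas = tumor_mask.sum(axis=other_axes)
        idx = int(np.argmax(areas))              # slice with largest area
        channels.append(np.take(volume, idx, axis=axis))
    return np.stack(channels, axis=-1)           # shape (72, 72, 3)
```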

2Be3-Net: Combining 2D and 3D Convolutional Neural Networks for 3D PET Scans Predictions

  • Thomas, Ronan
  • Schalck, Elsa
  • Fourure, Damien
  • Bonnefoy, Antoine
  • Cervera-Marzal, Inaki
2021 Conference Paper, cited 0 times
Website
Radiomics - high-dimensional features extracted from clinical images - is the main approach used to develop predictive models based on 3D Positron Emission Tomography (PET) scans of patients suffering from cancer. Radiomics extraction relies on an accurate segmentation of the tumoral region, which is a time-consuming task subject to inter-observer variability. On the other hand, data-driven approaches such as deep convolutional neural networks (CNN) struggle to achieve strong performance on PET images due to the absence of large available PET datasets combined with the size of 3D networks. In this paper, we assemble several public datasets to create a large PET dataset of 2,800 scans and propose a deep learning architecture named “2Be3-Net” associating a 2D feature extractor with a 3D CNN predictor. First, we take advantage of a 2D pre-trained model to extract feature maps out of 2D PET slices. Then we apply a 3D CNN on top of the concatenation of the previously extracted feature maps to compute patient-wise predictions. Experiments suggest that 2Be3-Net has an improved ability to exploit spatial information compared to 2D-only or 3D-only CNN solutions. We also evaluate our network on the prediction of clinical outcomes of head-and-neck cancer. The proposed pipeline outperforms PET radiomics approaches on the prediction of loco-regional recurrences and overall survival. Innovative deep learning architectures combining a pre-trained network with a 3D CNN could therefore be a great alternative to traditional CNN and radiomics approaches while empowering small and medium-sized datasets.
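
The core architectural idea, a pretrained 2D feature extractor feeding a 3D CNN predictor, might look roughly like the following PyTorch sketch; the abstract does not commit to ResNet-18 or to these layer sizes, which are assumptions here:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class TwoDBackbone3DHead(nn.Module):
    """2D backbone applied slice-wise; 3D CNN on stacked feature maps."""
    def __init__(self, n_classes=2):
        super().__init__()
        backbone = resnet18(weights="IMAGENET1K_V1")
        self.features2d = nn.Sequential(*list(backbone.children())[:-2])
        self.head3d = nn.Sequential(
            nn.Conv3d(512, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(64, n_classes))

    def forward(self, x):                        # x: (B, D, 3, H, W) slices
        b, d = x.shape[:2]
        f = self.features2d(x.flatten(0, 1))     # (B*D, 512, h, w)
        f = f.reshape(b, d, *f.shape[1:]).permute(0, 2, 1, 3, 4)
        return self.head3d(f)                    # patient-wise prediction
```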

2D and 3D CT Radiomics Features Prognostic Performance Comparison in Non-Small Cell Lung Cancer

  • Shen, Chen
  • Liu, Zhenyu
  • Guan, Min
  • Song, Jiangdian
  • Lian, Yucheng
  • Wang, Shuo
  • Tang, Zhenchao
  • Dong, Di
  • Kong, Lingfei
  • Wang, Meiyun
Translational Oncology 2017 Journal Article, cited 10 times
Website

2D view aggregation for lymph node detection using a shallow hierarchy of linear classifiers

  • Seff, Ari
  • Lu, Le
  • Cherry, Kevin M
  • Roth, Holger R
  • Liu, Jiamin
  • Wang, Shijun
  • Hoffman, Joanne
  • Turkbey, Evrim B
  • Summers, Ronald M
2014 Book Section, cited 21 times
Website
Enlarged lymph nodes (LNs) can provide important information for cancer diagnosis, staging, and measuring treatment reactions, making automated detection a highly sought goal. In this paper, we propose a new algorithm representation of decomposing the LN detection problem into a set of 2D object detection subtasks on sampled CT slices, largely alleviating the curse of dimensionality issue. Our 2D detection can be effectively formulated as linear classification on a single image feature type of Histogram of Oriented Gradients (HOG), covering a moderate field-of-view of 45 by 45 voxels. We exploit both simple pooling and sparse linear fusion schemes to aggregate these 2D detection scores for the final 3D LN detection. In this manner, detection is more tractable and does not need to perform perfectly at instance level (as weak hypotheses) since our aggregation process will robustly harness collective information for LN detection. Two datasets (90 patients with 389 mediastinal LNs and 86 patients with 595 abdominal LNs) are used for validation. Cross-validation demonstrates 78.0% sensitivity at 6 false positives/volume (FP/vol.) (86.1% at 10 FP/vol.) and 73.1% sensitivity at 6 FP/vol. (87.2% at 10 FP/vol.), for the mediastinal and abdominal datasets respectively. Our results compare favorably to previous state-of-the-art methods.
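
A toy version of the per-candidate scoring pipeline, using skimage HOG features, a pre-trained sklearn linear classifier, and the simple average-pooling aggregation; the HOG parameters here are illustrative assumptions, not the paper's settings:

```python
import numpy as np
from skimage.feature import hog

def candidate_score(views, linear_clf):
    """views: 2D slices (~45x45-voxel fields of view) sampled around one
    LN candidate; linear_clf: trained classifier with decision_function
    (e.g., sklearn's LinearSVC). Returns the pooled 3D detection score."""
    feats = np.array([hog(v, pixels_per_cell=(5, 5), cells_per_block=(2, 2))
                      for v in views])
    per_view = linear_clf.decision_function(feats)   # weak 2D hypotheses
    return per_view.mean()                           # average pooling
```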

3D automatic levels propagation approach to breast MRI tumor segmentation

  • Bouchebbah, Fatah
  • Slimani, Hachem
Expert Systems with Applications 2020 Journal Article, cited 0 times
Website
Magnetic Resonance Imaging (MRI) is a relevant tool for breast cancer screening. Moreover, an accurate 3D segmentation of breast tumors from MRI scans plays a key role in the analysis of the disease. In this manuscript, we propose a novel 3D automatic method for segmenting MRI breast tumors, called the 3D Automatic Levels Propagation Approach (3D-ALPA). The proposed method performs the segmentation automatically in two steps: in the first step, the entire MRI volume is segmented slice by slice using a new automatic approach called the 2D Automatic Levels Propagation Approach (2D-ALPA), an improved version of a previous semi-automatic approach named the 2D Levels Propagation Approach (2D-LPA). In the second step, the partial segmentations obtained after the application of 2D-ALPA are recombined to rebuild the complete volume(s) of the tumor(s). 3D-ALPA has several notable characteristics: it is an automatic method that can handle multi-tumor segmentation, and it is easily applicable along the axial, coronal, and sagittal planes, thereby offering a multi-view representation of the segmented tumor(s). To validate the new 3D-ALPA method, we first performed tests on a 2D private dataset of eighteen patients to estimate the accuracy of the new 2D-ALPA in comparison to the previous 2D-LPA. The obtained results favored the proposed 2D-ALPA, showing an improvement in accuracy after automating the approach. We then evaluated the complete 3D-ALPA method on a 3D private dataset consisting of MRI exams of twenty-two patients with real breast tumors of different types, and on the public RIDER dataset. 3D-ALPA was evaluated on two main criteria, segmentation accuracy and running time, for two kinds of breast tumors: non-enhanced and enhanced. The experimental studies showed that 3D-ALPA produced better results for both kinds of tumors than a recent competing method in the literature that addresses the same problem.

3D convolution neural networks for molecular subtype prediction in glioblastoma multiforme

  • Khened, Mahendra
  • Anand, Vikas Kumar
  • Acharya, Gagan
  • Shah, Nameeta
  • Krishnamurthi, Ganapathy
2019 Conference Proceedings, cited 0 times
Website

3D Deep Learning for Anatomical Structure Segmentation in Multiple Imaging Modalities

  • Villarini, B.
  • Asaturyan, H.
  • Kurugol, S.
  • Afacan, O.
  • Bell, J. D.
  • Thomas, E. L.
2021 Journal Article, cited 3 times
Website
Accurate, quantitative segmentation of anatomical structures in radiological scans, such as Magnetic Resonance Imaging (MRI) and Computed Tomography (CT), can produce significant biomarkers and can be integrated into computer-aided diagnosis (CADx) systems to support the interpretation of medical images from multi-protocol scanners. However, there are serious challenges towards developing robust automated segmentation techniques, including high variations in anatomical structure and size, the presence of edge-based artefacts, and heavy, uncontrolled breathing that can produce blurred motion-based artefacts. This paper presents a novel computing approach for automatic organ and muscle segmentation in medical images from multiple modalities by harnessing the advantages of deep learning techniques in a two-part process: (1) a 3D encoder-decoder, Rb-UNet, builds a localisation model and a 3D Tiramisu network generates a boundary-preserving segmentation model for each target structure; (2) the fully trained Rb-UNet predicts a 3D bounding box encapsulating the target structure of interest, after which the fully trained Tiramisu model performs segmentation to reveal detailed organ or muscle boundaries. The proposed approach is evaluated on six different datasets, including MRI, Dynamic Contrast Enhanced (DCE) MRI and CT scans targeting the pancreas, liver, kidneys and psoas-muscle, and achieves quantitative measures of mean Dice similarity coefficient (DSC) that surpass or are comparable with the state-of-the-art. A qualitative evaluation performed by two independent radiologists verified the preservation of detailed organ and muscle boundaries.

3D Isotropic Super-resolution Prostate MRI Using Generative Adversarial Networks and Unpaired Multiplane Slices

  • Liu, Y.
  • Liu, Y.
  • Vanguri, R.
  • Litwiller, D.
  • Liu, M.
  • Hsu, H. Y.
  • Ha, R.
  • Shaish, H.
  • Jambawalikar, S.
J Digit Imaging 2021 Journal Article, cited 0 times
Website
We developed a deep learning-based super-resolution model for prostate MRI. 2D T2-weighted turbo spin echo (T2w-TSE) images are the core anatomical sequences in a multiparametric MRI (mpMRI) protocol. These images have coarse through-plane resolution, are non-isotropic, and have long acquisition times (approximately 10-15 min). The model we developed aims to preserve high-frequency details that are normally lost after 3D reconstruction. We propose a novel framework for generating isotropic volumes using generative adversarial networks (GAN) from anisotropic 2D T2w-TSE and single-shot fast spin echo (ssFSE) images. The CycleGAN model used in this study allows the unpaired dataset mapping to reconstruct super-resolution (SR) volumes. Fivefold cross-validation was performed. The improvements from patch-to-volume reconstruction (PVR) to SR are 80.17%, 63.77%, and 186% for perceptual index (PI), RMSE, and SSIM, respectively; the improvements from slice-to-volume reconstruction (SVR) to SR are 72.41%, 17.44%, and 7.5% for PI, RMSE, and SSIM, respectively. Five ssFSE cases were used to test for generalizability; the perceptual quality of SR images surpasses the in-plane ssFSE images by 37.5%, with 3.26% improvement in SSIM and a higher RMSE by 7.92%. SR images were quantitatively assessed with radiologist Likert scores. Our isotropic SR volumes are able to reproduce high-frequency detail, maintaining comparable image quality to in-plane TSE images in all planes without sacrificing perceptual accuracy. The SR reconstruction networks were also successfully applied to the ssFSE images, demonstrating that high-quality isotropic volume achieved from ultra-fast acquisition is feasible.

3D medical image denoising using 3D block matching and low-rank matrix completion

  • Roozgard, Aminmohammad
  • Barzigar, Nafise
  • Verma, Pramode
  • Cheng, Samuel
2013 Conference Proceedings, cited 0 times
Website
3D denoising, one of the most significant tools in medical imaging, has been widely studied in the literature. However, most existing 3D medical data denoising algorithms have assumed additive white Gaussian noise. In this work, we propose an efficient 3D medical data denoising method that can handle a mixture of various noise types. Our method is based on a modified 2D Adaptive Rood Pattern Search (ARPS) [1] and low-rank matrix completion, as follows. Noisy 3D data are processed blockwise: for each processed 3D block, we find similar 3D blocks in the data, using overlapped 3D patches to further lower the computational complexity. The 3D blocks are then stacked together and unreliable voxels are replaced using a fast matrix completion method [2]. Experimental results show that the proposed method is able to robustly remove the mixed noise from 3D medical data.

3D MRI Brain Tumour Segmentation with Autoencoder Regularization and Hausdorff Distance Loss Function

  • Fonov, Vladimir S.
  • Rosa-Neto, Pedro
  • Collins, D. Louis
2022 Conference Paper, cited 0 times
Website
Manual segmentation of glioblastoma is a challenging task for radiologists, yet essential for treatment planning. In recent years, deep convolutional neural networks have been shown to perform exceptionally well; in particular, the winner of the BraTS 2019 challenge uses a 3D U-net architecture in combination with a variational autoencoder, using the Dice overlap measure as a cost function. In this work we propose a loss function that approximates the Hausdorff distance metric, which is used to evaluate segmentation performance, in the hope of achieving better segmentation performance on new data.
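
The abstract does not give the exact form of the proposed loss; one published distance-transform surrogate for the Hausdorff distance (in the style of Karimi and Salcudean), shown here on NumPy arrays for clarity, weights segmentation errors by their distance to the two boundaries:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def dt_hausdorff_surrogate(pred, target, alpha=2.0):
    """pred: soft segmentation in [0, 1]; target: binary ground truth.
    Errors far from either boundary are penalized more heavily, pushing
    down the large surface deviations that dominate the Hausdorff metric."""
    dist_to_target = distance_transform_edt(1 - target)
    dist_to_pred = distance_transform_edt(pred < 0.5)
    sq_err = (pred - target) ** 2
    return float((sq_err * (dist_to_target**alpha + dist_to_pred**alpha)).mean())
```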

3D multi-view convolutional neural networks for lung nodule classification

  • Kang, Guixia
  • Liu, Kui
  • Hou, Beibei
  • Zhang, Ningbo
PLoS One 2017 Journal Article, cited 7 times
Website

3D Pulmonary Nodules Detection Using Fast Marching Segmentation

  • Paing, MP
  • Choomchuay, S
Journal of Fundamental and Applied Sciences 2017 Journal Article, cited 1 times
Website

3D Reconstruction from CT Images Using Free Software Tools

  • Paulo, Soraia Figueiredo
  • Lopes, Daniel Simões
  • Jorge, Joaquim
2021 Book Section, cited 0 times
Website

3D Registration of pre-surgical prostate MRI and histopathology images via super-resolution volume reconstruction

  • Sood, R. R.
  • Shao, W.
  • Kunder, C.
  • Teslovich, N. C.
  • Wang, J. B.
  • Soerensen, S. J. C.
  • Madhuripan, N.
  • Jawahar, A.
  • Brooks, J. D.
  • Ghanouni, P.
  • Fan, R. E.
  • Sonn, G. A.
  • Rusu, M.
Med Image Anal 2021 Journal Article, cited 0 times
Website
The use of MRI for prostate cancer diagnosis and treatment is increasing rapidly. However, identifying the presence and extent of cancer on MRI remains challenging, leading to high variability in detection even among expert radiologists. Improvement in cancer detection on MRI is essential to reducing this variability and maximizing the clinical utility of MRI. To date, such improvement has been limited by the lack of accurately labeled MRI datasets. Data from patients who underwent radical prostatectomy enables the spatial alignment of digitized histopathology images of the resected prostate with corresponding pre-surgical MRI. This alignment facilitates the delineation of detailed cancer labels on MRI via the projection of cancer from histopathology images onto MRI. We introduce a framework that performs 3D registration of whole-mount histopathology images to pre-surgical MRI in three steps. First, we developed a novel multi-image super-resolution generative adversarial network (miSRGAN), which learns information useful for 3D registration by producing a reconstructed 3D MRI. Second, we trained the network to learn information between histopathology slices to facilitate the application of 3D registration methods. Third, we registered the reconstructed 3D histopathology volumes to the reconstructed 3D MRI, mapping the extent of cancer from histopathology images onto MRI without the need for slice-to-slice correspondence. When compared to interpolation methods, our super-resolution reconstruction resulted in the highest PSNR relative to clinical 3D MRI (32.15 dB vs 30.16 dB for BSpline interpolation). Moreover, the registration of 3D volumes reconstructed via super-resolution for both MRI and histopathology images showed the best alignment of cancer regions when compared to (1) the state-of-the-art RAPSODI approach, (2) volumes that were not reconstructed, or (3) volumes that were reconstructed using nearest neighbor, linear, or BSpline interpolations. The improved 3D alignment of histopathology images and MRI facilitates the projection of accurate cancer labels on MRI, allowing for the development of improved MRI interpretation schemes and machine learning models to automatically detect cancer on MRI.

A 3D semi-automated co-segmentation method for improved tumor target delineation in 3D PET/CT imaging

  • Yu, Zexi
  • Bui, Francis M
  • Babyn, Paul
2015 Conference Proceedings, cited 1 times
Website
The planning of radiotherapy is increasingly based on multi-modal imaging techniques such as positron emission tomography (PET)-computed tomography (CT), since PET/CT provides not only anatomical but also functional assessment of the tumor. In this work, we propose a novel co-segmentation method, utilizing both the PET and CT images, to localize the tumor. The method formulates segmentation as the minimization of a Markov random field model that encapsulates features from both imaging modalities. The minimization problem can then be solved by the maximum flow algorithm, based on graph cuts theory. The proposed tumor delineation algorithm was validated both in a phantom with a high-radiation area and in patient data. The obtained results show significant improvement over existing segmentation methods with respect to various qualitative and quantitative metrics.
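
A toy PET/CT co-segmentation energy solved with PyMaxflow illustrates the MRF-plus-max-flow machinery; the unary terms, thresholds, and weights below are illustrative assumptions, not the paper's energy:

```python
import numpy as np
import maxflow  # PyMaxflow

def cosegment(pet_suv, ct_hu, smoothness=1.0):
    """Binary tumor/background labeling of co-registered PET/CT volumes."""
    g = maxflow.Graph[float]()
    nodes = g.add_grid_nodes(pet_suv.shape)
    g.add_grid_edges(nodes, smoothness)            # pairwise MRF term
    # Unary terms: high SUV and soft-tissue HU both favor "tumor".
    tumor_evidence = ((pet_suv > 2.5).astype(float)
                      + 0.5 * (np.abs(ct_hu - 40.0) < 60.0))
    g.add_grid_tedges(nodes, tumor_evidence, 1.5 - tumor_evidence)
    g.maxflow()                                    # exact MRF minimization
    return g.get_grid_segments(nodes)              # boolean tumor mask
```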

3D spatial priors for semi-supervised organ segmentation with deep convolutional neural networks

  • Petit, O.
  • Thome, N.
  • Soler, L.
Int J Comput Assist Radiol Surg 2022 Journal Article, cited 0 times
Website
PURPOSE: Fully convolutional networks (FCNs) are the most popular models for medical image segmentation. However, they do not explicitly integrate spatial organ positions, which can be crucial for proper labeling in challenging contexts. METHODS: In this work, we propose a method that combines a model representing prior probabilities of an organ position in 3D with visual FCN predictions by means of a generalized prior-driven prediction function. The prior is also used in a self-labeling process to handle low-data regimes, in order to improve the quality of the pseudo-label selection. RESULTS: Experiments carried out on CT scans from the public TCIA pancreas segmentation dataset reveal that the resulting STIPPLE model can significantly increase performance compared to the FCN baseline, especially with few training images. We also show that STIPPLE outperforms state-of-the-art semi-supervised segmentation methods by leveraging the spatial prior information. CONCLUSIONS: STIPPLE provides a segmentation method effective with few labeled examples, which is crucial in the medical domain. It offers an intuitive way to incorporate absolute position information by mimicking expert annotators.
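
One plausible instance of the generalized prior-driven prediction function is a voxelwise product of the FCN posterior with the spatial prior; the exponent alpha below is an assumed knob, not the paper's parameterization:

```python
import numpy as np

def prior_driven_prediction(fcn_probs, prior, alpha=1.0):
    """fcn_probs, prior: (C, D, H, W) per-class probability volumes.
    Returns renormalized class probabilities biased by organ position."""
    combined = fcn_probs * prior ** alpha
    return combined / combined.sum(axis=0, keepdims=True)
```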

3D texture analysis of solitary pulmonary nodules using co-occurrence matrix from volumetric lung CT images

  • Dhara, Ashis Kumar
  • Mukhopadhyay, Sudipta
  • Khandelwal, Niranjan
2013 Conference Proceedings, cited 7 times
Website

3D-MCN: A 3D Multi-scale Capsule Network for Lung Nodule Malignancy Prediction

  • Afshar, Parnian
  • Oikonomou, Anastasia
  • Naderkhani, Farnoosh
  • Tyrrell, Pascal N
  • Plataniotis, Konstantinos N
  • Farahani, Keyvan
  • Mohammadi, Arash
2020 Journal Article, cited 1 times
Website
Despite the advances in automatic lung cancer malignancy prediction, achieving high accuracy remains challenging. Existing solutions are mostly based on Convolutional Neural Networks (CNNs), which require a large amount of training data. Most of the developed CNN models are based only on the main nodule region, without considering the surrounding tissues. Obtaining high sensitivity is challenging with lung nodule malignancy prediction. Moreover, the interpretability of the proposed techniques should be a consideration when the end goal is to utilize the model in a clinical setting. Capsule networks (CapsNets) are new and revolutionary machine learning architectures proposed to overcome shortcomings of CNNs. Capitalizing on the success of CapsNet in biomedical domains, we propose a novel model for lung tumor malignancy prediction. The proposed framework, referred to as the 3D Multi-scale Capsule Network (3D-MCN), is uniquely designed to benefit from: (i) 3D inputs, providing information about the nodule in 3D; (ii) Multi-scale input, capturing the nodule's local features, as well as the characteristics of the surrounding tissues, and; (iii) CapsNet-based design, being capable of dealing with a small number of training samples. The proposed 3D-MCN architecture predicted lung nodule malignancy with a high accuracy of 93.12%, sensitivity of 94.94%, area under the curve (AUC) of 0.9641, and specificity of 90% when tested on the LIDC-IDRI dataset. When classifying patients as having a malignant condition (i.e., at least one malignant nodule is detected) or not, the proposed model achieved an accuracy of 83%, and a sensitivity and specificity of 84% and 81% respectively.

3D-RADNet: Extracting labels from DICOM metadata for training general medical domain deep 3D convolution neural networks

  • Du, Richard
  • Vardhanabhuti, Varut
2020 Conference Proceedings, cited 4 times
Website
Training a deep convolutional neural network requires a large amount of data to obtain good performance and generalisable results. Transfer learning approaches from datasets such as ImageNet have become important in increasing accuracy and lowering the number of training samples required. However, as of now, there has been no popular dataset for training on 3D volumetric medical images, mainly due to the time and expert knowledge required to accurately annotate medical images. In this study, we present a method for extracting labels from DICOM metadata that carry information on the appearance of the scans, in order to train a medical domain 3D convolutional neural network. The labels include imaging modalities and sequences, patient orientation and view, presence of contrast agent, scan target and coverage, and slice spacing. We applied our method and extracted labels from a large number of cancer imaging datasets from TCIA to train a medical domain 3D deep convolutional neural network. We evaluated the effectiveness of the proposed network in transfer learning on a liver segmentation task and found that it achieved superior segmentation performance (DICE=90.0) compared to training from scratch (DICE=41.8). Our proposed network shows promising results as a backbone network for transfer learning to other tasks. Our approach, along with our network, can potentially be used to extract features from large-scale unlabelled DICOM datasets.
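
The kind of header mining the paper describes can be sketched with pydicom; the specific tags pulled here (and their mapping to labels) are our assumptions, not the paper's exact label set:

```python
import pydicom

def labels_from_header(path):
    """Derive appearance-related training labels from DICOM metadata alone."""
    ds = pydicom.dcmread(path, stop_before_pixels=True)
    return {
        "modality": ds.get("Modality"),                    # e.g. CT, MR
        "has_contrast": bool(ds.get("ContrastBolusAgent")),
        "orientation": ds.get("ImageOrientationPatient"),  # patient view
        "slice_spacing": ds.get("SpacingBetweenSlices",
                                ds.get("SliceThickness")),
        "sequence_hint": ds.get("SeriesDescription"),      # e.g. "T2 AX"
    }
```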

3D-SCoBeP: 3D medical image registration using sparse coding and belief propagation

  • Roozgard, Aminmohammad
  • Barzigar, Nafise
  • Verma, Pramode
  • Cheng, Samuel
International Journal of Diagnostic Imaging 2014 Journal Article, cited 4 times
Website

3D/2D model-to-image registration by imitation learning for cardiac procedures

  • Toth, Daniel
  • Miao, Shun
  • Kurzendorfer, Tanja
  • Rinaldi, Christopher A
  • Liao, Rui
  • Mansi, Tommaso
  • Rhode, Kawal
  • Mountney, Peter
International Journal of Computer Assisted Radiology and Surgery 2018 Journal Article, cited 1 times
Website

4D robust optimization including uncertainties in time structures can reduce the interplay effect in proton pencil beam scanning radiation therapy

  • Engwall, Erik
  • Fredriksson, Albin
  • Glimelius, Lars
Medical Physics 2018 Journal Article, cited 2 times
Website

4D-CBCT Registration with a FBCT-derived Plug-and-Play Feasibility Regularizer

  • Sang, Y.
  • Ruan, D.
2021 Conference Paper, cited 0 times
Website
Deformable registration of phase-resolved lung images is an important procedure to appreciate respiratory motion and enhance image quality. Compared to high-resolution fan-beam CTs (FBCTs), cone-beam CTs (CBCTs) are more readily available for on-table acquisition in companion with treatment. However, CBCT registration is challenging because classic regularization energies in conventional methods usually cannot overcome the strong artifacts and the lack of structural details. In this study, we propose to learn an implicit feasibility prior of respiratory motion and incorporate it in a plug-and-play (PnP) fashion into the training of an unsupervised image registration network to improve registration accuracy and robustness to noise and artifacts. In particular, we propose a novel approach to develop a feasibility descriptor from a set of deformation vector fields (DVFs) generated from FBCTs. Subsequently, this FBCT-derived feasibility descriptor was used as a spatially variant regularizer on the DVF Jacobian during unsupervised training for 4D-CBCT registration. In doing so, the higher-quality, higher-confidence information from FBCT is transferred into the more challenging problem of CBCT registration, without explicit FB-CB synthesis. The method was evaluated using manually identified landmarks on real CBCTs and automatically detected landmarks on simulated CBCTs. The method presented good robustness to noise and artifacts and generated physically more feasible DVFs. The target registration errors on the real and simulated data were (1.63 ± 0.98) and (2.16 ± 1.91) mm, respectively, significantly better than the classic bending energy regularization in both the conventional method in SimpleElastix and the unsupervised network. The average registration time was 0.04 s. Keywords: deep learning; image registration; 4D cone-beam CT
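
The quantity being regularized, the voxelwise Jacobian determinant of the DVF, can be computed as below; this is a standard finite-difference construction, not the paper's code:

```python
import numpy as np

def jacobian_determinant(dvf, spacing=(1.0, 1.0, 1.0)):
    """dvf: displacement field of shape (3, D, H, W). Values <= 0 in the
    returned (D, H, W) map indicate physically infeasible folding."""
    grads = [np.gradient(dvf[i], *spacing) for i in range(3)]  # dU_i/dx_j
    J = np.stack([np.stack(g) for g in grads])                 # (3,3,D,H,W)
    J = J + np.eye(3).reshape(3, 3, 1, 1, 1)   # transform = identity + DVF
    return np.linalg.det(np.moveaxis(J, (0, 1), (-2, -1)))
```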

4DCT imaging to assess radiomics feature stability: An investigation for thoracic cancers

  • Larue, Ruben THM
  • Van De Voorde, Lien
  • van Timmeren, Janna E
  • Leijenaar, Ralph TH
  • Berbée, Maaike
  • Sosef, Meindert N
  • Schreurs, Wendy MJ
  • van Elmpt, Wouter
  • Lambin, Philippe
Radiotherapy and Oncology 2017 Journal Article, cited 7 times
Website
BACKGROUND AND PURPOSE: Quantitative tissue characteristics derived from medical images, also called radiomics, contain valuable prognostic information in several tumour-sites. The large number of features available increases the risk of overfitting. Typically test-retest CT-scans are used to reduce dimensionality and select robust features. However, these scans are not always available. We propose to use different phases of respiratory-correlated 4D CT-scans (4DCT) as alternative. MATERIALS AND METHODS: In test-retest CT-scans of 26 non-small cell lung cancer (NSCLC) patients and 4DCT-scans (8 breathing phases) of 20 NSCLC and 20 oesophageal cancer patients, 1045 radiomics features of the primary tumours were calculated. A concordance correlation coefficient (CCC) >0.85 was used to identify robust features. Correlation with prognostic value was tested using univariate cox regression in 120 oesophageal cancer patients. RESULTS: Features based on unfiltered images demonstrated greater robustness than wavelet-filtered features. In total 63/74 (85%) unfiltered features and 268/299 (90%) wavelet features stable in the 4D-lung dataset were also stable in the test-retest dataset. In oesophageal cancer 397/1045 (38%) features were robust, of which 108 features were significantly associated with overall-survival. CONCLUSION: 4DCT-scans can be used as alternative to eliminate unstable radiomics features as first step in a feature selection procedure. Feature robustness is tumour-site specific and independent of prognostic value.
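
The robustness screen reduces to computing Lin's concordance correlation coefficient per feature across breathing phases and keeping features above 0.85; a two-phase sketch (the study compared all 8 phases):

```python
import numpy as np

def ccc(x, y):
    """Lin's concordance correlation coefficient of paired measurements."""
    mx, my = x.mean(), y.mean()
    covariance = ((x - mx) * (y - my)).mean()
    return 2 * covariance / (x.var() + y.var() + (mx - my) ** 2)

def robust_feature_indices(phase_a, phase_b, threshold=0.85):
    """phase_a, phase_b: (n_patients, n_features) radiomics matrices
    computed on two breathing phases of the same tumors."""
    return [j for j in range(phase_a.shape[1])
            if ccc(phase_a[:, j], phase_b[:, j]) > threshold]
```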

[18F] FDG Positron Emission Tomography (PET) Tumor and Penumbra Imaging Features Predict Recurrence in Non-Small Cell Lung Cancer

  • Mattonen, Sarah A.
  • Davidzon, Guido A.
  • Bakr, Shaimaa
  • Echegaray, Sebastian
  • Leung, Ann N. C.
  • Vasanawala, Minal
  • Horng, George
  • Napel, Sandy
  • Nair, Viswam S.
Tomography 2019 Journal Article, cited 0 times
Website
We identified computational imaging features on 18F-fluorodeoxyglucose positron emission tomography (PET) that predict recurrence/progression in non-small cell lung cancer (NSCLC). We retrospectively identified 291 patients with NSCLC from 2 prospectively acquired cohorts (training, n = 145; validation, n = 146). We contoured the metabolic tumor volume (MTV) on all pretreatment PET images and added a 3-dimensional penumbra region that extended outward 1 cm from the tumor surface. We generated 512 radiomics features, selected 435 features based on robustness to contour variations, and then applied randomized sparse regression (LASSO) to identify features that predicted time to recurrence in the training cohort. We built Cox proportional hazards models in the training cohort and independently evaluated the models in the validation cohort. Two features, stage and an MTV-plus-penumbra texture feature, were selected by LASSO. Both features were significant univariate predictors, with stage being the best predictor (hazard ratio [HR] = 2.15 [95% confidence interval (CI): 1.56-2.95], P < .001). However, adding the MTV-plus-penumbra texture feature to stage significantly improved prediction (P = .006). This multivariate model was a significant predictor of time to recurrence in the training cohort (concordance = 0.74 [95% CI: 0.66-0.81], P < .001) that was validated in a separate validation cohort (concordance = 0.74 [95% CI: 0.67-0.81], P < .001). A combined radiomics and clinical model improved NSCLC recurrence prediction. FDG PET radiomic features may be useful biomarkers for lung cancer prognosis and add clinical utility for risk stratification.
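
An L1-penalized Cox fit in lifelines approximates the feature-selection-plus-survival-modeling recipe; the penalty strength and column names are placeholders, and the study used a randomized LASSO variant rather than this plain fit:

```python
import pandas as pd
from lifelines import CoxPHFitter

def lasso_cox_select(features: pd.DataFrame, time, event, penalty=0.1):
    """features: robust radiomics + stage; time/event: follow-up data.
    Returns the predictors surviving the L1 penalty."""
    df = features.copy()
    df["time"], df["event"] = time, event
    cph = CoxPHFitter(penalizer=penalty, l1_ratio=1.0)  # pure LASSO penalty
    cph.fit(df, duration_col="time", event_col="event")
    return cph.params_[cph.params_.abs() > 1e-8]        # nonzero coefficients
```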

AAWS-Net: Anatomy-aware weakly-supervised learning network for breast mass segmentation

  • Sun, Y.
  • Ji, Y.
PLoS One 2021 Journal Article, cited 0 times
Website
Accurate segmentation of breast masses is an essential step in computer-aided diagnosis of breast cancer. The scarcity of annotated training data greatly hinders a model's generalization ability, especially for deep learning based methods. However, high-quality image-level annotations are time-consuming and cumbersome to obtain in medical image analysis scenarios. In addition, a large amount of weak annotations, which contain common anatomical features, is under-utilized. To this end, inspired by teacher-student networks, we propose an Anatomy-Aware Weakly-Supervised learning Network (AAWS-Net) for extracting useful information from mammograms with weak annotations for efficient and accurate breast mass segmentation. Specifically, we adopt a weakly-supervised learning strategy in the Teacher to extract anatomy structure from mammograms with weak annotations by reconstructing the original image. Besides, knowledge distillation is used to suggest morphological differences between benign and malignant masses. Moreover, the prior knowledge learned from the Teacher is introduced to the Student in an end-to-end way, which improves the ability of the student network to locate and segment masses. Experiments on CBIS-DDSM have shown that our method yields promising performance compared with state-of-the-art alternative models for breast mass segmentation in terms of segmentation accuracy and IoU.

Accelerated brain tumor dynamic contrast‐enhanced MRI using Adaptive Pharmaco‐Kinetic Model Constrained method

  • Liu, Fan
  • Li, Dongxiao
  • Jin, Xinyu
  • Qiu, Wenyuan
International Journal of Imaging Systems and Technology 2021 Journal Article, cited 0 times
In brain tumor dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI), spatiotemporally resolved high-quality reconstruction is required for quantitative analysis of some physiological characteristics of brain tissue. By exploiting some kind of sparsity prior, compressed sensing methods can achieve high spatiotemporal DCE-MRI image reconstruction from undersampled k-space data. Recently, as a kind of prior information about contrast agent (CA) concentration dynamics, pharmacokinetic (PK) models have been explored for undersampled DCE-MRI reconstruction. This paper presents a novel dictionary learning-based reconstruction method with Adaptive Pharmaco-Kinetic Model Constraints (APKMC). In APKMC, the prior knowledge about CA dynamics is incorporated into a novel dictionary, which consists of PK model-based atoms and adaptive atoms. The PK atoms are constructed based on the Patlak model and the K-SVD dimension reduction algorithm, and the adaptive ones are used to resolve PK model inconsistencies. To solve APKMC, an optimization algorithm based on variable splitting and alternating iterative optimization is presented. The proposed method has been validated on three brain tumor DCE-MRI data sets by comparison with two state-of-the-art methods. As demonstrated by the quantitative and qualitative analysis of the results, APKMC achieved substantially better quality in the reconstruction of brain DCE-MRI images, as well as in the reconstruction of PK model parameter maps.

Accelerating Brain DTI and GYN MRI Studies Using Neural Network

  • Yan, Yuhao
Medical Physics 2021 Thesis, cited 0 times
Website
There is a constant demand to accelerate the time-consuming MRI acquisition process. Many methods have been proposed to achieve this goal, including deep learning, which appears to be a robust tool compared to conventional methods. While much work has been done to evaluate the performance of neural networks on standard anatomical MR images, little attention has been paid to accelerating other, less conventional MR image acquisitions. This work aims to evaluate the feasibility of neural networks for accelerating Brain DTI and gynecological brachytherapy MRI. Three neural networks, U-net, Cascade-net and PD-net, were evaluated. Brain DTI data were acquired from the public database RIDER NEURO MRI, while cervix gynecological MRI data were acquired from Duke University Hospital clinical data. A 25% Cartesian undersampling strategy was applied to all the training and test data. Diffusion-weighted images and quantitative functional maps in Brain DTI, and T1-spgr and T2 images in the GYN studies, were reconstructed. The performance of the neural networks was evaluated by quantitatively calculating the similarity between the reconstructed images and the reference images, using the metric Total Relative Error (TRE). Results showed that, with the architectures and parameters set in this work, all three neural networks could accelerate Brain DTI and GYN T2 MR imaging. Generally, PD-net slightly outperformed Cascade-net, and both outperformed U-net with respect to image reconstruction performance. While this was also true for the reconstruction of quantitative functional diffusion-weighted maps and GYN T1-spgr images, the overall performance of the three neural networks on these two tasks needs further improvement. In conclusion, PD-net is very promising for accelerating T2-weighted-based MR imaging. Future work can focus on adjusting the parameters and architectures of the neural networks to improve the performance on GYN T1-spgr MR imaging, and on adopting more robust undersampling strategies, such as radial undersampling, to further improve the overall acceleration performance.

Accelerating Machine Learning with Training Data Management

  • Ratner, Alexander Jason
2019 Thesis, cited 1 times
Website
One of the biggest bottlenecks in developing machine learning applications today is the need for large hand-labeled training datasets. Even at the world's most sophisticated technology companies, and especially at other organizations across science, medicine, industry, and government, the time and monetary cost of labeling and managing large training datasets is often the blocking factor in using machine learning. In this thesis, we describe work on training data management systems that enable users to programmatically build and manage training datasets, rather than labeling and managing them by hand, and present algorithms and supporting theory for automatically modeling this noisier process of training set specification in order to improve the resulting training set quality. We then describe extensive empirical results and real-world deployments demonstrating that programmatically building, managing, and modeling training sets in this way can lead to radically faster, more flexible, and more accessible ways of developing machine learning applications. We start by describing data programming, a paradigm for labeling training datasets programmatically rather than by hand, and Snorkel, an open source training data management system built around data programming that has been used by major technology companies, academic labs, and government agencies to build machine learning applications in days or weeks rather than months or years. In Snorkel, rather than hand-labeling training data, users write programmatic operators called labeling functions, which label data using various heuristic or weak supervision strategies such as pattern matching, distant supervision, and other models. These labeling functions can have noisy, conflicting, and correlated outputs, which Snorkel models and combines into clean training labels without requiring any ground truth using theoretically consistent modeling approaches we develop. We then report on extensive empirical validations, user studies, and real-world applications of Snorkel in industrial, scientific, medical, and other use cases ranging from knowledge base construction from text data to medical monitoring over image and video data. Next, we will describe two other approaches for enabling users to programmatically build and manage training datasets, both currently integrated into the Snorkel open source framework: Snorkel MeTaL, an extension of data programming and Snorkel to the setting where users have multiple related classification tasks, in particular focusing on multi-task learning; and TANDA, a system for optimizing and managing strategies for data augmentation, a critical training dataset management technique wherein a labeled dataset is artificially expanded by transforming data points. Finally, we will conclude by outlining future research directions for further accelerating and democratizing machine learning workflows, such as higher-level programmatic interfaces and massively multi-task frameworks.
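
The data programming workflow the thesis describes looks like this in the Snorkel API; the labeling heuristics and the two-document DataFrame are toy stand-ins:

```python
import pandas as pd
from snorkel.labeling import labeling_function, PandasLFApplier
from snorkel.labeling.model import LabelModel

ABSTAIN, NEG, POS = -1, 0, 1

@labeling_function()
def lf_keyword(x):
    # Toy heuristic: a mention of "malignant" in a report suggests POS.
    return POS if "malignant" in x.text.lower() else ABSTAIN

@labeling_function()
def lf_negation(x):
    return NEG if "no evidence of" in x.text.lower() else ABSTAIN

df = pd.DataFrame({"text": ["... malignant lesion ...",
                            "no evidence of disease"]})
L = PandasLFApplier([lf_keyword, lf_negation]).apply(df)  # noisy label matrix
label_model = LabelModel(cardinality=2)
label_model.fit(L)                     # models LF accuracies, no ground truth
train_probs = label_model.predict_proba(L)   # probabilistic training labels
```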

Accuracy of emphysema quantification performed with reduced numbers of CT sections

  • Pilgram, Thomas K
  • Quirk, James D
  • Bierhals, Andrew J
  • Yusen, Roger D
  • Lefrak, Stephen S
  • Cooper, Joel D
  • Gierada, David S
American Journal of Roentgenology 2010 Journal Article, cited 8 times
Website

Accuracy of fractal analysis and PI-RADS assessment of prostate magnetic resonance imaging for prediction of cancer grade groups: a clinical validation study

  • Michallek, F.
  • Huisman, H.
  • Hamm, B.
  • Elezkurtaj, S.
  • Maxeiner, A.
  • Dewey, M.
Eur Radiol 2021 Journal Article, cited 1 times
Website
OBJECTIVES: Multiparametric MRI with Prostate Imaging Reporting and Data System (PI-RADS) assessment is sensitive but not specific for detecting clinically significant prostate cancer. This study validates the diagnostic accuracy of the recently suggested fractal dimension (FD) of perfusion for detecting clinically significant cancer. MATERIALS AND METHODS: Routine clinical MR imaging data, acquired at 3 T without an endorectal coil including dynamic contrast-enhanced sequences, of 72 prostate cancer foci in 64 patients were analyzed. In-bore MRI-guided biopsy with International Society of Urological Pathology (ISUP) grading served as reference standard. Previously established FD cutoffs for predicting tumor grade were compared to measurements of the apparent diffusion coefficient (25th percentile, ADC25) and PI-RADS assessment with and without inclusion of the FD as separate criterion. RESULTS: Fractal analysis allowed prediction of ISUP grade groups 1 to 4 but not 5, with high agreement to the reference standard (kappaFD = 0.88 [CI: 0.79-0.98]). Integrating fractal analysis into PI-RADS allowed a strong improvement in specificity and overall accuracy while maintaining high sensitivity for significant cancer detection (ISUP > 1; PI-RADS alone: sensitivity = 96%, specificity = 20%, area under the receiver operating curve [AUC] = 0.65; versus PI-RADS with fractal analysis: sensitivity = 95%, specificity = 88%, AUC = 0.92, p < 0.001). ADC25 only differentiated low-grade group 1 from pooled higher-grade groups 2-5 (kappaADC = 0.36 [CI: 0.12-0.59]). Importantly, fractal analysis was significantly more reliable than ADC25 in predicting non-significant and clinically significant cancer (AUCFD = 0.96 versus AUCADC = 0.75, p < 0.001). Diagnostic accuracy was not significantly affected by zone location. CONCLUSIONS: Fractal analysis is accurate in noninvasively predicting tumor grades in prostate cancer and adds independent information when implemented into PI-RADS assessment. This opens the opportunity to individually adjust biopsy priority and method in individual patients. KEY POINTS: * Fractal analysis of perfusion is accurate in noninvasively predicting tumor grades in prostate cancer using dynamic contrast-enhanced sequences (kappaFD = 0.88). * Including the fractal dimension into PI-RADS as a separate criterion improved specificity (from 20 to 88%) and overall accuracy (AUC from 0.86 to 0.96) while maintaining high sensitivity (96% versus 95%) for predicting clinically significant cancer. * Fractal analysis was significantly more reliable than ADC25 in predicting clinically significant cancer (AUCFD = 0.96 versus AUCADC = 0.75).
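
A box-counting estimator is one standard way to compute a fractal dimension from a (here, thresholded perfusion) map; the paper's exact FD definition may differ:

```python
import numpy as np

def box_counting_dimension(mask):
    """mask: 2D binary array with at least one foreground pixel.
    Returns the slope of log N(s) vs. log(1/s) over dyadic box sizes s."""
    n = 2 ** int(np.floor(np.log2(min(mask.shape))))
    mask = mask[:n, :n]
    sizes, counts = [], []
    size = n // 2
    while size >= 1:
        blocks = mask.reshape(n // size, size, n // size, size)
        counts.append(blocks.any(axis=(1, 3)).sum())   # occupied boxes
        sizes.append(size)
        size //= 2
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope
```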

An Accuracy vs. Complexity Comparison of Deep Learning Architectures for the Detection of COVID-19 Disease

  • Sarv Ahrabi, Sima
  • Scarpiniti, Michele
  • Baccarelli, Enzo
  • Momenzadeh, Alireza
Computation 2021 Journal Article, cited 0 times
Website

Accurate pancreas segmentation using multi-level pyramidal pooling residual U-Net with adversarial mechanism

  • Li, M.
  • Lian, F.
  • Wang, C.
  • Guo, S.
BMC Med Imaging 2021 Journal Article, cited 0 times
Website
BACKGROUND: A novel multi-level pyramidal pooling residual U-Net with an adversarial mechanism was proposed for organ segmentation from medical imaging, and was evaluated on the challenging NIH Pancreas-CT dataset. METHODS: The 82 pancreatic contrast-enhanced abdominal CT volumes were split via four-fold cross-validation to test the model performance. In order to achieve accurate segmentation, we first incorporated residual learning into an adversarial U-Net to achieve better gradient information flow and improve segmentation performance. Then, we introduced a multi-level pyramidal pooling module (MLPP), in which a novel pyramidal pooling gathers contextual information for segmentation; four groups of structures consisting of different numbers of pyramidal pooling blocks were proposed to search for the structure with the optimal performance, and two types of pooling blocks were applied in the experimental section to further assess the robustness of MLPP for pancreas segmentation. For evaluation, the Dice similarity coefficient (DSC) and recall were used as the metrics in this work. RESULTS: The proposed method exceeded the baseline network by 5.30% and 6.16% on the DSC and recall metrics, respectively, and achieved competitive results compared with state-of-the-art methods. CONCLUSIONS: Our algorithm showed great segmentation performance even on the particularly challenging pancreas dataset, indicating that the proposed model is a satisfactory and promising segmentor.

Acute Lymphoblastic Leukemia Detection Using Depthwise Separable Convolutional Neural Networks

  • Clinton Jr, Laurence P
  • Somes, Karen M
  • Chu, Yongjun
  • Javed, Faizan
SMU Data Science Review 2020 Journal Article, cited 0 times
Website

Acute Tumor Transition Angle on Computed Tomography Predicts Chromosomal Instability Status of Primary Gastric Cancer: Radiogenomics Analysis from TCGA and Independent Validation

  • Lai, Ying-Chieh
  • Yeh, Ta-Sen
  • Wu, Ren-Chin
  • Tsai, Cheng-Kun
  • Yang, Lan-Yan
  • Lin, Gigin
  • Kuo, Michael D
Cancers 2019 Journal Article, cited 0 times
Website
Chromosomal instability (CIN) of gastric cancer is correlated with distinct outcomes. This study aimed to investigate the role of computed tomography (CT) imaging traits in predicting the CIN status of gastric cancer. We screened 443 patients in the Cancer Genome Atlas gastric cancer cohort to filter 40 patients with complete CT imaging and genomic data as the training cohort. CT imaging traits were subjected to logistic regression to select independent predictors for the CIN status. For the validation cohort, we prospectively enrolled 18 gastric cancer patients for CT and tumor genomic analysis. The imaging predictors were tested in the validation cohort using receiver operating characteristic curve (ROC) analysis. Thirty patients (75%) in the training cohort and 9 patients (50%) in the validation cohort had CIN subtype gastric cancers. Smaller tumor diameter (p = 0.017) and acute tumor transition angle (p = 0.045) independently predict CIN status in the training cohort. In the validation cohort, acute tumor transition angle demonstrated the highest accuracy, sensitivity, and specificity of 88.9%, 88.9%, and 88.9%, respectively, and areas under ROC curve of 0.89. In conclusion, this pilot study showed acute tumor transition angle on CT images may predict the CIN status of gastric cancer.

An Ad Hoc Random Initialization Deep Neural Network Architecture for Discriminating Malignant Breast Cancer Lesions in Mammographic Images

  • Duggento, Andrea
  • Aiello, Marco
  • Cavaliere, Carlo
  • Cascella, Giuseppe L
  • Cascella, Davide
  • Conte, Giovanni
  • Guerrisi, Maria
  • Toschi, Nicola
Contrast Media Mol Imaging 2019 Journal Article, cited 1 times
Website
Breast cancer is one of the most common cancers in women, with more than 1,300,000 cases and 450,000 deaths each year worldwide. In this context, recent studies showed that early breast cancer detection, along with suitable treatment, could significantly reduce breast cancer death rates in the long term. X-ray mammography is still the instrument of choice in breast cancer screening. In this context, the false-positive and false-negative rates commonly achieved by radiologists are extremely difficult to estimate and control, although some authors have estimated figures of up to 20% of total diagnoses or more. The introduction of novel artificial intelligence (AI) technologies applied to the diagnosis and, possibly, prognosis of breast cancer could revolutionize the current status of the management of the breast cancer patient by assisting the radiologist in clinical image interpretation. Lately, a breakthrough in the AI field has been brought about by the introduction of deep learning techniques in general and of convolutional neural networks in particular. Such techniques require no a priori feature space definition from the operator and are able to achieve classification performance that can even surpass human experts. In this paper, we design and validate an ad hoc CNN architecture specialized in breast lesion classification from imaging data only. We explore a total of 260 model architectures in a train-validation-test split in order to propose a model selection criterion which places the emphasis on reducing false negatives while still retaining acceptable accuracy. We achieve an area under the receiver operating characteristic curve of 0.785 (accuracy 71.19%) on the test set, demonstrating how an ad hoc randomly initialized architecture can and should be fine-tuned to a specific problem, especially in biomedical applications.

Adaptive Enhancement Technique for Cancerous Lung Nodule in Computed Tomography Images

  • AbuBaker, Ayman A
International Journal of Engineering and Technology 2016 Journal Article, cited 1 times
Website
Diagnosing Computed Tomography images (CT-Images) may take the radiologist a lot of time. This increases radiologist fatigue and may lead to missing some cancerous lung nodule lesions. Therefore, an adaptive local enhancement Computer Aided Diagnosis (CAD) system is proposed. The proposed technique is designed to enhance the suspicious cancerous regions in CT-Images. The visual characteristics of cancerous lung nodules in CT-Images were the main criteria in designing this technique. The new approach is divided into two phases: a pre-processing phase and an image enhancement phase. Image noise reduction, thresholding, and extraction of the lung regions constitute the pre-processing phase, whereas the new adaptive local enhancement method for CT-Images is implemented in the second phase. The proposed algorithm was tested and evaluated on 42 normal and cancerous lung nodule CT-Images. As a result, this new approach can efficiently enhance the cancerous lung nodules by 25% compared with the original images.

Adaptive Multi-Column Deep Neural Networks with Application to Robust Image Denoising

  • Agostinelli, Forest
  • Anderson, Michael R
  • Lee, Honglak
2013 Conference Proceedings, cited 118 times
Website
Stacked sparse denoising auto-encoders (SSDAs) have recently been shown to be successful at removing noise from corrupted images. However, like most denoising techniques, the SSDA is not robust to variation in noise types beyond what it has seen during training. We present the multi-column stacked sparse denoising autoencoder (MC-SSDA), a novel technique that combines multiple SSDAs by fusing the outputs of each SSDA. We eliminate the need to determine the type of noise, let alone its statistics, at test time. We show that good denoising performance can be achieved with a single system on a variety of different noise types, including ones not seen in the training set. Additionally, we experimentally demonstrate the efficacy of MC-SSDA denoising by achieving MNIST digit error rates on denoised images close to those of the uncorrupted images.
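
The column-combination idea reduces to fusing the outputs of several single-noise-type denoisers; equal weights give the simplest variant, while learning `weights` from data would correspond to the paper's optimized fusion (the callables here are placeholders):

```python
import numpy as np

def mc_ssda_denoise(noisy, columns, weights=None):
    """columns: callables, each an SSDA trained on one noise type,
    mapping an image array to its denoised estimate."""
    outputs = np.stack([column(noisy) for column in columns])
    if weights is None:
        weights = np.full(len(columns), 1.0 / len(columns))  # plain average
    return np.tensordot(weights, outputs, axes=1)            # fused image
```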

Adaptive multi-modality fusion network for glioma grading

  • Wang Li
  • Cao Ying
  • Tian Lili
  • Chen Qijian
  • Guo Shunchao
  • Zhang Jian
  • Wang Lihui
Journal of Image and Graphics 2021 Journal Article, cited 0 times
Objective: Accurate grading of glioma is central to formulating personalized treatment plans, but most existing studies focus on classification based on the tumor region, which requires delineating the region of interest in advance and therefore cannot meet the real-time requirements of clinical computer-aided diagnosis. This paper proposes an adaptive multi-modal fusion network (AMMFNet) that achieves end-to-end prediction from the originally acquired images to the glioma grade, without delineating the tumor region. Methods: AMMFNet uses four isomorphic network branches to extract multi-scale image features from different modalities; it fuses them with an adaptive multi-modal feature fusion module and a dimensionality-reduction module; and it combines a cross-entropy classification loss with a feature-embedding loss to improve grading accuracy. To verify model performance, this paper uses the MICCAI (Medical Image Computing and Computer Assisted Intervention Society) 2018 public dataset for training and testing, compares against cutting-edge deep learning models and the latest glioma classification models, and performs quantitative analysis using accuracy, area under the receiver operating characteristic curve (AUC), and other indicators. Results: Without delineating the tumor region, the model achieved an AUC of 0.965 for predicting glioma grade; when the tumor region was used, the AUC reached 0.997 with an accuracy of 0.982, 1.2% higher than the current best glioma classification model, a multi-task convolutional neural network. Conclusion: By combining multi-modal and multi-semantic-level features, the proposed adaptive multi-modal feature fusion network can accurately predict glioma grade without delineating tumor regions. Keywords: glioma grading; deep learning; multimodal fusion; multiscale features; end-to-end classification

Adding features from the mathematical model of breast cancer to predict the tumour size

  • Nave, Ophir
International Journal of Computer Mathematics: Computer Systems Theory 2020 Journal Article, cited 0 times
Website
In this study, we combine a theoretical mathematical model with machine learning (ML) to predict tumour sizes in breast cancer. Our study is based on clinical data from 1869 women of various ages with breast cancer. To accurately predict tumour size for each woman individually, we solved our customized mathematical model for each woman, then added the solution vector of the dynamic variables in the model (in machine learning language, these are called features) to the clinical data and used a variety of machine learning algorithms. We compared the results obtained with and without the mathematical model and showed that by adding specific features from the mathematical model we were able to better predict tumour size for each woman.
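
The recipe, solving a patient-specific model and appending its solution vector to the clinical features, can be sketched as follows; the two-compartment ODE and its coefficients are purely hypothetical, not the paper's model:

```python
import numpy as np
from scipy.integrate import solve_ivp

def model_features(growth_rate, kill_rate, t_end=100.0):
    """Terminal state of a toy tumour-immune ODE for one patient."""
    def rhs(t, y):
        tumour, immune = y
        return [growth_rate * tumour - kill_rate * tumour * immune,
                0.1 * tumour - 0.05 * immune]
    sol = solve_ivp(rhs, (0.0, t_end), y0=[1.0, 0.1], rtol=1e-6)
    return sol.y[:, -1]                # features appended to clinical data

# Hypothetical usage, one parameter pair per patient:
# X_augmented = np.hstack([X_clinical,
#                          np.array([model_features(*p) for p in params])])
```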

Addition of MR imaging features and genetic biomarkers strengthens glioblastoma survival prediction in TCGA patients

  • Nicolasjilwan, Manal
  • Hu, Ying
  • Yan, Chunhua
  • Meerzaman, Daoud
  • Holder, Chad A
  • Gutman, David
  • Jain, Rajan
  • Colen, Rivka
  • Rubin, Daniel L
  • Zinn, Pascal O
  • Hwang, Scott N
  • Raghavan, Prashant
  • Hammoud, Dima A
  • Scarpace, Lisa M
  • Mikkelsen, Tom
  • Chen, James
  • Gevaert, Olivier
  • Buetow, Kenneth
  • Freymann, John
  • Kirby, Justin
  • Flanders, Adam E
  • Wintermark, Max
Journal of Neuroradiology 2014 Journal Article, cited 49 times
Website
PURPOSE: The purpose of our study was to assess whether a model combining clinical factors, MR imaging features, and genomics would better predict overall survival of patients with glioblastoma (GBM) than either individual data type. METHODS: The study was conducted leveraging The Cancer Genome Atlas (TCGA) effort supported by the National Institutes of Health. Six neuroradiologists reviewed MRI images from The Cancer Imaging Archive (http://cancerimagingarchive.net) of 102 GBM patients using the VASARI scoring system. The patients' clinical and genetic data were obtained from the TCGA website (http://www.cancergenome.nih.gov/). Patient outcome was measured in terms of overall survival time. The association between different categories of biomarkers and survival was evaluated using Cox analysis. RESULTS: The features that were significantly associated with survival were: (1) clinical factors: chemotherapy; (2) imaging: proportion of tumor contrast enhancement on MRI; and (3) genomics: HRAS copy number variation. The combination of these three biomarkers resulted in an incremental increase in the strength of prediction of survival, with the model that included clinical, imaging, and genetic variables having the highest predictive accuracy (area under the curve 0.679+/-0.068, Akaike's information criterion 566.7, P<0.001). CONCLUSION: A combination of clinical factors, imaging features, and HRAS copy number variation best predicts survival of patients with GBM.

Additional Value of PET/CT-Based Radiomics to Metabolic Parameters in Diagnosing Lynch Syndrome and Predicting PD1 Expression in Endometrial Carcinoma

  • Wang, X.
  • Wu, K.
  • Li, X.
  • Jin, J.
  • Yu, Y.
  • Sun, H.
Front Oncol 2021 Journal Article, cited 0 times
Website
Purpose: We aim to compare the radiomic features and parameters on 2-deoxy-2-[fluorine-18] fluoro-D-glucose (18F-FDG) positron emission tomography/computed tomography (PET/CT) between patients with endometrial cancer with Lynch syndrome and those with endometrial cancer without Lynch syndrome. We also hope to explore the biologic significance of selected radiomic features. Materials and Methods: We conducted a retrospective cohort study, first using the 18F-FDG PET/CT images and clinical data from 100 patients with endometrial cancer to construct a training group (70 patients) and a test group (30 patients). The metabolic parameters and radiomic features of each tumor were compared between patients with and without Lynch syndrome. An independent cohort of 23 patients with solid tumors was used to evaluate the value of selected radiomic features in predicting the expression of the programmed cell death 1 (PD1), using 18F-FDG PET/CT images and RNA-seq genomic data. Results: There was no statistically significant difference in the standardized uptake values on PET between patients with endometrial cancer with Lynch syndrome and those with endometrial cancer without Lynch syndrome. However, there were significant differences between the 2 groups in metabolic tumor volume and total lesion glycolysis (p < 0.005). There was a difference in the radiomic feature of gray level co-occurrence matrix entropy (GLCMEntropy; p < 0.001) between the groups: the area under the curve was 0.94 in the training group (sensitivity, 82.86%; specificity, 97.14%) and 0.893 in the test group (sensitivity, 80%; specificity, 93.33%). In the independent cohort of 23 patients, differences in GLCMEntropy were related to the expression of PD1 (rs = 0.577; p < 0.001). Conclusions: In patients with endometrial cancer, higher metabolic tumor volumes, total lesion glycolysis values, and GLCMEntropy values on 18F-FDG PET/CT could suggest a higher risk for Lynch syndrome. The radiomic feature of GLCMEntropy for tumors is a potential predictor of PD1 expression.
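
GLCMEntropy, the discriminative feature in this study, is straightforward to compute; a minimal sketch using scikit-image follows, with a random patch standing in for the quantized PET uptake map.

```python
import numpy as np
from skimage.feature import graycomatrix

img = np.random.randint(0, 8, (32, 32), dtype=np.uint8)  # stand-in quantized uptake map
glcm = graycomatrix(img, distances=[1], angles=[0], levels=8, normed=True)
p = glcm[:, :, 0, 0]                                      # co-occurrence probabilities
entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))           # GLCM entropy
print(entropy)
```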

Addressing architectural distortion in mammogram using AlexNet and support vector machine

  • Vedalankar, Aditi V.
  • Gupta, Shankar S.
  • Manthalkar, Ramchandra R.
Informatics in Medicine Unlocked 2021 Journal Article, cited 0 times
Website
Objective To address architectural distortion (AD), an irregularity in the parenchymal pattern of the breast. The nature of AD is extremely complex, yet its study is essential because AD is viewed as an early sign of breast cancer. In this study, a new convolutional neural network (CNN) based system is developed that classifies AD-distorted mammograms against other mammograms. Methods In the first part, mammograms undergo pre-processing and image augmentation. In the second part, learned and handcrafted features are retrieved: the pretrained AlexNet CNN is utilized for the extraction of learned features, and a support vector machine (SVM) validates the existence of AD. For improved classification, the scheme is tested under various conditions. Results A CNN-based system is developed for stepwise analysis of AD. The maximum accuracy, sensitivity, and specificity obtained were 92%, 81.50%, and 90.83%, respectively; these results outperform conventional methods. Conclusion Based on the overall study, a combination of a pretrained CNN and a support vector machine is recommended for the identification of AD. The study should motivate researchers to find improved, high-performance methods, and it will also assist radiologists. Significance AD can develop up to two years before the growth of any other anomaly. The proposed system can therefore play an essential role in detecting early manifestations of breast cancer, helping women all over the world obtain better treatment options and curtailing the mortality rate.
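
The learned-features-plus-SVM pipeline the authors describe can be sketched as follows, assuming torchvision's pretrained AlexNet as the feature extractor; the paper's mammogram preprocessing, augmentation, and handcrafted features are omitted.

```python
import torch
from torchvision import models
from sklearn.svm import SVC

alexnet = models.alexnet(weights=models.AlexNet_Weights.DEFAULT).eval()
head = torch.nn.Sequential(*list(alexnet.classifier.children())[:-1])  # drop final layer

def extract(batch):
    """Return 4096-d learned features for a normalized (N, 3, 224, 224) batch."""
    with torch.no_grad():
        x = alexnet.avgpool(alexnet.features(batch)).flatten(1)
        return head(x).numpy()

X_train = extract(torch.randn(16, 3, 224, 224))   # stand-in mammogram crops
y_train = [0] * 8 + [1] * 8                       # AD-distorted vs. other
svm = SVC(kernel="rbf").fit(X_train, y_train)     # SVM validates the existence of AD
```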

Advanced 3D printed model of middle cerebral artery aneurysms for neurosurgery simulation

  • Nagassa, Ruth G
  • McMenamin, Paul G
  • Adams, Justin W
  • Quayle, Michelle R
  • Rosenfeld, Jeffrey V
3D Print Med 2019 Journal Article, cited 0 times
Website
BACKGROUND: Neurosurgical residents are finding it more difficult to obtain experience as the primary operator in aneurysm surgery. The present study aimed to replicate patient-derived cranial anatomy, pathology and human tissue properties relevant to cerebral aneurysm intervention through 3D printing and 3D print-driven casting techniques. The final simulator was designed to provide accurate simulation of a human head with a middle cerebral artery (MCA) aneurysm. METHODS: This study utilized living human and cadaver-derived medical imaging data including CT angiography and MRI scans. Computer-aided design (CAD) models and pre-existing computational 3D models were also incorporated in the development of the simulator. The design was based on including anatomical components vital to the surgery of MCA aneurysms while focusing on reproducibility, adaptability and functionality of the simulator. Various methods of 3D printing were utilized for the direct development of anatomical replicas and moulds for casting components that optimized the bio-mimicry and mechanical properties of human tissues. Synthetic materials including various types of silicone and ballistics gelatin were cast in these moulds. A novel technique utilizing water-soluble wax and silicone was used to establish hollow patient-derived cerebrovascular models. RESULTS: A patient-derived 3D aneurysm model was constructed for a MCA aneurysm. Multiple cerebral aneurysm models, patient-derived and CAD, were replicated as hollow high-fidelity models. The final assembled simulator integrated six anatomical components relevant to the treatment of cerebral aneurysms of the Circle of Willis in the left cerebral hemisphere. These included models of the cerebral vasculature, cranial nerves, brain, meninges, skull and skin. The cerebral circulation was modeled through the patient-derived vasculature within the brain model. Linear and volumetric measurements of specific physical modular components were repeated, averaged and compared to the original 3D meshes generated from the medical imaging data. Calculation of the concordance correlation coefficient (ρc: 90.2%-99.0%) and percentage difference (≤0.4%) confirmed the accuracy of the models. CONCLUSIONS: A multi-disciplinary approach involving 3D printing and casting techniques was used to successfully construct a multi-component cerebral aneurysm surgery simulator. Further study is planned to demonstrate the educational value of the proposed simulator for neurosurgery residents.

Advanced MRI Techniques in the Monitoring of Treatment of Gliomas

  • Hyare, Harpreet
  • Thust, Steffi
  • Rees, Jeremy
Current treatment options in neurology 2017 Journal Article, cited 11 times
Website
OPINION STATEMENT: With advances in treatments and survival of patients with glioblastoma (GBM), it has become apparent that conventional imaging sequences have significant limitations both in terms of assessing response to treatment and monitoring disease progression. Both 'pseudoprogression' after chemoradiation for newly diagnosed GBM and 'pseudoresponse' after anti-angiogenesis treatment for relapsed GBM are well-recognised radiological entities. This in turn has led to revision of response criteria away from the standard MacDonald criteria, which depend on the two-dimensional measurement of contrast-enhancing tumour, and which have been the primary measure of radiological response for over three decades. A working party of experts published RANO (Response Assessment in Neuro-oncology Working Group) criteria in 2010 which take into account signal change on T2/FLAIR sequences as well as the contrast-enhancing component of the tumour. These have recently been modified for immune therapies, which are associated with specific issues related to the timing of radiological response. There has been increasing interest in quantification and validation of physiological and metabolic parameters in GBM over the last 10 years utilising the wide range of advanced imaging techniques available on standard MRI platforms. Previously, MRI would provide structural information only on the anatomical location of the tumour and the presence or absence of a disrupted blood-brain barrier. Advanced MRI sequences include proton magnetic resonance spectroscopy (MRS), vascular imaging (perfusion/permeability) and diffusion imaging (diffusion weighted imaging/diffusion tensor imaging) and are now routinely available. They provide biologically relevant functional, haemodynamic, cellular, metabolic and cytoarchitectural information and are being evaluated in clinical trials to determine whether they offer superior biomarkers of early treatment response than conventional imaging, when correlated with hard survival endpoints. Multiparametric imaging, incorporating different combinations of these modalities, improves accuracy over single imaging modalities but has not been widely adopted due to the amount of post-processing analysis required, lack of clinical trial data, lack of radiology training and wide variations in threshold values. New techniques including diffusion kurtosis and radiomics will offer a higher level of quantification but will require validation in clinical trial settings. Given all these considerations, it is clear that there is an urgent need to incorporate advanced techniques into clinical trial design to avoid the problems of under or over assessment of treatment response.

Advancing Semantic Interoperability of Image Annotations: Automated Conversion of Non-standard Image Annotations in a Commercial PACS to the Annotation and Image Markup

  • Swinburne, Nathaniel C
  • Mendelson, David
  • Rubin, Daniel L
J Digit Imaging 2019 Journal Article, cited 0 times
Website
Sharing radiologic image annotations among multiple institutions is important in many clinical scenarios; however, interoperability is prevented because different vendors’ PACS store annotations in non-standardized formats that lack semantic interoperability. Our goal was to develop software to automate the conversion of image annotations in a commercial PACS to the Annotation and Image Markup (AIM) standardized format and demonstrate the utility of this conversion for automated matching of lesion measurements across time points for cancer lesion tracking. We created a software module in Java to parse the DICOM presentation state (DICOM-PS) objects (that contain the image annotations) for imaging studies exported from a commercial PACS (GE Centricity v3.x). Our software identifies line annotations encoded within the DICOM-PS objects and exports the annotations in the AIM format. A separate Python script processes the AIM annotation files to match line measurements (on lesions) across time points by tracking the 3D coordinates of annotated lesions. To validate the interoperability of our approach, we exported annotations from Centricity PACS into ePAD (http://epad.stanford.edu) (Rubin et al., Transl Oncol 7(1):23–35, 2014), a freely available AIM-compliant workstation, and the lesion measurement annotations were correctly linked by ePAD across sequential imaging studies. As quantitative imaging becomes more prevalent in radiology, interoperability of image annotations gains increasing importance. Our work demonstrates that image annotations in a vendor system lacking standard semantics can be automatically converted to a standardized metadata format such as AIM, enabling interoperability and potentially facilitating large-scale analysis of image annotations and the generation of high-quality labels for deep learning initiatives. This effort could be extended for use with other vendors’ PACS.
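
The parsing step described here can be approximated with pydicom; the sketch below reads two-point line annotations out of a presentation state object. The attribute names are standard DICOM, but the file name is a placeholder and the AIM serialization step is omitted.

```python
import pydicom

ps = pydicom.dcmread("presentation_state.dcm")        # placeholder file name
for ann in getattr(ps, "GraphicAnnotationSequence", []):
    for obj in getattr(ann, "GraphicObjectSequence", []):
        # A two-point POLYLINE carries one line measurement (x1, y1, x2, y2).
        if obj.GraphicType == "POLYLINE" and len(obj.GraphicData) == 4:
            x1, y1, x2, y2 = obj.GraphicData
            print("line annotation:", (x1, y1), "->", (x2, y2))
```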

Advancing The Cancer Genome Atlas glioma MRI collections with expert segmentation labels and radiomic features

  • Bakas, Spyridon
  • Akbari, Hamed
  • Sotiras, Aristeidis
  • Bilello, Michel
  • Rozycki, Martin
  • Kirby, Justin S.
  • Freymann, John B.
  • Farahani, Keyvan
  • Davatzikos, Christos
Scientific data 2017 Journal Article, cited 1036 times
Website
Gliomas belong to a group of central nervous system tumors, and consist of various sub-regions. Gold standard labeling of these sub-regions in radiographic imaging is essential for both clinical and computational studies, including radiomic and radiogenomic analyses. Towards this end, we release segmentation labels and radiomic features for all pre-operative multimodal magnetic resonance imaging (MRI) (n=243) of the multi-institutional glioma collections of The Cancer Genome Atlas (TCGA), publicly available in The Cancer Imaging Archive (TCIA). Pre-operative scans were identified in both glioblastoma (TCGA-GBM, n=135) and low-grade-glioma (TCGA-LGG, n=108) collections via radiological assessment. The glioma sub-region labels were produced by an automated state-of-the-art method and manually revised by an expert board-certified neuroradiologist. An extensive panel of radiomic features was extracted based on the manually-revised labels. This set of labels and features should enable i) direct utilization of the TCGA/TCIA glioma collections towards repeatable, reproducible and comparative quantitative studies leading to new predictive, prognostic, and diagnostic assessments, as well as ii) performance evaluation of computer-aided segmentation methods, and comparison to our state-of-the-art method.

Adverse prognosis of glioblastoma contacting the subventricular zone: Biological correlates

  • Berendsen, S.
  • van Bodegraven, E.
  • Seute, T.
  • Spliet, W. G. M.
  • Geurts, M.
  • Hendrikse, J.
  • Schoysman, L.
  • Huiszoon, W. B.
  • Varkila, M.
  • Rouss, S.
  • Bell, E. H.
  • Kroonen, J.
  • Chakravarti, A.
  • Bours, V.
  • Snijders, T. J.
  • Robe, P. A.
PLoS One 2019 Journal Article, cited 2 times
Website
INTRODUCTION: The subventricular zone (SVZ) in the brain is associated with gliomagenesis and resistance to treatment in glioblastoma. In this study, we investigate the prognostic role and biological characteristics of subventricular zone (SVZ) involvement in glioblastoma. METHODS: We analyzed T1-weighted, gadolinium-enhanced MR images of a retrospective cohort of 647 primary glioblastoma patients diagnosed between 2005-2013, and performed a multivariable Cox regression analysis to adjust the prognostic effect of SVZ involvement for clinical patient- and tumor-related factors. Protein expression patterns of markers including neural stem cellness (CD133 and GFAP-delta) and (epithelial-) mesenchymal transition (NF-kappaB, C/EBP-beta and STAT3) were determined with immunohistochemistry on tissue microarrays containing 220 of the tumors. Molecular classification and mRNA expression-based gene set enrichment analyses, miRNA expression and SNP copy number analyses were performed on fresh frozen tissue obtained from 76 tumors. Confirmatory analyses were performed on glioblastoma TCGA/TCIA data. RESULTS: Involvement of the SVZ was a significant adverse prognostic factor in glioblastoma, independent of age, KPS, surgery type and postoperative treatment. Tumor volume and postoperative complications did not explain this prognostic effect. SVZ contact was associated with increased nuclear expression of the (epithelial-) mesenchymal transition markers C/EBP-beta and phospho-STAT3. SVZ contact was not associated with molecular subtype, distinct gene expression patterns, or markers of stem cellness. Our main findings were confirmed in a cohort of 229 TCGA/TCIA glioblastomas. CONCLUSION: In conclusion, involvement of the SVZ is an independent prognostic factor in glioblastoma, and associates with increased expression of key markers of (epithelial-) mesenchymal transformation, but does not correlate with stem cellness, molecular subtype, or specific (mi)RNA expression patterns.

Age-related copy number variations and expression levels of F-box protein FBXL20 predict ovarian cancer prognosis

  • Zheng, S.
  • Fu, Y.
Translational Oncology 2020 Journal Article, cited 0 times
Website
About 70% of ovarian cancer (OvCa) cases are diagnosed at advanced stages (stage III/IV), and only 20-40% of these patients survive over 5 years after diagnosis. A reliable screening marker could enable a paradigm shift in OvCa early diagnosis and risk stratification. Age is one of the most significant risk factors for OvCa: older women have much higher rates of OvCa diagnosis and poorer clinical outcomes. In this article, we studied the correlation between aging and genetic alterations in The Cancer Genome Atlas Ovarian Cancer dataset. We demonstrated that copy number variations (CNVs) and expression levels of the F-Box and Leucine-Rich Repeat Protein 20 (FBXL20), a substrate-recognizing protein in the SKP1-Cullin1-F-box-protein E3 ligase, can predict OvCa overall survival, disease-free survival and progression-free survival. More importantly, FBXL20 copy number loss predicts the diagnosis of OvCa at a younger age, with over 60% of patients in that subgroup having OvCa diagnosed at an age of less than 60 years. Clinicopathological studies further demonstrated malignant histological and radiographical features associated with elevated FBXL20 expression levels. This study has thus identified a potential biomarker for OvCa prognosis.

Aggregating Multi-scale Prediction Based on 3D U-Net in Brain Tumor Segmentation

  • Chen, Minglin
  • Wu, Yaozu
  • Wu, Jianhuang
2020 Conference Paper, cited 0 times
Website
Magnetic resonance imaging (MRI) is the dominant modality used in the initial evaluation of patients with primary brain tumors due to its superior image resolution and high safety profile. Automated segmentation of brain tumors from MRI is critical in the determination of response to therapy. In this paper, we propose a novel method which aggregates multi-scale prediction from 3D U-Net to segment enhancing tumor (ET), whole tumor (WT) and tumor core (TC) from multimodal MRI. Multi-scale prediction is derived from the decoder part of 3D U-Net at different resolutions. The final prediction takes the minimum value of the corresponding pixel from the upsampling multi-scale prediction. Aggregating multi-scale prediction can add constraints to the network which is beneficial for limited data. Additionally, we employ model ensembling strategy to further improve the performance of the proposed network. Finally, we achieve dice scores of 0.7745, 0.8640 and 0.7914, and Hausdorff distances (95th percentile) of 4.2365, 6.9381 and 6.6026 for ET, WT and TC respectively on the test set in BraTS 2019.
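
The aggregation rule described in this abstract, taking the pixel-wise minimum over upsampled multi-scale predictions, is compact enough to state directly; a PyTorch sketch with illustrative shapes follows.

```python
import torch
import torch.nn.functional as F

def aggregate_multiscale(preds, out_size):
    """preds: list of (B, C, D, H, W) probability maps at different scales."""
    up = [F.interpolate(p, size=out_size, mode="trilinear", align_corners=False)
          for p in preds]
    # The final prediction takes the minimum value of the corresponding voxel.
    return torch.min(torch.stack(up, dim=0), dim=0).values

full = torch.rand(1, 3, 64, 64, 64)     # full-resolution decoder output
half = torch.rand(1, 3, 32, 32, 32)     # half-resolution decoder output
final = aggregate_multiscale([full, half], (64, 64, 64))
```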

Agile convolutional neural network for pulmonary nodule classification using CT images

  • Zhao, X.
  • Liu, L.
  • Qi, S.
  • Teng, Y.
  • Li, J.
  • Qian, W.
Int J Comput Assist Radiol Surg 2018 Journal Article, cited 6 times
Website
OBJECTIVE: To distinguish benign from malignant pulmonary nodules using CT images is critical for their precise diagnosis and treatment. A new Agile convolutional neural network (CNN) framework is proposed to conquer the challenges of a small-scale medical image database and the small size of the nodules, and it improves the performance of pulmonary nodule classification using CT images. METHODS: A hybrid CNN of LeNet and AlexNet is constructed through combining the layer settings of LeNet and the parameter settings of AlexNet. A dataset with 743 CT image nodule samples is built up based on the 1018 CT scans of LIDC to train and evaluate the Agile CNN model. Through adjusting the parameters of the kernel size, learning rate, and other factors, the effect of these parameters on the performance of the CNN model is investigated, and an optimized setting of the CNN is obtained finally. RESULTS: After finely optimizing the settings of the CNN, the estimation accuracy and the area under the curve can reach 0.822 and 0.877, respectively. The accuracy of the CNN is significantly dependent on the kernel size, learning rate, training batch size, dropout, and weight initializations. The best performance is achieved when the kernel size is set to [Formula: see text], the learning rate is 0.005, the batch size is 32, and dropout and Gaussian initialization are used. CONCLUSIONS: This competitive performance demonstrates that our proposed CNN framework and the optimization strategy of the CNN parameters are suitable for pulmonary nodule classification characterized by small medical datasets and small targets. The classification model might help diagnose and treat pulmonary nodules effectively.
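
The reported best settings (learning rate 0.005, batch size 32, dropout, Gaussian weight initialization) translate into a training setup like the one below; the LeNet/AlexNet hybrid topology is only approximated, since the exact layer configuration is not reproduced in this abstract.

```python
import torch
import torch.nn as nn

class AgileCNNSketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(                 # LeNet-style layer layout
            nn.Conv2d(1, 6, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(6, 16, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2))
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Dropout(0.5),             # dropout, as in the best setting
            nn.Linear(16 * 5 * 5, 120), nn.ReLU(),
            nn.Linear(120, 2))                         # benign vs. malignant
        for m in self.modules():                       # Gaussian weight initialization
            if isinstance(m, (nn.Conv2d, nn.Linear)):
                nn.init.normal_(m.weight, mean=0.0, std=0.01)
                nn.init.zeros_(m.bias)
    def forward(self, x):                              # x: (B, 1, 32, 32) nodule patches
        return self.classifier(self.features(x))

model = AgileCNNSketch()
optimizer = torch.optim.SGD(model.parameters(), lr=0.005)  # reported learning rate
batch_size = 32                                            # reported batch size
```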

AI Based Classification Framework For Cancer Detection Using Brain MRI Images

  • Thachayani, M.
  • Kurian, Sneha
2021 Conference Paper, cited 0 times
Website
Brain imaging technologies play an important role in medical diagnosis by providing new views of brain anatomy, giving greater insight into brain condition and function. Image processing is used in medical science to assist the early detection and treatment of life-critical illnesses. In this paper, cancer detection based on brain magnetic resonance imaging (MRI) images using a combination of a convolutional neural network (CNN) and a sparse stacked autoencoder is presented. This combination is found to significantly improve the accuracy and effectiveness of the classification process. The proposed method is coded in MATLAB and verified on a dataset consisting of 120 MRI images. The results obtained show that the proposed classifier is effective in classifying and grading brain tumor MRI images.

AI-based Prognostic Imaging Biomarkers for Precision Neurooncology: the ReSPOND Consortium

  • Davatzikos, C.
  • Barnholtz-Sloan, J. S.
  • Bakas, S.
  • Colen, R.
  • Mahajan, A.
  • Quintero, C. B.
  • Font, J. C.
  • Puig, J.
  • Jain, R.
  • Sloan, A. E.
  • Badve, C.
  • Marcus, D. S.
  • Choi, Y. S.
  • Lee, S. K.
  • Chang, J. H.
  • Poisson, L. M.
  • Griffith, B.
  • Dicker, A. P.
  • Flanders, A. E.
  • Booth, T. C.
  • Rathore, S.
  • Akbari, H.
  • Sako, C.
  • Bilello, M.
  • Shukla, G.
  • Kazerooni, A. F.
  • Brem, S.
  • Lustig, R.
  • Mohan, S.
  • Bagley, S.
  • Nasrallah, M.
  • O'Rourke, D. M.
2020 Journal Article, cited 0 times
Website

AIR-Net: A novel multi-task learning method with auxiliary image reconstruction for predicting EGFR mutation status on CT images of NSCLC patients

  • Gui, D.
  • Song, Q.
  • Song, B.
  • Li, H.
  • Wang, M.
  • Min, X.
  • Li, A.
Comput Biol Med 2022 Journal Article, cited 0 times
Website
Automated and accurate EGFR mutation status prediction using computed tomography (CT) imagery is of great value for tailoring optimal treatments to non-small cell lung cancer (NSCLC) patients. However, existing deep learning based methods usually adopt a single task learning strategy to design and train EGFR mutation status prediction models with limited training data, which may be insufficient to learn distinguishable representations for promoting prediction performance. In this paper, a novel multi-task learning method named AIR-Net is proposed to precisely predict EGFR mutation status on CT images. First, an auxiliary image reconstruction task is effectively integrated with EGFR mutation status prediction, aiming at providing extra supervision at the training phase. Particularly, we adequately employ multi-level information in a shared encoder to generate more comprehensive representations of tumors. Second, a powerful feature consistency loss is further introduced to constrain semantic consistency of original and reconstructed images, which contributes to enhanced image reconstruction and offers more effective regularization to AIR-Net during training. Performance analysis of AIR-Net indicates that auxiliary image reconstruction plays an essential role in identifying EGFR mutation status. Furthermore, extensive experimental results demonstrate that our method achieves favorable performance against other competitive prediction methods. All the results obtained in this study demonstrate the effectiveness and superiority of AIR-Net in precisely predicting the EGFR mutation status of NSCLC patients.
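
The multi-task objective described above, classification plus auxiliary reconstruction plus a feature consistency term, can be sketched as follows in PyTorch; the architecture and loss weights are assumptions for illustration, not the published AIR-Net design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AIRNetSketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                                     nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.decoder = nn.Conv2d(32, 1, 3, padding=1)      # reconstruction head
        self.cls = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                 nn.Linear(32, 2))          # EGFR mutant vs. wild type
    def forward(self, x):
        z = self.encoder(x)
        return self.cls(z), self.decoder(z), z

def airnet_loss(model, x, y, w_rec=1.0, w_con=0.1):
    logits, recon, z = model(x)
    z_rec = model.encoder(recon)                 # features of the reconstructed image
    return (F.cross_entropy(logits, y)
            + w_rec * F.mse_loss(recon, x)       # auxiliary reconstruction task
            + w_con * F.mse_loss(z_rec, z.detach()))  # feature consistency term
```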

Airway Anomaly Detection by Prototype-Based Graph Neural Network

  • Zhao, Tianyi
  • Yin, Zhaozheng
2021 Conference Proceedings, cited 0 times
Website

Algorithmic three-dimensional analysis of tumor shape in MRI improves prognosis of survival in glioblastoma: a multi-institutional study

  • Czarnek, Nicholas
  • Clark, Kal
  • Peters, Katherine B
  • Mazurowski, Maciej A
Journal of Neuro-Oncology 2017 Journal Article, cited 15 times
Website
In this retrospective, IRB-exempt study, we analyzed data from 68 patients diagnosed with glioblastoma (GBM) in two institutions and investigated the relationship between tumor shape, quantified using algorithmic analysis of magnetic resonance images, and survival. Each patient's Fluid Attenuated Inversion Recovery (FLAIR) abnormality and enhancing tumor were manually delineated, and tumor shape was analyzed by automatic computer algorithms. Five features were automatically extracted from the images to quantify the extent of irregularity in tumor shape in two and three dimensions. Univariate Cox proportional hazard regression analysis was performed to determine how prognostic each feature was of survival. Kaplan Meier analysis was performed to illustrate the prognostic value of each feature. To determine whether the proposed quantitative shape features have additional prognostic value compared with standard clinical features, we controlled for tumor volume, patient age, and Karnofsky Performance Score (KPS). The FLAIR-based bounding ellipsoid volume ratio (BEVR), a 3D complexity measure, was strongly prognostic of survival, with a hazard ratio of 0.36 (95% CI 0.20-0.65), and remained significant in regression analysis after controlling for other clinical factors (P = 0.0061). Three enhancing-tumor based shape features were prognostic of survival independently of clinical factors: BEVR (P = 0.0008), margin fluctuation (P = 0.0013), and angular standard deviation (P = 0.0078). Algorithmically assessed tumor shape is statistically significantly prognostic of survival for patients with GBM independently of patient age, KPS, and tumor volume. This shows promise for extending the utility of MR imaging in treatment of GBM patients.
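
Of the shape features above, the bounding ellipsoid volume ratio (BEVR) is the easiest to illustrate. The sketch below approximates it as tumor volume divided by the volume of a PCA-aligned bounding ellipsoid, assuming isotropic voxels; the paper's exact minimal-ellipsoid construction may differ, so treat this only as a rough proxy.

```python
import numpy as np

def bevr(mask):
    """Tumor volume over PCA-aligned bounding-ellipsoid volume (isotropic voxels).

    Note: this proxy can exceed 1 for box-like shapes, since the ellipsoid is
    fit to the principal-axis extents rather than being a true enclosing one.
    """
    coords = np.argwhere(mask).astype(float)
    centered = coords - coords.mean(axis=0)
    _, _, axes = np.linalg.svd(centered, full_matrices=False)  # principal axes
    extents = np.abs(centered @ axes.T).max(axis=0)            # semi-axis lengths
    return mask.sum() / (4.0 / 3.0 * np.pi * np.prod(extents))

mask = np.zeros((32, 32, 32), dtype=bool)
mask[8:24, 10:20, 12:18] = True            # stand-in tumor mask
print(bevr(mask))                          # closer to 1 = more ellipsoid-like
```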

Algorithms applied to spatially registered multi-parametric MRI for prostate tumor volume measurement

  • Mayer, Rulon
  • Simone, Charles B., II
  • Turkbey, Baris
  • Choyke, Peter
Quantitative Imaging in Medicine and Surgery 2021 Journal Article, cited 0 times
Website

ALNett: A cluster layer deep convolutional neural network for acute lymphoblastic leukemia classification

  • Jawahar, M.
  • H, S.
  • L, J. A.
  • Gandomi, A. H.
Comput Biol Med 2022 Journal Article, cited 0 times
Website
Acute Lymphoblastic Leukemia (ALL) is cancer in which bone marrow overproduces undeveloped lymphocytes. Over 6500 cases of ALL are diagnosed every year in the United States in both adults and children, accounting for around 25% of pediatric cancers, and the trend continues to rise. With the advancements of AI and big data analytics, early diagnosis of ALL can be used to aid the clinical decisions of physicians and radiologists. This research proposes a deep neural network-based (ALNett) model that employs depth-wise convolution with different dilation rates to classify microscopic white blood cell images. Specifically, the cluster layers encompass convolution and max-pooling followed by a normalization process that provides enriched structural and contextual details to extract robust local and global features from the microscopic images for the accurate prediction of ALL. The performance of the model was compared with various pre-trained models, including VGG16, ResNet-50, GoogleNet, and AlexNet, based on precision, recall, accuracy, F1 score, loss accuracy, and receiver operating characteristic (ROC) curves. Experimental results showed that the proposed ALNett model yielded the highest classification accuracy of 91.13% and an F1 score of 0.96 with less computational complexity. ALNett demonstrated promising ALL categorization and outperformed the other pre-trained models.

Alternative Tool for the Diagnosis of Diseases Through Virtual Reality

  • Galeano, Sara Daniela
  • Gonzalez, Miguel Esteban Mora
  • Medina, Ricardo Alonso Espinosa
2021 Conference Paper, cited 0 times
Website
Virtual reality (VR) presents simulated objects or scenes to reproduce situations in a way similar to the real thing. In medicine, the processing and 3D reconstruction of medical images is an important step in VR. We propose a methodology for processing medical images in order to segment organs, reconstruct structures in 3D, and represent those structures in a VR environment, providing the specialist with an alternative tool for the analysis of medical images. We present an image segmentation method based on area differentiation and other image processing techniques; the 3D reconstruction was performed with the 'isosurface' method. Different studies show the benefits of VR applied to clinical practice, including its uses as an educational tool. A VR environment was created to be visualized with glasses designed for this purpose; this can be an alternative tool for the identification and visualization of COVID-19-affected lungs through medical image processing and subsequent 3D reconstruction.

ALTIS: A fast and automatic lung and trachea CT-image segmentation method

  • Sousa, A. M.
  • Martins, S. B.
  • Falcão, A. X.
  • Reis, F.
  • Bagatin, E.
  • Irion, K.
Med Phys 2019 Journal Article, cited 0 times
Website
PURPOSE: The automated segmentation of each lung and trachea in CT scans is commonly taken as a solved problem. Indeed, existing approaches may easily fail in the presence of some abnormalities caused by a disease, trauma, or previous surgery. For robustness, we present ALTIS (implementation is available at http://lids.ic.unicamp.br/downloads) - a fast automatic lung and trachea CT-image segmentation method that relies on image features and relative shape- and intensity-based characteristics less affected by most appearance variations of abnormal lungs and trachea. METHODS: ALTIS consists of a sequence of image foresting transforms (IFTs) organized in three main steps: (a) lung-and-trachea extraction, (b) seed estimation inside background, trachea, left lung, and right lung, and (c) their delineation such that each object is defined by an optimum-path forest rooted at its internal seeds. We compare ALTIS with two methods based on shape models (SOSM-S and MALF), and one algorithm based on seeded region growing (PTK). RESULTS: The experiments involve the highest number of scans found in literature - 1255 scans, from multiple public data sets containing many anomalous cases, being only 50 normal scans used for training and 1205 scans used for testing the methods. Quantitative experiments are based on two metrics, DICE and ASSD. Furthermore, we also demonstrate the robustness of ALTIS in seed estimation. Considering the test set, the proposed method achieves an average DICE of 0.987 for both lungs and 0.898 for the trachea, whereas an average ASSD of 0.938 for the right lung, 0.856 for the left lung, and 1.316 for the trachea. These results indicate that ALTIS is statistically more accurate and considerably faster than the compared methods, being able to complete segmentation in a few seconds on modern PCs. CONCLUSION: ALTIS is the most effective and efficient choice among the compared methods to segment left lung, right lung, and trachea in anomalous CT scans for subsequent detection, segmentation, and quantitative analysis of abnormal structures in the lung parenchyma and pleural space.

Analysis and Application of clustering and visualization methods of computed tomography radiomic features to contribute to the characterization of patients with non-metastatic Non-small-cell lung cancer.

  • Serra, Maria Mercedes
2022 Thesis, cited 0 times
Website
Background: The lung is the most common site for cancer and has the highest worldwide cancer-related mortality. The routine work-up of patients with lung cancer usually includes at least one computed tomography (CT) study prior to the histopathological diagnosis. In the last decade, tools that extract quantitative measures from medical imaging, known as radiomic features, have become increasingly relevant in this domain, including mathematically extracted measures of volume, shape, texture, etc. Radiomics can quantify tumor phenotypic characteristics non-invasively and could potentially contribute objective elements to support the diagnosis, management and prognosis of these patients in routine clinical practice. Methodology: The LUNG1 dataset, from the University of Maastricht and publicly available in The Cancer Imaging Archive, was obtained. Radiomic feature extraction was performed with the pyRadiomics package v3.0.1 using CT scans from 422 non-small cell lung cancer (NSCLC) patients, including manual segmentations of the gross tumor volume. A single data frame was constructed including clinical data, radiomic feature output, CT manufacturer and study date acquisition information. Exploratory data analysis, curation, feature selection, modeling and visualization were performed using R software. Model-based clustering was performed using the VarSelLCM library both with and without wrapper feature selection. Results: During exploratory data analysis, a lack of independence was found between histology and age and overall stage, and between survival curves and scanner manufacturer model. Features related to the manufacturer model were excluded from further analysis, and additional feature filtering was performed using the MRMR algorithm. In the clustering analysis, both models, with and without variable selection, showed a significant association between the generated partitions and survival curves; the significance of this association was greater for the model with wrapper variable selection, which selected only radiomic variables. The original_shape_VoxelVolume feature showed the highest discriminative power for both models, along with log.sigma.5.0.mm.3D_glzm_LargeAreaLowGrayLevelEmphasis and wavelet_LHL_glzm_LargeAreaHighGrayLevelEmphasis. Clusters with significantly lower median survival were also related to higher clinical T stages, greater mean values of original_shape_VoxelVolume, log.sigma.5.0.mm.3D_glzm_LargeAreaLowGrayLevelEmphasis and wavelet_LHL_glzm_LargeAreaHighGrayLevelEmphasis, and lower mean wavelet.HHl_glcm_ClusterProminence. A weaker relationship was found between histology and the selected clusters. Conclusions: Potential sources of bias, given the relationships between different variables of interest and technical sources, should be taken into account when analyzing this dataset. Aside from the original_shape_VoxelVolume feature, texture features computed on images with LoG and wavelet filters were found to be most significantly associated with different clinical characteristics in the present analysis. Value: This work highlights the relevance of analyzing clinical data and technical sources when performing radiomic analysis. It also goes through the different steps needed to extract, analyze and visualize a high-dimensional dataset of radiomic features, and describes associations between radiomic features and clinical variables, establishing the basis for future work.
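
The extraction step of this workflow uses pyRadiomics, whose Python API makes the thesis's feature classes easy to reproduce in outline; the file paths below are placeholders, and the thesis's exact extraction settings are not shown.

```python
from radiomics import featureextractor

extractor = featureextractor.RadiomicsFeatureExtractor()
extractor.enableImageTypeByName("Wavelet")                           # wavelet-filtered features
extractor.enableImageTypeByName("LoG", customArgs={"sigma": [5.0]})  # LoG at sigma = 5 mm

# Placeholder paths: one CT volume and its gross-tumor-volume segmentation.
features = extractor.execute("ct_image.nrrd", "gtv_mask.nrrd")
print(features["original_shape_VoxelVolume"])                        # key feature in the thesis
```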

Analysis of a feature-deselective neuroevolution classifier (FD-NEAT) in a computer-aided lung nodule detection system for CT images

  • Tan, Maxine
  • Deklerck, Rudi
  • Jansen, Bart
  • Cornelis, Jan
2012 Conference Proceedings, cited 9 times
Website
Systems for Computer-Aided Detection (CAD), specifically for lung nodule detection received increasing attention in recent years. This is in tandem with the observation that patients who are diagnosed with early stage lung cancer and who undergo curative resection have a much better prognosis. In this paper, we analyze the performance of a novel feature-deselective neuroevolution method called FD-NEAT to retain relevant features derived from CT images and evolve neural networks that perform well for combined feature selection and classification. Network performance is analyzed based on radiologists' ratings of various lung nodule characteristics defined in the LIDC database. The analysis shows that the FD-NEAT classifier relates well with the radiologists' perception in almost all the defined nodule characteristics, and shows that FD-NEAT evolves networks that are less complex than the fixed-topology ANN in terms of number of connections.

Analysis of Classification Methods for Diagnosis of Pulmonary Nodules in CT Images

  • Baboo, Capt Dr S Santhosh
  • Iyyapparaj, E
IOSR Journal of Electrical and Electronics Engineering 2017 Journal Article, cited 0 times
Website
The main aim of this work is to propose a novel computer-aided detection (CAD) system, based on contextual clustering combined with region growing, for assisting radiologists in the early identification of lung cancer from computed tomography (CT) scans. Instead of the conventional thresholding approach, the proposed work uses contextual clustering, which yields a more accurate segmentation of the lungs from the chest volume. Following segmentation, GLCM features are extracted and then classified using three different classifiers, namely random forest, SVM, and k-NN.

Analysis of CT DICOM Image Segmentation for Abnormality Detection

  • Kulkarni, Rashmi
  • Bhavani, K.
International Journal of Engineering and Manufacturing 2019 Journal Article, cited 0 times
Website
Cancer is a menacing disease, and great care is required when diagnosing it. The CT modality is most commonly used in cancer therapy. Image processing techniques [1] can help doctors diagnose more easily and accurately. Image pre-processing [2] and segmentation methods [3] are used to extract cancerous nodules from CT images. Much research has been done on the segmentation of CT images with different algorithms, but none has reached 100% accuracy. This research work proposes a model for the analysis of CT image segmentation with and without filtered images, and brings out the importance of pre-processing CT images.

Analysis of dual tree M‐band wavelet transform based features for brain image classification

  • Ayalapogu, Ratna Raju
  • Pabboju, Suresh
  • Ramisetty, Rajeswara Rao
Magnetic Resonance in Medicine 2018 Journal Article, cited 1 times
Website

Analysis of postprocessing steps for residue function dependent dynamic susceptibility contrast (DSC)‐MRI biomarkers and their clinical impact on glioma grading for both 1.5 and 3T

  • Bell, Laura C
  • Stokes, Ashley M
  • Quarles, C. Chad
Journal of Magnetic Resonance Imaging 2019 Journal Article, cited 0 times
Website
BACKGROUND: Dynamic susceptibility contrast (DSC)-MRI analysis pipelines differ across studies and sites, potentially confounding the clinical value and use of the derived biomarkers. PURPOSE/HYPOTHESIS: To investigate how postprocessing steps for computation of cerebral blood volume (CBV) and residue function dependent parameters (cerebral blood flow [CBF], mean transit time [MTT], capillary transit heterogeneity [CTH]) impact glioma grading. STUDY TYPE: Retrospective study from The Cancer Imaging Archive (TCIA). POPULATION: Forty-nine subjects with low- and high-grade gliomas. FIELD STRENGTH/SEQUENCE: 1.5 and 3.0T clinical systems using a single-echo echo planar imaging (EPI) acquisition. ASSESSMENT: Manual regions of interest (ROIs) were provided by TCIA and automatically segmented ROIs were generated by k-means clustering. CBV was calculated based on conventional equations. Residue function dependent biomarkers (CBF, MTT, CTH) were found by two deconvolution methods: circular discretization followed by a signal-to-noise ratio (SNR)-adapted eigenvalue thresholding (Method 1) and Volterra discretization with L-curve-based Tikhonov regularization (Method 2). STATISTICAL TESTS: Analysis of variance, receiver operating characteristics (ROC), and logistic regression tests. RESULTS: MTT alone was unable to statistically differentiate glioma grade (P > 0.139). When normalized, tumor CBF, CTH, and CBV did not differ across field strengths (P > 0.141). Biomarkers normalized to automatically segmented regions performed equally (rCTH AUROC is 0.73 compared with 0.74) or better (rCBF AUROC increases from 0.74-0.84; rCBV AUROC increases 0.78-0.86) than manually drawn ROIs. By updating the current deconvolution steps (Method 2), rCTH can act as a classifier for glioma grade (P < 0.007), but not if processed by current conventional DSC methods (Method 1) (P > 0.577). Lastly, higher-order biomarkers (eg, rCBF and rCTH) along with rCBV increases AUROC to 0.92 for differentiating tumor grade as compared with 0.78 and 0.86 (manual and automatic reference regions, respectively) for rCBV alone. DATA CONCLUSION: With optimized analysis pipelines, higher-order perfusion biomarkers (rCBF and rCTH) improve glioma grading as compared with CBV alone. Additionally, postprocessing steps impact thresholds needed for glioma grading. LEVEL OF EVIDENCE: 3 Technical Efficacy: Stage 2 J. Magn. Reson. Imaging 2019.

Analysis of Vestibular Labyrinthine Geometry and Variation in the Human Temporal Bone

  • Johnson Chacko, Lejo
  • Schmidbauer, Dominik T
  • Handschuh, Stephan
  • Reka, Alen
  • Fritscher, Karl D
  • Raudaschl, Patrik
  • Saba, Rami
  • Handler, Michael
  • Schier, Peter P
  • Baumgarten, Daniel
  • Fischer, Natalie
  • Pechriggl, Elisabeth J
  • Brenner, Erich
  • Hoermann, Romed
  • Glueckert, Rudolf
  • Schrott-Fischer, Anneliese
Frontiers in Neuroscience 2018 Journal Article, cited 4 times
Website
Stable posture and body movement in humans is dictated by the precise functioning of the ampulla organs in the semi-circular canals. Statistical analysis of the interrelationship between bony and membranous compartments within the semi-circular canals is dependent on the visualization of soft tissue structures. Thirty-one human inner ears were prepared, post-fixed with osmium tetroxide and decalcified for soft tissue contrast enhancement. High resolution X-ray microtomography images at 15 μm voxel size were manually segmented. This data served as templates for centerline generation and cross-sectional area extraction. Our estimates demonstrate the variability of individual specimens from averaged centerlines of both bony and membranous labyrinth. Centerline lengths and cross-sectional areas along these lines were identified from segmented data. Using centerlines weighted by the inverse squares of the cross-sectional areas, plane angles could be quantified. The fit planes indicate that the bony labyrinth resembles a Cartesian coordinate system more closely than the membranous labyrinth. A widening in the membranous labyrinth of the lateral semi-circular canal was observed in some of the specimens. Likewise, the cross-sectional areas in the perilymphatic spaces of the lateral canal differed from the other canals. For the first time we could precisely describe the geometry of the human membranous labyrinth based on a large sample size. Awareness of the variations in the canal geometry of the membranous and bony labyrinth would be a helpful reference in designing electrodes for future vestibular prosthesis and simulating fluid dynamics more precisely.
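
The plane-fitting step mentioned here, centerline points weighted by the inverse squares of the cross-sectional areas, reduces to a weighted least-squares plane; a short numerical sketch with stand-in data follows.

```python
import numpy as np

def weighted_plane_normal(points, areas):
    """Fit a plane to 3D points with weights 1/area^2; return the unit normal."""
    w = 1.0 / areas ** 2                                 # inverse-square weighting
    centroid = (w[:, None] * points).sum(axis=0) / w.sum()
    scaled = (points - centroid) * np.sqrt(w)[:, None]   # weighted least squares
    _, _, vt = np.linalg.svd(scaled, full_matrices=False)
    return vt[-1]                                        # least-variance direction

pts = np.random.randn(100, 3) * [5.0, 5.0, 0.2]          # roughly planar centerline
areas = np.random.uniform(0.5, 2.0, 100)                 # stand-in cross-sections
n1 = weighted_plane_normal(pts, areas)
n2 = np.array([0.0, 0.0, 1.0])                           # another canal's plane normal
angle = np.degrees(np.arccos(abs(n1 @ n2)))              # inter-canal plane angle
```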

Analyzing MRI scans to detect glioblastoma tumor using hybrid deep belief networks

  • Reddy, Annapareddy V. N.
  • Krishna, Ch Phani
  • Mallick, Pradeep Kumar
  • Satapathy, Sandeep Kumar
  • Tiwari, Prayag
  • Zymbler, Mikhail
  • Kumar, Sachin
Journal of Big Data 2020 Journal Article, cited 0 times
Website
Glioblastoma (GBM) is a stage 4 malignant tumor in which a large portion of tumor cells are reproducing and dividing at any moment. These tumors are life threatening and may result in partial or complete mental and physical disability. In this study, we have proposed a classification model using hybrid deep belief networks (DBN) to classify magnetic resonance imaging (MRI) for GBM tumor. DBN is composed of stacked restricted Boltzmann machines (RBM). DBN often requires a large number of hidden layers that consists of large number of neurons to learn the best features from the raw image data. Hence, computational and space complexity is high and requires a lot of training time. The proposed approach combines DTW with DBN to improve the efficiency of existing DBN model. The results are validated using several statistical parameters. Statistical validation verifies that the combination of DTW and DBN outperformed the other classifiers in terms of training time, space complexity and classification accuracy.

An anatomic transcriptional atlas of human glioblastoma

  • Puchalski, Ralph B
  • Shah, Nameeta
  • Miller, Jeremy
  • Dalley, Rachel
  • Nomura, Steve R
  • Yoon, Jae-Guen
  • Smith, Kimberly A
  • Lankerovich, Michael
  • Bertagnolli, Darren
  • Bickley, Kris
  • Boe, Andrew F
  • Brouner, Krissy
  • Butler, Stephanie
  • Caldejon, Shiella
  • Chapin, Mike
  • Datta, Suvro
  • Dee, Nick
  • Desta, Tsega
  • Dolbeare, Tim
  • Dotson, Nadezhda
  • Ebbert, Amanda
  • Feng, David
  • Feng, Xu
  • Fisher, Michael
  • Gee, Garrett
  • Goldy, Jeff
  • Gourley, Lindsey
  • Gregor, Benjamin W
  • Gu, Guangyu
  • Hejazinia, Nika
  • Hohmann, John
  • Hothi, Parvinder
  • Howard, Robert
  • Joines, Kevin
  • Kriedberg, Ali
  • Kuan, Leonard
  • Lau, Chris
  • Lee, Felix
  • Lee, Hwahyung
  • Lemon, Tracy
  • Long, Fuhui
  • Mastan, Naveed
  • Mott, Erika
  • Murthy, Chantal
  • Ngo, Kiet
  • Olson, Eric
  • Reding, Melissa
  • Riley, Zack
  • Rosen, David
  • Sandman, David
  • Shapovalova, Nadiya
  • Slaughterbeck, Clifford R
  • Sodt, Andrew
  • Stockdale, Graham
  • Szafer, Aaron
  • Wakeman, Wayne
  • Wohnoutka, Paul E
  • White, Steven J
  • Marsh, Don
  • Rostomily, Robert C
  • Ng, Lydia
  • Dang, Chinh
  • Jones, Allan
  • Keogh, Bart
  • Gittleman, Haley R
  • Barnholtz-Sloan, Jill S
  • Cimino, Patrick J
  • Uppin, Megha S
  • Keene, C Dirk
  • Farrokhi, Farrokh R
  • Lathia, Justin D
  • Berens, Michael E
  • Iavarone, Antonio
  • Bernard, Amy
  • Lein, Ed
  • Phillips, John W
  • Rostad, Steven W
  • Cobbs, Charles
  • Hawrylycz, Michael J
  • Foltz, Greg D
Science 2018 Journal Article, cited 6 times
Website
Glioblastoma is an aggressive brain tumor that carries a poor prognosis. The tumor's molecular and cellular landscapes are complex, and their relationships to histologic features routinely used for diagnosis are unclear. We present the Ivy Glioblastoma Atlas, an anatomically based transcriptional atlas of human glioblastoma that aligns individual histologic features with genomic alterations and gene expression patterns, thus assigning molecular information to the most important morphologic hallmarks of the tumor. The atlas and its clinical and genomic database are freely accessible online data resources that will serve as a valuable platform for future investigations of glioblastoma pathogenesis, diagnosis, and treatment.

Anatomical DCE-MRI phantoms generated from glioma patient data

  • Beers, Andrew
  • Chang, Ken
  • Brown, James
  • Zhu, Xia
  • Sengupta, Dipanjan
  • Willke, Theodore L
  • Gerstner, Elizabeth
  • Rosen, Bruce
  • Kalpathy-Cramer, Jayashree
2018 Conference Proceedings, cited 0 times
Website

Anatomical Segmentation of CT images for Radiation Therapy planning using Deep Learning

  • Schreier, Jan
2018 Thesis, cited 0 times
Website

AnatomyNet: Deep learning for fast and fully automated whole‐volume segmentation of head and neck anatomy

  • Zhu, Wentao
  • Huang, Yufang
  • Zeng, Liang
  • Chen, Xuming
  • Liu, Yong
  • Qian, Zhen
  • Du, Nan
  • Fan, Wei
  • Xie, Xiaohui
Medical Physics 2018 Journal Article, cited 4 times
Website

An annotated test-retest collection of prostate multiparametric MRI

  • Fedorov, Andriy
  • Schwier, Michael
  • Clunie, David
  • Herz, Christian
  • Pieper, Steve
  • Kikinis, Ron
  • Tempany, Clare
  • Fennessy, Fiona
Scientific data 2018 Journal Article, cited 0 times
Website

The application of a workflow integrating the variable reproducibility and harmonizability of radiomic features on a phantom dataset

  • Ibrahim, Abdalla
  • Refaee, Turkey
  • Leijenaar, Ralph TH
  • Primakov, Sergey
  • Hustinx, Roland
  • Mottaghy, Felix M
  • Woodruff, Henry C
  • Maidment, Andrew DA
  • Lambin, Philippe
PLoS One 2021 Journal Article, cited 2 times
Website

Application of Artificial Neural Networks for Prognostic Modeling in Lung Cancer after Combining Radiomic and Clinical Features

  • Chufal, Kundan S.
  • Ahmad, Irfan
  • Pahuja, Anjali K.
  • Miller, Alexis A.
  • Singh, Rajpal
  • Chowdhary, Rahul L.
Asian Journal of Oncology 2019 Journal Article, cited 0 times
Website
Objective This study aimed to investigate machine learning (ML) and artificial neural networks (ANNs) in the prognostic modeling of lung cancer, utilizing high-dimensional data. Materials and Methods A computed tomography (CT) dataset of inoperable nonsmall cell lung carcinoma (NSCLC) patients with embedded tumor segmentation and survival status, comprising 422 patients, was selected. Radiomic data extraction was performed on Computation Environment for Radiation Research (CERR). The survival probability was first determined based on clinical features only and then unsupervised ML methods. Supervised ANN modeling was performed by direct and hybrid modeling which were subsequently compared. Statistical significance was set at <0.05. Results Survival analyses based on clinical features alone were not significant, except for gender. ML clustering performed on unselected radiomic and clinical data demonstrated a significant difference in survival (two-step cluster, median overall survival [mOS]: 30.3 vs. 17.2 m; p = 0.03; K-means cluster, mOS: 21.1 vs. 7.3 m; p < 0.001). Direct ANN modeling yielded a better overall model accuracy utilizing multilayer perceptron (MLP) than radial basis function (RBF; 79.2 vs. 61.4%, respectively). Hybrid modeling with MLP (after feature selection with ML) resulted in an overall model accuracy of 80%. There was no difference in model accuracy after direct and hybrid modeling (p = 0.164). Conclusion Our preliminary study supports the application of ANN in predicting outcomes based on radiomic and clinical data.

Application of Fuzzy c-means and Neural networks to categorize tumor affected breast MR Images

  • Anand, Shruthi
  • Vinod, Viji
  • Rampure, Anand
International Journal of Applied Engineering Research 2015 Journal Article, cited 4 times
Website

Application of Homomorphic Encryption on Neural Network in Prediction of Acute Lymphoid Leukemia

  • Khilji, Ishfaque Qamar
  • Saha, Kamonashish
  • Amin, Jushan
  • Iqbal, Muhammad
International Journal of Advanced Computer Science and Applications 2020 Journal Article, cited 0 times
Machine learning is now a widely used mechanism, and applying it in sensitive fields such as medicine and finance has only made things easier. Accurate diagnosis of cancer is essential for treating it properly, yet medical tests for cancer are currently quite expensive and unavailable in many parts of the world. CryptoNets demonstrate the use of neural networks over data encrypted with homomorphic encryption. This project demonstrates the use of homomorphic encryption for outsourcing neural-network predictions in the case of acute lymphoid leukemia (ALL). Using CryptoNets, patients or doctors in need of the service can encrypt their data with homomorphic encryption and send only the encrypted message to the service provider (hospital or model owner). Since homomorphic encryption allows the provider to operate on the data while it is encrypted, the provider can make predictions using a pre-trained neural network while the data remains encrypted throughout the process, finally sending the prediction to the user, who can decrypt the results. The service provider (hospital or model owner) thus gains no knowledge about the data that was used or about the result, since everything is encrypted throughout the process. Our work proposes a neural network model able to predict ALL with approximately 80% accuracy using the C_NMC Challenge dataset. Prior to building our own model, we pre-processed the dataset using a different approach. We then ran different machine learning and neural network models, such as VGG16, SVM, AlexNet, and ResNet50, and compared their validation accuracies with that of our own model, which ultimately gives better accuracy than the rest. We then used our own pre-trained neural network to make predictions using CryptoNets, achieving an encrypted prediction accuracy of about 78%, close to the 80% validation accuracy of our own CNN model for the prediction of ALL.
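
The encrypt-predict-decrypt round trip the authors describe can be illustrated with the TenSEAL library, which wraps Microsoft SEAL's CKKS scheme; this is only a stand-in for the paper's CryptoNets setup, with a single plaintext linear neuron in place of the CNN.

```python
import tenseal as ts

# CKKS context: the client keeps the secret key; the service sees only ciphertexts.
ctx = ts.context(ts.SCHEME_TYPE.CKKS, poly_modulus_degree=8192,
                 coeff_mod_bit_sizes=[60, 40, 40, 60])
ctx.global_scale = 2 ** 40
ctx.generate_galois_keys()

features = [0.12, 0.85, 0.33, 0.41]          # stand-in image-derived features
weights = [0.5, -1.2, 0.7, 0.9]              # one plaintext model neuron
bias = 0.1

enc_x = ts.ckks_vector(ctx, features)        # encrypted on the client side
enc_score = enc_x.dot(weights)               # evaluated while still encrypted
score = enc_score.decrypt()[0] + bias        # client decrypts and adds the bias
print(score)
```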

Application of Magnetic Resonance Radiomics Platform (MRP) for Machine Learning Based Features Extraction from Brain Tumor Images

  • Idowu, B.A.
  • Dada, O. M.
  • Awojoyogbe, O.B.
Journal of Science, Technology, Mathematics and Education (JOSTMED) 2021 Journal Article, cited 0 times
Website
This study investigated the implementation of a magnetic resonance radiomics platform (MRP) for machine-learning-based feature extraction from brain tumor images. Magnetic resonance imaging data publicly available in The Cancer Imaging Archive (TCIA) were downloaded and used to perform image co-registration, multi-modality handling, image interpolation, morphology operations, and extraction of radiomic features with MRP tools. Radiomics analyses were then applied to the data (containing AX-T1-POST, diffusion-weighted, AX-T2-FSE and AX-T2-FLAIR sequences) using wavelet decomposition principles. The results, employing different configurations of low-pass and high-pass filters, were exported to Microsoft Excel data sheets. The exported data were visualized using MATLAB's classification learner tool. These exported data and visualizations provide a new way of deeply assessing image data as well as easier interpretation of image scans. Findings from this study revealed that the machine learning radiomics platform is important for characterizing and visualizing a brain tumor and provides adequate information about it.

Application of Sparse-Coding Super-Resolution to 16-Bit DICOM Images for Improving the Image Resolution in MRI

  • Ota, Junko
  • Umehara, Kensuke
  • Ishimaru, Naoki
  • Ishida, Takayuki
Open Journal of Medical Imaging 2017 Journal Article, cited 1 times
Website

An Appraisal of Lung Nodules Automatic Classification Algorithms for CT Images

  • Wang, Xinqi
  • Mao, Keming
  • Wang, Lizhe
  • Yang, Peiyi
  • Lu, Duo
  • He, Ping
Sensors (Basel) 2019 Journal Article, cited 0 times
Website
Lung cancer is one of the most deadly diseases around the world, representing about 26% of all cancers in 2017. The five-year cure rate is only 18% despite great progress in recent diagnosis and treatment. Before diagnosis, lung nodule classification is a key step, especially since automatic classification can help clinicians by providing a valuable opinion. Modern computer vision and machine learning technologies allow very fast and reliable CT image classification, and this research area has attracted great interest for its high efficiency and labor savings. This paper aims to provide a systematic review of the state of the art of automatic classification of lung nodules. It covers published works selected from the Web of Science, IEEEXplore, and DBLP databases up to June 2018. Each paper is critically reviewed based on objective, methodology, research dataset, and performance evaluation. Mainstream algorithms are surveyed and generic structures are summarized. Our work reveals that lung nodule classification based on deep learning has become dominant owing to its excellent performance. It is concluded that the consistency of research objectives and the integration of data deserve more attention. Moreover, collaborative work among developers, clinicians, and other parties should be strengthened.

An Approach Toward Automatic Classification of Tumor Histopathology of Non–Small Cell Lung Cancer Based on Radiomic Features

  • Patil, Ravindra
  • Mahadevaiah, Geetha
  • Dekker, Andre
Tomography: a journal for imaging research 2016 Journal Article, cited 2 times
Website

Approaches to uncovering cancer diagnostic and prognostic molecular signatures

  • Hong, Shengjun
  • Huang, Yi
  • Cao, Yaqiang
  • Chen, Xingwei
  • Han, Jing-Dong J
Molecular & Cellular Oncology 2014 Journal Article, cited 2 times
Website
The recent rapid development of high-throughput technology enables the study of molecular signatures for cancer diagnosis and prognosis at multiple levels, from genomic and epigenomic to transcriptomic. These unbiased large-scale scans provide important insights into the detection of cancer-related signatures. In addition to single-layer signatures, such as gene expression and somatic mutations, integrating data from multiple heterogeneous platforms using a systematic approach has been proven to be particularly effective for the identification of classification markers. This approach not only helps to uncover essential driver genes and pathways in the cancer network that are responsible for the mechanisms of cancer development, but will also lead us closer to the ultimate goal of personalized cancer therapy.

Appropriate Contrast Enhancement Measures for Brain and Breast Cancer Images

  • Gupta, Suneet
  • Porwal, Rabins
International Journal of Biomedical Imaging 2016 Journal Article, cited 10 times
Website
Medical imaging systems often produce images that are poor in contrast and therefore require enhancement before they are examined by medical professionals; this is necessary for proper diagnosis and subsequent treatment. Various enhancement algorithms improve medical images to different extents, and various quantitative metrics exist for evaluating the quality of an image. This paper suggests the most appropriate measures for two kinds of medical images, namely brain cancer images and breast cancer images.

Are all shortcuts in encoder–decoder networks beneficial for CT denoising?

  • Chen, Junhua
  • Zhang, Chong
  • Wee, Leonard
  • Dekker, Andre
  • Bermejo, Inigo
Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization 2022 Journal Article, cited 0 times
Website
Denoising of CT scans has attracted the attention of many researchers in the medical image analysis domain. Encoder–decoder networks are deep learning neural networks that have become common for image denoising in recent years. Shortcuts between the encoder and decoder layers are crucial for some image-to-image translation tasks. However, are all shortcuts necessary for CT denoising? To answer this question, we set up two encoder–decoder networks representing two popular architectures and then progressively removed shortcuts from the networks from shallow to deep (forward removal) and from deep to shallow (backward removal). We used two unrelated datasets with different noise levels to test the denoising performance of these networks using two metrics, namely root mean square error and content loss. The results show that while more than half of the shortcuts are still indispensable for CT scan denoising, removing certain shortcuts leads to performance improvement. Both shallow and deep shortcuts may be removed, thus retaining sparse connections, especially when the noise level is high. Backward removal appears to perform better than forward removal, which means deep shortcuts should be prioritized for removal. Finally, we propose a hypothesis to explain this phenomenon and validate it in the experiments.
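
A minimal PyTorch sketch of the ablation idea, assuming a toy two-skip encoder-decoder rather than the authors' exact networks: each shortcut can be switched off independently, mimicking the forward/backward removal experiments.

```python
# Toy encoder-decoder denoiser where individual skip connections can be
# disabled via `use_skips` (a hypothetical per-level flag tuple). Not
# the paper's architectures; it only illustrates the ablation setup.
import torch
import torch.nn as nn

def block(c_in, c_out):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True))

class SkipAblationUNet(nn.Module):
    def __init__(self, use_skips=(True, True)):
        super().__init__()
        self.use_skips = use_skips
        self.enc = nn.ModuleList([block(1, 32), block(32, 64), block(64, 128)])
        self.pool = nn.MaxPool2d(2)
        self.up = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False)
        # Decoder input channels double only when the matching skip is kept.
        self.dec = nn.ModuleList([
            block(128 + (64 if use_skips[1] else 0), 64),
            block(64 + (32 if use_skips[0] else 0), 32),
        ])
        self.out = nn.Conv2d(32, 1, 1)

    def forward(self, x):
        e0 = self.enc[0](x)
        e1 = self.enc[1](self.pool(e0))
        d = self.enc[2](self.pool(e1))
        d = self.up(d)
        if self.use_skips[1]:
            d = torch.cat([d, e1], dim=1)   # deep shortcut
        d = self.up(self.dec[0](d))
        if self.use_skips[0]:
            d = torch.cat([d, e0], dim=1)   # shallow shortcut
        return self.out(self.dec[1](d))

noisy = torch.randn(1, 1, 64, 64)
print(SkipAblationUNet(use_skips=(False, True))(noisy).shape)  # shallow skip removed
```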

Are radiomics features universally applicable to different organs?

  • Lee, S. H.
  • Cho, H. H.
  • Kwon, J.
  • Lee, H. Y.
  • Park, H.
Cancer Imaging 2021 Journal Article, cited 0 times
Website
BACKGROUND: Many studies have successfully identified radiomics features reflecting macroscale tumor features and tumor microenvironment for various organs. There is an increased interest in applying these radiomics features found in a given organ to other organs. Here, we explored whether common radiomics features could be identified over target organs in vastly different environments. METHODS: Four datasets of three organs were analyzed. One radiomics model was constructed from the training set (lungs, n = 401) and was further evaluated in three independent test sets spanning three organs (lungs, n = 59; kidneys, n = 48; and brains, n = 43). Intensity histograms derived from the whole organ were compared to establish organ-level differences. We constructed a radiomics score based on selected features using training lung data over the tumor region. A total of 143 features were computed for each tumor. We adopted a feature selection approach that favored stable features, which can also capture survival. The radiomics score was applied to three independent test datasets from lung, kidney, and brain tumors, and whether the score could separate high- and low-risk groups was evaluated. RESULTS: Each organ showed a distinct pattern in the histogram and the derived parameters (mean and median) at the organ level. The radiomics score trained from the lung data of the tumor region included seven features, and the score was only effective in stratifying survival for other lung data, not for other organs such as the kidney and brain. Eliminating the lung-specific feature (2.5 percentile) from the radiomics score led to similar results. There were no common features between training and test sets, but a common category of features (texture category) was identified. CONCLUSION: Although the possibility of a generally applicable model cannot be excluded, we suggest that radiomics score models for survival were mostly specific for a given organ; applying them to other organs would require careful consideration of organ-specific properties.

Are shape morphologies associated with survival? A potential shape-based biomarker predicting survival in lung cancer

  • Saad, Maliazurina
  • Lee, Ik Hyun
  • Choi, Tae-Sun
J Cancer Res Clin Oncol 2019 Journal Article, cited 0 times
Website
PURPOSE: Imaging biomarkers (IBMs) are increasingly investigated as prognostic indicators. IBMs might be capable of assisting treatment selection by providing useful insights into tumor-specific factors in a non-invasive manner. METHODS: We investigated six three-dimensional shape-based IBMs: eccentricities between (I) intermediate-major axis (Eimaj), (II) intermediate-minor axis (Eimin), (III) major-minor axis (Emj-mn) and volumetric index of (I) sphericity (VioS), (II) flattening (VioF), (III) elongating (VioE). Additionally, we investigated previously established two-dimensional shape IBMs: eccentricity (E), index of sphericity (IoS), and minor-to-major axis length (Mn_Mj). IBMs were compared in terms of their predictive performance for 5-year overall survival in two independent cohorts of patients with lung cancer. Cohort 1 received surgical excision, while cohort 2 received radiation therapy alone or chemo-radiation therapy. Univariate and multivariate survival analyses were performed. Correlations with clinical parameters were evaluated using analysis of variance. IBM reproducibility was assessed using concordance correlation coefficients (CCCs). RESULTS: E was associated with reduced survival in cohort 1 (hazard ratio [HR]: 0.664). Eimin and VioF were associated with reduced survival in cohort 2 (HR 1.477 and 1.701). VioS was associated with reduced survival in cohorts 1 and 2 (HR 1.758 and 1.472). Spherical tumors correlated with shorter survival durations than did irregular tumors (median survival difference: 1.21 and 0.35 years in cohorts 1 and 2, respectively). VioS was a significant predictor of survival in multivariate analyses of both cohorts. All IBMs showed good reproducibility (CCC ranged between 0.86-0.98). CONCLUSIONS: In both investigated cohorts, VioS successfully linked shape morphology to patient survival.

Arterial input function and tracer kinetic model-driven network for rapid inference of kinetic maps in Dynamic Contrast-Enhanced MRI (AIF-TK-net)

  • Kettelkamp, Joseph
  • Lingala, Sajan Goud
2020 Conference Paper, cited 0 times
Website
We propose a patient-specific arterial input function (AIF) and tracer kinetic (TK) model-driven network to rapidly estimate the extended Tofts-Kety kinetic model parameters in DCE-MRI. We term our network AIF-TK-net; it maps an input comprising an image patch of the DCE time series and the patient-specific AIF to the output image patch of the TK parameters. We leverage the open-source NEURO-RIDER database of brain tumor DCE-MRI scans to train our network. Once trained, our model rapidly infers the TK maps of unseen DCE-MRI images, on the order of 0.34 s/slice for a 256x256x65 time-series dataset on an NVIDIA GeForce GTX 1080 Ti GPU. We show its utility on high time resolution DCE-MRI datasets where significant variability in AIFs across patients exists. We demonstrate that the proposed AIF-TK-net considerably improves the TK parameter estimation accuracy in comparison to a network that does not utilize the patient AIF.
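
The extended Tofts-Kety model that AIF-TK-net is trained to invert can be written out directly; a forward simulation like the sketch below (toy AIF and parameter values, not the paper's data) is the standard way to relate the TK parameters to a tissue concentration curve.

```python
# Forward extended Tofts-Kety model:
#   Ct(t) = vp*Cp(t) + Ktrans * integral_0^t Cp(u) * exp(-kep*(t-u)) du,
# with kep = Ktrans/ve. The AIF and parameter values are hypothetical.
import numpy as np

def extended_tofts(t, cp, ktrans, ve, vp):
    """Tissue concentration curve for a given arterial input function cp(t)."""
    kep = ktrans / ve
    dt = t[1] - t[0]
    # Discrete convolution of the AIF with the exponential residue function.
    conv = np.array([np.sum(cp[:i + 1] * np.exp(-kep * (t[i] - t[:i + 1]))) * dt
                     for i in range(len(t))])
    return vp * cp + ktrans * conv

t = np.arange(0, 300, 2.0)                     # seconds
cp = 5.0 * (t / 60.0) * np.exp(-t / 80.0)      # toy patient-specific AIF
ct = extended_tofts(t, cp, ktrans=0.25 / 60, ve=0.3, vp=0.05)
print(ct[:5])
```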

Artifact Reduction for Sparse-view CT using Deep Learning with Band Patch

  • Okamoto, Takayuki
  • Ohnishi, Takashi
  • Haneishi, Hideaki
2022 Journal Article, cited 1 times
Website
Sparse-view computed tomography (CT), an imaging technique that reduces the number of projections, can reduce the total scan duration and radiation dose. However, sparse data sampling causes streak artifacts on images reconstructed with analytical algorithms. In this paper, we propose an artifact reduction method for sparse-view CT using deep learning. We developed a light-weight fully convolutional network to estimate a fully sampled sinogram from a sparse-view sinogram by enlargement in the vertical direction. Furthermore, we introduced the band patch, a rectangular region cropped in the vertical direction, as an input image for the network based on the sinogram’s characteristics. Comparison experiments using a swine rib dataset of micro-CT scans and a chest dataset of clinical CT scans were conducted to compare the proposed method, improved U-net from a previous study, and the U-net with band patches. The experimental results showed that the proposed method achieved the best performance and the U-net with band patches had the second-best result in terms of accuracy and prediction time. In addition, the reconstructed images of the proposed method suppressed streak artifacts while preserving the object’s structural information. We confirmed that the proposed method and band patch are useful for artifact reduction for sparse-view CT.

Artificial intelligence in cancer imaging: Clinical challenges and applications

  • Bi, Wenya Linda
  • Hosny, Ahmed
  • Schabath, Matthew B
  • Giger, Maryellen L
  • Birkbak, Nicolai J
  • Mehrtash, Alireza
  • Allison, Tavis
  • Arnaout, Omar
  • Abbosh, Christopher
  • Dunn, Ian F
CA: a cancer journal for clinicians 2019 Journal Article, cited 0 times
Website

The ASNR-ACR-RSNA Common Data Elements Project: What Will It Do for the House of Neuroradiology?

  • Flanders, AE
  • Jordan, JE
American Journal of Neuroradiology 2018 Journal Article, cited 0 times
Website

Assessing robustness of radiomic features by image perturbation

  • Zwanenburg, Alex
  • Leger, Stefan
  • Agolli, Linda
  • Pilz, Karoline
  • Troost, Esther G C
  • Richter, Christian
  • Löck, Steffen
2019 Journal Article, cited 0 times
Website
Image features need to be robust against differences in positioning, acquisition and segmentation to ensure reproducibility. Radiomic models that only include robust features can be used to analyse new images, whereas models with non-robust features may fail to predict the outcome of interest accurately. Test-retest imaging is recommended to assess robustness, but may not be available for the phenotype of interest. We therefore investigated 18 combinations of image perturbations to determine feature robustness, based on noise addition (N), translation (T), rotation (R), volume growth/shrinkage (V) and supervoxel-based contour randomisation (C). Test-retest and perturbation robustness were compared for a combined total of 4032 morphological, statistical and texture features that were computed from the gross tumour volume in two cohorts with computed tomography imaging: I) 31 non-small-cell lung cancer (NSCLC) patients; II) 19 head-and-neck squamous cell carcinoma (HNSCC) patients. Robustness was determined using the 95% confidence interval (CI) of the intraclass correlation coefficient ICC(1,1). Features with CI ≥ 0.90 were considered robust. The NTCV, TCV, RNCV and RCV perturbation chains produced similar results and identified the fewest false positive robust features (NSCLC: 0.2-0.9%; HNSCC: 1.7-1.9%). Thus, these perturbation chains may be used as an alternative to test-retest imaging to assess feature robustness.
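
The robustness criterion used here is the one-way random-effects intraclass correlation ICC(1,1); a small self-contained sketch of computing it for one feature measured under k perturbations, with toy data, follows.

```python
# ICC(1,1) from one-way random-effects ANOVA:
#   ICC = (MSB - MSW) / (MSB + (k-1)*MSW)
# for n subjects (rows) measured by k "raters"/perturbations (columns).
# Toy data only, not the study's features.
import numpy as np

def icc_1_1(x):
    """x: (n_subjects, k_raters) array of one radiomic feature."""
    n, k = x.shape
    grand = x.mean()
    msb = k * np.sum((x.mean(axis=1) - grand) ** 2) / (n - 1)                 # between subjects
    msw = np.sum((x - x.mean(axis=1, keepdims=True)) ** 2) / (n * (k - 1))    # within subjects
    return (msb - msw) / (msb + (k - 1) * msw)

rng = np.random.default_rng(0)
truth = rng.normal(size=(30, 1))
measurements = truth + 0.1 * rng.normal(size=(30, 4))  # 4 perturbed re-computations
print(round(icc_1_1(measurements), 3))                 # close to 1 => robust feature
```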

Assessing the Effects of Software Platforms on Volumetric Segmentation of Glioblastoma

  • Dunn, William D Jr
  • Aerts, Hugo J W L
  • Cooper, Lee A
  • Holder, Chad A
  • Hwang, Scott N
  • Jaffe, Carle C
  • Brat, Daniel J
  • Jain, Rajan
  • Flanders, Adam E
  • Zinn, Pascal O
  • Colen, Rivka R
  • Gutman, David A
J Neuroimaging Psychiatry Neurol 2016 Journal Article, cited 0 times
Website
Background: Radiological assessments of biologically relevant regions in glioblastoma have been associated with genotypic characteristics, implying a potential role in personalized medicine. Here, we assess the reproducibility and association with survival of two volumetric segmentation platforms and explore how methodology could impact subsequent interpretation and analysis. Methods: Post-contrast T1- and T2-weighted FLAIR MR images of 67 TCGA patients were segmented into five distinct compartments (necrosis, contrast-enhancement, FLAIR, post contrast abnormal, and total abnormal tumor volumes) by two quantitative image segmentation platforms - 3D Slicer and a method based on Velocity AI and FSL. We investigated the internal consistency of each platform by correlation statistics, association with survival, and concordance with consensus neuroradiologist ratings using ordinal logistic regression. Results: We found high correlations between the two platforms for FLAIR, post contrast abnormal, and total abnormal tumor volumes (Spearman's r(67) = 0.952, 0.959, and 0.969, respectively). Only modest agreement was observed for necrosis and contrast-enhancement volumes (r(67) = 0.693 and 0.773, respectively), likely arising from differences in the manual and automated segmentation methods applied to these regions by 3D Slicer and Velocity AI/FSL, respectively. Survival analysis based on AUC revealed significant predictive power of both platforms for the following volumes: contrast-enhancement, post contrast abnormal, and total abnormal tumor volumes. Finally, ordinal logistic regression demonstrated correspondence to manual ratings for several features. Conclusion: Tumor volume measurements from both volumetric platforms produced highly concordant and reproducible estimates across platforms for general features. As automated or semi-automated volumetric measurements replace manual linear or area measurements, it will become increasingly important to keep in mind that measurement differences between segmentation platforms for more detailed features could influence downstream survival or radiogenomic analyses.

Assessing the prognostic impact of 3D CT image tumour rind texture features on lung cancer survival modelling

  • Vial, A.
  • Stirling, D.
  • Field, M.
  • Ros, M.
  • Ritz, C.
  • Carolan, M
  • Holloway, L.
  • Miller, A. A.
2017 Conference Paper, cited 1 times
Website
In this paper we examine a technique for developing prognostic image characteristics, termed radiomics, for non-small cell lung cancer based on a tumour-edge region-based analysis. Texture features were extracted from the rind of the tumour in a publicly available 3D CT data set to predict two-year survival. The derived models were compared against previous methods that train radiomic signatures descriptive of the whole tumour volume. Radiomic features derived solely from regions external, but neighbouring, the tumour were shown to also have prognostic value. Using additional texture features from the outer rind together with the tumour volume yields a 3% increase in accuracy over previous approaches for predicting two-year survival, compared with using the volume without the rind. This indicates that while the centre of the tumour is currently the main clinical target for radiotherapy treatment, the tissue immediately around the tumour is also clinically important.
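
A rind region of the kind analysed here can be derived from a binary gross-tumour-volume mask with standard morphology; the sketch below (illustrative width, not the paper's exact definition) keeps only the voxels just outside the tumour boundary.

```python
# Deriving an outer "rind" mask from a binary tumour mask by
# morphological dilation; texture features would then be computed over
# the rind voxels. The rind width here is an illustrative assumption.
import numpy as np
from scipy import ndimage

def outer_rind(gtv_mask, width_vox=3):
    """Voxels within `width_vox` dilations outside the tumour volume."""
    dilated = ndimage.binary_dilation(gtv_mask, iterations=width_vox)
    return dilated & ~gtv_mask

gtv = np.zeros((32, 32, 32), dtype=bool)
gtv[12:20, 12:20, 12:20] = True                 # toy cubic tumour
rind = outer_rind(gtv)
print(gtv.sum(), rind.sum())                    # tumour voxels vs. rind voxels
```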

Assessment of a radiomic signature developed in a general NSCLC cohort for predicting overall survival of ALK-positive patients with different treatment types

  • Huang, Lyu
  • Chen, Jiayan
  • Hu, Weigang
  • Xu, Xinyan
  • Liu, Di
  • Wen, Junmiao
  • Lu, Jiayu
  • Cao, Jianzhao
  • Zhang, Junhua
  • Gu, Yu
  • Wang, Jiazhou
  • Fan, Min
Clinical Lung Cancer 2019 Journal Article, cited 0 times
Website
Objectives: To investigate the potential of a radiomic signature developed in a general NSCLC cohort for predicting the overall survival of ALK-positive patients with different treatment types. Methods: After test-retest in the RIDER dataset, 132 features (ICC > 0.9) were selected in the LASSO Cox regression model with leave-one-out cross-validation. The NSCLC Radiomics collection from TCIA was randomly divided into a training set (N = 254) and a validation set (N = 63) to develop a general radiomic signature for NSCLC. In our ALK+ set, 35 patients received targeted therapy and 19 patients received non-targeted therapy. The developed signature was then tested in this ALK+ set. Performance of the signature was evaluated with the C-index and stratification analysis. Results: The general signature performed well (C-index > 0.6, log-rank p-value < 0.05) in the NSCLC Radiomics collection. It includes five features: Geom_va_ratio, W_GLCM_LH_Std, W_GLCM_LH_DV, W_GLCM_HH_IM2 and W_his_HL_mean (Supplementary Table S2). Its accuracy in predicting overall survival in the ALK+ set reached 0.649 (95% CI = 0.640-0.658). Nonetheless, impaired performance was observed in the targeted therapy group (C-index = 0.573, 95% CI = 0.556-0.589), while significantly improved performance was observed in the non-targeted therapy group (C-index = 0.832, 95% CI = 0.832-0.852). Stratification analysis also showed that the general signature could only identify high- and low-risk patients in the non-targeted therapy group (log-rank p-value = 0.00028). Conclusions: This preliminary study suggests that the applicability of a general signature to ALK-positive patients is limited. The general radiomic signature seems to be applicable only to ALK-positive patients who received non-targeted therapy, which indicates that developing dedicated radiomic signatures for patients treated with TKIs might be necessary. Abbreviations: TCIA, The Cancer Imaging Archive; ALK, anaplastic lymphoma kinase; NSCLC, non-small cell lung cancer; EML4-ALK fusion, echinoderm microtubule-associated protein-like 4-anaplastic lymphoma kinase fusion; C-index, concordance index; CI, confidence interval; ICC, intra-class correlation coefficient; OS, overall survival; LASSO, least absolute shrinkage and selection operator; EGFR, epidermal growth factor receptor; TKI, tyrosine-kinase inhibitor.
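
The signature-building step (an L1-penalized Cox model over pre-filtered stable features, scored by concordance index) can be sketched with the lifelines package; the synthetic DataFrame below stands in for the 132 test-retest-stable features and is not the study's data.

```python
# LASSO Cox sketch with lifelines: an L1 penalty drives most feature
# coefficients to (near) zero, leaving a sparse survival signature.
# All columns and values below are synthetic stand-ins.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
df = pd.DataFrame(rng.normal(size=(200, 5)),
                  columns=[f"feature_{i}" for i in range(5)])
df["time"] = rng.exponential(24, size=200)          # months of follow-up
df["event"] = rng.integers(0, 2, size=200)          # 1 = death observed

cph = CoxPHFitter(penalizer=0.1, l1_ratio=1.0)      # pure L1 => sparse signature
cph.fit(df, duration_col="time", event_col="event")
print(cph.params_)                                  # near-zero coefficients drop out
print("C-index:", cph.concordance_index_)
```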

Assessment of prostate cancer prognostic Gleason grade group using zonal-specific features extracted from biparametric MRI using a KNN classifier

  • Jensen, C.
  • Carl, J.
  • Boesen, L.
  • Langkilde, N. C.
  • Ostergaard, L. R.
J Appl Clin Med Phys 2019 Journal Article, cited 0 times
Website
PURPOSE: To automatically assess the aggressiveness of prostate cancer (PCa) lesions using zonal-specific image features extracted from diffusion weighted imaging (DWI) and T2W MRI. METHODS: Regions of interest were extracted from DWI (peripheral zone) and T2W MRI (transitional zone and anterior fibromuscular stroma) around the centers of 112 PCa lesions from 99 patients. Image histogram and texture features, 38 in total, were used together with a k-nearest neighbor classifier to classify lesions into their respective prognostic Grade Group (GG) (proposed by the International Society of Urological Pathology 2014 consensus conference). A semi-exhaustive feature search was performed (1-6 features in each feature set) and validated using threefold stratified cross-validation in a one-versus-rest classification setup. RESULTS: Classifying PCa lesions into GGs resulted in AUCs of 0.87, 0.88, 0.96, 0.98, and 0.91 for GG1, GG2, GG1 + 2, GG3, and GG4 + 5 for the peripheral zone, respectively. The results for the transitional zone and anterior fibromuscular stroma were AUCs of 0.85, 0.89, 0.83, 0.94, and 0.86 for GG1, GG2, GG1 + 2, GG3, and GG4 + 5, respectively. CONCLUSION: This study showed promising results with reasonable AUC values for classification of all GGs, indicating that zonal-specific imaging features from DWI and T2W MRI can be used to differentiate between PCa lesions of various degrees of aggressiveness.
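
The classification setup described, zonal image features plus a k-nearest-neighbor classifier evaluated one-versus-rest with threefold stratified cross-validation and AUC, can be sketched with scikit-learn; the features and labels below are synthetic.

```python
# k-NN with threefold stratified cross-validation and one-vs-rest AUC,
# mirroring the evaluation described above. Data are synthetic.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(112, 38))              # 38 histogram/texture features per lesion
y = rng.integers(0, 5, size=112)            # 5 Grade Group classes

cv = StratifiedKFold(n_splits=3, shuffle=True, random_state=0)
proba = cross_val_predict(KNeighborsClassifier(n_neighbors=5), X, y,
                          cv=cv, method="predict_proba")
for gg in range(5):                          # one-vs-rest AUC per Grade Group
    print(f"GG class {gg}: AUC = {roc_auc_score(y == gg, proba[:, gg]):.2f}")
```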

Assessment of Renal Cell Carcinoma by Texture Analysis in Clinical Practice: A Six-Site, Six-Platform Analysis of Reliability

  • Doshi, A. M.
  • Tong, A.
  • Davenport, M. S.
  • Khalaf, A.
  • Mresh, R.
  • Rusinek, H.
  • Schieda, N.
  • Shinagare, A.
  • Smith, A. D.
  • Thornhill, R.
  • Vikram, R.
  • Chandarana, H.
AJR Am J Roentgenol 2021 Journal Article, cited 0 times
Website
Background: Multiple commercial and open-source software applications are available for texture analysis. Nonstandard techniques can cause undesirable variability that impedes result reproducibility and limits clinical utility. Objective: The purpose of this study is to measure the agreement of texture metrics extracted by 6 software packages. Methods: This retrospective study included 40 renal cell carcinomas with contrast-enhanced CT from The Cancer Genome Atlas and The Cancer Imaging Archive. Images were analyzed by 7 readers at 6 sites. Each reader used 1 of 6 software packages to extract commonly studied texture features. Inter- and intra-reader agreement for segmentation was assessed with intra-class correlation coefficients. First-order (available in 6 packages) and second-order (available in 3 packages) texture features were compared between software pairs using Pearson correlation. Results: Inter- and intra-reader agreement was excellent (ICC 0.93-1). First-order feature correlations were strong (r > 0.8, p < 0.001) between 75% (21/28) of software pairs for mean and standard deviation, 48% (10/21) for entropy, 29% (8/28) for skewness, and 25% (7/28) for kurtosis. Of 15 second-order features, only co-occurrence matrix correlation, grey-level non-uniformity, and run-length non-uniformity showed strong correlations between software packages (0.90-1, p < 0.001). Conclusion: Variability in first- and second-order texture features was common across software configurations and produced inconsistent results. Standardized algorithms and reporting methods are needed before texture data can be reliably used for clinical applications. Clinical Impact: It is important to be aware of variability related to texture software processing and configuration when reporting and comparing outputs.

Assessment of the global noise algorithm for automatic noise measurement in head CT examinations

  • Ahmad, M.
  • Tan, D.
  • Marisetty, S.
Med Phys 2021 Journal Article, cited 0 times
Website
PURPOSE: The global noise (GN) algorithm has been previously introduced as a method for automatic noise measurement in clinical CT images. The accuracy of the GN algorithm has been assessed in abdomen CT examinations, but not in any other body part until now. This work assesses the GN algorithm's accuracy in automatic noise measurement in head CT examinations. METHODS: A publicly available image dataset of 99 head CT examinations was used to evaluate the accuracy of the GN algorithm in comparison to reference noise values. Reference noise values were acquired using a manual noise measurement procedure. The procedure used a consistent instruction protocol and multiple observers to mitigate the influence of intra- and interobserver variation, resulting in precise reference values. Optimal GN algorithm parameter values were determined. The GN algorithm accuracy and the corresponding statistical confidence interval were determined. The GN measurements were compared across the six different scan protocols used in this dataset. The correlation of GN to patient head size was also assessed using a linear regression model, and the CT scanner's X-ray beam quality was inferred from the model fit parameters. RESULTS: Across all head CT examinations in the dataset, the range of reference noise was 2.9-10.2 HU. A precision of ±0.33 HU was achieved in the reference noise measurements. After optimization, the GN algorithm had an RMS error of 0.34 HU, corresponding to a percent RMS error of 6.6%. The GN algorithm had a bias of +3.9%. Statistically significant differences in GN were detected in 11 out of the 15 different pairs of scan protocols. The GN measurements were correlated with head size with a statistically significant regression slope parameter (p < 10^-7). The CT scanner X-ray beam quality estimated from the slope parameter was 3.5 cm water HVL (2.8-4.8 cm 95% CI). CONCLUSION: The GN algorithm was validated for application in head CT examinations. The GN algorithm was accurate in comparison to reference manual measurement, with errors comparable to interobserver variation in manual measurement. The GN algorithm can detect noise differences in examinations performed on different scanner models or using different scan protocols. The trend in GN across patients of different head sizes closely follows that predicted by a physical model of X-ray attenuation.
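
A minimal sketch of the GN measurement as published in the literature: build a local standard-deviation map, restrict it to soft-tissue voxels, and take the mode of its histogram. The kernel size, HU range, and bin width below are assumptions, corresponding to the tunable parameters the study optimizes.

```python
# Global noise sketch: local SD map over soft-tissue voxels, then the
# mode of the SD histogram. Parameter values are illustrative.
import numpy as np
from scipy import ndimage

def global_noise(ct_slice, kernel=5, tissue_range=(0, 100), bin_width=0.25):
    img = ct_slice.astype(float)
    mean = ndimage.uniform_filter(img, kernel)
    mean_sq = ndimage.uniform_filter(img ** 2, kernel)
    sd = np.sqrt(np.clip(mean_sq - mean ** 2, 0, None))      # local SD map
    tissue = (ct_slice >= tissue_range[0]) & (ct_slice <= tissue_range[1])
    values = sd[tissue]
    hist, edges = np.histogram(values, bins=np.arange(0, values.max() + bin_width,
                                                      bin_width))
    return edges[np.argmax(hist)]                             # mode of SD histogram

phantom = np.full((256, 256), 40.0) + np.random.normal(0, 5.0, (256, 256))
print(round(global_noise(phantom), 2))                        # ~5 HU for this phantom
```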

Assigning readers to cases in imaging studies using balanced incomplete block designs

  • Huang, Erich P
  • Shih, Joanna H
Stat Methods Med Res 2021 Journal Article, cited 0 times
Website
In many imaging studies, each case is reviewed by human readers and characterized according to one or more features. Often, the inter-reader agreement of the feature indications is of interest in addition to their diagnostic accuracy or association with clinical outcomes. Complete designs in which all participating readers review all cases maximize efficiency and guarantee estimability of agreement metrics for all pairs of readers but often involve a heavy reading burden. Assigning readers to cases using balanced incomplete block designs substantially reduces reading burden by having each reader review only a subset of cases, while still maintaining estimability of inter-reader agreement for all pairs of readers. Methodology for data analysis and power and sample size calculations under balanced incomplete block designs is presented and applied to simulation studies and an actual example. Simulation studies results suggest that such designs may reduce reading burdens by >40% while in most scenarios incurring a <20% increase in the standard errors and a <8% and <20% reduction in power to detect between-modality differences in diagnostic accuracy and kappa statistics, respectively.

Associating spatial diversity features of radiologically defined tumor habitats with epidermal growth factor receptor driver status and 12-month survival in glioblastoma: methods and preliminary investigation

  • Lee, Joonsang
  • Narang, Shivali
  • Martinez, Juan J
  • Rao, Ganesh
  • Rao, Arvind
Journal of Medical Imaging 2015 Journal Article, cited 15 times
Website
We analyzed the spatial diversity of tumor habitats, regions with distinctly different intensity characteristics of a tumor, using various measurements of habitat diversity within tumor regions. These features were then used for investigating the association with a 12-month survival status in glioblastoma (GBM) patients and for the identification of epidermal growth factor receptor (EGFR)-driven tumors. T1 postcontrast and T2 fluid attenuated inversion recovery images from 65 GBM patients were analyzed in this study. A total of 36 spatial diversity features were obtained based on pixel abundances within regions of interest. Performance in both the classification tasks was assessed using receiver operating characteristic (ROC) analysis. For association with 12-month overall survival, the area under the ROC curve was 0.74 with confidence intervals [0.630 to 0.858]. The sensitivity and specificity at the optimal operating point on the ROC were 0.59 and 0.75, respectively. For the identification of EGFR-driven tumors, the area under the ROC curve (AUC) was 0.85 with confidence intervals [0.750 to 0.945]. The sensitivity and specificity at the optimal operating point on the ROC were 0.76 and 0.83, respectively. Our findings suggest that these spatial habitat diversity features are associated with these clinical characteristics and could be a useful prognostic tool for magnetic resonance imaging studies of patients with GBM.

Association between tumor architecture derived from generalized Q-space MRI and survival in glioblastoma

  • Taylor, Erik N
  • Ding, Yao
  • Zhu, Shan
  • Cheah, Eric
  • Alexander, Phillip
  • Lin, Leon
  • Aninwene II, George E
  • Hoffman, Matthew P
  • Mahajan, Anita
  • Mohamed, Abdallah SR
Oncotarget 2017 Journal Article, cited 0 times
Website
While it is recognized that the overall resistance of glioblastoma to treatment may be related to intra-tumor patterns of structural heterogeneity, imaging methods to assess such patterns remain rudimentary. Methods: We utilized a generalized Q-space imaging (GQI) algorithm to analyze magnetic resonance imaging (MRI) derived from a rodent model of glioblastoma and 2 clinical datasets to correlate GQI, histology, and survival. Results: In a rodent glioblastoma model, GQI demonstrated a poorly coherent core region, consisting of diffusion tracts < 5 mm, surrounded by a shell of highly coherent diffusion tracts, 6-25 mm. Histologically, the core region possessed a high degree of necrosis, whereas the shell consisted of organized sheets of anaplastic cells with elevated mitotic index. These attributes define tumor architecture as the macroscopic organization of variably aligned tumor cells. Applied to MRI data from The Cancer Genome Atlas (TCGA), the core-shell diffusion tract-length ratio (c/s ratio) correlated linearly with necrosis, which, in turn, was inversely associated with survival (p = 0.00002). We confirmed in an independent cohort of patients (n = 62) that the c/s ratio correlated inversely with survival (p = 0.0004). Conclusions: The analysis of MR images by GQI affords insight into tumor architectural patterns in glioblastoma that correlate with biological heterogeneity and clinical outcome.

Association of genomic subtypes of lower-grade gliomas with shape features automatically extracted by a deep learning algorithm

  • Buda, Mateusz
  • Saha, Ashirbani
  • Mazurowski, Maciej A
2019 Journal Article, cited 1 times
Website
Recent analysis identified distinct genomic subtypes of lower-grade glioma tumors which are associated with shape features. In this study, we propose a fully automatic way to quantify tumor imaging characteristics using deep learning-based segmentation and test whether these characteristics are predictive of tumor genomic subtypes. We used preoperative imaging and genomic data of 110 patients from 5 institutions with lower-grade gliomas from The Cancer Genome Atlas. Based on automatic deep learning segmentations, we extracted three features which quantify two-dimensional and three-dimensional characteristics of the tumors. Genomic data for the analyzed cohort of patients consisted of previously identified genomic clusters based on IDH mutation and 1p/19q co-deletion, DNA methylation, gene expression, DNA copy number, and microRNA expression. To analyze the relationship between the imaging features and genomic clusters, we conducted the Fisher exact test for 10 hypotheses for each pair of imaging feature and genomic subtype. To account for multiple hypothesis testing, we applied a Bonferroni correction. P-values lower than 0.005 were considered statistically significant. We found the strongest association between RNASeq clusters and the bounding ellipsoid volume ratio (p < 0.0002) and between RNASeq clusters and margin fluctuation (p < 0.005). In addition, we identified associations between bounding ellipsoid volume ratio and all tested molecular subtypes (p < 0.02) as well as between angular standard deviation and RNASeq cluster (p < 0.02). In terms of automatic tumor segmentation that was used to generate the quantitative image characteristics, our deep learning algorithm achieved a mean Dice coefficient of 82% which is comparable to human performance.
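
The hypothesis test used in the study is the Fisher exact test with a Bonferroni-corrected significance threshold; a minimal sketch with a hypothetical 2x2 contingency table follows.

```python
# Fisher exact test per (imaging feature, genomic subtype) pair with a
# Bonferroni-corrected threshold, as described above. The contingency
# table is hypothetical.
from scipy.stats import fisher_exact

# Rows: imaging feature above/below threshold; columns: subtype A / not A.
table = [[18, 7],
         [9, 21]]
odds_ratio, p = fisher_exact(table)

n_tests = 10                       # 10 hypotheses per feature-subtype pair
alpha = 0.05 / n_tests             # Bonferroni correction => 0.005
print(f"p = {p:.4f}, significant at corrected alpha: {p < alpha}")
```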

Association of Peritumoral Radiomics With Tumor Biology and Pathologic Response to Preoperative Targeted Therapy for HER2 (ERBB2)-Positive Breast Cancer

  • Braman, Nathaniel
  • Prasanna, Prateek
  • Whitney, Jon
  • Singh, Salendra
  • Beig, Niha
  • Etesami, Maryam
  • Bates, David D. B.
  • Gallagher, Katherine
  • Bloch, B. Nicolas
  • Vulchi, Manasa
  • Turk, Paulette
  • Bera, Kaustav
  • Abraham, Jame
  • Sikov, William M.
  • Somlo, George
  • Harris, Lyndsay N.
  • Gilmore, Hannah
  • Plecha, Donna
  • Varadan, Vinay
  • Madabhushi, Anant
JAMA Netw Open 2019 Journal Article, cited 0 times
Website
Importance: There has been significant recent interest in understanding the utility of quantitative imaging to delineate breast cancer intrinsic biological factors and therapeutic response. No clinically accepted biomarkers are as yet available for estimation of response to human epidermal growth factor receptor 2 (currently known as ERBB2, but referred to as HER2 in this study)–targeted therapy in breast cancer. Objective: To determine whether imaging signatures on clinical breast magnetic resonance imaging (MRI) could noninvasively characterize HER2-positive tumor biological factors and estimate response to HER2-targeted neoadjuvant therapy. Design, Setting, and Participants: In a retrospective diagnostic study encompassing 209 patients with breast cancer, textural imaging features extracted within the tumor and annular peritumoral tissue regions on MRI were examined as a means to identify increasingly granular breast cancer subgroups relevant to therapeutic approach and response. First, among a cohort of 117 patients who received an MRI prior to neoadjuvant chemotherapy (NAC) at a single institution from April 27, 2012, through September 4, 2015, imaging features that distinguished HER2+ tumors from other receptor subtypes were identified. Next, among a cohort of 42 patients with HER2+ breast cancers with available MRI and RNA-seq data accumulated from a multicenter, preoperative clinical trial (BrUOG 211B), a signature of the response-associated HER2-enriched (HER2-E) molecular subtype within HER2+ tumors (n = 42) was identified. The association of this signature with pathologic complete response was explored in 2 patient cohorts from different institutions, where all patients received HER2-targeted NAC (n = 28, n = 50). Finally, the association between significant peritumoral features and lymphocyte distribution was explored in patients within the BrUOG 211B trial who had corresponding biopsy hematoxylin-eosin–stained slide images. Data analysis was conducted from January 15, 2017, to February 14, 2019. Main Outcomes and Measures: Evaluation of imaging signatures by the area under the receiver operating characteristic curve (AUC) in identifying HER2+ molecular subtypes and distinguishing pathologic complete response (ypT0/is) to NAC with HER2-targeting. Results: In the 209 patients included (mean [SD] age, 51.1 [11.7] years), features from the peritumoral regions better discriminated HER2-E tumors (maximum AUC, 0.85; 95% CI, 0.79-0.90; 9-12 mm from the tumor) compared with intratumoral features (AUC, 0.76; 95% CI, 0.69-0.84). A classifier combining peritumoral and intratumoral features identified the HER2-E subtype (AUC, 0.89; 95% CI, 0.84-0.93) and was significantly associated with response to HER2-targeted therapy in both validation cohorts (AUC, 0.80; 95% CI, 0.61-0.98 and AUC, 0.69; 95% CI, 0.53-0.84). Features from the 0- to 3-mm peritumoral region were significantly associated with the density of tumor-infiltrating lymphocytes (R2 = 0.57; 95% CI, 0.39-0.75; P = .002). Conclusions and Relevance: A combination of peritumoral and intratumoral characteristics appears to identify intrinsic molecular subtypes of HER2+ breast cancers from imaging, offering insights into immune response within the peritumoral environment and suggesting potential benefit for treatment guidance.

Associations between gene expression profiles of invasive breast cancer and Breast Imaging Reporting and Data System MRI lexicon

  • Kim, Ga Ram
  • Ku, You Jin
  • Cho, Soon Gu
  • Kim, Sei Joong
  • Min, Byung Soh
Annals of Surgical Treatment and Research 2017 Journal Article, cited 3 times
Website
Purpose: To evaluate whether the Breast Imaging Reporting and Data System (BI-RADS) MRI lexicon could reflect the genomic information of breast cancers and to suggest intuitive imaging features as biomarkers. Methods: Matched breast MRI data from The Cancer Imaging Archive and gene expression profiles from The Cancer Genome Atlas of 70 invasive breast cancers were analyzed. Magnetic resonance images were reviewed according to the BI-RADS MRI lexicon of mass morphology. The cancers were divided into 2 gene-cluster groups by gene set enrichment analysis. Clinicopathologic and imaging characteristics were compared between the 2 groups. Results: The luminal subtype was predominant in the group 1 gene set and the triple-negative subtype was predominant in the group 2 gene set (55 of 56, 98.2% vs. 9 of 14, 64.3%). Internal enhancement descriptors were different between the 2 groups; heterogeneity was most frequent in group 1 (27 of 56, 48.2%) and rim enhancement was dominant in group 2 (10 of 14, 71.4%). In group 1, the gene sets related to mammary gland development were overexpressed, whereas the gene sets related to mitotic cell division were overexpressed in group 2. Conclusion: We identified intuitive imaging features of breast MRI associated with distinct gene expression profiles using the standard imaging variables of BI-RADS. The internal enhancement pattern on MRI might reflect specific gene expression profiles of breast cancers, which can be recognized by visual distinction.

An Attention Based Deep Learning Model for Direct Estimation of Pharmacokinetic Maps from DCE-MRI Images

  • Zeng, Qingyuan
  • Zhou, Wu
2021 Conference Paper, cited 0 times
Website
Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is a useful imaging technique that can quantitatively measure pharmacokinetic (PK) parameters to characterize the microvasculature of tissues. Typically, the PK parameters are extracted by fitting the MR signal intensity of each pixel along the time series with a nonlinear least-squares method. The main disadvantage is that a single MR slice contains thousands of voxels, so voxel-wise fitting to obtain the PK parameters is very time-consuming. Recently, deep learning methods based on convolutional neural networks (CNNs) and Long Short-Term Memory (LSTM) networks have been applied to estimate the PK parameters directly from the acquired DCE-MRI image-temporal series. However, effectively extracting discriminative spatial and temporal features within DCE-MRI for the estimation of PK parameters remains a challenging problem, due to the large intensity variation of tissue images across the temporal phases of DCE-MRI during contrast agent injection. In this work, we propose an attention-based deep learning model for the estimation of PK parameters, which improves estimation performance by focusing on dominant spatial and temporal characteristics. Specifically, a temporal frame attention block (FAB) and a channel/spatial attention block (CSAB) are separately designed to focus on dominant features in specific temporal phases, channels and spatial areas. Experimental results on clinical DCE-MRI from the open-source RIDER-NEURO dataset, with quantitative and qualitative evaluation, demonstrate that the proposed method outperforms previously reported CNN-based and LSTM-based deep learning models for the estimation of PK maps, and an ablation study demonstrates the effectiveness of the proposed attention modules. In addition, visualization of the attention mechanism reveals interesting findings that are consistent with clinical interpretation.
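
A CBAM-style channel/spatial attention block gives the flavour of the CSAB described above; the sketch below is a generic attention module under that assumption, not necessarily the paper's exact design.

```python
# Generic channel + spatial attention (CBAM-style) for reweighting
# feature maps, e.g. from a DCE-MRI patch, before PK regression.
import torch
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(channels, channels // reduction),
                                 nn.ReLU(inplace=True),
                                 nn.Linear(channels // reduction, channels))
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        b, c, _, _ = x.shape
        # Channel attention: squeeze spatial dims, excite channels.
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        # Spatial attention: pool across channels, 7x7 conv, sigmoid gate.
        pooled = torch.cat([x.mean(dim=1, keepdim=True),
                            x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(pooled))

feat = torch.randn(2, 32, 24, 24)        # toy feature maps
print(ChannelSpatialAttention(32)(feat).shape)
```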

Attention P-Net for Segmentation of Post-operative Glioblastoma in MRI

  • Enlund Åström, Isabelle
2019 Thesis, cited 0 times
Website
Segmentation of post-operative glioblastoma is important for follow-up treatment. In this thesis, Fully Convolutional Networks (FCNs) are utilised together with attention modules for segmentation of post-operative glioblastoma in MR images. Attention-based modules help the FCN focus on relevant features to improve segmentation results. Channel and spatial attention combine both the spatial context and the semantic information in MR images. P-Net is used as a backbone for creating an architecture with existing bottleneck attention modules, named attention P-Net. The proposed network and competing techniques were evaluated on an Uppsala University database containing T1-weighted MR images of the brain from 12 subjects. The proposed framework shows substantial improvement over the existing techniques.

Attention-guided duplex adversarial U-net for pancreatic segmentation from computed tomography images

  • Li, M.
  • Lian, F.
  • Li, Y.
  • Guo, S.
J Appl Clin Med Phys 2022 Journal Article, cited 0 times
Website
PURPOSE: Segmenting organs from computed tomography (CT) images is crucial to early diagnosis and treatment. Pancreas segmentation is especially challenging because the pancreas has a small volume and a large variation in shape. METHODS: To mitigate this issue, an attention-guided duplex adversarial U-Net (ADAU-Net) for pancreas segmentation is proposed in this work. First, two adversarial networks are integrated into the baseline U-Net to ensure the obtained prediction maps resemble the ground truths. Then, attention blocks are applied to preserve contextual information for segmentation. The implementation of the proposed ADAU-Net consists of two steps: 1) a backbone segmentor selection scheme is introduced to select an optimal backbone segmentor from three two-dimensional segmentation model variants based on a conventional U-Net and 2) attention blocks are integrated into the backbone segmentor at several locations to enhance the interdependency among pixels for better segmentation performance, and the optimal structure is selected as the final version. RESULTS: The experimental results on the National Institutes of Health Pancreas-CT dataset show that our proposed ADAU-Net outperforms the baseline segmentation network by 6.39% in Dice similarity coefficient and obtains competitive performance compared with state-of-the-art methods for pancreas segmentation. CONCLUSION: The ADAU-Net achieves satisfactory segmentation results on the public pancreas dataset, indicating that the proposed model can segment pancreas outlines from CT images accurately.
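
The headline metric here is the Dice similarity coefficient; a minimal implementation for binary masks follows.

```python
# Dice similarity coefficient for binary segmentation masks:
#   DSC = 2*|P intersect T| / (|P| + |T|)
import numpy as np

def dice(pred, truth, eps=1e-8):
    pred, truth = pred.astype(bool), truth.astype(bool)
    return 2.0 * np.logical_and(pred, truth).sum() / (pred.sum() + truth.sum() + eps)

pred = np.zeros((64, 64), dtype=bool); pred[20:40, 20:40] = True
truth = np.zeros((64, 64), dtype=bool); truth[24:44, 22:42] = True
print(round(dice(pred, truth), 3))       # 1.0 = perfect overlap
```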

An Augmentation in the Diagnostic Potency of Breast Cancer through A Deep Learning Cloud-Based AI Framework to Compute Tumor Malignancy & Risk

  • Agarwal, O
International Research Journal of Innovations in Engineering and Technology (IRJIET) 2019 Journal Article, cited 0 times
Website
This research project focuses on developing a web-based multi-platform solution for augmenting prognostic strategies to diagnose breast cancer (BC) from a variety of different tests, including histology, mammography, cytopathology, and fine-needle aspiration cytology, all in an automated fashion. The application utilizes tensor-based data representations and deep learning architectures to produce optimized models for the prediction of novel instances against each of these medical tests. The system has been designed so that all of its computation can be integrated seamlessly into a clinical setting without disrupting a clinician's productivity or workflow, instead enhancing their capabilities. The software can make the diagnostic process automated, standardized, faster, and even more accurate than current benchmarks achieved by both pathologists and radiologists, which makes it invaluable from a clinical standpoint for making well-informed diagnostic decisions with nominal resources.

Auto Diagnostics of Lung Nodules Using Minimal Characteristics Extraction Technique

  • Peña, Diego M
  • Luo, Shouhua
  • Abdelgader, Abdeldime
Diagnostics 2016 Journal Article, cited 6 times
Website

Auto‐segmentation of organs at risk for head and neck radiotherapy planning: from atlas‐based to deep learning methods

  • Vrtovec, Tomaž
  • Močnik, Domen
  • Strojan, Primož
  • Pernuš, Franjo
  • Ibragimov, Bulat
Medical Physics 2020 Journal Article, cited 2 times
Website

AutoComBat: a generic method for harmonizing MRI-based radiomic features

  • Carré, A.
  • Battistella, E.
  • Niyoteka, S.
  • Sun, R.
  • Deutsch, E.
  • Robert, C.
2022 Journal Article, cited 0 times
Website
The use of multicentric data is becoming essential for developing generalizable radiomic signatures. In particular, Magnetic Resonance Imaging (MRI) data used in brain oncology are often heterogeneous in terms of scanners and acquisitions, which significantly impact quantitative radiomic features. Various methods have been proposed to decrease dependency, including methods acting directly on MR images, i.e., based on the application of several preprocessing steps before feature extraction or the ComBat method, which harmonizes radiomic features themselves. The ComBat method used for radiomics may be misleading and presents some limitations, such as the need to know the labels associated with the "batch effect". In addition, a statistically representative sample is required and the applicability of a signature whose batch label is not present in the train set is not possible. This work aimed to compare a priori and a posteriori radiomic harmonization methods and propose a code adaptation to be machine learning compatible. Furthermore, we have developed AutoComBat, which aims to automatically determine the batch labels, using either MRI metadata or quality metrics as inputs of the proposed constrained clustering. A heterogeneous dataset consisting of high and low-grade gliomas coming from eight different centers was considered. The different methods were compared based on their ability to decrease relative standard deviation of radiomic features extracted from white matter and on their performance on a classification task using different machine learning models. ComBat and AutoComBat using image-derived quality metrics as inputs for batch assignment and preprocessing methods presented promising results on white matter harmonization, but with no clear consensus for all MR images. Preprocessing showed the best results on the T1w-gd images for the grading task. For T2w-flair, AutoComBat, using either metadata plus quality metrics or metadata alone as inputs, performs better than the conventional ComBat, highlighting its potential for data harmonization. Our results are MRI weighting, feature class and task dependent and require further investigations on other datasets.
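
At its core, ComBat is a location-scale adjustment per batch (here, per scanner); the numpy sketch below shows only that simplified idea, omitting ComBat's empirical-Bayes shrinkage and covariate preservation, which is the adjustment AutoComBat automates by inferring the batch labels.

```python
# Simplified location-scale harmonization in the spirit of ComBat:
# align each batch's feature means/variances to the pooled values.
# Real ComBat adds empirical-Bayes shrinkage and covariates.
import numpy as np

def location_scale_harmonize(features, batches):
    """features: (n_samples, n_features); batches: (n_samples,) labels."""
    out = np.empty_like(features, dtype=float)
    grand_mu, grand_sd = features.mean(axis=0), features.std(axis=0)
    for b in np.unique(batches):
        idx = batches == b
        mu, sd = features[idx].mean(axis=0), features[idx].std(axis=0)
        out[idx] = (features[idx] - mu) / (sd + 1e-8) * grand_sd + grand_mu
    return out

rng = np.random.default_rng(0)
x = np.vstack([rng.normal(0, 1, (40, 3)), rng.normal(2, 3, (40, 3))])  # 2 "scanners"
labels = np.array([0] * 40 + [1] * 40)
harmonized = location_scale_harmonize(x, labels)
print(harmonized[labels == 0].mean(), harmonized[labels == 1].mean())  # now similar
```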

Autocorrection of lung boundary on 3D CT lung cancer images

  • Nurfauzi, R.
  • Nugroho, H. A.
  • Ardiyanto, I.
  • Frannita, E. L.
Journal of King Saud University - Computer and Information Sciences 2019 Journal Article, cited 0 times
Website
Lung cancer in men has the highest mortality rate among all types of cancer. Juxta-pleural and juxta-vascular nodules are the most common nodules located on the lung surface. A computer-aided detection (CADe) system is effective for assisting radiologists in diagnosing lung nodules. However, the lung segmentation step requires sophisticated methods when juxta-pleural and juxta-vascular nodules are present. Fast computation and low error in covering nodule areas are the aims of this study. The proposed method consists of five stages, namely ground truth (GT) extraction, data preparation, tracheal extraction, separation of lung fusion and lung border correction. The data consist of 57 3D CT lung cancer images taken from a selected subset of the LIDC-IDRI dataset. Nodule ground truth is defined as the outer areas labeled by four radiologists. The proposed method achieves the fastest computational time of 0.32 s per slice, 60 times faster than that of conventional adaptive border marching (ABM). Moreover, it produces an under-segmentation rate of nodule areas as low as 14.6%. This indicates that the proposed method has potential to be embedded in a lung CADe system to cover juxta-pleural and juxta-vascular nodule areas in lung segmentation.

Automated 3-D Tissue Segmentation Via Clustering

  • Edwards, Samuel
  • Brown, Scott
  • Lee, Michael
Journal of Biomedical Engineering and Medical Imaging 2018 Journal Article, cited 0 times

Automated AJCC (7th edition) staging of non-small cell lung cancer (NSCLC) using deep convolutional neural network (CNN) and recurrent neural network (RNN)

  • Moitra, Dipanjan
  • Mandal, Rakesh Kr
Health Inf Sci Syst 2019 Journal Article, cited 0 times
Website
Purpose: A large proportion of lung cancers are of the type non-small cell lung cancer (NSCLC). Both treatment planning and patient prognosis depend greatly on factors like AJCC staging, which is an abstraction over TNM staging. Many significant efforts have so far been made towards automated staging of NSCLC, but the groundbreaking application of deep neural networks (DNNs) is yet to be observed in this domain of study. A DNN is capable of achieving a higher level of accuracy than traditional artificial neural networks (ANNs) as it uses deeper layers of convolutional neural network (CNN). The objective of the present study is to propose a simple yet fast CNN model combined with a recurrent neural network (RNN) for automated AJCC staging of NSCLC and to compare the outcome with a few standard machine learning algorithms along with a few similar studies. Methods: The NSCLC Radiogenomics collection from The Cancer Imaging Archive (TCIA) was considered for the study. The tumor images were refined and filtered by resizing, enhancement, de-noising, etc. The initial image processing phase was followed by texture-based image segmentation. The segmented images were fed into a hybrid feature detection and extraction model comprising two sequential phases: maximally stable extremal regions (MSER) and speeded-up robust features (SURF). After prolonged experimentation, the desired CNN-RNN model was derived and the extracted features were fed into the model. Results: The proposed CNN-RNN model outperformed almost all of the other machine learning algorithms under consideration. The accuracy remained steadily higher than that of other contemporary studies. Conclusion: The proposed CNN-RNN model performed commendably during the study. Further studies may be carried out to refine the model and develop an improved auxiliary decision support system for oncologists and radiologists.

Automated and Manual Quantification of Tumour Cellularity in Digital Slides for Tumour Burden Assessment

  • Akbar, S.
  • Peikari, M.
  • Salama, S.
  • Panah, A. Y.
  • Nofech-Mozes, S.
  • Martel, A. L.
2019 Journal Article, cited 3 times
Website
The residual cancer burden index is an important quantitative measure used for assessing treatment response following neoadjuvant therapy for breast cancer. It has been shown to be predictive of overall survival and is composed of two key metrics: qualitative assessment of lymph nodes and the percentage of invasive or in situ tumour cellularity (TC) in the tumour bed (TB). Currently, TC is assessed by eyeballing routine histopathology slides to estimate the proportion of tumour cells within the TB. With the advances in production of digitized slides and the increasing availability of slide scanners in pathology laboratories, there is potential to measure TC using automated algorithms with greater precision and accuracy. We describe two methods for automated TC scoring: 1) a traditional approach to image analysis development whereby we mimic the pathologists' workflow, and 2) a recent development in artificial intelligence in which features are learned automatically in deep neural networks using image data alone. We show strong agreement between automated and manual analysis of digital slides. Agreement between our trained deep neural networks and experts in this study (0.82) approaches the inter-rater agreement between pathologists (0.89). We also reveal properties that are captured when we apply a deep neural network to whole slide images, and discuss the potential of using such visualisations to improve upon TC assessment in the future.

Automated apparent diffusion coefficient analysis for genotype prediction in lower grade glioma: association with the T2-FLAIR mismatch sign

  • Aliotta, E.
  • Dutta, S. W.
  • Feng, X.
  • Tustison, N. J.
  • Batchala, P. P.
  • Schiff, D.
  • Lopes, M. B.
  • Jain, R.
  • Druzgal, T. J.
  • Mukherjee, S.
  • Patel, S. H.
J Neurooncol 2020 Journal Article, cited 0 times
Website
PURPOSE: The prognosis of lower grade glioma (LGG) patients depends (in large part) on both isocitrate dehydrogenase (IDH) gene mutation and chromosome 1p/19q codeletion status. IDH-mutant LGG without 1p/19q codeletion (IDHmut-Noncodel) often exhibit a unique imaging appearance that includes high apparent diffusion coefficient (ADC) values not observed in other subtypes. The purpose of this study was to develop an ADC analysis-based approach that can automatically identify IDHmut-Noncodel LGG. METHODS: Whole-tumor ADC metrics, including fractional tumor volume with ADC > 1.5 × 10⁻³ mm²/s (VADC>1.5), were used to identify IDHmut-Noncodel LGG in a cohort of N = 134 patients. Optimal threshold values determined in this dataset were then validated using an external dataset containing N = 93 cases collected from The Cancer Imaging Archive. Classifications were also compared with radiologist-identified T2-FLAIR mismatch sign and evaluated concurrently to identify added value from a combined approach. RESULTS: VADC>1.5 classified IDHmut-Noncodel LGG in the internal cohort with an area under the curve (AUC) of 0.80. An optimal threshold value of 0.35 led to sensitivity/specificity = 0.57/0.93. Classification performance was similar in the validation cohort, with VADC>1.5 ≥ 0.35 achieving sensitivity/specificity = 0.57/0.91 (AUC = 0.81). Across both groups, 37 cases exhibited a positive T2-FLAIR mismatch sign, all of which were IDHmut-Noncodel. Of these, 32/37 (86%) also exhibited VADC>1.5 ≥ 0.35, as did 23 additional IDHmut-Noncodel cases which were negative for the T2-FLAIR mismatch sign. CONCLUSION: Tumor subregions with high ADC were a robust indicator of IDHmut-Noncodel LGG, with VADC>1.5 achieving > 90% classification specificity in both internal and validation cohorts. VADC>1.5 exhibited strong concordance with the T2-FLAIR mismatch sign, and the combination of both parameters improved sensitivity in detecting IDHmut-Noncodel LGG.
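
The classifier ultimately reduces to one thresholded map statistic; a sketch of computing the fractional tumor volume with ADC > 1.5 × 10⁻³ mm²/s on a synthetic ADC map and applying the reported 0.35 cutoff follows.

```python
# Fractional tumor volume with ADC above 1.5e-3 mm^2/s, thresholded at
# the reported cutoff of 0.35. The ADC map and mask are synthetic.
import numpy as np

rng = np.random.default_rng(0)
adc = rng.normal(1.2e-3, 0.4e-3, size=(48, 48, 20))   # synthetic ADC map (mm^2/s)
tumor_mask = np.zeros_like(adc, dtype=bool)
tumor_mask[16:32, 16:32, 6:14] = True

v_adc_high = (adc[tumor_mask] > 1.5e-3).mean()        # fraction of tumor voxels
print(f"V_ADC>1.5 = {v_adc_high:.2f} -> IDHmut-Noncodel: {v_adc_high >= 0.35}")
```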

Automated Brain Lesion Detection and Segmentation Using Magnetic Resonance Images

  • Nabizadeh, Nooshin
2015 Thesis, cited 10 times
Website

Automated classification of acute leukemia on a heterogeneous dataset using machine learning and deep learning techniques

  • Abhishek, Arjun
  • Jha, Rajib Kumar
  • Sinha, Ruchi
  • Jha, Kamlesh
Biomedical Signal Processing and Control 2022 Journal Article, cited 2 times
Website
Today, artificial intelligence and deep learning techniques play a prominent role in the medical sciences. These techniques help doctors detect diseases early and reduce both their workload and the chance of error. However, experiments based on deep learning techniques require large and well-annotated datasets. This paper introduces a novel dataset of 500 peripheral blood smear images containing normal, Acute Myeloid Leukemia and Acute Lymphoblastic Leukemia images. The dataset comprises almost 1700 cancerous blood cells. The size of the dataset is increased by adding images from a publicly available dataset, forming a heterogeneous dataset. The heterogeneous dataset is used for the automated binary classification task, which is one of the major tasks of the proposed work. The proposed work performs binary as well as three-class classification tasks involving state-of-the-art techniques based on machine learning and deep learning. For binary classification, the proposed work achieved an accuracy of 97% when the fully connected layers along with the last three convolutional layers of VGG16 are fine-tuned, and 98% for DenseNet121 together with a support vector machine. For the three-class classification task, an accuracy of 95% is obtained for ResNet50 together with a support vector machine. The preparation of the novel dataset was carried out under the guidance of various experts, which will help the scientific community in medical research supported by machine learning models.

Automated Classification of Lung Diseases in Computed Tomography Images Using a Wavelet Based Convolutional Neural Network

  • Matsuyama, Eri
  • Tsai, Du-Yih
Journal of Biomedical Science and Engineering 2018 Journal Article, cited 0 times
Website

Automated delineation of non‐small cell lung cancer: A step toward quantitative reasoning in medical decision science

  • Saad, Maliazurina
  • Lee, Ik Hyun
  • Choi, Tae‐Sun
International Journal of Imaging Systems and Technology 2019 Journal Article, cited 0 times
Website
Quantitative reasoning in medical decision science relies on the delineation of pathological objects. For example, evidence-based clinical decisions regarding lung diseases require the segmentation of nodules, tumors, or cancers. Non-small cell lung cancer (NSCLC) tends to be large in size, irregularly shaped, and to grow against surrounding structures, imposing challenges in segmentation even for expert clinicians. An automated delineation tool based on spatial analysis was developed and studied on 25 sets of computed tomography scans of NSCLC. Manual and automated delineations were compared, and the proposed method exhibited robustness in terms of tumor size (5.32-18.24 mm), shape (spherical or irregular), contouring (lobulated, spiculated, or cavitated), localization (solitary, pleural, mediastinal, endobronchial, or tagging), and laterality (left or right lobe), with accuracy between 80% and 99%. Small discrepancies observed between the manual and automated delineations may arise from variability in practitioners' definitions of the region of interest or from imaging artifacts that reduced the tissue resolution.

Automated detection and segmentation of thoracic lymph nodes from CT using 3D foveal fully convolutional neural networks

  • Iuga, A. I.
  • Carolus, H.
  • Hoink, A. J.
  • Brosch, T.
  • Klinder, T.
  • Maintz, D.
  • Persigehl, T.
  • Baessler, B.
  • Pusken, M.
BMC Med Imaging 2021 Journal Article, cited 0 times
Website
BACKGROUND: In oncology, the correct determination of nodal metastatic disease is essential for patient management, as patient treatment and prognosis are closely linked to the stage of the disease. The aim of the study was to develop a tool for automatic 3D detection and segmentation of lymph nodes (LNs) in computed tomography (CT) scans of the thorax using a fully convolutional neural network based on 3D foveal patches. METHODS: The training dataset was collected from the Computed Tomography Lymph Nodes Collection of The Cancer Imaging Archive, containing 89 contrast-enhanced CT scans of the thorax. A total of 4275 LNs were segmented semi-automatically by a radiologist, assessing the entire 3D volume of the LNs. Using this data, a fully convolutional neural network based on 3D foveal patches was trained with fourfold cross-validation. Testing was performed on an unseen dataset containing 15 contrast-enhanced CT scans of patients who were referred upon suspicion or for staging of bronchial carcinoma. RESULTS: The algorithm achieved a good overall performance, with a total detection rate of 76.9% for enlarged LNs during fourfold cross-validation in the training dataset, with 10.3 false positives per volume, and of 69.9% in the unseen testing dataset. In the training dataset a better detection rate was observed for enlarged LNs compared to smaller LNs, the detection rates for LNs with a short-axis diameter (SAD) ≥ 20 mm and SAD 5-10 mm being 91.6% and 62.2% (p < 0.001), respectively. The best detection rates were obtained for LNs located in Level 4R (83.6%) and Level 7 (80.4%). CONCLUSIONS: The proposed 3D deep learning approach achieves an overall good performance in the automatic detection and segmentation of thoracic LNs and shows reasonable generalizability, yielding the potential to facilitate detection during routine clinical work and to enable radiomics research without observer bias.

Automated Detection of Early Pulmonary Nodule in Computed Tomography Images

  • Tariq, Ahmed Usama
2019 Thesis, cited 0 times
Website
Classification of lung cancer in CT scans involves two major steps: detecting all suspicious lesions, also known as pulmonary nodules, and estimating their malignancy. Currently, many studies address nodule detection, but few address the evaluation of nodule malignancy. Since the presence of a nodule does not unquestionably indicate lung cancer, and the morphology of a nodule has a complex association with malignancy, the diagnosis of lung cancer requires careful examination of each suspicious nodule and the integration of information from all nodules. We propose a 3D CNN CAD system to solve this problem. The system consists of two modules: a 3D CNN for nodule detection, which outputs all suspicious nodules for a subject, and a second module that trains an XGBoost classifier on selected data to estimate the probability of lung malignancy for the subject.

Automated detection of glioblastoma tumor in brain magnetic imaging using ANFIS classifier

  • Thirumurugan, P
  • Ramkumar, D
  • Batri, K
  • Sundhara Raja, D
International Journal of Imaging Systems and Technology 2016 Journal Article, cited 3 times
Website
This article proposes a novel and efficient methodology for the detection of Glioblastoma tumors in brain MRI images. The proposed method consists of the following stages: preprocessing, Non-subsampled Contourlet transform (NSCT) decomposition, feature extraction, and Adaptive neuro-fuzzy inference system classification. The Euclidean direction algorithm is used to remove impulse noise from the brain image during the image acquisition process. NSCT decomposes the denoised brain image into approximation bands and high-frequency bands. The features mean, standard deviation, and energy are computed from the extracted coefficients and given as input to the classifier. The classifier classifies the brain MRI image as a normal or Glioblastoma tumor image based on the feature set. The proposed system achieves 99.8% sensitivity, 99.7% specificity, and 99.8% accuracy with respect to the ground truth images available in the dataset.

Automated Facial Recognition of Computed Tomography-Derived Facial Images: Patient Privacy Implications

  • Parks, Connie L
  • Monson, Keith L
2016 Journal Article, cited 3 times
Website
The recognizability of facial images extracted from publicly available medical scans raises patient privacy concerns. This study examined how accurately facial images extracted from computed tomography (CT) scans can be objectively matched with corresponding photographs of the scanned individuals. The test subjects were 128 adult Americans ranging in age from 18 to 60 years, representing both sexes and three self-identified population (ancestral descent) groups (African, European, and Hispanic). Using facial recognition software, the 2D images of the extracted facial models were compared for matches against five differently sized photo galleries. Depending on the scanning protocol and gallery size, in 6-61% of the cases a correct life photo match for a CT-derived facial image was the top-ranked image in the generated candidate lists, even when blind searching in excess of 100,000 images. In 31-91% of the cases, a correct match was located within the top 50 images. Few significant differences (p > 0.05) in match rates were observed between the sexes or across the three age cohorts. Highly significant differences (p < 0.01) were, however, observed across the three ancestral cohorts and between the two CT scanning protocols. Results suggest that the probability of a match between a facial image extracted from a medical scan and a photograph of the individual is moderately high. The facial image data inherent in commonly employed medical imaging modalities may therefore need to be considered a potentially identifiable form of "comparable" facial imagery and protected as such under patient privacy legislation.

Automated feature extraction in brain tumor by magnetic resonance imaging using gaussian mixture models

  • Chaddad, Ahmad
Journal of Biomedical Imaging 2015 Journal Article, cited 29 times
Website

Automated grading of non-small cell lung cancer by fuzzy rough nearest neighbour method

  • Moitra, Dipanjan
  • Mandal, Rakesh Kr
Network Modeling Analysis in Health Informatics and Bioinformatics 2019 Journal Article, cited 0 times
Lung cancer is one of the most lethal diseases across the world. Most lung cancers belong to the category of non-small cell lung cancer (NSCLC). Many studies have so far been carried out to avoid the hazards and bias of manual classification of NSCLC tumors. A few of these studies were aimed at automated nodal staging using standard machine learning algorithms. Many others tried to classify tumors as either benign or malignant. None of these studies considered the pathological grading of NSCLC. Automated grading may perfectly depict the dissimilarity between normal tissue and cancer-affected tissue. Such automation may save patients from undergoing a painful biopsy and may also help radiologists or oncologists grade the tumor or lesion correctly. The present study aims at the automated grading of NSCLC tumors using the fuzzy rough nearest neighbour (FRNN) method. The dataset was extracted from The Cancer Imaging Archive and comprised PET/CT images of NSCLC tumors of 211 patients. The features from accelerated segment test (FAST) and histogram of oriented gradients methods were used to detect and extract features from the segmented images. Gray level co-occurrence matrix (GLCM) features were also considered in the study. The features, along with the clinical grading information, were fed into four machine learning algorithms: FRNN, logistic regression, multi-layer perceptron, and support vector machine. The results were thoroughly compared in the light of various evaluation metrics. The confusion matrix was found to be balanced, and the outcome was more cost-effective for FRNN. Results were also compared with various other leading studies done earlier in this field. The proposed FRNN model performed satisfactorily during the experiment. Further exploration of FRNN may be very helpful for radiologists and oncologists in planning the treatment for NSCLC. More varieties of cancers may be considered in similar future studies.

Automated grading of prostate cancer using convolutional neural network and ordinal class classifier

  • Abraham, Bejoy
  • Nair, Madhu S.
Informatics in Medicine Unlocked 2019 Journal Article, cited 0 times
Website
Prostate Cancer (PCa) is one of the most prominent cancers among men. Early diagnosis and treatment planning are significant in reducing the mortality rate due to PCa. Accurate prediction of grade is required to ensure prompt treatment for cancer. Grading of prostate cancer can be considered an ordinal class classification problem. This paper presents a novel method for the grading of prostate cancer from multiparametric magnetic resonance images using the VGG-16 Convolutional Neural Network and an Ordinal Class Classifier with J48 as the base classifier. Multiparametric magnetic resonance images of the PROSTATEx-2 2017 grand challenge dataset are employed for this work. The method achieved a moderate quadratic weighted kappa score of 0.4727 in the grading of PCa into 5 grade groups, which is higher than state-of-the-art methods. The method also achieved a positive predictive value of 0.9079 in predicting clinically significant prostate cancer.

Automated image quality assessment for chest CT scans

  • Reeves, A. P.
  • Xie, Y.
  • Liu, S.
Med Phys 2018 Journal Article, cited 0 times
Website
PURPOSE: Medical image quality needs to be maintained at standards sufficient for effective clinical reading. Automated computer analytic methods may be applied to medical images for quality assessment. METHODS: For chest CT scans in a lung cancer screening context, an automated quality assessment method is presented that characterizes image noise and image intensity calibration. This is achieved by image measurements in three automatically segmented homogeneous regions of the scan: external air, trachea lumen air, and descending aorta blood. Profiles of CT scanner behavior are also computed. RESULTS: The method has been evaluated on both phantom and real low-dose chest CT scans and results show that repeatable noise and calibration measures may be realized by automated computer algorithms. Noise and calibration profiles show relevant differences between different scanners and protocols. CONCLUSIONS: Automated image quality assessment may be useful for quality control for lung cancer screening and may enable performance improvements to automated computer analysis methods.

Automated Koos Classification of Vestibular Schwannoma

  • Kujawa, Aaron
  • Dorent, Reuben
  • Connor, Steve
  • Oviedova, Anna
  • Okasha, Mohamed
  • Grishchuk, Diana
  • Ourselin, Sebastien
  • Paddick, Ian
  • Kitchen, Neil
  • Vercauteren, Tom
  • Shapey, Jonathan
Frontiers in Radiology 2022 Journal Article, cited 0 times
Website
Objective: The Koos grading scale is a frequently used classification system for vestibular schwannoma (VS) that accounts for extrameatal tumor dimension and compression of the brain stem. We propose an artificial intelligence (AI) pipeline to fully automate the segmentation and Koos classification of VS from MRI to improve clinical workflow and facilitate patient management. Methods: We propose a method for Koos classification that does not only rely on available images but also on automatically generated segmentations. Artificial neural networks were trained and tested based on manual tumor segmentations and ground truth Koos grades of contrast-enhanced T1-weighted (ceT1) and high-resolution T2-weighted (hrT2) MR images from subjects with a single sporadic VS, acquired on a single scanner and with a standardized protocol. The first stage of the pipeline comprises a convolutional neural network (CNN) which can segment the VS and 7 adjacent structures. For the second stage, we propose two complementary approaches that are combined in an ensemble. The first approach applies a second CNN to the segmentation output to predict the Koos grade, the other approach extracts handcrafted features which are passed to a Random Forest classifier. The pipeline results were compared to those achieved by two neurosurgeons. Results: Eligible patients (n = 308) were pseudo-randomly split into 5 groups to evaluate the model performance with 5-fold cross-validation. The weighted macro-averaged mean absolute error (MA-MAE), weighted macro-averaged F1 score (F1), and accuracy score of the ensemble model were assessed on the testing sets as follows: MA-MAE = 0.11 ± 0.05, F1 = 89.3 ± 3.0%, accuracy = 89.3 ± 2.9%, which was comparable to the average performance of two neurosurgeons: MA-MAE = 0.11 ± 0.08, F1 = 89.1 ± 5.2, accuracy = 88.6 ± 5.8%. Inter-rater reliability was assessed by calculating Fleiss' generalized kappa (k = 0.68) based on all 308 cases, and intra-rater reliabilities of annotator 1 (k = 0.95) and annotator 2 (k = 0.82) were calculated according to the weighted kappa metric with quadratic (Fleiss-Cohen) weights based on 15 randomly selected cases. Conclusions: We developed the first AI framework to automatically classify VS according to the Koos scale. The excellent results show that the accuracy of the framework is comparable to that of neurosurgeons and may therefore facilitate management of patients with VS. The models, code, and ground truth Koos grades for a subset of publicly available images (n = 188) will be released upon publication.

Automated lung cancer diagnosis using three-dimensional convolutional neural networks

  • Perez, Gustavo
  • Arbelaez, Pablo
2020 Journal Article, cited 0 times
Website
Lung cancer is the deadliest cancer worldwide. It has been shown that early detection using low-dose computed tomography (LDCT) scans can reduce deaths caused by this disease. We present a general framework for the detection of lung cancer in chest LDCT images. Our method consists of a nodule detector trained on the LIDC-IDRI dataset followed by a cancer predictor trained on the Kaggle DSB 2017 dataset and evaluated on the IEEE International Symposium on Biomedical Imaging (ISBI) 2018 Lung Nodule Malignancy Prediction test set. Our candidate extraction approach is effective in producing accurate candidates, with a recall of 99.6%. In addition, our false-positive reduction stage classifies the candidates successfully and increases precision by a factor of 2000. Our cancer predictor obtained a ROC AUC of 0.913 and was ranked 1st place at the ISBI 2018 Lung Nodule Malignancy Prediction challenge.

Automated lung field segmentation in CT images using mean shift clustering and geometrical features

  • Chama, Chanukya Krishna
  • Mukhopadhyay, Sudipta
  • Biswas, Prabir Kumar
  • Dhara, Ashis Kumar
  • Madaiah, Mahendra Kasuvinahally
  • Khandelwal, Niranjan
2013 Conference Proceedings, cited 8 times
Website

An automated lung segmentation approach using bidirectional chain codes to improve nodule detection accuracy

  • Shen, Shiwen
  • Bui, Alex AT
  • Cong, Jason
  • Hsu, William
2015 Journal Article, cited 31 times
Website
Computer-aided detection and diagnosis (CAD) has been widely investigated to improve radiologists' diagnostic accuracy in detecting and characterizing lung disease, as well as to assist with the processing of increasingly sizable volumes of imaging. Lung segmentation is a requisite preprocessing step for most CAD schemes. This paper proposes a parameter-free lung segmentation algorithm with the aim of improving lung nodule detection accuracy, focusing on juxtapleural nodules. A bidirectional chain coding method combined with a support vector machine (SVM) classifier is used to selectively smooth the lung border while minimizing the over-segmentation of adjacent regions. This automated method was tested on 233 computed tomography (CT) studies from the Lung Image Database Consortium (LIDC), representing 403 juxtapleural nodules. The approach obtained a 92.6% re-inclusion rate. Segmentation accuracy was further validated on 10 randomly selected CT series, finding a 0.3% average over-segmentation ratio and a 2.4% under-segmentation rate when compared to manually segmented reference standards produced by an expert.

Automated lung tumor detection and diagnosis in CT Scans using texture feature analysis and SVM

  • Adams, Tim
  • Dörpinghaus, Jens
  • Jacobs, Marc
  • Steinhage, Volker
Communication Papers of the Federated Conference on Computer Science and Information Systems 2018 Journal Article, cited 0 times
Website

Automated Medical Image Modality Recognition by Fusion of Visual and Text Information

  • Codella, Noel
  • Connell, Jonathan
  • Pankanti, Sharath
  • Merler, Michele
  • Smith, John R
2014 Book Section, cited 10 times
Website

An Automated Method for Locating Phantom Nodules in Anthropomorphic Thoracic Phantom CT Studies

  • Peskin, Adele P
  • Dima, Alden A
  • Saiprasad, Ganesh
2011 Conference Paper, cited 1 times
Website

Automated Muscle Segmentation from Clinical CT using Bayesian U-Net for Personalized Musculoskeletal Modeling

  • Hiasa, Yuta
  • Otake, Yoshito
  • Takao, Masaki
  • Ogawa, Takeshi
  • Sugano, Nobuhiko
  • Sato, Yoshinobu
IEEE Trans Med Imaging 2019 Journal Article, cited 2 times
Website
We propose a method for automatic segmentation of individual muscles from a clinical CT. The method uses Bayesian convolutional neural networks with the U-Net architecture, using Monte Carlo dropout that infers an uncertainty metric in addition to the segmentation label. We evaluated the performance of the proposed method using two data sets: 20 fully annotated CTs of the hip and thigh regions and 18 partially annotated CTs that are publicly available from The Cancer Imaging Archive (TCIA) database. The experiments showed a Dice coefficient (DC) of 0.891+/-0.016 (mean+/-std) and an average symmetric surface distance (ASD) of 0.994+/-0.230 mm over 19 muscles in the set of 20 CTs. These results were statistically significant improvements compared to the state-of-the-art hierarchical multi-atlas method, which resulted in 0.845+/-0.031 DC and 1.556+/-0.444 mm ASD. We evaluated the validity of the uncertainty metric in the multi-class organ segmentation problem and demonstrated a correlation between pixels with high uncertainty and segmentation failure. One application of the uncertainty metric in active learning is demonstrated, and the proposed query pixel selection method considerably reduced the manual annotation cost for expanding the training data set. The proposed method allows an accurate patient-specific analysis of individual muscle shapes in a clinical routine. This would open up various applications including personalization of biomechanical simulation and quantitative evaluation of muscle atrophy.
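
The uncertainty mechanism used here, Monte Carlo dropout, can be sketched generically in PyTorch; this is an illustration of the technique rather than the authors' code, and `model` and `volume` are placeholders:

```python
import torch
import torch.nn as nn

def enable_dropout_only(model: nn.Module):
    """Put the model in eval mode but keep dropout layers stochastic."""
    model.eval()
    for m in model.modules():
        if isinstance(m, (nn.Dropout, nn.Dropout2d, nn.Dropout3d)):
            m.train()

@torch.no_grad()
def mc_dropout_segment(model, volume, n_samples=20):
    """Average n_samples stochastic passes; return labels and a per-voxel
    uncertainty map (predictive entropy of the mean softmax)."""
    enable_dropout_only(model)
    probs = torch.stack([torch.softmax(model(volume), dim=1)
                         for _ in range(n_samples)])
    mean_probs = probs.mean(dim=0)  # shape (B, C, ...)
    entropy = -(mean_probs * mean_probs.clamp_min(1e-8).log()).sum(dim=1)
    return mean_probs.argmax(dim=1), entropy
```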

Automated nuclear segmentation in head and neck squamous cell carcinoma (HNSCC) pathology reveals relationships between cytometric features and ESTIMATE stromal and immune scores

  • Blocker, Stephanie J.
  • Cook, James
  • Everitt, Jeffrey I.
  • Austin, Wyatt M.
  • Watts, Tammara L.
  • Mowery, Yvonne M.
The American Journal of Pathology 2022 Journal Article, cited 0 times
Website
The tumor microenvironment (TME) plays an important role in the progression of head and neck squamous cell carcinoma (HNSCC). Currently, pathological assessment of TME is non-standardized and subject to observer bias. Genome-wide transcriptomic approaches to understanding the TME, while less subject to bias, are expensive and not currently part of standard of care for HNSCC. To identify pathology-based biomarkers that correlate with genomic and transcriptomic signatures of TME in HNSCC, cytometric feature maps were generated in a publicly available cohort of patients with HNSCC with available whole-slide tissue images and genomic and transcriptomic phenotyping (N=49). Cytometric feature maps were generated based on whole-slide nuclear detection, using a deep learning algorithm trained for StarDist nuclear segmentation. Cytometric features were measured for each patient and compared to transcriptomic measurements, including Estimation of STromal and Immune cells in MAlignant Tumor tissues using Expression data (ESTIMATE) scores, as well as stemness scores. When corrected for multiple comparisons, one feature (nuclear circularity) demonstrated a significant linear correlation with ESTIMATE stromal score. Two features (nuclear maximum and minimum diameter) correlated significantly with ESTIMATE immune score. Three features (nuclear solidity, nuclear minimum diameter, and nuclear circularity) correlated significantly with transcriptomic stemness score. This study provides preliminary evidence that observer-independent, automated tissue slide analysis can provide insights into the HNSCC TME which correlate with genomic and transcriptomic assessments.

Automated pancreas segmentation and volumetry using deep neural network on computed tomography

  • Lim, S. H.
  • Kim, Y. J.
  • Park, Y. H.
  • Kim, D.
  • Kim, K. G.
  • Lee, D. H.
2022 Journal Article, cited 0 times
Website
Pancreas segmentation is necessary for observing lesions, analyzing anatomical structures, and predicting patient prognosis. Therefore, various studies have designed segmentation models based on convolutional neural networks for pancreas segmentation. However, the deep learning approach is limited by a lack of data, and studies conducted on large computed tomography datasets are scarce. Therefore, this study aims to perform deep-learning-based semantic segmentation on 1006 participants and to evaluate the automatic segmentation performance of the pancreas via four individual three-dimensional segmentation networks. In this study, we performed internal validation with 1006 patients and external validation using The Cancer Imaging Archive pancreas dataset. We obtained mean precision, recall, and Dice similarity coefficients of 0.869, 0.842, and 0.842, respectively, for internal validation with the best-performing of the four deep learning networks. Using the external dataset, the deep learning network achieved mean precision, recall, and Dice similarity coefficients of 0.779, 0.749, and 0.735, respectively. We expect that generalized deep-learning-based systems can assist clinical decisions by providing accurate pancreatic segmentation and quantitative information about the pancreas from abdominal computed tomography.

Automated pulmonary nodule CT image characterization in lung cancer screening

  • Reeves, Anthony P
  • Xie, Yiting
  • Jirapatnakul, Artit
International Journal of Computer Assisted Radiology and Surgery 2016 Journal Article, cited 19 times
Website

Automated Screening for Abdominal Aortic Aneurysm in CT Scans under Clinical Conditions Using Deep Learning

  • Golla, A. K.
  • Tonnes, C.
  • Russ, T.
  • Bauer, D. F.
  • Froelich, M. F.
  • Diehl, S. J.
  • Schoenberg, S. O.
  • Keese, M.
  • Schad, L. R.
  • Zollner, F. G.
  • Rink, J. S.
Diagnostics (Basel) 2021 Journal Article, cited 0 times
Website
Abdominal aortic aneurysms (AAA) may remain clinically silent until they enlarge and patients present with a potentially lethal rupture. This necessitates early detection and elective treatment. The goal of this study was to develop an easy-to-train algorithm which is capable of automated AAA screening in CT scans and can be applied in an intra-hospital environment. Three deep convolutional neural networks (ResNet, VGG-16, and AlexNet) were adapted for 3D classification and applied to a dataset consisting of 187 heterogeneous CT scans. The 3D ResNet outperformed both other networks. Across the five folds of the first training dataset it achieved an accuracy of 0.856 and an area under the curve (AUC) of 0.926. Subsequently, the algorithm's performance was verified on a second dataset containing 106 scans, where it ran fully automated and achieved an accuracy of 0.953 and an AUC of 0.971. A layer-wise relevance propagation (LRP) made the decision process interpretable and showed that the network correctly focused on the aortic lumen. In conclusion, the deep learning-based screening proved to be robust and showed high performance even on a heterogeneous multi-center dataset. Integration into the hospital workflow and its effect on aneurysm management would be an exciting topic of future research.

An Automated Segmentation Method for Lung Parenchyma Image Sequences Based on Fractal Geometry and Convex Hull Algorithm

  • Xiao, Xiaojiao
  • Zhao, Juanjuan
  • Qiang, Yan
  • Wang, Hua
  • Xiao, Yingze
  • Zhang, Xiaolong
  • Zhang, Yudong
Applied Sciences 2018 Journal Article, cited 1 times
Website

Automated Segmentation of Hyperintense Regions in FLAIR MRI Using Deep Learning

  • Korfiatis, Panagiotis
  • Kline, Timothy L
  • Erickson, Bradley J
Tomography 2016 Journal Article, cited 16 times
Website
We present a deep convolutional neural network application based on autoencoders aimed at segmentation of increased signal regions in fluid-attenuated inversion recovery magnetic resonance imaging images. The convolutional autoencoders were trained on the publicly available Brain Tumor Image Segmentation Benchmark (BRATS) data set, and the accuracy was evaluated on a data set where 3 expert segmentations were available. The simultaneous truth and performance level estimation (STAPLE) algorithm was used to provide the ground truth for comparison, and Dice coefficient, Jaccard coefficient, true positive fraction, and false negative fraction were calculated. The proposed technique was within the interobserver variability with respect to Dice, Jaccard, and true positive fraction. The developed method can be used to produce automatic segmentations of tumor regions corresponding to signal-increased fluid-attenuated inversion recovery regions.
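
The Dice and Jaccard coefficients used in the evaluation above are simple overlap ratios between binary masks; a minimal numpy sketch:

```python
import numpy as np

def dice_jaccard(pred, truth):
    """Overlap metrics for two binary masks of the same shape (0/1 or bool)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    union = np.logical_or(pred, truth).sum()
    if total == 0:  # both masks empty: define as perfect agreement
        return 1.0, 1.0
    return 2.0 * inter / total, inter / max(union, 1)
```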

Automated Segmentation of Prostate MR Images Using Prior Knowledge Enhanced Random Walker

  • Li, Ang
  • Li, Changyang
  • Wang, Xiuying
  • Eberl, Stefan
  • Feng, David Dagan
  • Fulham, Michael
2013 Conference Proceedings, cited 9 times
Website

Automated segmentation refinement of small lung nodules in CT scans by local shape analysis

  • Diciotti, Stefano
  • Lombardo, Simone
  • Falchini, Massimo
  • Picozzi, Giulia
  • Mascalchi, Mario
IEEE Trans Biomed Eng 2011 Journal Article, cited 68 times
Website
One of the most important problems in the segmentation of lung nodules in CT imaging arises from possible attachments between nodules and other lung structures, such as vessels or pleura. In this report, we address the problem of vessel attachments by proposing an automated correction method applied to an initial rough segmentation of the lung nodule. The method is based on a local shape analysis of the initial segmentation making use of 3-D geodesic distance map representations. The correction method has the advantage that it locally refines the nodule segmentation along recognized vessel attachments only, without modifying the nodule boundary elsewhere. The method was tested using a simple initial rough segmentation, obtained by fixed image thresholding. The validation of the complete segmentation algorithm was carried out on small lung nodules identified in the ITALUNG screening trial and on small nodules of the Lung Image Database Consortium (LIDC) dataset. In fully automated mode, 217/256 (84.8%) lung nodules of ITALUNG and 139/157 (88.5%) individual marks of lung nodules of LIDC were correctly outlined, and excellent reproducibility was also observed. By using an additional interactive mode, based on a controlled manual interaction, 233/256 (91.0%) lung nodules of ITALUNG and 144/157 (91.7%) individual marks of lung nodules of LIDC were overall correctly segmented. The proposed correction method could also be usefully applied to any existing nodule segmentation algorithm to improve the segmentation quality of juxta-vascular nodules.

An automated slice sorting technique for multi-slice computed tomography liver cancer images using convolutional network

  • Kaur, Amandeep
  • Chauhan, Ajay Pal Singh
  • Aggarwal, Ashwani Kumar
Expert Systems with Applications 2021 Journal Article, cited 1 times
Website
Early detection and diagnosis of liver cancer can help the radiation therapist in choosing the target area and the amount of radiation dose to be delivered to the patient. Radiologists usually spend a lot of time selecting the most relevant slices from thousands of scans, which are usually obtained from multi-slice CT scanners. The purpose of this paper is the multi-organ classification of 3D CT images of liver cancer suspected patients by a convolutional network. A dataset consisting of 63503 CT images of liver cancer patients taken from The Cancer Imaging Archive (TCIA) has been used to validate the proposed method, a CNN for the classification of CT liver cancer images. The classification results in terms of accuracy, precision, sensitivity, specificity, true positive rate, false negative rate, and F1 score have been computed. The results show a high validation accuracy of 99.1% when the convolutional network is trained with data-augmented volume slices, compared to an accuracy of 98.7% obtained with the original volume slices. The overall test accuracy of 93.1% for the data-augmented volume slice dataset is likewise superior to that of the original slices. The main contribution of this work is that it will help the radiation therapist focus on a small subset of the CT image data. This is achieved by segregating the whole set of 63503 CT images into three categories based on the likelihood of the spread of cancer to other organs in liver cancer suspected patients. Consequently, only 19453 CT images had the liver visible in them, making the remaining 44050 CT images less relevant for liver cancer detection. The proposed method will help in the rapid diagnosis and treatment of liver cancer patients.

Automated Systems of High-Productive Identification of Image Objects by Geometric Features

  • Poplavskyi, Oleksandr
  • Bondar, Olena
  • Pavlov, Sergiy
  • Poplavska, Anna
Applied Geometry and Engineering Graphics 2020 Journal Article, cited 0 times
The article substantiates the feasibility and practical value of using a specific simulation modeling methodology, which provides for digital processing and draws on the mathematical essence of neural network technology. A brain tumor is a serious disease, and the number of people who die from brain tumors, despite significant progress in treatment, remains considerable. This research presents in detail the developed algorithm for high-performance identification of objects (early detection and identification of tumors) in MRI images by geometric features. This algorithm, based on image pre-processing, analyzes the data array using a convolutional neural network (CNN) and recognizes pathologies in the images. The obtained algorithm is a step towards the creation of autonomous automatic identification and decision-making systems for the diagnosis of malignant tumors and other neoplasms in the brain by geometric features.

Automatic 3D Mesh-Based Centerline Extraction from a Tubular Geometry Form

  • Yahya-Zoubir, Bahia
  • Hamami, Latifa
  • Saadaoui, Llies
  • Ouared, Rafik
Information Technology And Control 2016 Journal Article, cited 0 times
Website

Automatic assessment of glioma burden: A deep learning algorithm for fully automated volumetric and bi-dimensional measurement

  • Chang, Ken
  • Beers, Andrew L
  • Bai, Harrison X
  • Brown, James M
  • Ly, K Ina
  • Li, Xuejun
  • Senders, Joeky T
  • Kavouridis, Vasileios K
  • Boaro, Alessandro
  • Su, Chang
  • Bi, Wenya Linda
  • Rapalino, Otto
  • Liao, Weihua
  • Shen, Qin
  • Zhou, Hao
  • Xiao, Bo
  • Wang, Yinyan
  • Zhang, Paul J
  • Pinho, Marco C
  • Wen, Patrick Y
  • Batchelor, Tracy T
  • Boxerman, Jerrold L
  • Arnaout, Omar
  • Rosen, Bruce R
  • Gerstner, Elizabeth R
  • Yang, Li
  • Huang, Raymond Y
  • Kalpathy-Cramer, Jayashree
2019 Journal Article, cited 0 times
Website
BACKGROUND: Longitudinal measurement of glioma burden with MRI is the basis for treatment response assessment. In this study, we developed a deep learning algorithm that automatically segments abnormal FLAIR hyperintensity and contrast-enhancing tumor, quantitating tumor volumes as well as the product of maximum bi-dimensional diameters according to the Response Assessment in Neuro-Oncology (RANO) criteria (AutoRANO). METHODS: Two cohorts of patients were used for this study. One consisted of 843 pre-operative MRIs from 843 patients with low- or high-grade gliomas from four institutions, and the second consisted of 713 longitudinal, post-operative MRI visits from 54 patients with newly diagnosed glioblastomas (each with two pre-treatment "baseline" MRIs) from one institution. RESULTS: The automatically generated FLAIR hyperintensity volume, contrast-enhancing tumor volume, and AutoRANO were highly repeatable for the double-baseline visits, with intraclass correlation coefficients (ICC) of 0.986, 0.991, and 0.977, respectively, on the cohort of post-operative GBM patients. Furthermore, there was high agreement between manually and automatically measured tumor volumes, with ICC values of 0.915, 0.924, and 0.965 for pre-operative FLAIR hyperintensity, post-operative FLAIR hyperintensity, and post-operative contrast-enhancing tumor volumes, respectively. Lastly, the ICC for comparing manually and automatically derived longitudinal changes in tumor burden was 0.917, 0.966, and 0.850 for FLAIR hyperintensity volume, contrast-enhancing tumor volume, and RANO measures, respectively. CONCLUSIONS: Our automated algorithm demonstrates potential utility for evaluating tumor burden in complex post-treatment settings, although further validation in multi-center clinical trials will be needed prior to widespread implementation.

Automatic Classification of Brain MRI Images Using SVM and Neural Network Classifiers

  • Natteshan, NVS
  • Jothi, J Angel Arul
2015 Conference Paper, cited 8 times
Website
Computer Aided Diagnosis (CAD) is a technique where diagnosis is performed in an automatic way. This work has developed a CAD system for automatically classifying a given brain Magnetic Resonance Imaging (MRI) image as 'tumor affected' or 'tumor not affected'. The input image is preprocessed using a Wiener filter and Contrast Limited Adaptive Histogram Equalization (CLAHE). The image is then quantized and aggregated to get reduced image data. The reduced image is then segmented into four regions, namely gray matter, white matter, cerebrospinal fluid, and a high-intensity tumor cluster, using the Fuzzy C-Means (FCM) algorithm. The tumor region is then extracted using the intensity metric. A contour is evolved over the identified tumor region using an Active Contour Model (ACM) to extract the exact tumor segment. Thirty-five features, including Gray Level Co-occurrence Matrix (GLCM) features, Gray Level Run Length Matrix (GLRL) features, statistical features, and shape-based features, are extracted from the tumor region. Neural network and Support Vector Machine (SVM) classifiers are trained using these features. Results indicate that the SVM classifier with a quadratic kernel function performs better than one with a Radial Basis Function (RBF) kernel, and the neural network classifier with fifty hidden nodes performs better than one with twenty-five hidden nodes. It is also evident from the results that the average running time of FCM is lower when used on the reduced image data.

Automatic Classification of Brain Tumor Types with the MRI Scans and Histopathology Images

  • Chan, Hsiang-Wei
  • Weng, Yan-Ting
  • Huang, Teng-Yi
2020 Conference Paper, cited 0 times
Website
In this study, we used two neural networks, VGG16 and ResNet50, to extract features from whole slide images. To classify the three types of brain tumors (i.e., glioblastoma, oligodendroglioma, and astrocytoma), we tried several methods, including k-means clustering and random forest classification. In the prediction stage, we compared the prediction results with and without MRI features. The results show that the classification method using image features extracted by VGG16 has the highest prediction accuracy. Moreover, we found that combining radiomics generated from MR images slightly improved the accuracy of the classification.

Automatic classification of lung nodule candidates based on a novel 3D convolution network and knowledge transferred from a 2D network

  • Zuo, Wangxia
  • Zhou, Fuqiang
  • He, Yuzhu
  • Li, Xiaosong
Med Phys 2019 Journal Article, cited 0 times
Website
OBJECTIVE: In the automatic lung nodule detection system, the authenticity of a large number of nodule candidates needs to be judged, which is a classification task. However, the variable shapes and sizes of the lung nodules have posed a great challenge to the classification of candidates. To solve this problem, we propose a method for classifying nodule candidates through three-dimensional (3D) convolution neural network (ConvNet) model which is trained by transferring knowledge from a multiresolution two-dimensional (2D) ConvNet model. METHODS: In this scheme, a novel 3D ConvNet model is preweighted with the weights of the trained 2D ConvNet model, and then the 3D ConvNet model is trained with 3D image volumes. In this way, the knowledge transfer method can make 3D network easier to converge and make full use of the spatial information of nodules with different sizes and shapes to improve the classification accuracy. RESULTS: The experimental results on 551 065 pulmonary nodule candidates in the LUNA16 dataset show that our method gains a competitive average score in the false-positive reduction track in lung nodule detection, with the sensitivities of 0.619 and 0.642 at 0.125 and 0.25 FPs per scan, respectively. CONCLUSIONS: The proposed method can maintain satisfactory classification accuracy even when the false-positive rate is extremely small in the face of nodules of different sizes and shapes. Moreover, as a transfer learning idea, the method to transfer knowledge from 2D ConvNet to 3D ConvNet is the first attempt to carry out full migration of parameters of various layers including convolution layers, full connection layers, and classifier between different dimensional models, which is more conducive to utilizing the existing 2D ConvNet resources and generalizing transfer learning schemes.
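
The paper's exact layer-by-layer transfer is its own; a commonly used, closely related recipe for preweighting a 3D ConvNet from 2D weights is I3D-style kernel inflation, sketched below in PyTorch as an assumption-laden illustration:

```python
import torch
import torch.nn as nn

def inflate_conv2d_to_3d(conv2d: nn.Conv2d, depth: int) -> nn.Conv3d:
    """I3D-style inflation: tile a 2D kernel along the new depth axis and
    divide by `depth` so activations keep a similar scale."""
    conv3d = nn.Conv3d(conv2d.in_channels, conv2d.out_channels,
                       kernel_size=(depth, *conv2d.kernel_size),
                       stride=(1, *conv2d.stride),
                       padding=(depth // 2, *conv2d.padding),
                       bias=conv2d.bias is not None)
    with torch.no_grad():
        w2d = conv2d.weight  # shape (out, in, kH, kW)
        conv3d.weight.copy_(w2d.unsqueeze(2).repeat(1, 1, depth, 1, 1) / depth)
        if conv2d.bias is not None:
            conv3d.bias.copy_(conv2d.bias)
    return conv3d
```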

Automatic Classification of Normal and Cancer Lung CT Images Using Multiscale AM-FM Features

  • Magdy, Eman
  • Zayed, Nourhan
  • Fakhr, Mahmoud
International Journal of Biomedical Imaging 2015 Journal Article, cited 6 times
Website

Automatic classification of solitary pulmonary nodules in PET/CT imaging employing transfer learning techniques

  • Apostolopoulos, Ioannis D
  • Pintelas, Emmanuel G
  • Livieris, Ioannis E
  • Apostolopoulos, Dimitris J
  • Papathanasiou, Nikolaos D
  • Pintelas, Panagiotis E
  • Panayiotakis, George S
2021 Journal Article, cited 0 times
Website

Automatic Colorectal Segmentation with Convolutional Neural Network

  • Guachi, Lorena
  • Guachi, Robinson
  • Bini, Fabiano
  • Marinozzi, Franco
Computer-Aided Design and Applications 2019 Journal Article, cited 3 times
Website
This paper presents a new method for colon tissue segmentation in Computed Tomography images which takes advantage of deep, hierarchical learning of colon features through Convolutional Neural Networks (CNN). The proposed method works robustly, reducing misclassified colon tissue pixels introduced by the presence of noise, artifacts, unclear edges, and other organs or areas characterized by the same intensity value as the colon. Patch analysis is exploited to allow the classification of each center pixel as a colon tissue or background pixel. Experimental results demonstrate the proposed method achieves higher effectiveness in terms of sensitivity and specificity with respect to three state-of-the-art methods.

Automatic Design of Window Operators for the Segmentation of the Prostate Gland in Magnetic Resonance Images

  • Benalcázar, Marco E
  • Brun, Marcel
  • Ballarin, Virginia
2015 Conference Proceedings, cited 0 times
Website

Automatic Detection and Segmentation of Colorectal Cancer with Deep Residual Convolutional Neural Network

  • Akilandeswari, A.
  • Sungeetha, D.
  • Joseph, C.
  • Thaiyalnayaki, K.
  • Baskaran, K.
  • Jothi Ramalingam, R.
  • Al-Lohedan, H.
  • Al-Dhayan, D. M.
  • Karnan, M.
  • Meansbo Hadish, K.
Evid Based Complement Alternat Med 2022 Journal Article, cited 0 times
Website
Early and automatic detection of colorectal tumors is essential for cancer analysis, and the same is implemented using computer-aided diagnosis (CAD). A computerized tomography (CT) image of the colon is used to identify colorectal carcinoma. Digital imaging and communications in medicine (DICOM) is a standard medical imaging format used to process and analyze images digitally. Accurate detection of tumor cells in the complex digestive tract is necessary for optimal treatment. The proposed work is divided into two phases. The first phase involves the segmentation, and the second phase is the extraction of the colon lesions with the observed segmentation parameters. A deep convolutional neural network (DCNN) based residual network approach is applied for colon and polyp segmentation over the 2D CT images. Residual stack blocks with short skip connections are added to the hidden layers, which helps to retain spatial information. A ResNet-enabled CNN is employed in the current work to achieve complete boundary segmentation of the colon cancer region. The results obtained through segmentation serve as features for further extraction and classification of benign as well as malignant colon cancer. Performance evaluation metrics indicate that the proposed network model has effectively segmented and classified colorectal tumors with Dice scores of 91.57% (on average), sensitivity = 98.28, specificity = 98.68, and accuracy = 98.82.
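
The residual blocks with short skip connections described above follow the standard ResNet pattern; a minimal PyTorch sketch (a generic block, not the authors' architecture):

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Two conv-BN stages with an identity shortcut (short skip connection)."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # The skip path adds the input back, helping retain spatial detail.
        return self.relu(self.body(x) + x)
```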

Automatic Detection of Lung Nodules Using 3D Deep Convolutional Neural Networks

  • Fu, Ling
  • Ma, Jingchen
  • Chen, Yizhi
  • Larsson, Rasmus
  • Zhao, Jun
Journal of Shanghai Jiaotong University (Science) 2019 Journal Article, cited 0 times
Website
Lung cancer is the leading cause of cancer deaths worldwide. Accurate early diagnosis is critical in increasing the 5-year survival rate of lung cancer, so the efficient and accurate detection of lung nodules, the potential precursors to lung cancer, is paramount. In this paper, a computer-aided lung nodule detection system using 3D deep convolutional neural networks (CNNs) is developed. First, a multi-scale 11-layer 3D fully convolutional network (FCN) is used for screening all lung nodule candidates. Considering the relatively small sizes of lung nodules and limited memory, the input of the FCN consists of 3D image patches rather than whole images. The candidates are further classified by a second CNN to get the final result. The proposed method achieves high performance in the LUNA16 challenge and demonstrates the effectiveness of using 3D deep CNNs for lung nodule detection.

Automatic detection of pulmonary nodules in CT images by incorporating 3D tensor filtering with local image feature analysis

  • Gong, J.
  • Liu, J. Y.
  • Wang, L. J.
  • Sun, X. W.
  • Zheng, B.
  • Nie, S. D.
Physica Medica 2018 Journal Article, cited 4 times
Website

Automatic detection of spiculation of pulmonary nodules in computed tomography images

  • Ciompi, F
  • Jacobs, C
  • Scholten, ET
  • van Riel, SJ
  • Wille, MMW
  • Prokop, M
  • van Ginneken, B
2015 Conference Proceedings, cited 5 times
Website

Automatic Electronic Cleansing in Computed Tomography Colonography Images using Domain Knowledge

  • Manjunath, KN
  • Siddalingaswamy, PC
  • Prabhu, GK
Asian Pacific Journal of Cancer Prevention 2015 Journal Article, cited 0 times

Automatic estimation of the aortic lumen geometry by ellipse tracking

  • Tahoces, Pablo G
  • Alvarez, Luis
  • González, Esther
  • Cuenca, Carmelo
  • Trujillo, Agustín
  • Santana-Cedrés, Daniel
  • Esclarín, Julio
  • Gomez, Luis
  • Mazorra, Luis
  • Alemán-Flores, Miguel
International Journal of Computer Assisted Radiology and Surgery 2019 Journal Article, cited 0 times

Automatic fissure detection in CT images based on the genetic algorithm

  • Tseng, Lin-Yu
  • Huang, Li-Chin
2010 Conference Proceedings, cited 5 times
Website
Lung cancer is one of the most frequently occurring cancers and has a very low five-year survival rate. Computer-aided diagnosis (CAD) helps reduce the burden on radiologists and improve the accuracy of abnormality detection during CT image interpretation. Owing to the rapid development of scanner technology, the volume of medical imaging data is growing ever larger. Automated segmentation of the target organ region is always required by CAD systems. Although the analysis of lung fissures provides important information for treatment, it is still a challenge to extract fissures automatically based on CT values because the appearance of lung fissures is very fuzzy and indefinite. Since the oblique fissures can be visualized more easily than other fissures on chest CT images, they are used to check the exact localization of lesions. In this paper, we propose a fully automatic fissure detection method based on the genetic algorithm to identify the oblique fissures. The accuracy rates of identifying the oblique fissures in the right lung and the left lung are 97% and 86%, respectively, when the method was tested on 87 slices.

Automatic glioma segmentation based on adaptive superpixel

  • Wu, Yaping
  • Zhao, Zhe
  • Wu, Weiguo
  • Lin, Yusong
  • Wang, Meiyun
BMC Med Imaging 2019 Journal Article, cited 0 times
Website
BACKGROUND: Automatic glioma segmentation is of great significance for clinical practice. This study aims to propose an automatic method based on superpixels for glioma segmentation from T2-weighted Magnetic Resonance Imaging. METHODS: The proposed method mainly includes three steps. First, we propose an adaptive superpixel generation algorithm based on the zero-parameter version of simple linear iterative clustering (ASLIC0). This algorithm can produce a superpixel image with fewer superpixels that better fits the boundary of the region of interest (ROI) by automatically selecting the optimal number of superpixels. Second, we compose a training set by calculating statistical, texture, curvature, and fractal features for each superpixel. Third, a Support Vector Machine (SVM) is used to train a classification model based on the features of the second step. RESULTS: The experimental results on the Multimodal Brain Tumor Image Segmentation Benchmark 2017 (BraTS2017) show that the proposed method has good segmentation performance. The average Dice, Hausdorff distance, sensitivity, and specificity for the segmented tumor against the ground truth are 0.8492, 3.4697 pixels, 81.47%, and 99.64%, respectively. The proposed method shows good stability on high- and low-grade glioma samples. Comparative experimental results show that the proposed method has superior performance. CONCLUSIONS: This provides a close match to expert delineation across all grades of glioma, leading to a fast and reproducible method of glioma segmentation.
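
SLIC0 superpixel generation is available off the shelf in scikit-image; the sketch below pairs it with one plausible adaptive stopping rule (within-superpixel variance), since the paper's exact selection criterion is not spelled out here:

```python
import numpy as np
from skimage import segmentation

def adaptive_slic0(image, start_segments=200, max_segments=2000, step=200):
    """Increase the superpixel count until the mean within-superpixel
    intensity variance stops improving (a heuristic stand-in for the
    paper's adaptive selection)."""
    prev_var = np.inf
    labels = None
    for n in range(start_segments, max_segments + 1, step):
        labels = segmentation.slic(image, n_segments=n, slic_zero=True,
                                   channel_axis=None)  # scikit-image >= 0.19
        var = np.mean([image[labels == l].var() for l in np.unique(labels)])
        if prev_var - var < 1e-4:  # variance no longer improving
            return labels
        prev_var = var
    return labels
```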

Automatic GPU memory management for large neural models in TensorFlow

  • Le, Tung D.
  • Imai, Haruki
  • Negishi, Yasushi
  • Kawachiya, Kiyokuni
2019 Conference Proceedings, cited 0 times
Website
Deep learning models are becoming larger and will not fit in the limited memory of accelerators such as GPUs for training. Though many methods have been proposed to solve this problem, they are rather ad-hoc in nature and difficult to extend and integrate with other techniques. In this paper, we tackle the problem in a formal way to provide a strong foundation for supporting large models. We propose a method of formally rewriting the computational graph of a model where swap-out and swap-in operations are inserted to temporarily store intermediate results on CPU memory. By introducing a categorized topological ordering for simulating graph execution, the memory consumption of a model can be easily analyzed by using operation distances in the ordering. As a result, the problem of fitting a large model into a memory-limited accelerator is reduced to the problem of reducing operation distances in a categorized topological ordering. We then show how to formally derive swap-out and swap-in operations from an existing graph and present rules to optimize the graph. Finally, we propose a simulation-based auto-tuning to automatically find suitable graph-rewriting parameters for the best performance. We developed a module in TensorFlow, called LMS, by which we successfully trained ResNet-50 with a 4.9x larger mini-batch size and 3D U-Net with a 5.6x larger image resolution.
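
The notion of operation distance in a topological ordering can be illustrated with a toy DAG; this Python sketch uses Kahn's algorithm and hypothetical op names, and is not the LMS implementation:

```python
from collections import deque

def topological_positions(graph):
    """Kahn's algorithm over a DAG given as {op: [consumer_ops]}.
    Returns each op's position in one topological ordering."""
    indeg = {u: 0 for u in graph}
    for u in graph:
        for v in graph[u]:
            indeg[v] = indeg.get(v, 0) + 1
    queue = deque(u for u in indeg if indeg[u] == 0)
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in graph.get(u, ()):
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return {op: i for i, op in enumerate(order)}

# Operation distance between a producer and its consumer indicates how long
# the producer's output must stay resident; large distances are natural
# candidates for swap-out/swap-in.
graph = {"conv1": ["relu1"], "relu1": ["conv2", "add"],
         "conv2": ["add"], "add": []}
pos = topological_positions(graph)
dist = pos["add"] - pos["relu1"]  # relu1's output is live for `dist` steps
```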

Automatic intensity windowing of mammographic images based on a perceptual metric

  • Albiol, Alberto
  • Corbi, Alberto
  • Albiol, Francisco
Medical Physics 2017 Journal Article, cited 0 times
Website
PURPOSE: Initial auto-adjustment of the window level WL and width WW applied to mammographic images. The proposed intensity windowing (IW) method is based on the maximization of the mutual information (MI) between a perceptual decomposition of the original 12-bit sources and their screen displayed 8-bit version. Besides zoom, color inversion and panning operations, IW is the most commonly performed task in daily screening and has a direct impact on diagnosis and the time involved in the process. METHODS: The authors present a human visual system and perception-based algorithm named GRAIL (Gabor-relying adjustment of image levels). GRAIL initially measures a mammogram's quality based on the MI between the original instance and its Gabor-filtered derivations. From this point on, the algorithm performs an automatic intensity windowing process that outputs the WL/WW that best displays each mammogram for screening. GRAIL starts with the default, high contrast, wide dynamic range 12-bit data, and then maximizes the graphical information presented in ordinary 8-bit displays. Tests have been carried out with several mammogram databases. They comprise correlations and an ANOVA analysis with the manual IW levels established by a group of radiologists. A complete MATLAB implementation of GRAIL is available at https://github.com/TheAnswerIsFortyTwo/GRAIL. RESULTS: Auto-leveled images show superior quality both perceptually and objectively compared to their full intensity range and compared to the application of other common methods like global contrast stretching (GCS). The correlations between the human determined intensity values and the ones estimated by our method surpass that of GCS. The ANOVA analysis with the upper intensity thresholds also reveals a similar outcome. GRAIL has also proven to specially perform better with images that contain micro-calcifications and/or foreign X-ray-opaque elements and with healthy BI-RADS A-type mammograms. It can also speed up the initial screening time by a mean of 4.5 s per image. CONCLUSIONS: A novel methodology is introduced that enables a quality-driven balancing of the WL/WW of mammographic images. This correction seeks the representation that maximizes the amount of graphical information contained in each image. The presented technique can contribute to the diagnosis and the overall efficiency of the breast screening session by suggesting, at the beginning, an optimal and customized windowing setting for each mammogram.
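
The WL/WW display mapping that GRAIL optimizes can be sketched as follows; GRAIL's MI-based search itself is more involved, so this minimal numpy sketch only shows the 12-bit-to-8-bit windowing step:

```python
import numpy as np

def apply_window(img12, level, width):
    """Map a 12-bit image to 8 bits with a given window level/width (WL/WW).
    Intensities outside [WL - WW/2, WL + WW/2] are clipped."""
    lo = level - width / 2.0
    hi = level + width / 2.0
    out = (img12.astype(np.float32) - lo) / max(hi - lo, 1.0) * 255.0
    return np.clip(out, 0, 255).astype(np.uint8)
```

A quality-driven auto-windowing scheme like the one described would then search over (level, width) pairs, scoring each displayed result with its perceptual criterion.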

Automatic interstitial photodynamic therapy planning via convex optimization

  • Yassine, Abdul-Amir
  • Kingsford, William
  • Xu, Yiwen
  • Cassidy, Jeffrey
  • Lilge, Lothar
  • Betz, Vaughn
Biomedical Optics Express 2018 Journal Article, cited 3 times
Website

Automatic Kidney Segmentation, Reconstruction, Preoperative Planning, and 3D Printing

  • Zagkou, Spyridoula
2021 Thesis, cited 0 times
Website
Renal cancer is the seventh most prevalent cancer among men and the tenth most frequent cancer among women, accounting for 5% and 3% of all adult malignancies, respectively. Kidney cancer is increasing dramatically in developing countries due to inadequate living conditions, and in developed countries due to unhealthy lifestyles, smoking, obesity, and hypertension. For decades, radical nephrectomy (RN) was the standard method to address the high incidence of kidney cancer. However, the utilization of minimally invasive partial nephrectomy (PN) for the treatment of localized small renal masses has increased with the advent of laparoscopic and robotic-assisted procedures. In this framework, certain factors must be considered in the surgical planning and decision-making of partial nephrectomies, such as the morphology and location of the tumor. Advanced technologies such as automatic image segmentation, image and surface reconstruction, and 3D printing have been developed to assess the tumor anatomy before surgery and its relationship to surrounding structures, such as the arteriovenous system, with the aim of preventing damage. Overall, 3D printed anatomical kidney models are very useful to urologists, surgeons, and researchers as a reference point for preoperative planning and intraoperative visualization, enabling more efficient treatment and a high standard of care. Furthermore, they can provide considerable benefits in education, in patient counseling, and in delivering therapeutic methods customized to the needs of each individual patient. In this context, the fundamental objective of this thesis is to provide an analytical and general pipeline for the generation of a renal 3D printed model from CT images. In addition, methods are proposed to enhance preoperative planning and help surgeons prepare the surgical procedure with increased accuracy so as to improve their performance. Keywords: Medical Image, Computed Tomography (CT), Semantic Segmentation, Convolutional Neural Networks (CNNs), Surface Reconstruction, Mesh Processing, 3D Printing of Kidney, Operative assistance
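
One step of the pipeline described above, surface reconstruction of a segmented kidney for 3D printing, can be sketched with marching cubes; this assumes scikit-image and trimesh are available, and the mask, spacing, and output path are hypothetical inputs:

```python
import numpy as np
from skimage import measure
import trimesh

def mask_to_stl(mask, spacing, out_path="kidney.stl"):
    """Surface-reconstruct a binary kidney mask and export a printable mesh.
    `spacing` is the CT voxel size in mm, e.g. (2.5, 0.8, 0.8)."""
    verts, faces, _, _ = measure.marching_cubes(mask.astype(np.float32),
                                                level=0.5, spacing=spacing)
    mesh = trimesh.Trimesh(vertices=verts, faces=faces)
    # Light smoothing removes voxel staircase artifacts before printing.
    mesh = trimesh.smoothing.filter_laplacian(mesh, iterations=10)
    mesh.export(out_path)
    return mesh
```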

Automatic localization of target vertebrae in spine surgery using fast CT-to-fluoroscopy (3D-2D) image registration

  • Otake, Y
  • Schafer, S
  • Stayman, JW
  • Zbijewski, W
  • Kleinszig, G
  • Graumann, R
  • Khanna, AJ
  • Siewerdsen, JH
2012 Conference Proceedings, cited 8 times
Website

Automatic lung nodule classification with radiomics approach

  • Ma, Jingchen
  • Wang, Qian
  • Ren, Yacheng
  • Hu, Haibo
  • Zhao, Jun
2016 Conference Proceedings, cited 10 times
Website
Lung cancer is the leading cause of cancer death. Malignant lung nodules carry extremely high mortality, while some benign nodules need no treatment at all. Thus, accurate discrimination between benign and malignant nodules is necessary. Notably, although an additional invasive biopsy or a second CT scan 3 months later may currently help radiologists make judgments, easier diagnostic approaches are urgently needed. In this paper, we propose a novel CAD method to distinguish benign from malignant lung cancer directly from CT images, which can not only improve the efficiency of tumor diagnosis but also greatly decrease the pain and risk patients face in the biopsy collection process. Briefly, following the state-of-the-art radiomics approach, 583 features were used in the first step to measure nodule intensity, shape, heterogeneity and multi-frequency information. Further, with the Random Forest method, we distinguish benign from malignant nodules by analyzing all these features. Notably, our proposed scheme was tested on all 79 CT scans with diagnosis data available in The Cancer Imaging Archive (TCIA), which contain 127 nodules, each annotated by at least one of four radiologists participating in the project. Satisfactorily, this method achieved 82.7% accuracy in the classification of malignant primary lung nodules versus benign nodules. We believe it would bring much value to routine lung cancer diagnosis in CT imaging and provide improved decision support at much lower cost.
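The classification step of such a radiomics pipeline is straightforward to sketch with scikit-learn. The snippet below assumes a precomputed feature matrix (583 radiomic features per nodule in the paper; random stand-ins here) and shows only the Random Forest stage, not the feature extraction itself:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# X: one row of radiomic features per nodule (583 in the paper),
# y: 1 = malignant, 0 = benign. Synthetic stand-ins for illustration.
rng = np.random.default_rng(0)
X = rng.normal(size=(127, 583))
y = rng.integers(0, 2, size=127)

clf = RandomForestClassifier(n_estimators=500, random_state=0)
print(cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean())
```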

Automatic lung nodule detection using multi-scale dot nodule-enhancement filter and weighted support vector machines in chest computed tomography

  • Gu, Y.
  • Lu, X.
  • Zhang, B.
  • Zhao, Y.
  • Yu, D.
  • Gao, L.
  • Cui, G.
  • Wu, L.
  • Zhou, T.
PLoS One 2019 Journal Article, cited 0 times
Website
A novel CAD scheme for automated lung nodule detection is proposed to assist radiologists with the detection of lung cancer on CT scans. The proposed scheme is composed of four major steps: (1) lung volume segmentation, (2) nodule candidate extraction and grouping, (3) false positive reduction for the non-vessel tree group, and (4) classification for the vessel tree group. Lung segmentation is performed first. Then, 3D labeling technology is used to divide nodule candidates into two groups. For the non-vessel tree group, nodule candidates are classified as true nodules at the false positive reduction stage if the candidates survive the rule-based classifier and are not screened out by the dot filter. For the vessel tree group, nodule candidates are extracted using the dot filter. Next, RSFS feature selection is used to select the most discriminating features for classification. Finally, WSVM with an undersampling approach is adopted to discriminate true nodules from vessel bifurcations in the vessel tree group. The proposed method was evaluated on 154 thin-slice scans with 204 nodules in the LIDC database. The performance of the proposed CAD scheme yielded a high sensitivity (87.81%) while maintaining a low false positive rate (1.057 FPs/scan). The experimental results indicate that the performance of our method may be better than that of existing methods.

Automatic Lung Segmentation and Lung Nodule Type Identification over LIDC-IDRI dataset

  • Suji, R. Jenkin
  • Bhadauria, Sarita Singh
Indian Journal of Computer Science and Engineering 2021 Journal Article, cited 0 times
Website
Accurate segmentation of lung parenchyma is one of the basic steps for lung nodule detection and diagnosis. Using thresholding and morphology based methods for lung parenchyma segmentation is challenging due to the homogeneous intensities present in lung images. Further, datasets typically do not contain explicit labels of nodule types, and there is little literature on how to typify nodules into different nodule types, even though identifying nodule types helps to understand and explain the progress and shortcomings of various steps in the computer-aided diagnosis pipeline. Hence, this work also presents methods for the identification of nodule types: juxta-vascular, juxta-pleural and isolated. This work presents thresholding and morphological operation based methods for both lung segmentation and lung nodule type identification. Thresholding and morphology based methods have been chosen over more sophisticated approaches for reasons of simplicity and speed. Qualitative validation of the proposed lung segmentation method is provided via step-by-step output on a scan from the LIDC-IDRI dataset, and the lung nodule type identification method is validated by output volume images. Further, the lung segmentation method is validated by percentage of overlap, and the nodule type identification results for various lung segmentation outputs have been analysed. The provided analysis offers a glimpse into the ability to analyse lung segmentation algorithms and nodule detection and segmentation algorithms in terms of nodule types, and motivates the need to provide nodule type ground-truth information for developing better nodule type classification/identification algorithms. Keywords: Lung Segmentation; Juxta-vascular nodules; Juxta-pleural nodules; Thresholding; Morphological operations.
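The thresholding-plus-morphology idea is easy to prototype with scikit-image. The sketch below is a generic lung-field segmentation outline under assumed Hounsfield-unit input; the paper's exact operations and its nodule-type rules are not reproduced, and all parameters are illustrative:

```python
import numpy as np
from scipy import ndimage
from skimage import filters, morphology
from skimage.segmentation import clear_border

def segment_lungs(ct_slice):
    """Threshold the dark lung air, drop border-connected background air,
    then clean up the mask with morphological operations."""
    binary = ct_slice < filters.threshold_otsu(ct_slice)
    binary = clear_border(binary)                      # remove outside-body air
    binary = morphology.remove_small_objects(binary, min_size=500)
    binary = ndimage.binary_fill_holes(binary)         # re-include vessels/nodules
    return morphology.binary_closing(binary, morphology.disk(5))

ct = np.random.normal(-500, 400, size=(512, 512))      # toy stand-in for a CT slice
mask = segment_lungs(ct)
```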

Automatic Lung Segmentation for the Inclusion of Juxtapleural Nodules and Pulmonary Vessels using Curvature based Border Correction

  • Singadkar, Ganesh
  • Mahajan, Abhishek
  • Thakur, Meenakshi
  • Talbar, Sanjay
Journal of King Saud University-Computer and Information Sciences 2018 Journal Article, cited 1 times
Website

Automatic lung segmentation in low-dose chest CT scans using convolutional deep and wide network (CDWN)

  • Agnes, S Akila
  • Anitha, J
  • Peter, J Dinesh
Neural Computing and Applications 2018 Journal Article, cited 0 times
Website

Automatic mass detection in mammograms using deep convolutional neural networks

  • Agarwal, Richa
  • Diaz, Oliver
  • Lladó, Xavier
  • Yap, Moi Hoon
  • Martí, Robert
Journal of Medical Imaging 2019 Journal Article, cited 0 times
Website
With recent advances in the field of deep learning, the use of convolutional neural networks (CNNs) in medical imaging has become very encouraging. The aim of our paper is to propose a patch-based CNN method for automated mass detection in full-field digital mammograms (FFDM). In addition to evaluating CNNs pretrained with the ImageNet dataset, we investigate the use of transfer learning for a particular domain adaptation. First, the CNN is trained using a large public database of digitized mammograms (CBIS-DDSM dataset), and then the model is transferred and tested onto the smaller database of digital mammograms (INbreast dataset). We evaluate three widely used CNNs (VGG16, ResNet50, InceptionV3) and show that the InceptionV3 obtains the best performance for classifying the mass and nonmass breast region for CBIS-DDSM. We further show the benefit of domain adaptation between the CBIS-DDSM (digitized) and INbreast (digital) datasets using the InceptionV3 CNN. Mass detection evaluation follows a fivefold cross-validation strategy using free-response operating characteristic curves. Results show that the transfer learning from CBIS-DDSM obtains a substantially higher performance with the best true positive rate (TPR) of 0.98 ± 0.02 at 1.67 false positives per image (FPI), compared with transfer learning from ImageNet with TPR of 0.91 ± 0.07 at 2.1 FPI. In addition, the proposed framework improves upon mass detection results described in the literature on the INbreast database, in terms of both TPR and FPI.
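A minimal tf.keras sketch of the patch-based transfer-learning setup is shown below: an ImageNet-pretrained InceptionV3 backbone with a new binary mass/non-mass head. The dataset handles are placeholders, and the training details follow the usual fine-tuning recipe rather than the paper's exact configuration:

```python
import tensorflow as tf

# ImageNet-pretrained InceptionV3 backbone, new binary head for
# mass vs. non-mass mammogram patches.
base = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, input_shape=(299, 299, 3))
x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
out = tf.keras.layers.Dense(1, activation="sigmoid")(x)
model = tf.keras.Model(base.input, out)

model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(cbis_ddsm_patches, labels, ...)   # hypothetical dataset handles;
# the tuned model would then be adapted and evaluated on INbreast.
```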

Automatic MRI Breast tumor Detection using Discrete Wavelet Transform and Support Vector Machines

  • Ibraheem, Amira Mofreh
  • Rahouma, Kamel Hussein
  • Hamed, Hesham F. A.
2019 Conference Paper, cited 0 times
Website
Every human has the right to live a healthy life free of serious diseases. Cancer is the most serious disease facing humans and can lead to death, so definitive solutions are needed to eliminate these diseases and protect humans from them. Breast cancer is considered one of the most dangerous cancers facing women in particular. Early examination should be performed periodically, and diagnosis must be sensitive and effective to preserve women's lives. There are various types of breast cancer images, but magnetic resonance imaging (MRI) has become one of the important modalities in breast cancer detection. In this work, a new method is developed to detect breast cancer using MRI images preprocessed with a 2D median filter. Features are extracted from the images using the discrete wavelet transform (DWT) and reduced to 13 features. Then, a support vector machine (SVM) is used to detect whether a tumor is present. Simulations were carried out using MRI image datasets extracted from the standard breast MRI database known as the “Reference Image Database to Evaluate Response (RIDER)”. The proposed method achieved an accuracy of 98.03% on the available MRI database, with a processing time of 0.894 seconds for all steps. The obtained results demonstrate the superiority of the proposed system over those available in the literature.
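The DWT-features-plus-SVM pattern can be sketched with PyWavelets and scikit-learn. The snippet below computes generic sub-band statistics (the paper's exact 13 features are not specified here) and fits an SVM; the image data are random stand-ins:

```python
import numpy as np
import pywt
from sklearn.svm import SVC

def dwt_features(img, wavelet="db4", level=2):
    """Mean and standard deviation of every sub-band of a 2-level 2D DWT."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    bands = [coeffs[0]] + [a for triple in coeffs[1:] for a in triple]
    return np.array([stat for b in bands for stat in (b.mean(), b.std())])

rng = np.random.default_rng(0)
slices = rng.normal(size=(20, 64, 64))            # toy MRI slices
X = np.stack([dwt_features(s) for s in slices])
y = rng.integers(0, 2, size=20)                   # 1 = tumor, 0 = no tumor
clf = SVC(kernel="rbf").fit(X, y)
```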

Automatic Multi-organ Segmentation on Abdominal CT with Dense V-networks

  • Gibson, E.
  • Giganti, F.
  • Hu, Y.
  • Bonmati, E.
  • Bandula, S.
  • Gurusamy, K.
  • Davidson, B.
  • Pereira, S.
  • Clarkson, M.
  • Barratt, D.
IEEE Transactions on Medical Imaging 2018 Journal Article, cited 14 times
Website

Automatic Pancreas Segmentation using A Novel Modified Semantic Deep Learning Bottom-Up Approach

  • Paithane, Pradip Mukundrao
  • Kakarwal, S. N.
International Journal of Intelligent Systems and Applications in Engineering 2022 Journal Article, cited 0 times
Website
Sharp and smooth pancreas segmentation is a crucial and arduous problem in medical image analysis and investigation. A semantic deep learning bottom-up approach is the most popular and efficient method used for pancreas segmentation with a smooth and sharp result. The automatic pancreas segmentation process is performed through semantic segmentation of abdominal computed tomography (CT) clinical images. A novel semantic segmentation is applied for acute pancreas segmentation with CT images at different angles. In the novel modified semantic approach, 12 layers are used. The proposed model is executed on a dataset of 80 patients' single-phase CT images. From this dataset, 699 images at different angles were taken for training and 150 images for testing. The proposed approach can be used to segment many organs from CT clinical images with high accuracy. The "transposedConv2dLayer" layer is used for up-sampling and down-sampling, so the computation time is reduced relative to the state of the art. The Bfscore, Dice coefficient and Jaccard coefficient are used to calculate similarity index values between the test image and the expected output image. The proposed approach achieved a Dice similarity index score of up to 81±7.43%. The class balancing process is executed with the help of class weights and data augmentation. In the novel modified semantic segmentation, max-pooling, ReLU, softmax, transposed conv2d and dicePixelClassification layers are used. DicePixelClassification is newly introduced and incorporated in the novel method for improved results. VGG-16, VGG-19 and ResNet-18 deep learning models are used for pancreas segmentation.

Automatic Prostate Cancer Segmentation Using Kinetic Analysis in Dynamic Contrast-Enhanced MRI

  • Lavasani, S Navaei
  • Mostaar, A
  • Ashtiyani, M
Journal of Biomedical Physics & Engineering 2018 Journal Article, cited 0 times
Website

Automatic rectum limit detection by anatomical markers correlation

  • Namías, R
  • D’Amato, JP
  • Del Fresno, M
  • Vénere, M
Computerized Medical Imaging and Graphics 2014 Journal Article, cited 1 times
Website
Several diseases take place at the end of the digestive system. Many of them can be diagnosed by means of different medical imaging modalities together with computer aided detection (CAD) systems. These CAD systems mainly focus on the complete segmentation of the digestive tube. However, the detection of limits between different sections could provide important information to these systems. In this paper we present an automatic method for detecting the rectum and sigmoid colon limit using a novel global curvature analysis over the centerline of the segmented digestive tube in different imaging modalities. The results are compared with the gold standard rectum upper limit through a validation scheme comprising two different anatomical markers: the third sacral vertebra and the average rectum length. Experimental results in both magnetic resonance imaging (MRI) and computed tomography colonography (CTC) acquisitions show the efficacy of the proposed strategy in automatic detection of rectum limits. The method is intended for application to the rectum segmentation in MRI for geometrical modeling and as contextual information source in virtual colonoscopies and CAD systems. (C) 2014 Elsevier Ltd. All rights reserved.

Automatic Removal of Mechanical Fixations from CT Imagery with Particle Swarm Optimisation

  • Ryalat, Mohammad Hashem
  • Laycock, Stephen
  • Fisher, Mark
2017 Conference Proceedings, cited 0 times
Website
Fixation devices are used in radiotherapy treatment of head and neck cancers to ensure successive treatment fractions are accurately targeted. Typical fixations usually take the form of a custom made mask that is clamped to the treatment couch and these are evident in many CT data sets as radiotherapy treatment is normally planned with the mask in place. But the fixations can make planning more difficult for certain tumor sites and are often unwanted by third parties wishing to reuse the data. Manually editing the CT images to remove the fixations is time consuming and error prone. This paper presents a fast and automatic approach that removes artifacts due to fixations in CT images without affecting pixel values representing tissue. The algorithm uses particle swarm optimisation to speed up the execution time and presents results from five CT data sets that show it achieves an average specificity of 92.01% and sensitivity of 99.39%.
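For readers unfamiliar with particle swarm optimisation, a generic global-best PSO loop is sketched below on a toy quadratic objective; the paper's actual objective (quality of fixation removal) and its parameter choices are not reproduced:

```python
import numpy as np

def pso(objective, dim, n_particles=30, iters=100, bounds=(-5.0, 5.0)):
    """Minimal global-best particle swarm optimiser."""
    rng = np.random.default_rng(0)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))        # positions
    v = np.zeros_like(x)                               # velocities
    pbest = x.copy()
    pbest_f = np.apply_along_axis(objective, 1, x)     # personal bests
    g = pbest[pbest_f.argmin()].copy()                 # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.apply_along_axis(objective, 1, x)
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

best, value = pso(lambda p: ((p - 1.0) ** 2).sum(), dim=3)
```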

Automatic Segmentation and Shape, Texture-based Analysis of Glioma Using Fully Convolutional Network

  • Khan, Mohammad Badhruddouza
  • Saha, Pranto Soumik
  • Roy, Amit Dutta
2021 Conference Paper, cited 0 times
Website
Lower-grade glioma is a type of brain tumor that is usually found in the human brain and spinal cord. Early detection and accurate diagnosis of lower-grade glioma can reduce the fatal risk to affected patients. An essential step in lower-grade glioma analysis is MRI image segmentation. Manual segmentation processes are time-consuming and depend on the expertise of the pathologist. In this study, three different deep-learning-based automatic segmentation models were used to segment the tumor-affected region from the MRI slice. The segmentation accuracies of the three models (U-Net, FCN, and U-Net with a ResNeXt50 backbone) were 80%, 84%, and 91%, respectively. Two shape-based features (angular standard deviation, marginal fluctuation) and six texture-based features (entropy, local binary pattern, homogeneity, contrast, correlation, energy) were extracted from the segmented images to find associations with seven existing genomic data types. A significant association was found between the microRNA cluster genomic data type and the texture-based feature entropy, and between the RNA sequence cluster genomic data type and the shape-based feature angular standard deviation. In both cases, the p values from the Fisher exact test were below 0.05.

Automatic Segmentation of Colon in 3D CT Images and Removal of Opacified Fluid Using Cascade Feed Forward Neural Network

  • Gayathri Devi, K
  • Radhakrishnan, R
Computational and Mathematical Methods in Medicine 2015 Journal Article, cited 5 times
Website

Automatic segmentation of organs at risk and tumors in CT images of lung cancer from partially labelled datasets with a semi-supervised conditional nnU-Net

  • Zhang, G.
  • Yang, Z.
  • Huo, B.
  • Chai, S.
  • Jiang, S.
Comput Methods Programs Biomed 2021 Journal Article, cited 0 times
Website
BACKGROUND AND OBJECTIVE: Accurately and reliably defining organs at risk (OARs) and tumors are the cornerstone of radiation therapy (RT) treatment planning for lung cancer. Almost all segmentation networks based on deep learning techniques rely on fully annotated data with strong supervision. However, existing public imaging datasets encountered in the RT domain frequently include singly labelled tumors or partially labelled organs because annotating full OARs and tumors in CT images is both rigorous and tedious. To utilize labelled data from different sources, we proposed a dual-path semi-supervised conditional nnU-Net for OARs and tumor segmentation that is trained on a union of partially labelled datasets. METHODS: The framework employs the nnU-Net as the base model and introduces a conditioning strategy by incorporating auxiliary information as an additional input layer into the decoder. The conditional nnU-Net efficiently leverages prior conditional information to classify the target class at the pixelwise level. Specifically, we employ the uncertainty-aware mean teacher (UA-MT) framework to assist in OARs segmentation, which can effectively leverage unlabelled data (images from a tumor labelled dataset) by encouraging consistent predictions of the same input under different perturbations. Furthermore, we individually design different combinations of loss functions to optimize the segmentation of OARs (Dice loss and cross-entropy loss) and tumors (Dice loss and focal loss) in a dual path. RESULTS: The proposed method is evaluated on two publicly available datasets of the spinal cord, left and right lung, heart, esophagus, and lung tumor, in which satisfactory segmentation performance has been achieved in term of both the region-based Dice similarity coefficient (DSC) and the boundary-based Hausdorff distance (HD). CONCLUSIONS: The proposed semi-supervised conditional nnU-Net breaks down the barriers between nonoverlapping labelled datasets and further alleviates the problem of "data hunger" and "data waste" in multi-class segmentation. The method has the potential to help radiologists with RT treatment planning in clinical practice.
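The loss combinations described for the dual paths are standard and easy to write down. Below is a hedged PyTorch sketch of the Dice-plus-cross-entropy combination used for the OAR path, simplified to binary segmentation (the paper works multi-class, and its focal-loss tumor path is analogous):

```python
import torch
import torch.nn.functional as F

def dice_loss(logits, target, eps=1e-6):
    """Soft Dice loss for binary segmentation logits vs. a 0/1 target."""
    prob = torch.sigmoid(logits)
    inter = (prob * target).sum()
    return 1 - (2 * inter + eps) / (prob.sum() + target.sum() + eps)

def oar_loss(logits, target):
    """Dice + cross-entropy, the combination named for the OAR path."""
    bce = F.binary_cross_entropy_with_logits(logits, target)
    return dice_loss(logits, target) + bce

logits = torch.randn(2, 1, 64, 64)
target = (torch.rand(2, 1, 64, 64) > 0.5).float()
print(oar_loss(logits, target).item())
```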

Automatic tumor segmentation in single-spectral MRI using a texture-based and contour-based algorithm

  • Nabizadeh, Nooshin
  • Kubat, Miroslav
Expert Systems with Applications 2017 Journal Article, cited 8 times
Website
Automatic detection of brain tumors in single-spectral magnetic resonance images is a challenging task. Existing techniques suffer from inadequate performance, dependence on initial assumptions, and, sometimes, the need for manual interference. The research reported in this paper seeks to reduce some of these shortcomings, and to remove others, achieving satisfactory performance at reasonable computational costs. The success of the system described here is explained by the synergy of the following aspects: (1) a broad choice of high-level features to characterize the image's texture, (2) an efficient mechanism to eliminate less useful features, (3) a machine-learning technique to induce a classifier that signals the presence of tumor-affected tissue, and (4) an improved version of the skippy greedy snake algorithm to outline the tumor's contours. The paper describes the system and reports experiments with synthetic as well as real data. (C) 2017 Elsevier Ltd. All rights reserved.

AutoSeg - Steering the Inductive Biases for Automatic Pathology Segmentation

  • Meissen, Felix
  • Kaissis, Georgios
  • Rueckert, Daniel
2021 Conference Paper, cited 0 times
Website
In medical imaging, un-, semi-, or self-supervised pathology detection is often approached with anomaly- or out-of-distribution detection methods, whose inductive biases are not intentionally directed towards detecting pathologies, and are therefore sub-optimal for this task. To tackle this problem, we propose AutoSeg, an engine that can generate diverse artificial anomalies that resemble the properties of real-world pathologies. Our method can accurately segment unseen artificial anomalies and outperforms existing methods for pathology detection on a challenging real-world dataset of Chest X-ray images. We experimentally evaluate our method on the Medical Out-of-Distribution Analysis Challenge 2021 (Code available under: https://github.com/FeliMe/autoseg).

AVT: Multicenter aortic vessel tree CTA dataset collection with ground truth segmentation masks

  • Radl, L.
  • Jin, Y.
  • Pepe, A.
  • Li, J.
  • Gsaxner, C.
  • Zhao, F. H.
  • Egger, J.
2022 Journal Article, cited 2 times
Website
In this article, we present a multicenter aortic vessel tree database collection containing 56 aortas and their branches. The datasets have been acquired with computed tomography angiography (CTA) scans, and each scan covers the ascending aorta, the aortic arch and its branches into the head/neck area, the thoracic aorta, the abdominal aorta and the lower abdominal aorta with the iliac arteries branching into the legs. For each scan, the collection provides a semi-automatically generated segmentation mask of the aortic vessel tree (ground truth). The scans come from three different collections and various hospitals, with various resolutions, which enables studying the geometry/shape variability of human aortas and their branches across different geographic locations. Furthermore, the collection enables creating a robust statistical model of the shape of human aortic vessel trees, which can be used for various tasks such as the development of fully-automatic segmentation algorithms for new, unseen aortic vessel tree cases, e.g. by training deep-learning-based approaches. Hence, the collection can serve as an evaluation set for automatic aortic vessel tree segmentation algorithms.

Batch and online variational learning of hierarchical Dirichlet process mixtures of multivariate Beta distributions in medical applications

  • Manouchehri, Narges
  • Bouguila, Nizar
  • Fan, Wentao
Pattern Analysis and Applications 2021 Journal Article, cited 1 times
Website
Thanks to significant developments in the healthcare industry, various types of medical data are generated. Analysing such valuable resources aids healthcare experts in understanding illnesses more precisely and providing better clinical services. Machine learning, as one such capable tool, could assist healthcare experts in achieving expressive interpretations and making proper decisions. As annotation of medical data is a costly and sensitive task that can be performed only by healthcare professionals, label-free methods are significantly promising. Interpretability and evidence-based decisions are other concerns in medicine. These needs motivated us to propose a novel clustering method based on hierarchical Dirichlet process mixtures of multivariate Beta distributions. To learn it, we applied batch and online variational methods for finding the proper number of clusters and estimating model parameters at the same time. The effectiveness of the proposed models is evaluated on three real medical applications, namely oropharyngeal carcinoma diagnosis, osteosarcoma analysis, and white blood cell counting.

Batch Similarity Based Triplet Loss Assembled into Light-Weighted Convolutional Neural Networks for Medical Image Classification

  • Huang, Z.
  • Zhou, Q.
  • Zhu, X.
  • Zhang, X.
Sensors (Basel) 2021 Journal Article, cited 0 times
Website
In many medical image classification tasks, there is insufficient image data for deep convolutional neural networks (CNNs) to overcome the over-fitting problem. The light-weighted CNNs are easy to train but they usually have relatively poor classification performance. To improve the classification ability of light-weighted CNN models, we have proposed a novel batch similarity-based triplet loss to guide the CNNs to learn the weights. The proposed loss utilizes the similarity among multiple samples in the input batches to evaluate the distribution of training data. Reducing the proposed loss can increase the similarity among images of the same category and reduce the similarity among images of different categories. Besides this, it can be easily assembled into regular CNNs. To appreciate the performance of the proposed loss, some experiments have been done on chest X-ray images and skin rash images to compare it with several losses based on such popular light-weighted CNN models as EfficientNet, MobileNet, ShuffleNet and PeleeNet. The results demonstrate the applicability and effectiveness of our method in terms of classification accuracy, sensitivity and specificity.
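To make the idea concrete, the sketch below implements a generic batch-similarity loss in PyTorch: cosine similarities among all embeddings in a batch are pushed up for same-class pairs and down for different-class pairs. This is a stand-in illustrating the principle, not the paper's exact formulation:

```python
import torch
import torch.nn.functional as F

def batch_similarity_loss(embeddings, labels, margin=0.2):
    """Margin loss over all pairwise cosine similarities in a batch.

    Assumes the batch contains at least one same-class pair and one
    different-class pair.
    """
    z = F.normalize(embeddings, dim=1)
    sim = z @ z.t()                                    # pairwise cosine matrix
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    eye = torch.eye(len(labels), dtype=torch.bool)
    pos = sim[same & ~eye].mean()                      # same-class similarity
    neg = sim[~same].mean()                            # cross-class similarity
    return F.relu(neg - pos + margin)

z = torch.randn(16, 128, requires_grad=True)
y = torch.randint(0, 2, (16,))
loss = batch_similarity_loss(z, y)
```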

A Bayesian approach to tissue-fraction estimation for oncological PET segmentation

  • Liu, Z.
  • Mhlanga, J. C.
  • Laforest, R.
  • Derenoncourt, P. R.
  • Siegel, B. A.
  • Jha, A. K.
Phys Med Biol 2021 Journal Article, cited 0 times
Website
Tumor segmentation in oncological PET is challenging, a major reason being the partial-volume effects (PVEs) that arise due to low system resolution and finite voxel size. The latter results in tissue-fraction effects (TFEs), i.e. voxels contain a mixture of tissue classes. Conventional segmentation methods are typically designed to assign each image voxel as belonging to a certain tissue class. Thus, these methods are inherently limited in modeling TFEs. To address the challenge of accounting for PVEs, and in particular, TFEs, we propose a Bayesian approach to tissue-fraction estimation for oncological PET segmentation. Specifically, this Bayesian approach estimates the posterior mean of the fractional volume that the tumor occupies within each image voxel. The proposed method, implemented using a deep-learning-based technique, was first evaluated using clinically realistic 2D simulation studies with known ground truth, in the context of segmenting the primary tumor in PET images of patients with lung cancer. The evaluation studies demonstrated that the method accurately estimated the tumor-fraction areas and significantly outperformed widely used conventional PET segmentation methods, including a U-net-based method, on the task of segmenting the tumor. In addition, the proposed method was relatively insensitive to PVEs and yielded reliable tumor segmentation for different clinical-scanner configurations. The method was then evaluated using clinical images of patients with stage IIB/III non-small cell lung cancer from ACRIN 6668/RTOG 0235 multi-center clinical trial. Here, the results showed that the proposed method significantly outperformed all other considered methods and yielded accurate tumor segmentation on patient images with Dice similarity coefficient (DSC) of 0.82 (95% CI: 0.78, 0.86). In particular, the method accurately segmented relatively small tumors, yielding a high DSC of 0.77 for the smallest segmented cross-section of 1.30 cm(2). Overall, this study demonstrates the efficacy of the proposed method to accurately segment tumors in PET images.

Bayesian Kernel Models for Statistical Genetics and Cancer Genomics

  • Crawford, Lorin
2017 Thesis, cited 0 times

Benefit of overlapping reconstruction for improving the quantitative assessment of CT lung nodule volume

  • Gavrielides, Marios A
  • Zeng, Rongping
  • Myers, Kyle J
  • Sahiner, Berkman
  • Petrick, Nicholas
Academic Radiology 2013 Journal Article, cited 23 times
Website
RATIONALE AND OBJECTIVES: The aim of this study was to quantify the effect of overlapping reconstruction on the precision and accuracy of lung nodule volume estimates in a phantom computed tomographic (CT) study. MATERIALS AND METHODS: An anthropomorphic phantom was used with a vasculature insert on which synthetic lung nodules were attached. Repeated scans of the phantom were acquired using a 64-slice CT scanner. Overlapping and contiguous reconstructions were performed for a range of CT imaging parameters (exposure, slice thickness, pitch, reconstruction kernel) and a range of nodule characteristics (size, density). Nodule volume was estimated with a previously developed matched-filter algorithm. RESULTS: Absolute percentage bias across all nodule sizes (n = 2880) was significantly lower when overlapping reconstruction was used, with an absolute percentage bias of 6.6% (95% confidence interval [CI], 6.4-6.9), compared to 13.2% (95% CI, 12.7-13.8) for contiguous reconstruction. Overlapping reconstruction also showed a precision benefit, with a lower standard percentage error of 7.1% (95% CI, 6.9-7.2) compared with 15.3% (95% CI, 14.9-15.7) for contiguous reconstructions across all nodules. Both effects were more pronounced for the smaller, subcentimeter nodules. CONCLUSIONS: These results support the use of overlapping reconstruction to improve the quantitative assessment of nodule size with CT imaging.

Beyond Non-maximum Suppression - Detecting Lesions in Digital Breast Tomosynthesis Volumes

  • Shoshan, Yoel
  • Zlotnick, Aviad
  • Ratner, Vadim
  • Khapun, Daniel
  • Barkan, Ella
  • Gilboa-Solomon, Flora
2021 Conference Paper, cited 0 times
Website
Detecting the specific locations of malignancy signs in a medical image is a non-trivial and time-consuming task for radiologists. A complex, 3D version of this task was presented in the DBTex 2021 Grand Challenge on Digital Breast Tomosynthesis Lesion Detection. Teams from all over the world competed in an attempt to build AI models that predict the 3D locations that require biopsy. We describe a novel method to combine detection candidates from multiple models with minimal false positives. This method won second place in the DBTex competition, missing first by a very small margin while standing out clearly from the rest. We performed an ablation study to show the contribution of each of the new components in the proposed ensemble method, including additional performance improvements made after the competition.

Big biomedical image processing hardware acceleration: A case study for K-means and image filtering

  • Neshatpour, Katayoun
  • Koohi, Arezou
  • Farahmand, Farnoud
  • Joshi, Rajiv
  • Rafatirad, Setareh
  • Sasan, Avesta
  • Homayoun, Houman
IEEE International Symposium on Circuits and Systems (ISCAS) 2016 Journal Article, cited 7 times
Website

Binary Classification for Lung Nodule Based on Channel Attention Mechanism

  • Lai, Khai Dinh
  • Le, Thai Hoang
  • Nguyen, Thuy Thanh
2021 Conference Proceedings, cited 0 times
Website
To effectively handle the problem of tumor detection on the LUNA16 dataset, we present a new data augmentation methodology to address the imbalance between the numbers of positive and negative candidates in this study. Furthermore, a new deep learning model, ASS (a model that combines ConvNet sub-attention with softmax loss), is proposed and evaluated on patches of different sizes from LUNA16. Data enrichment techniques are implemented in two ways: off-line augmentation increases the number of images based on the image under consideration, and on-line augmentation increases the number of images by rotating the image at four angles (0°, 90°, 180°, and 270°). We build candidate boxes of various sizes based on the coordinates of each candidate, and these candidate boxes are used to demonstrate the usefulness of the suggested ASS model. The results of cross-testing (with four cases: case 1, ASS trained and tested on a dataset of size 50 × 50; case 2, ASS trained on a dataset of size 50 × 50 and tested on a dataset of size 100 × 100; case 3, ASS trained and tested on a dataset of size 100 × 100; and case 4, ASS trained on a dataset of size 100 × 100 and tested on a dataset of size 50 × 50) show that the proposed ASS model is feasible.

A biomarker basing on radiomics for the prediction of overall survival in non–small cell lung cancer patients

  • He, Bo
  • Zhao, Wei
  • Pi, Jiang-Yuan
  • Han, Dan
  • Jiang, Yuan-Ming
  • Zhang, Zhen-Guang
Respiratory Research 2018 Journal Article, cited 0 times
Website

Biomechanical model for computing deformations for whole‐body image registration: A meshless approach

  • Li, Mao
  • Miller, Karol
  • Joldes, Grand Roman
  • Kikinis, Ron
  • Wittek, Adam
International Journal for Numerical Methods in Biomedical Engineering 2016 Journal Article, cited 13 times
Website

BIOMEDICAL IMAGE RETRIEVAL USING LBWP

  • Babu, Joyce Sarah
  • Mathew, Soumya
  • Simon, Rini
International Research Journal of Engineering and Technology 2017 Journal Article, cited 0 times
Website

A Block Adaptive Near-Lossless Compression Algorithm for Medical Image Sequences and Diagnostic Quality Assessment

  • Sharma, Urvashi
  • Sood, Meenakshi
  • Puthooran, Emjee
J Digit Imaging 2019 Journal Article, cited 0 times
Website
The near-lossless compression technique has a better compression ratio than lossless compression while maintaining a maximum error limit for each pixel. It takes advantage of both lossy and lossless compression methods, providing a high compression ratio that can be used for medical images while preserving diagnostic information. The proposed algorithm uses a resolution- and modality-independent threshold-based predictor, an optimal quantization (q) level, and adaptive block size encoding. The proposed method employs the resolution independent gradient edge detector (RIGED) to remove inter-pixel redundancy, and block adaptive arithmetic encoding (BAAE) is used after quantization to remove coding redundancy. A quantizer with an optimum q level is used to implement the proposed method for high compression efficiency and better quality of the recovered images. The proposed method is implemented on volumetric 8-bit and 16-bit standard medical images and also validated on real-time 16-bit-depth images collected from government hospitals. The results show the proposed algorithm yields high coding performance, with a BPP of 1.37 and a high peak signal-to-noise ratio (PSNR) of 51.35 dB for the 8-bit-depth image dataset, compared with other near-lossless compression methods. Average BPP values of 3.411 and 2.609 are obtained by the proposed technique for the 16-bit standard medical image dataset and the real-time medical dataset, respectively, with maintained image quality. The improved near-lossless predictive coding technique achieves a high compression ratio without losing diagnostic information from the image.
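The core of any near-lossless predictive coder is a uniform residual quantizer that bounds the per-pixel error by a chosen delta. The sketch below uses a trivial previous-pixel predictor in place of RIGED and omits the entropy-coding (BAAE) stage; it only demonstrates the error-bound mechanism:

```python
import numpy as np

def near_lossless_quantize(img, delta=2):
    """Previous-pixel prediction with uniform residual quantisation.

    Reconstruction error is guaranteed to satisfy |pixel - recon| <= delta.
    """
    flat = img.astype(np.int64).ravel()
    step = 2 * delta + 1
    q = np.empty_like(flat)          # quantised residuals (to be entropy coded)
    recon = np.empty_like(flat)      # decoder-side reconstruction
    prev = 0
    for i, p in enumerate(flat):
        e = p - prev                                  # prediction residual
        q[i] = np.sign(e) * ((abs(e) + delta) // step)
        prev = prev + q[i] * step                     # matches the decoder
        recon[i] = prev
    return q, recon.reshape(img.shape)

img = np.random.randint(0, 4096, size=(64, 64))
q, recon = near_lossless_quantize(img, delta=2)
assert np.abs(img.astype(np.int64) - recon).max() <= 2
```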

A blockchain-based protocol for tracking user access to shared medical imaging

  • de Aguiar, Erikson J.
  • dos Santos, Alyson J.
  • Meneguette, Rodolfo I.
  • De Grande, Robson E.
  • Ueyama, Jó
Future Generation Computer Systems 2022 Journal Article, cited 0 times
Website
Modern healthcare systems are complex and regularly share sensitive data among multiple stakeholders, such as doctors, patients, and pharmacists. The volume of patients' data has increased and requires safe management methods. Research works related to blockchain, such as MIT MedRec, have strived to draft trustworthy and immutable systems to share data. However, blockchain may be challenging in healthcare scenarios due to issues around privacy and control over data sharing destinations. This paper presents a protocol for tracking shared medical data, including images, and controlling medical data access by multiple conflicting stakeholders. Several efforts rely on blockchain for healthcare, but just a few are concerned with malicious data leakage in blockchain-based healthcare systems. We implement a token mechanism stored in DICOM files and managed by the Hyperledger Fabric blockchain. Our findings and evaluations revealed low chances of a hash collision, even when employing a birthday attack against collision resistance. Although our solution was devised for healthcare, it can inspire and be easily ported to other blockchain-based application scenarios, such as Ethereum or Hyperledger Besu for business networks.
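As a rough illustration of a token of this kind, the snippet below derives a SHA-256 digest over the DICOM file bytes plus the requesting user's identity. This is a hypothetical scheme for illustration only; the paper's actual Hyperledger Fabric chaincode and token format are not reproduced:

```python
import hashlib

def access_token(dicom_path, user_id, nonce):
    """Hypothetical tracking token: hash of file content + accessor identity."""
    h = hashlib.sha256()
    with open(dicom_path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    h.update(user_id.encode())
    h.update(nonce.encode())
    return h.hexdigest()     # would be recorded on the ledger for auditing

# token = access_token("study.dcm", user_id="dr.smith", nonce="2022-01-01T12:00")
```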

Bolus arrival time and its effect on tissue characterization with dynamic contrast-enhanced magnetic resonance imaging

  • Mehrtash, Alireza
  • Gupta, Sandeep N
  • Shanbhag, Dattesh
  • Miller, James V
  • Kapur, Tina
  • Fennessy, Fiona M
  • Kikinis, Ron
  • Fedorov, Andriy
Journal of Medical Imaging 2016 Journal Article, cited 6 times
Website
Matching the bolus arrival time (BAT) of the arterial input function (AIF) and tissue residue function (TRF) is necessary for accurate pharmacokinetic (PK) modeling of dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). We investigated the sensitivity of volume transfer constant ([Formula: see text]) and extravascular extracellular volume fraction ([Formula: see text]) to BAT and compared the results of four automatic BAT measurement methods in characterization of prostate and breast cancers. Variation in delay between AIF and TRF resulted in a monotonous change trend of [Formula: see text] and [Formula: see text] values. The results of automatic BAT estimators for clinical data were all comparable except for one BAT estimation method. Our results indicate that inaccuracies in BAT measurement can lead to variability among DCE-MRI PK model parameters, diminish the quality of model fit, and produce fewer valid voxels in a region of interest. Although the selection of the BAT method did not affect the direction of change in the treatment assessment cohort, we suggest that BAT measurement methods must be used consistently in the course of longitudinal studies to control measurement variability.
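The sensitivity studied here can be reproduced with a few lines of NumPy: the standard Tofts model convolves the AIF with an exponential impulse response, and shifting the AIF by a BAT offset changes the resulting tissue curve. The sketch below uses a toy Gaussian bolus as the AIF; all parameter values are illustrative:

```python
import numpy as np

def tofts(cp, t, ktrans, ve, bat_shift=0.0):
    """Standard Tofts tissue curve from an AIF delayed by bat_shift seconds."""
    dt = t[1] - t[0]
    cp_shifted = np.interp(t - bat_shift, t, cp, left=0.0)
    irf = ktrans * np.exp(-(ktrans / ve) * t)          # impulse response
    return np.convolve(cp_shifted, irf)[: len(t)] * dt

t = np.arange(0, 300, 2.0)                              # seconds
cp = np.exp(-(((t - 30) / 15.0) ** 2))                  # toy AIF bolus
for shift in (0.0, 5.0, 10.0):                          # AIF/TRF delay mismatch
    ct = tofts(cp, t, ktrans=0.25 / 60, ve=0.3, bat_shift=shift)
```

Refitting ktrans and ve against curves generated with different shifts would exhibit the monotonic parameter drift the study reports.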

Bone Marrow and Tumor Radiomics at (18)F-FDG PET/CT: Impact on Outcome Prediction in Non-Small Cell Lung Cancer

  • Mattonen, Sarah A
  • Davidzon, Guido A
  • Benson, Jalen
  • Leung, Ann N C
  • Vasanawala, Minal
  • Horng, George
  • Shrager, Joseph B
  • Napel, Sandy
  • Nair, Viswam S.
Radiology 2019 Journal Article, cited 0 times
Website
Background Primary tumor maximum standardized uptake value is a prognostic marker for non-small cell lung cancer. In the setting of malignancy, bone marrow activity from fluorine 18-fluorodeoxyglucose (FDG) PET may be informative for clinical risk stratification. Purpose To determine whether integrating FDG PET radiomic features of the primary tumor, tumor penumbra, and bone marrow identifies lung cancer disease-free survival more accurately than clinical features alone. Materials and Methods Patients were retrospectively analyzed from two distinct cohorts collected between 2008 and 2016. Each tumor, its surrounding penumbra, and bone marrow from the L3-L5 vertebral bodies was contoured on pretreatment FDG PET/CT images. There were 156 bone marrow and 512 tumor and penumbra radiomic features computed from the PET series. Randomized sparse Cox regression by least absolute shrinkage and selection operator identified features that predicted disease-free survival in the training cohort. Cox proportional hazards models were built and locked in the training cohort, then evaluated in an independent cohort for temporal validation. Results There were 227 patients analyzed; 136 for training (mean age, 69 years +/- 9 [standard deviation]; 101 men) and 91 for temporal validation (mean age, 72 years +/- 10; 91 men). The top clinical model included stage; adding tumor region features alone improved outcome prediction (log likelihood, -158 vs -152; P = .007). Adding bone marrow features continued to improve performance (log likelihood, -158 vs -145; P = .001). The top model integrated stage, two bone marrow texture features, one tumor with penumbra texture feature, and two penumbra texture features (concordance, 0.78; 95% confidence interval: 0.70, 0.85; P < .001). This fully integrated model was a predictor of poor outcome in the independent cohort (concordance, 0.72; 95% confidence interval: 0.64, 0.80; P < .001) and a binary score stratified patients into high and low risk of poor outcome (P < .001). Conclusion A model that includes pretreatment fluorine 18-fluorodeoxyglucose PET texture features from the primary tumor, tumor penumbra, and bone marrow predicts disease-free survival of patients with non-small cell lung cancer more accurately than clinical features alone. (c) RSNA, 2019 Online supplemental material is available for this article.

Bone suppression for chest X-ray image using a convolutional neural filter

  • Matsubara, N.
  • Teramoto, A.
  • Saito, K.
  • Fujita, H.
Australas Phys Eng Sci Med 2019 Journal Article, cited 0 times
Website
Chest X-rays are used for mass screening for the early detection of lung cancer. However, lung nodules are often overlooked because of bones overlapping the lung fields. Bone suppression techniques based on artificial intelligence have been developed to solve this problem, but bone suppression accuracy still needs improvement. In this study, we propose a convolutional neural filter (CNF) for bone suppression based on a convolutional neural network, which is frequently used in the medical field and has excellent performance in image processing. CNF outputs a value for the bone component of the target pixel by taking as input the pixel values in the neighborhood of the target pixel. By processing all positions in the input image, a bone-extracted image is generated. Finally, the bone-suppressed image is obtained by subtracting the bone-extracted image from the original chest X-ray image. Bone suppression was most accurate when using CNF with six convolutional layers, yielding bone suppression of 89.2%. In addition, abnormalities, if present, were effectively imaged by suppressing only bone components and maintaining soft tissue. These results suggest that the chances of missing abnormalities may be reduced by using the proposed method. The proposed method is useful for bone suppression in chest X-ray images.

Bone-Cancer Assessment and Destruction Pattern Analysis in Long-Bone X-ray Image

  • Bandyopadhyay, Oishila
  • Biswas, Arindam
  • Bhattacharya, Bhargab B
J Digit Imaging 2018 Journal Article, cited 0 times
Website
Bone cancer originates from bone and rapidly spreads to the rest of the body affecting the patient. A quick and preliminary diagnosis of bone cancer begins with the analysis of bone X-ray or MRI image. Compared to MRI, an X-ray image provides a low-cost diagnostic tool for diagnosis and visualization of bone cancer. In this paper, a novel technique for the assessment of cancer stage and grade in long bones based on X-ray image analysis has been proposed. Cancer-affected bone images usually appear with a variation in bone texture in the affected region. A fusion of different methodologies is used for the purpose of our analysis. In the proposed approach, we extract certain features from bone X-ray images and use support vector machine (SVM) to discriminate healthy and cancerous bones. A technique based on digital geometry is deployed for localizing cancer-affected regions. Characterization of the present stage and grade of the disease and identification of the underlying bone-destruction pattern are performed using a decision tree classifier. Furthermore, the method leads to the development of a computer-aided diagnostic tool that can readily be used by paramedics and doctors. Experimental results on a number of test cases reveal satisfactory diagnostic inferences when compared with ground truth known from clinical findings.

BRAIN CANCER DETECTION FROM MRI: A MACHINE LEARNING APPROACH (TENSORFLOW)

  • Sawant, Aaswad
  • Bhandari, Mayur
  • Yadav, Ravikumar
  • Yele, Rohan
  • Bendale, Mrs Sneha
BRAIN 2018 Journal Article, cited 0 times
Website

Brain extraction on MRI scans in presence of diffuse glioma: Multi-institutional performance evaluation of deep learning methods and robust modality-agnostic training

  • Thakur, S.
  • Doshi, J.
  • Pati, S.
  • Rathore, S.
  • Sako, C.
  • Bilello, M.
  • Ha, S. M.
  • Shukla, G.
  • Flanders, A.
  • Kotrotsou, A.
  • Milchenko, M.
  • Liem, S.
  • Alexander, G. S.
  • Lombardo, J.
  • Palmer, J. D.
  • LaMontagne, P.
  • Nazeri, A.
  • Talbar, S.
  • Kulkarni, U.
  • Marcus, D.
  • Colen, R.
  • Davatzikos, C.
  • Erus, G.
  • Bakas, S.
Neuroimage 2020 Journal Article, cited 0 times
Website
Brain extraction, or skull-stripping, is an essential pre-processing step in neuro-imaging that has a direct impact on the quality of all subsequent processing and analysis steps. It is also a key requirement in multi-institutional collaborations to comply with privacy-preserving regulations. Existing automated methods, including Deep Learning (DL) based methods that have obtained state-of-the-art results in recent years, have primarily targeted brain extraction without considering pathologically-affected brains. Accordingly, they perform sub-optimally when applied on magnetic resonance imaging (MRI) brain scans with apparent pathologies such as brain tumors. Furthermore, existing methods focus on using only T1-weighted MRI scans, even though multi-parametric MRI (mpMRI) scans are routinely acquired for patients with suspected brain tumors. In this study, we present a comprehensive performance evaluation of recent deep learning architectures for brain extraction, training models on mpMRI scans of pathologically-affected brains, with a particular focus on seeking a practically-applicable, low computational footprint approach, generalizable across multiple institutions, further facilitating collaborations. We identified a large retrospective multi-institutional dataset of n=3340 mpMRI brain tumor scans, with manually-inspected and approved gold-standard segmentations, acquired during standard clinical practice under varying acquisition protocols, both from private institutional data and public (TCIA) collections. To facilitate optimal utilization of rich mpMRI data, we further introduce and evaluate a novel 'modality-agnostic training' technique that can be applied using any available modality, without need for model retraining. Our results indicate that the modality-agnostic approach obtains accurate results, providing a generic and practical tool for brain extraction on scans with brain tumors.

Brain image classification by the combination of different wavelet transforms and support vector machine classification

  • Mishra, Shailendra Kumar
  • Deepthi, V. Hima
Journal of Ambient Intelligence and Humanized Computing 2020 Journal Article, cited 0 times
Website
The human brain is the primary organ, located at the center of the nervous system in the human body. An abnormal growth of cells in the brain is known as a brain tumor. A tumor in the brain does not spread to other parts of the human body. Early diagnosis of brain tumors is required. In this work, an efficient technique is presented for magnetic resonance imaging (MRI) brain image classification using different wavelet transforms, namely the discrete wavelet transform (DWT), stationary wavelet transform (SWT) and dual-tree M-band wavelet transform (DMWT), for feature extraction and coefficient selection, with a support vector machine classifier used for classification. The normal and abnormal MRI brain images are decomposed by DWT, SWT and DMWT. The sub-band coefficients are selected by rank features for classification. Results show that DWT, SWT and DMWT produce 98% accuracy for the MRI brain classification system.

Brain Tumor Automatic Detection from MRI Images Using Transfer Learning Model with Deep Convolutional Neural Network

  • Bayoumi, Esraa
  • Abd-Ellah, Mahmoud
  • Khalaf, Ashraf A. M.
  • Gharieb, Reda
Journal of Advanced Engineering Trends 2021 Journal Article, cited 1 times
Website
Successful brain tumor detection at an early stage plays an important role in improving patient treatment and survival. Evaluating magnetic resonance imaging (MRI) images manually is a very difficult task due to the large number of images produced routinely in the clinic. So, there is a need for a computer-aided diagnosis (CAD) system for the early detection and classification of brain tumors as normal or abnormal. The paper aims to design and evaluate CNN transfer learning architectures, which have achieved state-of-the-art image classification performance over recent years. Five different modifications have been applied to five well-known CNNs to determine the most effective modification. Five layer modifications with parameter tuning are applied to each architecture, providing a new CNN architecture for brain tumor detection. Most brain tumor datasets have a small number of images for training deep learning structures. Therefore, two datasets are used in the evaluation to ensure the effectiveness of the proposed structures. The first is a standard dataset from the RIDER Neuro MRI database including 349 brain MRI images, with 109 normal images and 240 abnormal images. The second is a collection of 120 brain MRI images, with 60 abnormal images and 60 normal images. The results show that the proposed CNN transfer learning with MRIs can learn significant biomarkers of brain tumors, and the best accuracy, specificity, and sensitivity achieved were 100% for all three.

Brain tumor classification from multi-modality MRI using wavelets and machine learning

  • Usman, Khalid
  • Rajpoot, Kashif
Pattern Analysis and Applications 2017 Journal Article, cited 17 times
Website

Brain Tumor Classification Using MRI Images with K-Nearest Neighbor Method

  • Ramdlon, Rafi Haidar
  • Martiana Kusumaningtyas, Entin
  • Karlita, Tita
2019 Conference Proceedings, cited 0 times
Accurate diagnosis of tumor type from MRI results is required to establish appropriate medical treatment. MRI results can be examined computationally using the K-Nearest Neighbor method, a basic classification technique in image processing. The tumor classification system is designed to detect tumor and edema in T1 and T2 image sequences, as well as to label and classify tumor type. The system interprets only the axial section of the MRI results, classified into three classes: Astrocytoma, Glioblastoma, and Oligodendroglioma. To detect the tumor area, basic image processing techniques are employed, comprising image enhancement, image binarization, morphological operations, and watershed. Tumor classification is applied after a shape feature extraction segmentation step. The tumor classification accuracy obtained was 89.5 percent, providing clearer and more specific information for tumor detection.

Brain Tumor Classification using Support Vector Machine

  • Vani, N
  • Sowmya, A
  • Jayamma, N
International Research Journal of Engineering and Technology 2017 Journal Article, cited 0 times
Website

Brain tumor classification using the fused features extracted from expanded tumor region

  • Öksüz, Coşku
  • Urhan, Oğuzhan
  • Güllü, Mehmet Kemal
Biomedical Signal Processing and Control 2022 Journal Article, cited 0 times
Website

Brain tumor detection based on Convolutional Neural Network with neutrosophic expert maximum fuzzy sure entropy

  • Özyurt, Fatih
  • Sert, Eser
  • Avci, Engin
  • Dogantekin, Esin
Measurement 2019 Journal Article, cited 0 times
Brain tumor classification is a challenging task in the field of medical image processing. The present study proposes a hybrid method using neutrosophy and a convolutional neural network (NS-CNN). It aims to classify tumor regions segmented from brain images as benign or malignant. In the first stage, MRI images were segmented using the neutrosophic set expert maximum fuzzy-sure entropy (NS-EMFSE) approach. The features of the segmented brain images in the classification stage were obtained by CNN and classified using SVM and KNN classifiers. Experimental evaluation was carried out based on 5-fold cross-validation on 80 benign tumors and 80 malignant tumors. The findings demonstrated that the CNN features displayed high classification performance with different classifiers. Experimental results indicate that CNN features performed best with SVM, with simulation results validating the output data at an average success rate of 95.62%.

Brain tumor detection based on Naïve Bayes Classification

  • Zaw, Hein Tun
  • Maneerat, Noppadol
  • Win, Khin Yadanar
2019 Conference Paper, cited 2 times
Website
Brain cancer is caused by a population of abnormal cells, called glial cells, that arises in the brain. Over the years, the number of patients with brain cancer has been increasing with the aging population, making it a worldwide health problem. The objective of this paper is to develop a method to detect brain tissues affected by cancer, especially the grade-4 tumor Glioblastoma multiforme (GBM). GBM is one of the most malignant cancerous brain tumors, as it is fast growing and more likely to spread to other parts of the brain. In this paper, Naïve Bayes classification is utilized for accurate recognition of a tumor region that contains all spreading cancerous tissues. A brain MRI database, preprocessing, morphological operations, pixel subtraction, maximum entropy thresholding, statistical feature extraction, and a Naïve Bayes classifier based prediction algorithm are used in this research. The goal of this method is to detect the tumor area in different brain MRI images and to predict whether the detected area is a tumor or not. Compared to other methods, this method can properly detect tumors located in different regions of the brain, including the middle region (aligned with eye level), which is a significant advantage. When tested on 50 MRI images, this method achieves an 81.25% detection rate on tumor images and a 100% detection rate on non-tumor images, with an overall accuracy of 94%.
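The classification step maps naturally onto scikit-learn's Gaussian Naïve Bayes. The sketch below assumes a row of statistical features per candidate region (random stand-ins here; the paper's exact feature set is not reproduced):

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

# X: statistical features per detected region (e.g. mean, variance,
# entropy, ...); y: 1 = tumor, 0 = not a tumor. Synthetic stand-ins.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 6))
y = rng.integers(0, 2, size=50)

nb = GaussianNB().fit(X, y)
print(nb.predict(X[:5]), nb.predict_proba(X[:5])[:, 1])
```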

Brain tumor detection from MRI image: An approach

  • Ghosh, Debjyoti
  • Bandyopadhyay, Samir Kumar
International Journal of Applied Research 2017 Journal Article, cited 0 times
Website
A brain tumor is an abnormal growth of cells within the brain, which can be cancerous or noncancerous (benign). This paper detects different types of tumors and cancerous growths within the brain and associated areas by applying computerized methods to MRI images of a patient. It is also possible to track the growth patterns of such tumors.

Brain Tumor Detection using Curvelet Transform and Support Vector Machine

  • Gupta, Bhawna
  • Tiwari, Shamik
International Journal of Computer Science and Mobile Computing 2014 Journal Article, cited 8 times
Website

Brain Tumor Extraction from MRI Using Clustering Methods and Evaluation of Their Performance

  • Singh, Vipula
  • Tunga, P. Prakash
2019 Conference Paper, cited 0 times
Website
In this paper, we consider the extraction of brain tumors from MRI (Magnetic Resonance Imaging) images using K-means, Fuzzy c-means and region growing clustering methods. After extraction, various parameters related to the performance of the clustering methods, as well as parameters describing the tumor, are calculated. MRI is a non-invasive method which provides a view of the structural features of tissues in the body at very high resolution (typically on a 100 μm scale). It is therefore advantageous to base the detection and segmentation of brain tumors on MRI. This work is a step toward replacing the manual identification and separation of tumor structures from brain MRI with computer-aided techniques, which would add great value with respect to accuracy, reproducibility, diagnosis and treatment planning. The brain tumor separated from the original image is referred to as the Region of Interest (ROI) and the remaining portion of the original image as the Non-Region of Interest (NROI).
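A minimal K-means ROI extraction along these lines is sketched below with scikit-learn; clustering raw intensities and keeping the brightest cluster is an illustrative heuristic (tumors are often hyperintense on T2/FLAIR), not necessarily the thesis's exact rule:

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_roi(slice2d, k=4):
    """Cluster voxel intensities; return the brightest cluster as the ROI."""
    intensities = slice2d.reshape(-1, 1).astype(float)
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(intensities)
    means = [intensities[labels == i].mean() for i in range(k)]
    roi = (labels == int(np.argmax(means))).reshape(slice2d.shape)
    return roi                      # NROI is simply the logical complement

mri = np.random.rand(128, 128)      # toy stand-in for an MRI slice
mask = kmeans_roi(mri)
```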

Brain tumor segmentation approach based on the extreme learning machine and significantly fast and robust fuzzy C-means clustering algorithms running on Raspberry Pi hardware

  • ŞİŞİK, Fatih
  • Sert, Eser
Medical Hypotheses 2020 Journal Article, cited 0 times
Automatic decision support systems have gained importance in the health sector in recent years. In parallel with recent developments in the fields of artificial intelligence and image processing, embedded systems are also used in decision support systems for tumor diagnosis. The extreme learning machine (ELM) is a recently developed, quick and efficient algorithm which can quickly and flawlessly diagnose tumors using machine learning techniques. Similarly, the significantly fast and robust fuzzy C-means clustering algorithm (FRFCM) is a novel and fast algorithm which displays high performance. In the present study, a brain tumor segmentation approach is proposed based on the extreme learning machine and the significantly fast and robust fuzzy C-means clustering algorithm (BTS-ELM-FRFCM), running on Raspberry Pi (PRI) hardware. The present study mainly aims to introduce a new segmentation system hardware containing new algorithms and offering a high level of accuracy to the health sector. PRIs are useful mobile devices due to their cost-effectiveness and satisfying hardware. 3200 training images were used to train the ELM in the present study, and 20 MRI images were used for the testing process. The figure of merit (FOM), Jaccard similarity coefficient (JSC) and Dice index were used to evaluate the performance of the proposed approach. In addition, the proposed method was compared with brain tumor segmentation based on support vector machines (BTS-SVM), brain tumor segmentation based on fuzzy C-means (BTS-FCM) and brain tumor segmentation based on self-organizing maps and k-means (BTS-SOM). The statistical analysis of the FOM, JSC and Dice results obtained using the four different approaches indicated that BTS-ELM-FRFCM displayed the highest performance. Thus, it can be concluded that the embedded system designed in the present study can perform brain tumor segmentation with a high accuracy rate.

Brain Tumor Segmentation based on Knowledge Distillation and Adversarial Training

  • Hou, Yaqing
  • Li, Tianbo
  • Zhang, Qiang
  • Yu, Hua
  • Ge, Hongwei
2021 Conference Paper, cited 0 times
Website
3D MRI brain tumor segmentation is a reliable basis for disease diagnosis and future treatment planning. Early on, the segmentation of brain tumors was mostly done manually. However, manual segmentation of 3D MRI brain tumors requires professional anatomical knowledge and may be inaccurate. In this paper, we propose a 3D MRI brain tumor segmentation architecture based on the encoder-decoder structure. Specifically, we introduce knowledge distillation and adversarial training methods, which compress the model and improve its accuracy and robustness. Furthermore, we obtain soft targets by training multiple teacher networks and then apply them to the student network. Finally, we evaluate our method on the challenging BraTS dataset. As a result, the performance of our proposed model is superior to state-of-the-art methods.
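
The distillation step can be summarized by its loss: the student matches temperature-softened teacher probabilities in addition to the hard labels. A minimal PyTorch sketch follows; here teacher_logits could be, for example, the average of several teachers' logits, and the temperature and mixing weight are illustrative rather than the paper's values.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, targets,
                      T: float = 4.0, alpha: float = 0.5):
    """Soft-target KL term (teacher -> student) plus hard-label cross-entropy."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)  # scale to keep gradient magnitudes comparable
    hard = F.cross_entropy(student_logits, targets)
    return alpha * soft + (1.0 - alpha) * hard
```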

Brain tumor segmentation in MRI images using nonparametric localization and enhancement methods with U-net

  • Ilhan, A.
  • Sekeroglu, B.
  • Abiyev, R.
Int J Comput Assist Radiol Surg 2022 Journal Article, cited 2 times
Website
PURPOSE: Segmentation is one of the critical steps in analyzing medical images since it provides meaningful information for the diagnosis, monitoring, and treatment of brain tumors. In recent years, several artificial intelligence-based systems have been developed to perform this task accurately. However, the unobtrusive or low-contrast occurrence of some tumors and similarities to healthy brain tissues make the segmentation task challenging. This has led researchers to develop new methods for preprocessing the images and improving their segmentation abilities. METHODS: This study proposes an efficient system for the segmentation of complete brain tumors from MRI images based on tumor localization and enhancement methods with a deep learning architecture named U-net. Initially, the histogram-based nonparametric tumor localization method is applied to localize the tumorous regions, and the proposed tumor enhancement method is used to modify the localized regions to increase the visual appearance of indistinct or low-contrast tumors. The resultant images are fed to the original U-net architecture to segment the complete brain tumors. RESULTS: The performance of the proposed tumor localization and enhancement methods with the U-net was tested on the benchmark BRATS 2012, BRATS 2019, and BRATS 2020 datasets and achieved superior results: Dice scores of 0.94 and 0.85 for the BRATS 2012 HGG and LGG subsets, and 0.87 and 0.88 for the BRATS 2019 and BRATS 2020 datasets, respectively. CONCLUSION: The results and comparisons showed how the proposed methods improve the segmentation ability of deep learning models and provide high-accuracy, low-cost segmentation of complete brain tumors in MRI images. The results might motivate the implementation of the proposed methods in segmentation tasks of other medical fields.

Brain tumor segmentation in multi‐spectral MRI using convolutional neural networks (CNN)

  • Iqbal, Sajid
  • Ghani, M Usman
  • Saba, Tanzila
  • Rehman, Amjad
Microscopy Research and Technique 2018 Journal Article, cited 8 times
Website

Brain Tumor Segmentation Using Deep Learning Technique

  • Singh, Oyesh Mann
2017 Thesis, cited 0 times
Website

Brain tumor segmentation using morphological processing and the discrete wavelet transform

  • Lojzim, Joshua Michael
  • Fries, Marcus
Journal of Young Investigators 2017 Journal Article, cited 0 times
Website

Brain Tumor Segmentation Using Multi-Cascaded Convolutional Neural Networks and Conditional Random Field

  • Hu, Kai
  • Gan, Qinghai
  • Zhang, Yuan
  • Deng, Shuhua
  • Xiao, Fen
  • Huang, Wei
  • Cao, Chunhong
  • Gao, Xieping
IEEE Access 2019 Journal Article, cited 2 times
Website
Accurate segmentation of brain tumors is an indispensable component of cancer diagnosis and treatment. In this paper, we propose a novel brain tumor segmentation method based on a multi-cascaded convolutional neural network (MCCNN) and fully connected conditional random fields (CRFs). The segmentation process mainly includes the following two steps. First, we design a multi-cascaded network architecture that combines the intermediate results of several connected components to take the local dependencies of labels into account and makes use of multi-scale features for the coarse segmentation. Second, we apply CRFs to incorporate spatial contextual information and eliminate spurious outputs for the fine segmentation. In addition, we use image patches obtained from axial, coronal, and sagittal views to train three segmentation models, respectively, and then combine them to obtain the final segmentation result (see the fusion sketch below). The validity of the proposed method is evaluated on three publicly available databases. The experimental results show that our method achieves competitive performance compared with state-of-the-art approaches.
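
The abstract does not spell out how the three view-specific models are combined; a plausible, minimal scheme is to average their class-probability maps and take the per-voxel arg-max, as sketched here (the averaging rule is an assumption for illustration).

```python
import numpy as np

def fuse_views(prob_axial, prob_coronal, prob_sagittal):
    """Average class-probability volumes of shape (C, D, H, W) from the
    three plane-specific models, then pick the most probable label per voxel."""
    fused = (prob_axial + prob_coronal + prob_sagittal) / 3.0
    return fused.argmax(axis=0)  # (D, H, W) label volume
```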

Brain Tumor Segmentation Using Non-local Mask R-CNN and Single Model Ensemble

  • Dai, Zhenzhen
  • Wen, Ning
  • Carver, Eric
2022 Conference Paper, cited 0 times
Website
Gliomas are the most common primary malignant brain tumors. Accurate segmentation and quantitative analysis of brain tumors are critical for diagnosis and treatment planning. Automatically segmenting tumors and their subregions is a challenging task, as demonstrated by the annual Multimodal Brain Tumor Segmentation Challenge (BraTS). In order to tackle this challenging task, we trained a 2D non-local Mask R-CNN with 814 patients from the BraTS 2021 training dataset. Our performance on another 417 patients from the BraTS 2021 training dataset was as follows: DSC of 0.784, 0.851 and 0.817, and sensitivity of 0.775, 0.844 and 0.825 for the enhancing tumor, whole tumor and tumor core, respectively. By applying the focal loss function, our method achieved a DSC of 0.775, 0.885 and 0.829, as well as sensitivity of 0.757, 0.877 and 0.801. We also experimented with data distillation to ensemble a single model's predictions. Our refined results were a DSC of 0.797, 0.884 and 0.833 and sensitivity of 0.820, 0.855 and 0.820.
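
The focal loss mentioned here is the standard formulation of Lin et al., which down-weights easy examples so training focuses on hard ones. A minimal binary PyTorch sketch follows; the alpha and gamma values are the customary defaults, not necessarily the authors' settings.

```python
import torch
import torch.nn.functional as F

def binary_focal_loss(logits, targets, alpha: float = 0.25, gamma: float = 2.0):
    """Focal loss: scale per-pixel BCE by (1 - p_t)^gamma to emphasize
    hard, misclassified pixels."""
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p = torch.sigmoid(logits)
    p_t = p * targets + (1 - p) * (1 - targets)
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1 - p_t) ** gamma * bce).mean()
```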

Brain Tumor Segmentation Using UNet-Context Encoding Network

  • Rahman, Md Monibor
  • Sadique, Md Shibly
  • Temtam, Ahmed G.
  • Farzana, Walia
  • Vidyaratne, L.
  • Iftekharuddin, Khan M.
2022 Conference Paper, cited 0 times
Website
Glioblastoma is an aggressive type of cancer that can develop in the brain or spinal cord. Magnetic resonance imaging (MRI) is key to diagnosing and tracking brain tumors in clinical settings. Brain tumor segmentation in MRI is required for disease diagnosis, surgical planning, and prognosis. As these tumors are heterogeneous in shape and appearance, their segmentation is a challenging task. The performance of automated medical image segmentation has considerably improved because of recent advances in deep learning. Introducing context encoding with deep CNN models has shown promise for semantic segmentation of brain tumors. In this work, we use a 3D UNet-Context Encoding (UNCE) deep learning network for improved brain tumor segmentation. Further, we introduce epistemic and aleatoric uncertainty quantification (UQ) using Monte Carlo dropout (MCDO) and test-time augmentation (TTA) with the UNCE deep learning model to ascertain confidence in tumor segmentation performance. We build our model using the training MRI image sets of the RSNA-ASNR-MICCAI Brain Tumor Segmentation (BraTS) Challenge 2021 and evaluate model performance using the validation and test images from the BraTS challenge dataset. Online evaluation of validation data shows Dice similarity coefficients (DSC) of 0.7787, 0.8499, and 0.9159 for enhancing tumor (ET), tumor core (TC), and whole tumor (WT), respectively. The DSCs on the test datasets are 0.6684 for ET, 0.7056 for TC, and 0.7551 for WT.
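
Of the two UQ mechanisms named, Monte Carlo dropout is the easier to sketch: dropout stays active at inference, repeated stochastic forward passes are averaged, and their spread serves as an epistemic-uncertainty estimate. A minimal PyTorch sketch, assuming the model contains dropout layers, follows.

```python
import torch

@torch.no_grad()
def mc_dropout_predict(model, x, n_samples: int = 20):
    """Average n_samples stochastic forward passes; the per-voxel variance
    approximates epistemic uncertainty."""
    model.train()  # keeps dropout active (beware BatchNorm also switches mode)
    probs = torch.stack(
        [torch.softmax(model(x), dim=1) for _ in range(n_samples)]
    )
    return probs.mean(dim=0), probs.var(dim=0)
```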

Brain Tumor Segmentation with Patch-Based 3D Attention UNet from Multi-parametric MRI

  • Feng, Xue
  • Bai, Harrison
  • Kim, Daniel
  • Maragkos, Georgios
  • Machaj, Jan
  • Kellogg, Ryan
2022 Book Section, cited 0 times
Website
Accurate segmentation of the different sub-regions of gliomas, including peritumoral edema, necrotic core, and enhancing and non-enhancing tumor core, from multiparametric MRI scans has important clinical relevance in the diagnosis, prognosis and treatment of brain tumors. However, due to their highly heterogeneous appearance and shape, segmentation of the sub-regions is very challenging. Recent developments using deep learning models have proved effective in several past brain segmentation challenges as well as in other semantic and medical image segmentation problems. In this paper we developed a deep-learning-based segmentation method using a patch-based 3D UNet with an attention block. Hyper-parameter tuning and training- and test-time augmentations were applied to increase model performance. Preliminary results showed the effectiveness of the segmentation model, which achieved mean Dice scores of 0.806 (ET), 0.863 (TC) and 0.918 (WT) on the validation dataset.
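
The attention block is not specified in the abstract; a common choice for U-Nets is the additive attention gate of Attention U-Net (Oktay et al.), sketched below for 3D feature maps under the simplifying assumption that the skip and gating tensors already share spatial size.

```python
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Additive attention gate: the decoder's gating signal reweights the
    encoder skip features before concatenation."""
    def __init__(self, ch_skip: int, ch_gate: int, ch_inter: int):
        super().__init__()
        self.theta = nn.Conv3d(ch_skip, ch_inter, kernel_size=1)
        self.phi = nn.Conv3d(ch_gate, ch_inter, kernel_size=1)
        self.psi = nn.Conv3d(ch_inter, 1, kernel_size=1)

    def forward(self, skip, gate):
        # skip and gate are assumed to share spatial dimensions here
        attn = torch.sigmoid(self.psi(torch.relu(self.theta(skip) + self.phi(gate))))
        return skip * attn
```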

Brain tumors detection and segmentation in MR images: Gabor wavelet vs. statistical features

  • Nabizadeh, Nooshin
  • Kubat, Miroslav
Computers & Electrical Engineering 2015 Journal Article, cited 85 times
Website
Automated recognition of brain tumors in magnetic resonance images (MRI) is a difficult procedure owing to the variability and complexity of the location, size, shape, and texture of these lesions. Because of intensity similarities between brain lesions and normal tissues, some approaches make use of multi-spectral anatomical MRI scans. However, the time and cost restrictions for collecting multi-spectral MRI scans, among other difficulties, necessitate developing an approach that can detect tumor tissues using single-spectral anatomical MRI images. In this paper, we present a fully automatic system which is able to detect slices that include tumor and to delineate the tumor area. The experimental results on a single contrast mechanism demonstrate the efficacy of our proposed technique in successfully segmenting brain tumor tissues with high accuracy and low computational complexity. Moreover, we include a study evaluating the efficacy of statistical features over Gabor wavelet features using several classifiers. This contribution fills a gap in the literature, as it is the first to compare these sets of features for tumor segmentation applications.
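
For readers unfamiliar with the Gabor side of the comparison, texture descriptors of this kind are typically built by filtering the image with a small bank of Gabor kernels at several frequencies and orientations and summarizing the response magnitudes. A minimal scikit-image sketch follows; the frequency and orientation grid is illustrative, not the paper's configuration.

```python
import numpy as np
from skimage.filters import gabor

def gabor_features(image, frequencies=(0.1, 0.2, 0.3), n_angles: int = 4):
    """Mean and variance of Gabor response magnitudes over a filter bank."""
    feats = []
    for f in frequencies:
        for theta in np.linspace(0, np.pi, n_angles, endpoint=False):
            real, imag = gabor(image, frequency=f, theta=theta)
            mag = np.hypot(real, imag)
            feats += [mag.mean(), mag.var()]
    return np.asarray(feats)
```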

Brain tumour classification using two-tier classifier with adaptive segmentation technique

  • Anitha, V
  • Murugavalli, S
IET Computer Vision 2016 Journal Article, cited 46 times
Website
A brain tumour is a mass of tissue formed by a gradual accumulation of anomalous cells, and it is important to classify brain tumours from magnetic resonance imaging (MRI) for treatment. Human investigation is the routine technique for brain MRI tumour detection and tumour classification. Interpretation of images is based on organised and explicit classification of brain MRI, and various techniques have been proposed. Brain tumour segmentation on MRI provides information about anatomical structures and potentially abnormal tissues that is necessary for treatment. The proposed system uses the adaptive pillar K-means algorithm for segmentation, and the classification methodology follows a two-tier approach. In the proposed system, a self-organising map neural network is first trained on features extracted with discrete wavelet transform blend wavelets; the resultant filter factors are subsequently used to train a K-nearest-neighbour classifier, and the testing process is likewise accomplished in two stages. The proposed two-tier classification system classifies brain tumours in a double training process, which gives preferable performance over traditional classification methods. The proposed system has been validated with real data sets, and the experimental results showed enhanced performance.

Brain Tumour Segmentation with a Muti-Pathway ResNet Based UNet

  • Saha, Aheli
  • Zhang, Yu-Dong
  • Satapathy, Suresh Chandra
Journal of Grid Computing 2021 Journal Article, cited 0 times
Website
Automatic segmentation of brain tumour regions is essential for proper diagnosis and treatment of the disease. Gliomas can appear in any region and can be of any shape and size, which makes automatic detection challenging. However, with the availability of high-quality MRI scans, various strides have been made in this field. In this paper, we propose a novel multi-pathway UNet incorporating residual networks and skip connections to segment multimodal magnetic resonance images into three hierarchical glioma sub-regions. The multiple pathways serve to decompose the multiclass segmentation problem into subsequent binary segmentation tasks, where each pathway is responsible for segmenting one class from the background. Instead of a cascaded architecture for the hierarchical regions, we propose a shared encoder followed by separate decoders for each category. Residual connections employed in the model help increase performance. Experiments have been carried out on the BraTS 2020 dataset and have achieved promising results.

BraTS Multimodal Brain Tumor Segmentation Challenge

  • Bakas, Spyridon
2017 Conference Proceedings, cited 2030 times
Website

Breast cancer cell-derived microRNA-155 suppresses tumor progression via enhancing immune cell recruitment and anti-tumor function

  • Wang, Junfeng
  • Wang, Quanyi
  • Guan, Yinan
  • Sun, Yulu
  • Wang, Xiaozhi
  • Lively, Kaylie
  • Wang, Yuzhen
  • Luo, Ming
  • Kim, Julian A
  • Murphy, E Angela
2022 Journal Article, cited 0 times
Website

Breast Cancer Diagnostic System Based on MR images Using KPCA-Wavelet Transform and Support Vector Machine

  • AL-Dabagh, Mustafa Zuhaer
  • AL-Mukhtar, Firas H
IJAERS 2017 Journal Article, cited 0 times
Website

Breast Cancer Mass Detection in Mammograms Using Gray Difference Weight and MSER Detector

  • Divyashree, B. V.
  • Kumar, G. Hemantha
SN Computer Science 2021 Journal Article, cited 0 times
Website
Breast cancer is a deadly disease and one of the most prevalent cancers in women across the globe. Mammography is a widely used imaging modality for the diagnosis and screening of breast cancer. Segmentation of the breast region and mass detection are crucial steps in automatic breast cancer detection. Due to the non-uniform distribution of various tissues, it is a challenging task to analyze mammographic images with high accuracy. In this paper, background suppression and pectoral muscle removal are performed using a gradient weight map followed by gray difference weight and the fast marching method. Enhancement of the breast region is performed using contrast limited adaptive histogram equalization (CLAHE) and de-correlation stretch. Detection of breast masses is accomplished by gray difference weight and the maximally stable extremal regions (MSER) detector. Experimentation on the Mammographic Image Analysis Society (MIAS) and curated breast imaging subset of the digital database for screening mammography (CBIS-DDSM) datasets shows that the proposed method performs breast boundary segmentation and mass detection with high accuracy. Mass detection achieved accuracies of about 97.64% and 94.66% for the MIAS and CBIS-DDSM datasets, respectively. The method is simple, robust, and less affected by noise, density, shape and size, and could provide reasonable support for mammographic analysis.
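
MSER detection itself is readily available in OpenCV; the sketch below shows the detector applied to an 8-bit grayscale mammogram to produce a mask of candidate regions. Default parameters are used here, whereas the paper presumably tuned them; the file name is a placeholder.

```python
import cv2
import numpy as np

def detect_mser_regions(gray: np.ndarray) -> np.ndarray:
    """Run OpenCV's MSER detector and rasterize the detected stable regions
    (candidate masses) into a binary mask."""
    mser = cv2.MSER_create()
    regions, _ = mser.detectRegions(gray)
    mask = np.zeros_like(gray, dtype=np.uint8)
    for pts in regions:            # pts is an (N, 2) array of (x, y) points
        mask[pts[:, 1], pts[:, 0]] = 255
    return mask

# usage: mask = detect_mser_regions(cv2.imread("mammo.png", cv2.IMREAD_GRAYSCALE))
```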

Breast cancer masses classification using deep convolutional neural networks and transfer learning

  • Hassan, Shayma’a A.
  • Sayed, Mohammed S.
  • Abdalla, Mahmoud I.
  • Rashwan, Mohsen A.
Multimedia Tools and Applications 2020 Journal Article, cited 0 times
Website
With the recent advances in the deep learning field, the use of deep convolutional neural networks (DCNNs) in biomedical image processing has become very encouraging. This paper presents a new classification model for breast cancer masses based on DCNNs. We investigated the use of transfer learning from AlexNet and GoogleNet pre-trained models to suit this task. We experimentally determined the best DCNN model for accurate classification by comparing different models, which vary in design and hyper-parameters. The effectiveness of these models was demonstrated using four mammogram databases. All models were trained and tested using a mammographic dataset from the CBIS-DDSM and INbreast databases to select the best AlexNet and GoogleNet models. The performance of the two proposed models was further verified using images from the Egyptian National Cancer Institute (NCI) and the MIAS database. When tested on the CBIS-DDSM and INbreast databases, the proposed AlexNet model achieved an accuracy of 100% for both databases, while the proposed GoogleNet model achieved accuracies of 98.46% and 92.5%, respectively. When tested on NCI images and the MIAS database, AlexNet achieved an accuracy of 97.89% with an AUC of 98.32%, and an accuracy of 98.53% with an AUC of 98.95%, respectively. GoogleNet achieved an accuracy of 91.58% with an AUC of 96.5%, and an accuracy of 88.24% with an AUC of 94.65%, respectively. These results suggest that AlexNet has better performance and more robustness than GoogleNet. To the best of our knowledge, the proposed AlexNet model outperformed the latest methods, achieving the highest accuracy and AUC score and the lowest testing time reported on the CBIS-DDSM, INbreast and MIAS databases.
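
The transfer-learning recipe described here follows a standard pattern: load ImageNet weights and replace the final classification layer for the two-class task. A minimal torchvision sketch (assuming torchvision >= 0.13 for the weights API; freezing the feature extractor is one common choice, not necessarily the authors') follows.

```python
import torch.nn as nn
from torchvision import models

# Load ImageNet-pretrained AlexNet and swap the last layer for two classes
# (benign vs. malignant).
model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
for p in model.features.parameters():
    p.requires_grad = False          # fine-tune only the classifier head
model.classifier[6] = nn.Linear(4096, 2)
```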

Breast cancer molecular subtype classifier that incorporates MRI features

  • Sutton, Elizabeth J
  • Dashevsky, Brittany Z
  • Oh, Jung Hun
  • Veeraraghavan, Harini
  • Apte, Aditya P
  • Thakur, Sunitha B
  • Morris, Elizabeth A
  • Deasy, Joseph O
Journal of Magnetic Resonance Imaging 2016 Journal Article, cited 34 times
Website
Purpose: To use features extracted from magnetic resonance (MR) images and a machine-learning method to assist in differentiating breast cancer molecular subtypes. Materials and Methods: This retrospective Health Insurance Portability and Accountability Act (HIPAA)-compliant study received Institutional Review Board (IRB) approval. We identified 178 breast cancer patients between 2006-2011 with: 1) ERPR+ (n=95, 53.4%), ERPR-/HER2+ (n=35, 19.6%), or triple negative (TN, n=48, 27.0%) invasive ductal carcinoma (IDC), and 2) preoperative breast MRI at 1.5T or 3.0T. Shape, texture, and histogram-based features were extracted from each tumor contoured on pre- and three postcontrast MR images using in-house software. Clinical and pathologic features were also collected. Machine-learning-based (support vector machine) models were used to identify significant imaging features and to build models that predict IDC subtype. Leave-one-out cross-validation (LOOCV) was used to avoid model overfitting. Statistical significance was determined using the Kruskal-Wallis test. Results: Each support vector machine fit in the LOOCV process generated a model with varying features. Eleven of the top 20 ranked features were significantly different between IDC subtypes with P < 0.05. When the top nine pathologic and imaging features were incorporated, the predictive model distinguished IDC subtypes with an overall accuracy on LOOCV of 83.4%. The combined pathologic and imaging model's accuracy for each subtype was 89.2% (ERPR+), 63.6% (ERPR-/HER2+), and 82.5% (TN). When only the top nine imaging features were incorporated, the predictive model distinguished IDC subtypes with an overall accuracy on LOOCV of 71.2%. The imaging-only model's accuracy for each subtype was 69.9% (ERPR+), 62.9% (ERPR-/HER2+), and 81.0% (TN). Conclusion: We developed a machine-learning-based predictive model using features extracted from MRI that can distinguish IDC subtypes with significant predictive power.
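
Leave-one-out cross-validation with an SVM is easy to reproduce with scikit-learn. The sketch below uses random placeholder data in place of the study's features, purely to show the evaluation loop; the linear kernel and scaling step are illustrative choices.

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X: (n_patients, n_features) imaging/pathologic features; y: subtype labels.
rng = np.random.default_rng(0)
X, y = rng.random((30, 9)), rng.integers(0, 3, 30)   # placeholder data

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
print(f"LOOCV accuracy: {acc:.3f}")
```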

Breast Cancer Response Prediction in Neoadjuvant Chemotherapy Treatment Based on Texture Analysis

  • Ammar, Mohammed
  • Mahmoudi, Saïd
  • Stylianos, Drisis
Procedia Computer Science 2016 Journal Article, cited 2 times
Website
MRI is one of the most commonly used techniques for the diagnosis and treatment planning of breast cancer. The aim of this study is to show that texture-based features, such as co-occurrence matrix features extracted from MRI images, can be used to quantify the response of a tumor to treatment. To this aim, we use a dataset composed of two breast MRI examinations for 9 patients, three of whom were responders and six non-responders. The first exam was performed before the initiation of the treatment (baseline); the latter was performed after the first cycle of the chemotherapy (control). A set of texture parameters was selected and calculated for each exam: cluster shade, dissimilarity, entropy, and homogeneity. The p-values estimated for the pathologic complete responder (pCR) and non-complete responder (pNCR) patients show that homogeneity (p = 0.027) and cluster shade (p = 0.0013) are the parameters most strongly related to pathologic complete response.
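
Co-occurrence (GLCM) features of this kind are available in scikit-image. The sketch below (assuming scikit-image >= 0.19 for the graycomatrix/graycoprops names) computes dissimilarity and homogeneity directly and entropy from the normalized matrix; cluster shade, not provided by graycoprops, would be computed from the same matrix in an analogous way.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(img_8bit: np.ndarray) -> dict:
    """Texture features from a single-distance, single-angle GLCM."""
    glcm = graycomatrix(img_8bit, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    p = glcm[:, :, 0, 0]
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    return {
        "dissimilarity": graycoprops(glcm, "dissimilarity")[0, 0],
        "homogeneity": graycoprops(glcm, "homogeneity")[0, 0],
        "entropy": entropy,
    }
```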

Breast DCE-MRI segmentation for lesion detection by multi-level thresholding using student psychological based optimization

  • Patra, Dipak Kumar
  • Si, Tapas
  • Mondal, Sukumar
  • Mukherjee, Prakash
Biomedical Signal Processing and Control 2021 Journal Article, cited 0 times
Website
In recent years, the prevalence of breast cancer in women has risen dramatically. Therefore, segmentation of breast dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is a necessary task to assist the radiologist in the accurate diagnosis and detection of breast cancer in breast DCE-MRI. For image segmentation, thresholding is a simple and effective method. In breast DCE-MRI analysis for lesion detection and segmentation, radiologists agree that optimization via multi-level thresholding is important to differentiate breast lesions in dynamic DCE-MRI. In this paper, multi-level thresholding using the Student Psychology-Based Optimizer (SPBO) is proposed to segment breast DCE-MR images for lesion detection. First, MR images are denoised using an anisotropic diffusion filter, and intensity inhomogeneities (IIHs) are corrected in the preprocessing step. The preprocessed MR images are segmented using the SPBO algorithm. Finally, the lesions are extracted from the segmented images and localized in the original MR images. The proposed method is applied to 300 histologically proven sagittal T2-weighted DCE-MRI slices of 50 patients and analyzed. The proposed method is compared with the Particle Swarm Optimizer (PSO), Dragonfly Optimization (DA), Slime Mould Optimization (SMA), Multi-Verse Optimization (MVO), Grasshopper Optimization Algorithm (GOA), Hidden Markov Random Field (HMRF), Improved Markov Random Field (IMRF), and Conventional Markov Random Field (CMRF) methods. A high accuracy of 99.44%, sensitivity of 96.84%, and Dice similarity coefficient (DSC) of 93.41% are achieved using the proposed automatic segmentation method. Both quantitative and qualitative results demonstrate that the proposed method performs better than the eight compared methods.
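
A reference implementation of the SPBO optimizer is not widely available; as a stand-in that illustrates what multi-level thresholding produces, the sketch below uses scikit-image's multi-Otsu thresholds and keeps the brightest intensity band as the lesion candidate. The class count of 4 is an arbitrary example.

```python
import numpy as np
from skimage.filters import threshold_multiotsu

def multilevel_segment(img: np.ndarray, classes: int = 4):
    """Partition a denoised slice into `classes` intensity bands; return the
    full label map and a mask of the brightest band (lesion candidate)."""
    thresholds = threshold_multiotsu(img, classes=classes)
    labels = np.digitize(img, bins=thresholds)
    return labels, labels == classes - 1
```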

Breast DCE-MRI Segmentation for Lesion Detection Using Clustering with Fireworks Algorithm

  • Si, T.
  • Mukhopadhyay, A.
2020 Conference Proceedings, cited 0 times
Website

Breast Lesion Segmentation in DCE- MRI Imaging

  • Frackiewicz, Mariusz
  • Koper, Zuzanna
  • Palus, Henryk
  • Borys, Damian
  • Psiuk-Maksymowicz, Krzysztof
2018 Conference Proceedings, cited 0 times
Website
Breast cancer is one of the most common cancers in women. Typically, the disease is asymptomatic in its early stages. Breast imaging examinations allow early detection of the cancer, which is associated with increased chances of a complete cure. There are many breast imaging techniques, such as mammography (MM), ultrasound imaging (US), positron-emission tomography (PET), computed tomography (CT), and magnetic resonance imaging (MRI). These imaging techniques differ in terms of effectiveness, price, type of physical phenomenon, impact on the patient, and availability. In this paper, we focus on MRI imaging and compare three breast lesion segmentation algorithms, tested on the publicly available QIN Breast DCE-MRI database. The obtained values of the Dice and Jaccard indices indicate that the k-means algorithm gives the best segmentation.
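
The two evaluation scores used here are simple overlap ratios between binary masks: Dice = 2|A∩B| / (|A| + |B|) and Jaccard = |A∩B| / |A∪B|. A minimal NumPy sketch follows.

```python
import numpy as np

def dice_jaccard(pred: np.ndarray, truth: np.ndarray):
    """Overlap scores between two binary segmentation masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    dice = 2.0 * inter / (pred.sum() + truth.sum())
    jaccard = inter / np.logical_or(pred, truth).sum()
    return dice, jaccard
```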

Breast Lesion Segmentation in DCE-MRI using Multi-Objective Clustering with NSGA-II

  • Si, Tapas
  • Dipak Kumar Patra
  • Sukumar Mondal
  • Prakash Mukherjee
2022 Conference Paper, cited 0 times
Website
Breast cancer causes the most deaths among all types of cancer in women. Early detection and diagnosis, leading to early treatment, can save lives. Computer-assisted methodologies for breast dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) segmentation can help radiologists and doctors in the diagnosis of the disease as well as in further treatment planning. In this article, we propose a breast DCE-MRI segmentation method using a hard-clustering technique with the Non-dominated Sorting Genetic Algorithm (NSGA-II). The well-known cluster validity metrics, the DB-index and the Dunn-index, are utilized as objective functions in the NSGA-II algorithm. Noise and intensity inhomogeneities are removed from the MRI in a preprocessing step, as these artifacts affect the segmentation process. After segmentation, the lesions are separated and finally localized in the MRI. The devised method is applied to segment 10 sagittal T2-weighted fat-suppressed DCE-MRI scans of the breast. A comparative study has been conducted with the K-means algorithm, and the devised method outperforms K-means both quantitatively and qualitatively.
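
The two objective functions are standard cluster-validity indices and can be computed for any candidate labeling. A sketch follows (assuming at least two clusters); the DB-index comes from scikit-learn, while the Dunn-index, which scikit-learn does not provide, is computed directly as minimum inter-cluster separation over maximum cluster diameter.

```python
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.metrics import davies_bouldin_score

def clustering_objectives(X: np.ndarray, labels: np.ndarray):
    """NSGA-II objectives: DB-index (minimize) and Dunn-index (maximize;
    negate it to pose both as minimization problems)."""
    db = davies_bouldin_score(X, labels)
    clusters = [X[labels == k] for k in np.unique(labels)]
    min_sep = min(cdist(a, b).min() for i, a in enumerate(clusters)
                  for b in clusters[i + 1:])
    max_diam = max(cdist(c, c).max() for c in clusters)
    return db, min_sep / max_diam
```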

Breast MRI radiomics: comparison of computer- and human-extracted imaging phenotypes

  • Sutton, Elizabeth J
  • Huang, Erich P
  • Drukker, Karen
  • Burnside, Elizabeth S
  • Li, Hui
  • Net, Jose M
  • Rao, Arvind
  • Whitman, Gary J
  • Zuley, Margarita
  • Ganott, Marie
  • Bonaccio, Ermelinda
  • Giger, Maryellen L
  • Morris, Elizabeth A
European Radiology Experimental 2017 Journal Article, cited 17 times
Website
Background: In this study, we sought to investigate if computer-extracted magnetic resonance imaging (MRI) phenotypes of breast cancer could replicate human-extracted size and Breast Imaging-Reporting and Data System (BI-RADS) imaging phenotypes using MRI data from The Cancer Genome Atlas (TCGA) project of the National Cancer Institute. Methods: Our retrospective interpretation study involved analysis of Health Insurance Portability and Accountability Act-compliant breast MRI data from The Cancer Imaging Archive, an open-source database from the TCGA project. This study was exempt from institutional review board approval at Memorial Sloan Kettering Cancer Center and the need for informed consent was waived. Ninety-one pre-operative breast MRIs with verified invasive breast cancers were analysed. Three fellowship-trained breast radiologists evaluated the index cancer in each case according to size and the BI-RADS lexicon for shape, margin, and enhancement (human-extracted image phenotypes [HEIP]). Human inter-observer agreement was analysed by the intra-class correlation coefficient (ICC) for size and Krippendorff's alpha for other measurements. Quantitative MRI radiomics of computerised three-dimensional segmentations of each cancer generated computer-extracted image phenotypes (CEIP). Spearman's rank correlation coefficients were used to compare HEIP and CEIP. Results: Inter-observer agreement for HEIP varied, with the highest agreement seen for size (ICC 0.679) and shape (ICC 0.527). The computer-extracted maximum linear size replicated the human measurement with p < 10^-12. CEIP of shape, specifically sphericity and irregularity, replicated HEIP with both p values < 0.001. CEIP did not demonstrate agreement with HEIP of tumour margin or internal enhancement. Conclusions: Quantitative radiomics of breast cancer may replicate human-extracted tumour size and BI-RADS imaging phenotypes, thus enabling precision medicine.

Breast MRI Registration Using Metaheuristic Algorithms

  • Nayak, Somen
  • Si, Tapas
  • Sarkar, Achyuth
2021 Conference Paper, cited 0 times
Website
Ten percent of women worldwide suffer from breast cancer during their lives. Breast MRI registration is an important task for aligning pre- and post-contrast MR images for the diagnosis and classification of cancer into benign and malignant types using pharmacokinetic analysis. It is also essential to align images taken at various time intervals to isolate the lesion across those intervals, and registration is likewise useful for monitoring various cancer therapies. The emphasis of image registration algorithms has also shifted from control-point-based semi-automated techniques to sophisticated voxel-based automated techniques that use mutual information as a similarity measure. In this manuscript, breast MRI registration using the Multi-Verse Optimization (MVO) and Student Psychology-Based Optimization (SPBO) metaheuristic algorithms is proposed. We considered 40 pairs of pre- and post-contrast breast MR images and registered them using the MVO and SPBO algorithms. The results of the SPBO-based breast MRI registration method are compared with those of the MVO-based method, and the experimental results show that the SPBO-based registration method statistically outperforms the MVO-based method in registering breast MR images.

Bronchus Segmentation and Classification by Neural Networks and Linear Programming

  • Zhao, Tianyi
  • Yin, Zhaozheng
  • Wang, Jiao
  • Gao, Dashan
  • Chen, Yunqiang
  • Mao, Yunxiang
2019 Book Section, cited 0 times
Airway segmentation is a critical problem for lung disease analysis. However, building a complete airway tree is still a challenging problem because of the complex tree structure, and tracing the deep bronchi is not trivial in CT images because there are numerous small airways with various directions. In this paper, we develop a two-stage 2D+3D neural network and a linear-programming-based tracking algorithm for airway segmentation. Furthermore, we propose a bronchus classification algorithm based on the segmentation results. Our algorithm is evaluated on a dataset collected from four sources. We achieved a Dice coefficient of 0.94 and an F1 score of 0.86 under a centerline-based evaluation metric, compared to ground truth manually labeled by our radiologists.

Building a X-ray Database for Mammography on Vietnamese Patients and automatic Detecting ROI Using Mask-RCNN

  • Thang, Nguyen Duc
  • Dung, Nguyen Viet
  • Duc, Tran Vinh
  • Nguyen, Anh
  • Nguyen, Quang H.
  • Anh, Nguyen Tu
  • Cuong, Nguyen Ngoc
  • Linh, Le Tuan
  • Hanh, Bui My
  • Phu, Phan Huy
  • Phuong, Nguyen Hoang
2021 Book Section, cited 0 times
This paper describes the method of building an X-ray database for mammography on Vietnamese patients, collected at Hanoi Medical University Hospital. The dataset has 4664 DICOM images corresponding to 1161 patients, uniformly distributed across BI-RADS categories 0 to 5. The paper also presents a method for detecting regions of interest (ROI) in mammograms based on the Mask R-CNN architecture. ROI detection achieves mAP@0.5 = 0.8109, and the accuracy of BI-RADS level classification is 58.44%.
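
Fine-tuning Mask R-CNN for a new detection task usually follows torchvision's standard recipe of swapping the box and mask heads for the new label set. The sketch below does this for a hypothetical background-plus-ROI setup (two classes); the paper's exact configuration is not given in the abstract, and torchvision >= 0.13 is assumed for the weights API.

```python
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

num_classes = 2  # background + suspicious ROI (illustrative label set)
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")

# Replace the box classification head.
in_feat = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_feat, num_classes)

# Replace the mask prediction head.
in_feat_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
model.roi_heads.mask_predictor = MaskRCNNPredictor(in_feat_mask, 256, num_classes)
```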

C-NMC: B-lineage acute lymphoblastic leukaemia: A blood cancer dataset

  • Gupta, Ritu
  • Gehlot, Shiv
  • Gupta, Anubha
Medical engineering & physics 2022 Journal Article, cited 0 times
Website
Development of computer-aided cancer diagnostic tools is an active research area owing to advancements in the deep-learning domain. Such technological solutions provide affordable and easily deployable diagnostic tools. Leukaemia, or blood cancer, is one of the leading cancers, causing more than 0.3 million deaths every year. In order to aid the development of such AI-enabled tools, we collected and curated a microscopic image dataset, namely C-NMC, of more than 15000 very-high-resolution cancer cell images of B-lineage acute lymphoblastic leukaemia (B-ALL). The dataset is prepared at the subject level and contains images of both healthy subjects and cancer patients. So far, this is the largest curated dataset on B-ALL cancer in the public domain. C-NMC is available at The Cancer Imaging Archive (TCIA), USA and can be helpful to the research community worldwide for the development of B-ALL cancer diagnostic tools. This dataset was utilized in an international medical imaging challenge held at the ISBI 2019 conference in Venice, Italy. In this paper, we present a detailed description and the challenges of this dataset. We also present benchmarking results of all the methods applied so far on this dataset.

A CADe system for nodule detection in thoracic CT images based on artificial neural network

  • Liu, Xinglong
  • Hou, Fei
  • Qin, Hong
  • Hao, Aimin
Science China Information Sciences 2017 Journal Article, cited 11 times
Website

Cancer as a Model System for Testing Metabolic Scaling Theory

  • Brummer, Alexander B.
  • Savage, Van M.
Frontiers in Ecology and Evolution 2021 Journal Article, cited 0 times
Website
Biological allometries, such as the scaling of metabolism to mass, are hypothesized to result from natural selection to maximize how vascular networks fill space yet minimize internal transport distances and resistance to blood flow. Metabolic scaling theory argues two guiding principles—conservation of fluid flow and space-filling fractal distributions—describe a diversity of biological networks and predict how the geometry of these networks influences organismal metabolism. Yet, mostly absent from past efforts are studies that directly, and independently, measure metabolic rate from respiration and vascular architecture for the same organ, organism, or tissue. Lack of these measures may lead to inconsistent results and conclusions about metabolism, growth, and allometric scaling. We present simultaneous and consistent measurements of metabolic scaling exponents from clinical images of lung cancer, serving as a first-of-its-kind test of metabolic scaling theory, and identifying potential quantitative imaging biomarkers indicative of tumor growth. We analyze data for 535 clinical PET-CT scans of patients with non-small cell lung carcinoma to establish the presence of metabolic scaling between tumor metabolism and tumor volume. Furthermore, we use computer vision and mathematical modeling to examine predictions of metabolic scaling based on the branching geometry of the tumor-supplying blood vessel networks in a subset of 56 patients diagnosed with stage II-IV lung cancer. Examination of the scaling of maximum standard uptake value with metabolic tumor volume, and metabolic tumor volume with gross tumor volume, yield metabolic scaling exponents of 0.64 (0.20) and 0.70 (0.17), respectively. We compare these to the value of 0.85 (0.06) derived from the geometric scaling of the tumor-supplying vasculature. These results: (1) inform energetic models of growth and development for tumor forecasting; (2) identify imaging biomarkers in vascular geometry related to blood volume and flow; and (3) highlight unique opportunities to develop and test the metabolic scaling theory of ecology in tumors transitioning from avascular to vascular geometries.
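
The central quantity throughout this analysis is a scaling exponent, i.e., the slope of a log-log regression of a metabolic measure against volume. A minimal sketch of such a fit (ordinary least squares via numpy.polyfit; the variable names and the quoted reference value are illustrative) follows.

```python
import numpy as np

def scaling_exponent(volume: np.ndarray, metabolic_rate: np.ndarray):
    """Fit B = B0 * V^k on log-log axes; the slope k is the scaling exponent
    to compare against theoretical predictions (e.g., ~0.85)."""
    k, log_b0 = np.polyfit(np.log(volume), np.log(metabolic_rate), deg=1)
    return k, np.exp(log_b0)
```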

Cancer Digital Slide Archive: an informatics resource to support integrated in silico analysis of TCGA pathology data

  • Gutman, David A
  • Cobb, Jake
  • Somanna, Dhananjaya
  • Park, Yuna
  • Wang, Fusheng
  • Kurc, Tahsin
  • Saltz, Joel H
  • Brat, Daniel J
  • Cooper, Lee AD
  • Kong, Jun
Journal of the American Medical Informatics Association 2013 Journal Article, cited 70 times
Website
BACKGROUND: The integration and visualization of multimodal datasets is a common challenge in biomedical informatics. Several recent studies of The Cancer Genome Atlas (TCGA) data have illustrated important relationships between morphology observed in whole-slide images, outcome, and genetic events. The pairing of genomics and rich clinical descriptions with whole-slide imaging provided by TCGA presents a unique opportunity to perform these correlative studies. However, better tools are needed to integrate the vast and disparate data types. OBJECTIVE: To build an integrated web-based platform supporting whole-slide pathology image visualization and data integration. MATERIALS AND METHODS: All images and genomic data were directly obtained from the TCGA and National Cancer Institute (NCI) websites. RESULTS: The Cancer Digital Slide Archive (CDSA) produced is accessible to the public (http://cancer.digitalslidearchive.net) and currently hosts more than 20,000 whole-slide images from 22 cancer types. DISCUSSION: The capabilities of CDSA are demonstrated using TCGA datasets to integrate pathology imaging with associated clinical, genomic and MRI measurements in glioblastomas and can be extended to other tumor types. CDSA also allows URL-based sharing of whole-slide images, and has preliminary support for directly sharing regions of interest and other annotations. Images can also be selected on the basis of other metadata, such as mutational profile, patient age, and other relevant characteristics. CONCLUSIONS: With the increasing availability of whole-slide scanners, analysis of digitized pathology images will become increasingly important in linking morphologic observations with genomic and clinical endpoints.

CARes‐UNet: Content‐Aware residual UNet for lesion segmentation of COVID‐19 from chest CT images

  • Xu, Xinhua
  • Wen, Yuhang
  • Zhao, Lu
  • Zhang, Yi
  • Zhao, Youjun
  • Tang, Zixuan
  • Yang, Ziduo
  • Chen, Calvin Yu‐Chian
Medical Physics 2021 Journal Article, cited 0 times
Website

A Cascaded Deep Learning-Based Artificial Intelligence Algorithm for Automated Lesion Detection and Classification on Biparametric Prostate Magnetic Resonance Imaging

  • Mehralivand, Sherif
  • Yang, Dong
  • Harmon, Stephanie A
  • Xu, Daguang
  • Xu, Ziyue
  • Roth, Holger
  • Masoudi, Samira
  • Sanford, Thomas H
  • Kesani, Deepak
  • Lay, Nathan S
  • Merino, Maria J
  • Wood, Bradford J
  • Pinto, Peter A
  • Choyke, Peter L
  • Turkbey, Baris
Acad Radiol 2021 Journal Article, cited 0 times
Website
RATIONALE AND OBJECTIVES: Prostate MRI improves detection of clinically significant prostate cancer; however, its diagnostic performance has wide variation. Artificial intelligence (AI) has the potential to assist radiologists in the detection and classification of prostatic lesions. Herein, we aimed to develop and test a cascaded deep learning detection and classification system trained on biparametric prostate MRI using PI-RADS for assisting radiologists during prostate MRI read-out. MATERIALS AND METHODS: T2-weighted and diffusion-weighted (ADC maps, high b value DWI) MRI scans obtained at 3 Tesla from two institutions (n = 1043 in-house and n = 347 Prostate-X, respectively), acquired between 2015 and 2019, were used for model training, validation, and testing. All scans were retrospectively reevaluated by one radiologist. Suspicious lesions were contoured and assigned a PI-RADS category. A 3D U-Net-based deep neural network was used to train an algorithm for automated detection and segmentation of prostate MRI lesions. Two 3D residual neural networks were used for a 4-class classification task to predict PI-RADS categories 2 to 5 and BPH. Training and validation used 89% (n = 1290 scans) of the data with 5-fold cross-validation; the remaining 11% (n = 150 scans) were used for independent testing. Algorithm performance at the lesion level was assessed using sensitivities, positive predictive values (PPV), false discovery rates (FDR), classification accuracy, and Dice similarity coefficient (DSC). Additional analysis was conducted to compare the AI algorithm's lesion detection performance with targeted biopsy results. RESULTS: Median age was 66 years (IQR = 60-71) and PSA 6.7 ng/ml (IQR = 4.7-9.9) in the in-house cohort. In the independent test set, the algorithm correctly detected 111 of 198 lesions, leading to 56.1% (49.3%-62.6%) sensitivity. PPV was 62.7% (95% CI 54.7%-70.7%) with an FDR of 37.3% (95% CI 29.3%-45.3%). Of 79 true positive lesions, 82.3% were tumor-positive at targeted biopsy, whereas of 57 false negative lesions, 50.9% were benign at targeted biopsy. Median DSC for lesion segmentation was 0.359. Overall PI-RADS classification accuracy was 30.8% (95% CI 24.6%-37.8%). CONCLUSION: Our cascaded U-Net and residual network architecture can detect and classify cancer-suspicious lesions on prostate MRI with good detection and reasonable classification performance.

A cascaded fully convolutional network framework for dilated pancreatic duct segmentation

  • Shen, C.
  • Roth, H. R.
  • Hayashi, Y.
  • Oda, M.
  • Miyamoto, T.
  • Sato, G.
  • Mori, K.
Int J Comput Assist Radiol Surg 2022 Journal Article, cited 1 times
Website
PURPOSE: Pancreatic duct dilation can be considered an early sign of pancreatic ductal adenocarcinoma (PDAC). However, there is little existing research focused on dilated pancreatic duct segmentation as a potential screening tool for people without PDAC. Dilated pancreatic duct segmentation is difficult due to the lack of readily available labeled data and strong voxel imbalance between the pancreatic duct region and other regions. To overcome these challenges, we propose a two-step approach for dilated pancreatic duct segmentation from abdominal computed tomography (CT) volumes using fully convolutional networks (FCNs). METHODS: Our framework segments the pancreatic duct in a cascaded manner. The pancreatic duct occupies a tiny portion of abdominal CT volumes. Therefore, to concentrate on the pancreas regions, we use a public pancreas dataset to train an FCN to generate an ROI covering the pancreas and use a 3D U-Net-like FCN for coarse pancreas segmentation. To further improve the dilated pancreatic duct segmentation, we deploy a skip connection on each corresponding resolution level and an attention mechanism in the bottleneck layer. Moreover, we introduce a combined loss function based on Dice loss and Focal loss. Random data augmentation is adopted throughout the experiments to improve the generalizability of the model. RESULTS: We manually created a dilated pancreatic duct dataset with semi-automated annotation tools. Experimental results showed that our proposed framework is practical for dilated pancreatic duct segmentation. The average Dice score and sensitivity were 49.9% and 51.9%, respectively. These results show the potential of our approach as a clinical screening tool. CONCLUSIONS: We investigate an automated framework for dilated pancreatic duct segmentation. The cascade strategy effectively improved the segmentation performance of the pancreatic duct. Our modifications to the FCNs together with random data augmentation and the proposed combined loss function facilitate automated segmentation.
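
The combined loss can be sketched compactly: a soft-Dice term addresses the extreme foreground/background voxel imbalance while a focal term down-weights easy background voxels. The PyTorch sketch below uses illustrative weights and hyper-parameters; the paper's exact formulation is not given in the abstract.

```python
import torch
import torch.nn.functional as F

def dice_focal_loss(logits, target, gamma: float = 2.0,
                    w_dice: float = 0.5, eps: float = 1e-6):
    """Soft Dice on the foreground probability plus a focal term; a common
    recipe for strongly imbalanced segmentation (weights are illustrative)."""
    p = torch.sigmoid(logits)
    inter = (p * target).sum()
    dice = 1 - (2 * inter + eps) / (p.sum() + target.sum() + eps)
    bce = F.binary_cross_entropy_with_logits(logits, target, reduction="none")
    p_t = p * target + (1 - p) * (1 - target)
    focal = ((1 - p_t) ** gamma * bce).mean()
    return w_dice * dice + (1 - w_dice) * focal
```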

A Cascaded Neural Network for Staging in Non-Small Cell Lung Cancer Using Pre-Treatment CT

  • Choi, J.
  • Cho, H. H.
  • Kwon, J.
  • Lee, H. Y.
  • Park, H.
Diagnostics (Basel) 2021 Journal Article, cited 0 times
Website
BACKGROUND AND AIM: Tumor staging in non-small cell lung cancer (NSCLC) is important for treatment and prognosis. Staging involves expert interpretation of imaging, which we aim to automate with deep learning (DL). We propose a cascaded DL method comprising two steps to classify between early- and advanced-stage NSCLC using pretreatment computed tomography. METHODS: We developed and tested a DL model to classify between early and advanced stages using training (n = 90), validation (n = 8), and two test (n = 37, n = 26) cohorts obtained from the public domain. The first step adopted an autoencoder network to compress the imaging data into latent variables, and the second step used the latent variables to classify the stages using a convolutional neural network (CNN). Other DL- and machine-learning-based approaches were compared. RESULTS: Our model was tested on two test cohorts, CPTAC and TCGA. In CPTAC, our model achieved an accuracy of 0.8649, sensitivity of 0.8000, specificity of 0.9412, and area under the curve (AUC) of 0.8206, compared to other approaches (AUC 0.6824-0.7206), for classifying between early and advanced stages. In TCGA, our model achieved an accuracy of 0.8077, sensitivity of 0.7692, specificity of 0.8462, and AUC of 0.8343. CONCLUSION: Our cascaded DL model for classifying NSCLC patients into early and advanced stages showed promising results and could help future NSCLC research.

Cascaded Training Pipeline for 3D Brain Tumor Segmentation

  • Luu, Minh Sao Khue
  • Pavlovskiy, Evgeniy
2022 Conference Paper, cited 0 times
Website
We apply a cascaded training pipeline for the 3D U-Net to segment each brain tumor sub-region separately and chronologically. Firstly, the volumetric data of four modalities are used to segment the whole tumor in the first round of training. Then, our model combines the whole tumor segmentation with the mpMRI images to segment the tumor core. Finally, the network uses whole tumor and tumor core segmentations to predict enhancing tumor regions. Unlike the standard 3D U-Net, we use Group Normalization and Randomized Leaky Rectified Linear Unit in the encoding and decoding blocks. We achieved dice scores on the validation set of 88.84, 81.97, and 75.02 for whole tumor, tumor core, and enhancing tumor, respectively.

CASED: Curriculum Adaptive Sampling for Extreme Data Imbalance

  • Jesson, Andrew
  • Guizard, Nicolas
  • Ghalehjegh, Sina Hamidi
  • Goblot, Damien
  • Soudan, Florian
  • Chapados, Nicolas
2017 Conference Proceedings, cited 18 times
Website
We introduce CASED, a novel curriculum sampling algorithm that facilitates the optimization of deep learning segmentation or detection models on data sets with extreme class imbalance. We evaluate the CASED learning framework on the task of lung nodule detection in chest CT. In contrast to two-stage solutions, wherein nodule candidates are first proposed by a segmentation model and refined by a second detection stage, CASED improves the training of deep nodule segmentation models (e.g. UNet) to the point where state of the art results are achieved using only a trivial detection stage. CASED improves the optimization of deep segmentation models by allowing them to first learn how to distinguish nodules from their immediate surroundings, while continuously adding a greater proportion of difficult-to-classify global context, until uniformly sampling from the empirical data distribution. Using CASED during training yields a minimalist proposal to the lung nodule detection problem that tops the LUNA16 nodule detection benchmark with an average sensitivity score of 88.35%. Furthermore, we find that models trained using CASED are robust to nodule annotation quality by showing that comparable results can be achieved when only a point and radius for each ground truth nodule are provided during training. Finally, the CASED learning framework makes no assumptions with regard to imaging modality or segmentation target and should generalize to other medical imaging problems where class imbalance is a persistent problem.
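
The curriculum idea can be illustrated with a simple sampler: early epochs draw mostly nodule-centred patches, and the share of patches drawn uniformly from the empirical distribution grows over training. The schedule below is a linear placeholder for illustration; the paper's actual sampling curve may differ.

```python
import numpy as np

def cased_draw(epoch, n_epochs, nodule_patches, all_patches, rng=None):
    """Draw one training patch: with probability p_uniform sample uniformly
    from all patches (global context), otherwise from nodule-centred ones."""
    rng = rng or np.random.default_rng()
    p_uniform = min(1.0, epoch / max(1, n_epochs - 1))  # linear ramp, 0 -> 1
    pool = all_patches if rng.random() < p_uniform else nodule_patches
    return pool[rng.integers(len(pool))]
```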

Challenges in predicting glioma survival time in multi-modal deep networks

  • Abdulrhman Aljouie
  • Yunzhe Xue
  • Meiyan Xie
  • Uman Roshan
2020 Conference Paper, cited 0 times
Website
Prediction of cancer survival time is of considerable interest in medicine as it leads to better patient care and reduces health care costs. In this study, we propose a multi-path multimodal neural network that predicts Glioblastoma Multiforme (GBM) survival time at a 14-month threshold. We obtained images, gene expression, and SNP variants from whole-exome sequences, all from The Cancer Genome Atlas portal, for a total of 126 patients. We perform a 10-fold cross-validation experiment on each of the data sources separately as well as on the model with all data combined. From post-contrast T1 MRI data, we used 3D scans and 2D slices that we selected manually to show the tumor region. We find that the model with 2D MRI slices and genomic data combined gives the highest accuracies over individual sources, but by a modest margin. We see considerable variation in accuracies across the 10 folds, and our model achieves 100% accuracy on the training data but lags behind in test accuracy. With dropout, our training accuracy falls considerably. This shows that predicting glioma survival time is a challenging task, though it is unclear whether this is also a symptom of insufficient data. A clear direction here is to augment our data, which we plan to explore with generative models. Overall, we present a novel multimodal network that incorporates SNP, gene expression, and MRI image data for glioma survival time prediction.

Characterization of Pulmonary Nodules Based on Features of Margin Sharpness and Texture

  • Ferreira, José Raniery
  • Oliveira, Marcelo Costa
  • de Azevedo-Marques, Paulo Mazzoncini
2017 Journal Article, cited 1 times
Website

Chest CT Cinematic Rendering of SARS-CoV-2 Pneumonia

  • Necker, F. N.
  • Scholz, M.
Radiology 2021 Journal Article, cited 0 times
Website
The SARS-CoV-2 pandemic has spread rapidly throughout the world since its first reported infection in Wuhan, China. Despite the introduction of vaccines for this important viral infection, there remains a significant public health risk to the population as this virus continues to mutate. While it remains unknown whether these new mutations will evade the current vaccines, it is possible that we may be living with this infection for many years to come as it becomes endemic. Cinematic rendering of CT images is a new way to show the three-dimensionality of the various densities contained in volumetric CT data. We show an example of PCR-positive SARS-CoV-2 pneumonia using this new technique (Figure; Movie [online]). This case is from the RSNA-RICORD dataset (1, 2). It shows the typical presentation of SARS-CoV-2 pneumonia, with clearly visible ground-glass subpleural opacities (Figure). The higher attenuation of lung tissue filled with fluid results in these areas appearing patchy or spongy.

Classificação Multirrótulo na Anotação Automática de Nódulo Pulmonar Solitário [Multi-label Classification for the Automatic Annotation of Solitary Pulmonary Nodules]

  • Villani, Leonardo
  • Prati, Ronaldo Cristiano
2012 Conference Proceedings, cited 0 times

Classification of benign and malignant lung nodules using image processing techniques

  • Vas, Moffy Crispin
  • Dessai, Amita
International Research Journal of Engineering and Technology 2017 Journal Article, cited 0 times
Website
Keywords: lung, CT, feature extraction, segmentation, computed tomography, lung cancer, malignant, Haralick features, ANN, Haar wavelet.

Classification of Benign and Malignant Tumors of Lung Using Bag of Features

  • Suzan, A Melody
  • Prathibha, G
Journal of Scientific & Engineering Research 2017 Journal Article, cited 0 times
Website
This paper presents a novel approach for feature extraction and classification of lung cancer as benign or malignant. Classification of lung cancer is based on a codebook generated using the bag-of-features algorithm. In this paper, 300 regions of interest (ROIs) from lung cancer images from The Cancer Imaging Archive (TCIA), sponsored by SPIE, are used. In this approach, the scale-invariant feature transform (SIFT) is used for feature extraction, and the resulting coefficients are quantized with a bag of features into a predefined codebook. This codebook is given as input to a KNN classifier. The overall performance of the system in classifying lung tumors is evaluated using the receiver operating characteristic (ROC) curve. The area under the curve (AUC) is Az = 0.95.
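
The bag-of-features pipeline described here has three parts: extract SIFT descriptors per ROI, quantize them against a learned codebook, and classify the resulting visual-word histograms. A minimal sketch follows (OpenCV >= 4.4 for SIFT; the codebook size and classifier settings are illustrative, not the paper's).

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

sift = cv2.SIFT_create()

def sift_descriptors(gray: np.ndarray) -> np.ndarray:
    """128-dimensional SIFT descriptors for one grayscale ROI."""
    _, desc = sift.detectAndCompute(gray, None)
    return desc if desc is not None else np.empty((0, 128), np.float32)

def bof_histogram(desc: np.ndarray, codebook: KMeans) -> np.ndarray:
    """Quantize descriptors against the codebook and return a normalized
    visual-word histogram for one ROI."""
    hist = np.bincount(codebook.predict(desc), minlength=codebook.n_clusters)
    return hist / max(1, hist.sum())

# Training outline: stack descriptors from all training ROIs, fit
# KMeans(n_clusters=K) as the codebook, convert every ROI to a histogram,
# then fit KNeighborsClassifier on the histograms and their labels.
```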

Classification of brain tumor isocitrate dehydrogenase status using MRI and deep learning

  • Nalawade, S.
  • Murugesan, G. K.
  • Vejdani-Jahromi, M.
  • Fisicaro, R. A.
  • Bangalore Yogananda, C. G.
  • Wagner, B.
  • Mickey, B.
  • Maher, E.
  • Pinho, M. C.
  • Fei, B.
  • Madhuranthakam, A. J.
  • Maldjian, J. A.
J Med Imaging (Bellingham) 2019 Journal Article, cited 0 times
Website
Isocitrate dehydrogenase (IDH) mutation status is an important marker in glioma diagnosis and therapy. We propose an automated pipeline for noninvasively predicting IDH status using deep learning and T2-weighted (T2w) magnetic resonance (MR) images with minimal preprocessing (N4 bias correction and normalization to zero mean and unit variance). T2w MR images and genomic data were obtained from The Cancer Imaging Archive dataset for 260 subjects (120 high-grade and 140 low-grade gliomas). A fully automated two-dimensional densely connected model was trained to classify IDH mutation status on 208 subjects and tested on another held-out set of 52 subjects using fivefold cross validation. Data leakage was avoided by ensuring subject separation during the slice-wise randomization. Mean classification accuracy of 90.5% was achieved for each axial slice in predicting the three classes of no tumor, IDH mutated, and IDH wild type. Test accuracy of 83.8% was achieved in predicting IDH mutation status for individual subjects on the test dataset of 52 subjects. We demonstrate a deep learning method to predict IDH mutation status using T2w MRI alone. Radiologic imaging studies using deep learning methods must address data leakage (subject duplication) in the randomization process to avoid upward bias in the reported classification accuracy.

Classification of Clinically Significant Prostate Cancer on Multi-Parametric MRI: A Validation Study Comparing Deep Learning and Radiomics

  • Castillo T., Jose M.
  • Arif, Muhammad
  • Starmans, Martijn P. A.
  • Niessen, Wiro J.
  • Bangma, Chris H.
  • Schoots, Ivo G.
  • Veenland, Jifke F.
Cancers 2022 Journal Article, cited 0 times
Website

Classification of Covid-19 Coronavirus, Pneumonia and Healthy Lungs in CT Scans Using Q-Deformed Entropy and Deep Learning Features

  • Hasan, Ali M.
  • Al-Jawad, Mohammed M.
  • Jalab, Hamid A.
  • Shaiba, Hadil
  • Ibrahim, Rabha W.
  • Al-Shamasneh, Ala’a R.
Entropy 2020 Journal Article, cited 0 times
Website
Many health systems around the world have collapsed due to limited capacity and a dramatic increase in suspected COVID-19 cases. What has emerged is the need for an efficient, quick and accurate method to mitigate the overloading of radiologists' efforts to diagnose the suspected cases. This study combines deep-learning-extracted features with handcrafted Q-deformed entropy features for discriminating between COVID-19 coronavirus, pneumonia and healthy computed tomography (CT) lung scans. In this study, preprocessing is used to reduce the effect of intensity variations between CT slices, and histogram thresholding is used to isolate the background of the CT lung scan. Each CT lung scan then undergoes feature extraction involving deep learning and a Q-deformed entropy algorithm. The obtained features are classified using a long short-term memory (LSTM) neural network classifier. Combining all extracted features significantly improves the performance of the LSTM network to precisely discriminate between COVID-19, pneumonia and healthy cases. The maximum achieved accuracy for classifying the collected dataset, comprising 321 patients, is 99.68%.
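
The paper's exact Q-deformed entropy definition is not reproduced in the abstract; as an illustration of an entropy-type handcrafted feature in the same family, the sketch below computes a Tsallis-type q-entropy of a slice's intensity histogram. The choice q = 1.5 and the bin count are arbitrary examples.

```python
import numpy as np

def q_entropy(image: np.ndarray, q: float = 1.5, bins: int = 256) -> float:
    """Tsallis-type q-entropy of the intensity histogram,
    S_q = (1 - sum_i p_i^q) / (q - 1); approaches Shannon entropy as q -> 1."""
    hist, _ = np.histogram(image, bins=bins)
    p = hist[hist > 0] / hist.sum()
    return (1.0 - np.sum(p ** q)) / (q - 1.0)
```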

Classification of COVID-19 in chest radiographs: assessing the impact of imaging parameters using clinical and simulated images

  • Fricks, Rafael
  • Abadi, Ehsan
  • Ria, Francesco
  • Samei, Ehsan
  • Drukker, Karen
  • Mazurowski, Maciej A.
2021 Conference Paper, cited 1 times
Website
As computer-aided diagnostics develop to address new challenges in medical imaging, including emerging diseases such as COVID-19, initial development is hampered by the availability of imaging data. Deep learning algorithms are particularly notorious for performance that tends to improve proportionally to the amount of available data. Simulated images, as available through advanced virtual trials, may present an alternative in data-constrained applications. We begin with our previously trained COVID-19 x-ray classification model (denoted CVX) that leveraged additional training with existing pre-pandemic chest radiographs to improve classification performance on a set of COVID-19 chest radiographs. The CVX model achieves demonstrably better performance on clinical images compared to an equivalent model that applies standard transfer learning from ImageNet weights. The higher-performing CVX model is then shown to generalize effectively to a set of simulated COVID-19 images, both in quantitative comparisons of AUCs between the clinical and simulated image sets and in a qualitative sense, where saliency-map patterns are consistent when compared between sets. We then stratify the classification results in simulated images to examine dependencies on imaging parameters when patient features are held constant. Simulated images show promise in optimizing imaging parameters for accurate classification in data-constrained applications.

Classification of CT pulmonary opacities as perifissural nodules: reader variability

  • Schreuder, Anton
  • van Ginneken, Bram
  • Scholten, Ernst T
  • Jacobs, Colin
  • Prokop, Mathias
  • Sverzellati, Nicola
  • Desai, Sujal R
  • Devaraj, Anand
  • Schaefer-Prokop, Cornelia M
Radiology 2018 Journal Article, cited 3 times
Website

Classification of lung adenocarcinoma transcriptome subtypes from pathological images using deep convolutional networks

  • Antonio, Victor Andrew A
  • Ono, Naoaki
  • Saito, Akira
  • Sato, Tetsuo
  • Altaf-Ul-Amin, Md
  • Kanaya, Shigehiko
International Journal of Computer Assisted Radiology and Surgery 2018 Journal Article, cited 0 times
Website
PURPOSE: Convolutional neural networks have rapidly become popular for image recognition and image analysis because of their powerful potential. In this paper, we developed a method for classifying subtypes of lung adenocarcinoma from pathological images using a neural network that can evaluate phenotypic features over a wide area to account for cellular distributions. METHODS: To recognize tumor types, we need not only detailed features of cells but also the statistical distribution of the different cell types. Variants of autoencoders are implemented as building blocks of the pre-trained convolutional layers of the networks. A sparse deep autoencoder that minimizes local information entropy on the encoding layer is then proposed and applied to images of size [Formula: see text]. We applied this model to extract features from pathological images of lung adenocarcinoma, which comprises three transcriptome subtypes previously defined by The Cancer Genome Atlas network. Since tumor tissue is composed of heterogeneous cell populations, recognition of tumor transcriptome subtypes requires more information than the local pattern of cells. The parameters extracted using this approach are then used in multiple reduction stages to perform classification on larger images. RESULTS: We demonstrate that these networks successfully recognize morphological features of lung adenocarcinoma. We also performed classification and reconstruction experiments to compare the outputs of the variants. The results show that a larger input image covering a sufficient area of the tissue is required to recognize transcriptome subtypes. The sparse autoencoder network with [Formula: see text] input provides a 98.9% classification accuracy. CONCLUSION: This study shows the potential of autoencoders as a feature-extraction paradigm and paves the way for a whole-slide image analysis tool to predict molecular subtypes of tumors from pathological features.

Classification of Lung CT Images using BRISK Features

  • Sambasivarao, B.
  • Prathiba, G.
International Journal of Engineering and Advanced Technology (IJEAT) 2019 Journal Article, cited 0 times
Website
Lung cancer is a major cause of death in humans. To increase survival rates, early detection of cancer is required. Lung tumors are mainly of two types: cancerous (malignant) and non-cancerous (benign). In this paper, work is done on lung images obtained from the Society of Photographic Instrumentation Engineers (SPIE) database, which contains normal, benign and malignant images. In this work, 300 images from the database are used, of which 150 are benign and 150 are malignant. Feature points of lung tumor images are extracted using Binary Robust Invariant Scalable Keypoints (BRISK). BRISK achieves matching quality comparable to state-of-the-art algorithms at much lower computation time. BRISK divides the pairs of pixels surrounding a keypoint into two subsets: short-distance and long-distance pairs. The orientation of the feature point is calculated from the local intensity gradients of the long-distance pairs, and the short-distance pairs are rotated using this orientation. These BRISK features are used by a classifier to label lung tumors as either benign or malignant, and performance is evaluated by calculating the accuracy.
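
OpenCV ships a BRISK implementation; a minimal sketch of extracting descriptors and pooling them into one fixed-length vector per image for a downstream classifier (the file name and the mean-pooling step are assumptions, not the paper's exact pipeline):

```python
import cv2
import numpy as np

# Hypothetical input slice; BRISK operates on 8-bit grayscale images.
img = cv2.imread("lung_ct_slice.png", cv2.IMREAD_GRAYSCALE)

brisk = cv2.BRISK_create()
keypoints, descriptors = brisk.detectAndCompute(img, None)

# Pool the per-keypoint 64-byte binary descriptors into one vector per image.
feature_vector = (descriptors.mean(axis=0)
                  if descriptors is not None else np.zeros(64))
```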

Classification of lung nodule malignancy in computed tomography imaging utilising generative adversarial networks and semi-supervised transfer learning

  • Apostolopoulos, Ioannis D.
  • Papathanasiou, Nikolaos D.
  • Panayiotakis, George S.
Biocybernetics and Biomedical Engineering 2021 Journal Article, cited 2 times
Website
The pulmonary nodules' malignancy rating is commonly confined to patient follow-up; the nodule's activity is estimated with a Positron Emission Tomography (PET) system or biopsy. However, these strategies are usually employed after the initial detection of malignant nodules on a Computed Tomography (CT) scan. In this study, a deep learning methodology is proposed to address the challenge of automatically characterising Solitary Pulmonary Nodules (SPNs) detected in CT scans. The research methodology is based on Convolutional Neural Networks (CNNs), which have proven to be excellent automatic feature extractors for medical images. The publicly available Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) CT dataset and a small CT scan dataset derived from a PET/CT system are considered the classification targets. New, realistic nodule representations are generated employing Deep Convolutional Generative Adversarial Networks (DC-GANs) to circumvent the shortage of large-scale data needed to train robust CNNs. Besides, a hierarchical CNN called Feature Fusion VGG19 (FF-VGG19) was developed to enhance the feature extraction of the CNN proposed by the Visual Geometry Group (VGG). Moreover, the generated nodule images are separated into two classes using a semi-supervised approach, called self-training, to tackle weak labelling due to DC-GAN inefficiencies. The DC-GAN can generate realistic SPNs: the experts could distinguish only 23% of the synthetic nodule images. As a result, the classification accuracy of FF-VGG19 on the LIDC-IDRI dataset increases by 7%, reaching 92.07%, while the classification accuracy on the CT dataset increases by 5%, reaching 84.3%.
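
The DC-GAN generator is not specified beyond the standard architecture; a minimal generator for 64x64 grayscale nodule patches in the usual DC-GAN style might look like this sketch:

```python
import torch
import torch.nn as nn

# Maps a 100-D latent vector to a 64x64 single-channel nodule image.
generator = nn.Sequential(
    nn.ConvTranspose2d(100, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(True),
    nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),
    nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),
    nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(True),
    nn.ConvTranspose2d(32, 1, 4, 2, 1), nn.Tanh(),
)
z = torch.randn(16, 100, 1, 1)
fake_nodules = generator(z)  # shape (16, 1, 64, 64)
```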

Classification of Lung Nodules Based on Deep Residual Networks and Migration Learning

  • Wu, Panpan
  • Sun, Xuanchao
  • Zhao, Ziping
  • Wang, Haishuai
  • Pan, Shirui
  • Schuller, Bjorn
Comput Intell Neurosci 2020 Journal Article, cited 0 times
Website
The classification process of lung nodule detection in a traditional computer-aided detection (CAD) system is complex, and the classification result is heavily dependent on the performance of each step in lung nodule detection, causing low classification accuracy and a high false positive rate. In order to alleviate these issues, a lung nodule classification method based on a deep residual network is proposed. Abandoning traditional image processing methods and taking the 50-layer ResNet network structure as the initial model, the deep residual network is constructed by combining residual learning and migration learning. The proposed approach is verified by conducting experiments on lung computed tomography (CT) images from the publicly available LIDC-IDRI database. An average accuracy of 98.23% and a false positive rate of 1.65% are obtained based on ten-fold cross-validation. Compared with a conventional support vector machine (SVM)-based CAD system, the accuracy of our method improved by 9.96% and the false positive rate decreased by 6.95%; compared with the VGG19 and InceptionV3 convolutional neural networks, the accuracy improved by 1.75% and 2.42%, and the false positive rate decreased by 2.07% and 2.22%, respectively. The experimental results demonstrate the effectiveness of our proposed method in lung nodule classification for CT images.
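
The paper's exact 50-layer residual model is not reproduced here, but a minimal transfer-learning setup in the same spirit (ImageNet-pretrained ResNet-50 with a new binary head and partially frozen early layers) might look like this sketch:

```python
import torch.nn as nn
from torchvision import models

# ImageNet-pretrained ResNet-50 with a new binary head (benign vs malignant);
# the layer-freezing choice here is illustrative, not the paper's configuration.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)
for p in model.layer1.parameters():  # keep early low-level features fixed
    p.requires_grad = False
```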

Classification of malignant tumors by a non-sequential recurrent ensemble of deep neural network model

  • Moitra, D.
  • Mandal, R. K.
Multimed Tools Appl 2022 Journal Article, cited 0 times
Website
Many significant efforts have so far been made to classify malignant tumors by using various machine learning methods. Most of the studies have considered a particular tumor genre categorized according to its originating organ. This has enriched the domain-specific knowledge of malignant tumor prediction, we are devoid of an efficient model that may predict the stages of tumors irrespective of their origin. Thus, there is ample opportunity to study if a heterogeneous collection of tumor images can be classified according to their respective stages. The present research work has prepared a heterogeneous tumor dataset comprising eight different datasets from The Cancer Imaging Archives and classified them according to their respective stages, as suggested by the American Joint Committee on Cancer. The proposed model has been used for classifying 717 subjects comprising different imaging modalities and varied Tumor-Node-Metastasis stages. A new non-sequential deep hybrid model ensemble has been developed by exploiting branched and re-injected layers, followed by bidirectional recurrent layers to classify tumor images. Results have been compared with standard sequential deep learning models and notable recent studies. The training and validation accuracy along with the ROC-AUC scores have been found satisfactory over the existing models. No model or method in the literature could ever classify such a diversified mix of tumor images with such high accuracy. The proposed model may help radiologists by acting as an auxiliary decision support system and speed up the tumor diagnosis process.

Classification of non-small cell lung cancer using one-dimensional convolutional neural network

  • Moitra, Dipanjan
  • Kr. Mandal, Rakesh
Expert Systems with Applications 2020 Journal Article, cited 0 times
Website
Non-Small Cell Lung Cancer (NSCLC) is a major lung cancer type. Proper diagnosis depends mainly on tumor staging and grading. Pathological prognosis often faces problems because of the limited availability of tissue samples, and machine learning methods may play a vital role in such cases. 2D and 3D Deep Neural Networks (DNNs) have been the predominant technology in this domain, and contemporary studies have mostly tried to classify NSCLC tumors as benign or malignant; the application of 1D CNNs to automated staging and grading of NSCLC is not very frequent. The aim of the present study is to develop a 1D CNN model for automated staging and grading of NSCLC. The updated NSCLC Radiogenomics collection from The Cancer Imaging Archive (TCIA) was used in the study. The segmented tumor images were fed into a hybrid feature detection and extraction model (MSER-SURF). The extracted features were combined with the clinical TNM stage and histopathological grade information and fed into the 1D CNN model. The performance of the proposed CNN model was satisfactory: the accuracy and ROC-AUC score were higher than those of other leading machine learning methods, and the model also compared favorably with state-of-the-art studies. The proposed model shows that a 1D CNN is as useful for NSCLC prediction as a conventional 2D/3D CNN model. The model may be further refined by experimenting with varied hyper-parameters, and further studies may consider semi-supervised or unsupervised learning techniques.
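
The 1D CNN itself is not specified in detail; a minimal sketch of a 1-D convolutional classifier over a fused feature vector (layer sizes and the class count are hypothetical):

```python
import torch
import torch.nn as nn

# Minimal 1-D CNN over a fused feature vector (e.g., extracted descriptors
# plus TNM stage and grade); all layer sizes here are illustrative.
model = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool1d(2),
    nn.Conv1d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(32, 4),            # e.g., four stage classes
)
x = torch.randn(8, 1, 128)       # batch of 8 feature vectors of length 128
logits = model(x)                # shape (8, 4)
```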

Classification of the glioma grading using radiomics analysis

  • Cho, Hwan-ho
  • Lee, Seung-hak
  • Kim, Jonghoon
  • Park, Hyunjin
PeerJ 2018 Journal Article, cited 0 times
Website

CLCU-Net: Cross-level connected U-shaped network with selective feature aggregation attention module for brain tumor segmentation

  • Wang, Y. L.
  • Zhao, Z. J.
  • Hu, S. Y.
  • Chang, F. L.
Comput Methods Programs Biomed 2021 Journal Article, cited 0 times
Website
BACKGROUND AND OBJECTIVE: Brain tumors are among the deadliest cancers worldwide. Owing to the development of deep convolutional neural networks, many brain tumor segmentation methods now help clinicians diagnose and operate. However, most of these methods make insufficient use of multi-scale features, reducing their ability to extract the features and details of brain tumors. To assist clinicians in accurate automatic segmentation of brain tumors, we built a new deep learning network that makes full use of multi-scale features to improve segmentation performance. METHODS: We propose a novel cross-level connected U-shaped network (CLCU-Net) that connects features of different scales to fully exploit multi-scale features. In addition, we propose a generic attention module (Segmented Attention Module, SAM) on the connections between different scale features for selectively aggregating features, which provides a more efficient connection between the scales. Moreover, we employ deep supervision and spatial pyramid pooling (SPP) to further improve the method's performance. RESULTS: We evaluated our method on the BRATS 2018 dataset using five indexes and achieved excellent performance, with a Dice score of 88.5%, a precision of 91.98%, a recall of 85.62%, 36.34M parameters and an inference time of 8.89 ms for the whole tumor, outperforming six state-of-the-art methods. Moreover, an analysis of the heatmaps of different attention modules showed that the attention module proposed in this study is better suited to segmentation tasks than other existing popular attention modules. CONCLUSION: Both the qualitative and quantitative experimental results indicate that our cross-level connected U-shaped network with selective feature aggregation attention module achieves accurate brain tumor segmentation and can be considered quite instrumental for clinical practice.

Clinical application of mask region-based convolutional neural network for the automatic detection and segmentation of abnormal liver density based on hepatocellular carcinoma computed tomography datasets

  • Yang, C. J.
  • Wang, C. K.
  • Fang, Y. D.
  • Wang, J. Y.
  • Su, F. C.
  • Tsai, H. M.
  • Lin, Y. J.
  • Tsai, H. W.
  • Yeh, L. R.
PLoS One 2021 Journal Article, cited 0 times
Website
The aim of the study was to use a previously proposed mask region-based convolutional neural network (Mask R-CNN) for automatic abnormal liver density detection and segmentation based on hepatocellular carcinoma (HCC) computed tomography (CT) datasets from a radiological perspective. Training and testing datasets were acquired retrospectively from two hospitals in Taiwan. The training dataset contained 10,130 images of liver tumor densities with 11,258 regions of interest (ROIs). The positive testing dataset contained 1,833 images of liver tumor densities with 1,874 ROIs, and the negative testing data comprised 20,283 images without abnormal densities in the liver parenchyma. The Mask R-CNN was used to generate a medical model, and areas under the curve, true positive rates, false positive rates, and Dice coefficients were evaluated. For abnormal liver CT density detection, the mean per-image area under the curve, true positive rate, and false positive rate were 0.9490, 91.99%, and 13.68%, respectively. For segmentation ability, the highest mean Dice coefficient obtained was 0.8041. This study trained a Mask R-CNN on various HCC images to construct a medical model that serves as an auxiliary tool for alerting radiologists to abnormal CT densities in liver scans; the model can simultaneously detect liver lesions and perform automatic instance segmentation.
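
Which Mask R-CNN implementation the authors used is not stated here; as a sketch, the torchvision reference implementation performs detection and instance segmentation in a single forward pass:

```python
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

model = maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

# One CT slice replicated to 3 channels and scaled to [0, 1] (dummy input).
image = torch.rand(3, 512, 512)
with torch.no_grad():
    pred = model([image])[0]  # dict with boxes, labels, scores, masks
```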

A Clinical System for Non-invasive Blood-Brain Barrier Opening Using a Neuronavigation-Guided Single-Element Focused Ultrasound Transducer

  • Pouliopoulos, Antonios N
  • Wu, Shih-Ying
  • Burgess, Mark T
  • Karakatsani, Maria Eleni
  • Kamimura, Hermes A S
  • Konofagou, Elisa E
Ultrasound Med Biol 2020 Journal Article, cited 3 times
Website
Focused ultrasound (FUS)-mediated blood-brain barrier (BBB) opening is currently being investigated in clinical trials. Here, we describe a portable clinical system with a therapeutic transducer suitable for humans, which eliminates the need for in-line magnetic resonance imaging (MRI) guidance. A neuronavigation-guided 0.25-MHz single-element FUS transducer was developed for non-invasive clinical BBB opening. Numerical simulations and experiments were performed to determine the characteristics of the FUS beam within a human skull. We also validated the feasibility of BBB opening obtained with this system in two non-human primates using U.S. Food and Drug Administration (FDA)-approved treatment parameters. Ultrasound propagation through a human skull fragment caused 44.4 +/- 1% pressure attenuation at a normal incidence angle, while the focal size decreased by 3.3 +/- 1.4% and 3.9 +/- 1.8% along the lateral and axial dimension, respectively. Measured lateral and axial shifts were 0.5 +/- 0.4 mm and 2.1 +/- 1.1 mm, while simulated shifts were 0.1 +/- 0.2 mm and 6.1 +/- 2.4 mm, respectively. A 1.5-MHz passive cavitation detector transcranially detected cavitation signals of Definity microbubbles flowing through a vessel-mimicking phantom. T1-weighted MRI confirmed a 153 +/- 5.5 mm(3) BBB opening in two non-human primates at a mechanical index of 0.4, using Definity microbubbles at the FDA-approved dose for imaging applications, without edema or hemorrhage. In conclusion, we developed a portable system for non-invasive BBB opening in humans, which can be achieved at clinically relevant ultrasound exposures without the need for in-line MRI guidance. The proposed FUS system may accelerate the adoption of non-invasive FUS-mediated therapies due to its fast application, low cost and portability.

Clinical validation of deep learning algorithms for radiotherapy targeting of non-small-cell lung cancer: an observational study

  • Hosny, Ahmed
  • Bitterman, Danielle S.
  • Guthier, Christian V.
  • Qian, Jack M.
  • Roberts, Hannah
  • Perni, Subha
  • Saraf, Anurag
  • Peng, Luke C.
  • Pashtan, Itai
  • Ye, Zezhong
  • Kann, Benjamin H.
  • Kozono, David E.
  • Christiani, David
  • Catalano, Paul J.
  • Aerts, Hugo J. W. L.
  • Mak, Raymond H.
2022 Journal Article, cited 0 times
Website
Background Artificial intelligence (AI) and deep learning have shown great potential in streamlining clinical tasks. However, most studies remain confined to in silico validation in small internal cohorts, without external validation or data on real-world clinical utility. We developed a strategy for the clinical validation of deep learning models for segmenting primary non-small-cell lung cancer (NSCLC) tumours and involved lymph nodes in CT images, which is a time-intensive step in radiation treatment planning, with large variability among experts. Methods In this observational study, CT images and segmentations were collected from eight internal and external sources from the USA, the Netherlands, Canada, and China, with patients from the Maastro and Harvard-RT1 datasets used for model discovery (segmented by a single expert). Validation consisted of interobserver and intraobserver benchmarking, primary validation, functional validation, and end-user testing on the following datasets: multi-delineation, Harvard-RT1, Harvard-RT2, RTOG-0617, NSCLC-radiogenomics, Lung-PET-CT-Dx, RIDER, and thorax phantom. Primary validation consisted of stepwise testing on increasingly external datasets using measures of overlap including volumetric dice (VD) and surface dice (SD). Functional validation explored dosimetric effect, model failure modes, test-retest stability, and accuracy. End-user testing with eight experts assessed automated segmentations in a simulated clinical setting. Findings We included 2208 patients imaged between 2001 and 2015, with 787 patients used for model discovery and 1421 for model validation, including 28 patients for end-user testing. Models showed an improvement over the interobserver benchmark (multi-delineation dataset; VD 0·91 [IQR 0·83–0·92], p=0·0062; SD 0·86 [0·71–0·91], p=0·0005), and were within the intraobserver benchmark. For primary validation, AI performance on internal Harvard-RT1 data (segmented by the same expert who segmented the discovery data) was VD 0·83 (IQR 0·76–0·88) and SD 0·79 (0·68–0·88), within the interobserver benchmark. Performance on internal Harvard-RT2 data segmented by other experts was VD 0·70 (0·56–0·80) and SD 0·50 (0·34–0·71). Performance on RTOG-0617 clinical trial data was VD 0·71 (0·60–0·81) and SD 0·47 (0·35–0·59), with similar results on diagnostic radiology datasets NSCLC-radiogenomics and Lung-PET-CT-Dx. Despite these geometric overlap results, models yielded target volumes with equivalent radiation dose coverage to those of experts. We also found non-significant differences between de novo expert and AI-assisted segmentations. AI assistance led to a 65% reduction in segmentation time (5·4 min; p<0·0001) and a 32% reduction in interobserver variability (SD; p=0·013). Interpretation We present a clinical validation strategy for AI models. We found that in silico geometric segmentation metrics might not correlate with clinical utility of the models. Experts' segmentation style and preference might affect model performance. Funding US National Institutes of Health and EU European Research Council.
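
Volumetric Dice (VD), the primary overlap measure reported here, is straightforward to compute from binary masks; a minimal sketch (surface Dice additionally requires tolerance-based surface distances, omitted here):

```python
import numpy as np

def volumetric_dice(a: np.ndarray, b: np.ndarray) -> float:
    """Volumetric Dice between two binary masks: 2|A & B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom
```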

Clinically applicable deep learning framework for organs at risk delineation in CT images

  • Tang, Hao
  • Chen, Xuming
  • Liu, Yang
  • Lu, Zhipeng
  • You, Junhua
  • Yang, Mingzhou
  • Yao, Shengyu
  • Zhao, Guoqi
  • Xu, Yi
  • Chen, Tingfeng
  • Liu, Yong
  • Xie, Xiaohui
Nature Machine Intelligence 2019 Journal Article, cited 0 times
Radiation therapy is one of the most widely used therapies for cancer treatment. A critical step in radiation therapy planning is to accurately delineate all organs at risk (OARs) to minimize potential adverse effects to healthy surrounding organs. However, manually delineating OARs based on computed tomography images is time-consuming and error-prone. Here, we present a deep learning model to automatically delineate OARs in head and neck, trained on a dataset of 215 computed tomography scans with 28 OARs manually delineated by experienced radiation oncologists. On a hold-out dataset of 100 computed tomography scans, our model achieves an average Dice similarity coefficient of 78.34% across the 28 OARs, significantly outperforming human experts and the previous state-of-the-art method by 10.05% and 5.18%, respectively. Our model takes only a few seconds to delineate an entire scan, compared to over half an hour by human experts. These findings demonstrate the potential for deep learning to improve the quality and reduce the treatment planning time of radiation therapy.

Clinically Applicable Segmentation of Head and Neck Anatomy for Radiotherapy: Deep Learning Algorithm Development and Validation Study

  • Nikolov, Stanislav
  • Blackwell, Sam
  • Zverovitch, Alexei
  • Mendes, Ruheena
  • Livne, Michelle
  • De Fauw, Jeffrey
  • Patel, Yojan
  • Meyer, Clemens
  • Askham, Harry
  • Romera-Paredes, Bernadino
  • Kelly, Christopher
  • Karthikesalingam, Alan
  • Chu, Carlton
  • Carnell, Dawn
  • Boon, Cheng
  • D'Souza, Derek
  • Moinuddin, Syed Ali
  • Garie, Bethany
  • McQuinlan, Yasmin
  • Ireland, Sarah
  • Hampton, Kiarna
  • Fuller, Krystle
  • Montgomery, Hugh
  • Rees, Geraint
  • Suleyman, Mustafa
  • Back, Trevor
  • Hughes, Cian Owen
  • Ledsam, Joseph R
  • Ronneberger, Olaf
J Med Internet Res 2021 Journal Article, cited 0 times
Website
BACKGROUND: Over half a million individuals are diagnosed with head and neck cancer each year globally. Radiotherapy is an important curative treatment for this disease, but it requires time-consuming manual delineation of radiosensitive organs at risk. This planning process can delay treatment while also introducing interoperator variability, resulting in downstream radiation dose differences. Although auto-segmentation algorithms offer a potentially time-saving solution, the challenges in defining, quantifying, and achieving expert performance remain. OBJECTIVE: Adopting a deep learning approach, we aim to demonstrate a 3D U-Net architecture that achieves expert-level performance in delineating 21 distinct head and neck organs at risk commonly segmented in clinical practice. METHODS: The model was trained on a data set of 663 deidentified computed tomography scans acquired in routine clinical practice, with segmentations taken from clinical practice as well as segmentations created by experienced radiographers as part of this research, all in accordance with consensus organ at risk definitions. RESULTS: We demonstrated the model's clinical applicability by assessing its performance on a test set of 21 computed tomography scans from clinical practice, each with 21 organs at risk segmented by 2 independent experts. We also introduced the surface Dice similarity coefficient, a new metric for the comparison of organ delineation, to quantify the deviation between organ at risk surface contours rather than volumes, better reflecting the clinical task of correcting errors in automated organ segmentations. The model's generalizability was then demonstrated on 2 distinct open-source data sets, representing centers and countries different from those used in model training. CONCLUSIONS: Deep learning is an effective and clinically applicable technique for the segmentation of the head and neck anatomy for radiotherapy. With appropriate validation studies and regulatory approvals, this system could improve the efficiency, consistency, and safety of radiotherapy pathways.
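
The surface Dice similarity coefficient introduced with this work is implemented in the authors' open-source surface-distance package; assuming its published API, usage looks roughly like this:

```python
import numpy as np
from surface_distance import (compute_surface_distances,
                              compute_surface_dice_at_tolerance)

gt = np.zeros((64, 64, 64), dtype=bool)
gt[20:40, 20:40, 20:40] = True
pred = np.roll(gt, shift=1, axis=0)  # a slightly shifted prediction

sd = compute_surface_distances(gt, pred, spacing_mm=(1.0, 1.0, 1.0))
print(compute_surface_dice_at_tolerance(sd, tolerance_mm=2.0))
```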

Clinically relevant modeling of tumor growth and treatment response

  • Yankeelov, Thomas E
  • Atuegwu, Nkiruka
  • Hormuth, David
  • Weis, Jared A
  • Barnes, Stephanie L
  • Miga, Michael I
  • Rericha, Erin C
  • Quaranta, Vito
Science Translational Medicine 2013 Journal Article, cited 70 times
Website
Current mathematical models of tumor growth are limited in their clinical application because they require input data that are nearly impossible to obtain with sufficient spatial resolution in patients even at a single time point--for example, extent of vascularization, immune infiltrate, ratio of tumor-to-normal cells, or extracellular matrix status. Here we propose the use of emerging, quantitative tumor imaging methods to initialize a new generation of predictive models. In the near future, these models could be able to forecast clinical outputs, such as overall response to treatment and time to progression, which will provide opportunities for guided intervention and improved patient care.

Cloud-based NoSQL open database of pulmonary nodules for computer-aided lung cancer diagnosis and reproducible research

  • Junior, José Raniery Ferreira
  • Oliveira, Marcelo Costa
  • de Azevedo-Marques, Paulo Mazzoncini
2016 Journal Article, cited 14 times
Website

CNN models discriminating between pulmonary micro-nodules and non-nodules from CT images

  • Monkam, Patrice
  • Qi, Shouliang
  • Xu, Mingjie
  • Han, Fangfang
  • Zhao, Xinzhuo
  • Qian, Wei
BioMedical Engineering OnLine 2018 Journal Article, cited 1 times
Website

CNN-based CT denoising with an accurate image domain noise insertion technique

  • Kim, Byeongjoon
  • Divel, Sarah E.
  • Pelc, Norbert J.
  • Baek, Jongduk
  • Bosmans, Hilde
  • Zhao, Wei
  • Yu, Lifeng
2021 Conference Paper, cited 0 times
Website
Convolutional neural network (CNN)-based CT denoising methods have attracted great interest for improving the image quality of low-dose CT (LDCT) images. However, CNNs require a large amount of paired data consisting of normal-dose CT (NDCT) and LDCT images, which are generally not available. In this work, we aim to synthesize paired data from NDCT images with an accurate image domain noise insertion technique and investigate its effect on the denoising performance of the CNN. Fan-beam CT images were reconstructed using extended cardiac-torso phantoms with Poisson noise added to projection data to simulate NDCT and LDCT. We estimated local noise power spectra and a variance map from a NDCT image using information on photon statistics and reconstruction parameters. We then synthesized image domain noise by filtering white Gaussian noise with the local noise power spectrum and scaling it with the variance map. The CNN architecture was U-net, and the loss function was a weighted summation of mean squared error, perceptual loss, and adversarial loss. The CNN was trained with NDCT and LDCT (CNN-Ideal) or NDCT and synthesized LDCT (CNN-Proposed). To evaluate denoising performance, we measured the root mean squared error (RMSE), structural similarity index (SSIM), noise power spectrum (NPS), and modulation transfer function (MTF). The MTF was estimated from the edge spread function of a circular object with 12 mm diameter and 60 HU contrast. Denoising results from CNN-Ideal and CNN-Proposed show no significant difference in any metric, with good RMSE and SSIM scores relative to NDCT and NPS shapes similar to that of NDCT.
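
A simplified sketch of the described image-domain noise insertion, shaping white Gaussian noise by the square root of a noise power spectrum and scaling it by a local variance map (the paper estimates local NPS; a single global NPS is assumed here):

```python
import numpy as np

def insert_correlated_noise(ndct, nps, variance_map, rng=None):
    """Synthesize LDCT-like noise: filter white noise by sqrt(NPS),
    normalize to unit variance, then scale by the local standard deviation."""
    rng = np.random.default_rng() if rng is None else rng
    white = rng.standard_normal(ndct.shape)
    shaped = np.fft.ifft2(np.fft.fft2(white) * np.sqrt(nps)).real
    shaped /= shaped.std()
    return ndct + shaped * np.sqrt(variance_map)
```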

A CNN-based unified framework utilizing projection loss in unison with label noise handling for multiple Myeloma cancer diagnosis

  • Gehlot, S.
  • Gupta, A.
  • Gupta, R.
Med Image Anal 2021 Journal Article, cited 0 times
Website
Multiple Myeloma (MM) is a malignancy of plasma cells. Similar to other forms of cancer, it demands prompt diagnosis for reducing the risk of mortality. The conventional diagnostic tools are resource-intense and hence, these solutions are not easily scalable for extending their reach to the masses. Advancements in deep learning have led to rapid developments in affordable, resource optimized, easily deployable computer-assisted solutions. This work proposes a unified framework for MM diagnosis using microscopic blood cell imaging data that addresses the key challenges of inter-class visual similarity of healthy versus cancer cells and that of the label noise of the dataset. To extract class distinctive features, we propose projection loss to maximize the projection of a sample's activation on the respective class vector besides imposing orthogonality constraints on the class vectors. This projection loss is used along with the cross-entropy loss to design a dual branch architecture that helps achieve improved performance and provides scope for targeting the label noise problem. Based on this architecture, two methodologies have been proposed to correct the noisy labels. A coupling classifier has also been proposed to resolve the conflicts in the dual-branch architecture's predictions. We have utilized a large dataset of 72 subjects (26 healthy and 46 MM cancer) containing a total of 74996 images (including 34555 training cell images and 40441 test cell images). This is so far the most extensive dataset on Multiple Myeloma cancer ever reported in the literature. An ablation study has also been carried out. The proposed architecture performs best with a balanced accuracy of 94.17% on binary cell classification of healthy versus cancer in the comparative performance with ten state-of-the-art architectures. Extensive experiments on two additional publicly available datasets of two different modalities have also been utilized for analyzing the label noise handling capability of the proposed methodology. The code will be available under https://github.com/shivgahlout/CAD-MM.
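
A hedged PyTorch sketch of the projection-loss idea, maximizing each sample's projection onto its class vector while penalizing non-orthogonal class vectors (names and the penalty weight are hypothetical, not the authors' exact formulation):

```python
import torch
import torch.nn.functional as F

def projection_loss(features, labels, class_vectors, ortho_weight=0.1):
    f = F.normalize(features, dim=1)
    w = F.normalize(class_vectors, dim=1)      # (num_classes, dim)
    proj = (f * w[labels]).sum(dim=1)          # cosine projection per sample
    gram = w @ w.t()
    eye = torch.eye(w.size(0), device=w.device)
    ortho = ((gram - eye) ** 2).sum()          # orthogonality penalty
    return (1.0 - proj).mean() + ortho_weight * ortho
```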

Co-Segmentation Methods for Improving Tumor Target Delineation in PET-CT Images

  • Yu, Zexi
2016 Thesis, cited 0 times
Website

COLI‐Net: Deep learning‐assisted fully automated COVID‐19 lung and infection pneumonia lesion detection and segmentation from chest computed tomography images

  • Shiri, Isaac
  • Arabi, Hossein
  • Salimi, Yazdan
  • Sanaat, Amirhossein
  • Akhavanallaf, Azadeh
  • Hajianfar, Ghasem
  • Askari, Dariush
  • Moradi, Shakiba
  • Mansouri, Zahra
  • Pakbin, Masoumeh
  • Sandoughdaran, Saleh
  • Abdollahi, Hamid
  • Radmard, Amir Reza
  • Rezaei‐Kalantari, Kiara
  • Ghelich Oghli, Mostafa
  • Zaidi, Habib
International Journal of Imaging Systems and Technology 2021 Journal Article, cited 0 times
Website
We present a deep learning (DL)-based automated whole lung and COVID-19 pneumonia infectious lesions (COLI-Net) detection and segmentation from chest computed tomography (CT) images. This multicenter/multiscanner study involved 2368 (347,259 2D slices) and 190 (17,341 2D slices) volumetric CT exams along with their corresponding manual segmentation of lungs and lesions, respectively. All images were cropped, resized, and the intensity values clipped and normalized. A residual network with non-square Dice loss function built upon TensorFlow was employed. The accuracy of lung and COVID-19 lesions segmentation was evaluated on an external reverse transcription-polymerase chain reaction positive COVID-19 dataset (7,333 2D slices) collected at five different centers. To evaluate the segmentation performance, we calculated different quantitative metrics, including radiomic features. The mean Dice coefficients were 0.98 ± 0.011 (95% CI, 0.98–0.99) and 0.91 ± 0.038 (95% CI, 0.90–0.91) for lung and lesions segmentation, respectively. The mean relative Hounsfield unit differences were 0.03 ± 0.84% (95% CI, −0.12 to 0.18) and −0.18 ± 3.4% (95% CI, −0.8 to 0.44) for the lung and lesions, respectively. The relative volume difference for lung and lesions were 0.38 ± 1.2% (95% CI, 0.16–0.59) and 0.81 ± 6.6% (95% CI, −0.39 to 2), respectively. Most radiomic features had a mean relative error less than 5% with the highest mean relative error achieved for the lung for the range first-order feature (−6.95%) and least axis length shape feature (8.68%) for lesions. We developed an automated DL-guided three-dimensional whole lung and infected regions segmentation in COVID-19 patients to provide fast, consistent, robust, and human error immune framework for lung and pneumonia lesion detection and quantification.
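
"Non-square" Dice loss conventionally refers to a soft Dice loss whose denominator sums raw probabilities rather than their squares; a minimal sketch under that assumption:

```python
import torch

def non_square_dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss with a non-squared denominator (illustrative sketch)."""
    inter = (pred * target).sum()
    denom = pred.sum() + target.sum()  # non-square variant
    return 1.0 - (2.0 * inter + eps) / (denom + eps)
```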

Collaborative and Reproducible Research: Goals, Challenges, and Strategies

  • Langer, S. G.
  • Shih, G.
  • Nagy, P.
  • Landman, B. A.
J Digit Imaging 2018 Journal Article, cited 1 times
Website
Combining imaging biomarkers with genomic and clinical phenotype data is the foundation of precision medicine research efforts. Yet, biomedical imaging research requires unique infrastructure compared with principally text-driven clinical electronic medical record (EMR) data. The issues are related to the binary nature of the file format and transport mechanism for medical images as well as the post-processing image segmentation and registration needed to combine anatomical and physiological imaging data sources. The SIIM Machine Learning Committee was formed to analyze the gaps and challenges surrounding research into machine learning in medical imaging and to find ways to mitigate these issues. At the 2017 annual meeting, a whiteboard session was held to rank the most pressing issues and develop strategies to meet them. The results, and further reflections, are summarized in this paper.

Collaborative learning of joint medical image segmentation tasks from heterogeneous and weakly-annotated data

  • Dorent, Reuben
2022 Thesis, cited 0 times
Website
Convolutional Neural Networks (CNNs) have become the state-of-the-art for most image segmentation tasks and therefore one would expect them to be able to learn joint tasks, such as brain structures and pathology segmentation. However, annotated databases required to train CNNs are usually dedicated to a single task, leading to partial annotations (e.g. brain structure or pathology delineation but not both for joint tasks). Moreover, the information required for these tasks may come from distinct magnetic resonance (MR) sequences to emphasise different types of tissue contrast, leading to datasets with different sets of image modalities. Similarly, the scans may have been acquired at different centres, with different MR parameters, leading to differences in resolution and visual appearance among databases (domain shift). Given the large amount of resources, time and expertise required to carefully annotate medical images, it is unlikely that large and fully-annotated databases will become readily available for every joint problem. For this reason, there is a need to develop collaborative approaches that exploit existing heterogeneous and task-specific datasets, as well as weak annotations instead of time-consuming pixel-wise annotations. In this thesis, I present methods to learn joint medical segmentation tasks from task-specific, domain-shifted, hetero-modal and weakly-annotated datasets. The problem lies at the intersection of several branches of Machine Learning: Multi-Task Learning, Hetero-Modal Learning, Domain Adaptation and Weakly Supervised Learning. First, I introduce a mathematical formulation of a joint segmentation problem under the constraint of missing modalities and partial annotations, in which Domain Adaptation techniques can be directly integrated, and a procedure to optimise it. Secondly, I propose a principled approach to handle missing modalities based on Hetero-Modal Variational Auto-Encoders. Thirdly, in this thesis, I focus on Weakly Supervised Learning techniques and present a novel approach to train deep image segmentation networks using particularly weak train-time annotations: only 4 (2D) or 6 (3D) extreme clicks at the boundary of the objects of interest. The proposed framework connects the extreme points using a new formulation of geodesics that integrates the network outputs and uses the generated paths for supervision. Fourthly, I introduce a new weakly-supervised Domain Adaptation technique using scribbles on the target domain and formulate it as a cross-domain CRF optimisation problem. Finally, I led the organisation of the first medical segmentation challenge for unsupervised cross-modality domain adaptation (crossMoDA). The benchmark reported in this thesis provides a comprehensive characterisation of cross-modality domain adaptation techniques. Experiments are performed on brain MR images from patients with different types of brain diseases: gliomas, white matter lesions and vestibular schwannoma. The results demonstrate the broad applicability of the presented frameworks to learn joint segmentation tasks with the potential to improve brain disease diagnosis and patient management in clinical practice.

Collaborative projects

  • Armato, S
  • McNitt-Gray, M
  • Meyer, C
  • Reeves, A
  • Clarke, L
Int J CARS 2012 Journal Article, cited 307 times
Website

Collage CNN for Renal Cell Carcinoma Detection from CT

  • Hussain, Mohammad Arafat
  • Amir-Khalili, Alborz
  • Hamarneh, Ghassan
  • Abugharbieh, Rafeef
2017 Conference Proceedings, cited 0 times
Website

Combination of fuzzy c-means clustering and texture pattern matrix for brain MRI segmentation

  • Shijin Kumar, P.S.
  • Dharun, V.S.
Biomedical Research 2017 Journal Article, cited 0 times
The process of image segmentation can be defined as splitting an image into different regions, and it is an important step in medical image analysis. We introduce a hybrid tumor tracking and segmentation algorithm for magnetic resonance images (MRI) based on the fuzzy c-means clustering algorithm (FCM) and a Texture Pattern Matrix (TPM). The key idea is to use texture features along with intensity when performing segmentation. FCM is capable of predicting tumor cells with high accuracy: in FCM, homogeneous regions of an image are obtained based on intensity, while the TPM provides details about the spatial distribution of pixels and thereby improves the performance parameters. Experimental results obtained by applying the proposed segmentation method to tumor tracking are presented. Various performance parameters are evaluated by comparing the outputs of the proposed method and the fuzzy c-means algorithm. The computational complexity and computation time are reduced by this hybrid segmentation method.
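
The FCM core alternates membership and centroid updates until convergence; a plain NumPy sketch on 1-D intensities (the paper's TPM texture term is omitted here):

```python
import numpy as np

def fuzzy_cmeans(x, c=3, m=2.0, iters=100, seed=0):
    """Fuzzy c-means on a 1-D intensity vector x (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, x.size))
    u /= u.sum(axis=0)                          # memberships sum to 1
    for _ in range(iters):
        um = u ** m
        centers = um @ x / um.sum(axis=1)       # fuzzy-weighted means
        d = np.abs(x[None, :] - centers[:, None]) + 1e-9
        u = d ** (-2.0 / (m - 1.0))             # standard FCM update
        u /= u.sum(axis=0)                      # renormalize memberships
    return centers, u
```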

The Combination of Low Skeletal Muscle Mass and High Tumor Interleukin-6 Associates with Decreased Survival in Clear Cell Renal Cell Carcinoma

  • Kays, J. K.
  • Koniaris, L. G.
  • Cooper, C. A.
  • Pili, R.
  • Jiang, G.
  • Liu, Y.
  • Zimmers, T. A.
Cancers (Basel) 2020 Journal Article, cited 0 times
Website
Clear cell renal carcinoma (ccRCC) is frequently associated with cachexia which is itself associated with decreased survival and quality of life. We examined relationships among body phenotype, tumor gene expression, and survival. Demographic, clinical, computed tomography (CT) scans and tumor RNASeq for 217 ccRCC patients were acquired from the Cancer Imaging Archive and The Cancer Genome Atlas (TCGA). Skeletal muscle and fat masses measured from CT scans and tumor cytokine gene expression were compared with survival by univariate and multivariate analysis. Patients in the lowest skeletal muscle mass (SKM) quartile had significantly shorter overall survival versus the top three SKM quartiles. Patients who fell into the lowest quartiles for visceral adipose mass (VAT) and subcutaneous adipose mass (SCAT) also demonstrated significantly shorter overall survival. Multiple tumor cytokines correlated with mortality, most strongly interleukin-6 (IL-6); high IL-6 expression was associated with significantly decreased survival. The combination of low SKM/high IL-6 was associated with significantly lower overall survival compared to high SKM/low IL-6 expression (26.1 months vs. not reached; p < 0.001) and an increased risk of mortality (HR = 5.95; 95% CI = 2.86-12.38). In conclusion, tumor cytokine expression, body composition, and survival are closely related, with low SKM/high IL-6 expression portending worse prognosis in ccRCC.

A combinatorial radiographic phenotype may stratify patient survival and be associated with invasion and proliferation characteristics in glioblastoma

  • Rao, Arvind
  • Rao, Ganesh
  • Gutman, David A
  • Flanders, Adam E
  • Hwang, Scott N
  • Rubin, Daniel L
  • Colen, Rivka R
  • Zinn, Pascal O
  • Jain, Rajan
  • Wintermark, Max
Journal of neurosurgery 2016 Journal Article, cited 19 times
Website
OBJECTIVE Individual MRI characteristics (e.g., volume) are routinely used to identify survival-associated phenotypes for glioblastoma (GBM). This study investigated whether combinations of MRI features can also stratify survival. Furthermore, the molecular differences between phenotype-induced groups were investigated. METHODS Ninety-two patients with imaging, molecular, and survival data from the TCGA (The Cancer Genome Atlas)GBM collection were included in this study. For combinatorial phenotype analysis, hierarchical clustering was used. Groups were defined based on a cutpoint obtained via tree-based partitioning. Furthermore, differential expression analysis of microRNA (miRNA) and mRNA expression data was performed using GenePattern Suite. Functional analysis of the resulting genes and miRNAs was performed using Ingenuity Pathway Analysis. Pathway analysis was performed using Gene Set Enrichment Analysis. RESULTS Clustering analysis reveals that image-based grouping of the patients is driven by 3 features: volume-class, hemorrhage, and T1/FLAIR-envelope ratio. A combination of these features stratifies survival in a statistically significant manner. A cutpoint analysis yields a significant survival difference in the training set (median survival difference: 12 months, p = 0.004) as well as a validation set (p = 0.0001). Specifically, a low value for any of these 3 features indicates favorable survival characteristics. Differential expression analysis between cutpoint-induced groups suggests that several immune-associated (natural killer cell activity, T-cell lymphocyte differentiation) and metabolism-associated (mitochondrial activity, oxidative phosphorylation) pathways underlie the transition of this phenotype. Integrating data for mRNA and miRNA suggests the roles of several genes regulating proliferation and invasion. CONCLUSIONS A 3-way combination of MRI phenotypes may be capable of stratifying survival in GBM. Examination of molecular processes associated with groups created by this combinatorial phenotype suggests the role of biological processes associated with growth and invasion characteristics.

Combined Megavoltage and Contrast-Enhanced Radiotherapy as an Intrafraction Motion Management Strategy in Lung SBRT

  • Coronado-Delgado, Daniel A
  • Garnica-Garza, Hector M
Technol Cancer Res Treat 2019 Journal Article, cited 0 times
Website
Using Monte Carlo simulation and a realistic patient model, it is shown that the volume of healthy tissue irradiated at therapeutic doses can be drastically reduced using a combination of standard megavoltage and kilovoltage X-ray beams with a contrast agent previously loaded into the tumor, without the need to reduce standard treatment margins. Four-dimensional computed tomography images of 2 patients with a centrally located and a peripherally located tumor were obtained from a public database and subsequently used to plan robotic stereotactic body radiotherapy treatments. Two modalities are assumed: conventional high-energy stereotactic body radiotherapy and a treatment with contrast agent loaded in the tumor and a kilovoltage X-ray beam replacing the megavoltage beam (contrast-enhanced radiotherapy). For each patient model, 2 planning target volumes were designed: one following the recommendations from either Radiation Therapy Oncology Group (RTOG) 0813 or RTOG 0915 task group depending on the patient model and another with a 2-mm uniform margin determined solely on beam penumbra considerations. The optimized treatments with RTOG margins were imparted to the moving phantom to model the dose distribution that would be obtained as a result of intrafraction motion. Treatment plans are then compared to the plan with the 2-mm uniform margin considered to be the ideal plan. It is shown that even for treatments in which only one-fifth of the total dose is imparted via the contrast-enhanced radiotherapy modality and with the use of standard treatment margins, the resultant absorbed dose distributions are such that the volume of healthy tissue irradiated to high doses is close to what is obtained under ideal conditions.

Combined molecular subtyping, grading, and segmentation of glioma using multi-task deep learning

  • van der Voort, S. R.
  • Incekara, F.
  • Wijnenga, M. M. J.
  • Kapsas, G.
  • Gahrmann, R.
  • Schouten, J. W.
  • Nandoe Tewarie, R.
  • Lycklama, G. J.
  • De Witt Hamer, P. C.
  • Eijgelaar, R. S.
  • French, P. J.
  • Dubbink, H. J.
  • Vincent, Ajpe
  • Niessen, W. J.
  • van den Bent, M. J.
  • Smits, M.
  • Klein, S.
2022 Journal Article, cited 0 times
Website
BACKGROUND: Accurate characterization of glioma is crucial for clinical decision making. A delineation of the tumor is also desirable in the initial decision stages but is time-consuming. Previously, deep learning methods have been developed that can either non-invasively predict the genetic or histological features of glioma, or that can automatically delineate the tumor, but not both tasks at the same time. Here, we present our method that can predict the molecular subtype and grade, while simultaneously providing a delineation of the tumor. METHODS: We developed a single multi-task convolutional neural network that uses the full 3D, structural, pre-operative MRI scans to predict the IDH mutation status, the 1p/19q co-deletion status, and the grade of a tumor, while simultaneously segmenting the tumor. We trained our method using a patient cohort containing 1508 glioma patients from 16 institutes. We tested our method on an independent dataset of 240 patients from 13 different institutes. RESULTS: In the independent test set we achieved an IDH-AUC of 0.90, a 1p/19q co-deletion AUC of 0.85, and a grade AUC of 0.81 (grade II/III/IV). For the tumor delineation, we achieved a mean whole tumor DICE score of 0.84. CONCLUSIONS: We developed a method that non-invasively predicts multiple, clinically relevant features of glioma. Evaluation in an independent dataset shows that the method achieves a high performance and that it generalizes well to the broader clinical population. This first-of-its-kind method opens the door to more generalizable, instead of hyper-specialized, AI methods.
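
Architecturally, the idea of one network that both segments and classifies can be sketched as a shared encoder feeding a segmentation decoder and three classification heads (a schematic sketch, not the authors' network):

```python
import torch
import torch.nn as nn

class MultiTaskGliomaNet(nn.Module):
    """Shared 3-D encoder, a segmentation decoder, and heads for
    IDH status, 1p/19q co-deletion, and grade (schematic only)."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(4, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(32, 16, 2, stride=2), nn.ReLU(),
            nn.Conv3d(16, 2, 1))                  # background / tumor
        self.pool = nn.AdaptiveAvgPool3d(1)
        self.idh = nn.Linear(32, 2)
        self.codel = nn.Linear(32, 2)
        self.grade = nn.Linear(32, 3)             # grades II / III / IV

    def forward(self, x):
        z = self.encoder(x)
        g = self.pool(z).flatten(1)
        return self.decoder(z), self.idh(g), self.codel(g), self.grade(g)
```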

Combining Generative Models for Multifocal Glioma Segmentation and Registration

  • Kwon, Dongjin
  • Shinohara, Russell T
  • Akbari, Hamed
  • Davatzikos, Christos
2014 Book Section, cited 55 times
Website
In this paper, we propose a new method for simultaneously segmenting brain scans of glioma patients and registering these scans to a normal atlas. Performing joint segmentation and registration for brain tumors is very challenging when tumors include multifocal masses and have complex shapes with heterogeneous textures. Our approach grows tumors for each mass from multiple seed points using a tumor growth model and modifies a normal atlas into one with tumors and edema using the combined results of grown tumors. We also generate a tumor shape prior via the random walk with restart, utilizing multiple tumor seeds as initial foreground information. We then incorporate this shape prior into an EM framework which estimates the mapping between the modified atlas and the scans, posteriors for each tissue labels, and the tumor growth model parameters. We apply our method to the BRATS 2013 leaderboard dataset to evaluate segmentation performance. Our method shows the best performance among all participants.
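
The random walk with restart used for the shape prior follows the standard fixed-point iteration r = (1 - c) P r + c e, with the tumor seeds as the restart distribution; a compact sketch:

```python
import numpy as np

def random_walk_with_restart(W, seeds, c=0.15, iters=100):
    """RWR over an affinity matrix W (columns must have nonzero sums)."""
    P = W / W.sum(axis=0, keepdims=True)   # column-stochastic transitions
    e = np.zeros(W.shape[0])
    e[seeds] = 1.0 / len(seeds)            # restart at the seed nodes
    r = e.copy()
    for _ in range(iters):
        r = (1 - c) * P @ r + c * e
    return r                               # stationary relevance scores
```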

Combining weakly and strongly supervised learning improves strong supervision in Gleason pattern classification

  • Otalora, S.
  • Marini, N.
  • Muller, H.
  • Atzori, M.
BMC Med Imaging 2021 Journal Article, cited 0 times
Website
BACKGROUND: One challenge to train deep convolutional neural network (CNNs) models with whole slide images (WSIs) is providing the required large number of costly, manually annotated image regions. Strategies to alleviate the scarcity of annotated data include: using transfer learning, data augmentation and training the models with less expensive image-level annotations (weakly-supervised learning). However, it is not clear how to combine the use of transfer learning in a CNN model when different data sources are available for training or how to leverage from the combination of large amounts of weakly annotated images with a set of local region annotations. This paper aims to evaluate CNN training strategies based on transfer learning to leverage the combination of weak and strong annotations in heterogeneous data sources. The trade-off between classification performance and annotation effort is explored by evaluating a CNN that learns from strong labels (region annotations) and is later fine-tuned on a dataset with less expensive weak (image-level) labels. RESULTS: As expected, the model performance on strongly annotated data steadily increases as the percentage of strong annotations that are used increases, reaching a performance comparable to pathologists ([Formula: see text]). Nevertheless, the performance sharply decreases when applied for the WSI classification scenario with [Formula: see text]. Moreover, it only provides a lower performance regardless of the number of annotations used. The model performance increases when fine-tuning the model for the task of Gleason scoring with the weak WSI labels [Formula: see text]. CONCLUSION: Combining weak and strong supervision improves strong supervision in classification of Gleason patterns using tissue microarrays (TMA) and WSI regions. Our results contribute very good strategies for training CNN models combining few annotated data and heterogeneous data sources. The performance increases in the controlled TMA scenario with the number of annotations used to train the model. Nevertheless, the performance is hindered when the trained TMA model is applied directly to the more challenging WSI classification problem. This demonstrates that a good pre-trained model for prostate cancer TMA image classification may lead to the best downstream model if fine-tuned on the WSI target dataset. We have made available the source code repository for reproducing the experiments in the paper: https://github.com/ilmaro8/Digital_Pathology_Transfer_Learning.

Comparative Analysis of Lossless Image Compression Algorithms based on Different Types of Medical Images

  • Alzahrani, Mona
  • Albinali, Mona
2021 Conference Paper, cited 0 times
Website
In the medical field, there is a demand for high-speed transmission and efficient storage of medical images between healthcare organizations; image compression techniques are therefore essential. In this study, we conducted an experimental comparison between two well-known lossless algorithms: the lossless Discrete Cosine Transform (DCT) and the lossless Haar Wavelet Transform (HWT). We covered three datasets containing different types of medical images (MRI, CT, and gastrointestinal endoscopic images) in different formats (PNG, JPG and TIF). In terms of compressed image size and compression ratio, DCT outperforms HWT on the PNG and TIF formats, which represent the grey-scale CT and color MRI images. On the JPG format, which represents the color gastrointestinal endoscopic images, DCT performs better when grey-scale images are used, whereas HWT outperforms DCT when color images are used. However, HWT outperforms DCT in compression time for all image types and formats.
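
The 2-D DCT at the heart of the first algorithm is separable and, with orthonormal normalization, invertible up to floating-point rounding (truly lossless pipelines use integer-reversible variants); a sketch:

```python
import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):
    """2-D type-II DCT with orthonormal scaling, applied row- then column-wise."""
    return dct(dct(block, norm="ortho", axis=0), norm="ortho", axis=1)

def idct2(coeffs):
    return idct(idct(coeffs, norm="ortho", axis=0), norm="ortho", axis=1)

img = np.random.rand(8, 8)
assert np.allclose(img, idct2(dct2(img)))  # round-trips to within rounding
```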

Comparative evaluation of conventional and deep learning methods for semi-automated segmentation of pulmonary nodules on CT

  • Bianconi, Francesco
  • Fravolini, Mario Luca
  • Pizzoli, Sofia
  • Palumbo, Isabella
  • Minestrini, Matteo
  • Rondini, Maria
  • Nuvoli, Susanna
  • Spanu, Angela
  • Palumbo, Barbara
Quant Imaging Med Surg 2021 Journal Article, cited 2 times
Website
Background: Accurate segmentation of pulmonary nodules on computed tomography (CT) scans plays a crucial role in the evaluation and management of patients with suspicion of lung cancer (LC). When performed manually, the process not only requires highly skilled operators but is also tiresome and time-consuming. To assist the physician in this task, several automated and semi-automated methods have been proposed in the literature. In recent years, in particular, the appearance of deep learning has brought about major advances in the field. Methods: Twenty-four (12 conventional and 12 based on deep learning) semi-automated, 'one-click' methods for segmenting pulmonary nodules on CT were evaluated in this study. The experiments were carried out on two datasets: a proprietary one (383 images from a cohort of 111 patients) and a public one (259 images from a cohort of 100). All patients had a positive report for suspected pulmonary nodules. Results: The methods based on deep learning clearly outperformed the conventional ones. The best performance [Sorensen-Dice coefficient (DSC)] in the two datasets was, respectively, 0.853 and 0.763 for the deep learning methods, and 0.761 and 0.704 for the traditional ones. Conclusions: Deep learning is a viable approach for semi-automated segmentation of pulmonary nodules on CT scans.

Comparing deep learning-based auto-segmentation of organs at risk and clinical target volumes to expert inter-observer variability in radiotherapy planning

  • Wong, Jordan
  • Fong, Allan
  • McVicar, Nevin
  • Smith, Sally
  • Giambattista, Joshua
  • Wells, Derek
  • Kolbeck, Carter
  • Giambattista, Jonathan
  • Gondara, Lovedeep
  • Alexander, Abraham
Radiother Oncol 2019 Journal Article, cited 0 times
Website
BACKGROUND: Deep learning-based auto-segmented contours (DC) aim to alleviate labour intensive contouring of organs at risk (OAR) and clinical target volumes (CTV). Most previous DC validation studies have a limited number of expert observers for comparison and/or use a validation dataset related to the training dataset. We determine if DC models are comparable to Radiation Oncologist (RO) inter-observer variability on an independent dataset. METHODS: Expert contours (EC) were created by multiple ROs for central nervous system (CNS), head and neck (H&N), and prostate radiotherapy (RT) OARs and CTVs. DCs were generated using deep learning-based auto-segmentation software trained by a single RO on publicly available data. Contours were compared using Dice Similarity Coefficient (DSC) and 95% Hausdorff distance (HD). RESULTS: Sixty planning CT scans had 2-4 ECs, for a total of 60 CNS, 53 H&N, and 50 prostate RT contour sets. The mean DC and EC contouring times were 0.4 vs 7.7 min for CNS, 0.6 vs 26.6 min for H&N, and 0.4 vs 21.3 min for prostate RT contours. There were minimal differences in DSC and 95% HD involving DCs for OAR comparisons, but more noticeable differences for CTV comparisons. CONCLUSIONS: The accuracy of DCs trained by a single RO is comparable to expert inter-observer variability for the RT planning contours in this study. Use of deep learning-based auto-segmentation in clinical practice will likely lead to significant benefits to RT planning workflow and resources.
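
The 95% Hausdorff distance used alongside DSC replaces the maximum surface-to-surface distance with its 95th percentile, making the metric robust to outlier points; a sketch over precomputed surface point sets:

```python
import numpy as np
from scipy.spatial.distance import cdist

def hd95(pts_a, pts_b):
    """Symmetric 95th-percentile Hausdorff distance between two
    (N, 3) arrays of surface points."""
    d = cdist(pts_a, pts_b)
    return max(np.percentile(d.min(axis=1), 95),
               np.percentile(d.min(axis=0), 95))
```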

Comparing nonrigid registration techniques for motion corrected MR prostate diffusion imaging

  • Buerger, C
  • Sénégas, J
  • Kabus, S
  • Carolus, H
  • Schulz, H
  • Agarwal, H
  • Turkbey, B
  • Choyke, PL
  • Renisch, S
Medical Physics 2015 Journal Article, cited 4 times
Website
PURPOSE: T2-weighted magnetic resonance imaging (MRI) is commonly used for anatomical visualization in the pelvis area, such as the prostate, with high soft-tissue contrast. MRI can also provide functional information, such as diffusion-weighted imaging (DWI), which depicts the molecular diffusion processes in biological tissues. The combination of anatomical and functional imaging techniques is widely used in oncology, e.g., for prostate cancer diagnosis and staging. However, acquisition-specific distortions as well as physiological motion lead to misalignments between T2 and DWI and consequently to a reduced diagnostic value. Image registration algorithms are commonly employed to correct for such misalignment. METHODS: The authors compare the performance of five state-of-the-art nonrigid image registration techniques for accurate image fusion of DWI with T2. RESULTS: Image data of 20 prostate patients with cancerous lesions or cysts were acquired. All registration algorithms were validated using intensity-based as well as landmark-based techniques. CONCLUSIONS: The authors' results show that the "fast elastic image registration" provides the most accurate results, with a target registration error of 1.07 +/- 0.41 mm at minimum execution times of 11 +/- 1 s.
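
The target registration error (TRE) reported above (1.07 +/- 0.41 mm) is a landmark-based measure: the mean Euclidean distance between corresponding anatomical landmarks after registration. A minimal sketch:

```python
import numpy as np

def target_registration_error(landmarks_fixed, landmarks_warped):
    """Mean and standard deviation (in mm, given mm coordinates) of the
    Euclidean distances between corresponding landmark pairs."""
    f = np.asarray(landmarks_fixed, dtype=float)   # shape (n_landmarks, 3)
    w = np.asarray(landmarks_warped, dtype=float)
    errors = np.linalg.norm(f - w, axis=1)
    return errors.mean(), errors.std()
```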

Comparison Between Radiological Semantic Features and Lung-RADS in Predicting Malignancy of Screen-Detected Lung Nodules in the National Lung Screening Trial

  • Li, Qian
  • Balagurunathan, Yoganand
  • Liu, Ying
  • Qi, Jin
  • Schabath, Matthew B
  • Ye, Zhaoxiang
  • Gillies, Robert J
Clinical Lung Cancer 2017 Journal Article, cited 3 times
Website

Comparison of Accuracy of Color Spaces in Cell Features Classification in Images of Leukemia types ALL and MM

  • Espinoza-Del Angel, Cinthia
  • Femat-Diaz, Aurora
2022 Journal Article, cited 0 times
Website
This study presents a methodology for identifying the color space that provides the best performance in an image processing application. When measurements are performed without selecting the appropriate color model, the accuracy of the results is considerably altered. This is particularly significant when a diagnosis is based on stained cell microscopy images. This work shows how the proper selection of the color model provides better characterization of two types of cancer, acute lymphoid leukemia and multiple myeloma. The methodology uses images from a public database. First, the nuclei are segmented, and then statistical moments are calculated for class identification. Next, a principal component analysis is performed to reduce the extracted features and identify the most significant ones. Finally, the predictive model is evaluated using the k-nearest neighbor algorithm and a confusion matrix. For the images used, the results showed that the CIE L*a*b color space best characterized the analyzed cancer types, with an average accuracy of 95.52%. With an accuracy of 91.81%, the RGB and CMY spaces followed. The HSI and HSV spaces had accuracies of 87.86% and 89.39%, respectively, and the worst performer was grayscale with an accuracy of 55.56%.
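
The described pipeline (L*a*b* conversion, statistical moments over segmented nuclei, k-nearest-neighbor classification) can be sketched with scikit-image and scikit-learn as below. Function names and the choice of k are illustrative assumptions, not taken from the paper:

```python
import numpy as np
from skimage.color import rgb2lab
from scipy.stats import skew, kurtosis
from sklearn.neighbors import KNeighborsClassifier

def nucleus_moments(rgb_image, nucleus_mask):
    """Statistical moments of each L*a*b* channel over the segmented nucleus."""
    lab = rgb2lab(rgb_image)
    feats = []
    for channel in range(3):
        vals = lab[..., channel][nucleus_mask.astype(bool)]
        feats += [vals.mean(), vals.std(), skew(vals), kurtosis(vals)]
    return np.array(feats)

# Hypothetical usage: 'samples' is a list of (image, mask) pairs, 'labels' ALL/MM.
# X = np.stack([nucleus_moments(img, m) for img, m in samples])
# clf = KNeighborsClassifier(n_neighbors=5).fit(X, labels)
```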

Comparison of Active Learning Strategies Applied to Lung Nodule Segmentation in CT Scans

  • Zotova, Daria
  • Lisowska, Aneta
  • Anderson, Owen
  • Dilys, Vismantas
  • O’Neil, Alison
2019 Book Section, cited 0 times
Supervised machine learning techniques require large amounts of annotated training data to attain good performance. Active learning aims to ease the data collection process by automatically detecting which instances an expert should annotate in order to train a model as quickly and effectively as possible. Such strategies have been previously reported for medical imaging, but for tasks other than focal pathologies, where there is high class imbalance and heterogeneous background appearance. In this study we evaluate different data selection approaches (random, uncertain, and representative sampling) and a semi-supervised model training procedure (pseudo-labelling), in the context of lung nodule segmentation in CT volumes from the publicly available LIDC-IDRI dataset. We find that active learning strategies allow us to train a model with equal performance but less than half of the annotation effort; data selection by uncertainty sampling offers the most gain, with the incorporation of representativeness or the addition of pseudo-labelling giving further small improvements. We conclude that active learning is a valuable tool and that further development of these strategies can play a key role in making diagnostic algorithms viable.
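
Of the data selection strategies compared, uncertainty sampling is the simplest to sketch: score each unlabeled volume by how close the model's voxel probabilities are to 0.5 and send the top-scoring cases to the annotator. The predict interface below is a placeholder, not an API from the paper:

```python
import numpy as np

def select_most_uncertain(predict_fn, unlabeled_volumes, budget=10):
    """Return indices of the volumes the current model is least sure about.

    predict_fn: callable mapping a volume to per-voxel foreground
    probabilities in [0, 1] (hypothetical interface).
    """
    scores = []
    for volume in unlabeled_volumes:
        p = predict_fn(volume)
        # 1 - |2p - 1| is 1 at p = 0.5 (maximal uncertainty), 0 at p in {0, 1}.
        scores.append(np.mean(1.0 - np.abs(2.0 * p - 1.0)))
    return np.argsort(scores)[::-1][:budget]  # most uncertain first
```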

Comparison of Automatic Seed Generation Methods for Breast Tumor Detection Using Region Growing Technique

  • Melouah, Ahlem
2015 Book Section, cited 7 times
Website

Comparison of CT and MRI images for the prediction of soft-tissue sarcoma grading and lung metastasis via a convolutional neural networks model

  • Zhang, L.
  • Ren, Z.
Clin Radiol 2019 Journal Article, cited 0 times
Website
AIM: To realise the automated prediction of soft-tissue sarcoma (STS) grading and lung metastasis based on computed tomography (CT), T1-weighted (T1W) magnetic resonance imaging (MRI), and fat-suppressed T2-weighted MRI (FST2W) via the convolutional neural networks (CNN) model. MATERIALS AND METHODS: MRI and CT images of 51 patients diagnosed with STS were analysed retrospectively. The patients could be divided into three groups based on disease grading: high-grade group (n=28), intermediate-grade group (n=15), and low-grade group (n=8). Among these patients, 32 had lung metastasis, while the remaining 19 had no lung metastasis. The data were divided into the training, validation, and testing groups according to the ratio of 5:2:3. The receiver operating characteristic (ROC) curves and accuracy values were acquired using the testing dataset to evaluate the performance of the CNN model. RESULTS: For STS grading, the accuracy of the T1W, FST2W, CT, and the fusion of T1W and FST2W testing data were 0.86, 0.89, 0.86, and 0.85, respectively. The corresponding area under the curve (AUC) values were 0.96, 0.97, 0.97, and 0.94, respectively. For the prediction of lung metastasis, the accuracy of the T1W, FST2W, CT, and the fusion of T1W and FST2W test data were 0.92, 0.93, 0.88, and 0.91, respectively. The corresponding AUC values were 0.97, 0.96, 0.95, and 0.95, respectively. FST2W MRI performed best for predicting STS grading and lung metastasis. CONCLUSION: MRI and CT images combined with the CNN model can be useful for making predictions regarding STS grading and lung metastasis, thus providing help for patient diagnosis and treatment.

A comparison of ground truth estimation methods

  • Biancardi, Alberto M
  • Jirapatnakul, Artit C
  • Reeves, Anthony P
International Journal of Computer Assisted Radiology and Surgery 2010 Journal Article, cited 17 times
Website
PURPOSE: Knowledge of the exact shape of a lesion, or ground truth (GT), is necessary for the development of diagnostic tools by means of algorithm validation, measurement metric analysis, and accurate size estimation. Four methods that estimate GTs from multiple readers' documentations by considering the spatial location of voxels were compared: thresholded Probability-Map at 0.50 (TPM(0.50)) and at 0.75 (TPM(0.75)), simultaneous truth and performance level estimation (STAPLE) and truth estimate from self distances (TESD). METHODS: A subset of the publicly available Lung Image Database Consortium archive was used, selecting pulmonary nodules documented by all four radiologists. The pair-wise similarities between the estimated GTs were analyzed by computing the respective Jaccard coefficients. Then, with respect to the readers' marking volumes, the estimated volumes were ranked and the sign test of the differences between them was performed. RESULTS: (a) the rank variations among the four methods and the volume differences between STAPLE and TESD are not statistically significant; (b) TPM(0.50) estimates are statistically larger; (c) TPM(0.75) estimates are statistically smaller; (d) there is some spatial disagreement in the estimates as the one-sided 90% confidence intervals between TPM(0.75) and TPM(0.50), TPM(0.75) and STAPLE, TPM(0.75) and TESD, TPM(0.50) and STAPLE, TPM(0.50) and TESD, STAPLE and TESD, respectively, show: [0.67, 1.00], [0.67, 1.00], [0.77, 1.00], [0.93, 1.00], [0.85, 1.00], [0.85, 1.00]. CONCLUSIONS: The method used to estimate the GT is important: the differences highlighted that STAPLE and TESD, notwithstanding a few weaknesses, appear to be equally viable as GT estimators, while the increased availability of computing power is decreasing the appeal afforded to TPMs. Ultimately, the choice of which of the two GT estimation methods should be preferred depends on the specific characteristics of the marked data, with respect to the two elements that differentiate the approaches: the relative reliabilities of the readers and the reliability of the region boundaries.
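
Of the four estimators compared, the thresholded probability maps are the easiest to reproduce: a voxel is kept if at least 50% (TPM(0.50)) or 75% (TPM(0.75)) of the readers marked it. A minimal sketch, together with the Jaccard coefficient used for the pair-wise comparisons:

```python
import numpy as np

def thresholded_probability_map(reader_masks, threshold=0.5):
    """Voxel-wise vote across readers: TPM(0.50) with threshold=0.5,
    TPM(0.75) with threshold=0.75."""
    stack = np.stack([m.astype(float) for m in reader_masks])
    return stack.mean(axis=0) >= threshold

def jaccard(a, b):
    """Intersection over union of two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0
```

The higher the threshold, the smaller the estimate, which matches the finding that TPM(0.75) estimates are statistically smaller and TPM(0.50) estimates statistically larger.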

Comparison of iterative parametric and indirect deep learning-based reconstruction methods in highly undersampled DCE-MR Imaging of the breast

  • Rastogi, A.
  • Yalavarthy, P. K.
Med Phys 2020 Journal Article, cited 0 times
Website
PURPOSE: To compare the performance of iterative direct and indirect parametric reconstruction methods with indirect deep learning-based reconstruction methods in estimating tracer-kinetic parameters from highly undersampled DCE-MR imaging breast data, and to provide a systematic comparison of the same. METHODS: Estimation of tracer-kinetic parameters from undersampled data using indirect methods requires the anatomical images to be reconstructed first by solving an inverse problem. These reconstructed images are in turn utilized to estimate the tracer-kinetic parameters. In direct estimation, the parameters are estimated without reconstructing the anatomical images. Both problems are ill-posed and are typically solved using prior-based regularization or using deep learning. In this study, for indirect estimation, two deep learning-based reconstruction frameworks, namely ISTA-Net(+) and MODL, were utilized. For direct and indirect parametric estimation, sparsity-inducing priors (L1 and Total-Variation) were used, with the limited-memory Broyden-Fletcher-Goldfarb-Shanno algorithm deployed as the solver. The performance of these techniques was compared systematically in the estimation of vascular permeability (K(trans)) from undersampled DCE-MRI breast data using Patlak as the pharmacokinetic model. The experiments involved retrospective undersampling of the data by 20x, 50x, and 100x and compared the results using the PSNR, nRMSE, SSIM, and Xydeas metrics. The K(trans) maps estimated from fully sampled data were utilized as the ground truth. The developed code was made available open-source at https://github.com/Medical-Imaging-Group/DCE-MRI-Compare. RESULTS: The reconstruction methods' performance was evaluated using breast data of ten patients (five patients each for training and testing). Consistent with other studies, the results indicate that direct parametric reconstruction methods provide improved performance compared to the indirect parametric reconstruction methods. The results also indicate that for 20x undersampling, deep learning-based methods perform better than or on par with direct estimation in terms of PSNR, SSIM, and nRMSE. However, for higher undersampling rates (50x and 100x), direct estimation performs better on all metrics. For all undersampling rates, direct reconstruction performed better in terms of the Xydeas metric, which indicates fidelity in the magnitude and orientation of edges. CONCLUSION: Deep learning-based indirect techniques perform on par with direct estimation techniques for lower undersampling rates in breast DCE-MR imaging. At higher undersampling rates, they are not able to provide the much-needed generalization. Direct estimation techniques are able to provide more accurate results than both deep learning-based and parametric-based indirect methods in these high undersampling scenarios.
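
The Patlak model used here is linear in its parameters, C_t(t) = Ktrans * integral(C_p) + v_p * C_p(t), so K(trans) can be estimated per voxel by ordinary least squares once the arterial input function C_p is known. A minimal sketch of that fit (not the authors' solver, which used L-BFGS with sparsity priors):

```python
import numpy as np

def patlak_fit(t, c_tissue, c_plasma):
    """Least-squares estimate of (Ktrans, v_p) from the Patlak model
    C_t(t) = Ktrans * integral_0^t C_p(tau) dtau + v_p * C_p(t).

    t, c_tissue, c_plasma: 1-D NumPy arrays sampled at the same time points.
    """
    # Cumulative trapezoidal integral of the arterial input function.
    integral_cp = np.concatenate((
        [0.0], np.cumsum(0.5 * (c_plasma[1:] + c_plasma[:-1]) * np.diff(t))))
    design = np.column_stack([integral_cp, c_plasma])
    (ktrans, v_p), *_ = np.linalg.lstsq(design, c_tissue, rcond=None)
    return ktrans, v_p
```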

A Comparison of Lung Nodule Segmentation Algorithms: Methods and Results from a Multi-institutional Study

  • Kalpathy-Cramer, Jayashree
  • Zhao, Binsheng
  • Goldgof, Dmitry
  • Gu, Yuhua
  • Wang, Xingwei
  • Yang, Hao
  • Tan, Yongqiang
  • Gillies, Robert
  • Napel, Sandy
2016 Journal Article, cited 18 times
Website
Tumor volume estimation, as well as accurate and reproducible borders segmentation in medical images, are important in the diagnosis, staging, and assessment of response to cancer therapy. The goal of this study was to demonstrate the feasibility of a multi-institutional effort to assess the repeatability and reproducibility of nodule borders and volume estimate bias of computerized segmentation algorithms in CT images of lung cancer, and to provide results from such a study. The dataset used for this evaluation consisted of 52 tumors in 41 CT volumes (40 patient datasets and 1 dataset containing scans of 12 phantom nodules of known volume) from five collections available in The Cancer Imaging Archive. Three academic institutions developing lung nodule segmentation algorithms submitted results for three repeat runs for each of the nodules. We compared the performance of lung nodule segmentation algorithms by assessing several measurements of spatial overlap and volume measurement. Nodule sizes varied from 29 μl to 66 ml and demonstrated a diversity of shapes. Agreement in spatial overlap of segmentations was significantly higher for multiple runs of the same algorithm than between segmentations generated by different algorithms (p < 0.05) and was significantly higher on the phantom dataset compared to the other datasets (p < 0.05). Algorithms differed significantly in the bias of the measured volumes of the phantom nodules (p < 0.05), underscoring the need for assessing performance on clinical data in addition to phantoms. Algorithms that most accurately estimated nodule volumes were not the most repeatable, emphasizing the need to evaluate both their accuracy and precision. There were considerable differences between algorithms, especially in a subset of heterogeneous nodules, underscoring the recommendation that the same software be used at all time points in longitudinal studies.

Comparison of methods for sensitivity correction in Talbot-Lau computed tomography

  • Felsner, L.
  • Roser, P.
  • Maier, A.
  • Riess, C.
Int J Comput Assist Radiol Surg 2021 Journal Article, cited 0 times
Website
PURPOSE: In Talbot-Lau X-ray phase contrast imaging, the measured phase value depends on the position of the object in the measurement setup. When imaging large objects, this may lead to inhomogeneous phase contributions within the object. These inhomogeneities introduce artifacts in tomographic reconstructions of the object. METHODS: In this work, we compare recently proposed approaches to correct such reconstruction artifacts. We compare an iterative reconstruction algorithm, a known operator network and a U-net. The methods are qualitatively and quantitatively compared on the Shepp-Logan phantom and on the anatomy of a human abdomen. We also perform a dedicated experiment on the noise behavior of the methods. RESULTS: All methods were able to reduce the specific artifacts in the reconstructions for both the simulated phantom data and the virtual anatomy data. The results show method-specific residual errors that are indicative of the inherently different correction approaches. While all methods were able to correct the artifacts, we report a different noise behavior. CONCLUSION: The iterative reconstruction performs very well, but at the cost of a high runtime. The known operator network consistently shows a very competitive performance. The U-net performs slightly worse, but has the benefit that it is a general-purpose network that does not require special application knowledge.

Comparison of novel multi-level Otsu (MO-PET) and conventional PET segmentation methods for measuring FDG metabolic tumor volume in patients with soft tissue sarcoma

  • Lee, Inki
  • Im, Hyung-Jun
  • Solaiyappan, Meiyappan
  • Cho, Steve Y
EJNMMI physics 2017 Journal Article, cited 0 times
Website

Comparison of performances of conventional and deep learning-based methods in segmentation of lung vessels and registration of chest radiographs

  • Guo, W.
  • Gu, X.
  • Fang, Q.
  • Li, Q.
Radiol Phys Technol 2020 Journal Article, cited 0 times
Website
Conventional machine learning-based methods have been effective in assisting physicians in making accurate decisions and have been utilized in computer-aided diagnosis for more than 30 years. Recently, deep learning-based methods, and convolutional neural networks in particular, have rapidly become the preferred options in medical image analysis because of their state-of-the-art performance. However, the performances of conventional and deep learning-based methods cannot be compared reliably when they are evaluated on different datasets. Hence, we developed both conventional and deep learning-based methods for lung vessel segmentation and chest radiograph registration, and subsequently compared their performances on the same datasets. The results strongly indicated the superiority of deep learning-based methods over their conventional counterparts.

Comparison of Safety Margin Generation Concepts in Image Guided Radiotherapy to Account for Daily Head and Neck Pose Variations

  • Stoll, Markus
  • Stoiber, Eva Maria
  • Grimm, Sarah
  • Debus, Jürgen
  • Bendl, Rolf
  • Giske, Kristina
PLoS One 2016 Journal Article, cited 2 times
Website
PURPOSE: Intensity modulated radiation therapy (IMRT) of head and neck tumors allows a precise conformation of the high-dose region to clinical target volumes (CTVs) while respecting dose limits to organs at risk (OARs). Accurate patient setup reduces translational and rotational deviations between therapy planning and therapy delivery days. However, uncertainties in the shape of the CTV and OARs due to e.g. small pose variations in the highly deformable anatomy of the head and neck region can still compromise the dose conformation. Routinely applied safety margins around the CTV cause higher dose deposition in adjacent healthy tissue and should be kept as small as possible. MATERIALS AND METHODS: In this work we evaluate and compare three approaches for margin generation: 1) a clinically used approach with a constant isotropic 3 mm margin, 2) a previously proposed approach adopting a spatial model of the patient and 3) a newly developed approach adopting a biomechanical model of the patient. All approaches are retrospectively evaluated using a large patient cohort of over 500 fraction control CT images with heterogeneous pose changes. Automatic methods for finding landmark positions in the control CT images are combined with a patient-specific biomechanical finite element model to evaluate the CTV deformation. RESULTS: The applied methods for deformation modeling show that the pose changes cause deformations in the target region with a mean motion magnitude of 1.80 mm. We found that the CTV size can be reduced by the two variable margin approaches by 15.6% and 13.3%, respectively, while maintaining the CTV coverage. With approach 3, an increase in target coverage was obtained. CONCLUSION: Variable margins increase target coverage, reduce risk to OARs and improve healthy tissue sparing at the same time.

Comparison of segmentation-free and segmentation-dependent computer-aided diagnosis of breast masses on a public mammography dataset

  • Sawyer Lee, Rebecca
  • Dunnmon, Jared A
  • He, Ann
  • Tang, Siyi
  • Re, Christopher
  • Rubin, Daniel L
J Biomed Inform 2021 Journal Article, cited 1 times
Website
PURPOSE: To compare machine learning methods for classifying mass lesions on mammography images that use predefined image features computed over lesion segmentations to those that leverage segmentation-free representation learning on a standard, public evaluation dataset. METHODS: We apply several classification algorithms to the public Curated Breast Imaging Subset of the Digital Database for Screening Mammography (CBIS-DDSM), in which each image contains a mass lesion. Segmentation-free representation learning techniques for classifying lesions as benign or malignant include both a Bag-of-Visual-Words (BoVW) method and a Convolutional Neural Network (CNN). We compare classification performance of these techniques to that obtained using two different segmentation-dependent approaches from the literature that rely on specific combinations of end classifiers (e.g. linear discriminant analysis, neural networks) and predefined features computed over the lesion segmentation (e.g. spiculation measure, morphological characteristics, intensity metrics). RESULTS: We report area under the receiver operating characteristic curve (AZ) values for malignancy classification on CBIS-DDSM for each technique. We find average AZ values of 0.73 for a segmentation-free BoVW method, 0.86 for a segmentation-free CNN method, 0.75 for a segmentation-dependent linear discriminant analysis of Rubber-Band Straightening Transform features, and 0.58 for a hybrid rule-based neural network classification using a small number of hand-designed features. CONCLUSIONS: We find that malignancy classification performance on the CBIS-DDSM dataset using segmentation-free BoVW features is comparable to that of the best segmentation-dependent methods we study, but also observe that a common segmentation-free CNN model substantially and significantly outperforms each of these (p < 0.05). These results reinforce recent findings suggesting that representation learning techniques such as BoVW and CNNs are advantageous for mammogram analysis because they do not require lesion segmentation, the quality and specific characteristics of which can vary substantially across datasets. We further observe that segmentation-dependent methods achieve performance levels on CBIS-DDSM inferior to those achieved on the original evaluation datasets reported in the literature. Each of these findings reinforces the need for standardization of datasets, segmentation techniques, and model implementations in performance assessments of automated classifiers for medical imaging.
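
A Bag-of-Visual-Words feature, one of the segmentation-free representations compared above, can be sketched as follows: cluster densely sampled patches into a codebook, then describe each image by its normalized histogram of codeword assignments. Patch size and codebook size below are illustrative assumptions, not values from the paper:

```python
import numpy as np
from sklearn.cluster import KMeans

def dense_patches(image, size=8, stride=8):
    """Flattened grayscale patches sampled on a regular grid."""
    h, w = image.shape
    return np.array([image[i:i + size, j:j + size].ravel()
                     for i in range(0, h - size + 1, stride)
                     for j in range(0, w - size + 1, stride)])

def bovw_histogram(image, codebook: KMeans):
    """Fixed-length, segmentation-free descriptor of a mammogram image."""
    words = codebook.predict(dense_patches(image))
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / hist.sum()

# Hypothetical usage with a list of training images 'train_imgs':
# codebook = KMeans(n_clusters=256).fit(
#     np.vstack([dense_patches(im) for im in train_imgs]))
# X = np.stack([bovw_histogram(im, codebook) for im in train_imgs])
```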

Comparison of Supervised and Unsupervised Approaches for the Generation of Synthetic CT from Cone-Beam CT

  • Rossi, M.
  • Cerveri, P.
Diagnostics (Basel) 2021 Journal Article, cited 0 times
Website
Due to major artifacts and uncalibrated Hounsfield units (HU), cone-beam computed tomography (CBCT) cannot be used readily for diagnostics and therapy planning purposes. This study addresses image-to-image translation by convolutional neural networks (CNNs) to convert CBCT to CT-like scans, comparing supervised to unsupervised training techniques and exploiting a publicly available pelvic CT/CBCT dataset. Interestingly, quantitative results were in favor of the supervised over the unsupervised approach, showing improvements in HU accuracy (62% vs. 50%), structural similarity index (2.5% vs. 1.1%) and peak signal-to-noise ratio (15% vs. 8%). Qualitative results conversely showcased higher anatomical artifacts in the synthetic CT generated by the supervised techniques. This was motivated by the higher sensitivity of the supervised training technique to the pixel-wise correspondence contained in the loss function. The unsupervised technique does not require correspondence and mitigates this drawback as it combines adversarial, cycle consistency, and identity loss functions. Overall, two main impacts qualify the paper: (a) the feasibility of CNNs to generate accurate synthetic CT from CBCT images, which is fast and easy to use compared to traditional techniques applied in clinics; (b) the proposal of guidelines to drive the selection of the better training technique, which can be shifted to more general image-to-image translation.
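
The unsupervised technique mentioned combines adversarial, cycle-consistency, and identity losses in the CycleGAN style. The two non-adversarial terms can be sketched in PyTorch as below; the generator names and loss weights are assumptions for illustration, not the published configuration:

```python
import torch.nn.functional as F

def cycle_and_identity_losses(G_cbct2ct, G_ct2cbct, cbct, ct,
                              lambda_cyc=10.0, lambda_id=5.0):
    """Cycle-consistency and identity terms of an unpaired CBCT-to-CT model.
    The adversarial terms from the two discriminators are omitted for brevity."""
    fake_ct = G_cbct2ct(cbct)
    fake_cbct = G_ct2cbct(ct)
    # Cycle consistency: translating forth and back should recover the input.
    loss_cyc = (F.l1_loss(G_ct2cbct(fake_ct), cbct) +
                F.l1_loss(G_cbct2ct(fake_cbct), ct))
    # Identity: a generator fed an image already in its target domain
    # should leave it (nearly) unchanged.
    loss_id = (F.l1_loss(G_cbct2ct(ct), ct) +
               F.l1_loss(G_ct2cbct(cbct), cbct))
    return lambda_cyc * loss_cyc + lambda_id * loss_id
```

This is why the unsupervised model needs no pixel-wise correspondence: none of these terms compares a synthetic CT against a paired ground-truth CT.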

A Comparison of the Efficiency of Using a Deep CNN Approach with Other Common Regression Methods for the Prediction of EGFR Expression in Glioblastoma Patients

  • Hedyehzadeh, Mohammadreza
  • Maghooli, Keivan
  • MomenGharibvand, Mohammad
  • Pistorius, Stephen
J Digit Imaging 2020 Journal Article, cited 0 times
Website
To estimate the epidermal growth factor receptor (EGFR) expression level in glioblastoma (GBM) patients using radiogenomic analysis of magnetic resonance images (MRI). A comparative study using a deep convolutional neural network (CNN)-based regression, a deep neural network, least absolute shrinkage and selection operator (LASSO) regression, elastic net regression, and linear regression with no regularization was carried out to estimate the EGFR expression of 166 GBM patients. Except for the deep CNN case, overfitting was prevented by using feature selection, and the loss values for each method were compared. The loss values in the training phase for the deep CNN, deep neural network, elastic net, LASSO, and linear regression with no regularization were 2.90, 8.69, 7.13, 14.63, and 21.76, respectively, while in the test phase, the loss values were 5.94, 10.28, 13.61, 17.32, and 24.19, respectively. These results illustrate that the efficiency of the deep CNN approach is better than that of the other methods, including LASSO regression, a regression method known for its advantage in high-dimensional cases. A comparison between the deep CNN, the deep neural network, and three other common regression methods was carried out, and the efficiency of the deep CNN approach, in comparison with other regression models, was demonstrated.
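
The three classical baselines compared against the deep models are all available in scikit-learn. A self-contained sketch on synthetic data (the real study used imaging features from 166 GBM patients; the feature count, alpha values and split are illustrative):

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression, Lasso, ElasticNet
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# Synthetic stand-in for radiogenomic features (X) and EGFR expression (y).
X, y = make_regression(n_samples=166, n_features=200, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)
models = {
    "linear (no regularization)": LinearRegression(),
    "LASSO": Lasso(alpha=0.1),
    "elastic net": ElasticNet(alpha=0.1, l1_ratio=0.5),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(name, "test MSE:", mean_squared_error(y_test, model.predict(X_test)))
```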

A comparison of two methods for estimating DCE-MRI parameters via individual and cohort based AIFs in prostate cancer: A step towards practical implementation

  • Fedorov, Andriy
  • Fluckiger, Jacob
  • Ayers, Gregory D
  • Li, Xia
  • Gupta, Sandeep N
  • Tempany, Clare
  • Mulkern, Robert
  • Yankeelov, Thomas E
  • Fennessy, Fiona M
Magnetic resonance imaging 2014 Journal Article, cited 30 times
Website
Multi-parametric Magnetic Resonance Imaging, and specifically Dynamic Contrast Enhanced (DCE) MRI, plays an increasingly important role in the detection and staging of prostate cancer (PCa). One of the actively investigated approaches to DCE MRI analysis involves pharmacokinetic (PK) modeling to extract quantitative parameters that may be related to microvascular properties of the tissue. It is well-known that the prescribed arterial blood plasma concentration (or Arterial Input Function, AIF) input can have significant effects on the parameters estimated by PK modeling. The purpose of our study was to investigate such effects in DCE MRI data acquired in a typical clinical PCa setting. First, we investigated how the choice of a semi-automated or fully automated image-based individualized AIF (iAIF) estimation method affects the PK parameter values; and second, we examined the use of method-specific averaged AIFs (cohort-based, or cAIF) as a means to attenuate the differences between the two AIF estimation methods. Two methods for automated image-based estimation of individualized (patient-specific) AIFs, one of which was previously validated for brain and the other for breast MRI, were compared. cAIFs were constructed by averaging the iAIF curves over the individual patients for each of the two methods. Pharmacokinetic analysis using the Generalized kinetic model and each of the four AIF choices (iAIF and cAIF for each of the two image-based AIF estimation approaches) was applied to derive the volume transfer rate (K(trans)) and extravascular extracellular volume fraction (ve) in the areas of prostate tumor. Differences between the parameters obtained using iAIF and cAIF for a given method (intra-method comparison) as well as inter-method differences were quantified. The study utilized DCE MRI data collected in 17 patients with histologically confirmed PCa. Comparison at the level of the tumor region of interest (ROI) showed that the two automated methods resulted in significantly different (p<0.05) mean estimates of ve, but not of K(trans). Comparing cAIFs, different estimates for both ve and K(trans) were obtained. Intra-method comparison between the iAIF- and cAIF-driven analyses showed the lack of effect on ve, while K(trans) values were significantly different for one of the methods. Our results indicate that the choice of the algorithm used for automated image-based AIF determination can lead to significant differences in the values of the estimated PK parameters. K(trans) estimates are more sensitive to the choice between cAIF/iAIF as compared to ve, leading to potentially significant differences depending on the AIF method. These observations may have practical consequences in evaluating the PK analysis results obtained in a multi-site setting.

Compressibility variations of JPEG2000 compressed computed tomography

  • Pambrun, Jean-Francois
  • Noumeir, Rita
2013 Conference Proceedings, cited 3 times
Website

Computational assessment of stomach tumor volume from multi-slice computerized tomography images in presence of type 2 cancer

  • Chacón, Gerardo
  • Rodríguez, Johel E
  • Bermúdez, Valmore
  • Vera, Miguel
  • Hernández, Juan Diego
  • Vargas, Sandra
  • Pardo, Aldo
  • Lameda, Carlos
  • Madriz, Delia
  • Bravo, Antonio J
F1000Research 2018 Journal Article, cited 0 times
Website
Background: Multi-slice computerized tomography (MSCT) is a medical imaging modality that has been used to determine the size and location of stomach cancer. Additionally, MSCT is considered the best modality for the staging of gastric cancer. One way to assess type 2 stomach cancer is by detecting the pathological structure with an image segmentation approach. The tumor segmentation of MSCT gastric cancer images enables the diagnosis of the disease condition, for a given patient, without using an invasive method such as surgical intervention. Methods: This approach consists of three stages. The initial stage, image enhancement, consists of a method for correcting non-homogeneities present in the background of MSCT images. Then, a segmentation stage using a clustering method allows the adenocarcinoma morphology to be obtained. In the third stage, the pathology region is reconstructed and then visualized with a three-dimensional (3-D) computer graphics procedure based on the marching cubes algorithm. In order to validate the segmentations, the Dice score is used as a metric for comparing the segmentations obtained using the proposed method with respect to ground truth volumes traced by a clinician. Results: A total of 8 datasets of diagnosed patients, available from the Cancer Genome Atlas Stomach Adenocarcinoma (TCGA-STAD) cancer data collection, are considered in this research. The volume of the type 2 stomach tumor is estimated from the 3-D shape computationally segmented from each dataset. These 3-D shapes are computationally reconstructed and then used to assess the macroscopic morphopathological features of this cancer. Conclusions: The segmentations obtained are useful for qualitatively and quantitatively assessing type 2 stomach cancer. In addition, this type of segmentation enables the development of computational models that allow the planning of virtual surgical processes related to type 2 cancer.
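
The third stage, 3-D reconstruction with marching cubes followed by volume estimation, maps directly onto scikit-image. The spacing default and the voxel-count volume estimate below are illustrative simplifications, not the authors' code:

```python
import numpy as np
from skimage.measure import marching_cubes

def reconstruct_tumor(binary_volume, spacing=(1.0, 1.0, 1.0)):
    """Surface mesh of a segmented tumor plus a voxel-count volume estimate."""
    verts, faces, _normals, _values = marching_cubes(
        binary_volume.astype(float), level=0.5, spacing=spacing)
    # Simple volume estimate: number of foreground voxels times voxel volume.
    volume_mm3 = binary_volume.astype(bool).sum() * np.prod(spacing)
    return verts, faces, volume_mm3
```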

Computational Challenges and Collaborative Projects in the NCI Quantitative Imaging Network

  • Farahani, Keyvan
  • Kalpathy-Cramer, Jayashree
  • Chenevert, Thomas L
  • Rubin, Daniel L
  • Sunderland, John J
  • Nordstrom, Robert J
  • Buatti, John
  • Hylton, Nola
Tomography 2016 Journal Article, cited 2 times
Website
The Quantitative Imaging Network (QIN) of the National Cancer Institute (NCI) conducts research in development and validation of imaging tools and methods for predicting and evaluating clinical response to cancer therapy. Members of the network are involved in examining various imaging and image assessment parameters through network-wide cooperative projects. To more effectively use the cooperative power of the network in conducting computational challenges in benchmarking of tools and methods and collaborative projects in analytical assessment of imaging technologies, the QIN Challenge Task Force has developed policies and procedures to enhance the value of these activities by developing guidelines and leveraging NCI resources to help their administration and manage dissemination of results. Challenges and Collaborative Projects (CCPs) are further divided into technical and clinical CCPs. As the first NCI network to engage in CCPs, we anticipate a variety of CCPs to be conducted by QIN teams in the coming years. These will aim to benchmark advanced software tools for clinical decision support, explore new imaging biomarkers for therapeutic assessment, and establish consensus on a range of methods and protocols in support of the use of quantitative imaging to predict and assess response to cancer therapy.

Computational Identification of Tumor Anatomic Location Associated with Survival in 2 Large Cohorts of Human Primary Glioblastomas

  • Liu, T T
  • Achrol, A S
  • Mitchell, L A
  • Du, W A
  • Loya, J J
  • Rodriguez, S A
  • Feroze, A
  • Westbroek, E M
  • Yeom, K W
  • Stuart, J M
  • Chang, S D
  • Harsh, G R 4th
  • Rubin, D L
American Journal of Neuroradiology 2016 Journal Article, cited 6 times
Website
BACKGROUND AND PURPOSE: Tumor location has been shown to be a significant prognostic factor in patients with glioblastoma. The purpose of this study was to characterize glioblastoma lesions by identifying MR imaging voxel-based tumor location features that are associated with tumor molecular profiles, patient characteristics, and clinical outcomes. MATERIALS AND METHODS: Preoperative T1 anatomic MR images of 384 patients with glioblastomas were obtained from 2 independent cohorts (n = 253 from the Stanford University Medical Center for training and n = 131 from The Cancer Genome Atlas for validation). An automated computational image-analysis pipeline was developed to determine the anatomic locations of tumor in each patient. Voxel-based differences in tumor location between good (overall survival of >17 months) and poor (overall survival of <11 months) survival groups identified in the training cohort were used to classify patients in The Cancer Genome Atlas cohort into 2 brain-location groups, for which clinical features, messenger RNA expression, and copy number changes were compared to elucidate the biologic basis of tumors located in different brain regions. RESULTS: Tumors in the right occipitotemporal periventricular white matter were significantly associated with poor survival in both training and test cohorts (both, log-rank P < .05) and had larger tumor volume compared with tumors in other locations. Tumors in the right periatrial location were associated with hypoxia pathway enrichment and PDGFRA amplification, making them potential targets for subgroup-specific therapies. CONCLUSIONS: Voxel-based location in glioblastoma is associated with patient outcome and may have a potential role for guiding personalized treatment.

A computational model for texture analysis in images with a reaction-diffusion based filter

  • Hamid, Lefraich
  • Fahim, Houda
  • Zirhem, Mariam
  • Alaa, Nour Eddine
Journal of Mathematical Modeling 2021 Journal Article, cited 0 times
Website
As one of the most important tasks in image processing, texture analysis is related to a class of mathematical models that characterize the spatial variations of an image. In this paper, in order to extract features of interest, we propose a reaction-diffusion-based model which uses the variational approach. First, we describe the mathematical model; then, aiming to simulate it accurately, we suggest an efficient numerical scheme. Thereafter, we compare our method to literature findings. Finally, we conclude our analysis with a number of experimental results showing the robustness and the performance of our algorithm.
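
A reaction-diffusion filter of this kind is typically iterated with an explicit finite-difference scheme. The sketch below uses a generic bistable reaction term and periodic borders; it illustrates the numerical structure only and is not the authors' model:

```python
import numpy as np

def laplacian(u):
    """5-point finite-difference Laplacian with periodic borders (via np.roll)."""
    return (np.roll(u, 1, axis=0) + np.roll(u, -1, axis=0) +
            np.roll(u, 1, axis=1) + np.roll(u, -1, axis=1) - 4.0 * u)

def reaction_diffusion_filter(image, steps=100, dt=0.1, diffusivity=0.25, k=1.0):
    """Explicit Euler iteration of u_t = D * Laplacian(u) + k * u(1-u)(u-1/2)."""
    u = image.astype(float)
    u = (u - u.min()) / (u.max() - u.min() + 1e-12)  # normalize to [0, 1]
    for _ in range(steps):
        reaction = k * u * (1.0 - u) * (u - 0.5)  # bistable term pushes toward 0 or 1
        u = u + dt * (diffusivity * laplacian(u) + reaction)
    return u
```

Note that the explicit scheme requires a small time step (here dt * D = 0.025, well under the usual 2-D stability bound of 1/4).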

Computational Models for Automated Histopathological Assessment of Colorectal Liver Metastasis Progression

  • Xu, Xiaoyang
2019 Thesis, cited 0 times
Website
Histopathology imaging is a type of microscopy imaging commonly used for the micro-level clinical examination of a patient's pathology. Due to the extremely large size of histopathology images, especially whole slide images (WSIs), it is difficult for pathologists to make a quantitative assessment by inspecting the details of a WSI. Hence, a computer-aided system is necessary to provide an objective and consistent assessment of the WSI for personalised treatment decisions. In this thesis, a deep learning framework for the automatic analysis of whole slide histopathology images is presented for the first time, which aims to address the challenging task of assessing and grading colorectal liver metastasis (CRLM). Quantitative evaluations of a patient's condition with CRLM are conducted through quantifying different tissue components in resected tumorous specimens. This study mimics the visual examination process of human experts by focusing on three levels of information, the tissue level, cell level and pixel level, to achieve the step-by-step segmentation of histopathology images. At the tissue level, patches with category information are utilised to analyse the WSIs. Both classification-based approaches and segmentation-based approaches are investigated to locate the metastasis region and quantify different components of the WSI. For the classification-based method, different factors that might affect the classification accuracy are explored using state-of-the-art deep convolutional neural networks (DCNNs). Furthermore, a novel network is proposed to merge the information from different magnification levels to include contextual information to support the final decision. For the segmentation-based method, edge information from the image is integrated into the proposed fully convolutional neural network to further enhance the segmentation results. At the cell level, nuclei-related information is examined to tackle the challenge of inadequate annotations. The problem is approached from two aspects: a weakly supervised nuclei detection and classification method is presented to model the nuclei in the CRLM by integrating a traditional image processing method and a variational auto-encoder (VAE), and a novel nuclei instance segmentation framework is proposed to boost the accuracy of the nuclei detection and segmentation using the idea of transfer learning. Afterwards, a fusion framework is proposed to enhance the tissue-level segmentation results by leveraging the statistical and spatial properties of the cells. At the pixel level, the segmentation problem is tackled by introducing information from the immunohistochemistry (IHC) stained images. Firstly, two data augmentation approaches, synthesis-based and transfer-based, are proposed to address the problem of insufficient data for pixel-level segmentation. Afterwards, with the paired images and masks having been obtained, an end-to-end model is trained to achieve pixel-level segmentation. Secondly, another novel weakly supervised approach based on the generative adversarial network (GAN) is proposed to explore the feasibility of transforming unpaired haematoxylin and eosin (HE) images to IHC stained images. Extensive experiments reveal that the virtually stained images can also be used for pixel-level segmentation.

Computed Tomography (CT) Image Quality Enhancement via a Uniform Framework Integrating Noise Estimation and Super-Resolution Networks

  • Chi, Jianning
  • Zhang, Yifei
  • Yu, Xiaosheng
  • Wang, Ying
  • Wu, Chengdong
Sensors (Basel) 2019 Journal Article, cited 2 times
Website
Computed tomography (CT) imaging technology has been widely used to assist medical diagnosis in recent years. However, noise during the imaging process, and data compression during storage and transmission, can degrade the image quality, resulting in unreliable performance of the post-processing steps in computer-assisted diagnosis systems (CADs), such as medical image segmentation, feature extraction, and medical image classification. Since the degradation of medical images typically appears as noise and low-resolution blurring, in this paper we propose a uniform deep convolutional neural network (DCNN) framework to handle the de-noising and super-resolution of the CT image at the same time. The framework consists of two steps: Firstly, a dense-inception network integrating an inception structure and dense skip connections is proposed to estimate the noise level. The inception structure is used to extract the noise and blurring features with respect to multiple receptive fields, while the dense skip connections can reuse those extracted features and transfer them across the network. Secondly, a modified residual-dense network combined with a joint loss is proposed to reconstruct the high-resolution image with low noise. The inception block is applied on each skip connection of the dense-residual network so that the structure features of the image are transferred through the network more than the noise and blurring features. Moreover, both the perceptual loss and the mean square error (MSE) loss are used to restrain the network, leading to better performance in the reconstruction of image edges and details. Our proposed network integrates the degradation estimation, noise removal, and image super-resolution in one uniform framework to enhance medical image quality. We apply our method to the Cancer Imaging Archive (TCIA) public dataset to evaluate its ability in medical image quality enhancement. The experimental results demonstrate that the proposed method outperforms the state-of-the-art methods on de-noising and super-resolution by providing higher peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) values.

Computed tomography image reconstruction using stacked U-Net

  • Mizusawa, S.
  • Sei, Y.
  • Orihara, R.
  • Ohsuga, A.
Comput Med Imaging Graph 2021 Journal Article, cited 0 times
Website
Since the development of deep learning methods, many researchers have focused on image quality improvement using convolutional neural networks. They have proved its effectiveness in noise reduction, single-image super-resolution, and segmentation. In this study, we apply stacked U-Net, a deep learning method, to X-ray computed tomography image reconstruction to generate high-quality images in a short time with a small number of projections. It is not easy to create highly accurate models because medical images have few training images due to patients' privacy issues. Thus, we utilize various images from ImageNet, a widely known visual database. Results show that a cross-sectional image with a peak signal-to-noise ratio of 27.93 dB and a structural similarity of 0.886 is recovered for a 512x512 image using 360-degree rotation, 512 detectors, and 64 projections, with a processing time of 0.11 s on the GPU. Therefore, the proposed method has a shorter reconstruction time and better image quality than the existing methods.
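
The two figures of merit quoted (peak signal-to-noise ratio of 27.93 dB and structural similarity of 0.886) correspond directly to standard scikit-image metrics, as in this minimal sketch:

```python
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def reconstruction_quality(reference, reconstruction):
    """PSNR in dB and SSIM of a reconstructed CT slice against the reference."""
    data_range = reference.max() - reference.min()
    psnr = peak_signal_noise_ratio(reference, reconstruction,
                                   data_range=data_range)
    ssim = structural_similarity(reference, reconstruction,
                                 data_range=data_range)
    return psnr, ssim
```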

COMPUTER AIDED DETECTION OF LUNG CYSTS USING CONVOLUTIONAL NEURAL NETWORK (CNN)

  • Kishore Sebastian
  • S. Devi
Turkish Journal of Physiotherapy and Rehabilitation 2021 Journal Article, cited 0 times
Website
Lung cancer is one of the most lethal diseases. The survival rate will be low if the diagnosis and treatment of a lung tumour are delayed, but the survival rate and the number of lives saved can be enhanced with timely diagnosis and prompt treatment. The seriousness of the disease calls for a highly efficient system that can identify cancerous growth with a high level of accuracy. Computed tomography (CT) scans are used to obtain detailed pictures of different body parts; however, it is difficult to scrutinize the presence and extent of cancerous cells in the lungs using such scans, even for professionals. Therefore, a new model based on the Mumford-Shah model with convolutional neural network (CNN) classification is proposed in this paper. The proposed model provides output with higher efficiency and accuracy in less time. The metrics used for assessment in this system are classification accuracy, sensitivity, AUC, F-measure, specificity, precision, Brier score and MCC. Finally, the results obtained using SVM are compared in terms of these metrics with the results obtained using decision tree, KNN, CNN and adaptive boosting algorithms, which clearly shows the higher accuracy of the proposed system over existing systems.
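
The assessment metrics listed can all be computed with scikit-learn from the true labels and the predicted probabilities. A sketch (the 0.5 decision threshold is an assumption):

```python
import numpy as np
from sklearn.metrics import (accuracy_score, brier_score_loss, confusion_matrix,
                             f1_score, matthews_corrcoef, roc_auc_score)

def assessment_metrics(y_true, y_prob, threshold=0.5):
    """Accuracy, sensitivity, specificity, precision, F-measure, AUC,
    Brier score and MCC from binary labels and predicted probabilities."""
    y_pred = (np.asarray(y_prob) >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "precision": tp / (tp + fp),
        "f_measure": f1_score(y_true, y_pred),
        "auc": roc_auc_score(y_true, y_prob),
        "brier_score": brier_score_loss(y_true, y_prob),
        "mcc": matthews_corrcoef(y_true, y_pred),
    }
```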

Computer Aided Detection Scheme To Improve The Prognosis Assessment Of Early Stage Lung Cancer Patients

  • Athira, KV
  • Nithin, SS
Computer 2018 Journal Article, cited 0 times
Website
To develop a computer-aided detection scheme to predict the stage 1 non-small cell lung cancer (NSCLC) recurrence risk in lung cancer patients after surgery. Using chest computed tomography images taken before surgery, this system automatically segments the tumor seen on CT images and extracts tumor-related morphological and texture-based image features. We trained a Naïve Bayesian network classifier using six image features, and an ANN classifier using two genomic biomarkers, namely protein expression of the excision repair cross-complementing 1 gene (ERCC1) and of a regulatory subunit of ribonucleotide reductase (RRM1), to predict the cancer recurrence risk. We developed a new approach that has a high potential to assist doctors in more effectively managing stage 1 NSCLC patients to reduce the cancer recurrence risk.

A COMPUTER AIDED DIAGNOSIS SYSTEM FOR LUNG CANCER DETECTION USING SVM

  • Emirzade, Erkan
2016 Thesis, cited 1 times
Website
Computer aided diagnosis is starting to be implemented broadly in the diagnosis and detection of many varieties of abnormalities acquired during various imaging procedures. The main aim of CAD systems is to increase the accuracy and decrease the time of diagnosis, while the general goals of CAD systems are to locate nodules and to determine their characteristic features. As lung cancer is one of the most fatal and leading cancer types, there have been plenty of studies on the usage of CAD systems to detect lung cancer. Yet CAD systems still need considerable development in order to identify the different shapes of nodules, improve lung segmentation, and achieve higher levels of sensitivity, specificity and accuracy. This challenge motivates this study's implementation of a CAD system for lung cancer detection. In the study, the LIDC database is used, which comprises an image set of documented thoracic CT scans of lung cancer. The presented CAD system consists of CT image reading, image pre-processing, segmentation, feature extraction and classification steps. To avoid losing important features, the CT images were read in raw DICOM format. Then, filtration and enhancement techniques were applied for image pre-processing. Otsu's algorithm, edge detection and morphological operations are applied for the segmentation, followed by the feature extraction step. Finally, a support vector machine with a Gaussian RBF kernel, a widely used supervised classifier, is utilized for the classification step.

Computer Simulation of Low-dose CT with Clinical Lung Image Database: a preliminary study

  • Rong, Junyan
  • Gao, Peng
  • Liu, Wenlei
  • Zhang, Yuanke
  • Liu, Tianshuai
  • Lu, Hongbing
2017 Conference Proceedings, cited 1 times
Website

Computer-aided classification of prostate cancer grade groups from MRI images using texture features and stacked sparse autoencoder

  • Abraham, Bejoy
  • Nair, Madhu S
Computerized Medical Imaging and Graphics 2018 Journal Article, cited 1 times
Website

Computer-aided detection (CADe) and diagnosis (CADx) system for lung cancer with likelihood of malignancy

  • Firmino, Macedo
  • Angelo, Giovani
  • Morais, Higor
  • Dantas, Marcel R
  • Valentim, Ricardo
BioMedical Engineering OnLine 2016 Journal Article, cited 63 times
Website
BACKGROUND: CADe and CADx systems for the detection and diagnosis of lung cancer have been important areas of research in recent decades. However, these areas are being worked on separately. CADe systems do not present the radiological characteristics of tumors, and CADx systems do not detect nodules and do not have good levels of automation. As a result, these systems are not yet widely used in clinical settings. METHODS: The purpose of this article is to develop a new system for detection and diagnosis of pulmonary nodules on CT images, grouping them into a single system for the identification and characterization of the nodules to improve the level of automation. As further contributions, the article presents the use of the watershed technique for distinguishing possible nodules from other structures, and of the Histogram of Oriented Gradients (HOG) for feature extraction of pulmonary nodules. The diagnosis is based on the likelihood of malignancy, providing further support for decision-making by radiologists. A rule-based classifier and a Support Vector Machine (SVM) have been used to eliminate false positives. RESULTS: The database used in this research consisted of 420 cases obtained randomly from LIDC-IDRI. The segmentation method achieved an accuracy of 97 % and the detection system showed a sensitivity of 94.4 % with 7.04 false positives per case. Different types of nodules (isolated, juxtapleural, juxtavascular and ground-glass) with diameters between 3 mm and 30 mm have been detected. For the diagnosis of malignancy our system presented ROC curves with areas of: 0.91 for nodules highly unlikely of being malignant, 0.80 for nodules moderately unlikely of being malignant, 0.72 for nodules with indeterminate malignancy, 0.67 for nodules moderately suspicious of being malignant and 0.83 for nodules highly suspicious of being malignant. CONCLUSIONS: From our preliminary results, we believe that our system is promising for clinical applications assisting radiologists in the detection and diagnosis of lung cancer.

Computer-Aided Detection for Early Detection of Lung Cancer Using CT Images

  • Desai, Usha
  • Kamath, Sowmya
  • Shetty, Akshaya D.
  • Prabhu, M. Sandeep
2022 Conference Proceedings, cited 0 times
Website
Doctors face difficulty in the diagnosis of lung cancer due to the complex nature and clinical interrelations of CT scan images. Visual inspection and subjective evaluation methods are time-consuming and tedious, and lead to inter- and intra-observer inconsistency or imprecise classification. Computer-Aided Detection (CAD) can help clinicians with objective decision-making, early diagnosis, and classification of cancerous abnormalities. In this work, CAD is employed to enhance the accuracy, sensitivity, and specificity of automated detection, in which the phases of lung cancer are discriminated using image processing tools. Cancer is the second leading cause of death among non-communicable diseases worldwide, and lung cancer is the most dangerous form of cancer affecting both genders. During the uncontrolled growth of abnormal cells, either or both sides of the lung begin to expand. The most widely used imaging technique for lung cancer diagnosis is computerised tomography (CT) scanning. In this work, the CAD method is used to differentiate between images of the lung cancer stages. Abnormality detection consists of 4 steps: pre-processing, segmentation, feature extraction, and classification of input CT images. For the segmentation process, marker-controlled watershed segmentation and the K-means algorithm are used. Normal and abnormal information is extracted from the CT images and its characteristics are determined. Stages 1-4 of cancerous images were discriminated and graded with approximately 80% efficiency using a feedforward backpropagation neural network. Input data are collected from the Lung Image Database Consortium (LIDC), of which 100 of the 1018 cases are used. For the output display, a graphical user interface (GUI) is developed. This automated and robust CAD system is necessary for accurate and quick screening of the mass population.
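
Marker-controlled watershed, one of the two segmentation methods used here, can be sketched with scikit-image: seed one marker per distance-map peak inside a rough foreground mask, then flood the gradient image. Parameter values are illustrative, not taken from the paper:

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.filters import sobel
from skimage.segmentation import watershed

def marker_controlled_watershed(image, foreground_mask):
    """Split a rough foreground mask into separate labeled regions."""
    distance = ndi.distance_transform_edt(foreground_mask)
    # One marker per local maximum of the distance map.
    peaks = peak_local_max(distance, labels=foreground_mask.astype(int),
                           min_distance=10)
    markers = np.zeros(foreground_mask.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    # Flood the gradient magnitude image from the markers.
    return watershed(sobel(image.astype(float)), markers,
                     mask=foreground_mask.astype(bool))
```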

Computer-aided detection of brain tumors using image processing techniques

  • Kazdal, Seda
  • Dogan, Buket
  • Camurcu, Ali Yilmaz
2015 Conference Proceedings, cited 3 times
Website
Computer-aided detection applications have made significant contributions to the medical world with today's technology. In this study, the detection of brain tumors in magnetic resonance images was performed. This study proposes a computer-aided detection system that is based on morphological reconstruction and rule-based detection of tumors using the morphological features of the regions of interest. The steps involved in this study are: the pre-processing stage, the segmentation stage, the identification of the regions of interest, and the detection of tumors. With these methods applied to 497 magnetic resonance image slices from 10 patients, the computer-aided detection system achieved 84.26% accuracy.

Computer-aided detection of lung nodules using outer surface features

  • Demir, Önder
  • Yılmaz Çamurcu, Ali
Bio-Medical Materials and Engineering 2015 Journal Article, cited 28 times
Website
In this study, a computer-aided detection (CAD) system was developed for the detection of lung nodules in computed tomography images. The CAD system consists of four phases, including two-dimensional and three-dimensional preprocessing phases. In the feature extraction phase, four different groups of features are extracted from volume of interests: morphological features, statistical and histogram features, statistical and histogram features of outer surface, and texture features of outer surface. The support vector machine algorithm is optimized using particle swarm optimization for classification. The CAD system provides 97.37% sensitivity, 86.38% selectivity, 88.97% accuracy and 2.7 false positive per scan using three groups of classification features. After the inclusion of outer surface texture features, classification results of the CAD system reaches 98.03% sensitivity, 87.71% selectivity, 90.12% accuracy and 2.45 false positive per scan. Experimental results demonstrate that outer surface texture features of nodule candidates are useful to increase sensitivity and decrease the number of false positives in the detection of lung nodules in computed tomography images.

Computer-aided detection of Pulmonary Nodules based on SVM in thoracic CT images

  • Eskandarian, Parinaz
  • Bagherzadeh, Jamshid
2015 Conference Proceedings, cited 12 times
Website
Computer-aided diagnosis of solitary pulmonary nodules in X-ray CT images enables the early detection of lung cancer. In this study, a computer-aided system for the detection of pulmonary nodules on CT scans, based on a support vector machine classifier, is provided for the diagnosis of solitary pulmonary nodules. In the first step, the volume of data is reduced by data mining techniques. Then, after the chest area is segmented, suspicious nodule candidates are identified and, eventually, nodules are detected. In comparison with threshold-based methods, the support vector machine classifier describes areas of the lungs more accurately. In this study, the false positive rate is reduced by combining thresholding with the support vector machine classifier. Experimental results based on data from 147 patients of the LIDC lung image database show that the proposed system is able to obtain a sensitivity of 89.9% with 3.9 false positives per scan. In comparison to previous systems, the proposed system demonstrates good performance.

Computer-aided Diagnosis for Lung Cancer: Usefulness of Nodule Heterogeneity

  • Nishio, Mizuho
  • Nagashima, Chihiro
Academic Radiology 2017 Journal Article, cited 12 times
Website
RATIONALE AND OBJECTIVES: To develop a computer-aided diagnosis system to differentiate between malignant and benign nodules. MATERIALS AND METHODS: Seventy-three lung nodules revealed on 60 sets of computed tomography (CT) images were analyzed. Contrast-enhanced CT was performed in 46 CT examinations. The images were provided by the LUNGx Challenge, and the ground truth of the lung nodules was unavailable; a surrogate ground truth was, therefore, constructed by radiological evaluation. Our proposed method involved novel patch-based feature extraction using principal component analysis, image convolution, and pooling operations. This method was compared to three other systems for the extraction of nodule features: histogram of CT density, local binary pattern on three orthogonal planes, and three-dimensional random local binary pattern. The probabilistic outputs of the systems and surrogate ground truth were analyzed using receiver operating characteristic analysis and area under the curve. The LUNGx Challenge team also calculated the area under the curve of our proposed method based on the actual ground truth of their dataset. RESULTS: Based on the surrogate ground truth, the areas under the curve were as follows: histogram of CT density, 0.640; local binary pattern on three orthogonal planes, 0.688; three-dimensional random local binary pattern, 0.725; and the proposed method, 0.837. Based on the actual ground truth, the area under the curve of the proposed method was 0.81. CONCLUSIONS: The proposed method could capture discriminative characteristics of lung nodules and was useful for the differentiation between malignant and benign nodules.

Computer-aided diagnosis of clinically significant prostate cancer from MRI images using sparse autoencoder and random forest classifier

  • Abraham, Bejoy
  • Nair, Madhu S
Biocybernetics and Biomedical Engineering 2018 Journal Article, cited 0 times
Website

Computer-aided diagnosis of hepatocellular carcinoma fusing imaging and structured health data

  • Menegotto, A. B.
  • Becker, C. D. L.
  • Cazella, S. C.
Health Inf Sci Syst 2021 Journal Article, cited 0 times
Website
Introduction: Hepatocellular carcinoma is the prevalent primary liver cancer, a silent disease that killed 782,000 worldwide in 2018. Multimodal deep learning is the application of deep learning techniques, fusing more than one data modality as the model's input. Purpose: A computer-aided diagnosis system for hepatocellular carcinoma developed with multimodal deep learning approaches could use multiple data modalities as recommended by clinical guidelines, and enhance the robustness and the value of the second opinion given to physicians. This article describes the process of creation and evaluation of an algorithm for computer-aided diagnosis of hepatocellular carcinoma developed with multimodal deep learning techniques fusing preprocessed computed-tomography images with structured data from patient Electronic Health Records. Results: The classification performance achieved by the proposed algorithm in the test dataset was: accuracy = 86.9%, precision = 89.6%, recall = 86.9% and F-Score = 86.7%. These classification performance metrics approach the state of the art in this area and were achieved with data modalities that are cheaper than traditional Magnetic Resonance Imaging approaches, enabling the use of the proposed algorithm by low- and mid-sized healthcare institutions. Conclusion: The classification performance achieved with the multimodal deep learning algorithm is higher than the diagnostic performance of human specialists using only CT for diagnosis. Even though the results are promising, the multimodal deep learning architecture used for hepatocellular carcinoma prediction needs more training and test processes using different datasets before the use of the proposed algorithm by physicians in real healthcare routines. The additional training aims to confirm the classification performance achieved and enhance the model's robustness.
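
A minimal sketch of the fusion idea, a CNN branch for the CT image concatenated with an MLP branch for structured EHR fields before a single classification head; the input shapes, layer sizes, and field count below are assumptions, not the published architecture:

    # Sketch: two-branch multimodal classifier in Keras (assumed sizes).
    import tensorflow as tf
    from tensorflow.keras import layers

    img_in = tf.keras.Input(shape=(128, 128, 1), name="ct_slice")
    x = layers.Conv2D(16, 3, activation="relu")(img_in)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(32, 3, activation="relu")(x)
    x = layers.GlobalAveragePooling2D()(x)                   # image embedding

    ehr_in = tf.keras.Input(shape=(12,), name="ehr_fields")  # e.g. labs, age
    e = layers.Dense(32, activation="relu")(ehr_in)          # tabular embedding

    fused = layers.Concatenate()([x, e])
    out = layers.Dense(1, activation="sigmoid", name="hcc_probability")(fused)

    model = tf.keras.Model(inputs=[img_in, ehr_in], outputs=out)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.AUC()])
    model.summary()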

Computer-Aided Diagnosis of Life-Threatening Diseases

  • Kumar, Pramod
  • Ambekar, Sameer
  • Roy, Subarna
  • Kunchur, Pavan
2019 Book Section, cited 0 times
According to the WHO, the incidence of life-threatening diseases like cancer, diabetes, and Alzheimer's disease is escalating globally. In the past few decades, traditional methods have been used to diagnose such diseases. These traditional methods often have limitations such as lack of accuracy, expense, and time-consuming procedures. Computer-aided diagnosis (CAD) aims to overcome these limitations and to personalize healthcare. Machine learning is a promising CAD method, offering effective solutions for these diseases. It is being used for the early detection of cancer, diabetic retinopathy, and Alzheimer's disease, and also to identify diseases in plants. Machine learning can increase efficiency, making the process more cost effective, with quicker delivery of results. There are several CAD algorithms (ANN, SVM, etc.) that can be trained on disease datasets and eventually make meaningful predictions. CAD algorithms have also shown potential for the diagnosis and early detection of life-threatening diseases.

Computer-aided diagnosis of lung nodule using gradient tree boosting and Bayesian optimization

  • Nishio, Mizuho
  • Nishizawa, Mitsuo
  • Sugiyama, Osamu
  • Kojima, Ryosuke
  • Yakami, Masahiro
  • Kuroda, Tomohiro
  • Togashi, Kaori
PLoS One 2018 Journal Article, cited 3 times
Website

Computer-aided diagnosis of prostate cancer using multiparametric MRI and clinical features: A patient-level classification framework

  • Mehta, P.
  • Antonelli, M.
  • Ahmed, H. U.
  • Emberton, M.
  • Punwani, S.
  • Ourselin, S.
Med Image Anal 2021 Journal Article, cited 1 times
Website
Computer-aided diagnosis (CAD) of prostate cancer (PCa) using multiparametric magnetic resonance imaging (mpMRI) is actively being investigated as a means to provide clinical decision support to radiologists. Typically, these systems are trained using lesion annotations. However, lesion annotations are expensive to obtain and inadequate for characterizing certain tumor types, e.g. diffuse tumors and MRI-invisible tumors. In this work, we introduce a novel patient-level classification framework, denoted PCF, that is trained using patient-level labels only. In PCF, features are extracted from three-dimensional mpMRI and derived parameter maps using convolutional neural networks and subsequently, combined with clinical features by a multi-classifier support vector machine scheme. The output of PCF is a probability value that indicates whether a patient is harboring clinically significant PCa (Gleason score ≥3+4) or not. PCF achieved mean area under the receiver operating characteristic curves of 0.79 and 0.86 on the PICTURE and PROSTATEx datasets respectively, using five-fold cross-validation. Clinical evaluation over a temporally separated PICTURE dataset cohort demonstrated comparable sensitivity and specificity to an experienced radiologist. We envision PCF finding most utility as a second reader during routine diagnosis or as a triage tool to identify low-risk patients who do not require a clinical read.
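
The final stage of such a framework, deep image features concatenated with clinical features and fed to a probability-outputting SVM under five-fold cross-validation, can be sketched as follows, with synthetic stand-ins for the CNN features and clinical variables:

    # Sketch: patient-level classification from fused CNN + clinical features.
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    cnn_feats = rng.normal(size=(200, 64))   # stand-in mpMRI CNN features
    clinical = rng.normal(size=(200, 5))     # e.g. PSA, age (assumed fields)
    y = rng.integers(0, 2, size=200)         # clinically significant PCa or not

    X = np.hstack([cnn_feats, clinical])
    clf = make_pipeline(StandardScaler(), SVC(probability=True))
    aucs = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
    print("mean AUC over 5 folds: %.2f" % aucs.mean())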

Computer-aided diagnostic system kinds and pulmonary nodule detection efficacy

  • Kadhim, Omar Raad
  • Motlak, Hassan Jassim
  • Abdalla, Kasim Karam
International Journal of Electrical and Computer Engineering (IJECE) 2022 Journal Article, cited 0 times
Website
This paper summarizes the literature on computer-aided detection (CAD) systems used to identify and diagnose lung nodules in images obtained with computed tomography (CT) scanners. The importance of developing such systems lies in the fact that manually detecting lung nodules is painstaking, sequential, and time-consuming work for radiologists. Moreover, pulmonary nodules have many appearances and shapes, and the large number of slices generated by the scanner makes it difficult to locate lung nodules accurately. Manual detection can miss some nodules, especially those less than 10 mm in diameter. CAD systems are therefore essential assistants to radiologists in nodule detection, reducing reading time and improving accuracy. The objective of this paper is to review current and previous work on lung cancer detection and lung nodule diagnosis. The reviewed literature covers a range of specialized systems and the methods they use, with an emphasis on systems based on deep learning with convolutional neural networks.

Computer-aided grading of gliomas based on local and global MRI features

  • Hsieh, Kevin Li-Chun
  • Lo, Chung-Ming
  • Hsiao, Chih-Jou
Computer Methods and Programs in Biomedicine 2016 Journal Article, cited 13 times
Website
BACKGROUND AND OBJECTIVES: A computer-aided diagnosis (CAD) system based on quantitative magnetic resonance imaging (MRI) features was developed to evaluate the malignancy of diffuse gliomas, which are central nervous system tumors. METHODS: The acquired image database for the CAD performance evaluation was composed of 34 glioblastomas and 73 diffuse lower-grade gliomas. In each case, tissues enclosed in a delineated tumor area were analyzed according to their gray-scale intensities on MRI scans. Four histogram moment features describing the global gray-scale distributions of glioma tissues and 14 textural features were used to interpret local correlations between adjacent pixel values. With a logistic regression model, the individual feature sets and a combination of both feature sets were used to establish the malignancy prediction model. RESULTS: The CAD system using the global, local, and combined image feature sets achieved accuracies of 76%, 83%, and 88%, respectively. Compared to global features, the combined features had significantly better accuracy (p = 0.0213). With respect to the pathology results, the CAD classification obtained substantial agreement (kappa = 0.698, p < 0.001). CONCLUSIONS: Numerous proposed image features were significant in distinguishing glioblastomas from lower-grade gliomas. Combining them further into a malignancy prediction model would be promising in providing diagnostic suggestions for clinical use.
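
The global features above are the first four moments of the tumor gray-level histogram; a sketch of computing them and fitting a logistic-regression grade predictor, with synthetic gray levels and labels in place of the study's MRI data:

    # Sketch: histogram-moment features plus a logistic-regression grader.
    import numpy as np
    from scipy import stats
    from sklearn.linear_model import LogisticRegression

    def histogram_moments(gray_levels):
        return [gray_levels.mean(), gray_levels.var(),
                stats.skew(gray_levels), stats.kurtosis(gray_levels)]

    rng = np.random.default_rng(0)
    tumors = [rng.gamma(shape=rng.uniform(1, 5), scale=30,
                        size=rng.integers(500, 2000)) for _ in range(60)]
    X = np.array([histogram_moments(t) for t in tumors])
    y = rng.integers(0, 2, size=60)        # 1 = glioblastoma (synthetic labels)

    model = LogisticRegression(max_iter=1000).fit(X, y)
    print("training accuracy: %.2f" % model.score(X, y))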

Computer-aided nodule assessment and risk yield risk management of adenocarcinoma: the future of imaging?

  • Foley, Finbar
  • Rajagopalan, Srinivasan
  • Raghunath, Sushravya M
  • Boland, Jennifer M
  • Karwoski, Ronald A
  • Maldonado, Fabien
  • Bartholmai, Brian J
  • Peikert, Tobias
2016 Conference Proceedings, cited 7 times
Website

Computer-aided nodule detection and volumetry to reduce variability between radiologists in the interpretation of lung nodules at low-dose screening CT

  • Jeon, Kyung Nyeo
  • Goo, Jin Mo
  • Lee, Chang Hyun
  • Lee, Youkyung
  • Choo, Ji Yung
  • Lee, Nyoung Keun
  • Shim, Mi-Suk
  • Lee, In Sun
  • Kim, Kwang Gi
  • Gierada, David S
Investigative radiology 2012 Journal Article, cited 51 times
Website

Computer-Assisted Decision Support System in Pulmonary Cancer Detection and Stage Classification on CT Images

  • Masood, Anum
  • Sheng, Bin
  • Li, Ping
  • Hou, Xuhong
  • Wei, Xiaoer
  • Qin, Jing
  • Feng, Dagan
Journal of Biomedical Informatics 2018 Journal Article, cited 10 times
Website

Computer-assisted subtyping and prognosis for non-small cell lung cancer patients with unresectable tumor

  • Saad, Maliazurina
  • Choi, Tae-Sun
Computerized Medical Imaging and Graphics 2018 Journal Article, cited 0 times
Website
BACKGROUND: The histological classification or subtyping of non-small cell lung cancer is essential for systematic therapy decisions. Differentiating between the two main subtypes of pulmonary adenocarcinoma and squamous cell carcinoma highlights the considerable differences that exist in the prognosis of patient outcomes. Physicians rely on a pathological analysis to reveal these phenotypic variations, which requires invasive methods such as biopsy and resection sampling, but almost 70% of tumors are unresectable at the point of diagnosis. METHOD: A computational method that fuses two frameworks of computerized subtyping and prognosis was proposed, and it was validated against a publicly available dataset in The Cancer Imaging Archive that consisted of 82 curated patients with CT scans. The accuracy of the proposed method was compared with the gold standard of pathological analysis, as defined by the International Classification of Diseases for Oncology (ICD-O). A series of survival outcome test cases were evaluated using the Kaplan-Meier estimator and log-rank test (p-value) between the computational method and ICD-O. RESULTS: The computational method demonstrated high accuracy in subtyping (96.2%) and good consistency in the statistical significance of overall survival prediction for adenocarcinoma and squamous cell carcinoma patients (p<0.03) with respect to its counterpart pathological subtyping (p<0.02). The degree of reproducibility between prognoses based on computational and pathological subtyping was substantial, with an averaged concordance correlation coefficient (CCC) of 0.9910. CONCLUSION: The findings in this study support the idea that quantitative analysis is capable of representing tissue characteristics, as offered by a qualitative analysis.
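
The survival comparison described above, Kaplan-Meier estimates per predicted subtype with a log-rank test between them, can be sketched with the lifelines package; the survival times, event indicators, and subtype labels below are synthetic:

    # Sketch: Kaplan-Meier curves per subtype and a log-rank comparison.
    import numpy as np
    from lifelines import KaplanMeierFitter
    from lifelines.statistics import logrank_test

    rng = np.random.default_rng(0)
    subtype = rng.integers(0, 2, size=82)    # 0 = ADC, 1 = SCC (assumed coding)
    time = rng.exponential(np.where(subtype == 0, 30.0, 18.0))   # months
    event = rng.random(82) < 0.7             # True = death observed

    for s, name in [(0, "adenocarcinoma"), (1, "squamous cell")]:
        km = KaplanMeierFitter().fit(time[subtype == s],
                                     event[subtype == s], label=name)
        print(name, "median survival:", km.median_survival_time_)

    res = logrank_test(time[subtype == 0], time[subtype == 1],
                       event_observed_A=event[subtype == 0],
                       event_observed_B=event[subtype == 1])
    print("log-rank p-value: %.4f" % res.p_value)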

Computer-extracted MR imaging features are associated with survival in glioblastoma patients

  • Mazurowski, Maciej A
  • Zhang, Jing
  • Peters, Katherine B
  • Hobbs, Hasan
Journal of Neuro-Oncology 2014 Journal Article, cited 33 times
Website
Automatic survival prognosis in glioblastoma (GBM) could result in improved treatment planning for the patient. The purpose of this research is to investigate the association of survival in GBM patients with tumor features in pre-operative magnetic resonance (MR) images assessed using a fully automatic computer algorithm. MR imaging data for 68 patients from two US institutions were used in this study. The images were obtained from the Cancer Imaging Archive. A fully automatic computer vision algorithm was applied to segment the images and extract eight imaging features from the MRI studies. The features included tumor side, proportion of enhancing tumor, proportion of necrosis, T1/FLAIR ratio, major axis length, minor axis length, tumor volume, and thickness of enhancing margin. We constructed a multivariate Cox proportional hazards regression model and used a likelihood ratio test to establish whether the imaging features are prognostic of survival. We also evaluated the individual prognostic value of each feature through multivariate analysis using the multivariate Cox model and univariate analysis using univariate Cox models for each feature. We found that the automatically extracted imaging features were predictive of survival (p = 0.031). Multivariate analysis of individual features showed that two individual features were predictive of survival: proportion of enhancing tumor (p = 0.013), and major axis length (p = 0.026). Univariate analysis indicated the same two features as significant (p = 0.021 and p = 0.017, respectively). We conclude that computer-extracted MR imaging features can be used for survival prognosis in GBM patients.
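
The multivariate Cox proportional hazards model above, with computer-extracted imaging features as covariates, can be sketched with the lifelines package; the feature values, units, and event labels below are synthetic stand-ins:

    # Sketch: multivariate Cox regression over imaging features.
    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    rng = np.random.default_rng(0)
    n = 68
    df = pd.DataFrame({
        "enhancing_proportion": rng.uniform(0, 1, n),
        "major_axis_length": rng.uniform(20, 80, n),   # mm (assumed units)
        "tumor_volume": rng.uniform(5, 120, n),        # cm^3 (assumed units)
        "survival_months": rng.exponential(15, n),
        "death_observed": rng.random(n) < 0.8,
    })

    cph = CoxPHFitter().fit(df, duration_col="survival_months",
                            event_col="death_observed")
    cph.print_summary()    # per-feature hazard ratios and p-values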

Conditional Generative Adversarial Refinement Networks for Unbalanced Medical Image Semantic Segmentation

  • Rezaei, Mina
  • Yang, Haojin
  • Harmuth, Konstantin
  • Meinel, Christoph
2019 Conference Proceedings, cited 0 times
Website

Conditional random fields improve the CNN-based prostate cancer classification performance

  • Lapa, Paulo Alberto Fernandes
2019 Thesis, cited 0 times
Website
Prostate cancer is a condition with life-threatening implications but without clearly identified causes. Several diagnostic procedures can be used, ranging from very invasive and human-dependent to state-of-the-art non-invasive medical imaging. With recent academic and industry focus on the deep learning field, novel research has been performed on how to improve prostate cancer diagnosis using Convolutional Neural Networks to interpret Magnetic Resonance images. Conditional Random Fields have achieved outstanding results in the image segmentation task by promoting homogeneous classification at the pixel level. A new implementation, CRF-RNN, defines Conditional Random Fields by means of convolutional layers, allowing end-to-end training of the feature extractor and classifier models. This work tries to repurpose CRFs for the image classification task, a more traditional sub-field of imaging analysis, in a way that, to the best of the author's knowledge, has not been implemented before. To achieve this, a purpose-built architecture was refitted, adding a CRF layer as a feature extraction step. To serve as the implementation's benchmark, a multi-parametric Magnetic Resonance Imaging dataset was used, initially provided for the PROSTATEx Challenge 2017 and collected by Radboud University. The results are very promising, showing an increase in the network's classification quality.

Constructing 3D-Printable CAD Models of Prostates from MR Images

  • Breseman, Kelsey
  • Lee, Christopher
  • Bloch, B Nicholas
  • Jaffe, Carl
2013 Conference Proceedings, cited 1 times
Website
This paper describes the development of a procedure to generate patient-specific, three-dimensional (3D) solid models of prostates (and related anatomy) from magnetic resonance (MR) images. The 3D models are rendered in STL file format which can be physically printed or visualized on a holographic display system. An example is presented in which a 3D model is printed following this procedure.
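
A sketch of the mask-to-STL step in such a procedure, assuming scikit-image for surface extraction and numpy-stl for export; the spherical mask below stands in for a real prostate segmentation:

    # Sketch: binary segmentation volume -> triangulated surface -> STL file.
    import numpy as np
    from skimage import measure
    from stl import mesh

    z, y, x = np.mgrid[:64, :64, :64]
    volume = ((x - 32) ** 2 + (y - 32) ** 2 + (z - 32) ** 2 < 20 ** 2).astype(float)

    verts, faces, _, _ = measure.marching_cubes(volume, level=0.5,
                                                spacing=(1.0, 1.0, 1.0))

    surface = mesh.Mesh(np.zeros(faces.shape[0], dtype=mesh.Mesh.dtype))
    for i, face in enumerate(faces):
        surface.vectors[i] = verts[face]    # three vertices per triangle
    surface.save("prostate_model.stl")      # printable or viewable in 3D tools

In practice the spacing argument would carry the MR voxel dimensions so the printed model comes out at anatomical scale.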

Content based medical image retrieval using topic and location model

  • Shamna, P.
  • Govindan, V. K.
  • Abdul Nazeer, K. A.
Journal of Biomedical Informatics 2019 Journal Article, cited 0 times
Website
Background and objective: Retrieval of medical images from an anatomically diverse dataset is a challenging task. The objective of our present study is to analyse an automated medical image retrieval system incorporating topic and location probabilities to enhance performance. Materials and methods: In this paper, we present an automated medical image retrieval system using a Topic and Location Model. The topic information is generated using the Guided Latent Dirichlet Allocation (GuidedLDA) method. A novel Location Model is proposed to incorporate the spatial information of visual words. We also introduce a new metric called position-weighted Precision (wPrecision) to measure the rank order of the retrieved images. Results: Experiments on two large medical image datasets - IRMA 2009 and Multimodal - revealed that the proposed method outperforms existing medical image retrieval systems in terms of Precision and Mean Average Precision. The proposed method achieved better Mean Average Precision (86.74%) compared to recent medical image retrieval systems using the Multimodal dataset with 7200 images. The proposed system achieved better Precision (97.5%) for the top ten images compared to recent medical image retrieval systems using the IRMA 2009 dataset with 14,410 images. Conclusion: Supplementing the spatial details of visual words to the Topic Model enhances the retrieval efficiency of medical images from large repositories. Such automated medical image retrieval systems can be used to assist physicians in retrieving medical images with better precision compared to state-of-the-art retrieval systems.
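
The seeded topic-modelling step can be sketched with the guidedlda package (assumed here; the vocabulary of visual words, seed assignments, and counts are invented for illustration):

    # Sketch: GuidedLDA over an image-by-visual-word count matrix.
    import numpy as np
    import guidedlda

    rng = np.random.default_rng(0)
    vocab = ["edge", "blob", "ridge", "corner", "spot", "line"]
    X = rng.integers(0, 10, size=(100, len(vocab)))   # visual-word counts

    # Nudge topic 0 toward edge-like words and topic 1 toward blob-like words.
    word_id = {w: i for i, w in enumerate(vocab)}
    seed_topics = {word_id["edge"]: 0, word_id["ridge"]: 0,
                   word_id["blob"]: 1, word_id["spot"]: 1}

    model = guidedlda.GuidedLDA(n_topics=3, n_iter=100, random_state=0)
    model.fit(X, seed_topics=seed_topics, seed_confidence=0.15)
    print(model.transform(X)[0])   # per-image topic probabilities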

Content dependent intra mode selection for medical image compression using HEVC

  • Parikh, S
  • Ruiz, D
  • Kalva, H
  • Fern, G
2016 Conference Proceedings, cited 3 times
Website
This paper presents a method for complexity reduction in medical image encoding that exploits the structure of medical images. The amount of texture detail and structure in medical images depends on the modality used to capture the image and the body part captured by that image. The proposed approach was evaluated using the Computed Radiography (CR) modality, commonly known as x-ray imaging, and three body parts. The proposed method essentially reduces the number of CU partitions evaluated as well as the number of intra prediction modes for each evaluated partition. Evaluation using the HEVC reference software (HM) 16.4 and lossless intra coding shows an average reduction of 52.47% in encoding time with a negligible penalty of up to 0.22% increase in compressed file size.

Content-Based Image Retrieval System for Pulmonary Nodules Using Optimal Feature Sets and Class Membership-Based Retrieval

  • Mehre, Shrikant A
  • Dhara, Ashis Kumar
  • Garg, Mandeep
  • Kalra, Naveen
  • Khandelwal, Niranjan
  • Mukhopadhyay, Sudipta
2018 Journal Article, cited 0 times
Website

Content-Based Medical Image Retrieval: A Survey of Applications to Multidimensional and Multimodality Data

  • Kumar, Ashnil
  • Kim, Jinman
  • Cai, Weidong
  • Fulham, Michael
  • Feng, Dagan
2013 Journal Article, cited 109 times
Website
Medical imaging is fundamental to modern healthcare, and its widespread use has resulted in the creation of image databases, as well as picture archiving and communication systems. These repositories now contain images from a diverse range of modalities, multidimensional (three-dimensional or time-varying) images, as well as co-aligned multimodality images. These image collections offer the opportunity for evidence-based diagnosis, teaching, and research; for these applications, there is a requirement for appropriate methods to search the collections for images that have characteristics similar to the case(s) of interest. Content-based image retrieval (CBIR) is an image search technique that complements the conventional text-based retrieval of images by using visual features, such as color, texture, and shape, as search criteria. Medical CBIR is an established field of study that is beginning to realize promise when applied to multidimensional and multimodality medical data. In this paper, we present a review of state-of-the-art medical CBIR approaches in five main categories: two-dimensional image retrieval, retrieval of images with three or more dimensions, the use of nonimage data to enhance the retrieval, multimodality image retrieval, and retrieval from diverse datasets. We use these categories as a framework for discussing the state of the art, focusing on the characteristics and modalities of the information used during medical image retrieval.

A Content-Based-Image-Retrieval Approach for Medical Image Repositories

  • el Rifai, Diaa
  • Maeder, Anthony
  • Liyanage, Liwan
2015 Conference Paper, cited 2 times
Website

Context aware deep learning for brain tumor segmentation, subtype classification, and survival prediction using radiology images

  • Pei, Linmin
  • Vidyaratne, Lasitha
  • Rahman, Md Monibor
  • Iftekharuddin, Khan M
Scientific Reports (Nature Publisher Group) 2020 Journal Article, cited 0 times
Website