Grading ROP With a Feedforward Network


Abstract

Retinopathy of prematurity (ROP), a disease that affects premature infants, can result in blindness. Because of the high rate of premature births in India and the expansion of neonatal care, the incidence of ROP in India is now a serious concern, and there is an urgent need to create awareness about the disease. This paper presents a novel approach to grading ROP with feedforward networks using second-order texture characteristics. Experiments are conducted with six second-order texture features, namely mean, entropy, contrast, correlation, homogeneity, and energy, computed from the gray-level co-occurrence matrix (GLCM) in three directions: 0, 45, and 90 degrees.

The results obtained indicate that the feedforward network offers a simple yet effective paradigm for ROP grading.

Keywords: ROP, Grading, Feedforward, Multilayer, Retinopathy.

Introduction

Retinopathy of prematurity is a condition that develops in premature infants and babies of low birth weight; it is a disease of the retina that can lead to blindness in these children.

In full-term babies the retina and retinal vasculature are fully developed, and ROP cannot occur; in premature infants, however, the development of the eye is incomplete. If not treated at the right time, ROP can lead to permanent loss of vision early in a child's life. The risk factors for the development of retinopathy of prematurity are birth before 32 weeks of gestation, birth weight below 1500 grams, and high rates of oxygen supplementation (Hartnett and Penn, 2012). Improved survival of premature and very-low-birth-weight infants has increased the incidence of ROP.

Developed countries have conducted demographic studies on ROP and have set guidelines and screening criteria for ROP based on birth weight and gestational age. Developing countries are yet to assess the situation and set such guidelines.

Background and Related Work

Rony Gelman et al. (2005) developed a semi-automated multi-scale analysis program. Frames in RGB 640x480 pixel format were used. The program performs segmentation, skeletonization, vessel-root collection, and tracking. For each vessel segment, geometric properties were measured: curvature, diameter, and tortuosity index (TI). These parameters have higher values in diseased images and are therefore used for the detection of plus disease.

Keck et al. (2013) performed plus disease detection using vascular tortuosity. A key component of the international classification system for plus disease is venous dilation in the posterior pole. The approach has several limitations: other factors, such as the rate of vascular change, must be considered along with the posterior retinal vessels. According to the authors, incorporating domain knowledge would improve accuracy.

Sen et al. (2015) discuss the current scenario of ROP in India and the various treatments available. The authors conclude that laser therapy is the best treatment option. Awareness needs to be created so that children can receive appropriate care.

Fierson et al. (2015) discuss the various aspects of telemedicine, including imaging techniques, procedures to be followed, image quality, equipment maintenance, data storage, image-transfer protocols, and backup. The authors discuss the advantages of telemedicine evaluation, such as increasing the number of infants screened and improving parent education about ROP. There are also several drawbacks, such as cost, the fact that RDFI-TM gathers less data than is needed to assess the extent of ROP, and existing differences in practical knowledge.

Jayadev et al. (2015) designed software that performs three image pre-processing protocols: Grey Enhanced, Color Enhanced, and Vesselness Test. The processed images were evaluated by an ROP specialist. Results indicate that each of the protocols enhanced clinically relevant features, providing more clinically relevant information than the standard non-processed images.

Campbell et al. (2016) report that accuracy is highest when the tortuosity of both arteries and veins is considered; when only arterial tortuosity is used, accuracy is 90%.

Shah et al. (2016) review the incidence and triggers of ROP. The first ROP epidemic, in the late 1940s and 1950s in Europe and North America, was caused by unmonitored oxygen administration. In developed countries such as the United Kingdom, ROP now occurs mainly in underweight babies with a birth weight below 1500 g, whereas in developing countries such as India, much heavier infants with birth weights between 1750 and 2000 g are also at high risk of ROP. The root causes are a lack of proper neonatal care and inadequate oxygen management.

Giraddi et al. (2016) proposed a novel technique using Haar wavelets and first-order features extracted from the horizontal, vertical, and diagonal components. K-NN and decision-tree classifiers were used; the K-NN classifier yielded the better performance, with 85% accuracy.

Piermarocchi et al. (2017) considered criteria such as gestational age (GA), birth weight (BW), weight gain, oxygen therapy, and blood transfusion, and evaluated three predictive algorithms for ROP: WINROP, ROPScore, and CHOP ROP. WINROP is a surveillance system based on postnatal weight measurements and serum concentrations of insulin-like growth factor (IGF). ROPScore is another easily applied algorithm that combines BW, GA, weight at the sixth week of life, mechanical ventilation, and the presence or absence of blood transfusion and oxygen therapy (Eckert et al., 2012). The CHOP (Children's Hospital of Philadelphia) ROP model deals with postnatal weight gain and was adapted from the PINT (Premature Infants in Need of Transfusion) ROP model using SAS software.

Hu et al. (2018) performed grading of ROP images using deep neural networks, with a filter for unclassifiable images. The authors conducted experiments with AlexNet, VGG-16, and GoogLeNet, training each model by transfer learning; VGG-16 achieved the best accuracy, 98.88%. The results were compared against pediatric ophthalmologists.

Wang et al. (2018) developed two different DNN models, one performing classification and one performing grading.

Zhang et al. (2018) conducted a study to determine the presence and severity of retinopathy of prematurity using CNNs. A novel CNN architecture is proposed, composed of a feature-extraction sub-network and an aggregation operator that combines the extracted features. The authors experimented with ensemble models, and the best model yielded an accuracy of 97.6%.

Feedforward Neural Network

A multilayer feedforward neural network consists of a set of input units, one or more layers of hidden units, and a layer of output units. Multilayer networks can represent non-linear functions. Data flows from one layer of neurons into the next; there are no backward connections. The number of input-layer units and the number of output-layer units are dictated by the application, but deciding the number of hidden layers and units is an art and requires experimentation: some minimum number of units is needed to learn the target function accurately, so too few units will prevent the network from learning, while too many can overfit the network and also increase training time. Each connection between units has a weight associated with it. Initially all weights are set to small random values; during the training phase, these values are gradually adjusted.
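The training behaviour described above (small random initial weights, gradually adjusted to reduce the error) can be illustrated with a minimal single-step sketch in plain NumPy. The layer sizes, sigmoid activations, squared-error loss, and learning rate below are illustrative choices, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny feedforward net: 4 inputs -> 3 hidden units -> 2 outputs.
W1 = rng.normal(scale=0.1, size=(4, 3))  # small random initial weights
W2 = rng.normal(scale=0.1, size=(3, 2))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    h = sigmoid(x @ W1)  # hidden-layer activations
    y = sigmoid(h @ W2)  # output-layer activations
    return h, y

x = np.array([0.5, -0.2, 0.1, 0.9])  # one training example
t = np.array([1.0, 0.0])             # its target class encoding
lr = 0.5                             # learning rate

h, y = forward(x)
err_before = 0.5 * np.sum((y - t) ** 2)

# Backpropagation: error terms per layer (sigmoid derivative is a * (1 - a)).
delta_out = (y - t) * y * (1 - y)
delta_hid = (delta_out @ W2.T) * h * (1 - h)

# Gradually adjust the weights down the error gradient.
W2 -= lr * np.outer(h, delta_out)
W1 -= lr * np.outer(x, delta_hid)

_, y_new = forward(x)
err_after = 0.5 * np.sum((y_new - t) ** 2)
```

After a single update with a small learning rate, the squared error on this example decreases, which is the "gradual adjustment" the text refers to.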

Model 1: One input layer with 18 neurons, corresponding to the 18 features; five hidden layers with 10 neurons each; and one output layer with 2 neurons, one for each class.

Model 2: One input layer with 18 neurons, corresponding to the 18 features; two hidden layers with 16 neurons each, one hidden layer with 10 neurons, and one hidden layer with 8 neurons; and one output layer with 2 neurons, one for each of the two output classes.

Model 3: One input layer with 18 neurons, one for each of the 18 attributes extracted by the GLCM code; three hidden layers with 12 neurons each, one hidden layer with 10 neurons, and one hidden layer with 8 neurons; and one output layer with 2 neurons, one for each of the two output classes.

Model 4: One input layer with 6 neurons, corresponding to the 6 features extracted by the GLCM code; two hidden layers with 16 neurons each, one hidden layer with 10 neurons, and one hidden layer with 8 neurons; and one output layer with 2 neurons.

Model 5: One input layer with 6 neurons, corresponding to the 6 features extracted from the GLCM; two hidden layers with 16 neurons each, one hidden layer with 10 neurons, and one hidden layer with 8 neurons; and one output layer with 2 neurons.

Model 6: One input layer with 6 neurons, corresponding to the 6 features extracted from the GLCM; two hidden layers with 16 neurons each, one hidden layer with 10 neurons, and one hidden layer with 8 neurons; and one output layer with 2 neurons.
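As a sketch of how such a model is assembled, the forward pass of Model 1 (18 inputs, five hidden layers of 10 units each, 2 outputs) is shown below in plain NumPy. The paper does not state its activation functions or training details, so the ReLU hidden units and softmax output here are assumptions, and the weights are untrained random values:

```python
import numpy as np

rng = np.random.default_rng(42)

# Model 1: 18 input features -> five hidden layers of 10 units -> 2 classes.
layer_sizes = [18, 10, 10, 10, 10, 10, 2]
weights = [rng.normal(scale=0.1, size=(m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def relu(z):
    return np.maximum(0.0, z)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))  # numerically stable
    return e / e.sum(axis=-1, keepdims=True)

def predict(x):
    """Forward pass; returns class probabilities for each input row."""
    a = x
    for W, b in zip(weights[:-1], biases[:-1]):
        a = relu(a @ W + b)                 # hidden layers
    return softmax(a @ weights[-1] + biases[-1])  # output layer

# A batch of 4 feature vectors (18 GLCM features each).
probs = predict(rng.normal(size=(4, 18)))
```

Each row of `probs` is a probability distribution over the two output classes; the other models differ only in the `layer_sizes` list.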

Results

The dataset consists of 100 images. A comparative study of the effectiveness of Haralick features computed from the grayscale images was carried out. Six features that contribute to the grading of ROP were identified: mean, entropy, contrast, correlation, homogeneity, and energy. The feedforward neural network was implemented using Keras, and classification was performed in a Jupyter notebook. The network achieves good classification results by adjusting its weights to minimize the squared error, yielding high classification accuracy.
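For illustration, the 18-dimensional feature vector used above (six GLCM features in each of the three directions) can be computed as in the following sketch. This is plain NumPy rather than the authors' code; the number of gray levels (8) and the pixel distance (1) are assumptions, and a library such as scikit-image could equally be used:

```python
import numpy as np

def glcm(img, dx, dy, levels=8):
    """Normalized gray-level co-occurrence matrix for pixel offset (dx, dy)."""
    h, w = img.shape
    P = np.zeros((levels, levels))
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                P[img[y, x], img[y2, x2]] += 1
    return P / P.sum()

def glcm_features(P):
    """Six second-order features from a normalized GLCM."""
    i, j = np.indices(P.shape)
    mu_i = (i * P).sum()
    mu_j = (j * P).sum()
    sd_i = np.sqrt(((i - mu_i) ** 2 * P).sum())
    sd_j = np.sqrt(((j - mu_j) ** 2 * P).sum())
    mean = mu_i
    entropy = -(P[P > 0] * np.log2(P[P > 0])).sum()
    contrast = ((i - j) ** 2 * P).sum()
    correlation = (((i - mu_i) * (j - mu_j) * P).sum() / (sd_i * sd_j)
                   if sd_i > 0 and sd_j > 0 else 1.0)
    homogeneity = (P / (1.0 + np.abs(i - j))).sum()
    energy = (P ** 2).sum()
    return [mean, entropy, contrast, correlation, homogeneity, energy]

# Offsets for the 0, 45, and 90 degree directions at distance 1.
offsets = {0: (1, 0), 45: (1, -1), 90: (0, -1)}

img = np.random.default_rng(1).integers(0, 8, size=(32, 32))  # stand-in image
features = []
for angle, (dx, dy) in offsets.items():
    features.extend(glcm_features(glcm(img, dx, dy)))
# 6 features x 3 directions = 18 features per image.
```

Concatenating the three directional feature sets gives the 18-element vector fed to the networks described above.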

Discussion and Conclusion

Our study demonstrated ROP grading using a feedforward neural network model. In practice, retinal images are sent to clinicians for grading and are often not graded while the patient is present for screening; a trained feedforward network makes fast grading feasible. In this experiment, we evaluated the different models and their accuracies: Model 1 achieved an accuracy of 43.37%, while Model 2 and Model 3 each achieved 75%. This work thus presented a comparative study of the different models. Classification and detection of all five stages of ROP, as well as plus disease, should constitute future work.

References

  1. American Academy of Ophthalmology, American Association for Pediatric Ophthalmology and Strabismus, and American Association of Certified Orthoptists (2013) 'Screening examination', Vol. 131, No. 1, pp. 189-194.
  2. Bankhead, P., Scholfield, C.N., McGeown, J.G. and Curtis, T.M. (2012) 'Fast retinal vessel detection and measurement using wavelets and edge location refinement', PLoS ONE, Vol. 7, No. 3, e32435.
  3. Campbell, J. Peter, Esra Ataer-Cansizoglu, Veronica Bolon-Canedo, Alican Bozkurt, Deniz Erdogmus, Jayashree Kalpathy-Cramer, Samir N. Patel et al. “Expert diagnosis of plus disease in retinopathy of prematurity from computer-based image analysis.” JAMA ophthalmology 134, no. 6 (2016): 651-657.
  4. Fierson, Walter M., Antonio Capone, and American Academy of Pediatrics Section on Ophthalmology. “Telemedicine for evaluation of retinopathy of prematurity.” Pediatrics 135, no. 1 (2015): e238-e254.
  5. Gelman, Rony, M. Elena Martinez-Perez, Deborah K. Vanderveen, Anne Moskowitz, and Anne B. Fulton. “Diagnosis of plus disease in retinopathy of prematurity using Retinal Image multiScale Analysis.” Investigative ophthalmology & visual science 46, no. 12 (2005): 4734-4738.
  6. Giraddi, Shantala, Savita Gadwal, and Jagadeesh Pujari. “Abnormality detection in retinal images using Haar wavelet and First order features.” In 2016 2nd International Conference on Applied and Theoretical Computing and Communication Technology (iCATccT), pp. 657-661. IEEE, 2016.
  7. Jayadev, Chaitra, Anand Vinekar, Poornima Mohanachandra, Samit Desai, Amit Suveer, Shwetha Mangalesh, Noel Bauer, and Bhujang Shetty. “Enhancing image characteristics of retinal images of aggressive posterior retinopathy of prematurity using a novel software,(RetiView).” BioMed research international 2015 (2015).
  8. Keck, Katie M., Jayashree Kalpathy-Cramer, Esra Ataer-Cansizoglu, Sheng You, Deniz Erdogmus, and Michael F. Chiang. “Plus disease diagnosis in retinopathy of prematurity: vascular tortuosity as a function of distance from optic disc.”Retina (Philadelphia, Pa.) 33, no. 8 (2013): 1700.
  9. Piermarocchi, Stefano, Silvia Bini, Ferdinando Martini, Marianna Berton, Anna Lavini, Elena Gusson, Giorgio Marchini et al. “Predictive algorithms for early detection of retinopathy of prematurity.” Acta ophthalmologica 95, no. 2 (2017): 158-164.
  10. Sen P, Rao C, Bansal N. Retinopathy of prematurity: An update. Sci J Med Vis Res Found 2015;XXXIII:93-6.
  11. Shah, Parag K., Vishma Prabhu, Smita S. Karandikar, Ratnesh Ranjan, Venkatapathy Narendran, and Narendran Kalpana. "Retinopathy of prematurity: past, present and future." World Journal of Clinical Pediatrics 5, no. 1 (2016): 35.
  12. Wang, Jianyong, Rong Ju, Yuanyuan Chen, Lei Zhang, Junjie Hu, Yu Wu, Wentao Dong, Jie Zhong, and Zhang Yi. “Automated retinopathy of prematurity screening using deep neural networks.” EBioMedicine 35 (2018): 361-368.

Cite this page

Grading Rop With Feedforward Network. (2019, Dec 14). Retrieved from https://paperap.com/grading-rop-with-feedforward-network-shantala-giraddi-b-satyadhyan-best-essay/
