A scientific article by the programmer Mustafa Mortada Khalaf and Assistant Lecturer Muhaimin Samir Aref, entitled "Using Deep Machine Learning Technology to Diagnose Eye Diseases". Date: 05/01/2023


Unless you are an ophthalmologist, or Google's deep learning system, this eye may not look damaged to you. But give a computer enough images and a suitable algorithm and it can learn to see; and if the eyes shown in such images are damaged, the computer can learn to diagnose eye disease better than a human can.

People with diabetes can develop a condition called diabetic retinopathy, in which the tiny capillaries at the back of the eye become damaged and begin to leak. One in three diabetics develops this damage, and it can cause blindness if left untreated. Treatment is possible if the condition is diagnosed early.

The problem for many patients is that it is hard to find an ophthalmologist who can diagnose the condition. With 387 million people with diabetes worldwide who need to see an ophthalmologist to catch the disease early, current screening is not sufficient, and diabetic retinopathy remains the leading cause of vision impairment and blindness among people of working age.

Google devised a way to apply machine learning to this problem, teaching a neural network to diagnose diabetic retinopathy from images of a patient's eye; the company published the research in the Journal of the American Medical Association.

A neural network is, loosely speaking, an artificial brain. Engineers can train the network to distinguish damaged from healthy eyes by showing it a large set of images from patients with and without retinal damage. After training was complete, the Google team tested the network to confirm that the algorithm could diagnose diabetic retinopathy as accurately as the ophthalmologists who looked at the same images.
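The training loop described above can be sketched in a few lines of code. The following is a minimal illustration of the idea, not Google's actual model: it assumes a hypothetical folder "fundus_images" with two subfolders, "healthy" and "retinopathy", each holding labeled retinal photographs.

```python
# Minimal sketch of training an image classifier on labeled retinal photos.
# Not Google's model; "fundus_images/" with "healthy/" and "retinopathy/"
# subfolders is a hypothetical dataset layout used for illustration.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Resize every photo and convert it to a tensor the network can consume.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Each subfolder name becomes a class label (healthy vs. retinopathy).
dataset = datasets.ImageFolder("fundus_images", transform=preprocess)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

# Start from a network pretrained on everyday photos and replace its final
# layer with a two-class output: damaged or not damaged.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

# "Training" is repeated exposure to labeled examples: predict, measure the
# error, and nudge the network's weights to reduce it.
model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```

After enough passes over the data, the network's predictions on new images can be compared against ophthalmologists' readings, which is essentially the evaluation the Google team ran.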

Google's algorithm performed slightly better than the human doctors in this test. The results suggest that neural networks may one day diagnose patients on their own, or at the very least assist doctors in the diagnostic process.

Doctors already use related technology to help diagnose heart disease and some types of cancer. Current tools do not match Google's deep learning algorithm, but they share the same principle: by examining images of your body, whether X-rays or CT scans, doctors can spot blocked arteries in heart disease or the abnormal growths of cancer. The specialist who reads these images, a radiologist, draws on many years of experience to pick out the affected areas.

Human vision is good enough for the task, but humans are prone to mistakes. If a computer could do the same thing, it could outperform people at finding cancerous tumors or clogged arteries. The obvious approach is to teach the computer the difference between a healthy image and a diseased one. That sounds simple, because interpreting an image is a simple task for humans.

The dilemma is how differently hard this task is for a computer and for the human brain. Shown an image, a computer sees nothing but pixels with numeric color values. You, looking at the same picture, see a beach and a woman; you recognize her glasses and her hat, you know that she is jumping, that she is wearing a green bathing suit decorated with white flowers, and that the picture has dark colors. A computer cannot know any of these details unless it has computer vision.
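To make the "pixels with numeric color values" point concrete, here is a small sketch of what a computer actually receives when it "looks" at a photo. The file name beach.jpg is purely illustrative; any photo would do.

```python
# What the computer "sees": a grid of numbers, nothing more.
# "beach.jpg" is a hypothetical file name used for illustration.
from PIL import Image
import numpy as np

image = Image.open("beach.jpg").convert("RGB")
pixels = np.array(image)       # shape: (height, width, 3 color channels)

print(pixels.shape)            # e.g. (768, 1024, 3)
print(pixels[0, 0])            # top-left pixel, e.g. [135 206 235]
# Nothing in this array says "beach", "hat", or "bathing suit";
# extracting that meaning is exactly the job of computer vision.
```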

Computer vision is a way of teaching a computer to "see": to look at a picture and know that this is a person on a beach wearing a bathing suit.

Google is well placed to change this, partly because the huge amount of data it holds has given it deep experience in computer vision. You can see how it works for yourself: Google uses computer vision to organize your personal photos. Log into Google Photos (assuming you have a Google account, and you probably do) and you will find your pictures organized and searchable by term. Try "snow pictures" or "dog pictures": pictures of snow and of dogs will appear, not because someone described them in text, but because Google's computer vision algorithm recognized snow and dogs in the images.
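Google Photos' pipeline is proprietary, but the underlying idea, running a trained vision model over each photo and indexing the labels it predicts, can be sketched with an off-the-shelf classifier. The code below illustrates the principle only; it is not the system Google uses, and my_photo.jpg is a hypothetical file name.

```python
# Labeling a photo with an off-the-shelf classifier, as an illustration of
# searching photos by content. Not Google's actual pipeline.
import torch
from torchvision import models
from torchvision.models import ResNet50_Weights
from PIL import Image

weights = ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights)
model.eval()

# Apply the resizing and normalization this model expects.
preprocess = weights.transforms()
image = preprocess(Image.open("my_photo.jpg").convert("RGB")).unsqueeze(0)

with torch.no_grad():
    probabilities = model(image).softmax(dim=1)[0]

# The top predicted labels could be stored in a search index, so that a
# later query for "dog" or "snow" retrieves this photo.
top = probabilities.topk(5)
for score, idx in zip(top.values, top.indices):
    print(f"{weights.meta['categories'][idx]}: {score:.2f}")
```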

Diagnosing diabetic retinopathy is one of the first applications Google's deep learning computer vision team found, and other groups are now working on similar projects. Cornell University has a Vision and Image Analysis group that uses computer vision to diagnose lung diseases, heart problems, and bone health problems; a Finnish group is working on diagnosing malaria from blood images; and IBM has spent two years developing an algorithm to detect skin cancer.

Computer vision and deep learning will one day change the way doctors diagnose their patients. The Food and Drug Administration (FDA) has not yet approved these technologies for medical use, but if the road to the future passes through here, the agency will find a way to authorize the use of neural networks safely. In the meantime, you can test Google's computer vision by searching for pictures of your pet. It may not work, but it passes the time.