
Superior Auto-Identification of Trypanosome Parasites by Using a Hybrid Deep-Learning Model

Published: October 27, 2023 doi: 10.3791/65557

Summary

Blood parasites of worldwide medical importance were automatically screened using simple steps on a low-code AI platform. Prospective diagnosis of blood films was improved by combining object detection and object classification methods in a hybrid deep learning model. The collaboration of active surveillance and well-trained models helps identify hotspots of trypanosome transmission.

Abstract

Trypanosomiasis is a significant public health problem in several regions across the world, including South Asia and Southeast Asia. The identification of hotspot areas under active surveillance is a fundamental procedure for controlling disease transmission. Microscopic examination is a commonly used diagnostic method. It is, nevertheless, primarily reliant on skilled and experienced personnel. To address this issue, an artificial intelligence (AI) program was introduced that makes use of a hybrid deep learning technique of object identification and object classification neural network backbones on the in-house low-code AI platform (CiRA CORE). The program can identify and classify the protozoan trypanosome species, namely Trypanosoma cruzi, T. brucei, and T. evansi, from oil-immersion microscopic images. The AI program utilizes pattern recognition to observe and analyze multiple protozoa within a single blood sample and highlights the nucleus and kinetoplast of each parasite as specific characteristic features using an attention map.

To assess the AI program's performance, two unique modules are created that provide a variety of statistical measures such as accuracy, recall, specificity, precision, F1 score, misclassification rate, receiver operating characteristics (ROC) curves, and precision versus recall (PR) curves. The assessment findings show that the AI algorithm is effective at identifying and categorizing parasites. By delivering a speedy, automated, and accurate screening tool, this technology has the potential to transform disease surveillance and control. It could also assist local officials in making more informed decisions on disease transmission-blocking strategies.

Introduction

Trypanosomiasis poses a significant challenge to global health owing to a variety of zoonotic species that cause human disease across a wide geographical distribution beyond the African and American continents, including South and Southeast Asia1,2,3. Human African trypanosomiasis (HAT), or sleeping sickness, is caused by Trypanosoma brucei gambiense and T. b. rhodesiense, which produce the chronic and acute forms, respectively, and accounts for the major spread in Africa. The causative parasite belongs to the Salivaria group because it is transmitted by the infected saliva of tsetse flies4. In contrast, the well-known American trypanosomiasis (Chagas disease), caused by T. cruzi, has become a public health concern for non-endemic countries, including Canada, the USA, Europe, Australia, and Japan, because of frequent migration of individuals from endemic areas5. This trypanosome belongs to the Stercoraria group because it is transmitted by the infected feces of reduviid bugs. The trypanosomiasis and trypanosomosis (surra) caused by T. evansi infection are endemic in Africa, South America, and Western, Eastern, South, and Southeast Asia3,6. Although human trypanosomiasis caused by this trypanosome has been reported3,4,7,8,9,10,11,12, the route of transmission remains debated: either mechanical transmission or transmission of infected blood through hematophagous insects such as tsetse flies and tabanids or horse flies6,7,8,9,10,12,13,14. No human case has been reported in Thailand; however, a high prevalence of T. evansi infection in dogs15, racing horses, and water buffalo in the eastern region has been published16, suggesting that transmission between domestic animals has occurred. Several atypical human infections caused by animal trypanosomes (T. vivax, T. b. brucei, T. congolense, T. lewisi, and T. evansi), which are not the classical forms of human trypanosomes, have been reported17. Awareness of atypical human infections may be underestimated, highlighting the need for improved diagnostic tests and field investigations to detect and confirm these atypical cases, and to allow proper control and treatment of animal pathogenic diseases that affect global livestock, food security18, and human healthcare. This motivated the development of a strategy, integrated with an existing common method (microscopic examination), to rapidly screen blood samples in remote areas during active surveillance, enabling the identification of hotspot zones for restricting and controlling disease transmission.

Surra occurs sporadically in a wide range of domestic animals, such as dromedaries, cattle, equines, and dogs, and the euryxenous T. evansi may be zoonotic to humans1,4,13,14. Human infection seems unlikely because a trypanolytic factor in human serum, expressed from an sra-like gene, is capable of preventing infection by T. brucei and T. congolense12,19. Furthermore, as the first case report from India demonstrates, the illness has no association with immunocompromised HIV patients4. Possible human infection may instead be related to high-density lipoprotein deficiency with abnormal function of the trypanosome lytic factor, as in the rare autosomal recessive genetic disorder Tangier disease4. In 2016, however, a Vietnamese patient was found to possess two wild-type APOL1 alleles and a serum APOL1 concentration within the normal range, so the theory of APOL1 deficiency is no longer considered valid12. Therefore, one possible mechanism of trypanosome infection is direct contact of a wound with infected animal blood during occupational animal farming4,12. Microscopic examination reveals that T. evansi is morphologically a monomorphic trypomastigote, predominantly a long, slender, flagellated, and dividing form that resembles its relative species T. brucei1,12,13. The nucleus is in the central position, with a small visible kinetoplast in the posterior position. A previous study indicated that the parasite can exist in two comparable forms, known as the classical and truncated forms; however, their respective pathogenic effects on hosts remain to be confirmed20. The course of symptoms varies, including intermittent fever associated with chills and sweating. Fortunately, suramin is a successful first-line therapy for early-stage human African trypanosomiasis without invasion of the central nervous system (CNS), and it has cured patients in India and Vietnam4,12,21.

Apart from examination of clinical signs, several diagnostic methods for T. evansi exist, including parasitological microscopic observation4,9,12, serological tests4,8,9,10,12, and molecular biological tests4,12. Thin blood films stained with Giemsa are routinely and commonly used to visualize the parasite under microscopic examination22. Although the procedure is feasible, it is time-consuming and labor-intensive, suffers from inter-rater variability, is sensitive only during the acute phase, and requires a trained examiner23. Both molecular biological and serological testing also require highly skilled personnel to perform multiple sample preparation steps, including extraction and purification, before testing with expensive apparatus; such tests are difficult to standardize, carry a risk of contamination with extra-parasitic material, and can yield discrepant results24. Based on this rationale, a rapid and early screening technology is needed to support field surveillance and ensure that survey results are reported in time to identify hotspot zones for further control of disease transmission1,8. Computer-aided diagnosis (CAD) devices have been proposed as an innovative technology for medical fields, including histopathological and cytopathological tasks25. CAD operates at high speed and relies on pattern recognition, namely artificial intelligence (AI). The AI method is implemented with convolutional neural network algorithms that can handle large numbers of dataset samples, particularly in a supervised learning approach, in which a model is trained on labeled data.

In general, AI is the ability of computers to solve tasks that require expert intelligence, such as data labeling. Machine learning (ML), a subfield of AI, is represented as a computer system with two distinct processes: feature extraction and pattern recognition. Deep learning (DL), an advanced class of ML algorithms, refers to computerized programs and devices that achieve human-like performance with accuracy equal to or greater than that of human professionals26. Currently, the role of DL in the medical and veterinary fields is expanding promisingly, revolutionizing communicable disease prevention and guiding individual health staff22,27. With quality labels and large augmented datasets, the potential of DL applications is nearly limitless, freeing specialists to manage other project tasks. Specifically, advances in digital imaging along with computer-assisted analysis have improved automated diagnosis and screening in the five reported categories of pathology imaging: static, dynamic, robotic, whole-slide imaging, and hybrid methods28. Integrating DL algorithm approaches with digital image data could encourage local staff to use the technology in their daily practice.

An increase in prediction accuracy from using a hybrid model has been demonstrated previously27. To identify the trypanosome parasite in microscopic images, this research presents two hybrid models incorporating the YOLOv4-tiny (object detection) and DenseNet201 (object classification) algorithms. Among several detection models, YOLOv4-tiny with a CSPDarknet53 backbone shows high performance in both localization and classification29. Because the real-time detector optimizes the balance among input network resolution, the number of convolutional layers, the total number of parameters, and the number of layer outputs, it prioritizes fast operating speed and parallel computation compared with previous versions. The Dense Convolutional Network (DenseNet) is another popular model that achieves state-of-the-art results across competitive datasets. DenseNet201 yields a validation error comparable to that of ResNet101; however, DenseNet201 has fewer than 20 million parameters, far less than ResNet101's more than 40 million30. Therefore, the DenseNet model can improve prediction accuracy as the number of parameters increases, with no sign of overfitting. Here, an artificial intelligence (AI) program utilizes a hybrid deep learning algorithm with detection and classification neural network backbones on the in-house CiRA CORE platform. The developed program can identify and classify the protozoan trypanosome species Trypanosoma cruzi, T. brucei, and T. evansi from oil-immersion microscopic images. This technology has the potential to revolutionize disease surveillance and control by providing a rapid, automated, and accurate screening method. It could aid local staff in making more informed decisions on transmission-blocking strategies for parasitic protozoan disease.


Protocol

Archived blood films and project design were approved by the Institutional Biosafety Committee, the Institutional Animal Care and Use Committee of the Faculty of Veterinary Science, Chulalongkorn University (IBC No. 2031033 and IACUC No. 1931027), and Human Research Ethics Committee of King Mongkut's Institute of Technology Ladkrabang (EC-KMITL_66_014).

1. Preparation of raw images

  1. The image dataset preparation
    1. Obtain at least 13 positive slides with blood-parasite infections, including T. brucei, T. cruzi, and T. evansi, confirmed by expert parasitologists. Separate the 13 slides into training (10 slides) and testing (three slides) sets.
    2. Acquire images of the Giemsa-stained thin blood films described above under an oil-immersion field of a light microscope with a digital camera. Obtain images containing multiple trypomastigotes of all three parasite species under microscopic examination; look for a slender shape, long tails, an undulating membrane, and a kinetoplast at the anterior end.
      NOTE: Creating both thick and thin smears would enhance the detection of acute phase trypanosomiasis31. The blood collection by finger-prick is recommended by WHO32. Nevertheless, thin films are more effective in identifying Trypanosoma cruzi and other species, as these organisms tend to become distorted in thick films33. In light of this, we utilized thin blood film images to maintain the appropriate morphology of the parasites for this study.
    3. Store all images in a parasite-specific folder with the following specifications: 1,600 x 1,200 pixels, 24-bit depth, and JPG file format. Split the images into the training and test sets at a ~6:1 ratio.
      NOTE: See https://gitlab.com/parasite3/superior-auto-identification-of-medically-important-trypanosome-parasites-by-using-a-hybrid-deep-learning-model/-/blob/main/JOVEimage.zip; the 650 images were split into training (560 images) and test (90 images) sets (a scripted version of this split is sketched at the end of this section).
    4. Define the region of interest as a rectangular label for two classes: trypanosomes and non-trypanosomes. Use the auto-cropping module, developed in the in-house CiRA CORE program (see Table of Materials), to crop all detected objects with the well-trained object detection model. Collect a single object per image for training the object classification.
      NOTE: For this paper, 1,017 images were split for training (892 images) and testing (126 images). The model training was performed with four labeled classes, including leukocyte, T. brucei, T. cruzi, and T. evansi.
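
The protocol performs the split manually through folders; for readers who prefer scripting it, the following is a minimal Python sketch of the ~6:1 split in step 1.1.3. Folder names and paths here are hypothetical, not part of the original protocol.

```python
import random
import shutil
from pathlib import Path

random.seed(42)                  # reproducible split

SRC = Path("dataset")            # hypothetical layout: dataset/<species>/*.jpg
DST = Path("split")
TRAIN_FRAC = 560 / 650           # ~6:1 train:test ratio used in this study

for species_dir in SRC.iterdir():
    if not species_dir.is_dir():
        continue
    images = sorted(species_dir.glob("*.jpg"))
    random.shuffle(images)
    n_train = round(len(images) * TRAIN_FRAC)
    for subset, files in (("train", images[:n_train]), ("test", images[n_train:])):
        out_dir = DST / subset / species_dir.name
        out_dir.mkdir(parents=True, exist_ok=True)
        for f in files:
            shutil.copy(f, out_dir / f.name)
```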

2. Training process with in-house CiRA CORE platform

  1. Starting a new project
    1. Open the CiRA CORE application from the computer desktop (see Table of Materials) and create a new project by double-clicking on the program's icon.
    2. Choose the operation icon on the left vertical toolbar to select the required tools.
  2. Object detection model training
    1. Select the training-DL model function for data labeling and training by using the drag-and-drop method. Go to the General toolbar | CiRA AI | Drag DeepTrain | Drop DeepTrain on the screen (right-hand side).
      NOTE: For additional options, right-click on the selected tool and perform the appropriate functions: Copy, Cut, or Delete.
    2. Import the images using the DeepTrain tool's settings. Click on the Load images button and navigate to the image directory. Label the objects by holding the left-click and naming the selected object. Adjust the rectangle line thickness and font size by clicking on the Display Setting button, and save the ground truth as a .gt file in the same directory via Save GT.
      NOTE: Save periodically during labeling to guard against undesired events such as power outages, automatic program closure, or hanging.
    3. Prior to model training, augment the data to provide sufficient training information using the four augmentation techniques: Rotation, Contrast, Noise, and Blur (a conceptual sketch of these transforms follows this section). Click the Gen Setting button to access this feature.
    4. Initiate model training by clicking the Training button in the DeepTrain tool. The training part has two sub-functions: Generate Training Files and Train. Under the Generate Training Files function, select the desired models, batch size, and subdivisions. Click the Generate button to generate data and save it in the directory. In the Train function, choose the following options: i) use another generated training location for conditions and backup, ii) use prebuilt weights for continued training, or iii) override parameters for current training design. This will design the model configuration and training conditions.
      NOTE: The generation process time depends on the image file size, augmentation usage, and available memory space.
    5. Once all necessary configurations are complete, begin the model training by clicking on the Train button. Allow the program to continuously execute, evaluating the training loss and adjusting the weight of the dataset during the training process. If the model achieves optimal loss, save the trained weight file in the specified directory by clicking on the Export button.
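
The DeepTrain tool applies augmentation internally through the GUI. As a conceptual illustration only, not the platform's actual code, the four techniques named in step 2.2.3 can be sketched with OpenCV and NumPy; the parameter ranges below are assumptions.

```python
import cv2
import numpy as np

def augment(img: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Apply the four augmentations named in step 2.2.3: rotation, contrast, noise, blur."""
    h, w = img.shape[:2]
    # random rotation about the image center (range is an assumption)
    angle = rng.uniform(-30, 30)
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    img = cv2.warpAffine(img, M, (w, h))
    # random contrast (and slight brightness) adjustment
    img = cv2.convertScaleAbs(img, alpha=rng.uniform(0.8, 1.2), beta=rng.uniform(-10, 10))
    # additive Gaussian noise
    noise = rng.normal(0, 8, img.shape).astype(np.float32)
    img = np.clip(img.astype(np.float32) + noise, 0, 255).astype(np.uint8)
    # Gaussian blur with a small random (odd) kernel
    k = int(rng.choice([1, 3, 5]))
    return cv2.GaussianBlur(img, (k, k), 0)

# usage: augmented = augment(cv2.imread("blood_film.jpg"), np.random.default_rng(0))
```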

3. Object detection model evaluation

  1. Select the object detection model evaluation function for model evaluation using the drag-and-drop method. Go to the Plugin toolbar | Evaluate | Drag EvalDetect | Drop EvalDetect on the screen (right-hand side).
  2. Click on Setting to access three functions: Detection, Evaluate, and Plot. Initiate model evaluation by importing the trained weight file from the directory (step 2.2.5) by clicking on Load Config.
  3. Under the Detection function, select the non-maximum suppression (NMS) value to enhance accuracy by eliminating redundant false positive (FP) detections. NMS removes duplicate model-generated detections for improved reliability (a computational sketch of NMS and IoU follows this section).
  4. Proceed with the following steps under the Evaluation function:
    1. Import test images from the image file directory by clicking on Browse. Import the GT file from the directory where it was saved in step 2.2.2 by clicking on Load GT.
    2. Choose the Intersection over Union (IoU) value to assess accuracy on the specific image test dataset.
    3. Click the Evaluation button to assess the detection model in the specified directory. Once the evaluation is completed, the results will be automatically saved as a CSV file in the same directory, sorted by class name. This CSV file will provide essential parameters such as True Positive (TP), False Positive (FP), False Negative (FN), Recall, and Precision for each class.
  5. To plot the Precision-Recall (PR) curve, follow these steps under the Plot function: Import the CSV files from the previous section (step 3.4) directory by clicking on Browse. Choose classes from the list and click the Plot button to display the editable PR curve image.
  6. Finally, to save an image with the AUC values of the PR curve in the required image format at the specified directory, click on the Save button of the image.
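
Steps 3.3 and 3.4.2 rest on two standard computations, IoU and NMS. A minimal Python sketch of both follows; the box format is assumed to be [x1, y1, x2, y2] in pixels with a per-box confidence score.

```python
def iou(a, b):
    """Intersection over Union of two boxes in [x1, y1, x2, y2] format."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, scores, iou_thresh=0.4):
    """Greedy non-maximum suppression: keep the highest-scoring boxes, drop overlaps."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_thresh]
    return keep
```

The NMS threshold of 0.4 matches the value used for testing in the Representative Results.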

4. Image cropping for a single object per image

  1. Prior to cropping the images, complete the following steps:
    1. Import the images from the image file directory by accessing the settings of the Image Slide tool.
    2. Import the trained weight file (saved in step 2.2.5) by accessing the settings of the Deep Detect tool. Click on the Config button | + button, select the backend (CUDA or CPU), provide a name, click OK, choose the weight file directory, and click Choose. Within the Deep Detect tool, select the detection parameters (threshold and non-maximum suppression (NMS)), drawing parameters, tracking parameters, and region of interest (ROI) parameters.
    3. Select the directory where the cropped images will be saved by accessing the settings of the Deep Crop tool. Click Browse | choose the directory to save the cropped images | click Choose | select the image format (jpg or png) | enable the Auto Save option.
  2. Crop images to obtain a single object per image for image classification and segmentation. To carry out this process, utilize four tools and establish connections between them: go to the General toolbar | General | Button Run. Next, navigate to General toolbar | CiRA AI | DeepDetect; then, go to General toolbar | CiRA AI | DeepCrop. Finally, go to Image toolbar | Acquisition | ImageSlide.
  3. Once all the necessary settings are in place, initiate the image cropping process by clicking on the Button Run tool.
  4. Obtain a new image training dataset consisting of single-object images of 608 x 608 pixels (a conceptual cropping sketch follows below).
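
Conceptually, the Deep Crop stage takes each detected bounding box, crops it out, and resizes it to 608 x 608 pixels. A minimal OpenCV sketch under that assumption follows; the detector's output format (integer pixel coordinates) is assumed.

```python
import cv2
from pathlib import Path

def crop_detections(image_path, boxes, out_dir, size=608):
    """Save each detected box as its own image, resized to size x size (step 4.4)."""
    img = cv2.imread(str(image_path))
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for i, (x1, y1, x2, y2) in enumerate(boxes):  # assumed pixel coordinates
        crop = img[y1:y2, x1:x2]
        crop = cv2.resize(crop, (size, size), interpolation=cv2.INTER_LINEAR)
        cv2.imwrite(str(out / f"{Path(image_path).stem}_obj{i}.jpg"), crop)
```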

5. Image classification as model training

  1. Use drag-and-drop to select the image classification model training function for data training. Go to the Image toolbar | DeepClassif | Drag ClassifTrain | Drop ClassifTrain on the screen.
  2. Import images for model training using the ClassifTrain tool's settings. Click on the Open folder button and navigate to the desired image directory. Before training, augment the data by clicking on the Augmentation button, using techniques such as Rotation, Contrast, Flipping (horizontal and/or vertical), Noise, and Blur.
  3. To commence model training, click on the GenTrain button of the ClassifTrain tool. Under the GenTrain function, select the models, batch size, and subdivisions. Assign a directory to save the generated file. Click the Generate button to proceed with data for training. In the Train function, tick the appropriate options: Continue training with default weight or custom weight.
    NOTE: The generation process may take time depending on factors such as image file size, augmentation usage, class balancing, and available memory space.
  4. Once all preparations are complete, initiate the model training by clicking the Start button. Allow the program to execute continuously, evaluating the training loss and adjusting the weight of the dataset during the training process. If the model achieves the desired level of loss, save the trained weight file to the specified directory by clicking on the Export button.
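
The ClassifTrain GUI encapsulates model generation and training. As a conceptual stand-in only, not the platform's implementation, fine-tuning a DenseNet201 on the four cropped-image classes might look like the following PyTorch sketch; paths, epoch count, and batch size are illustrative, while the optimizer hyperparameters follow those reported in the Representative Results.

```python
import torch
from torch import nn
from torchvision import datasets, models, transforms

# Assumed preprocessing: 256 x 256 RGB inputs, as described in the Representative Results.
tf = transforms.Compose([transforms.Resize((256, 256)), transforms.ToTensor()])
train_set = datasets.ImageFolder("split/train", transform=tf)  # hypothetical path
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.densenet201(weights="IMAGENET1K_V1")            # requires recent torchvision
model.classifier = nn.Linear(model.classifier.in_features, 4)  # leukocyte + 3 species

opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9, weight_decay=0.0005)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(10):                                        # illustrative epoch count
    for x, y in loader:
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()

torch.save(model.state_dict(), "classifier_weights.pt")        # analogous to Export
```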

6. Classification model evaluation

  1. Select the image classification model evaluation function for model evaluation using the drag-and-drop method. Go to the Plugin toolbar | Evaluate | Drag EvaluateClassif | Drop EvaluateClassif on the screen (the right-hand side).
  2. Click on Setting to access additional functions within the EvaluateClassif tool, namely Evaluate and PlotROC.
  3. To initiate model evaluation, click on the Evaluate button in the EvaluateClassif tool. Follow these steps under the Evaluate function.
    1. Import the test images from the image file directory by clicking on the Load folder image. Import the trained weight file from the directory (saved in step 5.4) by clicking on Load Config. Click the Start button to evaluate the classification model.
    2. Once the evaluation is complete, save the evaluated file as CSV in the specified directory by clicking on the Export to CSV button. For evaluation of data at every threshold, save the CSV file with class names in the specified directory by clicking on Start all threshold. The saved CSV file includes parameters such as Recall (True Positive Rate), False Positive Rate, and Precision for each class.
  4. To plot the Receiver Operating Characteristics (ROC) curve, click on the PlotROC button within the EvaluateClassif tool. Follow these steps under the PlotROC function.
    1. Import CSV files from the directory obtained earlier by clicking on Browse. Inspect the imported class list and select each class label to plot the ROC curve.
    2. Click the Plot button to visualize the ROC curve as an image. Make the desired edits to adjust image properties, including font size, font colors, rounding the decimal, line styles, and line colors.
  5. Finally, save an image of the ROC curve with the AUC values in the required image format at the specified directory by clicking on the Save button.

7. Testing the process with the CiRA CORE application

  1. Object detection as model testing
    1. To perform model testing, utilize four tools and establish connections between them. Go to the General toolbar | General | Button Run. Then, General toolbar | General | Debug. After that, click on General toolbar | CiRA AI | DeepDetect, and finally Image toolbar | Acquisition | ImageSlide.
    2. Before testing the images, follow these steps:
      1. Import the test images from the image file directory by clicking on the Setting option in the Image Slide tool.
      2. Import the saved trained weight file from step 2.2.5 by clicking on the Setting option in the DeepDetect tool. Click on the Config button, then the + button, select the backend (CUDA or CPU), provide a name, click OK, choose the weight file directory, and click Choose. Under the DeepDetect tool, select the detection parameters (Threshold and nms), drawing parameters, tracking parameters, and ROI parameters.
      3. View the test image results by clicking on the image function in the Debug tool.
    3. Finally, check the predicted results for each image by clicking on the Run button on the Button Run tool.
  2. Image classification as model testing
    1. To perform model testing, utilize four tools and establish connections between them. Go to the General toolbar | General | Button Run; then, General toolbar | General | Debug. After that, navigate to Image toolbar | Acquisition | ImageSlide, and finally, Image toolbar | DeepClassif | DeepClassif.
    2. Before testing the images, follow these steps:
      1. Import the test images from the image file directory by clicking on the Setting option in the Image Slide tool.
      2. Import the saved trained weight file from step 5.4 by clicking on the Setting option in the DeepClassif tool. Click on the Config button | + button | select the backend (CUDA or CPU) | provide a name | click OK | choose the weight file directory | click Choose. Under the DeepClassif tool, select the classification parameters (Threshold and number of top-class predictions), Guide map parameters (threshold, alpha, beta, and color map), and various parameters in the color map (an attention-map sketch follows this section).
      3. View the test image results by clicking on the image function in the Debug tool.
    3. Finally, check the predicted results for each image by clicking on the Run button on the Button Run tool.
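
The guide map in step 7.2.2.2 overlays an attention map on the classification result. The platform's exact method is not specified here; a Grad-CAM-style map, one common way to produce such overlays (an assumption, not necessarily what CiRA CORE uses), can be sketched in PyTorch as follows.

```python
import torch
import torch.nn.functional as F

def gradcam(model, x, target_class, layer):
    """Minimal Grad-CAM-style attention map for one image tensor x of shape (1, 3, H, W).

    This is an illustrative sketch, not the platform's guide-map implementation.
    """
    feats, grads = [], []
    h1 = layer.register_forward_hook(lambda m, i, o: feats.append(o))
    h2 = layer.register_full_backward_hook(lambda m, gi, go: grads.append(go[0]))
    score = model(x)[0, target_class]   # forward pass, pick the target logit
    model.zero_grad()
    score.backward()                    # gradients w.r.t. the chosen layer's output
    h1.remove(); h2.remove()
    w = grads[0].mean(dim=(2, 3), keepdim=True)            # channel importance weights
    cam = F.relu((w * feats[0]).sum(dim=1, keepdim=True))  # weighted feature sum
    cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
    return (cam / (cam.max() + 1e-8)).squeeze().detach()   # normalized heatmap

# usage with the DenseNet sketch above: heat = gradcam(model, x, pred_class, model.features)
```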

8. Hybrid (detection and classification) as model testing

  1. To perform this model testing, utilize four tools and establish connections between them. Go to the General toolbar | General | Button Run. Then, General toolbar | General | Debug. After that, Image toolbar | Acquisition | ImageSlide, and finally, Image toolbar | DeepComposite | DeepD->C.
  2. Before testing the images, follow these steps: Import test images from the image file directory by clicking on the Setting option in the Image Slide tool. Import the two saved trained weight files (the detection weights from step 2.2.5 and the classification weights from step 5.4) by clicking on the Setting option in the DeepD->C tool:
    1. For the Detect function, click on the Config button |+ button, select the backend (CUDA or CPU) | provide a name | click OK | choose the weight file directory | click Choose. Under the Detect function, select the detection parameters (Threshold and nms), drawing parameters, tracking parameters, and ROI parameters.
    2. For the Classif function, click on the Config button |+ button, select the backend (CUDA or CPU) | provide a name | click OK | choose the weight file directory | click Choose. Under the Classif function, select the classification parameters (Threshold and number of top-class predictions) and Guide map parameters (threshold, alpha, beta, and color map).
  3. View the test image results by clicking on the image function in the Debug tool. Finally, check the predicted results for each image by clicking on the Run button on the Button Run tool.
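
Conceptually, the DeepD->C module chains the two trained models: detect, crop, and classify each crop. A minimal Python sketch of that flow is shown below; `detector` and `classifier` are placeholders for the two trained models, and their interfaces are assumptions.

```python
import cv2

CLASS_NAMES = ["leukocyte", "T. brucei", "T. cruzi", "T. evansi"]

def hybrid_predict(image, detector, classifier, det_thresh=0.5, size=608):
    """Hybrid pipeline (section 8): detect parasites, then classify each detection."""
    results = []
    for (x1, y1, x2, y2), score in detector(image):   # assumed detector interface
        if score < det_thresh:                        # 50% threshold, as in the results
            continue
        crop = cv2.resize(image[y1:y2, x1:x2], (size, size))
        probs = classifier(crop)                      # assumed: per-class probabilities
        best = int(max(range(len(probs)), key=probs.__getitem__))
        results.append(((x1, y1, x2, y2), CLASS_NAMES[best], probs[best]))
    return results
```

If detection fails for an object, no crop is produced and the classifier is never invoked for it, which mirrors the behavior described in the Representative Results.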

9. Five-fold cross-validation

NOTE: To validate the performance of the proposed model more effectively, K-fold cross-validation is used.

  1. Divide the dataset into five sections, corresponding to the five folds of cross-validation. During each iteration of model training and testing, use one section as the validation set for testing and the remaining four sections for training. Repeat this process five times, with each fold being used as the validation set once.
  2. For Folds 1 through 5:
    1. Repeat section 5 to train the model using the training data from the four folds.
    2. Repeat section 7.2 to test the model using the remaining fold as the test set.
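
A minimal sketch of the five-fold split using scikit-learn's KFold follows; the file paths and the train/test calls are placeholders for sections 5 and 7.2.

```python
import numpy as np
from sklearn.model_selection import KFold

files = np.array([f"crops/img_{i:04d}.jpg" for i in range(1017)])  # placeholder paths
kf = KFold(n_splits=5, shuffle=True, random_state=0)

for fold, (train_idx, test_idx) in enumerate(kf.split(files), start=1):
    train_files, test_files = files[train_idx], files[test_idx]
    # train the classification model on train_files (section 5),
    # then evaluate it on test_files (section 7.2)
    print(f"fold {fold}: {len(train_files)} train / {len(test_files)} test")
```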

10. Model evaluation

  1. Confusion matrix
    1. Based on the test results, four conditions arise, as follows:
      1. True Positive (TP): When the input image is true, and the prediction is also true.
      2. False Positive (FP): When the input image is false, but the prediction is true.
      3. False Negative (FN): When the input image is true, but the prediction is false.
      4. True Negative (TN): When the input image is false, and the prediction is also false.
    2. Using these four conditions, evaluate the performances with the confusion matrix.
  2. Performance evaluations
    1. The most commonly used classification performance metrics are accuracy, precision, recall, specificity, and F1-score. Calculate all evaluation metrics in equations (1-6) from the values in the confusion matrix (a computational sketch follows at the end of this section).
      $\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}$    (1)
      $\mathrm{Misclassification\ rate} = \frac{FP + FN}{TP + TN + FP + FN}$    (2)
      $\mathrm{Recall} = \frac{TP}{TP + FN}$    (3)
      $\mathrm{Specificity} = \frac{TN}{TN + FP}$    (4)
      $\mathrm{Precision} = \frac{TP}{TP + FP}$    (5)
      $\mathrm{F1\ score} = \frac{2 \times \mathrm{Precision} \times \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}$    (6)
  3. ROC curve
    NOTE: The ROC curve is a performance measure for classification problems with different threshold settings. The area under the ROC curve (AUC) represents the degree or measure of separability, while the ROC is a probability curve.
    1. The ROC curve is a two-dimensional graph with the true positive rate (TPR) and false positive rate (FPR) values plotted on the Y and X axes, respectively. Construct the ROC curves using the TPR and FPR values obtained from the confusion matrix. The TPR value is the same as the sensitivity; calculate the FPR value using equation (7).
      $\mathrm{FPR} = \frac{FP}{FP + TN}$    (7)
    2. After obtaining the TPR and FPR values, plot the ROC curve using the Jupyter Notebook open-source web tool in a Python environment. The AUC is an effective way to assess the performance of the proposed model in ROC curve analysis.
  4. PR curve
    1. Use the PR curve to evaluate models by measuring the area under the PR curve. Construct the PR curve by plotting the model's precision and recall as functions of its confidence threshold. Because the PR curve is also a two-dimensional graph, plot Recall on the x-axis and Precision on the y-axis.
    2. Plot the PR curve, like the ROC curve, using the open-source Jupyter Notebook web tool in a Python environment. The area under the Precision-Recall curve (AUC) score is also helpful in multilabel classification.
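
Equations (1)-(7) and the Jupyter plotting steps can be reproduced with scikit-learn and matplotlib. The following is a minimal sketch, assuming per-class binary labels and model scores have already been read from the exported CSV.

```python
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve, precision_recall_curve, auc

def confusion_metrics(tp, fp, fn, tn):
    """Equations (1)-(7) computed from the confusion matrix counts."""
    total = tp + fp + fn + tn
    return {
        "accuracy": (tp + tn) / total,
        "misclassification": (fp + fn) / total,
        "recall": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "precision": tp / (tp + fp),
        "f1": 2 * tp / (2 * tp + fp + fn),
        "fpr": fp / (fp + tn),
    }

def plot_curves(y_true, y_score, label):
    """ROC and PR curves for one class (y_true: 0/1 labels, y_score: model probability)."""
    fpr, tpr, _ = roc_curve(y_true, y_score)
    prec, rec, _ = precision_recall_curve(y_true, y_score)
    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
    ax1.plot(fpr, tpr, label=f"{label} (AUC={auc(fpr, tpr):.3f})")
    ax1.plot([0, 1], [0, 1], "k--")  # random-prediction reference line
    ax1.set(xlabel="False positive rate", ylabel="True positive rate", title="ROC")
    ax2.plot(rec, prec, label=f"{label} (AUC={auc(rec, prec):.3f})")
    ax2.set(xlabel="Recall", ylabel="Precision", title="PR")
    ax1.legend(); ax2.legend()
    plt.show()
```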


Representative Results

In this study, hybrid deep learning algorithms were proposed to automatically predict whether a blood sample is positive for trypanosome infection. Archived, Giemsa-stained blood films were sorted to localize and classify parasitized versus non-parasitized objects using an object detection algorithm based on a darknet backbone neural network. For each rectangular box predicted by the detection model, the best-selected classification model was developed to classify the three trypanosome species of medical and veterinary importance: T. brucei, T. cruzi, and T. evansi. The final output of the hybrid models demonstrated robustness against the variation in 100x microscopic images that might affect the prediction result, including the blood-stage morphology of the parasite. In addition, environmental factors may degrade image quality, including staining color changes over storage time, variation in the intensity of the microscope's light source, and differences in blood film preparation skill. Nevertheless, the best-selected model achieved the goal with high performance.

Localization and classification of multi-class labels
Because detecting parasitic protozoa in Giemsa-stained blood films under oil-immersion microscopy is tedious and lengthens turnaround time, the task is prone to error and bias. Well-trained AI approaches require a large pool of image data, rescaled to 416 x 416 pixels across three RGB color channels with varied feature characteristics, to increase correct localization and classification. Training and optimization parameters were set to a learning rate of 0.002, a burn-in of 1,000, and steps ranging between 400,000 and 450,000. Low training loss with high training accuracy was considered the optimum (saturation) level, under a momentum of 0.9, hue of 0.1, and decay of 0.0005. In the testing phase with unseen data, correct localization and classification were assessed using intersection over union (IoU) and the predicted class probability. Test results were interpreted at a probability threshold of 50% and a non-maximum suppression (NMS) of 0.4.

Across all parasitized blood films studied, trypanosomes were discriminated from non-trypanosome objects using a detection neural network model that performs both localization and classification (Figure 1)22. The proposed detection task yielded an outstanding mean average precision of 93.10% (Table 1). Although the trained detection model identified the non-trypanosome class somewhat better than the trypanosome class, precision exceeded 91% for both labels. In addition, the precision versus recall curve showed a high average AUC value of 0.969, with AUC values of 0.976 and 0.961 for the parasite and non-parasite classes, respectively (Figure 2). This assured us that the trained model is trustworthy. The rectangular boxes from the first detection results were cropped using the image capture module in the in-house CiRA CORE program, and the cropped images were sorted into three folders specific to the trypanosome species. This prepared the input data for training the classification model, as illustrated in the next subsection.

Classification model comparison
To find an appropriately trained model for classifying the well-known parasite species, images of T. brucei, T. cruzi, and T. evansi were kept in folders assigned their respective class names. During AI training, images rescaled to 256 x 256 pixels were fed in as three RGB channels, with a learning rate of 0.1, burn-in of 1,000, momentum of 0.9, hue of 0.1, and decay of 0.0005. Training loss and training accuracy were used to find the optimally trained model. Classification predictions were analyzed using pixel-wise determination and percentage probability at a threshold of 50%.

Three popular classification neural network algorithms were compared to find the best one27,30. These neural networks have been widely used for classifying multiclass labels in the medical and veterinary fields27,34,35. Inference results of the trained models, with probabilities ranging from 0 to 1, were accepted above the threshold of 50%. Additionally, the attention map highlighted distinct recognition patterns for each parasite: the nucleus in the middle portion of T. evansi; the kinetoplast organelle in the anterior portion of T. cruzi, the largest among the three species; and both the nucleus and kinetoplast for T. brucei (Figure 3).

Several statistical metrics were used to measure the three proposed models, including accuracy, misclassification rate, recall (true positive rate), specificity (true negative rate), false positive rate, false negative rate, precision, and F1 score. Almost all evaluation metrics for the DenseNet201 neural network were superior to those of the others. On average, accuracy, recall, specificity, precision, and F1 score were at or above 98%, while the misclassification, false positive, and false negative rates were at or below 1.5% (Table 2). In the class-wise comparison, the DenseNet201 model identified T. evansi in the unseen test data without error, suggesting the trained model's potential for distinguishing the parasite species.

In Figure 4A-C, the best classification model gave the greatest average AUC under the ROC curve, at 0.931 (Figure 4C), confirming the selection of the best model studied. The AUC of T. evansi was 0.817, lower than the others (0.980-1.00 for T. brucei and 0.955-0.977 for T. cruzi) and in contrast to the statistical metrics above. This may be because the two values are calculated differently: the AUC is computed across all thresholds, whereas the statistical metrics use only a threshold of 50%, so the two values are not directly comparable. Nonetheless, the AUC values by class were consistent across all three models, indicating a general accuracy order of T. brucei > T. cruzi > T. evansi.

k-fold cross validation
To assess the robustness of the best-selected classification model, in terms of estimating the true prediction error and tuning the model parameters as described above36, the five-fold cross-validation technique was used. The data were randomly split into five folds; four folds were assigned as training data and the remaining fold as test data before training with the selected classification algorithm.

As a result, the average statistical metrics (accuracy, recall (true positive rate), specificity (true negative rate), precision, and F1 score) were similar across folds and all exceeded 98% (Table 3). Accuracy ranged from 0.992 to 1.000, specificity from 0.994 to 1.000, recall and F1 score from 0.988 to 1.000, and precision from 0.989 to 1.000. Interestingly, the misclassification, false negative, and false positive rates were all below 1.2%. This performance across varied data folds supports the outstanding trained model and demonstrates its robustness.

Accompanying these metrics, the average AUC under the ROC curve showed close values ranging from 0.937 to 0.944, indicating similar general accuracy among the five data folds (Figure 5). The class-wise comparison gave varied AUCs of 0.831 for T. evansi, 0.982-1.000 for T. cruzi, and 1.000 for T. brucei. Although the AUC for T. evansi was lower than the others, this likely reflects its high false positive rate (~33%) across thresholds from 1% to 97%, which results in smaller AUC values than those of the other two classes (Figure 6).

Hybrid deep learning as a practical screening tool
In this section, the contribution of the hybrid deep learning approach, combining object detection with object classification, is shown in Figure 7. Parasite and non-parasite features were distinguished and their classes identified within the pink bounding boxes by the first detection model. Next, the specific parasite species were diagnosed in different colors by the well-trained classification model: green for T. evansi, pink for T. brucei, and orange for T. cruzi. The second classification label is not shown if the first detection model fails, reflecting the well-connected functions of these two different neural network backbones in the D->C module of the in-house CiRA CORE platform.

Figure 1: Architecture for a hybrid model. All three parasite species of trypanosomes (including Trypanosoma evansi, T. brucei, and T. cruzi) were used as input. Multi-objects within a 100x microscopic image were detected by using the detection model. A single cropped object from the previous model was then classified according to its relative species by using the best-classification model. An attention map integrated with the best classification model highlighted areas specific to each class label.

Figure 2: PR curve. In this study, the area under the PR curve (AUC value) is used to measure the ability to discriminate between the non-trypanosome and trypanosome classes. All samples can be detected for both class labels. An AUC of 1 is a perfect prediction, while an AUC of 0.5 is a random prediction. The curve measures the performance of the proposed detection model, which detects the trypanosome class at a higher rate (AUC = 0.976) than the non-trypanosome class (AUC = 0.961). The average AUC value of 0.969 was obtained from the binary result of the two class labels. Abbreviations: PR = precision versus recall; AUC = area under the curve.

Figure 3: Predictive result of the classification model. All three trypanosome species were used to test the best proposed trained models. Output images of species classification-based probability and attention maps are shown. Specifically, the attention maps highlighted the significant areas within the unseen object that were guiding the discrimination of the parasite species.

Figure 4: Model-wise comparison-based ROC curves. The AUC under the ROC curve is a graphical plot of the performance of a classification system based on its varied threshold of discrimination. Similar to the AUC-PR curve, the AUC-ROC of 1 is a perfect prediction, while the AUC of 0.5 is a random prediction, which is indicated by dashed lines in each graph. Three classification models were compared, including (A) the 1st classification model with an average AUC of 0.925, (B) the 2nd classification with an average AUC of 0.924, and (C) the best classification with an average AUC of 0.931. Therefore, the higher the AUC, the better the performance. Abbreviations: ROC = receiver operating characteristics; AUC = area under the curve.

Figure 5: Five-fold cross-validation. All experiments based on the best classification neural network models were compared. Similar AUC values of five-fold data included (A) 0.944, (B) 0.944, (C) 0.937, (D) 0.941, and (E) 0.938, which suggest the robustness of the proposed trained model used against the variation of the biological data.

Figure 6: True positive rate and false positive rate per class name. The X-axis represents thresholds from 1% to 97%. The Y-axis represents the values of the statistical metrics.

Figure 7: Final output of the hybrid models. The final step of the hybrid model can be applied to input data in the form of a raw microscopic image (20 µm). Predictive results are obtained from both the object detection and classification models. The first prediction indicates whether the unseen test image contains trypanosome parasites, marked with pink rectangles. The classification results specific to the parasite species then follow the detection, with multi-colored labels: green for T. evansi, pink for T. brucei, and orange for T. cruzi.

Table 1: Average precision by class and mean Average Precision (mAP) of the detection model.

Table 2: Classification model-wise comparison. Eight evaluation metrics were used to measure the model's performance, including accuracy, misclassification rate, recall (true positive rate), specificity (true negative rate), false positive rate, false negative rate, precision, and F1-score. The bold value is representative of the greatest value per class label. The italic value is representative of the average value of each evaluation metric.

Table 3: Five-fold cross-validation. The bold value is representative of the average value per evaluation metric.


Discussion

Microscopic observation is an early and commonly used method for detecting Trypanosoma infection, especially during surveillance in remote areas, where the lack of skilled technicians and the labor-intensive, time-consuming process are obstacles to timely reporting to health organizations. Although molecular and immunological techniques such as the polymerase chain reaction (PCR) have been approved as high-sensitivity methods to support laboratory findings, they require expensive chemicals, apparatus, and professionals, which are mostly situated in central laboratories at large healthcare centers. The shared morphology and characteristics of the three Trypanosoma species, along with mixed and immature infections, are prone to user bias and misidentification, undermining drug response and control measures37. Using the modified, hybrid combination of two different deep learning models within the proposed AI program can overcome many of these challenges, making a new era of automated, standardized taxonomy achievable. Previous publications have confirmed the potential of hybrid models in identifying malarial blood stages27,38. Here, the protocol for training, testing, and evaluating the proposed AI models is explained, enabling recognition of the mature stages of three well-known Trypanosoma species with a simplified analysis process for practical identification and further quantitation of the parasitic protozoa under a microscopic field.

The proposed model goes beyond a machine learning model based on the random forest algorithm, previously applied to identify T. cruzi infection in blood smear samples, which achieved a precision of 87.6%, a sensitivity of 90.5%, and an area under the receiver operating characteristic curve of 0.94239. In 2015, two methods, AdaBoost learning and SVM learning, were used to distinguish T. cruzi from malaria infection in blood smears. Although a high degree of both sensitivity and specificity was reported, only a limited dataset of 120 color images of low dimension (256 × 256 pixels) was studied, which may not be representative of the entire population40. In this study, three well-known zoonotic Trypanosoma species (T. cruzi, T. brucei, and T. evansi) were distinguished using the proposed hybrid model, which outperformed the previous studies described above. This demonstrates the cost-effectiveness of the deep learning model. Nevertheless, validation on several large datasets may be required to confirm the model's generalization41. T. lewisi has the potential to infect humans opportunistically, and it is recognized as an emerging zoonotic disease transmitted by rats, often linked to impoverished conditions. Cases have been documented in some countries, such as Thailand and China20,42. Furthermore, the morphologies of T. evansi and T. lewisi bear a striking resemblance17. To enhance the dataset and the proposed model, the inclusion of more instances of T. lewisi could be beneficial in the development of a future deep learning model. To broaden the scope of potential deep learning techniques for the diagnosis of additional animal trypanosomoses, it is advisable to gather datasets for other species such as T. vivax, T. theileri, and T. melophagium. One significant challenge to address is the diagnosis of mixed infections involving various Trypanosoma species, as antibody detection methods may exhibit reduced specificity due to cross-reactions43. It is essential to enhance and fortify diagnostic techniques to advance artificial intelligence applications and safeguard the health of livestock, humans, and the environment.

Prior to training the proposed AI program to recognize 2D images of the parasitic protozoa, important criteria must be met, including a large sample size, class balancing, data augmentation, and quality labeling by experts. As a critical step, training-phase errors can be examined by professionals to refine the ground-truth labels for both the Darknet and DenseNet algorithms. A major advantage of the proposed AI program is its ease of use for non-coding users through simple drag-and-drop steps. Another important feature is the combined module of detection and attention-map-integrated classification models, which facilitates fast testing of unseen data regardless of the raw image file format, as a broad range of formats is supported, including .jpeg, .jpg, .png, .tif, .tiff, .pdf, and .bmp. Applying the AI program with a C-mount camera component on the microscope can enable real-time detection in remote areas.

Limitations of the method may affect the proposed protocols in the pre-training phase. Before training an AI model, some requirements must be well prepared, specifically dataset quality and expert labels. Within the dataset, a small sample size and class imbalance can cause the model to settle in poor minima and struggle to reach the optimum stage. Using a large sample size and balancing the data help optimize the model toward high accuracy and low loss during training. Image variation, such as developmental stages across the protozoan life cycle, color variation from Giemsa staining27,44, and environmental and image-scale differences, should be normalized before feeding into the training of both deep learning models. To address these problems, various augmentation functions, such as rotation, brightness and contrast adjustment, vertical and horizontal flips, Gaussian noise, and Gaussian blur, can be applied in the pre-training phase45.

An important application of the proposed hybrid AI models is real-time identification of the parasitic protozoa in microscopic data, whether raw feeds from the microscope, still images, or video clips. It allows the trained model to be deployed on embedded-edge devices46, cloud-based mobile applications47, browser user interfaces (BUI)48, and web-based deployments49. As a result, hybrid deep learning has the potential to support active surveillance and provide timely results, supporting local staff decisions within milliseconds, and to serve as an automatic screening technology for auxiliary epidemiology.


Disclosures

All authors have no financial disclosures and no conflicts of interest.

Acknowledgments

This work (Research grant for New Scholar, Grant No. RGNS 65 - 212) was financially supported by the Office of the Permanent Secretary, Ministry of Higher Education, Science, Research and Innovation (OPS MHESI), Thailand Science Research and Innovation (TSRI) and King Mongkut's Institute of Technology Ladkrabang. We are grateful to the National Research Council of Thailand (NRCT) [NRCT5-RSA63001-10] for funding the research project. M.K. was funded by Thailand Science Research and Innovation Fund Chulalongkorn University. We also thank the College of Advanced Manufacturing Innovation, King Mongkut's Institute of Technology, Ladkrabang who have provided the deep learning platform and software to support the research project.

Materials

Name: Darknet19, Darknet53, and DenseNet201 (generic name: classification models/densely connected CNNs). Source: Huang, G., Liu, Z., van der Maaten, L. Densely Connected Convolutional Networks. arXiv:1608.06993 [cs.CV] (2016); https://github.com/liuzhuang13/DenseNet. Comments: deep convolutional neural network models used for classification.

Name: Olympus CX31, Model CX31RRBSFA. Company: Olympus, Tokyo, Japan. Catalog number: SN 4G42178. Comments: a light microscope.

Name: Olympus DP21-SAL U-TV0.5XC-3. Company: Olympus, Tokyo, Japan. Catalog number: SN 3D03838. Comments: a digital camera.

Name: Windows 10. Company: Microsoft. Comments: operating system for the computers.

Name: YOLOv4-tiny (generic name: YOLO model/detection model). Source: Naing, K. M. et al. Automatic recognition of parasitic products in stool examination using object detection approach. PeerJ Computer Science. 8, e1065 (2022); https://git.cira-lab.com/users/sign_in. Comments: deep convolutional neural network model that can perform both localization and classification.


References

  1. Kasozi, K. I., et al. Epidemiology of trypanosomiasis in wildlife-implications for humans at the wildlife interface in Africa. Frontiers in Veterinary Science. 8, 621699 (2021).
  2. Ola-Fadunsin, S. D., Gimba, F. I., Abdullah, D. A., Abdullah, F. J. F., Sani, R. A. Molecular prevalence and epidemiology of Trypanosoma evansi among cattle in peninsular Malaysia. Acta Parasitologica. 65 (1), 165-173 (2020).
  3. Aregawi, W. G., Agga, G. E., Abdi, R. D., Buscher, P. Systematic review and meta-analysis on the global distribution, host range, and prevalence of Trypanosoma evansi. Parasites & Vectors. 12 (1), 67 (2019).
  4. Joshi, P. P., et al. Human trypanosomiasis caused by Trypanosoma evansi in India: the first case report. The American Journal of Tropical Medicine and Hygiene. 73 (3), 491-495 (2005).
  5. Lidani, K. C. F., et al. Chagas disease: from discovery to a worldwide health problem. Frontiers in Public Health. 7, 166 (2019).
  6. Sazmand, A., Desquesnes, M., Otranto, D. Trypanosoma evansi. Trends in Parasitology. 38 (6), 489-490 (2022).
  7. Powar, R. M., et al. A rare case of human trypanosomiasis caused by Trypanosoma evansi. Indian Journal of Medical Microbiology. 24 (1), 72-74 (2006).
  8. Shegokar, V. R., et al. Short report: Human trypanosomiasis caused by Trypanosoma evansi in a village in India: preliminary serologic survey of the local population. American Journal of Tropical Medicine and Hygiene. 75 (5), 869-870 (2006).
  9. Haridy, F. M., El-Metwally, M. T., Khalil, H. H., Morsy, T. A. Trypanosoma evansi in dromedary camel: with a case report of zoonosis in greater Cairo, Egypt. Journal of the Egyptian Society of Parasitology. 41 (1), 65-76 (2011).
  10. Dey, S. K. CATT/T.evansi antibody levels in patients suffering from pyrexia of unknown origin in a tertiary care hospital in Kolkata. Research Journal of Pharmaceutical, Biological and Chemical Sciences. 5, 334-338 (2014).
  11. Dakshinkar, N. P., et al. Aberrant trypanosomias in human. Royal Veterinary Journal of India. 3 (1), 6-7 (2007).
  12. Vn Vinh Chau, N., et al. A clinical and epidemiological investigation of the first reported human infection with the zoonotic parasite Trypanosoma evansi in Southeast Asia. Clinical Infectious Diseases. 62 (8), 1002-1008 (2016).
  13. Misra, K. K., Roy, S., Choudhary, A. Biology of Trypanosoma (Trypanozoon) evansi in experimental heterologous mammalian hosts. Journal of Parasitic Diseases. 40 (3), 1047-1061 (2016).
  14. Nakayima, J., et al. Molecular epidemiological studies on animal trypanosomiases in Ghana. Parasites & Vectors. 5, 217 (2012).
  15. Riana, E., et al. The occurrence of Trypanosoma in bats from Western Thailand. The 20th Chulalongkorn University Veterinary Conference CUVC 2021: Research in practice. 51, Bangkok, Thailand. (2021).
  16. Camoin, M., et al. The Indirect ELISA Trypanosoma evansi in equids: optimisation and application to a serological survey including racing horses, in Thailand. BioMed Research International. 2019, 2964639 (2019).
  17. Truc, P., et al. Atypical human infections by animal trypanosomes. PLoS Neglected Tropical Diseases. 7 (9), 2256 (2013).
  18. Desquesnes, M., et al. Diagnosis of animal trypanosomoses: proper use of current tools and future prospects. Parasites & Vectors. 15 (1), 235 (2022).
  19. Da Silva, A. S., et al. Trypanocidal activity of human plasma on Trypanosoma evansi in mice. Revista Brasileira de Parasitologia Veterinaria. 21 (1), 55-59 (2012).
  20. Desquesnes, M., et al. Trypanosoma evansi and surra: a review and perspectives on transmission, epidemiology and control, impact, and zoonotic aspects. BioMed Research International. 2013, 321237 (2013).
  21. World Health Organization. A new form of human trypanosomiasis in India. Description of the first human case in the world caused by Trypanosoma evansi. Weekly Epidemiological Record. 80 (7), 62-63 (2005).
  22. Naing, K. M., et al. Automatic recognition of parasitic products in stool examination using object detection approach. PeerJ Computer Science. 8, 1065 (2022).
  23. Wongsrichanalai, C., Barcus, M. J., Muth, S., Sutamihardja, A., Wernsdorfer, W. H. A review of malaria diagnostic tools: microscopy and rapid diagnostic test (RDT). American Journal of Tropical Medicine and Hygiene. 77, 119-127 (2007).
  24. Rostami, A., Karanis, P., Fallahi, S. Advances in serological, imaging techniques and molecular diagnosis of Toxoplasma gondii infection. Infection. 46 (3), 303-315 (2018).
  25. Ahmad, Z., Rahim, S., Zubair, M., Abdul-Ghafar, J. Artificial intelligence (AI) in medicine, current applications and future role with special emphasis on its potential and promise in pathology: present and future impact, obstacles including costs and acceptance among pathologists, practical and philosophical considerations. A comprehensive review. Diagnostic Pathology. 16 (1), 24 (2021).
  26. Sarker, I. H. Deep learning: a comprehensive overview on techniques, taxonomy, applications and research directions. SN Computer Science. 2 (6), 420 (2021).
  27. Kittichai, V., et al. Classification for avian malaria parasite Plasmodium gallinaceum blood stages by using deep convolutional neural networks. Scientific Reports. 11 (1), 16919 (2021).
  28. Baskota, S. U., Wiley, C., Pantanowitz, L. The next generation robotic microscopy for intraoperative teleneuropathology consultation. Journal of Pathology Informatics. 11, 13 (2020).
  29. Bochkovskiy, A., Wang, C. -Y., Liao, H. -Y. M. YOLOv4: optimal speed and accuracy of object detection. arXiv:2004.10934 (2020).
  30. Huang, G., Liu, Z., van der Maaten, L., Weinberger, K. Q. Densely connected convolutional networks. arXiv:1608.06993 (2018).
  31. CDC-DPDx. Diagnostic procedures - Blood specimens. , Available from: https://www.cdc.gov/dpdx/diagosticprocedures/blood/specimenproc.html#print (2020).
  32. World Health Organization. Control and surveillance of African trypanosomiasis: report of a WHO expert committee. WHO Technical Report Series 881. , Available from: https://iris.who.int/bitstream/handle/10665/42087/WHO_TRS_881.pdf?sequence=1 (1998).
  33. Leber, A. L. Detection of blood parasites. Clinical Microbiology Procedures Handbook. , ASM Press. Washington, DC. (2022).
  34. Huang, L. -P., Hong, M. -H., Luo, C. -H., Mahajan, S., Chen, L. -J. A vector mosquitoes classification system based on edge computing and deep learning. Proceedings - 2018 Conference on Technologies and Applications of Artificial Intelligence. , Taichung, Taiwan. 24-27 (2018).
  35. Cihan, P., Gökçe, E., Kalipsiz, O. A review of machine learning applications in veterinary field. Kafkas Universitesi Veteriner Fakultesi Dergisi. 23 (4), 673-680 (2017).
  36. Berrar, D. Cross-validation. Encyclopedia of Bioinformatics and Computational Biology. 1, 542-545 (2019).
  37. Gaithuma, A. K., et al. A single test approach for accurate and sensitive detection and taxonomic characterization of Trypanosomes by comprehensive analysis of internal transcribed spacer 1 amplicons. PLoS Neglected Tropical Diseases. 13 (2), 0006842 (2019).
  38. Vijayalakshmi, A., Rajesh Kanna, B. Deep learning approach to detect malaria from microscopic images. Multimedia Tools and Applications. 79 (21-22), 15297-15317 (2019).
  39. Morais, M. C. C., et al. Automatic detection of the parasite Trypanosoma cruzi in blood smears using a machine learning approach applied to mobile phone images. PeerJ. 10, 13470 (2022).
  40. Uc-Cetina, V., Brito-Loeza, C., Ruiz-Pina, H. Chagas parasite detection in blood images using AdaBoost. Computational and Mathematical Methods in Medicine. 2015, 139681 (2015).
  41. Zhang, C., et al. Deep learning for microscopic examination of protozoan parasites. Computational and Structural Biotechnology Journal. 20, 1036-1043 (2022).
  42. Sarataphan, N., et al. Diagnosis of a Trypanosoma lewisi-like (Herpetosoma) infection in a sick infant from Thailand. Journal of Medical Microbiology. 56, 1118-1121 (2007).
  43. Desquesnes, M., et al. A review on the diagnosis of animal trypanosomoses. Parasites & Vectors. 15 (1), 64 (2022).
  44. Fuhad, K. M. F., et al. Deep learning based automatic malaria parasite detection from blood smear and its smartphone based application. Diagnostics (Basel). 10 (5), 329 (2020).
  45. Matek, C., Schwarz, S., Spiekermann, K., Marr, C. Human-level recognition of blast cells in acute myeloid leukaemia with convolutional neural networks. Nature Machine Intelligence. 1, 538-544 (2019).
  46. Hamdan, S., Ayyash, M., Almajali, S. Edge-computing architectures for internet of things applications: a survey. Sensors (Basel). 20 (22), 6441 (2020).
  47. Visser, T., et al. A comparative evaluation of mobile medical APPS (MMAS) for reading and interpreting malaria rapid diagnostic tests. Malaria Journal. 20 (1), 39 (2021).
  48. Giorgi, E., Macharia, P. M., Woodmansey, J., Snow, R. W., Rowlingson, B. Maplaria: a user friendly web-application for spatio-temporal malaria prevalence mapping. Malaria Journal. 20 (1), 471 (2021).
  49. Rajaraman, S., Jaeger, S., Antani, S. K. Performance evaluation of deep neural ensembles toward malaria parasite detection in thin-blood smear images. PeerJ. 7, 6977 (2019).


Cite this Article


Kittichai, V., Kaewthamasorn, M., Thanee, S., Sasisaowapak, T., Naing, K. M., Jomtarak, R., Tongloy, T., Chuwongin, S., Boonsang, S. Superior Auto-Identification of Trypanosome Parasites by Using a Hybrid Deep-Learning Model. J. Vis. Exp. (200), e65557, doi:10.3791/65557 (2023).
