
Detection of Malignant and Benign Breast Cancer Using the ANOVA-BOOTSTRAP-SVM



Introduction

Breast cancer is one of the most common types of cancer among women. It can be benign or malignant. Medical researchers have focused on timely detection of malignant cancer because it spreads into surrounding tissues in the body (Siegel, Miller, & Jemal, 2015). Timely detection of malignant cancer is important for the survival rate after cancer treatment. Recent research shows that the breast cancer survival rate in developed countries is between 80% and 90% (Mustafa et al., 2016), while it is much lower in developing countries (Jemal et al., 2011). To keep the survival rate rising, interdisciplinary studies have developed both medical tests (Breit et al., 2019; Noske et al., 2020) and machine learning algorithms (Singh, 2019; Wu et al., 2019) to detect malignant cancer cells. Medical tests focus on detecting deviations in breast cells, while machine learning algorithms use image classification (Wang et al., 2020) to screen for deviations in breast tissue. Machine learning classification algorithms (Salama, Abdelhalim, & Zeid, 2012; Wang et al., 2019) are also used to detect malignant breast cancer. Most of them (Asri et al., 2016) achieve accuracy lower than 99%.

The aim of this study is to propose a modification of the SVM that classifies malignant tumors with 99.6% accuracy. This result is an important contribution to the academic literature, as it is the highest accuracy achieved on the Wisconsin breast cancer dataset using classification algorithms, particularly SVMs. Our proposition also yields the smallest error rate compared to other existing methods (Asri et al., 2016; Maldonado et al., 2014; Vrigazova & Ivanov, 2019). We also tested the algorithm on the mammographic mass breast cancer dataset (Elter, Schulz-Wendtland, & Wittenberg, 2007), where it boosted the classification of benign and malignant cancer significantly compared to the classic SVM. Therefore, our algorithm can be applied to various datasets to boost the chances of discovering malignant breast cancer.

Literature review

Deep learning methods for detecting breast cancer include neural networks (Yan et al., 2019; Ting, Tan, & Sim, 2019) for image classification of affected breast tissue. Researchers in this field use images of cancerous breast tissue to build classification algorithms that can predict the stage of development of breast cancer. Neural networks can be combined with feature selection methods such as ridge and linear discriminant analysis to perform image classification (Toğaçar, Ergen, & Cömert, 2020). These algorithms have wide application in medical screening, as they can classify a patient as healthy or sick and determine the type of breast cancer without requiring prior knowledge about the absence or presence of breast cancer (Ting, Tan, & Sim, 2019). They also achieve high accuracy when predicting the type of cancer (Ting, Tan, & Sim, 2019; Toğaçar, Ergen, & Cömert, 2020) and can be used as an additional diagnostic method alongside medical tests.

Machine learning techniques are also used to extract textual information about typical symptoms of malignant breast cancer from patients' medical records. For instance, Forsyth et al. (2018) built a machine learning algorithm to find the most common symptoms of breast cancer that patients reported. Their algorithm checked 103,564 sentences to identify pain, fatigue, and nausea as the most common cancer symptoms. Zhang et al. (2019) also studied characteristics of breast cancer to predict its occurrence, achieving an F1-score of 93.53% using neural networks. Such research extends medical knowledge about the causes and symptoms of breast cancer and increases the chances of a correct and timely diagnosis.

Along with neural networks, classification methods are another popular machine learning technique for exploring breast cancer. Unlike the previously mentioned work, classification methods use quantitative data to predict the type of breast cancer. The most commonly used dataset is the Wisconsin breast cancer dataset. It contains quantitative data about the physical characteristics of breast tumors, while the target variable is a categorical variable corresponding to benign or malignant cancer. The classification task is to predict the malignant cases. Many authors have examined this dataset using various approaches. For instance, Liu et al. (2019) devised a novel feature selection approach for the dataset, called IGSAGAW-CSSVM, to improve prediction accuracy. Asri et al. (2016) applied support vector machines, k-nearest neighbors, naive Bayes, and a decision tree, achieving accuracy of 97.13% in predicting the malignant cases. The purpose of classification methods is to help diagnose breast cancer based on its physical characteristics.

The decision tree classifier (Quinlan, 1996) achieved 94.7% accuracy on the Wisconsin breast cancer dataset, while ensemble learning algorithms (Bashir, Qamar, & Khan, 2015) achieved 97.4%. Although not widely used, a neuro-rule-based approach (Setiono, 2000) managed to achieve 98.2% accuracy on the dataset. The support vector machine (SVM) is the most commonly used algorithm for achieving high accuracy when predicting malignant breast cancer (Liu et al., 2019; Wang et al., 2018). Previous research (Chaurasia & Pal, 2017) has shown that the SVM with an RBF kernel is the most suitable algorithm for detecting malignant breast cancer; they achieved accuracy of 96.8%. Other papers have also suggested that the SVM may be the most appropriate classification method for detecting malignant cancer (Maldonado et al., 2014).

In previous research (Vrigazova & Ivanov, 2019), we presented a modification of the SVM called the ANOVA-BOOTSTRAP-SVM and achieved accuracy of 97.5%. In this paper, we tune the ANOVA-BOOTSTRAP-SVM (Vrigazova & Ivanov, 2019) to increase prediction accuracy on the Wisconsin breast cancer dataset. We show that our version outperforms recent research (Maldonado et al., 2014; Khairunnahar et al., 2019). We also show that our algorithm significantly boosts the results achieved on other breast cancer datasets compared to the classical SVM. With this, we propose an optimized version of the support vector machine that can be applied to various breast cancer datasets to detect malignant cancer. Section 2 presents the algorithm. Sections 3 and 4 present the results and discussion.

Methodology

We call the proposed algorithm for detecting malignant cancer the ANOVA-BOOTSTRAP-RBF-SVM. The procedure detects the number of features that results in the highest accuracy of the support vector classifier, using the bootstrap procedure for model validation. The advantages of our algorithm include fast computation, improved classification ability of the support vector classifier, and simplicity of application. We performed our research in Python 3.6 by following the steps below.

Step 1: We first standardize the input data using the StandardScaler() function in Python, applying Eq. (1):

$$z_{ij} = \frac{x_{ij} - \mu}{\sigma} \tag{1}$$

Step 2: We normalize the standardized data from step 1 to have values between 0 and 1 (Eq. (2)), using the MinMaxScaler() function in scikit-learn:

$$n_{ij} = \frac{z_{ij} - \min(z_{ij})}{\max(z_{ij}) - \min(z_{ij})} \tag{2}$$
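As an illustration, here is a minimal sketch of these two steps in scikit-learn. Loading the data from sklearn.datasets is our shortcut for a runnable example; the paper works with the UCI copy of the Wisconsin diagnostic dataset.

```python
# Sketch of Steps 1-2: standardize each feature, then rescale to [0, 1].
from sklearn.datasets import load_breast_cancer
from sklearn.preprocessing import MinMaxScaler, StandardScaler

X, y = load_breast_cancer(return_X_y=True)

z = StandardScaler().fit_transform(X)  # Eq. (1): (x - mu) / sigma, per column
n = MinMaxScaler().fit_transform(z)    # Eq. (2): rescale z to the [0, 1] range
```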

Step 3: We split the dataset into training and test sets using the 10-fold bootstrap procedure described in Vrigazova & Ivanov (2019).
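The exact splitting mechanics are specified in Vrigazova & Ivanov (2019); one plausible reading, sketched below under our own assumptions, draws a bootstrap sample (with replacement) as the training set at each of ten iterations and tests on the out-of-bag observations.

```python
# Hedged sketch of a tenfold bootstrap split: each "fold" trains on a
# sample drawn with replacement and tests on the out-of-bag observations.
# This is our reading; the cited procedure may differ in detail.
import numpy as np
from sklearn.utils import resample

def bootstrap_folds(n_samples, n_folds=10, seed=0):
    rng = np.random.RandomState(seed)
    indices = np.arange(n_samples)
    for _ in range(n_folds):
        train_idx = resample(indices, replace=True, n_samples=n_samples,
                             random_state=rng)
        test_idx = np.setdiff1d(indices, train_idx)  # out-of-bag test set
        yield train_idx, test_idx
```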

Step 4: We fix C = 32 and use the radial basis function (RBF) kernel for the support vector classifier. We chose C = 32 after running the ANOVA-BOOTSTRAP-RBF-SVM with a grid of integer values for C between 1 and 32; the value of 32 produced the highest accuracy, as shown in the next section. We chose this grid because the Wisconsin breast cancer dataset has 32 variables, including the target variable. We thereby introduce a new constraint on the classical C-optimization problem for SVMs introduced by Cortes and Vapnik (1995). Eqs. (3)–(4) show our modification:

$$\min \frac{1}{2}\|w\|^2 + C\sum_{i=1}^{l}\xi_i, \quad \text{where } C > 0 \text{ and } C = p + 1 \tag{3}$$

subject to

$$y_i\left(w^T\phi(x_i) + b\right) \geq 1 - \xi_i, \quad \text{where } \xi_i \geq 0,\; i = 1, \ldots, l \tag{4}$$

The vector w denotes the weights of the p variables, and l denotes the length of the training set, which, unlike in the standard SVM, we choose using the bootstrap procedure. The term ξi is the error term, while φ(xi) denotes the kernel function. In the standard SVM, C is a regularization term that shrinks the weights of the coefficients. It is a positive constant usually determined by setting a grid of values in advance; the value that yields the highest accuracy is used in the model.

In the academic literature, there is no established rule for choosing the values in the grid; in the standard SVM, the only constraint imposed on the regularization term is that it be positive. We introduce an additional constraint, namely C = 1 + p, where p is the number of features. The value of 1 corresponds to the fact that there is only one target variable. We derived this rule from empirical results using the grid [1, 1 + p].
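In code, the rule only fixes the top of the search grid. A sketch, using the paper's column counting for the WBCD (our own interpretation of how p is counted):

```python
# The proposed rule C = 1 + p fixes the top of the integer search grid.
# The paper counts 32 variables for the WBCD (target included), so p = 31
# and the grid is [1, 32]. Note that scikit-learn's copy of the data
# exposes 30 measurement columns, so p depends on which columns the raw
# file counts; this sketch follows the paper's counting.
p = 31                    # features as counted in the paper (32 columns minus target)
C_grid = range(1, p + 2)  # integer grid [1, 1 + p] = [1, 32]
```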

Step 5: For each percentile of features, we run the support vector classifier subject to the constraints introduced in steps 1–4. We use the same Python functions as described in our previous work (Vrigazova & Ivanov, 2019).

Step 6: We select the combination of variables that produces the highest accuracy.
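A compact sketch of Steps 4–6 follows, reusing the bootstrap_folds helper and the scaled matrix n from the earlier sketches. SelectPercentile with the ANOVA F-test stands in for the paper's feature-percentile loop; the function name and the percentile grid are our own choices, not taken verbatim from the paper.

```python
# Hedged sketch of Steps 4-6: for each feature percentile, run an RBF
# support vector classifier with C = 32 over the bootstrap folds and keep
# the percentile with the highest mean out-of-bag accuracy.
from sklearn.feature_selection import SelectPercentile, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def anova_bootstrap_rbf_svm(X, y, C=32, percentiles=range(10, 101, 10)):
    best_acc, best_pct = 0.0, None
    for pct in percentiles:
        model = make_pipeline(
            SelectPercentile(f_classif, percentile=pct),  # ANOVA F-test filter
            SVC(kernel="rbf", C=C),
        )
        scores = [model.fit(X[tr], y[tr]).score(X[te], y[te])
                  for tr, te in bootstrap_folds(len(y))]
        acc = sum(scores) / len(scores)
        if acc > best_acc:
            best_acc, best_pct = acc, pct
    return best_acc, best_pct

acc, pct = anova_bootstrap_rbf_svm(n, y)  # n, y from the preprocessing sketch
```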

Figure 1. The ANOVA-BOOTSTRAP-RBF-SVM: illustration.

We compare the performance of our modification with other existing methods in the academic literature. The next section presents our empirical results and compares them to existing research.

Empirical results

Various studies have examined the Wisconsin diagnostic breast cancer dataset (https://archive.ics.uci.edu/ml/datasets/Breast+Cancer+Wisconsin+(Diagnostic)). For example, Maldonado et al. (2014) used a mixed integer linear approach to modify the SVM for the breast cancer dataset. Table 1 shows their results. They achieved the highest accuracy (98.1%) using the MILP1 approach and 26 features. Their approach led to the lowest error rate of 1.9%. The rest of their modifications provided similar results: accuracy of 97.9% and error rate of 2.1%.

Table 1. Maldonado et al.'s results on the WBCD (mixed integer linear approach).

Method        | ACC    | AUC    | k  | Error rate (%)
l2-SVM*       | 97.900 | 97.300 | 31 | 2.1
LP-SVM*       | 97.200 | 96.500 | 31 | 2.8
l1-SVM*       | 97.500 | 97.200 | 31 | 2.5
Fisher + SVM* | 97.900 | 97.300 | 31 | 2.1
RFE-SVM*      | 97.900 | 97.300 | 23 | 2.1
l0-SVM*       | 97.900 | 97.300 | 16 | 2.1
MILP1*        | 98.100 | 97.700 | 26 | 1.9
MILP2*        | 97.900 | 97.300 | 17 | 2.1

Source: Maldonado et al., 2014; error rate: authors' calculations.

Khairunnahar et al. (2019) used a sigmoid function to improve classification on the breast cancer dataset. They achieved a highest accuracy of 97.4%, which is comparable to Maldonado's results. We calculated the error rates of their study, as Table 2 shows. The lowest error rate that the sigmoid classification produced was 2.6%. However, the AUC score in this case was 90, which is 7.7 percentage points lower than in Maldonado's research. Unlike Maldonado et al., they used fewer features (12) to reach this accuracy.

Table 2. Khairunnahar et al.'s results on the WBCD (sigmoid classification function).

Method           | ACC    | AUC    | k  | Error rate (%)
Classical system | 95.708 | 82.000 | 12 | 4.3
Proposed sigmoid | 97.425 | 90.000 | 12 | 2.6
Classical system | 95.423 | 97.540 | 31 | 4.6
Proposed sigmoid | 96.831 | 99.000 | 31 | 3.2

Source: Khairunnahar et al., 2019; error rate: authors' calculations.

Table 3 shows our experimental results on the WBCD, which we first obtained in Vrigazova & Ivanov (2019).

Table 3. Vrigazova & Ivanov's results on the WBCD.

Method                                  | ACC    | AUC    | k  | Error rate (%)
ANOVA-CV-RBF-SVM                        | 97.310 | 99.750 | 3  | 2.7
ANOVA-Bootstrap-L-SVM*                  | 97.270 | 99.131 | 24 | 2.7
ANOVA-Bootstrap-RBF-SVM                 | 97.561 | 99.477 | 27 | 2.4
ANOVA-PCA-Bootstrap-L-SVM*              | 95.985 | 99.221 | 27 | 4.0
ANOVA-PCA-Bootstrap-RBF-SVM*            | 92.908 | 99.445 | 3  | 7.1
Classical SVM with linear kernel C=1    | 97.540 | 99.990 | 30 | 2.5
Classical SVM with RBF kernel C=1       | 97.720 | 99.990 | 30 | 2.3
Classical SVM with linear kernel C=0.10 | 97.890 | 99.941 | 30 | 2.1
Classical SVM with RBF kernel C=0.10    | 94.550 | 99.059 | 30 | 5.5

Source: Vrigazova & Ivanov, 2019; the rest are authors' calculations.

We managed to achieve accuracy of 97.3% in our previous work (Vrigazova & Ivanov, 2019), using the ANOVA-BOOTSTRAP-L-SVM with C = 1 and a linear kernel. In the current research, we ran experiments comparing the performance of the classical SVM to the ANOVA-BOOTSTRAP-L-SVM, with C = 1 and linear and RBF kernels respectively; in another experiment, we changed C from 1 to 0.1. Table 3 shows that these classical versions of the SVM provide similar accuracy and AUC scores while using all features, unlike the ANOVA-BOOTSTRAP-L-SVM. The other ANOVA-SVM versions (marked with an asterisk) that we proposed in Vrigazova & Ivanov (2019) provided worse results than the ANOVA-BOOTSTRAP-L-SVM. We therefore fitted the ANOVA-BOOTSTRAP-SVM with an RBF kernel, which produced one of the highest accuracies in Table 3: accuracy of 97.6%, AUC score of 99.5, and error rate of 2.4%. These metrics are comparable to the classical SVM with a linear kernel and C = 0.10. We therefore decided to experiment with the ANOVA-BOOTSTRAP-RBF-SVM to find a value of C that improves the classification ability of the SVM on the WBCD.

We first ran experiments fitting logistic regression, a decision tree, support vector machines, and k-nearest neighbors using the tenfold bootstrap procedure as the train/test splitting technique. As Table 4 shows, the results were very similar to those in Maldonado et al. (2014), Khairunnahar et al. (2019), and Vrigazova & Ivanov (2019), without significant improvement in the performance metrics. We then experimented with the value of C in the RBF version of the bootstrapped ANOVA-SVM, as Table 4 shows.
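For reference, a minimal sketch of this baseline comparison under the same bootstrap splits, reusing n, y, and bootstrap_folds from the Methodology sketches; the hyperparameters are scikit-learn defaults, which are our assumption rather than the settings reported in Table 4.

```python
# Hedged sketch of the baseline run: four classifiers evaluated with the
# same tenfold bootstrap splits; hyperparameters are library defaults.
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

baselines = {
    "LR with bootstrap": LogisticRegression(max_iter=1000),
    "DT with bootstrap": DecisionTreeClassifier(random_state=0),
    "SVM with bootstrap": SVC(),
    "KNN with bootstrap": KNeighborsClassifier(),
}
for name, clf in baselines.items():
    scores = [clf.fit(n[tr], y[tr]).score(n[te], y[te])
              for tr, te in bootstrap_folds(len(y))]
    print(f"{name}: mean OOB accuracy {100 * sum(scores) / len(scores):.3f}%")
```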

Table 4. Performance of modified classification methods on the WBCD.

Method                       | ACC    | AUC    | k  | Error rate (%)
LR with bootstrap            | 97.362 | 99.494 | 30 | 2.6
DT with bootstrap            | 92.085 | 92.458 | 30 | 7.9
SVM with bootstrap           | 97.070 | 99.451 | 30 | 2.9
KNN with bootstrap           | 96.159 | 98.082 | 30 | 3.8
ANOVA-Bootstrap-RBF-SVM C=5  | 98.561 | 99.425 | 27 | 1.4
ANOVA-Bootstrap-RBF-SVM C=7  | 98.201 | 99.412 | 30 | 1.8
ANOVA-Bootstrap-RBF-SVM C=13 | 98.221 | 99.088 | 21 | 1.8
ANOVA-Bootstrap-RBF-SVM C=14 | 98.276 | 99.648 | 27 | 1.7
ANOVA-Bootstrap-RBF-SVM C=30 | 98.913 | 99.445 | 24 | 1.1
ANOVA-Bootstrap-RBF-SVM C=32 | 99.627 | 98.808 | 27 | 0.4

Source: authors' calculations.

Fitting the ANOVA-BOOTSTRAP-RBF-SVM with C = 5, 7, 13, and 14 provided accuracies from 98.2% to 98.6%. These accuracies are better than those of the classical classification methods and our previous attempts (Table 3), and these versions of the ANOVA-BOOTSTRAP-SVM resulted in smaller error rates than those in Table 3. Also, with C = 13, the accuracy of 98.2% was achieved using 21 features, instead of the 27 used by the ANOVA-BOOTSTRAP-RBF-SVM with C = 1. As Table 4 shows, the ANOVA-BOOTSTRAP-RBF-SVM with C = 5, 7, 13, and 14 resulted in accuracies higher than Maldonado's (Table 1) and Khairunnahar's (Table 2). On the one hand, Khairunnahar et al. (2019) used a sigmoid version of the logistic regression that did not lead to a significant improvement in prediction accuracy, as Table 2 shows. On the other hand, as Tables 1, 3, and 4 show, the support vector machine may be the most appropriate classification method for the WBCD. The tuning of the ANOVA-BOOTSTRAP-RBF-SVM therefore led to better performance of our proposition compared to Maldonado's (Table 1).

The classical versions of the support vector machine in Table 3 resulted in high accuracy (97.9%) and AUC score (99.9), despite using all 30 features for classification; similar metrics result from fitting the classical SVM with different values of C (Table 3). Maldonado's mixed integer linear versions of the support vector classifier improve on the classical SVM, reaching accuracy of 97.9% and AUC of 97.3 with as few as 16 features (Table 1). As they show in their research (Maldonado et al., 2014), their versions of the SVM are also faster than the classical SVM. They managed to improve the accuracy on the WBCD to 98.1% with an AUC score of 97.7 using 26 features (Table 1).

As Table 4 shows, our proposed ANOVA-BOOTSTRAP-RBF-SVM with C = 30 also outperforms Maldonado's best accuracy of 98.1%, our previous experiments (Table 3), and the other bootstrapped classification methods (Table 4). With C = 30, the accuracy reached 98.9%, while the AUC score was 99.4, using only 24 features. This version of the proposed method resulted in the smallest error rate (1.1%) compared to previous work. As we show, tuning the ANOVA-BOOTSTRAP-SVM significantly outperformed classical and modified classification methods, as well as other versions of the support vector classifier.

Although an error rate of 1.1% and accuracy of 98.9% are very good metrics for SVM performance, we found a value of C that reduced the error rate to 0.4%. We achieved this result with C = 32, as Table 4 shows; the accuracy was 99.6% using 27 features. As Tables 1–4 show, existing research has not reached these metrics. The support vector classifier and its existing modifications reached a maximum of 98.1% accuracy, while we improved the classification ability of the SVM beyond 99% accuracy. As Tables 1–4 show, our modification outperforms not only the support vector classifier but also other existing classification methods and their versions. Moreover, we achieved the lowest error rate (0.4%) so far among classification methods, particularly SVMs.

As we showed in Vrigazova and Ivanov (2019), the ANOVA-BOOTSTRAP-SVM tends to be faster than Maldonado's suggestions. As Table 5 shows, the ANOVA-BOOTSTRAP-RBF-SVM needs 0.07 s to produce an error rate of 0.4%, while Maldonado's mixed integer linear versions of the SVM (the first ten rows of Table 5) are much slower. The classical versions of the SVM (the last four rows) are much slower as well.

Table 5. Time comparison of SVM versions on the WBCD.

Method                                 | Time (s)
l2-SVM                                 | 0.20
LP-SVM                                 | 0.10
MILP1-NFS                              | 0.20
MILP2-NFS                              | 0.30
Fisher + SVM                           | 0.20
l1-SVM                                 | 0.40
RFE-SVM                                | 0.40
l0-SVM                                 | 0.50
MILP1-FS                               | 0.20
MILP2-FS                               | 0.20
ANOVA-CV-L-SVM                         | 0.04
ANOVA-CV-RBF-SVM                       | 0.05
ANOVA-Bootstrap-L-SVM                  | 0.05
ANOVA-Bootstrap-RBF-SVM                | 0.07
ANOVA-PCA-Bootstrap-L-SVM              | 0.07
ANOVA-PCA-Bootstrap-RBF-SVM            | 0.06
Classical SVM with linear kernel C=1   | 0.65
Classical SVM with RBF kernel C=1      | 0.95
Classical SVM with linear kernel C=0.1 | 0.55
Classical SVM with RBF kernel C=0.1    | 1.02

Source: rows 1–10 from Maldonado et al., 2014; the rest are authors' calculations.

As Tables 1–5 show, when tuned, the ANOVA-BOOTSTRAP-RBF-SVM outperformed the other classification methods, providing the highest accuracy, the smallest error rate, and one of the fastest execution times. We thereby improved on Maldonado's best result of 98.1% accuracy, 1.9% error rate, and 0.20 s execution time. We contribute to the academic literature by showing that support vector machines can achieve accuracy higher than 99% while reducing both the error rate and the computational time.

We also tested our algorithm on the Wisconsin Prognostic Breast Cancer (WPBC) dataset (https://archive.ics.uci.edu/ml/datasets/Breast+Cancer+Wisconsin+(Prognostic)) and the Mammographic Mass dataset (http://archive.ics.uci.edu/ml/datasets/mammographic+mass). These datasets have a binary target variable denoting benign and malignant breast cancer. We ran experiments with the ANOVA-BOOTSTRAP-RBF-SVM and compared its performance to the classical cross-validated support vector machine. Table 6 presents the results.

Table 6. Comparison of the ANOVA-BOOTSTRAP-RBF-SVM and the classic ANOVA-SVM with tenfold cross validation.

ANOVA-BOOTSTRAP-RBF-SVM:
Dataset           | Kernel | C  | ACC  | AUC  | N of features | Error rate (%)
WPBC              | rbf    | 30 | 85.4 | 71.9 | 20 | 14.6
WPBC              | rbf    | 5  | 82.3 | 70.3 | 26 | 17.7
WPBC              | rbf    | 7  | 84.5 | 71.0 | 20 | 15.5
WPBC              | rbf    | 13 | 83.7 | 71.5 | 26 | 16.3
WPBC              | rbf    | 14 | 87.8 | 75.8 | 23 | 12.2
WPBC              | rbf    | 32 | 84.6 | 71.9 | 26 | 15.4
Mammographic Mass | rbf    | 5  | 83.3 | 83.5 | 4  | 16.7
Mammographic Mass | rbf    | 7  | 81.5 | 83.5 | 4  | 18.5
Mammographic Mass | rbf    | 13 | 82.4 | 83.7 | 5  | 17.6
Mammographic Mass | rbf    | 14 | 82.5 | 83.7 | 5  | 17.5
Mammographic Mass | rbf    | 32 | 84.5 | 83.7 | 5  | 15.5
Mammographic Mass | rbf    | 30 | 81.7 | 83.8 | 3  | 18.3

Classic ANOVA-SVM with tenfold cross validation:
Dataset           | Kernel | C  | ACC  | AUC  | N of features | Error rate (%)
WPBC              | rbf    | 30 | 78.5 | 71.7 | 26 | 21.5
WPBC              | rbf    | 5  | 74.6 | 69.1 | 3  | 25.4
WPBC              | rbf    | 7  | 74.9 | 69.1 | 3  | 25.1
WPBC              | rbf    | 13 | 75.1 | 69.1 | 23 | 24.9
WPBC              | rbf    | 14 | 75.4 | 69.1 | 30 | 24.6
WPBC              | rbf    | 32 | 78.5 | 71.7 | 26 | 21.5
Mammographic Mass | rbf    | 5  | 78.7 | 85.8 | 3  | 21.3
Mammographic Mass | rbf    | 7  | 79.0 | 85.9 | 3  | 21.0
Mammographic Mass | rbf    | 13 | 79.5 | 86.4 | 4  | 20.5
Mammographic Mass | rbf    | 14 | 79.6 | 86.3 | 4  | 20.4
Mammographic Mass | rbf    | 32 | 80.0 | 87.0 | 5  | 20.0
Mammographic Mass | rbf    | 30 | 80.1 | 86.9 | 5  | 19.9

Source: authors' calculations.

We used the same kernel and values of C in the classic version of the ANOVA-SVM. The only difference between our algorithm and the classic one is the mechanism for avoiding overfitting: the ANOVA-BOOTSTRAP-RBF-SVM uses the tenfold bootstrap procedure, whereas the classic ANOVA-SVM uses tenfold cross validation. At each fold, the bootstrap procedure uses a different number and different indices of observations to perform classification. Therefore, the ANOVA-BOOTSTRAP-RBF-SVM avoids overfitting.

Table 6 shows that the cross-validated ANOVA-SVM achieved its highest accuracy of 78.5% on the Wisconsin Prognostic Breast Cancer dataset, obtained by setting C to 30 or 32 and retaining 26 features. In comparison, the ANOVA-BOOTSTRAP-RBF-SVM achieved its highest accuracy of 87.8% on the WPBC dataset using 23 features. Moreover, the AUC score of the classic algorithm was 71.7, while the bootstrapped version produced an AUC score of 75.8. Our proposed algorithm significantly improved both the accuracy (ACC) and the AUC score of the model without overfitting. Table 6 also shows that, irrespective of the value chosen for C, the classic ANOVA-SVM resulted in lower accuracy and AUC scores on the WPBC dataset than the bootstrapped ANOVA-SVM. The ANOVA-BOOTSTRAP-RBF-SVM also performed better on the WPBC dataset in terms of the error rate, reducing it significantly (12.2%) compared to the standard method (21.5%).

The results on the Mammographic Mass dataset are similar. The classic model produced its highest accuracy of 80.1% and an AUC score of 86.9, while the bootstrapped version reached its best accuracy of 84.5% with an AUC score of 83.7. Both methods used all features in the dataset. The ANOVA-BOOTSTRAP-RBF-SVM improved the accuracy on the dataset at the cost of the AUC score; however, the AUC score remained above 80, high enough to consider the model good. Our model could correctly separate benign cells from malignant ones in 83.7% of the cases, and the classic one in 86.9% of the cases; despite this, the bootstrapped version classified the observations correctly in 84.5% of the cases against 80.1% for the classic version. Table 6 shows that the bootstrapped ANOVA-SVM improved the accuracy on the Mammographic Mass dataset irrespective of the value of the parameter C, though in all cases at the cost of a small loss in AUC score. The cross-validated ANOVA-SVM produced lower accuracy but a higher AUC score in all cases. Nevertheless, the loss of AUC score in the bootstrapped version was not significant, and it resulted in the smallest error rate (15.5% against 19.9% for the classic version). We consider the ANOVA-BOOTSTRAP-RBF-SVM to perform better on the Mammographic Mass dataset than the cross-validated ANOVA-SVM, as it produced (1) a higher accuracy score, (2) a sufficiently high AUC score, and (3) a lower error rate.

Tables 1–4 show that our algorithm resulted in higher accuracy and AUC scores on the Wisconsin Diagnostic Breast Cancer dataset and a lower error rate compared to other authors. An important finding of our research is that the bootstrap procedure increases the accuracy of the classification model and lowers its error rate. However, it can either increase or slightly decrease the AUC score. If it increases the AUC score, we can accept the model as better than the classic version; if there is a slight decrease in the AUC score, we can still accept the model as better. If the AUC score decreases significantly, the model should be tuned (e.g., by switching the kernel, the value of C, etc.), or another model may be more suitable for the given dataset. Our experiments on other datasets (Vrigazova & Ivanov, 2019) show that the AUC score rarely deteriorates significantly as a result of the bootstrap procedure, although it is not impossible. As we show in Table 5, the ANOVA-BOOTSTRAP-RBF-SVM can also be faster than some classical models, while avoiding overfitting due to its resampling nature.

Conclusion and discussion

In this research, we proposed the ANOVA-BOOTSTRAP-RBF-SVM algorithm to increase prediction accuracy on breast cancer datasets such as the Wisconsin breast cancer dataset (99.6%) and the Mammographic Mass dataset (84.5%). With this method, we aim to propose another version of the ANOVA-SVM that improves the quality of detecting malignant breast cancer. The advantages of our proposed algorithm include improved accuracy, a reduced error rate, and a sufficiently high AUC score. Our algorithm can separate benign from malignant cells and classify them with high accuracy without overfitting. As we show, in some cases it can also be faster than existing algorithms.

It should be noted, however, that our proposition is sensitive to the kernel and the value of C in the ANOVA-SVM algorithm. Despite this, the ANOVA-BOOTSTRAP-RBF-SVM outperforms existing SVM versions. Further research could examine the precision, recall, and F1-score measures to observe the model's performance on each class, as accuracy is a general measure and each class's performance can be affected differently depending on the resampling procedure. Our previous experiments show that if the model's ability to correctly classify the two binary outcomes does not improve, the accuracy cannot improve either; therefore, we did not report precision, recall, and F1-scores in our experiments.

With the results of this research, we extend the practical applications of the bootstrap procedure for detecting breast cancer, as the ANOVA-BOOTSTRAP-RBF-SVM provided the smallest error rate, a sufficiently high AUC score, and, in some cases, reduced execution time. We thereby extend the existing academic literature and propose an optimization of the SVM for detecting malignant and benign breast cancer that can be applied to various datasets. Moreover, the benefits of our algorithm may extend to other medical datasets, improving the ability to detect other medical conditions.
