Arabic Handwritten Characters Classification Using Logistic Regression, SVM, and Neural Networks
A dataset of 16,800 32x32 RGB images was used to evaluate three machine learning algorithms: Support Vector Machine, Logistic Regression, and a Convolutional Neural Network.
An image generated with DALL-E AI.
Abstract
The Arabic Letter dataset of 16,800 32x32 RGB images was used in this research project. The dataset was split into a training set (13,440 images) and a testing set (3,360 images). Three machine learning algorithms, namely Support Vector Machine (SVM), Logistic Regression, and Convolutional Neural Network (CNN), were trained on the data. Linear, radial basis function (RBF), and sigmoid kernels were used when training the SVMs, with the penalty parameter of the error term, C, varied over the range 0.0001 - 100. While training our CNN, different activation functions and different values of alpha (in Leaky ReLU) were used to find the most accurate model; increasing the number of hidden layers resulted in very little change in accuracy. As a result, the Convolutional Neural Network (with L2 regularization term = 0.001 and 4 hidden layers) was found to produce the best results, with a classification accuracy of 94.73%; the RBF-kernel SVM (with C = 100) achieved a lower accuracy of 75.83%; and the least accurate was Logistic Regression, with an accuracy of 41.85%.

Index Terms—Logistic, Neural Networks, Convolution, SVM.
The Confusion Matrices of the Best Accuracy Models
The results below represent the best accuracy models for SVM, Logistic Regression, and CNN. Each figure is accompanied by a short explanation of what was done for that specific ML algorithm:

Figure 1: Confusion Matrix of the Best Accuracy SVM Model.

For SVM, we trained the data with three kernels: linear, radial basis function (RBF), and sigmoid. Accuracy was measured on the testing set, with the penalty parameter of the error term, C, adjusted for each run over the range 0.0001 to 100. The best results were obtained with the RBF-kernel SVM at C = 100, with a test accuracy of 72.53%.
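The kernel/C sweep described above can be sketched with scikit-learn. This is a minimal illustration, not the project's actual code: it uses a small synthetic dataset from `make_classification` as a stand-in for the Arabic letter images.

```python
# Sketch of the described SVM experiment: sweep three kernels and the
# penalty parameter C over 0.0001 - 100, scoring on a held-out test set.
# Synthetic data stands in for the 32x32 letter images (an assumption).
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=20, n_informative=10,
                           n_classes=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

param_grid = {"kernel": ["linear", "rbf", "sigmoid"],
              "C": [1e-4, 1e-3, 1e-2, 1e-1, 1, 10, 100]}
search = GridSearchCV(SVC(), param_grid, cv=3).fit(X_tr, y_tr)

print(search.best_params_)       # best kernel and C found by the sweep
print(search.score(X_te, y_te))  # accuracy on the held-out test set
```

Cross-validation on the training split picks the kernel and C; the final number is reported on data the search never saw, mirroring the train/test protocol above.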


Figure 2: Confusion Matrix of the Best Accuracy Logistic Regression Model.

For Logistic Regression, we used three feature transformations: none, degree-2 polynomial, and degree-3 polynomial. Accuracy was measured on the testing set, with the penalty parameter of the error term, C, adjusted for each run over the range 0.0001 to 100. The best results were obtained with no feature transformation at C = 0.1, with a test accuracy of 41.85%.
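The setup described above can be sketched as a scikit-learn pipeline. Again this is an illustration under assumed details (synthetic stand-in data, `PolynomialFeatures` for the degree-2/3 transforms, degree 1 representing "none"), not the project's exact code.

```python
# Sketch: logistic regression with no / degree-2 / degree-3 polynomial
# feature transforms, sweeping the penalty parameter C.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import PolynomialFeatures

X, y = make_classification(n_samples=300, n_features=8, n_informative=5,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

pipe = Pipeline([("poly", PolynomialFeatures()),
                 ("clf", LogisticRegression(max_iter=2000))])
param_grid = {"poly__degree": [1, 2, 3],   # degree 1 = no transformation
              "clf__C": [1e-4, 1e-2, 1e-1, 1, 10, 100]}
search = GridSearchCV(pipe, param_grid, cv=3).fit(X_tr, y_tr)

print(search.best_params_)       # chosen degree and C
print(search.score(X_te, y_te))  # held-out test accuracy
```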


Figure 3: Confusion Matrix of the Best Accuracy Convolutional Neural Network Model.

For the CNN, we trained the data with four activation functions: Linear, Leaky ReLU, Sigmoid, and Tanh, training each model for 5 epochs. The best of these was the Tanh activation function, with a validation accuracy of 91.63%. Additionally, we analyzed how the alpha term affects the model's accuracy when using the Leaky ReLU activation function; after tuning alpha and running the model, the final accuracy was around 0.947, or 94.7%.
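The four activation functions compared above can be written out in NumPy. This is a small illustration of the functions themselves (including the alpha parameter of Leaky ReLU, which sets the slope on the negative side), not the CNN training code, which is not included here.

```python
import numpy as np

# The four activations compared in the CNN experiments.
def linear(x):
    return x

def leaky_relu(x, alpha=0.01):
    # alpha scales negative inputs; alpha = 0 recovers plain ReLU
    return np.where(x > 0, x, alpha * x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):
    return np.tanh(x)

x = np.array([-2.0, 0.0, 3.0])
print(leaky_relu(x, alpha=0.1))  # negative inputs are scaled by alpha
```

Unlike Sigmoid and Tanh, which saturate for large inputs, Leaky ReLU keeps a nonzero gradient everywhere, which is why tuning its alpha can change how well the network trains.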