1. Introduction
Due to the specificity and stability of human writing habits, handwritten material such as an individual's signature has been a common way of indicating a person's identity and intent since ancient times. Handwritten signature verification (HSV) serves as a highly reliable means of identification, helping to determine the authenticity of a document and thus to identify a person. Nowadays, HSV is widely applied in financial, administrative, commercial, and judicial contexts.
Traditional signature verification relies on the accumulated experience of handwriting examiners such as professional forensic document examiners (FDEs), and its scientific basis has yet to be proven and strengthened. Thus, the results of traditional signature verification are often disputed in practice in the financial, administrative, and judicial fields. In the late 20th century, attempts were made to use computers to replace the work of traditional handwriting examiners, in the hope of obtaining more accurate and objective results. Signature verification by computer is an important branch of biometric identification, the technology of authenticating an individual's identity by means of physical characteristics such as the face or fingerprints, or behavioural characteristics such as gait or handwriting. Nowadays, the field of HSV is experiencing unprecedented growth, especially following the rapid development of machine learning.
Unlike forensic document examiners (FDEs), the computer vision and pattern recognition (CVPR) community describes four main types of forgery: random forgery, simple forgery, simulated forgery, and skilled forgery [1]. In a random forgery, neither the real name nor the real signature of the genuine writer is available to the forger. A simple forgery is produced by a forger who knows the real name of the genuine writer but not the writer's real signature. Simulated and skilled forgeries are produced by an inexperienced forger and an experienced forger, respectively, both of whom know the name and the real signature and have practiced imitating it an unlimited number of times.
Depending on the acquisition method, HSV systems can be divided into two categories: online and offline. The online approach captures dynamic handwriting characteristics such as stroke length, force, direction, speed and grip, while the offline approach captures only static handwriting information. The online method provides more information for HSV systems, and its dynamic information is difficult to imitate in a short period of time, making online verification the easier task. However, in practical applications, the materials to be authenticated are often static handwriting files, which means the offline method is more widely used.
Depending on how the models are trained, HSV systems can be divided into two categories: writer-dependent (WD) and writer-independent (WI). Writer-dependent systems train a dedicated classifier for each user and therefore achieve better classification accuracy; some researchers also regard them as the more secure approach because no generic templates need to be stored. However, writer-dependent systems incur very high complexity and computational cost whenever new users are added. For this reason, writer-independent systems, which build a generic classifier trained on all users, are clearly preferable in practice.
Focusing on the above-mentioned writer-independent offline signature verification (OfHSV), this paper introduces the latest research progress in detail from four aspects: preprocessing technology, feature extraction methods, datasets, and neural network structures, including representative methods. In addition, in Section 3, to analyse the effect of different methods in application, we quantitatively compare their recognition results on common datasets. Finally, we summarise the open research issues in the field of OfHSV and discuss future research directions.
2. Method
The existing research on OfHSV mainly focuses on four aspects: datasets, preprocessing technology, feature extraction methods and neural network structures. We introduce them in detail in the following subsections.
2.1. Preprocessing
The main purpose of preprocessing is to process the data before the feature extraction and classification tasks. It can reduce the useless information and make useful data better available for subsequent training tasks. Among all preprocessing methods, normalization, binarization, noise removal, segmentation and inversion are most common.
Normalisation refers to transforming the original image into another form that satisfies the requirements of a specific purpose through a series of specific transformations, generally including, but not limited to, basic transformations such as translation, rotation and scaling. Kalera et al. [2] used rotation normalisation, an algorithm that calculates the most appropriate rotation angle for the image to be processed: the image is rotated until the axis of minimum inertia coincides with the horizontal axis, thus placing the signature in a convenient horizontal position. Pourshahabi et al. [3] used size normalisation to resize images according to the longer side of the image, giving all images a specific size, which greatly affects the recognition and verification rates.
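The rotation angle for this kind of normalisation can be estimated from the second-order central moments of the foreground pixels. A minimal sketch (illustrative only, not Kalera et al.'s exact algorithm; it assumes the stroke pixels are given as coordinates):

```python
import math

def inertia_angle(pixels):
    """Angle (degrees) of the minimum-inertia axis of the stroke pixels,
    from second-order central moments. Rotating the image by -angle
    brings that axis onto the horizontal."""
    n = len(pixels)
    xb = sum(x for x, _ in pixels) / n
    yb = sum(y for _, y in pixels) / n
    mu20 = sum((x - xb) ** 2 for x, _ in pixels) / n
    mu02 = sum((y - yb) ** 2 for _, y in pixels) / n
    mu11 = sum((x - xb) * (y - yb) for x, y in pixels) / n
    return 0.5 * math.degrees(math.atan2(2 * mu11, mu20 - mu02))

# Example: a stroke along the 45-degree diagonal.
stroke = [(i, i) for i in range(32)]
angle = inertia_angle(stroke)  # close to 45.0
```

In a full pipeline, the image would then be rotated by the negative of this angle before any further processing.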
Binarisation turns a grayscale image into a binary (black-and-white) image, which reduces the dimensionality of the data and eliminates part of the noise in the original image. In handwriting verification, grayscale conversion of the original image is also a common processing step: the signature itself takes up only a small part of the image, and most of the remaining content is useless background information, which can have a significant impact on training. Khalifa et al. [4] transformed the original RGB image into a grayscale image; this minimises the influence of background information on training and classification, allowing the model to pay more attention to the strokes rather than to background and colour. The Otsu algorithm, originally designed for image segmentation, is also a common binarisation method: the image enhancement of Pourshahabi et al. [3] was carried out using the threshold obtained by the Otsu algorithm.
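The Otsu algorithm chooses the threshold that maximises the between-class variance of the intensity histogram. A self-contained sketch over 8-bit intensities:

```python
def otsu_threshold(gray):
    # gray: flat iterable of 8-bit intensity values
    hist = [0] * 256
    for v in gray:
        hist[v] += 1
    total = sum(hist)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var, w_b, sum_b = 0, -1.0, 0, 0.0
    for t in range(256):
        w_b += hist[t]              # background weight up to t
        if w_b == 0:
            continue
        w_f = total - w_b           # foreground weight
        if w_f == 0:
            break
        sum_b += t * hist[t]
        m_b = sum_b / w_b           # background mean
        m_f = (sum_all - sum_b) / w_f  # foreground mean
        var_between = w_b * w_f * (m_b - m_f) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Bimodal toy image: dark strokes on a light background.
pixels = [30] * 400 + [220] * 600
t = otsu_threshold(pixels)
binary = [1 if v > t else 0 for v in pixels]
```

On a clean bimodal histogram like this one the threshold lands between the two modes, separating strokes from background exactly.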
2.2. Feature extraction
Feature extraction is the process of extracting representative features of a signature to distinguish it from others; the extracted features determine whether the system can operate effectively, and feature extraction is one of the key points for improving accuracy. There are many methods for offline feature extraction, but researchers can only extract static information, such as the aspect ratio of the signature, the slope of the strokes, and the centre of gravity.
The extracted features are usually divided roughly into two categories: global features and local features. Global features describe the signature image as a whole; common examples are the aspect ratio, the centre of gravity, the number of horizontal and vertical strokes, and the total number of strokes. Before local feature extraction, the image is usually divided into several parts in some predefined way; local features are then extracted from each part separately, and the relationships between parts may also be extracted. Local features usually include the position of a stroke, the curvature at a bend, the slope of a stroke, and the relative positions of intersections between strokes.
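A few of the global features above can be computed directly from the stroke coordinates. A minimal sketch (illustrative feature names, not any specific cited method):

```python
def global_features(pixels):
    # pixels: (x, y) coordinates of foreground (stroke) pixels
    xs = [x for x, _ in pixels]
    ys = [y for _, y in pixels]
    width = max(xs) - min(xs) + 1
    height = max(ys) - min(ys) + 1
    return {
        "aspect_ratio": width / height,
        "gravity_center": (sum(xs) / len(xs), sum(ys) / len(ys)),
        "pixel_count": len(pixels),
    }

# An L-shaped toy stroke: a 10-pixel horizontal bar and a 4-pixel vertical bar.
stroke = [(x, 0) for x in range(10)] + [(0, y) for y in range(1, 5)]
feats = global_features(stroke)
```

Note that these are relative measures of shape; the absolute position of the signature in the page cancels out of the aspect ratio entirely.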
In feature extraction, more attention is usually paid to the relative features of the signature image, such as the centre of gravity and the curvature of the strokes, rather than to absolute features, such as the pixel dimensions of the image or the position of the signature within it. This is because absolute features are easy for imitators to forge, whereas relative features are shaped by personal signature style and are more difficult to forge.
Sharif et al. [5] first used the Otsu algorithm to binarise the image, and then performed morphological operations such as thinning, closing and denoising to complete the preprocessing. The signature image was then divided into 16 parts horizontally and vertically for local feature extraction: the horizontal and vertical features were extracted and combined into a 1 x 60 feature vector, and the slope, distance and angle of each feature were added to form a local feature vector of size 1 x 150. Six global features were then extracted and appended, giving a feature vector of size 1 x 156: aspect ratio, pure signature width, pure signature height, normalised signature height, area and black pixel area. A genetic algorithm (GA) was then used to select the best features according to a fitness function. Finally, an SVM was used to measure the accuracy of the feature extraction and the performance of the feature selection algorithm. The lowest reported AER was 5.0% on the MCYT dataset.
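The block-partition step common to pipelines like this one can be sketched as an equal grid split (an illustrative helper, not the exact partitioning of [5]):

```python
def split_blocks(img, n_rows, n_cols):
    # img: 2-D list (H x W); returns n_rows * n_cols equal sub-blocks,
    # listed in row-major order. Assumes H and W divide evenly.
    h, w = len(img), len(img[0])
    bh, bw = h // n_rows, w // n_cols
    blocks = []
    for i in range(n_rows):
        for j in range(n_cols):
            blocks.append([row[j * bw:(j + 1) * bw]
                           for row in img[i * bh:(i + 1) * bh]])
    return blocks

# An 8x8 toy image split into a 4x4 grid of 2x2 blocks.
img = [[r * 8 + c for c in range(8)] for r in range(8)]
blocks = split_blocks(img, 4, 4)
```

Local features (slope, distance, angle, and so on) would then be computed per block and concatenated into the final feature vector.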
Banerjee et al. [6] proposed a language-invariant OfHSV model with good results in both WI and WD scenarios. The model first converts the signature image into a corresponding signal, adjusts it to a fixed dimension, and then performs singular value decomposition on it. Four different types of features, namely frequency-based, similarity-based, shape-based and statistical, are then extracted from the signal. Next, a novel BRDA-based wrapper feature selection (FS) approach, following metaheuristics, is used to reduce the feature dimension. Finally, a Naive Bayes classifier decides whether the signature is forged or genuine. They achieved EERs of 0.01 (WI) and 0.02 (WD) and accuracies of 99.36% (WI) and 98.72% (WD) on the CEDAR dataset.
Batool et al. [7] proposed a signature recognition technique based on optimal feature selection and multi-level feature fusion. Eight geometric features and 22 Gray Level Co-occurrence Matrix (GLCM) features were computed from the preprocessed samples and fused using a technique based on High Priority Index Features (HPFI). In addition, SKcPCA was proposed to select the optimal features for the final classification of genuine and forged signatures, with an SVM as the classifier. Experiments were conducted on MCYT, GPDS and CEDAR: on MCYT the FAR was 2.66% and the FRR 2.00%; on CEDAR the FRR and FAR were 3.75% and 3.34%, respectively; and on GPDS they were 9.69% and 10.3%, respectively.
2.3. Neural network structures and related applications
HSV by computer has evolved over the decades since its inception in the 1980s, and almost all aspects of the field have progressed considerably in this period. In the early days, most work on HSV concerned handcrafted feature extraction, in the hope of finding good feature representations of signatures. With the boom in artificial intelligence, in particular machine learning, research into HSV techniques that do not need handcrafted feature extraction has also grown in popularity.
Neural networks have been the dominant solution for OfHSV tasks for more than a decade. At this point, basic networks such as basic CNNs, GNNs and DNNs have been studied thoroughly, and their limitations have been gradually identified. As a result, people have started to design proprietary networks that are more suitable for the task. In recent years, many researchers in the field of OfHSV have made very creative progress in terms of the breadth, depth and loss functions of neural networks, and have obtained a better understanding of the applications of convolutional neural networks.
Given the difficulty of obtaining forgery data in real-world application scenarios, more and more researchers, such as Shariatmadari et al. [8] and Zois et al. [9], have started to apply one-class convolutional neural networks to OfHSV tasks. A one-class classification network learns the decision boundary of the positive samples by itself, without the user providing negative samples. Such networks are very promising for tasks such as OfHSV, where negative samples are hard to provide.
Furthermore, when only a small amount of labelled data is available for each class, some researchers have suggested adding a pretraining step to improve network performance. Because of the small training sets in OfHSV tasks, an appropriate loss function is one of the most effective levers for enhancing the generalisation capability of the network [10]. For the predicted value to approach the true value, the loss function must be minimised, and every loss function has its own strengths and weaknesses [10]. Combining them into a dynamic multi-loss function, so that different loss functions complement each other's strengths, can therefore help to improve the generalisation capability of the network.
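As a toy illustration of combining complementary losses (a sketch only, not any cited method; the weight `alpha` and the choice of losses are assumptions), a weighted multi-loss might look like:

```python
import math

def cross_entropy(p, y):
    # Binary cross-entropy for one prediction p in (0, 1), label y in {0, 1}.
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

def hinge(p, y):
    # Hinge loss with labels mapped to {-1, +1} and score 2p - 1.
    return max(0.0, 1.0 - (2 * y - 1) * (2 * p - 1))

def combined_loss(p, y, alpha):
    # alpha in [0, 1]; in a dynamic scheme it could be scheduled
    # or learned during training rather than fixed.
    return alpha * cross_entropy(p, y) + (1 - alpha) * hinge(p, y)

loss = combined_loss(0.9, 1, alpha=0.7)
```

The combined loss inherits the smooth gradients of cross-entropy and the margin behaviour of the hinge term; a confident correct prediction is penalised less than an uncertain one.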
In basic CNNs, features at different levels share the same extraction strategy and network, so it is difficult to ensure that features at all levels are retained simultaneously; CNNs are in fact very prone to ignoring low-level information [11], leading to inaccurate results. Shariatmadari et al. [8] addressed this problem with a hierarchical CNN. The authors first divide the images into patches of different sizes according to the best extraction strategy for each feature, and then feed them into the hierarchical network, in which the arrangement and number of convolutional layers in each stream differ according to the optimal extraction strategy for that feature. Finally, the decisions from the different levels are fused using a majority-voting rule.
Many other works have made useful attempts to optimise network performance for OfHSV work. For example, Maergner et al. [12] attempted to apply the DenseNet-121 network, which has been proven to perform well in natural image recognition, to HSV, while Calik et al. [13] developed a new convolutional neural network structure, LS2Net, to handle large-scale training problems. All these works have achieved relatively good experimental results.
2.4. Dataset
In earlier studies, offline handwritten signature verification (OfHSV) systems mostly relied on private datasets for training and testing, making it difficult to compare different works, because the composition of a dataset may affect a system's performance and test results, such as the AER, EER and ACC, to some extent. In addition, building exclusive datasets is a time-consuming and laborious process.
In the 21st century, researchers began to make some of these private datasets public, which not only spared researchers from building datasets themselves but also enabled them to evaluate their OfHSV systems more objectively. Over the following decade, more and more public datasets were released, with datasets such as CEDAR, MCYT-75 and 4NSigComp gaining favour in the OfHSV community and being widely used in research.
As mentioned above, a WI system requires a generic classifier to be built before it can be applied to a user's HSV task. As is well known in machine learning, a large-scale training set gives the generic classifier better generalisation capability; this means that a WI system requires a large-scale dataset during training to obtain an excellent generic classifier. At the same time, in realistic OfHSV application scenarios, a plethora of signature samples per user is not readily available. Therefore, comparing questioned samples against a small number of positive samples during testing yields results that are more relevant to real-world scenarios. These issues place higher demands on both the training and test sets.
In recent years, as research has progressed, researchers have become aware of the limitations of existing datasets. Since most of the commonly used datasets have been described in detail in previous studies, this paper does not dwell on them, but focuses on how to overcome their limitations and bring them closer to real application scenarios. The limitations of the commonly used datasets (such as CEDAR, MCYT-75 and 4NSigComp) are mainly as follows:
(1) Limitations in the training process. The scale of commonly used signature datasets is insufficient for a WI system to obtain an excellent generic classifier. CEDAR, for example, contains handwriting from just 55 volunteers, a data size that is notoriously small by machine learning standards. Furthermore, in most related work the dataset is further split into training and test sets, so the data used for testing is subtracted from the total, further straining the amount available for training. Meanwhile, because handwriting is private biometric information, legal and policy constraints usually prevent researchers from constructing signature datasets of excessive size. All of this poses additional challenges for researchers.
(2) Limitations in the testing process. In commonly used signature datasets, the number of positive samples provided by each user is often far greater than in real situations. Again using CEDAR as an example, each volunteer provides 24 genuine signatures. However, when a WI system is applied in practical financial or judicial settings, users are often unwilling or unable to provide that many positive samples. This is precisely why some methods yield satisfactory results during testing but less satisfactory results in practical applications.
In response to these problems, researchers have made many beneficial attempts in recent years, many of them creative. To address the problems of small dataset size and insufficient training data, Zois et al. [9] combined three commonly used datasets (CEDAR, MCYT-75 and GPDS300) with a dataset provided by the Netherlands Forensic Institute (NFI). This approach can expand the size of the dataset but is challenging to realise: different datasets generally differ greatly in signature language, image resolution, number of writers, and numbers of genuine and forged signatures. CEDAR, for example, is scanned at 300 dpi, while the other three datasets are at 600 dpi. In this work, the authors completely separated the training and test sets, i.e., the model was trained on one dataset and tested on the other three. In tests applying different combinations of datasets, the models all obtained better experimental results than traditional ones.
Researchers have also offered numerous ideas for the mismatch between datasets and real application scenarios caused by providing too many positive samples during testing. Calik et al. [13] tested a WI system under different proportions of training and test data. In contrast, Zois et al. [9] proposed the straightforward approach of using a more realistic dataset, such as those commonly used in the FDE community. FDEs are thought to work similarly to WI systems, comparing questioned samples against a small number of positive samples. The datasets collected by the FDE community mimic real application scenes, and the results obtained on them are closer to those of real-world verification tasks.
3. Experiment
3.1. Common dataset
To evaluate the model accurately and representatively, some well-known and widely used datasets will be selected and introduced.
The CEDAR signature database includes the signatures of 55 volunteers, each of whom has 24 genuine signatures and 24 forged signatures, giving 1320 genuine and 1320 forged signatures in total. All signatures in this dataset are grayscale images.
The GPDS300 signature dataset includes the signatures of 300 volunteers, each of whom has 24 genuine signatures and 30 forged signatures, giving 7200 genuine and 9000 forged signatures in total. Unlike the grayscale CEDAR images, the signatures in GPDS300 are binary images.
The BHSig260 signature library contains the signatures of 260 signers: 100 signed in Bengali and the other 160 in Hindi. Each signer has 24 genuine and 30 forged signatures. Thus the Bengali part contains 2400 genuine and 3000 forged signatures, and the Hindi part 3840 genuine and 4800 forged signatures. The signatures in the two languages are treated as two different datasets.
3.2. The evaluation criterion
To accurately evaluate the recognition performance of different methods, common evaluation criteria include accuracy, the false acceptance rate (FAR) and the false rejection rate (FRR). Accuracy is the percentage of correct judgments made by the model. The FAR is the percentage of forged signatures that the model accepts as genuine, while the FRR is the percentage of genuine signatures identified as forgeries.
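These three metrics can be computed directly from predicted and true labels. A minimal sketch, assuming the coding 1 = genuine and 0 = forged:

```python
def evaluate(preds, labels):
    # preds and labels use the coding 1 = genuine, 0 = forged
    genuine = [p for p, y in zip(preds, labels) if y == 1]
    forged = [p for p, y in zip(preds, labels) if y == 0]
    far = sum(p == 1 for p in forged) / len(forged)    # forgeries accepted
    frr = sum(p == 0 for p in genuine) / len(genuine)  # genuine rejected
    acc = sum(p == y for p, y in zip(preds, labels)) / len(labels)
    return acc, far, frr

labels = [1, 1, 1, 1, 0, 0, 0, 0]
preds = [1, 1, 1, 0, 0, 0, 1, 0]
acc, far, frr = evaluate(preds, labels)  # 0.75, 0.25, 0.25
```

The equal error rate (EER) reported by some papers is the operating point at which FAR and FRR coincide as the decision threshold is swept.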
3.3. Performance analysis
Having introduced the experimental models, we evaluate some of them against the corresponding evaluation metrics; the results are shown in Table 1.
On the CEDAR dataset, all the models perform well: accuracy exceeds 90%, and in some cases 95%, while FAR and FRR are below 10% and mostly below 5%. On the other datasets, however, the indicators are clearly worse: accuracy mostly falls between 70% and 80%, and FAR and FRR mostly between 10% and 30%. These figures show that most of the models can complete the training task but still have shortcomings. Possible explanations are that the forgeries in CEDAR are less professional, or that English signatures are easier to verify; features may also be harder to extract from the other datasets. In any case, these models can generally complete the signature verification task, though there is still room for improvement.
4. Discussion
4.1. There are limited signature samples for the dataset
The number of samples in a dataset greatly affects the result of training. Currently, many famous datasets, such as CEDAR and BHSig, suffer from insufficient samples to some extent. The CEDAR dataset, for example, includes signatures from 55 volunteers, each with 24 genuine signatures and 24 corresponding forgeries. For deep learning, such a data scale is not enough to achieve the training goal well. Moreover, to train a model without over-fitting or under-fitting, each user must have enough signature samples. Therefore, larger datasets need to be built, covering more people with more samples per person, and the forged signatures in such datasets must be imitated to a higher degree.
4.2. Active pixels
Researchers usually do not trim the excess white space at the edges of an image before resizing it. When dividing an image into blocks or feeding it into a convolutional neural network, these extra margins may affect the blocking or convolution results. At the same time, because the blank parts carry no valid information, an image resized directly will carry less information at the same resolution than one that is first cropped and then resized. Sharif et al. [5] proposed that blocking accuracy can be effectively improved by summing the pixels in each row and column separately and trimming the boundary rows and columns whose pixel sums are zero, so this operation can be used in a wide range of applications.
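The margin-trimming step can be sketched as follows (a minimal illustration, assuming a binary image with 1 = ink and at least one ink pixel):

```python
def crop_margins(img):
    # img: 2-D list of 0/1 values, 1 = ink; trims all-zero border rows/cols
    rows = [i for i, r in enumerate(img) if sum(r) > 0]
    cols = [j for j in range(len(img[0])) if sum(r[j] for r in img) > 0]
    return [row[cols[0]:cols[-1] + 1] for row in img[rows[0]:rows[-1] + 1]]

img = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 0],
]
cropped = crop_margins(img)  # [[1, 1], [0, 1]]
```

Cropping before resizing means every retained pixel carries signature information, so the resized image preserves more stroke detail at the same target resolution.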
4.3. Training machine learning on small datasets
Large-scale public datasets are commonly used in research, but small-scale datasets, learning on which is more relevant to real HSV application scenarios, have been gaining popularity among researchers. Kao and Wen [14] proposed an offline signature authenticity detection method based on an explainable deep learning method and a single known sample. Hafemann et al. [15] proposed a solution based on meta-learning. Since small datasets better match real practical scenarios, this research area still needs more attention.
4.4. Application of more appropriate pattern recognition methods
While current HSV work is often based on statistical pattern recognition, some researchers have argued that structural pattern recognition is the better idea. Structural pattern recognition looks for structural primitives, which in HSV are usually the strokes of a signature, and for functional relationships between them. The classification process requires error-tolerant graph matching algorithms to compute graph dissimilarity, which often incurs very high time complexity. In recent years, some algorithms have achieved polynomial, rather than exponential, run times [16], [17]; however, these algorithms cannot obtain the global minimum of the cost function, only a local minimum. Maergner et al. [18] utilise the bipartite approximation framework proposed by Riesen and Bunke [17], which further reduces the time complexity.
5. Conclusion
As one of the hot issues in computer vision and image processing, OfHSV has achieved many research results over the past few decades, yet many problems remain to be solved. This paper briefly reviews the state of OfHSV research so that other researchers can better understand its existing achievements and current problems. Specifically, following the different steps of OfHSV, this paper introduces representative HSV algorithms from four aspects: datasets, preprocessing, feature extraction and selection, and classification models. We then quantitatively compare the accuracy of different methods on four datasets: CEDAR, GPDS300, Bengali and Hindi. Finally, we discuss current issues and future research directions.
References
[1]. Malik MI, Liwicki M, editors. From Terminology to Evaluation: Performance Assessment of Automatic Signature Verification Systems. 13th International Conference on Frontiers in Handwriting Recognition (ICFHR); 2012 Sep 18-20; Monopoli, Italy; 2012.
[2]. Kalera MK, Srihari S, Xu AH. Offline signature verification and identification using distance statistics. International Journal of Pattern Recognition and Artificial Intelligence. 2004;18(7):1339-60.
[3]. Pourshahabi MR, Sigari MH, Pourreza HR, editors. Offline Handwritten Signature Identification and Verification Using Contourlet Transform. International Conference of Soft Computing and Pattern Recognition; 2009 Dec 04-07; Malacca, Malaysia; 2009.
[4]. Khalifa O, Alam MK, Abdalla AH, editors. An Evaluation on Offline Signature Verification using Artificial Neural Network Approach. International Conference on Computer, Electrical and Electronics Engineering (ICCEEE); 2013 Aug 26-28; Khartoum, Sudan; 2013.
[5]. Sharif M, Khan MA, Faisal M, Yasmin M, Fernandes SL. A framework for offline signature verification system: Best features selection approach. Pattern Recognition Letters. 2020;139:50-9.
[6]. Banerjee D, Chatterjee B, Bhowal P, Bhattacharyya T, Malakar S, Sarkar R. A new wrapper feature selection method for language-invariant offline signature verification. Expert Systems with Applications. 2021;186.
[7]. Batool FE, Attique M, Sharif M, Javed K, Nazir M, Abbasi AA, et al. Offline signature verification system: a novel technique of fusion of GLCM and geometric features using SVM. Multimedia Tools and Applications.
[8]. Shariatmadari S, Emadi S, Akbari Y. Patch-based offline signature verification using one-class hierarchical deep learning. International Journal on Document Analysis and Recognition. 2019;22(4):375-85.
[9]. Zois EN, Alexandridis A, Economou G. Writer independent offline signature verification based on asymmetric pixel relations and unrelated training-testing datasets. Expert Systems with Applications. 2019;125:14-32.
[10]. Janocha K, Czarnecki WM. On Loss Functions for Deep Neural Networks in Classification. arXiv:1702.05659; 2017.
[11]. Vo QN, Kim SH, Yang HJ, Lee G. Binarization of degraded document images based on hierarchical deep supervised network. Pattern Recognition. 2018;74:568-86.
[12]. Maergner P, Pondenkandath V, Alberti M, Liwicki M, Riesen K, Ingold R, et al. Combining graph edit distance and triplet networks for offline signature verification. Pattern Recognition Letters. 2019;125:527-33.
[13]. Calik N, Kurban OC, Yilmaz AR, Yildirim T, Ata LD. Large-scale offline signature recognition via deep neural networks and feature embedding. Neurocomputing. 2019;359:1-14.
[14]. Kao HH, Wen CY. An Offline Signature Verification and Forgery Detection Method Based on a Single Known Sample and an Explainable Deep Learning Approach. Applied Sciences-Basel. 2020;10(11).
[15]. Hafemann LG, Sabourin R, Oliveira LS. Meta-Learning for Fast Classifier Adaptation to New Users of Signature Verification Systems. IEEE Transactions on Information Forensics and Security. 2020;15:1735-45.
[16]. Justice D, Hero A. A binary linear programming formulation of the graph edit distance. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2006;28(8):1200-14.
[17]. Riesen K, Bunke H. Approximate graph edit distance computation by means of bipartite graph matching. Image and Vision Computing. 2009;27(7):950-9.
[18]. Maergner P, Riesen K, Ingold R, Fischer A, editors. A Structural Approach to Offline Signature Verification Using Graph Edit Distance. 14th IAPR International Conference on Document Analysis and Recognition (ICDAR); 2017 Nov 09-15; Kyoto, Japan; 2017.
Cite this article
Guo, Y.; Li, S.; Wu, J. (2023). Research advanced in offline handwritten signature verification. Applied and Computational Engineering, 6, 1236-1244.
Data availability
The datasets used and/or analyzed during the current study will be available from the authors upon reasonable request.
About volume
Volume title: Proceedings of the 3rd International Conference on Signal Processing and Machine Learning
© 2024 by the author(s). Licensee EWA Publishing, Oxford, UK. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
References
[1]. Malik MI, Liwicki M, editors. From Terminology to Evaluation: Performance Assessment of Automatic Signature Verification Systems. 13th International Conference on Frontiers in Handwriting Recognition (ICFHR); 2012 Sep 18-20; Monopoli, Italy.
[2]. Kalera MK, Srihari S, Xu AH. Offline signature verification and identification using distance statistics. International Journal of Pattern Recognition and Artificial Intelligence. 2004;18(7):1339-60.
[3]. Pourshahabi MR, Sigari MH, Pourreza HR, editors. Offline Handwritten Signature Identification and Verification Using Contourlet Transform. International Conference of Soft Computing and Pattern Recognition; 2009 Dec 04-07; Malacca, Malaysia.
[4]. Khalifa OO, Alam MK, Abdalla AH, editors. An Evaluation on Offline Signature Verification using Artificial Neural Network Approach. International Conference on Computer, Electrical and Electronics Engineering (ICCEEE); 2013 Aug 26-28; Khartoum, Sudan.
[5]. Sharif M, Khan MA, Faisal M, Yasmin M, Fernandes SL. A framework for offline signature verification system: Best features selection approach. Pattern Recognition Letters. 2020;139:50-9.
[6]. Banerjee D, Chatterjee B, Bhowal P, Bhattacharyya T, Malakar S, Sarkar R. A new wrapper feature selection method for language-invariant offline signature verification. Expert Systems with Applications. 2021;186.
[7]. Batool FE, Attique M, Sharif M, Javed K, Nazir M, Abbasi AA, et al. Offline signature verification system: a novel technique of fusion of GLCM and geometric features using SVM. Multimedia Tools and Applications.
[8]. Shariatmadari S, Emadi S, Akbari Y. Patch-based offline signature verification using one-class hierarchical deep learning. International Journal on Document Analysis and Recognition. 2019;22(4):375-85.
[9]. Zois EN, Alexandridis A, Economou G. Writer independent offline signature verification based on asymmetric pixel relations and unrelated training-testing datasets. Expert Systems with Applications. 2019;125:14-32.
[10]. Janocha K, Czarnecki WM. On Loss Functions for Deep Neural Networks in Classification. arXiv:1702.05659; 2017.
[11]. Vo QN, Kim SH, Yang HJ, Lee G. Binarization of degraded document images based on hierarchical deep supervised network. Pattern Recognition. 2018;74:568-86.
[12]. Maergner P, Pondenkandath V, Alberti M, Liwicki M, Riesen K, Ingold R, et al. Combining graph edit distance and triplet networks for offline signature verification. Pattern Recognition Letters. 2019;125:527-33.
[13]. Calik N, Kurban OC, Yilmaz AR, Yildirim T, Ata LD. Large-scale offline signature recognition via deep neural networks and feature embedding. Neurocomputing. 2019;359:1-14.
[14]. Kao HH, Wen CY. An Offline Signature Verification and Forgery Detection Method Based on a Single Known Sample and an Explainable Deep Learning Approach. Applied Sciences-Basel. 2020;10(11).
[15]. Hafemann LG, Sabourin R, Oliveira LS. Meta-Learning for Fast Classifier Adaptation to New Users of Signature Verification Systems. IEEE Transactions on Information Forensics and Security. 2020;15:1735-45.
[16]. Justice D, Hero A. A binary linear programming formulation of the graph edit distance. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2006;28(8):1200-14.
[17]. Riesen K, Bunke H. Approximate graph edit distance computation by means of bipartite graph matching. Image and Vision Computing. 2009;27(7):950-9.
[18]. Maergner P, Riesen K, Ingold R, Fischer A, editors. A Structural Approach to Offline Signature Verification Using Graph Edit Distance. 14th IAPR International Conference on Document Analysis and Recognition (ICDAR); 2017 Nov 09-15; Kyoto, Japan.