OSVGAN: Generative Adversarial Networks for Data Scarce Online Signature Verification

Chandra Sekhar Vorugunti, IIIT SriCity, Chittoor-Dt, 517 646, Andhra Pradesh, India. Chandrasekhar.v@iiits.in
Sai Sasikanth Indukuri, University of Massachusetts Amherst, MA 01003, US. sindukuri@umass.edu
Viswanath Pulabaigari, IIIT SriCity, Chittoor-Dt, 517 646, Andhra Pradesh, India. viswanath.p@iiits.ac.in
Rama Krishna Sai Gorthi, IIT Tirupati, Chittoor-Dt, 517 506, Andhra Pradesh, India. rkg@iittp.ac.in


Abstract

The impracticality of acquiring a sufficient number of signatures from users, and learning the inter- and intra-writer variations effectively with as little as one training sample, are the two critical challenges that Online Signature Verification (OSV) frameworks need to address. To address the first challenge, we generate writer-specific synthetic signatures using an Auxiliary Classifier GAN, in which the generator is trained with a maximum of 40 signature samples per user. To address the second requirement, we propose a Depthwise Separable Convolution based neural network, which achieves one-shot OSV with reduced parameters. A first-of-its-kind experimental analysis is carried out with an increased (five-fold) set of signature samples on two widely used datasets, SVC and MOBISIG. The state-of-the-art outcomes in almost all categories of experimentation confirm the competence of the proposed OSV framework and qualify it for real-time deployment in limited-data applications.

1. Introduction

Signatures encompass an aggregation of individual writing characteristics and are a significant source of information for classifying the genuineness of a user trying to log in to a system. Based on the data acquisition, signature verification systems are classified as offline or online [1,2,7,22,23]. In the case of offline signatures, only the static information, i.e. the X-axis and Y-axis profiles, is available in an image format for verification. In the case of online signatures, along with the X and Y profiles, dynamic information such as pressure, pen angle, and device tilt is available. Due to the availability of both static (X, Y profiles) and dynamic information, OSV frameworks tend to be more robust and accurate.

In the literature, several approaches towards online signature verification (OSV) have been detailed. They can be primarily grouped into feature-centric techniques [13, 20], which interpret signatures using a collection of local or global features, and function-centric techniques, which apply methods such as Hidden Markov Models [1] and Dynamic Time Warping (DTW) [1,10,11]; further approaches include averaging of time series [26], interval-valued features [13,20], sequence matching [10,26], feature fusion [13], fuzzy methods [20], stroke-based methods [18, 24], deep learning [23,24,25,27], and many more. Recent work [32] confirms that using individual profiles (x, y, pressure, azimuth) results in lighter frameworks and higher classification accuracies compared to the compound signature. Hence, in this work, we focus on generating writer-specific synthetic profiles to evaluate the genuineness of a test signature.

TABLE I. The particulars of widely used datasets in OSV.

Dataset                              SVC      MobiSig   MCYT
Total number of users                40       83        100
Genuine, forgery samples per user    20, 20   40, 25    25, 25

Even though several OSV frameworks have been proposed, there is still a shortfall in OSV systems addressing two critical requirements. R1. One/few-shot learning: a lightweight OSV framework which can effectively learn to classify a test signature when trained with one signature sample per user. R2. An OSV framework must be tested with more signature samples to be ratified for deployment in a real-time environment. Even though a few works have been proposed to address R1 [7,14,18,25], to the best of the authors' knowledge, no work addresses R2, as acquiring a greater number of signature samples per user is impractical. As Table 1 shows, the maximum number of signature samples per user is 40, which is very small compared to other computer vision problems such as object detection [31]. Hence, to address the above two requirements, our contribution in this work is two-fold:

1. As represented in Fig. 1, a novel variant of the Auxiliary Classifier GAN (AC-GAN), which generates effective and unlimited writer-specific synthetic signature samples.
2. As represented in Fig. 2, a Depthwise Separable Convolution (DWS) based OSV framework, through which we achieve one-shot learning with fewer parameters than standard convolution based neural networks (a parameter-count sketch follows this list).
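Contribution 2 rests on the well-known parameter saving of depthwise separable convolution over standard convolution. The sketch below is not the authors' network; it only illustrates the counting argument for a hypothetical 1-D layer, where the channel sizes and kernel length are assumed values.

```python
# A minimal sketch (not the authors' code) of why depthwise separable convolution
# needs fewer parameters than standard convolution for a 1-D layer.
def standard_conv1d_params(c_in, c_out, k):
    # every output channel filters all input channels with a k-tap kernel
    return c_in * c_out * k + c_out              # weights + biases

def dws_conv1d_params(c_in, c_out, k):
    depthwise = c_in * k + c_in                  # one k-tap filter per input channel
    pointwise = c_in * c_out + c_out             # 1x1 convolution mixing channels
    return depthwise + pointwise

c_in, c_out, k = 32, 64, 5                       # hypothetical layer configuration
print(standard_conv1d_params(c_in, c_out, k))    # 10304
print(dws_conv1d_params(c_in, c_out, k))         # 192 + 2112 = 2304
```

Under these assumed sizes the depthwise separable layer uses roughly a quarter of the parameters of the standard layer; the exact savings in the proposed framework depend on the layer configuration reported by the authors.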
2. Proposed OSV framework

2.1 Synthetic signature profile generation

As depicted in Figs. 1 and 3, to generate high-quality writer-specific synthetic signatures we propose OSVGAN, a modified version of AC-GAN [5,28]. AC-GAN is a widely used variant of the vanilla GAN in which, in addition to the noise vector z, a corresponding class label c ~ P_c is given as input to the generator G to generate writer-specific synthetic signatures S_syn = G(c, z).

Figure 1. The proposed OSVGAN architecture, a variant of the Auxiliary Classifier GAN.

Figure 2. The proposed Depthwise Separable Convolution based neural network architecture to classify a test signature.

In the vanilla GAN [3,4], the generator G transforms a random noise vector z into a 1D vector (signature profile) or a 2D image, i.e. x_G = G(z). The noise z is typically drawn from an easy-to-sample uniform distribution, z ~ U(-1, 1). The generator aims to make the generated data (synthetic signature profiles) as similar as possible to the target data distribution (the signature dataset):

P_data(S_G) = \int P_data(S_G, z) dz = \int P_data(S_G | z) \cdot P_z(z) dz    (1)

In principle, a GAN attempts to learn a mapping from a simple latent distribution P_z(z) to the complicated data distribution P_data(S_G | z). Therefore, the joint optimization problem for the GAN can be represented as:

\min_G \max_D V(G, D) = E_{x ~ P_data}[\log D(x)] + E_{z ~ P_z}[\log(1 - D(G(z)))]    (2)

An Auxiliary Classifier GAN [5], as depicted in Fig. 1, takes as input both the class label c (in the signature context, genuine/forgery) and the noise z, i.e. S_fake = G(c, z). The discriminator outputs probability distributions over the source labels (real/fake) and the class (writer-id) labels, i.e. P(S | X), P(W | X) = D(X). The discriminator's objective function is represented as:

L_S = E[\log P(S = real | X_real)] + E[\log P(S = fake | X_fake)]    (3)

L_W = E[\log P(W = w | X_real)] + E[\log P(W = w | X_fake)]    (4)

The generator and the discriminator compete to maximize L_W - L_S and L_S + L_W, respectively. Recently, Swaminathan et al. [4] proposed a novel approach in which, to increase the modelling power of the prior distribution, they reparameterized [6] the latent generative space of the vanilla GAN into a set of Gaussian mixture models. Motivated by their work, we reparameterize the latent generative space of the Auxiliary Classifier GAN into a set of mixture models and learn the best mixture model specific to each writer:

P_z(z) = \sum_{i=1}^{K} \phi_i \cdot N(z | \mu_i, \Sigma_i)    (5)

where N(z | \mu_i, \Sigma_i) is the probability of the sample z under the normal distribution N(\mu_i, \Sigma_i). Assuming uniform mixture weights, i.e. \phi_i = 1/K,

P_z(z) = (1/K) \sum_{i=1}^{K} N(z | \mu_i, \Sigma_i)    (6)

Applying the "reparameterization trick" [6] to equation (6) divides the single Gaussian distribution into K Gaussian distributions. The noise from the i-th Gaussian distribution is computed as z = \mu_i + \sigma_i \cdot \epsilon, where \epsilon is an auxiliary noise variable with \epsilon ~ N(0, 1), \mu_i is a sample from the uniform distribution U(-1, 1), and \sigma_i is set to 0.4. The reader is referred to [4] for further analysis.

As depicted in Fig. 1, a Gaussian random noise vector of size 5, drawn from the selected Gaussian distribution, and the label embeddings are given as input to the generator G. The generator produces the corresponding profile of an online signature of size 1 x 200, which is fed as input to the discriminator D, consisting of a one-dimensional convolution layer followed by three dense layers, to classify the signature profile as real or generated. The generator is trained to generate synthetic profiles close to the samples from the target data distribution.
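The sketch below ties these pieces together: the reparameterized Gaussian-mixture latent of equations (5)-(6) (z = \mu_i + \sigma_i \cdot \epsilon with \mu_i ~ U(-1, 1), \sigma_i = 0.4, latent size 5), a label-conditioned generator emitting a 1 x 200 profile, and a discriminator with one 1-D convolution followed by dense layers and two output heads. It is a minimal PyTorch sketch under stated assumptions, not the authors' implementation; the number of mixture components, layer widths, kernel size, and activations are illustrative choices not given in the text.

```python
# Minimal OSVGAN-style sketch (assumptions: PyTorch, hypothetical layer widths).
import torch
import torch.nn as nn

LATENT_DIM, PROFILE_LEN, NUM_WRITERS, EMBED_DIM = 5, 200, 40, 5

def sample_latent(batch_size, num_components=10, sigma=0.4):
    """Reparameterized Gaussian-mixture latent: z = mu_i + sigma * eps (eqs. 5-6)."""
    mu = torch.empty(num_components, LATENT_DIM).uniform_(-1.0, 1.0)  # component means ~ U(-1, 1)
    idx = torch.randint(num_components, (batch_size,))                # pick one component per sample
    eps = torch.randn(batch_size, LATENT_DIM)                         # auxiliary noise ~ N(0, 1)
    return mu[idx] + sigma * eps

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(NUM_WRITERS, EMBED_DIM)             # label embedding
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM + EMBED_DIM, 128), nn.ReLU(),
            nn.Linear(128, PROFILE_LEN), nn.Tanh(),                   # 1 x 200 synthetic profile
        )
    def forward(self, z, writer_id):
        return self.net(torch.cat([z, self.embed(writer_id)], dim=1))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.LeakyReLU(0.2))
        self.dense = nn.Sequential(                                   # dense layers; the heads
            nn.Linear(16 * PROFILE_LEN, 128), nn.LeakyReLU(0.2),      # below complete the stack
            nn.Linear(128, 64), nn.LeakyReLU(0.2),
        )
        self.real_fake = nn.Linear(64, 1)                             # source head (real/fake)
        self.writer = nn.Linear(64, NUM_WRITERS)                      # writer-id head (AC-GAN)
    def forward(self, profile):
        h = self.conv(profile.unsqueeze(1)).flatten(1)
        h = self.dense(h)
        return self.real_fake(h), self.writer(h)

z = sample_latent(8)
gen, disc = Generator(), Discriminator()
fake = gen(z, torch.randint(NUM_WRITERS, (8,)))                       # (8, 200) synthetic profiles
rf_logit, writer_logits = disc(fake)
```

A usage note: in this sketch one mixture component is chosen uniformly at random per sample; in the proposed framework the best mixture model is learned per writer, a detail the sketch does not attempt to reproduce.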
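For completeness, here is one way the AC-GAN objectives in equations (3)-(4) could be computed with standard cross-entropy losses (maximizing a log-likelihood is implemented as minimizing the corresponding cross-entropy). The tensors, batch size, and 40-writer setting are placeholders; the paper does not describe its training code.

```python
# Sketch of the AC-GAN objectives (eqs. 3-4) with placeholder discriminator outputs.
import torch
import torch.nn.functional as F

def acgan_losses(rf_real, rf_fake, w_real, w_fake, writer_id):
    # L_S (eq. 3): log-likelihood of the correct real/fake source, as two BCE terms
    l_s = F.binary_cross_entropy_with_logits(rf_real, torch.ones_like(rf_real)) + \
          F.binary_cross_entropy_with_logits(rf_fake, torch.zeros_like(rf_fake))
    # L_W (eq. 4): log-likelihood of the correct writer id for real and generated profiles
    l_w = F.cross_entropy(w_real, writer_id) + F.cross_entropy(w_fake, writer_id)
    return l_s, l_w

# Placeholder logits for a batch of 8 profiles and 40 writers.
rf_real, rf_fake = torch.randn(8, 1), torch.randn(8, 1)
w_real, w_fake = torch.randn(8, 40), torch.randn(8, 40)
l_s, l_w = acgan_losses(rf_real, rf_fake, w_real, w_fake, torch.randint(40, (8,)))
# The discriminator minimizes l_s + l_w (i.e. maximizes L_S + L_W); the generator is
# updated so that L_W grows while L_S shrinks (i.e. maximizes L_W - L_S).
```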
