Leave-one-out-training and leave-one-out-testing Hidden Markov models for a handwritten numeral recognizer: The implications of a single classifier and multiple classifications

Ko, Albert Hung Ren and Cavalin, Paulo Rodrigo and Sabourin, Robert and de Souza Britto, Alceu

IEEE Transactions on Pattern Analysis and Machine Intelligence 2009

Abstract: Hidden Markov Models (HMMs) have been shown to be useful in handwritten pattern recognition. However, owing to their fundamental structure, they have little resistance to unexpected noise among observation sequences. In other words, unexpected noise in a sequence might "break" the normal transition of states for that sequence, making it unrecognizable to the trained models. To resolve this problem, we propose a leave-one-out-training strategy, which makes the models more robust. We also propose a leave-one-out-testing method, which compensates for some of the negative effects of this noise. The latter is an example of a system with a single classifier and multiple classifications. Compared with the 98.00 percent accuracy of the benchmark HMMs, the new system achieves a 98.88 percent accuracy rate on handwritten digits.
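
The "single classifier, multiple classifications" idea behind leave-one-out-testing can be illustrated with a short sketch: the same trained models score every variant of a test sequence with one observation removed, and the per-variant decisions are combined. The Python below is a minimal illustration under assumptions not taken from the paper: discrete (quantized) observation sequences, one HMM per digit class scored with the standard forward algorithm, and a simple majority vote over the leave-one-out variants. The function names, model representation, and voting rule are hypothetical.

```python
# Sketch of leave-one-out-testing with discrete HMMs (illustrative, not the
# authors' exact formulation).
import numpy as np
from collections import Counter

def forward_log_likelihood(obs, start_p, trans_p, emit_p):
    """Log-likelihood of a discrete observation sequence under one HMM,
    computed with the standard forward algorithm in log space.
    obs: list of symbol indices; start_p: (n_states,); trans_p: (n_states,
    n_states); emit_p: (n_states, n_symbols)."""
    log_alpha = np.log(start_p) + np.log(emit_p[:, obs[0]])
    for t in range(1, len(obs)):
        # alpha_t(j) = b_j(o_t) * sum_i alpha_{t-1}(i) * a_{ij}
        log_alpha = (np.log(emit_p[:, obs[t]])
                     + np.logaddexp.reduce(log_alpha[:, None]
                                           + np.log(trans_p), axis=0))
    return np.logaddexp.reduce(log_alpha)

def classify(obs, models):
    """Pick the class whose HMM assigns the highest likelihood.
    models: dict mapping class label -> (start_p, trans_p, emit_p)."""
    scores = {c: forward_log_likelihood(obs, *m) for c, m in models.items()}
    return max(scores, key=scores.get)

def leave_one_out_classify(obs, models):
    """Classify every leave-one-out variant of obs (one symbol dropped at a
    time) with the same models, then combine the decisions by majority vote."""
    votes = [classify(obs[:i] + obs[i + 1:], models) for i in range(len(obs))]
    return Counter(votes).most_common(1)[0][0]
```

In this sketch the leave-one-out principle only affects how a test sequence is scored; the leave-one-out-training strategy mentioned in the abstract applies the analogous idea at training time, with details given in the paper rather than reproduced here.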