Evaluation of hidden layer architecture in neural networks for mRNA backtranslation, 2003
Pratt, Kenrick A.
2000-2009
Many biological experiments require that a protein sequence be translated back to the nucleic acid sequence that codes for it, so investigators need a means to backtranslate a protein to its nucleic acid sequence. However, the degenerate nature of the genetic code greatly frustrates this process through ambiguities in the wobble bases. One possible solution to this dilemma is to predict codon usage frequencies for a target organism through use of an Artificial Neural Network. Accordingly, a Neural Network was trained on amino acid and nucleic acid sequences to determine the network's capacity to make accurate predictions for a twenty-amino-acid window. Moreover, ten different network architectures were surveyed to ascertain which one yields optimal (least-error) results when trained on the same nucleic acid sequences. The winning architecture was then examined using two new training sets partitioned by bias for mRNA secondary structure: the more negative the bias, the more secondary structure a sequence will have, whereas a less negative bias indicates less secondary structure. Testing of these two training sets revealed that the neural network was able to distinguish between the two sets; i.e., the training set with greater secondary structure learned the patterns in fewer training cycles and produced a lower error than the training set with less secondary structure given the same network architecture. Ultimately, this work may be beneficial as a computational tool for backtranslation in degenerate PCR cloning and for identifying unknown coding regions in genes.
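The combinatorial ambiguity that motivates the neural-network approach can be illustrated with a minimal sketch. It enumerates candidate backtranslations under the standard genetic code; the excerpted codon table and the function name `backtranslations` are illustrative choices, not taken from the thesis:

```python
# Sketch of backtranslation ambiguity under the standard genetic code.
# Only a few amino acids are tabulated, enough to show the degeneracy.
from itertools import product

# Partial standard-code table: amino acid -> synonymous codons.
CODONS = {
    "M": ["ATG"],                                     # Met: unambiguous
    "L": ["TTA", "TTG", "CTT", "CTC", "CTA", "CTG"],  # Leu: sixfold degenerate
    "K": ["AAA", "AAG"],                              # Lys: twofold degenerate
}

def backtranslations(protein):
    """Enumerate every nucleic acid sequence that codes for `protein`."""
    return ["".join(codons) for codons in
            product(*(CODONS[aa] for aa in protein))]

# Even a three-residue peptide already has 1 * 6 * 2 = 12 candidates;
# predicting per-organism codon usage narrows this space.
candidates = backtranslations("MLK")
print(len(candidates))  # 12
```

The candidate count grows multiplicatively with sequence length, which is why a predictor of organism-specific codon preferences, rather than exhaustive enumeration, is attractive for windows of twenty amino acids.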
text
application/pdf
2003-12-01
thesis
Master of Science (MS)
Clark Atlanta University
Biological Sciences
Seffens, William
Georgia--Atlanta
http://hdl.handle.net/20.500.12322/cau.td:2003_pratt_kenrick_a