Google BERT: Multiple-Choice Question Generation on an Ontology Base
DOI: https://doi.org/10.48047/

Keywords: Long Short-Term Memory, Deep Learning, Google BERT, Ontology Model

Abstract
Data mining is the ability to identify useful information in raw data and to organize that information into a separate list. The process of extracting and structuring useful information from large data sources is known as ontology modeling. The semantic web has recently become a prominent topic for new research, and automatic Multiple-Choice Question (MCQ) generation is one of its open challenges for semantic-web developers. Using this ontology model, many researchers have proposed MCQ item generation systems that take sample phrases, sentences, or paragraphs as input and generate MCQs from them automatically. In this paper, we design an MCQ item generation system, which we label OntoQue. The proposed application is evaluated and tested using a pre-trained deep-learning NLP model, Google BERT, through which MCQs can be generated from a text summary. Earlier approaches used Long Short-Term Memory (LSTM) networks to learn order dependence in sequences, but they fail to reach full accuracy when trained on bulk data and also suffer from the vanishing-gradient problem, through which some information is lost while training the model. To test the proposed application, we use the Stanford Question Answering Dataset (SQuAD), which combines more than one hundred thousand questions from SQuAD 1.1 with over fifty-five thousand unanswerable questions written adversarially by crowd workers to look similar to answerable ones. The proposed model trains accurately on this dataset and generates the desired output for any text summary entered into the system.
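As a hypothetical illustration of the item-assembly step described above, the sketch below shows how a single MCQ item (stem, correct key, and distractors) might be put together. The function name, parameters, and distractor pool are illustrative assumptions, not part of the paper's system: in the actual pipeline, the stem and key would come from the BERT-based model over the input summary, and the distractor pool from the ontology.

```python
import random


def make_mcq_item(stem, key, distractor_pool, n_distractors=3, seed=0):
    """Assemble one MCQ item as a dict with a stem, shuffled options,
    and the index of the correct answer.

    Illustrative sketch only: in the described system the stem/key would
    be produced by the BERT model and the distractors drawn from the
    ontology; here they are supplied directly.
    """
    rng = random.Random(seed)
    # Draw distractors from the pool, excluding the correct key itself.
    candidates = [d for d in distractor_pool if d != key]
    distractors = rng.sample(candidates, n_distractors)
    # Mix the key in with the distractors and shuffle the options.
    options = distractors + [key]
    rng.shuffle(options)
    return {"stem": stem, "options": options, "answer": options.index(key)}


# Example usage with a made-up stem and pool:
item = make_mcq_item(
    "Which pre-trained model is used to evaluate the proposed system?",
    "Google BERT",
    ["LSTM", "GRU", "Vanilla RNN", "CNN"],
)
```

The fixed `seed` keeps the shuffling reproducible, which is convenient when regenerating the same question bank across runs.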