This paper presents an approach to some new dimensions of research in the field of Natural Language Processing (NLP). Its objective is to discuss key challenges that need to be addressed in this field, and it may benefit researchers who wish to study and learn about the applications, tools, and technologies of NLP. NLP is an emerging area of research and development in the domain of computational linguistics. To date, a number of NLP research dimensions have already evolved; among them, text analysis and speech recognition are the primary motives of NLP research. To achieve this, researchers need to concentrate on various tasks such as speech recognition, text-to-speech conversion and vice versa, text summarization, story generation, named entity recognition, automatic grammar correction while typing, sentiment analysis, machine translation, and automated answer generation. NLP is a challenging field since it deals with human language, which is extremely diverse and can be expressed in many ways. The key challenges include resolving ambiguity in words, common-sense representation, contextual information retrieval, and pragmatic analysis. Although various applications and techniques are used to address these challenges, they remain at an infant stage to date. This paper emphasizes some key areas of NLP where researchers can focus their work to enrich the field.
Challenges of Natural Language Processing: A New Dimension of Research in Computing
Chandamita Nath and Bhairab Sarma [DOI: 10.24214/jecet.B.10.2.05156]
Machine Reading Comprehension: Methods and Trends of Low Resource Languages
Dr. Mubarak Alkhatnai [DOI: 10.24214/jecet.B.10.2.05775]
Natural language processing (NLP) has been used to establish human-like communication with computers. Machine Reading Comprehension (MRC) has seen significant development, and it is crucial for machines to comprehend substantial facets of language, such as semantics, syntax, pragmatics, and phonology. Multiple studies have reported MRC models for high-resource languages, particularly English. However, these models fail to deliver significant performance on low-resource languages, largely because large-scale training datasets are unavailable for them, and only limited research has addressed low-resource languages, particularly Arabic, Urdu, and Hindi. This study presents a survey of the trends and methods of MRC in these low-resource languages. The survey demonstrates that available MRC models are ineffective in low-resource languages such as Hindi, Arabic, and Urdu, mainly due to the unavailability of large datasets and differences in language structure. Finally, the study describes open issues in available MRC systems and provides directions for future research.