1. Introduction
In the field of Chinese NLP, Chinese word segmentation is indispensable for intelligent question-answering systems, speech recognition, and machine translation, because it is a prerequisite for further analysis and processing of Chinese sentences. However, unlike English text, Chinese text has no naturally recognizable separators, and Chinese also suffers from polysemy. Therefore, we must rely on Chinese word segmentation to obtain the key information of Chinese texts, and it has accordingly become an important research direction in Chinese NLP. Since Chinese word segmentation was first proposed, many experts and scholars have devoted themselves to this important and fundamental research in the field of Chinese NLP. With the wide application of neural networks, Chinese word segmentation methods have transitioned from the earlier dictionary-based and statistical methods to neural network methods. In particular, the application of machine learning and deep learning has made Chinese word segmentation more effective. In 2015, Prof. Ze-Wen Liu from Tsinghua University proposed a model based on the Linear Chain Conditional Random Field (LCCRF) to solve the Chinese word segmentation problem (Liu, Ding, & Li, 2015); it effectively optimizes feature selection and tagging, and reduces the time and space complexity of the training model. Several months later, Xinchi Chen, Xipeng Qiu, Chenxi Zhu, Pengfei Liu, and Xuanjing Huang (2015) from Fudan University proposed another method using a Long Short-Term Memory (LSTM) model for Chinese word segmentation at the Conference on Empirical Methods in Natural Language Processing, which improved results compared with traditional methods. In 2016, Xuezhe Ma and Eduard Hovy from Carnegie Mellon University proposed a sequence-tagging method based on a neural network model for POS tagging at the Association for Computational Linguistics (ACL) conference, which brought a new idea to Chinese word segmentation.
In the same year, Yao and Huang (2016) adopted a bidirectional LSTM neural network model, which did not require any prior knowledge or preprocessing, to improve the accuracy of Chinese word segmentation. However, CRF relies heavily on extracted features and task-specific resources, while LSTM has a complex structure and suffers from long training and prediction times. In 2017, professors from Xiamen University proposed a new method that solves Chinese word segmentation with a Gated Recurrent Unit (GRU) neural network (Li, Duan, & Xu, 2017).
Both the GRU model and the LSTM model are extensions of the recurrent neural network model, and they have achieved their intended goals on a number of tasks, but the GRU model has a simpler structure than the LSTM model. However, both the LSTM model and the GRU model process sequences unidirectionally by default, so when applied to the Chinese word segmentation task they treat later words as more important than earlier ones. This setting is not appropriate for Chinese word segmentation, because the relative weight of past and future information varies from sentence to sentence. Therefore, in this article we propose a new model, CBiGCN, which combines a bidirectional GRU (which weighs the words on both sides of the current position), a CRF (which obtains a globally optimal solution through globally normalized probabilities), and a CNN. The model inherits the advantages of these components and uses an end-to-end sequence-tagging method for Chinese word segmentation. Experiments show that the proposed model achieves significant improvements in both segmentation accuracy and training speed.
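The bidirectional idea at the heart of the model can be illustrated with a minimal NumPy sketch: a forward GRU reads the sequence left to right, a backward GRU reads it right to left, and each position's representation concatenates both hidden states, so past and future context both contribute with learned weights. The dimensions, initialization, and function names below are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, W, U, b):
    """One GRU step. W, U, b each stack the update (z), reset (r),
    and candidate (n) parameters; gate equations follow the standard
    GRU formulation."""
    Wz, Wr, Wn = W
    Uz, Ur, Un = U
    bz, br, bn = b
    z = sigmoid(x @ Wz + h @ Uz + bz)        # update gate
    r = sigmoid(x @ Wr + h @ Ur + br)        # reset gate
    n = np.tanh(x @ Wn + (r * h) @ Un + bn)  # candidate state
    return z * h + (1.0 - z) * n             # new hidden state

def bigru(xs, params_f, params_b, hidden):
    """Run forward and backward GRUs over character embeddings
    xs (T, d) and concatenate their hidden states per position."""
    T = xs.shape[0]
    hf = np.zeros(hidden)
    hb = np.zeros(hidden)
    outs_f, outs_b = [], []
    for t in range(T):                       # left-to-right pass
        hf = gru_step(xs[t], hf, *params_f)
        outs_f.append(hf)
    for t in reversed(range(T)):             # right-to-left pass
        hb = gru_step(xs[t], hb, *params_b)
        outs_b.append(hb)
    outs_b.reverse()
    return np.stack([np.concatenate([f, b_])
                     for f, b_ in zip(outs_f, outs_b)])

# Toy setup: 5 characters, 8-dim embeddings, 6 hidden units per direction.
rng = np.random.default_rng(0)
d, h, T = 8, 6, 5

def make_params():
    W = tuple(rng.normal(scale=0.1, size=(d, h)) for _ in range(3))
    U = tuple(rng.normal(scale=0.1, size=(h, h)) for _ in range(3))
    b = tuple(np.zeros(h) for _ in range(3))
    return (W, U, b)

xs = rng.normal(size=(T, d))
H = bigru(xs, make_params(), make_params(), h)
print(H.shape)  # (5, 12): one 2*hidden vector per character
```

In the full model, each of these concatenated vectors would be scored for segmentation tags and decoded jointly by the CRF layer rather than read off independently.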