Abstract: With the development of the mobile Internet and automatic speech recognition (ASR), interacting with mobile terminals through voice has become a trend. The traditional approach to understanding a user's spontaneous spoken language is to write context-free grammars (CFGs) manually, but constructing a grammar with good coverage and optimized performance is laborious and expensive, and such grammars are difficult to maintain and update. We propose a new approach to spoken language understanding that combines a support vector machine (SVM) and conditional random fields (CRFs), which detect the task and extract task-related semantic information from spontaneous speech input, respectively. Each task is represented as a vector consisting of the task name and its semantic information. Eight different tasks from the "iFLYTEK yudian" voice mobile assistant are tested, and the precision and recall of the semantic representation of queries are 90.29% and 88.87%, respectively.
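For illustration only, the following is a minimal sketch of the kind of two-stage pipeline the abstract describes, assuming scikit-learn for the SVM task classifier and sklearn-crfsuite for the CRF slot extractor; the example tasks, tokens, feature functions, and BIO slot labels are hypothetical and are not taken from the paper.

```python
# Sketch of an SVM + CRF spoken language understanding pipeline (illustrative data).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC
import sklearn_crfsuite

# Stage 1: task detection -- classify the whole utterance into one task class.
task_texts = ["call mom", "set an alarm for 7 am", "what's the weather in Beijing"]
task_labels = ["call", "alarm", "weather"]
task_clf = make_pipeline(TfidfVectorizer(analyzer="char", ngram_range=(1, 3)), LinearSVC())
task_clf.fit(task_texts, task_labels)

# Stage 2: semantic information extraction -- label each token with a CRF (BIO tags).
def token_features(tokens, i):
    """Simple per-token features; a real system would add lexicons, POS tags, etc."""
    return {
        "word": tokens[i],
        "is_first": i == 0,
        "prev": tokens[i - 1] if i > 0 else "<BOS>",
        "next": tokens[i + 1] if i < len(tokens) - 1 else "<EOS>",
    }

train_tokens = [["set", "an", "alarm", "for", "7", "am"]]
train_tags = [["O", "O", "O", "O", "B-time", "I-time"]]
X_train = [[token_features(seq, i) for i in range(len(seq))] for seq in train_tokens]
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
crf.fit(X_train, train_tags)

# Combine: the task name plus the extracted slots form the semantic representation.
query = ["set", "an", "alarm", "for", "7", "am"]
task = task_clf.predict([" ".join(query)])[0]
slots = crf.predict([[token_features(query, i) for i in range(len(query))]])[0]
print({"task": task, "slots": list(zip(query, slots))})
```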