574 – Recent Advances in Software

Language Modeling Using SAS

Sponsor: Section on Statistical Computing
Keywords: N-gram Models, Long Short-Term Memory Network, Recurrent Neural Network, Deep Learning, Perplexity, Word Prediction

JeeHyun Hwang

SAS Institute, Inc.

Haipeng Liu

SAS Institute, Inc.

Yang Xu

SAS Institute, Inc.

Language modeling is the task of predicting the next word in a sentence. Language models are widely used to improve performance in areas such as automatic speech recognition and machine translation. Predicting the next word is a sequential data prediction problem. In this paper, we present two approaches to language modeling. First, n-gram models statistically estimate the probability of the next word given a sequence of previous words. This approach is supported by the SAS language model functionality, the language model action set, which is designed to efficiently train n-gram models on cloud platforms when the training data set consists of a large number of documents. Second, we explore neural networks for building language models using the SAS deep learning functionality, the deep learning action set, which enables us to build LSTM-based models. We choose Long Short-Term Memory (LSTM) networks because they are known to handle the exploding and vanishing gradient problems better than plain recurrent neural networks. We conduct user studies that demonstrate the effectiveness of our language models.
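The two approaches can be illustrated outside of SAS. The sketch below is a minimal Python stand-in for the first approach: a bigram model with add-one smoothing, next-word prediction, and perplexity evaluation. It is not the SAS language model action set; the function names and the toy corpus are assumptions made purely for illustration.

```python
# Illustrative bigram language model: next-word prediction and perplexity.
# Not the SAS language model action set; all names here are hypothetical.
from collections import Counter
import math

def train_bigram(sentences):
    """Count unigrams and bigrams over a list of tokenized sentences."""
    unigrams, bigrams = Counter(), Counter()
    for tokens in sentences:
        tokens = ["<s>"] + tokens + ["</s>"]
        unigrams.update(tokens)
        bigrams.update(zip(tokens, tokens[1:]))
    return unigrams, bigrams

def bigram_prob(prev, word, unigrams, bigrams, vocab_size):
    """P(word | prev) with add-one (Laplace) smoothing."""
    return (bigrams[(prev, word)] + 1) / (unigrams[prev] + vocab_size)

def predict_next(prev, unigrams, bigrams, vocab):
    """Return the most probable next word given the previous word."""
    return max(vocab, key=lambda w: bigram_prob(prev, w, unigrams, bigrams, len(vocab)))

def perplexity(sentences, unigrams, bigrams, vocab_size):
    """Perplexity = exp(-average log-probability per predicted token)."""
    log_prob, n_tokens = 0.0, 0
    for tokens in sentences:
        tokens = ["<s>"] + tokens + ["</s>"]
        for prev, word in zip(tokens, tokens[1:]):
            log_prob += math.log(bigram_prob(prev, word, unigrams, bigrams, vocab_size))
            n_tokens += 1
    return math.exp(-log_prob / n_tokens)

corpus = [["language", "models", "predict", "the", "next", "word"],
          ["n-gram", "models", "estimate", "word", "probabilities"]]
unigrams, bigrams = train_bigram(corpus)
vocab = set(unigrams)
print(predict_next("models", unigrams, bigrams, vocab))
print(perplexity(corpus, unigrams, bigrams, len(vocab)))
```

Similarly, the following sketch approximates the second approach using PyTorch in place of the SAS deep learning action set: an LSTM that predicts the next word and is trained with cross-entropy loss, whose exponential gives the batch perplexity. The vocabulary size, layer dimensions, and random training batch are placeholders, not values from the paper.

```python
# Minimal LSTM next-word model, assuming PyTorch as a stand-in for the
# SAS deep learning action set; dimensions and data are made up.
import torch
import torch.nn as nn

class LSTMLanguageModel(nn.Module):
    def __init__(self, vocab_size, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, token_ids):
        hidden_states, _ = self.lstm(self.embed(token_ids))
        return self.out(hidden_states)  # logits over the vocabulary

vocab_size, seq_len, batch = 1000, 20, 8
model = LSTMLanguageModel(vocab_size)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

tokens = torch.randint(0, vocab_size, (batch, seq_len + 1))  # fake corpus
inputs, targets = tokens[:, :-1], tokens[:, 1:]              # predict next word
logits = model(inputs)
loss = loss_fn(logits.reshape(-1, vocab_size), targets.reshape(-1))
loss.backward()
optimizer.step()
print("perplexity on this batch:", torch.exp(loss).item())
```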

© 2019 CadmiumCD