Keywords: Universal Sentence Encoder; FOMC Statement; Neural Network
I fine-tune the Universal Sentence Encoder (USE) to identify the tone of post-meeting FOMC statements. Four different neural network architectures are built on top of input variables derived from the USE representations of two consecutive statements to classify the tone. I train the models to match the predicted tone with the tone identified from the pre-classified alternative FOMC statements constructed by Doh, Song, and Yang (2020). While deep-learning architectures fit the training dataset well, the simplest architecture, a single-layer feed-forward neural network, performs best on the development dataset. However, when the input vector is expanded, the performance of the multi-layer deep-learning architecture improves the most.
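The simplest architecture described above, a single-layer feed-forward network over the USE embeddings of two consecutive statements, can be sketched as follows. This is a minimal illustration, not the paper's implementation: real 512-dimensional USE embeddings are replaced by random vectors, and the class labels (e.g. hawkish/neutral/dovish) and all function names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
EMB_DIM = 512      # Universal Sentence Encoder output dimension
N_CLASSES = 3      # assumed tone labels, e.g. hawkish / neutral / dovish

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def classify_tone(prev_emb, curr_emb, W, b):
    """Single-layer feed-forward pass on the concatenated embeddings
    of the previous and current FOMC statements."""
    x = np.concatenate([prev_emb, curr_emb], axis=-1)  # shape (2*EMB_DIM,)
    return softmax(x @ W + b)                          # class probabilities

# Stand-ins for USE(statement_{t-1}) and USE(statement_t).
prev_emb = rng.standard_normal(EMB_DIM)
curr_emb = rng.standard_normal(EMB_DIM)

# Randomly initialized weights of the single layer (trained in practice).
W = rng.standard_normal((2 * EMB_DIM, N_CLASSES)) * 0.01
b = np.zeros(N_CLASSES)

probs = classify_tone(prev_emb, curr_emb, W, b)
```

Expanding the input vector, as the abstract notes, would here mean widening `x` with additional features, which is where the multi-layer variants gain the most.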