MSU AI Club

Language Modelling with Recurrent Neural Networks

In this workshop, we learned about a new class of neural networks: recurrent neural networks, or RNNs. These are an excellent choice for working with data that comes in sequences of arbitrary length. The workshop was split across two consecutive weekly meetings.


Part A: Intro to RNNs

Recurrent neural networks (RNNs) have important applications in language modelling and time series analysis.


RNNs have a lot in common with the multilayer perceptrons we've been learning about. The key difference is that an RNN runs a layer in a cycle: at each step, the layer's input includes its output from the previous step. This lets the network take in a sequence of arbitrary length with a defined order, such as the words in a paragraph or the samples in an audio file.
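
To make that concrete, here is a tiny sketch of the classic "vanilla" RNN recurrence in plain Python with NumPy. The weight names and sizes are just illustrative; the point is that the same weights are reused at every step, and the hidden state h carries along what the network has seen so far.

import numpy as np

def rnn_forward(xs, W_x, W_h, b, h0):
    # xs: sequence of input vectors, shape (seq_len, input_dim)
    # W_x: input-to-hidden weights, shape (hidden_dim, input_dim)
    # W_h: hidden-to-hidden weights, shape (hidden_dim, hidden_dim)
    # b: bias, shape (hidden_dim,); h0: initial hidden state, shape (hidden_dim,)
    h = h0
    for x in xs:
        # Same weights at every step; the previous hidden state h
        # is part of the input, so earlier tokens influence later steps.
        h = np.tanh(W_x @ x + W_h @ h + b)
    return h  # a fixed-size summary of the whole sequence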


In the first part of this workshop, members worked together to train a model that can classify an Amazon product review as either positive (4-5 stars) or negative (1-2 stars).


The completed code from this workshop was shared as an embed with this post.
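
As a rough illustration of what that model looks like, here is a minimal sketch in PyTorch. The vocabulary size, layer dimensions, and loss setup below are illustrative assumptions, not necessarily the exact choices from the workshop notebook.

import torch
import torch.nn as nn

class ReviewClassifier(nn.Module):
    def __init__(self, vocab_size, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)   # token ids -> vectors
        self.rnn = nn.RNN(embed_dim, hidden_dim, batch_first=True)
        self.classify = nn.Linear(hidden_dim, 1)           # final state -> one score

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) tensor of integer token ids
        embedded = self.embed(token_ids)         # (batch, seq_len, embed_dim)
        _, hidden = self.rnn(embedded)           # hidden: (1, batch, hidden_dim)
        return self.classify(hidden.squeeze(0))  # (batch, 1) logit

model = ReviewClassifier(vocab_size=10_000)
loss_fn = nn.BCEWithLogitsLoss()  # target 1 = positive review, 0 = negative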


Part B: Exploring Word Embeddings

In the second week of this workshop, we talked about an idea central to language models in AI: word embeddings.


Members got to play around with some of the neat mathematical tricks you can do with word embeddings. We learned how embeddings encode words as vectors of numbers such that a word's meaning, and the relationships between meanings, have mathematical significance: words with similar meanings end up close together, and analogies show up as vector arithmetic.
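
The famous example is analogy arithmetic: take the vector for "king", subtract "man", add "woman", and the nearest word is "queen". Here is a small sketch of that trick using pretrained GloVe vectors loaded through the gensim library; the particular model name is just one convenient, publicly available choice.

import gensim.downloader as api

# One-time download of pretrained 50-dimensional GloVe word vectors.
vectors = api.load("glove-wiki-gigaword-50")

# Analogy arithmetic: king - man + woman should land near "queen".
print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=3))

# Words with similar meanings sit close together in the vector space.
print(vectors.similarity("happy", "glad"))        # high
print(vectors.similarity("happy", "carburetor"))  # low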


This workshop happened to fall during one of the biggest weeks in AI so far: with the announcement of GPT-4 and other major milestones, we took some time to contextualize what we learned within the rapidly developing world of natural language processing. Take a look at some exciting examples in the slideshow:


If you would like to learn more about our workshops, chat with members, and get access to more learning resources, feel free to join us on Discord.
