KDD19 Tutorial: From Shallow to Deep Language Representations: Pre-training, Fine-tuning, and Beyond

Time: Thursday, August 8, 2019, 9:30am-12:30pm | 1:00pm-4:00pm

Location: Dena’ina Center, Kahtnu 1 & 2, Level 2, 600 W. Seventh Avenue, Anchorage, AK 99501

Presenters: Aston Zhang, Haibin Lin, Leonard Lausen, Sheng Zha, and Alex Smola

Abstract

Natural language processing (NLP) is at the core of the pursuit of artificial intelligence, with deep learning as the main powerhouse of recent advances. Yet most NLP problems remain unsolved: the compositional nature of language lets us express complex ideas, but at the same time makes it intractable to spoon-feed enough labels to data-hungry algorithms for every situation. Recent progress on unsupervised language representation techniques brings new hope. In this hands-on tutorial, we walk through these techniques and see how NLP learning can be drastically improved by pre-training and fine-tuning language representations on unlabelled text. Specifically, we consider shallow representations from word embeddings such as word2vec, fastText, and GloVe, and deep representations with attention mechanisms such as BERT. We demonstrate detailed procedures and best practices for pre-training such models and fine-tuning them on downstream NLP tasks as diverse as finding synonyms and analogies, sentiment analysis, question answering, and machine translation. All the hands-on implementations use Apache MXNet (incubating) and GluonNLP, and some of the implementations are available in Dive into Deep Learning.
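
The notebooks below spell out the full workflow; as a quick taste of the pre-train/fine-tune pattern described above, here is a minimal sketch that loads pre-trained GloVe vectors with GluonNLP and uses them to initialize the embedding layer of a toy sentiment classifier. It assumes MXNet 1.x and GluonNLP are installed; the tiny model and the `glove.6B.50d` source are illustrative choices, not the tutorial's exact setup.

```python
import mxnet as mx
from mxnet.gluon import nn
import gluonnlp as nlp

# Shallow pre-trained representations: GloVe word vectors (downloaded on first use).
glove = nlp.embedding.create('glove', source='glove.6B.50d')

# Build a tiny vocabulary and attach the pre-trained vectors to it.
vocab = nlp.Vocab(nlp.data.count_tokens('the movie was great'.split()))
vocab.set_embedding(glove)

# Downstream model: the embedding layer starts from the pre-trained vectors and
# is fine-tuned together with the randomly initialized classifier on labelled data.
embedding = nn.Embedding(len(vocab), 50)
classifier = nn.Dense(2)                     # positive / negative
embedding.initialize()
classifier.initialize()
embedding.weight.set_data(vocab.embedding.idx_to_vec)

x = mx.nd.array([vocab[['the', 'movie', 'was', 'great']]])   # token ids, shape (1, 4)
sentence_vec = embedding(x).mean(axis=1)                     # average the word vectors
print(classifier(sentence_vec))                              # untrained class scores
```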

Agenda

| Time | Tutor | Title |
|------|-------|-------|
| 9:30am-10:00am | Alex Smola | Part 1.1: Basics of hands-on deep learning |
| 10:00am-11:00am | Alex Smola | Part 1.2: Neural networks |
| 11:00am-11:10am | | Coffee break |
| 11:10am-11:30am | Aston Zhang | Part 2.1: Shallow language representations in word embedding |
| 11:30am-12:30pm | Aston Zhang | Part 2.2: Word embedding application |
| 12:30pm-1:00pm | | Lunch break |
| 1:00pm-2:20pm | Leonard Lausen | Part 3: Transformer |
| 2:20pm-2:30pm | | Coffee break |
| 2:30pm-3:30pm | Haibin Lin | Part 4.1: Deep language representations with Transformer (BERT) |
| 3:30pm-4:00pm | Haibin Lin | Part 4.2: BERT application |

Part 1.1: Basics of Hands-on Deep Learning

Slides: [pdf]

Notebooks:

  1. NDArray: [ipynb]
  2. Autograd: [ipynb]
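
The two notebooks above are the warm-up; a minimal sketch of what they cover (imperative NDArray operations and automatic differentiation with `autograd`), assuming MXNet 1.x:

```python
from mxnet import nd, autograd

# NDArray: imperative tensor operations, similar in spirit to NumPy.
x = nd.arange(4).reshape((4, 1))
y = nd.ones((4, 1)) * 2
print((x * y).sum())

# Autograd: record a computation, then differentiate it.
x.attach_grad()
with autograd.record():
    z = 2 * (x * x).sum()
z.backward()
print(x.grad)          # dz/dx = 4x
```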

Part 1.2: Neural Networks

Notebooks:

  1. Model: [ipynb]
  2. CNN/RNN: [ipynb]
  3. Sequence: [ipynb]
  4. RNN with Gluon: [ipynb]
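
A minimal sketch of the building blocks these notebooks walk through (defining a model with Gluon and calling a recurrent layer), assuming MXNet 1.x; the layer sizes and shapes are illustrative:

```python
from mxnet import nd
from mxnet.gluon import nn, rnn

# A small feed-forward model built from Gluon blocks.
net = nn.Sequential()
net.add(nn.Dense(64, activation='relu'),
        nn.Dense(10))
net.initialize()
print(net(nd.random.normal(shape=(2, 20))).shape)      # (2, 10)

# A recurrent layer from gluon.rnn; input layout is (seq_len, batch, features).
lstm = rnn.LSTM(hidden_size=32)
lstm.initialize()
print(lstm(nd.random.normal(shape=(5, 2, 20))).shape)  # (5, 2, 32)
```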

Part 2.1: Shallow language representations in word embedding

Slides: [pdf]

Part 2.2: Word Embedding Application

Notebooks: [ipynb]
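
One application covered in the tutorial is finding synonyms with pre-trained vectors. A hedged sketch of the idea, using cosine similarity over GloVe vectors loaded through GluonNLP; the query word and the `glove.6B.50d` source are illustrative:

```python
from mxnet import nd
import gluonnlp as nlp

glove = nlp.embedding.create('glove', source='glove.6B.50d')

vecs = glove.idx_to_vec                        # (vocab_size, 50) embedding matrix
q = glove['baby']                              # query vector, shape (50,)

# Cosine similarity between the query and every word in the vocabulary.
norms = nd.sqrt((vecs * vecs).sum(axis=1)) * nd.sqrt((q * q).sum()) + 1e-9
cos = nd.dot(vecs, q) / norms
topk = nd.topk(cos, k=6).astype('int32').asnumpy()
print([glove.idx_to_token[int(i)] for i in topk])   # nearest neighbours, incl. 'baby' itself
```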

Part 3: Transformer

Slides: [pdf]

Notebooks: [ipynb]
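
At the core of the Transformer covered in this part is scaled dot-product attention. A minimal sketch of that one operation (not the tutorial's full implementation), written with MXNet NDArray and illustrative tensor shapes:

```python
import math
from mxnet import nd

def dot_product_attention(queries, keys, values):
    """queries: (batch, q_len, d); keys and values: (batch, k_len, d)."""
    d = queries.shape[-1]
    # Compare every query with every key, scale by sqrt(d), normalize to weights.
    scores = nd.batch_dot(queries, keys, transpose_b=True) / math.sqrt(d)
    weights = nd.softmax(scores, axis=-1)
    # Each output is a weighted sum of the values.
    return nd.batch_dot(weights, values)

q = nd.random.normal(shape=(2, 3, 8))
k = v = nd.random.normal(shape=(2, 5, 8))
print(dot_product_attention(q, k, v).shape)      # (2, 3, 8)
```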

Part 4.1: Deep language representations with Transformer (BERT)

Slides: [pdf]
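
A hedged sketch of obtaining deep contextual representations from a pre-trained BERT encoder via the GluonNLP model zoo, assuming GluonNLP 0.x; the checkpoint name and example sentence are illustrative:

```python
from mxnet import nd
import gluonnlp as nlp

# Pre-trained 12-layer BERT base and its vocabulary (downloaded on first use).
bert, vocab = nlp.model.get_model('bert_12_768_12',
                                  dataset_name='book_corpus_wiki_en_uncased',
                                  use_classifier=False, use_decoder=False)

tokenizer = nlp.data.BERTTokenizer(vocab, lower=True)
tokens = ['[CLS]'] + tokenizer('deep language representations') + ['[SEP]']
ids = nd.array([vocab[tokens]])
token_types = nd.zeros_like(ids)                 # single-segment input

# Contextual embedding for every token, plus the pooled [CLS] representation.
seq_encoding, cls_encoding = bert(ids, token_types)
print(seq_encoding.shape, cls_encoding.shape)
```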

Part 4.2: BERT Application

Notebooks: [ipynb]
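
The applications in this part fine-tune BERT on downstream tasks such as sentence classification. A hedged sketch of the pattern (a task-specific head on top of the pooled representation, trained end-to-end); the wrapper class, hyperparameters, and use of plain Adam are illustrative, not the tutorial's exact setup:

```python
from mxnet import gluon
from mxnet.gluon import nn
import gluonnlp as nlp

bert, vocab = nlp.model.get_model('bert_12_768_12',
                                  dataset_name='book_corpus_wiki_en_uncased',
                                  use_classifier=False, use_decoder=False)

class BertClassifier(nn.Block):
    """Pre-trained BERT encoder plus a task-specific classification head."""
    def __init__(self, bert, num_classes=2, **kwargs):
        super().__init__(**kwargs)
        self.bert = bert
        self.classifier = nn.Dense(num_classes)

    def forward(self, ids, token_types, valid_length):
        _, pooled = self.bert(ids, token_types, valid_length)
        return self.classifier(pooled)

model = BertClassifier(bert)
model.classifier.initialize()        # only the new head starts from random weights
loss_fn = gluon.loss.SoftmaxCrossEntropyLoss()
trainer = gluon.Trainer(model.collect_params(), 'adam', {'learning_rate': 2e-5})
# The fine-tuning loop then iterates over (ids, token_types, valid_length, label)
# batches: compute loss_fn(model(ids, token_types, valid_length), label),
# call .backward(), and take a trainer.step().
```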

FAQ

  • Q: How do I get access to the notebooks from the tutorial?
    • To set them up on SageMaker Notebook instances, follow the instructions here.
    • To set them up locally, check out the local installation guide.
    • The notebooks can be downloaded from this repo.

Have more questions? You may reach us at mxnet-science-info@amazon.com
