This post was originally published at Accelerate with BERT: NLP Optimization Models

For a successful natural language processing project, collecting and preparing data, building resilient pipelines, and getting "model ready" can easily take months of effort, even with the most talented engineers. But what if we could reduce the data required to a fraction?
In this article, we'll cover how transfer learning is making world-class models open source and introduce BERT (Bidirectional Encoder Representations from Transformers), one of the most powerful NLP tools to date. We'll explore how it works and why it will change the way companies execute NLP projects.
