English lite language model for winkNLP
This is a pre-trained English language model for the winkjs NLP package — winkNLP. The lite model package has a size of ~890KB, which expands to about 2.4MB after installation. It is an open-source language model, released under the MIT license.
It contains models for the following NLP tasks:
- Tokenization
- Token's Feature Extraction
- Sentence Boundary Detection
- Negation Handling
- POS tagging
- Automatic mapping of British spellings to American
- Named Entity Recognition
- Sentiment Analysis
- Custom Entities Definition
- Stemming using Porter Stemmer Algorithm V2
- Lemmatization
- Readability statistics computation
The model must be installed along with wink-nlp:

```shell
# Install wink-nlp
npm install wink-nlp --save

# Install wink-eng-lite-model
node -e "require( 'wink-nlp/models/install' )" wink-eng-lite-model
```
We start by requiring the wink-nlp package and the wink-eng-lite-model. Then we instantiate wink-nlp using the language model:

```javascript
// Load the wink-nlp package.
const winkNLP = require('wink-nlp');
// Load the English lite language model.
const model = require('wink-eng-lite-model');
// Instantiate winkNLP with the model.
const nlp = winkNLP(model);

// Hello World!
const text = 'Hello World!';
const doc = nlp.readDoc(text);
console.log(doc.out());
// -> Hello World!
```
Learn how to use this model with winkNLP from the following resources:
- Overview — introduction to winkNLP.
- Concepts — everything you need to know to get started.
- API Reference — explains usage of APIs with examples.
- Release history — version history along with the details of breaking changes, if any.
With this model, winkNLP processes raw text at >525,000 tokens per second, benchmarked using Chapter 13 of Ulysses by James Joyce on a 2.2 GHz Intel Core i7 machine with 16 GB RAM. The benchmark covered the entire NLP pipeline: tokenization, sentence boundary detection, negation handling, sentiment analysis, part-of-speech tagging, and named entity extraction.
While the model is trained to process English text, it can tokenize text containing other languages such as Hindi, French, and German. Such tokens are tagged as X (foreign word) during POS tagging.
The model follows the Universal POS tags standard. It delivers an accuracy of ~94.7% on a subset of the WSJ corpus; this figure includes tokenization of raw text prior to POS tagging.
The model is trained to detect the following entity types: CARDINAL, DATE, DURATION, EMAIL, EMOJI, EMOTICON, HASHTAG, MENTION, MONEY, ORDINAL, PERCENT, TIME, and URL.
It delivers an f-score of ~84.5% when validated using the Amazon Product Review Sentiment Labelled Sentences Data Set from the UCI Machine Learning Repository.
The model is packaged in the standard NPM tarball format and can be found under the latest release. It stores the trained data in JSON and binary formats. Apart from the data, there is a tiny fraction of JS glue code, used primarily during model loading.
If you spot a bug that has not yet been reported, raise a new issue.
Wink is a family of open-source packages for Natural Language Processing, Machine Learning, and Statistical Analysis in Node.js. The code is thoroughly documented for easy human comprehension and has a test coverage of ~100% for the reliability needed to build production-grade solutions.
The wink-eng-lite-model is copyright 2020-21 of GRAYPE Systems Private Limited.
It is licensed under the terms of the MIT License.