N-Gram

An N-gram is a sequence of N words: a 2-gram (or bigram) is a two-word sequence of words like "lütfen ödevinizi", "ödevinizi çabuk", or "çabuk veriniz", and a 3-gram (or trigram) is a three-word sequence of words like "lütfen ödevinizi çabuk" or "ödevinizi çabuk veriniz".
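As a quick plain-Python illustration (independent of this library), the N-grams of a token list can be enumerated with a sliding window:

def extract_ngrams(tokens: list, n: int) -> list:
    # Collect every contiguous window of n tokens as a tuple.
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

print(extract_ngrams(["lütfen", "ödevinizi", "çabuk", "veriniz"], 2))  # bigrams
print(extract_ngrams(["lütfen", "ödevinizi", "çabuk", "veriniz"], 3))  # trigrams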

Smoothing

To keep a language model from assigning zero probability to unseen events, we'll have to shave off a bit of probability mass from some more frequent events and give it to the events we've never seen. This modification is called smoothing or discounting.

Laplace Smoothing

The simplest way to do smoothing is to add one to all the bigram counts, before we normalize them into probabilities. All the counts that used to be zero will now have a count of 1, the counts of 1 will be 2, and so on. This algorithm is called Laplace smoothing.
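With a vocabulary of size V, the Laplace-smoothed bigram estimate is (count(w1 w2) + 1) / (count(w1) + V). A minimal plain-Python sketch of this formula (illustrative only, not the library's implementation):

def laplace_bigram_probability(bigram_count: int, context_count: int, vocabulary_size: int) -> float:
    # Add 1 to the bigram count and V to the context count so the estimates still sum to 1.
    return (bigram_count + 1) / (context_count + vocabulary_size)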

Add-k Smoothing

One alternative to add-one smoothing is to move a bit less of the probability mass from the seen to the unseen events. Instead of adding 1 to each count, we add a fractional count k. This algorithm is therefore called add-k smoothing.
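The same sketch generalized to a fractional count k (again illustrative only; k = 1 recovers Laplace smoothing):

def add_k_bigram_probability(bigram_count: int, context_count: int, vocabulary_size: int, k: float = 0.5) -> float:
    # A smaller k moves less probability mass to unseen events.
    return (bigram_count + k) / (context_count + k * vocabulary_size)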

For Developers

You can also see the Cython, Java, C, C++, Swift, Js, or C# repositories.

Requirements

Python

To check if you have a compatible version of Python installed, use the following command:

python -V

You can download the latest version of Python from the official Python website.

Git

Install the latest version of Git.

Pip Install

pip3 install NlpToolkit-NGram

Download Code

To work on the code, create a fork from the GitHub page and use Git to clone it to your local machine; for example, on Ubuntu:

git clone <your-fork-git-link>

A directory called NGram-Py will be created. Alternatively, you can clone the original repository to explore the code:

git clone https://github.com/starlangsoftware/NGram-Py.git

Open the project with the PyCharm IDE

Steps for opening the cloned project:

  • Start the IDE
  • Select File | Open from the main menu
  • Choose the NGram-Py folder
  • Select the Open as Project option
  • After a couple of seconds, the dependencies will be downloaded.

Detailed Description

Training NGram

To create an empty NGram model:

NGram(N: int)

For example,

a = NGram(2)

this creates an empty NGram model.

To add a sentence to the NGram model:

addNGramSentence(self, symbols: list)

For example,

nGram = NGram(2)
nGram.addNGramSentence(["jack", "read", "books", "john", "mary", "went"])
nGram.addNGramSentence(["jack", "read", "books", "mary", "went"])

with the lines above, an empty NGram model is created and two sentences are added to the bigram model.

The NoSmoothing class is the simplest smoothing technique. It doesn't require training; probabilities are calculated directly from the raw counts. For example, to calculate the probabilities of a given NGram model using NoSmoothing:

a.calculateNGramProbabilities(NoSmoothing())

The LaplaceSmoothing class is a simple smoothing technique. It doesn't require training; probabilities are calculated by adding 1 to each count. For example, to calculate the probabilities of a given NGram model using LaplaceSmoothing:

a.calculateNGramProbabilities(LaplaceSmoothing())

The GoodTuringSmoothing class is a more complex smoothing technique that also doesn't require training. To calculate the probabilities of a given NGram model using GoodTuringSmoothing:

a.calculateNGramProbabilities(GoodTuringSmoothing())

The AdditiveSmoothing class is a smoothing technique that requires training. To calculate the probabilities of a given NGram model using AdditiveSmoothing:

a.calculateNGramProbabilities(AdditiveSmoothing())
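Putting the training steps together (the import paths below are an assumption based on the NlpToolkit-NGram package layout; adjust them if your installation differs):

from NGram.NGram import NGram
from NGram.LaplaceSmoothing import LaplaceSmoothing

# Build a bigram model from two sentences and smooth the counts.
nGram = NGram(2)
nGram.addNGramSentence(["jack", "read", "books", "john", "mary", "went"])
nGram.addNGramSentence(["jack", "read", "books", "mary", "went"])
nGram.calculateNGramProbabilities(LaplaceSmoothing())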

Using NGram

To find the probability of an NGram:

getProbability(self, *args) -> float

For example, to find the bigram probability:

a.getProbability("jack", "reads")

To find the trigram probability:

a.getProbability("jack", "reads", "books")

Saving NGram

To save the NGram model:

saveAsText(self, fileName: str)

For example, to save model "a" to the file "model.txt":

a.saveAsText("model.txt")

Loading NGram

To load an existing NGram model:

NGram(fileName: str)

For example,

a = NGram("model.txt")

this loads the NGram model stored in the file "model.txt".
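A typical round trip, sketched under the same import assumptions as above, trains a model, saves it, and reloads it for querying:

# Train and smooth a model, persist it, then load it back and query a bigram probability.
nGram.calculateNGramProbabilities(LaplaceSmoothing())
nGram.saveAsText("model.txt")
loaded = NGram("model.txt")
print(loaded.getProbability("jack", "read"))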

For Contributors

Setup.py file

  1. Do not forget to set the package list. All subfolders should be added to the package list.
    packages=['Classification', 'Classification.Model', 'Classification.Model.DecisionTree',
              'Classification.Model.Ensemble', 'Classification.Model.NeuralNetwork',
              'Classification.Model.NonParametric', 'Classification.Model.Parametric',
              'Classification.Filter', 'Classification.DataSet', 'Classification.Instance', 'Classification.Attribute',
              'Classification.Parameter', 'Classification.Experiment',
              'Classification.Performance', 'Classification.InstanceList', 'Classification.DistanceMetric',
              'Classification.StatisticalTest', 'Classification.FeatureSelection'],
  2. The package name should be lowercase and may only include the _ character.
    name='nlptoolkit_math',

Python files

  1. Do not forget to comment each function.
    def __broadcast_shape(self, shape1: Tuple[int, ...], shape2: Tuple[int, ...]) -> Tuple[int, ...]:
        """
        Determines the broadcasted shape of two tensors.

        :param shape1: Tuple representing the first tensor shape.
        :param shape2: Tuple representing the second tensor shape.
        :return: Tuple representing the broadcasted shape.
        """
  2. Function names should follow camel case.
    def addItem(self, item: str):
  3. Local variables should follow snake case.
    det = 1.0
    copy_of_matrix = copy.deepcopy(self)
  4. Class variables should be declared in each class.
class Eigenvector(Vector):
    eigenvalue: float
  5. Variable types should be defined for function parameters and class variables.
    def getIndex(self, item: str) -> int:
  6. For abstract methods, use the ABC package and declare them with @abstractmethod.
    @abstractmethod
    def train(self, train_set: list[Tensor]):
        pass
  7. For private methods, use __ as a prefix in their names.
    def __infer_shape(self, data: Union[List, List[List], List[List[List]]]) -> Tuple[int, ...]:
  8. For private class variables, use __ as a prefix in their names.
class Matrix(object):
    __row: int
    __col: int
    __values: list[list[float]]
  9. Write __repr__ class methods as toString methods.
  10. Write getter and setter class methods.
    def getOptimizer(self) -> Optimizer:
        return self.optimizer
    def setValue(self, value: Optional[Tensor]) -> None:
        self._value = value
  11. If there are multiple constructors for a class, define them as constructor1, constructor2, ..., and call these methods from the original __init__ constructor.
    def constructor1(self):
        self.__values = []
        self.__size = 0

    def constructor2(self, values: list):
        self.__values = values.copy()
        self.__size = len(values)

    def __init__(self,
                 valuesOrSize=None,
                 initial=None):
        if valuesOrSize is None:
            self.constructor1()
        elif isinstance(valuesOrSize, list):
            self.constructor2(valuesOrSize)
  12. Extend test classes from unittest.TestCase and use separate unit test methods.
class TensorTest(unittest.TestCase):

    def test_inferred_shape(self):
        a = Tensor([[1.0, 2.0], [3.0, 4.0]])
        self.assertEqual((2, 2), a.getShape())

    def test_shape(self):
        a = Tensor([1.0, 2.0, 3.0])
        self.assertEqual((3, ), a.getShape())
  13. Enumerated types should be defined as enum classes when necessary.
class AttributeType(Enum):
    """
    Continuous Attribute
    """
    CONTINUOUS = auto()
    """
    Discrete Attribute
    """
    DISCRETE = auto()
