This resource is a dictionary of Modern Turkish, comprising the definitions of over 50,000 individual entries. Each entry is matched with its corresponding synset (set of synonymous words and expressions) in the Turkish WordNet, KeNet.
The bare-forms in the lexicon consist of nouns, adjectives, verbs, adverbs, abbreviations, etc. Each bare-form appears in the lexicon as is, except for verbs. Since the bare-forms of verbs in Turkish do not carry the infinitive affix "-mAk", our lexicon includes all verbs without the infinitive affix. Bare-forms with diacritics are included in two forms, with and without diacritics. For example, the noun "rüzgar" appears both as "rüzgar" and "rüzgâr".
Special markers, such as doc and s, are also included as bare-forms.
Some compound words are included in their affixed form. For instance, "acemlalesi" appears as is, but "acemlale" does not.
Foreign words, especially foreign proper nouns, are included so that the system can easily recognize them as proper nouns. For instance, "abbott" and "abbigail" are example foreign proper nouns. Including foreign proper nouns, there are 19,000 proper nouns in our lexicon.
Among derived words, we include only those that have taken the -lI, -sIz, -CI, -lIk, and -CIlIk derivational affixes. For example, the bare-forms "abacı", "abdallık", "abdestli", and "abdestlilik" are included, since they have taken one or more of the derivational affixes listed above.
Each bare-form has a set of attributes. For instance, "abacı" is a noun; therefore, it carries the CL_ISIM attribute. Similarly, "abdestli" is an adjective, which carries the IS_ADJ attribute. If a bare-form has homonyms with different part-of-speech tags, all corresponding attributes are included.
| Name | Purpose |
|---|---|
| CL_ISIM, CL_FIIL, IS_OA | Part of speech tag(s) |
| IS_DUP | Part of a duplicated form |
| IS_KIS | Abbreviation, which does not obey vowel harmony while taking suffixes. |
| IS_UU, IS_UUU | Does not obey vowel harmony while taking suffixes. |
| IS_BILES | A portmanteau word in affixed form, such as "adamotu" |
| IS_B_SI | A portmanteau word ending with "sı", such as "acemlalesi" |
| IS_CA | Already in a plural form, therefore cannot take plural suffixes such as "ler" or "lar". |
| IS_ST | The second consonant undergoes resyllabification. |
| IS_UD, IS_UDD, F_UD | Includes vowel epenthesis. |
| IS_KG | Ends with a "k"; when followed by a vowel-initial suffix, the final "k" is replaced with a "g". |
| IS_SD, IS_SDD, F_SD | Final consonant gets devoiced during vowel-initial suffixation. |
| F_GUD, F_GUDO | The verb bare-form includes vowel reduction. |
| F1P1, F1P1-NO-REF | A verb; depending on this attribute, the verb can (or cannot) take the causative, factitive, passive, etc. suffixes. |
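The vowel-harmony attributes above (IS_UU, IS_UUU, IS_KIS) flag words that break Turkish vowel harmony during suffixation. As a rough illustration of the regular rule these attributes override, here is a minimal, self-contained sketch; the helper `pluralSuffix` is hypothetical and not part of this library:

```typescript
// Sketch of two-way (e-type) vowel harmony for the plural suffix.
// Regular words take "-lar" after back vowels (a, ı, o, u) and "-ler"
// after front vowels (e, i, ö, ü); IS_UU / IS_KIS words break this rule.
const BACK_VOWELS = new Set(["a", "ı", "o", "u"]);
const FRONT_VOWELS = new Set(["e", "i", "ö", "ü"]);

function pluralSuffix(word: string): string {
    // The last vowel of the stem decides the suffix form.
    for (let i = word.length - 1; i >= 0; i--) {
        const ch = word[i].toLowerCase();
        if (BACK_VOWELS.has(ch)) {
            return "lar";
        }
        if (FRONT_VOWELS.has(ch)) {
            return "ler";
        }
    }
    return "ler"; // fallback for vowel-less input
}

console.log(pluralSuffix("kitap")); // "lar" (kitaplar)
console.log(pluralSuffix("ev"));    // "ler" (evler)
```

A word such as "saat", which takes "saatler" despite its back vowel, is exactly the kind of exception that IS_UU marks.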
There are also Python, Cython, C++, C, Swift, Java, and C# versions of this repository.
To check if you have a compatible version of Node.js installed, use the following command:
node -v
You can download the latest version of Node.js from nodejs.org.
Install the latest version of Git.
npm install nlptoolkit-dictionary
To work on the code, create a fork from the GitHub page, then use Git to clone your fork to your local machine:
git clone <your-fork-git-link>
A directory called dictionary-js will be created. Alternatively, you can use the link below to explore the code:
git clone https://github.com/starlangsoftware/dictionary-js.git
Steps for opening the cloned project:
- Start IDE
- Select File | Open from main menu
- Choose the Dictionary-Js file
- Select the open as project option
- After a couple of seconds, the dependencies will be downloaded.
The Dictionary class is used to load the Turkish dictionary or a domain-specific dictionary. In addition, misspelled words and the correct forms of those misspelled words can also be loaded.
To load the Turkish dictionary and the misspelled words dictionary,
let dictionary = new TxtDictionary()
To load a domain-specific dictionary and its misspelled words dictionary, the following constructor is used:
constructor(comparator: WordComparator = WordComparator.TURKISH,
fileName: string = "turkish_dictionary.txt",
misspelledFileName: string = "turkish_misspellings.txt")
To check whether the dictionary contains a specific word, the getWord method is used:
getWord(nameOrIndex: any): Word
Word features: To see whether a TxtWord instance from the dictionary is a noun or not,
isNominal(): boolean
To see whether it is an adjective,
isAdjective(): boolean
To see whether it is a portmanteau word,
isPortmanteau(): boolean
To see whether it disobeys vowel harmony during agglutination,
notObeysVowelHarmonyDuringAgglutination(): boolean
And to see whether its root softens when it takes suffixes, the following is used:
rootSoftenDuringSuffixation(): boolean
To syllabify a word, the SyllableList class is used:
constructor(word: string)
@inproceedings{yildiz-etal-2019-open,
title = "An Open, Extendible, and Fast {T}urkish Morphological Analyzer",
author = {Y{\i}ld{\i}z, Olcay Taner and
Avar, Beg{\"u}m and
Ercan, G{\"o}khan},
booktitle = "Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2019)",
month = sep,
year = "2019",
address = "Varna, Bulgaria",
publisher = "INCOMA Ltd.",
url = "https://www.aclweb.org/anthology/R19-1156",
doi = "10.26615/978-954-452-056-4_156",
pages = "1364--1372",
}
- The main and types fields are important when this package is imported.
"main": "dist/index.js",
"types": "dist/index.d.ts",
- Dependencies should be complete: not only direct but also indirect references should be given; every package used directly in the code must be listed here.
"dependencies": {
"nlptoolkit-corpus": "^1.0.12",
"nlptoolkit-dictionary": "^1.0.14",
"nlptoolkit-morphologicalanalysis": "^1.0.19",
"nlptoolkit-xmlparser": "^1.0.7"
}
- Compiler options currently include nodeNext module resolution for importing.
"compilerOptions": {
"outDir": "dist",
"module": "nodeNext",
"sourceMap": true,
"noImplicitAny": true,
"removeComments": false,
"declaration": true,
},
- tests, node_modules and dist should be excluded.
"exclude": [
"tests",
"node_modules",
"dist"
]
- The index file should export all TypeScript classes.
export * from "./CategoryType"
export * from "./InterlingualDependencyType"
export * from "./InterlingualRelation"
export * from "./Literal"
- Add data files to the project folder. Subprojects should include all data files of the parent projects.
- Classes should be defined as exported.
export class JCN extends ICSimilarity{
- Do not forget to comment each function.
/**
* Computes JCN wordnet similarity metric between two synsets.
* @param synSet1 First synset
* @param synSet2 Second synset
* @return JCN wordnet similarity metric between two synsets
*/
computeSimilarity(synSet1: SynSet, synSet2: SynSet): number {
- Function names should follow camel case.
setSynSetId(synSetId: string){
- Write getter and setter methods.
getRelation(index: number): Relation{
setName(name: string){
- Use the standard JavaScript test style (Mocha-style describe/it blocks).
describe('SimilarityPathTest', function() {
describe('SimilarityPathTest', function() {
it('testComputeSimilarity', function() {
let turkish = new WordNet();
let similarityPath = new SimilarityPath(turkish);
assert.strictEqual(32.0, similarityPath.computeSimilarity(turkish.getSynSetWithId("TUR10-0656390"), turkish.getSynSetWithId("TUR10-0600460")));
assert.strictEqual(13.0, similarityPath.computeSimilarity(turkish.getSynSetWithId("TUR10-0412120"), turkish.getSynSetWithId("TUR10-0755370")));
assert.strictEqual(13.0, similarityPath.computeSimilarity(turkish.getSynSetWithId("TUR10-0195110"), turkish.getSynSetWithId("TUR10-0822980")));
});
});
});
- Enumerated types should be declared with enum.
export enum CategoryType {
MATHEMATICS, SPORT, MUSIC, SLANG, BOTANIC,
PLURAL, MARINE, HISTORY, THEOLOGY, ZOOLOGY,
METAPHOR, PSYCHOLOGY, ASTRONOMY, GEOGRAPHY, GRAMMAR,
MILITARY, PHYSICS, PHILOSOPHY, MEDICAL, THEATER,
ECONOMY, LAW, ANATOMY, GEOMETRY, BUSINESS,
PEDAGOGY, TECHNOLOGY, LOGIC, LITERATURE, CINEMA,
TELEVISION, ARCHITECTURE, TECHNICAL, SOCIOLOGY, BIOLOGY,
CHEMISTRY, GEOLOGY, INFORMATICS, PHYSIOLOGY, METEOROLOGY,
MINERALOGY
}
- If there are multiple constructors for a class, define them as constructor1, constructor2, ..., and call these methods from the actual constructor.
constructor1(symbol: any){
constructor2(symbol: any, multipleFile: MultipleFile) {
constructor(symbol: any, multipleFile: MultipleFile = undefined) {
if (multipleFile == undefined){
this.constructor1(symbol);
} else {
this.constructor2(symbol, multipleFile);
}
}
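As a self-contained illustration of this convention, here is a minimal sketch; the Tag class is hypothetical, for demonstration only:

```typescript
// Hypothetical class showing the constructor1/constructor2 convention:
// the single real constructor dispatches on its optional argument.
class Tag {
    private name: string = "";
    private count: number = 0;

    private constructor1(name: string) {
        this.name = name;
        this.count = 1; // default count
    }

    private constructor2(name: string, count: number) {
        this.name = name;
        this.count = count;
    }

    constructor(name: string, count?: number) {
        if (count == undefined) {
            this.constructor1(name);
        } else {
            this.constructor2(name, count);
        }
    }

    getCount(): number {
        return this.count;
    }
}

console.log(new Tag("noun").getCount());    // 1
console.log(new Tag("verb", 5).getCount()); // 5
```

This keeps the overload bodies separate and readable while TypeScript still sees a single constructor signature.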
- Importing should be done via the import statement, referencing the module inside node_modules.
import {Corpus} from "nlptoolkit-corpus/dist/Corpus";
import {Sentence} from "nlptoolkit-corpus/dist/Sentence";
- Use the xmlparser package for parsing XML files.
var doc = new XmlDocument("test.xml")
doc.parse()
let root = doc.getFirstChild()
let firstChild = root.getFirstChild()

