A treebank is a corpus in which each sentence is syntactically (and, where necessary, morphologically) annotated. The syntactic annotation in treebanks usually follows constituency and/or dependency structure.
Treebanks annotated for the syntactic or semantic structures of sentences are essential for developing state-of-the-art statistical natural language processing (NLP) systems, including part-of-speech taggers, syntactic parsers, and machine translation systems. There are two main groups of syntactic treebanks: treebanks annotated for constituency (phrase structure) and those annotated for dependency structure.
We extend the original format with the relevant information, given between curly braces. For example, the word 'problem' in a sentence in standard Penn Treebank notation may be represented as follows:
(NN problem)
After all levels of processing are finished, the data structure stored for the same word has the following form in the system.
(NN {turkish=sorunu} {english=problem}
{morphologicalAnalysis=sorun+NOUN+A3SG+PNON+ACC}
{metaMorphemes=sorun+yH}
{semantics=TUR10-0703650})
Here, the 'turkish' tag shows the original Turkish word; the 'english' tag shows its English translation; the 'morphologicalAnalysis' tag shows the correct morphological parse of the word; the 'metaMorphemes' tag shows its metamorpheme segmentation; and the 'semantics' tag shows the ID of the correct sense of the word. When present, the 'namedEntity' tag shows the named entity tag of the word, and the 'propbank' tag shows the semantic role of the word for the verb synset ID (the frame ID in the frame file), which is also given in that tag.
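As a rough, illustrative sketch (not the library's actual parser), the curly-brace layers of such a node can be read into a key/value map. The `parseLayers` helper below is hypothetical:

```typescript
// Illustrative only: extract {key=value} annotation layers from an
// extended Penn Treebank node string. This hypothetical parseLayers
// helper is not part of the library.
function parseLayers(node: string): Map<string, string> {
    const layers = new Map<string, string>()
    const pattern = /\{([^=}]+)=([^}]+)\}/g
    let match: RegExpExecArray | null
    while ((match = pattern.exec(node)) != null) {
        layers.set(match[1], match[2])
    }
    return layers
}

const layers = parseLayers("(NN {turkish=sorunu} {english=problem})")
console.log(layers.get("english"))  // → problem
```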
You can also see the Java, Python, Cython, C++, C, Swift, or C# repositories.
To check if you have a compatible version of Node.js installed, use the following command:
node -v
You can find the latest version of Node.js here.
Install the latest version of Git.
npm install nlptoolkit-annotatedtree
In order to work on the code, create a fork from the GitHub page. Use Git to clone the code to your local machine, or run the line below on Ubuntu:
git clone <your-fork-git-link>
A directory called annotatedtree-js will be created. Alternatively, you can use the link below to explore the code:
git clone https://github.com/starlangsoftware/annotatedtree-js.git
Steps for opening the cloned project:
- Start IDE
- Select File | Open from the main menu
- Choose the AnnotatedTree-Js file
- Select the open as project option
- After a couple of seconds, the dependencies will be downloaded.
To load an annotated TreeBank:
TreeBankDrawable(folder: string, pattern: string)
let a = new TreeBankDrawable("/Turkish-Phrase", ".train")
To access all the trees in a TreeBankDrawable:
for (let i = 0; i < a.sentenceCount(); i++){
    let parseTree = <ParseTreeDrawable> a.get(i);
    ...
}
To load a saved ParseTreeDrawable:
ParseTreeDrawable(file: string)
is used. It is usually more convenient to load a TreeBankDrawable as explained above than to load ParseTrees one by one.
To get the number of nodes in a ParseTreeDrawable:
nodeCount(): number
the number of leaves in a ParseTreeDrawable:
leafCount(): number
and the word count in a ParseTreeDrawable:
wordCount(excludeStopWords: boolean): number
the methods above can be used.
Information about an annotated word is kept in the LayerInfo class. To access the morphological analysis of an annotated word:
getMorphologicalParseAt(index: number): MorphologicalParse
the meaning of an annotated word:
getSemanticAt(index: number): string
the shallow parse tag (e.g., subject, indirect object, etc.) of an annotated word:
getShallowParseAt(index: number): string
the argument tag of the annotated word:
getArgumentAt(index: number): Argument
the word count in a node:
getNumberOfWords(): number
@inproceedings{yildiz-etal-2014-constructing,
title = "Constructing a {T}urkish-{E}nglish Parallel {T}ree{B}ank",
author = {Y{\i}ld{\i}z, Olcay Taner and
Solak, Ercan and
G{\"o}rg{\"u}n, Onur and
Ehsani, Razieh},
booktitle = "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)",
month = jun,
year = "2014",
address = "Baltimore, Maryland",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/P14-2019",
doi = "10.3115/v1/P14-2019",
pages = "112--117",
}
- The main and types fields are important when this package is imported.
"main": "dist/index.js",
"types": "dist/index.d.ts",
- Dependencies should be complete (not only direct but also indirect references should be given); every package used directly in the code should be listed here.
"dependencies": {
"nlptoolkit-corpus": "^1.0.12",
"nlptoolkit-dictionary": "^1.0.14",
"nlptoolkit-morphologicalanalysis": "^1.0.19",
"nlptoolkit-xmlparser": "^1.0.7"
}
- Compiler flags currently include nodeNext for importing.
"compilerOptions": {
"outDir": "dist",
"module": "nodeNext",
"sourceMap": true,
"noImplicitAny": true,
"removeComments": false,
"declaration": true
},
- tests, node_modules and dist should be excluded.
"exclude": [
"tests",
"node_modules",
"dist"
]
- The index file should export all ts classes.
export * from "./CategoryType"
export * from "./InterlingualDependencyType"
export * from "./InterlingualRelation"
export * from "./Literal"
- Add data files to the project folder. Subprojects should include all data files of the parent projects.
- Classes should be defined as exported.
export class JCN extends ICSimilarity{
- Do not forget to comment each function.
/**
* Computes JCN wordnet similarity metric between two synsets.
* @param synSet1 First synset
* @param synSet2 Second synset
* @return JCN wordnet similarity metric between two synsets
*/
computeSimilarity(synSet1: SynSet, synSet2: SynSet): number {
- Function names should follow camel case.
setSynSetId(synSetId: string){
- Write getter and setter methods.
getRelation(index: number): Relation{
setName(name: string){
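For instance, a minimal class following this convention could look like the sketch below (this Relation class is an illustration, not the library's actual Relation):

```typescript
// Illustrative sketch of the getter/setter convention; this Relation
// class is hypothetical, not the library's actual class.
class Relation {
    private name: string

    constructor(name: string) {
        this.name = name
    }

    getName(): string {
        return this.name
    }

    setName(name: string) {
        this.name = name
    }
}

const relation = new Relation("HYPERNYM")
relation.setName("HYPONYM")
console.log(relation.getName())  // → HYPONYM
```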
- Use the standard JavaScript test style.
describe('SimilarityPathTest', function() {
    it('testComputeSimilarity', function() {
        let turkish = new WordNet();
        let similarityPath = new SimilarityPath(turkish);
        assert.strictEqual(32.0, similarityPath.computeSimilarity(turkish.getSynSetWithId("TUR10-0656390"), turkish.getSynSetWithId("TUR10-0600460")));
        assert.strictEqual(13.0, similarityPath.computeSimilarity(turkish.getSynSetWithId("TUR10-0412120"), turkish.getSynSetWithId("TUR10-0755370")));
        assert.strictEqual(13.0, similarityPath.computeSimilarity(turkish.getSynSetWithId("TUR10-0195110"), turkish.getSynSetWithId("TUR10-0822980")));
    });
});
- Enumerated types should be declared with enum.
export enum CategoryType {
MATHEMATICS, SPORT, MUSIC, SLANG, BOTANIC,
PLURAL, MARINE, HISTORY, THEOLOGY, ZOOLOGY,
METAPHOR, PSYCHOLOGY, ASTRONOMY, GEOGRAPHY, GRAMMAR,
MILITARY, PHYSICS, PHILOSOPHY, MEDICAL, THEATER,
ECONOMY, LAW, ANATOMY, GEOMETRY, BUSINESS,
PEDAGOGY, TECHNOLOGY, LOGIC, LITERATURE, CINEMA,
TELEVISION, ARCHITECTURE, TECHNICAL, SOCIOLOGY, BIOLOGY,
CHEMISTRY, GEOLOGY, INFORMATICS, PHYSIOLOGY, METEOROLOGY,
MINERALOGY
}
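Since these are TypeScript numeric enums, members are numbered from 0 in declaration order, and a reverse mapping from value to name is generated automatically. An abbreviated sketch:

```typescript
// Abbreviated version of the enum above; members are numbered from 0.
enum CategoryType {
    MATHEMATICS, SPORT, MUSIC, SLANG, BOTANIC
}

console.log(CategoryType.MUSIC)                // → 2
// Numeric enums also provide a reverse mapping from value to name.
console.log(CategoryType[CategoryType.MUSIC])  // → MUSIC
```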
- If there are multiple constructors for a class, define them as constructor1, constructor2, ..., then call these methods from the original constructor.
constructor1(symbol: any){
constructor2(symbol: any, multipleFile: MultipleFile) {
constructor(symbol: any, multipleFile: MultipleFile = undefined) {
    if (multipleFile == undefined){
        this.constructor1(symbol);
    } else {
        this.constructor2(symbol, multipleFile);
    }
}
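A complete, self-contained sketch of this dispatch pattern (the ParseSymbol class and its fields are hypothetical, used only to illustrate the convention):

```typescript
// Hypothetical class illustrating the constructor1/constructor2 pattern.
class ParseSymbol {
    private symbol: string = ""
    private fromFile: boolean = false

    // Invoked when only the symbol is given.
    private constructor1(symbol: string) {
        this.symbol = symbol
        this.fromFile = false
    }

    // Invoked when a file name is also given.
    private constructor2(symbol: string, fileName: string) {
        this.symbol = symbol
        this.fromFile = true
    }

    // The real constructor dispatches on the optional argument.
    constructor(symbol: string, fileName?: string) {
        if (fileName == undefined) {
            this.constructor1(symbol)
        } else {
            this.constructor2(symbol, fileName)
        }
    }

    getSymbol(): string {
        return this.symbol
    }

    isFromFile(): boolean {
        return this.fromFile
    }
}

console.log(new ParseSymbol("NP").isFromFile())               // → false
console.log(new ParseSymbol("NP", "tree1.txt").isFromFile())  // → true
```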
- Importing should be done via the import statement, referencing node_modules.
import {Corpus} from "nlptoolkit-corpus/dist/Corpus";
import {Sentence} from "nlptoolkit-corpus/dist/Sentence";
- Use the xmlparser package for parsing XML files.
var doc = new XmlDocument("test.xml")
doc.parse()
let root = doc.getFirstChild()
let firstChild = root.getFirstChild()

