Small Language Model Inference, Fine-Tuning and Observability. No GPU, no labeled data needed.
Artifex is a Python library for:
- Using pre-trained, task-specific Small Language Models on CPU
- Fine-tuning them on CPU without any training data, based only on your instructions for the task at hand
- Tracking model performance locally with built-in evaluation and monitoring tools

How is it possible?
Artifex generates synthetic training data on the fly from your instructions, and uses this data to fine-tune Small Language Models for your specific task. This approach lets you create effective models without large labeled datasets.
At this time, we support 10 models, all of which can be used out-of-the-box on CPU and can be fine-tuned on CPU.
If you don't want to self-host, you can also use all of our models through the Tanaos API. With 100ms end-to-end latency and a unified entrypoint to all of our models, it's the easiest way to integrate Small Language Models into your applications.
| Task | Description | Default Model | Use via API | Use via Artifex |
|---|---|---|---|---|
| Text Classification | Classifies text into user-defined categories. | No default model (must be trained) | - | Examples |
| Guardrail | Flags unsafe, harmful, or off-topic messages. | tanaos/tanaos-guardrail-v2 | Examples | Examples |
| Intent Classification | Classifies user messages into predefined intent categories. | tanaos/tanaos-intent-classifier-v1 | Examples | Examples |
| Reranker | Ranks a list of items or search results based on relevance to a query. | cross-encoder/mmarco-mMiniLMv2-L12-H384-v1 | - | Examples |
| Sentiment Analysis | Determines the sentiment (positive, negative, neutral) of a given text. | tanaos/tanaos-sentiment-analysis-v1 | Examples | Examples |
| Emotion Detection | Identifies the emotion expressed in a given text. | tanaos/tanaos-emotion-detection-v1 | Examples | Examples |
| Named Entity Recognition | Detects and classifies named entities in text (e.g., persons, organizations, locations). | tanaos/tanaos-NER-v1 (0.1B params, 500MB) | Examples | Examples |
| Text Anonymization | Removes personally identifiable information (PII) from text. | tanaos/tanaos-text-anonymizer-v1 (0.1B params, 500MB) | Examples | Examples |
| Spam Detection | Identifies whether a message is spam or not. | tanaos/tanaos-spam-detection-v1 (0.1B params, 500MB) | Examples | Examples |
| Topic Classification | Classifies text into predefined topics. | tanaos/tanaos-topic-classification-v1 (0.1B params, 500MB) | Examples | Examples |
For each model, Artifex provides:
- Inference API to use a default, pre-trained Small Language Model to perform that task out-of-the-box locally on CPU.
- Fine-tune API to fine-tune the default model based on your requirements, without any training data and on CPU. The fine-tuned model is generated on your machine and is yours to keep.
- Load API to load your fine-tuned model locally on CPU, and use it for inference or further fine-tuning.
- Built-in, automatic evaluation and monitoring tools to track model performance over time, locally on your machine.
We will be adding more tasks soon, based on user feedback. Want Artifex to perform a specific task? Suggest one or vote one up.
With Artifex you can, for example:
- Cut your chatbot costs and latency by 40% by using a small, self-hosted Guardrail model.
- Analyze your users' sentiment without sending their data to third-party servers.
- Anonymize user data locally and stay GDPR-compliant.
Install Artifex with:
pip install artifex

Train your own text classification model, use it locally on CPU and keep it forever:
from artifex import Artifex
model_output_path = "./output_model/"
text_classification = Artifex().text_classification
text_classification.train(
domain="chatbot conversations",
classes={
"politics": "Messages related to political topics and discussions.",
"sports": "Messages related to sports events and activities.",
"technology": "Messages about technology, gadgets, and software.",
"entertainment": "Messages about movies, music, and other entertainment forms.",
"health": "Messages related to health, wellness, and medical topics.",
},
output_path=model_output_path
)
text_classification.load(model_output_path)
print(text_classification("What do you think about the latest AI advancements?"))
# >>> [{'label': 'technology', 'score': 0.9913}]

Use Artifex's default guardrail model, which is trained to flag unsafe or harmful messages out-of-the-box:
from artifex import Artifex
guardrail = Artifex().guardrail
print(guardrail("How do I make a bomb?"))
# >>> [{'is_safe': False, 'scores': {'violence': 0.625, 'non_violent_unethical': 0.0066, 'hate_speech': 0.0082, 'financial_crime': 0.0072, 'discrimination': 0.0029, 'drug_weapons': 0.6633, 'self_harm': 0.0109, 'privacy': 0.003, 'sexual_content': 0.0029, 'child_abuse': 0.005, 'terrorism_organized_crime': 0.1278, 'hacking': 0.0096, 'animal_abuse': 0.009, 'jailbreak_prompt_inj': 0.0131}}]

Learn more about the default guardrail model and what it considers safe vs unsafe on our Guardrail HF model page.
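The guardrail returns an `is_safe` flag plus per-category scores, as shown above. A minimal sketch of gating a message on that structure (the `should_block` helper and the 0.5 threshold are my own, not part of the Artifex API):

```python
# Hypothetical cut-off for per-category scores; tune for your use case.
SAFETY_THRESHOLD = 0.5

def should_block(result: dict, threshold: float = SAFETY_THRESHOLD) -> bool:
    """Block a message if the model flags it unsafe, or if any single
    category score exceeds the threshold."""
    if not result["is_safe"]:
        return True
    return any(score > threshold for score in result["scores"].values())

# Example response, truncated from the guardrail output shown above:
response = {
    "is_safe": False,
    "scores": {"violence": 0.625, "drug_weapons": 0.6633, "privacy": 0.003},
}
print(should_block(response))  # True: is_safe is False
```

Checking both the flag and the raw scores lets you apply a stricter policy than the model's own default decision.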
Need more control over what is considered safe vs unsafe? Fine-tune your own guardrail model, use it locally on CPU and keep it forever:
from artifex import Artifex
guardrail = Artifex().guardrail
model_output_path = "./output_model/"
guardrail.train(
unsafe_categories = {
"violence": "Content describing or encouraging violent acts",
"bullying": "Content involving harassment or intimidation of others",
"misdemeanor": "Content involving minor criminal offenses",
"vandalism": "Content involving deliberate destruction or damage to property"
},
output_path=model_output_path
)
guardrail.load(model_output_path)
print(guardrail("I want to destroy public property."))
# >>> [{'is_safe': False, 'scores': {'violence': 0.592, 'bullying': 0.0066, 'misdemeanor': 0.672, 'vandalism': 0.772}}]

Use Artifex's default reranker model, which is trained to rank items based on relevance out-of-the-box:
from artifex import Artifex
reranker = Artifex().reranker
print(reranker(
query="Best programming language for data science",
documents=[
"Java is a versatile language typically used for building large-scale applications.",
"Python is widely used for data science due to its simplicity and extensive libraries.",
"JavaScript is primarily used for web development.",
]
))
# >>> [('Python is widely used for data science due to its simplicity and extensive libraries.', 3.8346), ('Java is a versatile language typically used for building large-scale applications.', -0.8301), ('JavaScript is primarily used for web development.', -1.3784)]

Want to fine-tune the Reranker model on a specific domain for better accuracy? Fine-tune your own reranker model, use it locally on CPU and keep it forever:
from artifex import Artifex
reranker = Artifex().reranker
model_output_path = "./output_model/"
reranker.train(
domain="e-commerce product search",
output_path=model_output_path
)
reranker.load(model_output_path)
print(reranker(
query="Laptop with long battery life",
documents=[
"A powerful gaming laptop with high-end graphics and performance.",
"An affordable laptop suitable for basic tasks and web browsing.",
"This laptop features a battery life of up to 12 hours, perfect for all-day use.",
]
))
# >>> [('This laptop features a battery life of up to 12 hours, perfect for all-day use.', 4.7381), ('A powerful gaming laptop with high-end graphics and performance.', -1.8824), ('An affordable laptop suitable for basic tasks and web browsing.', -2.7585)]

For more details and examples on how to use Artifex for the other available tasks, check out our Documentation.
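As the reranker examples show, the output is a list of (document, score) tuples already sorted by relevance, so downstream selection is trivial. A minimal sketch (the `top_k` helper is my own, not part of the Artifex API):

```python
def top_k(ranked: list[tuple[str, float]], k: int = 1) -> list[str]:
    """The reranker returns (document, score) pairs sorted by score,
    so the top-k documents are simply the first k entries."""
    return [doc for doc, _score in ranked[:k]]

# Output copied from the fine-tuned reranker example above:
ranked = [
    ("This laptop features a battery life of up to 12 hours, perfect for all-day use.", 4.7381),
    ("A powerful gaming laptop with high-end graphics and performance.", -1.8824),
    ("An affordable laptop suitable for basic tasks and web browsing.", -2.7585),
]
print(top_k(ranked, k=1))
```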
Artifex includes built-in tools to automatically monitor and evaluate the inference and training performance of your models over time. This logging is performed entirely on your machine. Monitoring and logging are crucial to ensure your models are performing as expected and to identify any potential issues early on. All logs are written automatically after every inference and training session in the artifex_logs/ folder in your current working directory.
Logs include operation-level metrics (e.g., inference duration, CPU & RAM usage, training loss, etc.), daily aggregated metrics and any errors encountered during inference or training. Additionally, warnings for potential issues (e.g., high inference duration, low confidence scores, high training loss, etc.) are logged in a separate warnings log file for easier identification and troubleshooting.
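To illustrate the daily aggregates described above, here is a sketch of computing them from per-inference records. The JSON-lines format and field names below are hypothetical; the actual schema of the files in artifex_logs/ may differ.

```python
import json
from statistics import mean

# Hypothetical log lines; the real artifex_logs/ format may differ.
raw_logs = """\
{"timestamp": "2024-05-01T09:00:00", "model": "tanaos/tanaos-guardrail-v2", "inference_duration_s": 0.12, "input_token_count": 18}
{"timestamp": "2024-05-01T09:05:00", "model": "tanaos/tanaos-guardrail-v2", "inference_duration_s": 0.08, "input_token_count": 12}
"""

records = [json.loads(line) for line in raw_logs.splitlines()]

# Daily aggregates of the kind Artifex reports: totals and averages.
daily = {
    "total_inferences": len(records),
    "total_input_tokens": sum(r["input_token_count"] for r in records),
    "avg_inference_duration_s": mean(r["inference_duration_s"] for r in records),
}
print(daily)
```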
Below is a list of all the metrics and warnings logged by Artifex:

Per-inference metrics:
- timestamp
- model used
- inference duration
- CPU & RAM usage
- input token count
- inference input & output
- inference errors (if any)

Daily inference aggregates:
- Daily total inferences count
- Daily total & average input token count
- Daily total & average inference duration
- Daily average RAM & CPU usage
- Daily average confidence score
- Daily model usage breakdown

Per-training metrics:
- timestamp
- model trained
- training duration
- CPU & RAM usage
- training instructions & parameters
- training results (loss, samples/second, steps/second)
- training errors (if any)

Daily training aggregates:
- Daily total trainings count
- Daily average training duration
- Daily average CPU & RAM usage

Warnings:
- Warning for low confidence scores during inference
- Warning for slow inference (> 5 seconds)
- Warning for high inference input token count (> 2048 tokens)
- Warning for short inference input text (< 10 characters)
- Warning for null inference output
- Warning for high training loss (> 1.0)
- Warning for slow training (> 5 minutes)
- Warning for low training throughput (< 1 sample/second)
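The inference warning thresholds listed above can be expressed as a plain function, shown here as a sketch (the function and field names are my own, not Artifex's internal implementation):

```python
def inference_warnings(duration_s: float, input_tokens: int,
                       input_chars: int, output) -> list[str]:
    """Return the warnings the documented thresholds would trigger."""
    warnings = []
    if duration_s > 5:
        warnings.append("slow inference (> 5 seconds)")
    if input_tokens > 2048:
        warnings.append("high input token count (> 2048 tokens)")
    if input_chars < 10:
        warnings.append("short input text (< 10 characters)")
    if output is None:
        warnings.append("null inference output")
    return warnings

# An input that trips all four checks:
print(inference_warnings(duration_s=6.2, input_tokens=3000,
                         input_chars=5, output=None))
```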
Want to view, search, and correlate logs from all your models in one place, to easily debug production issues and understand system behavior over time, without digging through individual log files? Simply do the following:

1. Create a free account on the Tanaos platform
2. Create an API key from your profile page
3. Instantiate Artifex with your API key:

from artifex import Artifex

guardrail = Artifex(
    api_key="<YOUR_API_KEY_HERE>"
).guardrail

4. Use Artifex as usual. All logs will be sent to your account on our platform. You can view them on the traces page.
You can opt-out of logging by passing the disable_logging=True flag when training or performing inference with any model:
from artifex import Artifex
guardrail = Artifex().guardrail
print(guardrail("How do I make a bomb?", disable_logging=True))

Contributions are welcome! Whether it's a new task module, improvement, or bug fix, we'd love your help. To get started, clone and set up the repository locally with:
git clone https://github.com/tanaos/artifex.git
cd artifex
pip install -r requirements.txt
Once you have the code set up, you can start working on any open issue or create a new one. To contribute code, please follow the standard fork --> push --> pull request workflow. All pull requests should be made against the development branch. The maintainers will merge development into master once development is stable.
Before making a contribution, please review the CONTRIBUTING.md and CLA.md, which include important guidelines for contributing to the project.
Not ready to contribute code? You can also help by suggesting a new task or voting up any suggestion.
- Why have Guardrail, Intent Classification, Emotion Detection, Sentiment Analysis etc. as separate tasks, when there is already a Text Classification task?
The Text Classification task is a general-purpose task that lets users create custom classification models for their specific needs. Guardrail, Intent Classification, Emotion Detection, Sentiment Analysis etc. are specialized tasks with pre-defined categories and behaviors that are common in real-world applications. They are provided as separate tasks for two reasons: convenience (you can use these models immediately, without defining your own categories) and performance (a specialized model typically outperforms one recreated through the general Text Classification task).
- Full documentation: https://docs.tanaos.com/artifex
- Get in touch: info@tanaos.com
