Open framework for confidential AI
Reading list on adversarial perspectives and robustness in deep reinforcement learning.
Build secure MCP infrastructure to audit and control every data access by AI agents with minimal effort.
Let AI agents like ChatGPT and Claude use real-world local and remote tools you approve, via a browser extension plus an optional MCP server.
A living map of the AI agent security ecosystem.
OS-level sandbox for AI coding agents - kernel-enforced file, command, and network isolation
Forge (OpenClaw for Enterprise): a secure, portable AI agent runtime. Run agents locally, in the cloud, or in enterprise environments without exposing inbound tunnels.
This project integrates Hyperledger Fabric with machine learning to enhance transparency and trust in data-driven workflows. It outlines a blockchain-based strategy for data traceability, model auditability, and secure ML deployment across consortium networks.
Secure Computing in the AI age
IntentusNet - Deterministic execution infrastructure for agent and distributed systems, enabling reproducible workflows, reliable intent routing, transport abstraction, and transparent operational control.
Secure local-first desktop layer for OpenClaw featuring voice, canvas, and hardened security guardrails.
Project Agora: MVP of the Concordia framework. An ethical, symbiotic AI designed to foster and protect human flourishing.
Secure Python Chatbot with PANW AIRS protection and Claude API
Secure Python Chatbot with PANW AIRS protection and OpenAI API
Real-time code analysis that detects cross-file semantic errors, type inconsistencies, array-bound violations, and function-signature drift as you type, before files are saved and without external security APIs.
💻🔒 A local-first full-stack app to analyze medical PDFs with an AI model (Apollo2-2B), ensuring privacy & patient-friendly insights — no external APIs or cloud involved.
Offline-first cognitive operating system for synthetic intelligence. Features belief ecology, RL-based goal evolution with differential privacy, contradiction tracing, HMAC-signed audit logs, sandboxed execution, and local LLM inference. Designed for air-gapped, adversarial environments.
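The HMAC-signed audit logs mentioned above can be illustrated with a minimal sketch of a tamper-evident, hash-chained log. This is a generic example of the technique, not this project's actual implementation; the key handling, entry schema, and function names are assumptions.

```python
import hmac
import hashlib
import json

SECRET_KEY = b"demo-key"  # hypothetical; a real deployment loads this from secure storage


def sign_entry(entry: dict, prev_sig: str) -> dict:
    """Create an append-only log record chained to the previous signature."""
    payload = json.dumps({"entry": entry, "prev": prev_sig}, sort_keys=True).encode()
    sig = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {"entry": entry, "prev": prev_sig, "sig": sig}


def verify_chain(log: list) -> bool:
    """Recompute every signature; any edited, dropped, or reordered record fails."""
    prev = ""
    for rec in log:
        payload = json.dumps({"entry": rec["entry"], "prev": rec["prev"]}, sort_keys=True).encode()
        expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
        if rec["prev"] != prev or not hmac.compare_digest(rec["sig"], expected):
            return False
        prev = rec["sig"]
    return True
```

Chaining each record to the previous signature means an attacker who alters one entry must re-sign every later entry, which requires the secret key.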
Behavior-driven cognitive experimentation toolkit with BCE (Behavioral Consciousness Engine) regularization, telemetry, and plug-and-play integrators for language-model training and evaluation.
airlock is a cryptographic handshake protocol for verifying AI model identity at runtime. It enables real-time attestation of model provenance, environment integrity, and agent authenticity - without relying on vendor trust or static manifests.
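A runtime attestation handshake of the kind airlock describes can be sketched as a challenge-response that binds a fresh nonce to a digest of the model artifact. This is a hedged illustration of the general pattern, not airlock's protocol; the shared key, fingerprint scheme, and function names are assumptions.

```python
import hashlib
import hmac
import os

ATTESTATION_KEY = b"shared-attestation-key"  # assumption: provisioned out of band


def model_fingerprint(weights: bytes) -> str:
    """Digest of the model artifact serving as its identity."""
    return hashlib.sha256(weights).hexdigest()


def respond(challenge: bytes, weights: bytes) -> str:
    """Prover binds the verifier's fresh challenge to its model digest."""
    msg = challenge + model_fingerprint(weights).encode()
    return hmac.new(ATTESTATION_KEY, msg, hashlib.sha256).hexdigest()


def verify(challenge: bytes, expected_fp: str, response: str) -> bool:
    """Verifier recomputes the MAC for the fingerprint it expects."""
    msg = challenge + expected_fp.encode()
    expected = hmac.new(ATTESTATION_KEY, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, response)
```

The fresh nonce prevents replay of an old attestation, and binding it to the weight digest means a swapped model cannot produce a valid response.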