Features
One governance layer for all your AI. Native adapters for LangChain, CrewAI, AutoGen, OpenAI, and more. Write policies once, enforce everywhere.
Wrap chains, agents, and tools
Govern crew and agent interactions
Multi-agent conversation governance
Direct API integration
Claude API support
Model Context Protocol servers
RAG pipeline governance
Transformers integration
HTTP middleware for any API
Bedrock model governance
Mistral API support
Build your own integration
from tork.adapters import LangChainAdapter
from langchain.chains import LLMChain
adapter = LangChainAdapter("policy.yaml")
# Wrap any chain
chain = LLMChain(llm=llm, prompt=prompt)
governed_chain = adapter.wrap(chain)
# Use normally - governance is automatic
result = governed_chain.run("User input here")

from tork.adapters import CrewAIAdapter
from crewai import Crew, Agent
adapter = CrewAIAdapter("policy.yaml")
# Wrap entire crew
crew = Crew(agents=[agent1, agent2], tasks=[task1])
governed_crew = adapter.wrap(crew)
# Agent communications are governed
result = governed_crew.kickoff()

from tork.adapters import OpenAIAdapter
from openai import OpenAI
adapter = OpenAIAdapter("policy.yaml")
client = OpenAI()
# Wrap the client
governed_client = adapter.wrap(client)
# All completions are governed
response = governed_client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello"}]
)

Same policy.yaml works across all frameworks. No rewriting rules when you switch.
Whether you're running LangChain in production or testing with raw OpenAI calls, the same governance applies.
Move from one framework to another without rebuilding your compliance layer.
Running CrewAI for agents and LangChain for RAG? One Tork instance governs both.
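To make the "write once" idea concrete, a shared policy file might look something like this. This is a hypothetical sketch only: the rule names and schema below are illustrative, not Tork's actual policy format.

```yaml
# Hypothetical policy.yaml -- keys and values are illustrative,
# not the real Tork schema. One file, loaded by every adapter.
policies:
  - name: redact-pii
    description: Redact personal data before it reaches any model
    action: redact
  - name: approved-tools-only
    description: Agents may only invoke tools on the allowlist
    action: deny
```

Because each adapter takes the same file path, the same rules apply whether the call originates from a LangChain chain or a CrewAI crew.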
New framework released? Our SDK lets you add governance in hours, not weeks.
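The adapter pattern underlying `wrap()` is itself simple: a proxy object that applies policy checks around each call before delegating to the wrapped client. Here is a minimal, self-contained sketch of that idea. The class and method names are illustrative only, not the actual Tork SDK API.

```python
# Illustrative sketch of a governance adapter: a proxy that checks
# inputs against a policy before delegating to the wrapped object.
# Names are hypothetical -- not the real Tork SDK API.

class PolicyViolation(Exception):
    """Raised when an input fails a governance check."""

class GovernedProxy:
    def __init__(self, wrapped, blocked_terms):
        self._wrapped = wrapped
        self._blocked = [t.lower() for t in blocked_terms]

    def run(self, user_input):
        # Pre-call check: reject input that violates the policy
        for term in self._blocked:
            if term in user_input.lower():
                raise PolicyViolation(f"blocked term: {term!r}")
        # Delegate to the wrapped object unchanged
        return self._wrapped.run(user_input)

# Any object exposing .run() can be wrapped the same way
class EchoChain:
    def run(self, text):
        return f"echo: {text}"

governed = GovernedProxy(EchoChain(), blocked_terms=["ssn"])
print(governed.run("hello"))  # passes the check, delegates to the chain
```

A real integration would also inspect outputs and tool calls, but the shape is the same: intercept, check, delegate.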