Professional Secrecy and AI: A Practical Guide for Belgian Lawyers

What the OVB and AVOCATS.BE guidelines actually require — and how to comply without abandoning AI entirely.
More than half of Belgian law firms now use AI tools in their practice. Yet only 38% have a formal AI policy in place. This gap between adoption and governance is not just a compliance risk — in Belgium, it is a potential criminal liability.
Belgian lawyers are not bound by soft ethical guidelines on confidentiality. They are bound by Article 458 of the Criminal Code, which makes disclosure of professional secrets punishable by one to three years of imprisonment. When a lawyer enters client information into an AI tool that stores, shares, or trains on that data, the question is not theoretical: it may constitute a criminal breach of beroepsgeheim.
This guide breaks down what Belgian lawyers actually need to know and do. Not the legal theory — the practical steps.
What Belgian Law Requires
Article 458 of the Criminal Code
The obligation is straightforward and severe:
All persons who, by reason of their status or profession, are entrusted with secrets, and who disclose these [outside legally permitted exceptions], shall be punished with imprisonment of one year to three years and a fine.
Three things make this provision exceptionally relevant to AI use:
It is a criminal offence, not merely a disciplinary matter. The penalties are real: imprisonment of 1–3 years and fines (which, after legal surcharges, reach EUR 800–8,000 in practice).
The obligation is of public order. This means the client cannot waive it. Even if your client says "I don't mind if you use ChatGPT with my case file," the secrecy obligation persists — it belongs to the public interest, not just the client.
It survives the end of the relationship. Professional secrecy covers not just the content of communications but also the fact of the lawyer-client relationship itself. It does not expire.
When a lawyer enters case details into an AI system where the provider's employees can read the data, where the data is used for model training, or where it is stored indefinitely on foreign servers — the question of whether this constitutes "disclosure" under Article 458 is legitimate and unresolved.
The Codex Deontologie
The OVB's Codex Deontologie reinforces and extends these obligations:
Articles 127–144 detail the scope of professional secrecy. Everything confided to the lawyer by the client, and everything the lawyer learns in the exercise of the profession, is covered.
Articles 25–33 establish competence obligations. A lawyer who uses a tool they do not understand — including how it handles data — may be in breach.
Articles 86–98 govern the client relationship, including information obligations. Whether you must inform your client about AI use depends on how you interpret these provisions.
The GDPR Layer
On top of professional secrecy, GDPR applies independently:
When you process client data through an AI tool, you are the data controller. The AI vendor is a data processor.
Under Article 28 GDPR, you must have a written Data Processing Agreement (DPA) with the vendor. Consumer AI tools do not offer DPAs. Using them for client data is a standalone GDPR violation, even before considering professional secrecy.
A Data Protection Impact Assessment (DPIA) under Article 35 GDPR is likely required when processing sensitive legal data through new technology at scale.
Data transfers to the US require either the EU-US Data Privacy Framework certification, Standard Contractual Clauses, or EU data residency guarantees.
What the OVB and AVOCATS.BE Actually Say About AI
The OVB Guidelines (2025)
The Orde van Vlaamse Balies published its "Richtlijnen voor advocaten rond het gebruik van artificiële intelligentie" (Guidelines for lawyers on the use of artificial intelligence) in early 2025. The guidelines are formally non-binding recommendations, but they interpret existing, binding deontological rules. Ignoring them is therefore not merely disregarding advice: it risks breaching the underlying binding rules they interpret.
The core positions:
1. AI is neither prohibited nor mandatory. Lawyers may use AI as a supporting tool. The guidelines explicitly recognize AI as potentially useful for legal practice.
2. Confidential data must never enter open AI systems. The OVB draws a clear line between open systems (where data may be stored, used for training, or accessed by the provider) and closed systems (where contractual guarantees prevent this). Entering client data into open systems is incompatible with professional secrecy.
3. Closed systems are permitted — with conditions. Use is acceptable only when the lawyer is "absolutely certain" the AI operates within a closed environment with sufficient safeguards. This means: a written DPA, no training on input data, appropriate encryption, EU data processing (or adequate transfer safeguards), and limited access.
4. Personal data must be pseudonymized where possible. Before entering data into any AI system, lawyers should anonymize or pseudonymize it. Only enter identifiable data when strictly necessary.
5. The lawyer remains fully responsible. AI output must always be reviewed, verified, and validated. The hallucination risk is explicitly acknowledged — all legal references, citations, and reasoning must be independently checked.
6. Lawyers must understand how AI works. The guidelines require lawyers to have basic knowledge of AI and large language models — enough to understand the capabilities, limitations, and risks of the specific tools they use.
7. Firms should create internal AI policies. The OVB recommends every firm document which tools are approved, what data may be entered, what review procedures apply, and how staff are trained.
8. No mandatory client disclosure — for now. The OVB does not require lawyers to inform clients that AI was used. However, GDPR transparency obligations (Articles 13–14) may independently require it when personal data is processed by a third-party AI provider. The CCBE guide (October 2025) recommends informing clients as a matter of good practice.
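The pseudonymization step in point 4 can be made routine rather than ad hoc. A minimal sketch of the idea, assuming a simple alias table kept inside the firm (all names and case references below are invented for illustration):

```python
import re

# Hypothetical alias table. In practice this mapping stays inside the firm
# (e.g. in the case management system) and never accompanies the prompt.
ALIASES = {
    "Jan Peeters": "CLIENT_A",
    "Peeters BV": "COMPANY_A",
    "2024/AR/1234": "CASE_REF_1",
}

def pseudonymize(text: str, aliases: dict) -> str:
    """Replace each known identifier with a neutral placeholder."""
    for real, alias in aliases.items():
        text = re.sub(re.escape(real), alias, text)
    return text

def repersonalize(text: str, aliases: dict) -> str:
    """Restore the real identifiers in AI output, back inside the firm."""
    for real, alias in aliases.items():
        text = text.replace(alias, real)
    return text

prompt = "Summarize the dispute between Jan Peeters and Peeters BV (file 2024/AR/1234)."
safe_prompt = pseudonymize(prompt, ALIASES)
# safe_prompt contains CLIENT_A, COMPANY_A and CASE_REF_1 instead of the real data
```

A regex substitution like this is a floor, not a ceiling: it only catches identifiers you have listed, so free-text facts that identify the client indirectly still require human judgment before anything is sent to an external system.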
AVOCATS.BE and the Francophone Bar
AVOCATS.BE has not published a separate equivalent set of AI guidelines, but the underlying legal framework is identical: the same Article 458 Criminal Code, the same GDPR, and an equivalent Code de déontologie. The CCBE guide on generative AI for lawyers (October 2025), which both Belgian bar orders have endorsed, provides additional guidance — notably recommending that lawyers inform clients before using AI tools and never input confidential or personal data without proper safeguards.
In practice, the OVB guidelines have become the de facto reference for both linguistic communities.
Open vs. Closed AI Systems: What This Actually Means
The OVB's distinction between "open" and "closed" AI systems sounds simple. In practice, it requires understanding what happens to your data in each scenario.
Consumer AI Tools (ChatGPT Free/Plus, Claude Free, Gemini)
When you type a prompt into a consumer AI tool:
Your input is stored on the provider's servers, typically in the US, indefinitely (or until you delete it, after which it may persist for up to 30 days).
Your data may be used for training. By default, consumer ChatGPT conversations are used to train future models. Claude's consumer terms allow training unless you opt out (as of October 2025). An opt-out does not retroactively remove data already used.
Human reviewers may read your data. Providers employ humans for safety monitoring. Your prompt about a client's criminal case could be read by an employee in San Francisco.
No Data Processing Agreement exists. You cannot sign a DPA with OpenAI's consumer product.
For Belgian lawyers, the conclusion is unambiguous: consumer AI tools cannot be used with any data covered by professional secrecy.
This is not a theoretical position. In February 2026, a US federal court ruled in United States v. Heppner that documents created using a consumer-grade AI chatbot were not protected by attorney-client privilege, specifically because the consumer privacy policy disclosed that the platform collects inputs, uses data for training, and may disclose data to third parties. The court explicitly noted that enterprise-tier AI tools — which do not train on inputs and maintain contractual confidentiality — should be viewed differently.
Enterprise AI Tools
Enterprise versions of the same tools operate under fundamentally different data policies:
| | Consumer | Enterprise/API |
|---|---|---|
| Trains on your data | Yes (by default) | No |
| Data retention | Indefinite | Configurable; zero retention available |
| Human review | Possible | Excluded |
| EU data residency | Not available | Available from most vendors |
| DPA available | No | Yes |
| GDPR-compliant for client data | No | Yes (with proper configuration) |
Enterprise does not automatically mean safe. The specific configuration matters. EU data residency, zero data retention, a signed DPA, and contractual no-training commitments are all separate features that must be explicitly enabled or agreed upon.
Specialized Legal AI Tools
Purpose-built legal AI tools are typically designed from the ground up with professional secrecy in mind: EU data hosting, no training on user data, source-linked outputs, and data processing agreements as standard. Evaluating these tools requires the same vendor questions as enterprise general-purpose AI, but the answers are often more straightforward because the tools were built for regulated professions.
10 Questions to Ask Any AI Vendor Before Use
Before approving any AI tool for use with client-related data, require answers to these questions. Accept nothing less than written, contractual commitments.
Data Training and Usage
1. Do you use our inputs or outputs to train, fine-tune, or improve your AI models?
The only acceptable answer is "no," backed by a contractual commitment — not just a FAQ page.
2. Can any of your employees access or read our data?
Understand under what circumstances human review occurs. Safety monitoring, abuse detection, and quality assurance may all involve human eyes on your data.
Data Storage and Residency
3. Where is our data processed and stored — specifically?
"The cloud" is not an answer. You need the specific data center region. For Belgian lawyers, EU data residency should be the baseline requirement.
4. How long is our data retained, and can retention be set to zero?
Zero data retention (where prompts and responses are deleted immediately after processing) is the gold standard. If the vendor retains data for 30 days "for safety monitoring," understand what that means in practice.
5. What happens to our data when the contract ends?
Require contractual commitments on deletion timelines, including backup copies.
Legal Compliance
6. Is a GDPR-compliant Data Processing Agreement available?
If the answer is no, the conversation is over. No DPA means no lawful processing of personal data.
7. What international data transfer mechanisms do you use?
EU-US Data Privacy Framework, Standard Contractual Clauses, or full EU data residency. Know which one applies and whether the provider is actually certified under the framework it claims.
8. What compliance certifications do you hold?
SOC 2 Type II is the minimum baseline. ISO 27001 and ISO 42001 (AI governance) add additional assurance.
AI-Specific Risks
9. How do you handle AI hallucinations and source accuracy?
Does the tool provide citations to original sources? Can outputs be verified against the underlying data? Or does the tool generate text without traceable provenance?
10. What sub-processors are involved in delivering the service?
The AI vendor may use sub-processors (cloud infrastructure providers, LLM providers) that add additional data handling layers. You need to know the full chain.
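The ten questions above lend themselves to a simple written screen with hard pass/fail criteria. A minimal sketch, where the requirement labels are illustrative and not an official checklist format:

```python
# Hard requirements: a "no" on any of these disqualifies the vendor outright.
# Labels are illustrative shorthand for the questions above.
HARD_REQUIREMENTS = [
    "no_training_on_inputs",      # Q1: contractual no-training commitment
    "dpa_available",              # Q6: GDPR Article 28 DPA
    "eu_residency_or_transfers",  # Q3/Q7: EU processing or valid transfer mechanism
]

def screen_vendor(answers: dict) -> tuple:
    """Return (approved, failed) from a vendor's written yes/no answers.

    Missing answers count as "no": an undocumented commitment is no commitment.
    """
    failed = [req for req in HARD_REQUIREMENTS if not answers.get(req, False)]
    return (len(failed) == 0, failed)

approved, failed = screen_vendor({
    "no_training_on_inputs": True,
    "dpa_available": False,   # no DPA: the conversation is over
    "eu_residency_or_transfers": True,
})
# approved is False; failed == ["dpa_available"]
```

Treating unanswered questions as failures mirrors the rule in the checklist itself: accept nothing less than written, contractual commitments.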
Template: Internal AI Policy for Your Firm
The OVB recommends every firm create an internal AI policy. Here is a practical framework you can adapt.
[Firm Name] — AI Use Policy
1. Approved Tools
List specific tools approved for use with client data (enterprise/API tier only)
List tools approved for non-confidential work only (e.g., drafting marketing copy, internal research on public legal questions)
All consumer AI tools (ChatGPT Free/Plus, Claude Free, Gemini Free) are prohibited for any work involving client information, case details, or personal data
2. Data Handling Rules
No client names, case numbers, or identifying information may be entered into any AI tool unless the tool is on the approved list AND configured with EU data residency and zero data retention
Before using an approved tool with client data, pseudonymize or anonymize the data where possible
Never upload entire client documents without reviewing whether the content includes data beyond what is necessary for the specific task
3. Quality Control
All AI-generated legal analysis, citations, and case law references must be independently verified against primary sources before use in any work product
AI output must never be submitted to a court, opposing counsel, or client without review by a qualified lawyer
Maintain a log of significant AI-assisted work for quality audit purposes
4. Training Requirements
All lawyers and support staff must complete AI awareness training before using any approved tool
Training must cover: capabilities and limitations of AI, data handling requirements, prompt formulation best practices, and output verification procedures
Refresher training required annually or when new tools are approved
5. Incident Response
Any suspected data breach involving an AI tool must be reported to [designated person] immediately
The firm will notify the Belgian DPA within 72 hours if a breach involves personal data, as required by GDPR Article 33
Affected clients will be notified as required by GDPR Article 34
6. Review
This policy will be reviewed and updated every 6 months, or immediately when significant changes occur (new tools adopted, new bar association guidance issued, new legislation)
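The quality-audit log required in section 3 of the template can be as lightweight as an append-only JSONL file. A minimal sketch, where the file location and field names are illustrative choices, not a prescribed format:

```python
import json
import datetime
import pathlib

LOG_PATH = pathlib.Path("ai_usage_log.jsonl")  # illustrative location

def log_ai_use(tool: str, task: str, reviewer: str, verified: bool) -> None:
    """Append one audit record per significant AI-assisted task."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "tool": tool,
        "task": task,        # a short task description only -- never client data
        "reviewer": reviewer,
        "output_verified": verified,
    }
    with LOG_PATH.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_ai_use("Enterprise-LLM (example)", "summary of public legislation", "XY", True)
```

Note that the log itself must respect the same rules as the AI tools: record what kind of work was done and who verified it, not the confidential content of the matter.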
What This Means in Practice
The framework is not as restrictive as it may initially appear. Belgian lawyers can use AI tools — and arguably have a competitive obligation to explore them. The rules require awareness and structure, not abstinence.
What you can do today:
Use enterprise-tier AI tools with a signed DPA, EU data residency, and no-training commitments for client-related work
Use consumer AI tools for non-confidential tasks: researching public legal questions, drafting internal templates, summarizing publicly available legislation, brainstorming non-case-specific arguments
Use specialized legal AI tools that are designed for professional secrecy requirements and provide source-linked, verifiable outputs
What you should not do:
Enter client case details into consumer ChatGPT, Claude, or Gemini
Assume that "enterprise" automatically means compliant — verify the specific configuration
Use any AI tool without understanding its data handling practices
Submit AI-generated legal references to a court without independent verification
Operate without a documented AI policy
What should happen next:
Create or update your firm's AI policy using the framework above
Audit which AI tools are currently being used within your firm (including informal/unauthorized use)
Evaluate at least one purpose-built legal AI tool with proper professional secrecy safeguards
Ensure GDPR compliance: DPA, DPIA where required, data transfer documentation
Key Sources and References
Article 458 Strafwetboek / Code Pénal — criminal liability for breach of professional secrecy
OVB Richtlijnen AI — Orde van Vlaamse Balies, 2025
CCBE Guide on Generative AI for Lawyers — Council of Bars and Law Societies of Europe, October 2025
Codex Deontologie voor Advocaten — Arts. 25–33 (competence), 86–98 (client relationship), 127–144 (professional secrecy)
GDPR Articles 28, 35 — data processing agreements and impact assessments
Belgian DPA Brochure on AI and GDPR — September 2024
United States v. Heppner — S.D.N.Y., February 2026 (privilege waiver through consumer AI use)
Wolters Kluwer Benchmark Report 2026 — 55.2% of Belgian firms using AI; adoption and policy data
ELTA/Lefebvre Sarrut Global Report 2024 — 38% of organizations have defined an AI policy
This article provides general information about professional secrecy obligations and AI use for Belgian lawyers. It does not constitute legal advice. The legal framework is evolving rapidly — verify current bar association positions and legislation before making compliance decisions.
Brieflee is an AI legal workflow tool built for Belgian law, with EU data hosting, no training on user data, and source-linked outputs. Learn more about how Brieflee handles your data →

Article written by
Yizhaq Kricheli
