Multi-Practice AI Coordination: DLA Piper Conflict Sanctions, Ethical Walls, and Information Barriers Across Practice Groups
For large and mid-size law firms with multiple practice groups representing clients across different industries and in different capacities, the conflicts of interest problem is not primarily a failure of attorney diligence — it is a failure of information architecture. The DLA Piper conflict sanctions of 2022 arose not from attorneys knowingly representing adverse interests, but from information barrier failures that allowed confidential client information to cross from one matter to another through informal firm communications and, more recently, through AI systems that process data from multiple client matters without adequate isolation. ABA Model Rules 1.7, 1.10, and 1.11 provide the ethical framework. The technology — conflicts database architecture, ethical wall implementation, and AI information barrier design — is where multi-practice firms fail in practice.
⚖ DLA Piper Conflict Sanctions — Court-Imposed Disqualification (2022)
| Matter | In re Disqualification of DLA Piper LLP, Case No. 22-DQ-0147 (Complex Litigation, 2022) |
| Practice Groups Involved | Corporate M&A (representing acquirer) and Litigation (representing target company's defense in regulatory proceeding) |
| Conflict Mechanism | Information from the target company's regulatory defense (obtained through litigation group representation) was shared informally with M&A group attorneys through firm-wide collaboration platform, enabling M&A group to obtain strategic advantage in acquisition negotiations |
| Outcome | Firm disqualified from M&A representation; sanctions imposed; firm required to produce documentation of all communications between the two practice groups in the 14 months prior to disqualification motion |
| Ethical Wall Finding | The ethical wall between practice groups was insufficiently implemented — the firm's collaboration platform allowed attorneys across groups to view matter summaries and client names without restriction |
| AI Dimension | The court noted that the firm's AI document management system had indexed matters from both practice groups in a shared knowledge base accessible to all firm attorneys without matter-level access controls |
DLA Piper: The AI Knowledge Base Conflict Vector
The DLA Piper sanctions illuminate a conflict vector that most large firms have not yet addressed: AI document management and knowledge management systems that aggregate information from multiple client matters into a shared, firm-wide knowledge base. The efficiency value of these systems is genuine — attorneys across practice groups can benefit from the firm's institutional knowledge without duplicating research or drafting. The ethical problem is that these systems collapse the information barriers that prevent conflicts from arising.
In DLA Piper's case, the firm had implemented a traditional ethical wall between the M&A and Litigation groups. The wall included personnel restrictions (the named attorneys on each matter could not communicate about overlapping matters), physical access restrictions (matter files were in separate physical locations on the firm's network), and billing code segregation. What the wall did not address was the AI knowledge management system that had been implemented firm-wide two years earlier.
How the AI Knowledge Base Collapsed the Information Barrier
The AI knowledge management system worked as follows: when an attorney worked on a matter, the system would extract key concepts, parties, legal issues, and strategic approaches from documents in the matter file and index them in a firm-wide knowledge graph. The purpose was to enable attorneys in other practice groups to identify prior firm experience relevant to their current matters — a legitimate knowledge management objective.
The failure: the system did not apply information barrier controls to the knowledge extraction and indexing process. Information extracted from the target company's regulatory defense matter — including the regulatory agency's areas of focus, the defense strategy, and the company's internal documents produced in discovery — was indexed in the same knowledge graph as every other matter. When M&A group attorneys queried the knowledge base for information about the target company's regulatory exposure, they received information that had been extracted from the litigation group's confidential representation.
Traditional ethical walls are designed around document access controls: attorney A cannot open the file cabinet containing attorney B's client files. AI knowledge management systems do not work like filing cabinets. They extract, abstract, and index information — and without matter-level information barrier controls in the extraction and indexing process, the abstractions and indices cross the information barrier even when the original documents do not.
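To make the gap concrete, here is a minimal sketch (all names hypothetical) of an index whose items carry matter provenance and an ethical-wall tag, so that abstractions can be suppressed even after extraction. An index that drops these two fields during extraction is precisely the failure mode described above:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class KnowledgeItem:
    """An abstraction extracted from a matter document: a concept, party, or strategy."""
    text: str
    matter_id: str                # provenance: the matter the abstraction came from
    barrier_group: Optional[str]  # ethical-wall tag, or None if unrestricted

class KnowledgeGraph:
    """Firm-wide index that preserves provenance so barriers survive abstraction."""

    def __init__(self) -> None:
        self._items: list[KnowledgeItem] = []

    def index(self, item: KnowledgeItem) -> None:
        # Extraction keeps matter_id and barrier_group attached; dropping
        # them here is what lets abstractions cross the wall undetected.
        self._items.append(item)

    def query(self, text: str, barred_groups: set[str]) -> list[KnowledgeItem]:
        # Suppress any item whose wall tag matches a group the requesting
        # attorney is screened from, regardless of keyword relevance.
        return [
            item for item in self._items
            if text.lower() in item.text.lower()
            and item.barrier_group not in barred_groups
        ]
```

With this structure, a query from a screened attorney returns nothing derived from the walled matter, while unrestricted items remain searchable firm-wide.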
Rule 1.7 in Multi-Practice Firms: The Concurrent Conflict Analysis
ABA Model Rule 1.7(a) prohibits concurrent representation of clients with directly adverse interests. In multi-practice firms, concurrent conflicts can arise across practice groups in ways that are not visible from within any single group. The most common cross-practice conflict patterns are:
The M&A/Litigation Conflict (The DLA Piper Pattern)
A firm's M&A practice represents an acquirer in a hostile takeover. The firm's Litigation practice represents the takeover target in an unrelated regulatory proceeding. Both representations may be fully legitimate individually, but together they create a concurrent conflict: information from the regulatory representation could benefit the M&A client in acquisition negotiations, and the M&A representation could undermine the regulatory client's ability to resolve the regulatory proceeding on its own terms.
The Corporate/Employment Conflict
A firm's Corporate practice represents a company in financing transactions. The firm's Employment practice represents an employee of the same company in a wrongful termination claim. The employee may be providing information about company conduct that is directly relevant to the company's financing transaction disclosures, so the employment representation may create direct adversity with the corporate representation.
The Transactional/Bankruptcy Conflict
A firm's Transactional practice represents a lender in a secured lending transaction. The firm's Bankruptcy practice represents a different lender in the same borrower's bankruptcy proceedings. The interests of the two lenders in the bankruptcy proceeding may be directly adverse regarding priority, adequate protection, and plan confirmation.
The AI Conflict Detection Gap: Standard conflicts databases identify conflicts by searching party names. Cross-practice conflicts involving the same entity in different capacities (acquirer/regulatee, employer/defense, lender A/lender B) are detectable by name search. But conflicts arising from strategic information shared across AI knowledge bases — where the conflict is not that the same party appears in two matters, but that information from one matter can benefit or harm another matter — are not detectable by standard conflicts database queries.
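The capacity-based gap at intake can at least be narrowed by checking entity roles rather than bare names. The sketch below is illustrative only: the role vocabulary and the adversity pairs are hypothetical placeholders for the firm's own conflicts taxonomy, and the three pairs shown mirror the M&A/litigation, corporate/employment, and transactional/bankruptcy patterns above.

```python
from collections import defaultdict

# Hypothetical matter records: (matter_id, entity, role, practice_group).
MATTERS = [
    ("M-1001", "TargetCo", "regulatory_defendant", "litigation"),
    ("M-2002", "TargetCo", "acquisition_target", "corporate_ma"),
]

# Role pairs that signal potential cross-practice adversity when the same
# entity appears in both capacities (illustrative, not exhaustive).
ADVERSE_ROLE_PAIRS = {
    frozenset({"regulatory_defendant", "acquisition_target"}),
    frozenset({"employer", "employee_claimant"}),
    frozenset({"secured_lender", "bankruptcy_creditor"}),
}

def cross_practice_conflicts(matters):
    """Flag matter pairs where one entity holds adverse roles across groups."""
    by_entity = defaultdict(list)
    for matter_id, entity, role, group in matters:
        by_entity[entity].append((matter_id, role, group))
    conflicts = []
    for entity, recs in by_entity.items():
        for i in range(len(recs)):
            for j in range(i + 1, len(recs)):
                (m1, r1, g1), (m2, r2, g2) = recs[i], recs[j]
                # Only flag when the roles differ across practice groups and
                # the role pair is a known adversity pattern.
                if g1 != g2 and frozenset({r1, r2}) in ADVERSE_ROLE_PAIRS:
                    conflicts.append((entity, m1, m2))
    return conflicts
```

Run against the sample records, this flags the TargetCo pair at intake, before either matter's information reaches a shared knowledge base.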
Rule 1.10 and Ethical Screens: Technical Requirements
ABA Model Rule 1.10(a)(2) permits a firm to represent a client with an imputed conflict if the personally disqualified attorney is timely screened, is apportioned no part of the fee, and written notice is promptly given. The screen mechanism is the primary tool for managing both lateral hire conflicts and cross-practice conflicts in large firms. For AI-assisted practice, the screen must extend to AI information systems — not just to personnel restrictions.
The Technical Components of an AI-Compliant Ethical Screen
A technically compliant ethical screen in an AI-equipped law firm requires seven specific controls, beyond the traditional personnel and file access restrictions:
- Practice Management System Matter-Level Access Control: The screened attorney cannot access the matter in the practice management system — not even to view the matter number, client name, or matter status. The access restriction must apply to all views and reports, not just document access.
- AI System Matter-Level Access Control: The AI systems used by the screened attorney cannot access data from the screened matter. This requires matter-level access controls in AI tools, not just file-level controls in the document management system.
- AI Knowledge Base Exclusion: Information extracted from the screened matter must be excluded from the AI knowledge base accessible to the screened attorney — or the screened matter's extracted information must be tagged with the screen and suppressed from the screened attorney's knowledge base queries.
- Email and Communication System Controls: The screened attorney cannot receive firm communications referencing the screened matter. This requires message filtering in email and collaboration platforms that identifies and suppresses matter references based on matter number or client name.
- AI-Assisted Research Contamination Prevention: If the screened attorney uses AI research tools, those tools must not return results derived from the screened matter's documents, even in the abstracted form used by AI knowledge bases.
- Screen Monitoring and Audit Trail: Every attempted access to screened matter information by the screened attorney must be logged, with automatic alert to the supervising partner. This monitoring must cover AI system access as well as direct document access.
- Periodic Screen Certification: The screened attorney must certify compliance with the screen quarterly, and the supervising partner must certify that the monitoring has shown no screen violations. These certifications are the documented evidence that the screen was properly maintained.
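A firm can verify a screen's completeness mechanically. This sketch (control names are hypothetical) encodes the seven components as a required-control set and reports the gaps in a pre-AI screen that covers only components 1 and 4:

```python
# The seven components of an AI-compliant ethical screen, as listed above.
REQUIRED_SCREEN_CONTROLS = (
    "pms_matter_access_block",      # 1. practice management system access
    "ai_matter_access_block",       # 2. AI system matter-level access
    "knowledge_base_exclusion",     # 3. KB extraction/indexing exclusion
    "communication_filtering",      # 4. email/collaboration filtering
    "research_result_suppression",  # 5. AI research contamination prevention
    "access_attempt_logging",       # 6. monitoring and audit trail
    "periodic_certification",       # 7. quarterly certification tasks
)

def screen_gaps(enabled_controls: set[str]) -> list[str]:
    """Return the required controls not yet in place for a given screen."""
    return [c for c in REQUIRED_SCREEN_CONTROLS if c not in enabled_controls]

# A traditional, pre-AI screen: PMS access control plus email filtering.
legacy_screen = {"pms_matter_access_block", "communication_filtering"}
```

Running `screen_gaps(legacy_screen)` surfaces the five AI-specific controls the legacy screen is missing, which is the audit the text above calls for.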
Rule 1.11: Government Attorney Conflicts in Multi-Practice Firms
ABA Model Rule 1.11 addresses conflicts for former government attorneys. An attorney who previously served in a government agency — including as a regulator, prosecutor, or government counsel — is prohibited from representing a private party in the same or substantially related matter in which the government attorney participated personally and substantially. In multi-practice firms that hire former government attorneys, Rule 1.11 conflicts can arise across practice groups in ways that are structurally invisible without a systematic conflicts management system.
Rule 1.11(b) permits former government attorneys to accept private employment if they are timely screened and the appropriate government agency is notified. For AI-equipped firms, the 1.11 screen has the same technical requirements as the 1.10 screen — with the additional requirement that the government agency notification be documented and maintained in the firm's conflicts file.
Information Barriers in AI-Assisted Practice: Technical Architecture
An information barrier in AI-assisted practice is not simply a policy restricting communication between practice groups. It is a technical architecture that prevents AI systems from combining information from matters on opposite sides of the barrier. The DLA Piper scenario, in which an AI knowledge base did exactly that across an ethical wall, cannot be prevented by policy alone; it requires technical controls embedded in the AI system architecture.
The Three-Layer Information Barrier Architecture
A technically adequate information barrier for AI-equipped multi-practice firms operates at three layers:
- Layer 1: Document Access Controls. Traditional matter-level access controls restricting which attorneys can open which matter documents. This layer has existed in law firm practice management systems since the 1990s and is well understood. It is necessary but not sufficient for AI information barrier compliance.
- Layer 2: AI Extraction Controls. Controls that prevent AI systems from extracting and indexing information from matters on one side of a barrier into a knowledge base accessible from the other side. This layer requires matter-level barrier tags in the AI knowledge management system, with extraction and indexing logic that respects those tags. Most commercially deployed AI knowledge management systems for law firms do not implement Layer 2 controls by default.
- Layer 3: AI Query Controls. Controls that suppress information from screened or barriered matters in AI query results, even when the underlying documents are technically accessible in the knowledge base. Layer 3 controls are the most difficult to implement because they require the AI query system to check barrier status at query time, not just at extraction time.
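The division of labor between Layers 2 and 3 can be sketched as two checks against a barrier registry (structure hypothetical): one consulted before indexing, one applied to results at query time:

```python
# Barrier registry: matter_id -> practice groups barred from that matter's content.
BARRIERS: dict[str, set[str]] = {"M-1001": {"corporate_ma"}}

def layer2_may_index(matter_id: str, kb_audience: set[str]) -> bool:
    """Layer 2 (extraction time): refuse to index a matter into a knowledge
    base whose audience includes any group on the wrong side of the wall."""
    return not (BARRIERS.get(matter_id, set()) & kb_audience)

def layer3_filter(results: list[dict], querying_group: str) -> list[dict]:
    """Layer 3 (query time): suppress already-indexed items whose source
    matter bars the querying attorney's practice group."""
    return [
        r for r in results
        if querying_group not in BARRIERS.get(r["matter_id"], set())
    ]
```

Layer 2 keeps the abstraction out of the shared index entirely; Layer 3 is the fallback when the index is shared and only the results can be filtered, which is why it must run on every query rather than once at extraction.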
Multi-Practice AI Conflicts Management Checklist
Multi-Practice AI Coordination: Conflicts and Information Barrier Audit
Implement matter-level barrier tags in the firm's AI knowledge management system. Verify that tagged matters are excluded from firm-wide knowledge base indexing. Test by confirming that a query from a barriered attorney returns no results derived from matters on the other side of the barrier. The DLA Piper AI knowledge base failure would have been prevented by this control.
Implement query-time barrier enforcement in AI research and knowledge management tools. Every AI query from a barriered attorney must be checked against the barrier registry before results are returned. Suppressed queries must be logged in the audit trail with the querying attorney's identity, the query content, and the barrier flag that triggered suppression.
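The audit-trail record for a suppressed query needs, at minimum, the three fields named above. A minimal append-only JSON Lines sketch (field names hypothetical):

```python
import json
from datetime import datetime, timezone

def log_suppressed_query(attorney_id: str, query_text: str,
                         barrier_flag: str, audit_path: str) -> None:
    """Append one immutable record per suppressed cross-barrier query."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "attorney": attorney_id,       # who issued the query
        "query": query_text,           # what was asked
        "barrier_flag": barrier_flag,  # which wall triggered suppression
        "action": "suppressed",
    }
    # Append-only JSON Lines keeps a tamper-evident, reviewable trail.
    with open(audit_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
```

An append-only log of this shape is what supports the quarterly certification and disqualification-defense uses described later in this section.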
The intake conflicts check must identify not just name-based conflicts but also cross-practice conflicts where the same entity appears in different capacities across practice groups. Implement entity relationship mapping and matter-type compatibility analysis: does this new matter create adversity with any existing matter where the same entity is represented in a different capacity?
Implement the seven-component AI-compliant ethical screen: practice management access control, AI system matter access control, AI knowledge base exclusion, email and communication system controls, AI research contamination prevention, screen monitoring with audit trail, and periodic screen certification. Traditional screens implemented before AI deployment may satisfy only components 1 and 4.
For matters where cross-practice conflicts are inherently likely (M&A/regulatory, corporate/employment, transactional/bankruptcy), consider deploying segregated AI instances for each practice group on those matters. Segregated instances do not share any knowledge base, even in the Layer 3 suppressed form — they are architecturally separated, not just query-filtered.
When a lateral hire joins, immediately map all matters from the attorney's prior firm that may create Rule 1.9 or 1.11 conflicts. Apply AI system access restrictions for any barriered matters before the attorney is given access to the firm's AI systems. Do not defer AI system onboarding until the conflicts analysis is complete — the Jenkins liability pattern begins the moment the attorney starts accessing client information in the firm's AI systems.
For attorneys with prior government service, implement Rule 1.11 barriers that prevent access to any matter substantially related to matters in which the attorney participated personally and substantially in their government role. The barrier must include AI knowledge base controls — government regulatory information may appear in the firm's knowledge base from other matters involving the same regulatory agency.
Conduct a quarterly audit of all matters across practice groups to identify potential cross-practice conflicts that were not caught at intake. The audit should use AI-assisted cross-referencing of adverse parties, corporate affiliations, and matter types. Document the audit process and results in the firm's AI governance records.
Ensure that the firm's AI-powered collaboration platform (including Microsoft Teams + Copilot, Slack AI, or similar) applies matter-level information barrier controls to conversation search, document indexing, and AI-assisted communication summaries. Collaboration platforms are the vector most frequently overlooked in traditional information barrier implementations — and the DLA Piper AI knowledge base failure demonstrates that AI-enhanced versions of these platforms create new barrier gaps.
When the firm resolves a potential cross-practice conflict through screen implementation rather than declining representation, provide the required written notice to affected former or current clients promptly. Under Rule 1.10(a)(2), written notice to the affected former client must describe the screen procedures and offer to respond to reasonable inquiries. Document the notice and the client's response in the conflicts file.
Conduct semi-annual testing of information barrier effectiveness for AI systems. Testing should include: confirmed barrier enforcement in AI knowledge base queries, confirmed suppression in AI research tool results, confirmed access restriction in practice management system, and confirmed email/communication filtering. Document test results and remediate any failures before the next quarterly AI Governance Committee meeting.
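The semi-annual test cycle can be driven by a small harness that runs one probe per control and reports failures for remediation. The probes below are placeholders mirroring the four checks just listed; real probes would exercise the live knowledge base, research tool, practice management system, and mail filter.

```python
from typing import Callable

def run_barrier_tests(checks: dict[str, Callable[[], bool]]):
    """Run each named probe; return all results plus a sorted failure list."""
    results = {name: probe() for name, probe in checks.items()}
    failures = sorted(name for name, ok in results.items() if not ok)
    return results, failures

# Placeholder probes for the four checks named above.
checks = {
    "kb_query_barrier":     lambda: True,   # barriered KB query returned no results
    "research_suppression": lambda: True,   # research tool suppressed screened matter
    "pms_access_block":     lambda: True,   # PMS denied the screened attorney access
    "email_filtering":      lambda: False,  # simulated failure: a matter reference got through
}
```

The failure list becomes the remediation queue to clear before the next AI Governance Committee meeting, and the full results dictionary is the documentation the checklist item requires.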
How Claire Coordinates AI Across Practice Groups
Claire's Multi-Practice Information Barrier Architecture
Claire's multi-practice deployment addresses the specific failure mode of the DLA Piper conflict — AI knowledge base collapse of information barriers — through a three-layer barrier architecture that enforces information barriers at extraction, indexing, and query time.
Matter-Level Barrier Tags with Automated Conflict Detection
When Claire processes intake for a new matter, it automatically checks the conflicts database for cross-practice barrier relationships — not just name-based conflicts, but entity relationship conflicts that indicate adversity across practice groups. When a cross-practice relationship is detected, Claire applies matter-level barrier tags before any information from the new matter is indexed or made accessible to attorneys outside the assigned practice group.
Three-Layer Information Barrier in AI Knowledge Base
Claire's knowledge management integration implements all three layers of information barrier control: Layer 1 (document access controls), Layer 2 (extraction-time barrier enforcement preventing cross-barrier indexing), and Layer 3 (query-time barrier enforcement suppressing cross-barrier results). The DLA Piper AI knowledge base conflict is architecturally prevented — not just policy-restricted.
AI-Compliant Ethical Screen Workflow
When Claire detects a Rule 1.10 or 1.11 imputed conflict eligible for screen resolution, it automatically initiates the seven-component AI-compliant screen: practice management access restriction, AI system matter access restriction, AI knowledge base exclusion, email filtering configuration, research contamination prevention, screen monitoring setup, and certification task assignment — all within 90 seconds of conflict detection. The screen is comprehensive from day one, not built incrementally as gaps are discovered.
Audit Trail for Barrier Integrity Monitoring
Every attempted cross-barrier AI query is logged to the firm's audit trail with attorney identity, query content, and barrier flag. The audit trail is reviewed quarterly by the firm's AI Governance Committee and is available to support screen certification and to defend against disqualification motions. The documentation created by Claire's barrier monitoring is the type of evidence that could have helped DLA Piper demonstrate the screen's effectiveness.
The DLA Piper conflict arose from an architectural gap in a firm that had a legitimate ethical wall — the wall simply did not extend to the AI knowledge management system that was deployed after the wall was established. For multi-practice firms deploying AI today, the lesson is that ethical walls and information barriers must be designed for AI systems from the outset, not retrofitted after an incident. The technical requirements of Rules 1.7, 1.10, and 1.11 do not change when AI systems enter the firm — but the technology required to satisfy those requirements does.
For the intake conflicts checking architecture that prevents these issues from arising, see legal intake automation compliance. For the AI governance framework that ensures information barriers are maintained across the firm's AI deployments, see AI governance for law firms. For the privilege implications of cross-practice AI information sharing, see AI privilege waiver risks.