
Using GenAI to Assist with Discovery
Over the course of this three-part video series, I set out to train a custom AI agent (“project GPT”) to generate and respond to discovery requests under the Federal Rules of Civil Procedure and the Local Rules for the District of Maryland. My goal was to give the AI both procedural know-how and factual documents so that it could draft compliant interrogatories and document requests—and then respond to them—using real case materials.
Part 1: Gathering Source Materials
I began by compiling every procedural resource I’d need: model interrogatories and requests for production, the Federal Rules of Civil Procedure, the Local Rules for the District of Maryland, and selected case law on discovery objections and privilege disputes. In my demo, I prompted ChatGPT’s o4-mini model to find those materials, limiting its search to the District of Maryland. Within 28 seconds, it retrieved the civil-procedure rules from official court websites, the local rules, and even relevant Maryland decisions on discovery objections. That gave me a comprehensive list of documents to feed into my custom GPT for training.
Part 2: Preparing Factual Documents
Next, I needed actual case filings so the AI could apply the rules to concrete facts. I chose a summary-judgment case from the District of Maryland (“Gill v. Mallow”), downloaded the complaint and motions from PACER, and uploaded them alongside my procedural materials. I emphasized that AI excels at analysis, but gathering filings still requires a human. For the AI platform, I selected Claude because it integrates with Google Drive, only to discover that it reads only Google Docs, not PDFs or Word files. To work around this, I converted every PACER download and rules PDF into a Google Doc via Drive’s “Make a copy” feature: a bit tedious, but essential.
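With a large PACER download, it helps to know at a glance which files still need that manual conversion. As a minimal sketch (the helper names and sample filenames are my own, not from the video), a short script can scan your download list and flag the formats Claude’s Drive integration reportedly cannot read:

```python
from pathlib import Path

# Formats that, per the workflow above, Claude's Google Drive
# integration cannot read; each must become a Google Doc first.
UNREADABLE = {".pdf", ".docx", ".doc"}

def needs_conversion(filename: str) -> bool:
    """Return True if this file must be converted to a Google Doc."""
    return Path(filename).suffix.lower() in UNREADABLE

def conversion_queue(filenames: list[str]) -> list[str]:
    """List the files still awaiting the manual 'Make a copy' step in Drive."""
    return [name for name in filenames if needs_conversion(name)]

downloads = ["complaint.pdf", "motion_summary_judgment.docx", "local_rules_gdoc"]
print(conversion_queue(downloads))  # flags the PDF and the DOCX
```

This is just a checklist generator; the conversion itself still happens in Drive, as described above.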
Part 3: Live Drafting and Responses in Claude
With my materials in Google Docs, I connected Claude to my Drive and confirmed it could access each folder. As expected, Claude initially reported that it couldn’t read the remaining PDF and DOCX files; once everything was in Doc format, it listed the files correctly, confirming the setup. I then asked it to draft discovery requests on behalf of the plaintiff in Gill v. Mallow, pulling facts from the PACER-sourced documents and ensuring compliance with all applicable rules.
Within moments, Claude produced twenty tailored interrogatories—complete with strategic instructions to pin down the defendant’s story, identify witnesses, gather policy documents, and secure relevant recordings. It even explained its reasoning, noted the District’s 20-request limit, and outlined its strategy for follow-ups. I then flipped roles and instructed Claude to respond as defense counsel: it drafted general objections, specific objections (e.g., to producing Social Security numbers), and detailed “subject to and without waiving” answers, listing the documents it would produce given the factual record.
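For readers who would rather script this step than type it into Claude’s chat window, the drafting instruction can be assembled programmatically. The sketch below is my own illustration, not Claude’s API or the actual prompt from the video; the 20-request cap and the strategic goals (pinning down the story, identifying witnesses, locating documents) come from the workflow described above, while the function name and sample facts are hypothetical:

```python
def build_discovery_prompt(party: str, case_name: str,
                           facts: list[str], max_requests: int = 20) -> str:
    """Assemble a drafting prompt that bakes in the local-rule cap on interrogatories."""
    fact_lines = "\n".join(f"- {fact}" for fact in facts)
    return (
        f"Draft interrogatories on behalf of the {party} in {case_name}.\n"
        f"Limit the set to {max_requests} interrogatories, per the applicable local rules.\n"
        f"Ground each request in these facts from the record:\n{fact_lines}\n"
        "For each interrogatory, briefly note its strategic purpose "
        "(pinning down the opposing story, identifying witnesses, or locating documents)."
    )

# Hypothetical facts standing in for the PACER-sourced record.
prompt = build_discovery_prompt(
    "plaintiff", "Gill v. Mallow",
    ["the complaint alleges an unproduced incident report",
     "the motions reference surveillance recordings"],
)
print(prompt)
```

Keeping the numerical cap in the prompt itself means the limit travels with every drafting request instead of depending on the model remembering the local rules.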
Key Takeaways for Attorneys
- Curate Quality Training Data. The AI’s effectiveness depends entirely on the underlying materials. Include local rules, model forms, and real pleadings to give it a solid foundation.
- Understand Platform Limits. Claude’s Google-Drive integration is powerful—but only with Google Docs. ChatGPT excels at public-records retrieval. Know each tool’s file-format requirements and capacity constraints.
- Compare Costs and Capabilities. At roughly $20/month, general-purpose AI tools can replicate many features of specialized litigation packages. Test them first: if a dedicated product can’t at least match your Claude or ChatGPT workflow, it may not justify its higher price.
Finally, I reflected on next steps: uploading raw client responses (often unformatted) so AI can clean and format them; feeding large document collections into AI to pull out relevant materials; and using these accessible tools to benchmark pricier products. I wrapped up by inviting viewers to explore my live Claude results via the link in the video comments and to reach out at elefant@myshingle.com with any questions.