About

Using AI in Litigation

Courts routinely accept data-driven AI exhibits and simulations, while generative AI output is far riskier and is readily objected to and excluded.

When people talk about AI today, they are usually referring to generative AI, the kind powering tools like ChatGPT, image generators, and video synthesis. But AI has existed in other forms for more than 20 years. For example, early motion-capture software relied on AI to translate real-world human movements into 3D model animations. A familiar consumer example is the Kinect sensor for the Xbox 360, which used AI trained on human motion data to interpret gestures, track players, and even recognize kids dancing in front of it.

These distinctions matter a great deal when discussing AI in a legal context, because not all AI is equal. Generative AI typically lacks inherent accuracy or a verifiable base dataset. If you ask it to generate (not retrieve) a map of a town, the result will often be wildly inaccurate. By contrast, a drone scan that uses AI to calculate geometry can achieve millimeter-level precision. Courts routinely accept the latter as reliable evidence, while a generated map would almost certainly be objected to and excluded.

Recognizing these differences becomes even more critical in less obvious situations, where they can determine whether a demonstrative exhibit is admissible. Context also plays a huge role. Using AI to generate a cartoon rain cloud for a PowerPoint slide is not the same as using AI to generate a video of a car accident. The rain cloud is a symbolic representation of an idea, even if it literally depicts rain, and does not claim to reflect any specific real-world event. A collision reconstruction, however, purports to represent reality, and AI cannot reliably verify critical details such as whether the speed was 35 mph, whether the lane widths are accurate, or whether other key elements match the evidence.

This is where understanding the nuances of AI becomes essential. You must know when its use is appropriate and what type of AI is being applied. A telltale sign of reliability is how much real, high-quality data was required to produce the output. In the drone-scan example, hundreds or thousands of precisely captured images are processed by narrowly focused, purpose-built AI software. The result is highly accurate and often backed by scientific literature documenting its precision.
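
To make the idea of a verifiable base dataset concrete, here is a minimal, hypothetical Python sketch of the kind of check that makes a drone scan defensible: comparing point locations in the finished 3D model against independently surveyed ground-control points. Every name, coordinate, and the millimeter result below is illustrative, not taken from any real project or tool.

```python
# Hypothetical sketch: verifying a photogrammetry model against
# surveyed ground-control points (GCPs). All values are illustrative.
import math

# GCP coordinates in meters, independently measured (e.g., by survey).
surveyed = {
    "GCP_1": (0.000, 0.000, 0.000),
    "GCP_2": (25.312, 0.004, 0.010),
    "GCP_3": (25.298, 18.207, -0.015),
}

# The same points as located in the drone-derived 3D model.
reconstructed = {
    "GCP_1": (0.001, -0.002, 0.001),
    "GCP_2": (25.310, 0.006, 0.008),
    "GCP_3": (25.301, 18.204, -0.013),
}

def rmse(a, b):
    """Root-mean-square error between matched 3D point sets."""
    sq = [
        sum((ai - bi) ** 2 for ai, bi in zip(a[k], b[k]))
        for k in a
    ]
    return math.sqrt(sum(sq) / len(sq))

error_m = rmse(surveyed, reconstructed)
print(f"RMSE vs. survey: {error_m * 1000:.1f} mm")  # e.g., a few mm
```

A check like this, kept alongside the survey records, is what lets an expert testify to a scan's accuracy rather than merely assert it.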

In the car-accident scenario, simply uploading two photos of vehicles and prompting the AI to “create a crash” provides almost no data, making the output about as reliable as a dream.

A more complex gray-area case is using AI to remove a building from a photo because it wasn’t present at the time of the incident. This is generally acceptable when the building’s absence is the only relevant change and the elements behind it (e.g., trees) are not at issue in the case; the edit then still accurately reflects the original scene. However, if a different building was present and its characteristics (height, dimensions, placement) are material to the dispute, generating or altering it with AI is usually inappropriate, because those details are likely to be inaccurate. In that situation, reconstructing the building to verified dimensions is the proper approach for a demonstrative graphic.
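
For that gray-area edit, one way to support admissibility is to document exactly what changed. The hypothetical Python sketch below compares the original and edited photographs pixel by pixel and saves a mask showing whether the edit stayed inside the removed building’s region; the file names and tolerance value are assumptions for illustration only.

```python
# Hypothetical sketch: auditing an AI photo edit by mapping exactly
# which pixels changed. File names and the tolerance are illustrative.
import numpy as np
from PIL import Image

original = np.asarray(
    Image.open("scene_original.jpg").convert("RGB"), dtype=np.int16
)
edited = np.asarray(
    Image.open("scene_building_removed.jpg").convert("RGB"), dtype=np.int16
)

# Per-pixel difference; anything above a small tolerance counts as changed.
changed = np.abs(original - edited).max(axis=2) > 8

ys, xs = np.nonzero(changed)
print(f"Changed pixels: {changed.mean():.1%} of the image")
if len(xs):
    print(f"Edit bounding box: x {xs.min()}-{xs.max()}, y {ys.min()}-{ys.max()}")

# Save a mask so reviewers can confirm the edit stayed inside the
# intended region (the removed building) and nowhere else.
Image.fromarray((changed * 255).astype(np.uint8)).save("edit_mask.png")
```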

Hopefully this provides useful insight into evaluating AI tools and their courtroom applications. At Fluid Media Studios, we work with these technologies every day and carefully select the right tools and methods for each circumstance to ensure accuracy, reliability, and admissibility.

Brian McKenzie

Founder/Owner of Fluid Media Studios, LLC.