On this episode of ‘Scaling Justice’, equivant Court’s Director of Marketing, Brendan Hughes, tackles the growing phenomenon of deepfakes and how they will impact the justice system, specifically the integrity of evidence. He sat down with MJ Cartwright, Mentor-in-Residence at the University of Michigan, tech entrepreneur, and justice system veteran, who’s now leading the charge at Probe Truth: a startup using cutting-edge tools to detect and fight deepfakes. Listen in as MJ details real examples of deepfakes being used in courts, insights from her work, and how the justice system is adapting to this powerful technology.
Watch on YouTube:
Stream on Libsyn:
Podcast Transcript:
Brendan Hughes: Hello and welcome to Scaling Justice, a video podcast by equivant that operates at the forefront of the justice space. I’m Brendan Hughes, Director of Marketing at equivant. All of us at equivant are dedicated to delivering innovative solutions to simplify justice. In a world where seeing is no longer believing, how do justice agencies separate fact from fiction?
In today’s episode of Scaling Justice, we dive deep into how justice agencies and courts are combating the use of deepfakes: AI-generated videos, images, and documents that can distort reality and threaten the integrity of evidence in courtrooms. We discuss how prevalent the issue is now, how fast it’s growing, and what courts can do to detect and address deepfakes with cutting-edge technologies. We’re lucky enough to have a guest today who is a veteran of the justice system and of technology companies, MJ Cartwright. She’s currently a statewide mentor with the University of Michigan Innovation Partnerships team. She was CEO of Court Innovations, where she built an online dispute resolution platform from an academic idea into a SaaS solution installed in more than 150 locations in the US. Her latest venture is a startup called Probe Truth, a solution built to ensure verification with cutting-edge deepfake detection. Welcome, MJ.
MJ Cartwright: Pleasure to be here. Thank you.
Brendan Hughes: Great. Well, I thought we would just start a little bit of a baseline for our audience, if you don’t mind explaining a little bit more what deepfakes are and what originally sparked your interest in deepfakes.
MJ Cartwright: Well, deepfakes are simply images altered or completely fabricated via AI. So AI has come in and taken something real and made it not real, and they’re becoming more and more realistic. You could view it as an AI version of Photoshop. And my interest in deepfakes started actually after we did the online dispute resolution platform, when I started working with Professor Khali Malick, who was working on various projects. I was mentoring him, and I got more and more intrigued by what he was doing. Deepfakes are a huge problem for courts, and the technology expertise doesn’t typically reside within the court structure, so courts really rely on others to come up with these technologies to help them out.
Brendan Hughes: So then what types of deepfakes do you think pose the greatest risk to courts or justice agencies: images, videos, or documents, or all three?
MJ Cartwright: That’s a tough one. Especially images and videos: when you see something, it pulls on your heartstrings, right? Your emotions are involved, and then you can’t unsee it. So I would say those have the most emotional impact, but for anything that decisions are being made on, it’s really important that it’s real information.
Brendan Hughes: Great. And you had mentioned being interested in this for the past few years, and I’m sure this issue has only grown. Are there some real-world examples that you’re aware of where deepfakes have impacted courts?
MJ Cartwright: So we know that juries cannot tell if something is real or fake, which obviously is an issue. But also, we have looked, and there is case after case after case where the defense will argue something is fake, but that evidence gets into the process and decisions are made based on that evidence being a part of it. The flip side is that someone’s arguing the evidence is fake when it is in fact real. And there was a recent case in California, Tesla was saying that an interview was faked. In fact, it was real, and that had an impact as well. So you can be introducing fake evidence or you can be hiding behind real evidence by saying it’s fake. And there are various cases where it’s been found out that lawyers have introduced fake evidence into court. You obviously can’t do that if you know it’s fake, but sometimes they don’t know it’s fake.
Brendan Hughes: Yeah, that’s very interesting. It really seems to challenge the process of admitting evidence, and it probably challenges judges around the standards of evidence and what goes into that. Are you finding that?
MJ Cartwright: I mean, if you can’t assume that all or part of your evidence is real, then how do you know whether it’s authentic? Right. And interestingly enough, we saw some action on this front just in the last two or three weeks. The Louisiana legislature passed a framework for handling AI-generated evidence: reasonable diligence is required for attorneys to verify digital evidence. So they really have formalized procedures to address authentication when it concerns AI or deepfakes. It’s really encouraging to see this.
Brendan Hughes: Yes, I imagine having some guidelines around making sure things are authentic is important. In this new world, it’s probably a bit challenging for the prosecution or the defense. How are they addressing this new world of potential deepfakes?
MJ Cartwright: Right. I mean, if you’re a judge in a case, you want to make sure the evidence that comes into play is authentic, so it becomes a big deal. And now that there’s at least a rule in one state that says, yeah, you have to take reasonable efforts to verify the evidence is real, that’s super encouraging.
Brendan Hughes: Great. And you have a background in technology and in starting new businesses and experience in the justice industry. So how did that all kind of meld together to the deepfake issue and how you could be supportive and helpful in this way?
MJ Cartwright: Yeah, you’ve really identified a passion of mine, right? The combination of meaningful technologies with people to provide a high social impact. That’s been a real driver of the startups that I’ve been behind and making sure that the courts and the legal ecosystems have the ability to detect and verify as they go along. This is just going to continuously maintain the integrity and the trust of our court systems. And that’s just near and dear to me. So it’s a combination of things that excites me, but the strong social impact is a real driver for me.
Brendan Hughes: Yeah. You mentioned the new legislation in Louisiana, and I’m imagining that for a lot of people within the justice system, this whole concept is new, and the technology and tools available to authenticate things are probably new to them as well. Are you finding that there’s a lot of education around what technology tools are available and what is really needed to combat deepfakes?
MJ Cartwright: Yeah, I mean, there are a lot of questions, and it’s an area that can go techy really quickly, and it’s really important that people understand what it is they’re doing and using and experiencing. For us, it’s making sure we have wicked smart people who are involved in this and really understand how deepfakes are created, so you can detect them. Some of the strongest talent out there could be off just generating deepfakes; the ones we’re pulling into Probe Truth are ones who want to make sure those deepfakes can be detected, for all the right reasons, which is good. But on the technology front, there’s a lot going on in the AI space, as you know, and we’re combining a lot of different tech. One of the hot areas is neuro-symbolic AI, and it really is just using reasoning and learning together.
So, like the human mind, it’s looking at the various individual components, doing the analysis associated with each, and then looking at them together as a whole. And that really is where it comes into play: applying that reasoning to the AI that’s there and then taking advantage of the different modalities. So you’re combining video and audio, making sure that they work together in a way that makes sense. And that’s really the neuro-symbolic AI, multimodal reasoning that we’re using. That’s kind of core, and it’s quite new, but it’s providing so much more power than we’ve seen out there before.
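To make the idea concrete, here is a toy sketch of the pattern MJ describes: learned (neural) detectors score each modality independently, and simple symbolic rules then reason over the scores together, including a cross-modal consistency check. This is an illustration of the general neuro-symbolic, multimodal approach, not Probe Truth’s actual system; the thresholds and check names are assumptions.

```python
# Toy neuro-symbolic fusion: per-modality detector scores plus
# symbolic rules that reason over all modalities together.

def assess_evidence(scores, lip_sync_ok):
    """scores: dict mapping modality -> fake probability from a learned detector.
    lip_sync_ok: result of a cross-modal check that audio and video agree."""
    reasons = []
    # Rule 1: any single modality strongly flagged by its own detector.
    for modality, p_fake in scores.items():
        if p_fake > 0.8:
            reasons.append(f"{modality} detector score {p_fake:.2f} exceeds 0.80")
    # Rule 2: modalities are inconsistent with each other, even if each
    # looks clean in isolation -- this is where the reasoning layer helps.
    if not lip_sync_ok:
        reasons.append("audio and video are inconsistent (lip-sync check failed)")
    verdict = "suspect" if reasons else "no issues detected"
    return verdict, reasons

# Each detector individually sees nothing alarming, but the cross-modal
# rule still flags the clip as suspect.
verdict, reasons = assess_evidence({"video": 0.15, "audio": 0.22}, lip_sync_ok=False)
print(verdict)  # suspect
```

The point of the sketch is that the verdict is driven by explicit rules over the whole picture, not by any one pattern-matching model.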
Brendan Hughes: Interesting. And are these tools that you are developing, and maybe that are already out there for court members, stakeholders to use, where do you feel like you are in the level of reliability of these tools to catch deepfakes? Do these tools have limitations?
MJ Cartwright: Yeah, of course they have limitations. Like any system in the AI space, the more it learns, the more accuracy is going to improve. That is standard. We’re going to see more and more accuracy as we go, and by combining some of the different pieces together, we can get even more accuracy, which is exciting for us. So we’re focusing on some of the weaknesses we see out there in deepfake detection. One is that if you’re really looking at one approach and not multiple approaches, you’re just doing simple, limited pattern recognition, so you’re not going to identify a lot there. Another is that someone who wants to hide the fact they have a deepfake can make it low-quality media, and then you can’t detect whether it’s fake or not. The other is using multiple deepfake techniques together, and then you have to unpack that.
I don’t know if any systems can actually do that right now. And those are the things that we’re pulling in and analyzing and running through our stuff. To me, the big one is being able to explain what is fake and what is real. We can run this analysis, we can throw a bunch of AI words at a judge, but how do we actually make sure that when they’re looking at the evidence report, it really makes sense and it’s logical to make decisions on? That’s a super area of focus for us. And the other is that new deepfakes are being generated all the time, right? With new algorithms. They call it zero-day detection: a new deepfake is created today, you haven’t seen it before, your systems haven’t learned it yet, and you can’t detect it through the normal AI machine learning algorithms. But with reasoning, you can actually detect that something’s wrong. Maybe my various individual checks aren’t triggering, but that whole-picture view and reasoning tells you something’s not right. And to be able to flag that is significant. So those are the things we are tackling that we see as limits in the marketplace for detection in general, and those are the things that we’re pulling into Probe Truth and applying to the courts.
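The explainability point MJ raises, turning raw detector output into something a judge or jury can actually follow, could be sketched along these lines. The check names, wording, and report format here are purely illustrative assumptions, not Probe Truth’s actual report.

```python
# Hypothetical sketch: render detector findings as a plain-language
# report rather than a list of opaque AI scores.

def explain(findings):
    """findings: list of (check_name, passed, detail) tuples from detectors."""
    lines = ["Evidence Authenticity Report", "-" * 28]
    failed = [f for f in findings if not f[1]]
    for name, passed, detail in findings:
        status = "PASS" if passed else "FLAG"
        lines.append(f"[{status}] {name}: {detail}")
    if failed:
        lines.append(f"Conclusion: {len(failed)} check(s) flagged; manual review recommended.")
    else:
        lines.append("Conclusion: no signs of manipulation in the checks run.")
    return "\n".join(lines)

report = explain([
    ("Compression history", True, "single encode pass, consistent quantization"),
    ("Face-boundary analysis", False, "blending artifacts around the jawline"),
])
print(report)
```

Each line ties a verdict to a concrete, stated reason, which is the property that makes such a report usable as a basis for courtroom decisions.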
Brendan Hughes: Yeah, it’s very interesting and very fast moving, right? The technology goes so fast and advances so fast. I imagine it’s concerning for a lot of the courts out there. And do you have just a general idea of how many courts or justice agencies are currently using these tools, or are they overwhelmed right now?
MJ Cartwright: We’re just getting involved with the courts with this technology, so we haven’t seen it in the courts. But it doesn’t mean it’s not there. And what we’re seeing instead is forensics experts are coming in and analyzing. And so if you think about a forensic expert coming in, it’s incredibly costly. It’s a limited resource. And so that’s what we’re seeing right now, and we know that that’s a huge problem because if you don’t have an attorney for example, you’re not going to have access to those people and those costs associated with it. So the tools become more and more important as we go. So we’re not seeing a lot of it yet as far as solutions that have been implemented in the court. So this is why it’s pretty important to me that we get out there with this stuff.
Brendan Hughes: Yeah, I imagine also very time consuming with forensics and costs money to bring them in all the time, especially as the way it’s going to be more and more of an issue moving forward. So then is the ideal then that these tools would be incorporated into the justice system’s current technology stack, that they would have this as almost part of their toolkit moving forward? Is that a vision you see in the future?
MJ Cartwright: Exactly. So the easiest way if you think about it is that for a court to work with their evidence management partner, you’ll have Probe Truth integrated in there with a deepfake detection capability. So their workflow can be the same. They can identify which types of evidence you want to run through the deepfake detection, and then go through our API behind the scenes in their case management system, and then we’ll return the status and we’ll return the explainability report. And all that stuff is attached to the evidence in the system. So it fits into the current workflow. We’re just running the API and then partnering with case management to make sure that it’s easy for the court.
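The integration MJ describes could be sketched roughly as follows: the case management system posts selected evidence to a detection API behind the scenes, then attaches the returned status and explainability report to the evidence record. The endpoint URL, field names, and response shape below are all assumptions for illustration, not a real Probe Truth API.

```python
# Hypothetical case-management-to-detection-API workflow sketch.

import json
import urllib.request

DETECTION_URL = "https://api.example.com/v1/detect"  # placeholder endpoint

def build_request(evidence_id, media_url):
    """JSON payload the case system would POST to the detection service."""
    return json.dumps({"evidence_id": evidence_id, "media_url": media_url})

def attach_result(evidence_id, api_response):
    """Fold the API's verdict back onto the evidence record."""
    return {
        "evidence_id": evidence_id,
        "status": api_response.get("status"),          # e.g. "authentic" / "suspect"
        "report": api_response.get("explainability"),  # human-readable reasons
    }

def check_evidence(evidence_id, media_url, api_key):
    """End-to-end call: POST the payload, parse the response, attach the result."""
    req = urllib.request.Request(
        DETECTION_URL,
        data=build_request(evidence_id, media_url).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req) as resp:
        return attach_result(evidence_id, json.load(resp))
```

Because the call runs behind the scenes and the result lands on the existing evidence record, the court’s workflow stays unchanged, which is the design goal MJ describes.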
Brendan Hughes: Yeah, that makes sense. I like your point from earlier about the explainability because detecting it is one thing, but then how to make sure that the jury understands, correct?
MJ Cartwright: Exactly.
Brendan Hughes: So this space is probably moving very quickly. Where do you see it going in the future? I imagine this issue is only going to continue to grow. How do you see it evolving?
MJ Cartwright: Well, I think you’re going to see more legislation and court processes throughout the US similar to what we’ve seen in Louisiana, right? That’s going to help provide a framework to have processes and procedures built around this. So I do think that’s going to happen, and we’re hearing that in other states. We’re hearing it in other countries around the world about deepfakes and having laws and processes to handle them. So there’s a lot of conversation around this. It’s no longer a fringe thing. And in fact, there’s a stat that floored me: deepfake attacks have surged by more than 1,300%, up from one per month to seven per day, and that’s huge. It just shows you the power that they have. The AI detection technologies have to continue to get more powerful, and detection for us means finding what is fake, but also what is real.
Brendan Hughes: Yes, and making sure, again, this is such a big issue for trusting what’s admissible in a court proceeding, right? Everybody has to agree that this is truthful and these are the correct documents, images, audio.
MJ Cartwright: Yes, absolutely.
Brendan Hughes: That’s great. The work that you guys are doing seems like it’s very important. Could you give me a little bit of a quick background on Probe Truth and the evolution of the company and where you are right now in the technology?
MJ Cartwright: Yeah. So the state of Michigan has translational research and advanced computing grant funds, and I was working with a colleague through those, and there’s a strong commercialization component associated with them. As we moved this through, we got the company established. We really saw the potential to move this out, and move it out now, so that it can have the impact it should. The funding is just being finalized, and some other grants behind the scenes are making sure this technology can be applied as we move forward. And we’re working with very strong partners out there who are going to help get this into the courts. There’s a lot of attention being paid to this right now. We want to make sure that we’re a place to go that can make a difference for our courts, because things are going to be changing and moving rapidly. You want to make sure that the environment is safe and you can have those discussions, so that the software vendors can make a difference, the courts have a comfort level, and even people who come to court without lawyers know that the evidence in their case is provable and authentic.
Brendan Hughes: Yeah, very important work. I appreciate you joining us today, MJ. That’s a great conversation about a very topical issue that’s facing the courts today and will be in the future. I appreciate you providing your insights.
MJ Cartwright: Thank you. Absolute pleasure. Appreciate it.
Brendan Hughes: Thank you.