
EQUIVANT COURT

Making Sense of the Noise: Actionable Steps to Set Your Court Up for AI Success Webinar

In today’s fast-paced world, information overload is a real challenge—especially when it comes to understanding the rapid rise of Artificial Intelligence (AI). Every day, new trends, tools, and discussions emerge, making it harder to separate the noise from the truly valuable insights. This is particularly true in the justice sector, where AI’s potential is vast but still evolving.

In this webinar, Sherry-Lynn Agcanas-Wolf explores practical approaches to gradually adopt AI while staying mindful of its ethical and operational implications. Get actionable steps you can take in both the short and long term to prepare your court for future advancements in AI.

A black and white headshot of Sherry Agcanas-Wolf, Director of Product Management at equivant court.


Transcript

Brendan Hughes (00:12):

Hello and welcome to today's event. I'm Brendan Hughes, marketing director here at equivant Court. I'm really excited to host this event today. It's an extremely timely topic, as AI and the technology surrounding AI seem to be at the forefront of a lot of industry conversations, and often leading the news, yesterday and today specifically. And it's certainly a topic of conversation in the justice space as well, and we're looking forward to really jumping into that presentation in just a minute. But first, just real quick, a couple of things for today's event, and then I'm going to do a quick background on our company, as there might be a few people joining today that aren't as familiar with equivant. So today's event is being recorded, and we'll provide a nice clean edited version to everyone attending via email in the next day. So be on the lookout for that email.

(01:10):

Everyone is muted, and if you have a question or comment, please use the Q&A feature. Once the presentation begins, I'll be in the background monitoring the Q&A to provide any answers, or if there's something I can't take care of, I'll alert Sherry that there is a question. So, alright, just real quick for anybody that's new to us, who is equivant Court? Well, we've been in the justice industry for more than 30 years, completely focused on supporting courts and agencies with advancing justice and with being more efficient and effective in their operations. And when it comes down to it, really delivering innovative solutions to simplify justice. In very simple terms, that's what we're here to do. So on this next slide you'll see some facts and a couple of notes about our company. I'm not going to go through them all, but I will say that what it represents really is a company that is passionate about supporting our customers and helping them serve their missions in delivering and advancing justice.

(02:15):

The one thing I will note, though, is you'll see that on average our employees have 19 years of experience in the justice industry. Many of the people who work here have been in your shoes, they've worked in your roles, they know your challenges. And as I said, many of the people who've been here a long time have been developing solutions to make your lives easier for a real long time. And I also have to mention that new technology, some of the things that Sherry's going to be talking about today, is not new for us here at equivant, because we've been at the forefront of that throughout our 30 years, whether it was incorporating web-based solutions or offering hosted solutions. As new technology evolves, we've always been a leader, but with a prudent, methodical approach. We know how important security and data privacy are, and they're a huge consideration with any new technology and with implementing it in a court or justice agency. Alright, well, I think that's a perfect segue to introduce our presenter today. Sherry is going to explore some actionable steps you can take with evolving AI technology. Sherry is the director of knowledge management here at equivant Court. And as I mentioned just earlier, our employees have many years of experience, and Sherry is no different. She's a perfect example, with 25 years in many different roles. So we're in good hands today. With that, I'm going to turn it over to Sherry.

Sherry Agcanas-Wolf (03:46):

Thanks Brendan, appreciate it. So again, I'm Sherry Agcanas-Wolf, and this March is actually going to be my 25th anniversary here with equivant. And like Brendan said, I have held multiple roles here at equivant, implementing our software, designing it, and supporting our customers. And now as the director of knowledge management, my department is in charge of ensuring our staff and our customers have the information they need to be passionate about leveraging technology and to create meaningful change in our business and in the justice industry as well. equivant's vision, as Brendan said, is to simplify justice, and so we've made it our mission to provide customer-centric solutions that hopefully make life easier for the court professionals that we work with and also the people that you serve. So enough about me, let's jump into our agenda. We've got a couple of points that we're going to make today. We're going to start with talking about the information overload around AI. Then we're going to talk a little bit more about AI itself, understanding what part of AI we're really discussing, because there are multiple types of AI. We're going to talk about why you should be investing in AI, not just economically, but also because time is currency as well. And then we're going to jump right into how to start your AI journey if you haven't already done so, and then finally, how to take action today.

(05:37):

So let's jump in. We want to use AI carefully so we can experience the efficiencies that we've found at equivant Court in our daily work. And it's okay to be cautious about jumping into new technology. Actually, we prefer it, because we find that being more methodical in moving forward with new technology allows us to embrace that discomfort of the unknown while realizing the benefits of the technology we're trying to get into. And because we're all coming in at a different understanding of AI, I think we have the "I don't know what AI means" kind of user, and we also have people who maybe use AI every day, the technologists. Because we have this mixed group, I'm going to start with some basics to make sure that we're all on the same page as we move into our steps for talking about AI and what you can do to start getting into the AI space.

(06:56):

So just some background: equivant Court has been in this AI space. We've been accountable for algorithms that we've implemented in our tools, and we continue to do our research. So we've seen the exponential growth that AI has had in these past few years, and we've seen where it's becoming more personal, with AI moving into our private spaces. It's showing up in the apps that we use daily. It's flooding the news outlets, the stories about robot revolutions. And I think everyone's having dinner discussions now about ChatGPT and DeepSeek and stock prices of tech companies and that type of thing. So AI is a really big topic. It's a vast topic. We're here to help you filter through most of that noise and then look at how you and your court can take real steps toward using AI. So let's start with a baseline: artificial intelligence. Artificial intelligence, or AI, really involves changing behavior or responses based on what is learned.

(08:08):

So it's not automation, or I should say it's not just automation. Automation really involves a more programmed response that doesn't change unless a human changes it. So what does this actually mean? I'm going to give you a personal story here about AI and automation. I have an AI robot at home, and that robot kind of changes where he is based on where the family is in the house, where we hang out. Sometimes we're in the family room because we're binging on Netflix and we're in front of the TV. Other days we're in the kitchen all day because we're cooking, we're eating, having conversations about the latest music drop on Spotify. And that AI robot follows us around. It also offers suggestions throughout the day based on our video views, our shopping habits, what we're listening to. On the other side of that, I have an automatic vacuum cleaner as well.

(09:18):

It follows the exact path every time it goes out. So when we introduced a new cat bed into the house, that robot would run over it every time. Unless I go in and reprogram it, it's not going to avoid that piece of furniture, because it's on a very specific path. So that's what we're really talking about when it comes to artificial intelligence, where the technology learns from what's happening around it or the inputs you're giving it, versus an automatic piece of functionality. If we take that into case management terms, AI could possibly understand that your participant's first language is Tagalog versus English, and so it suggests that you translate a judgment order into Tagalog for this person. It might even do that automatically, because it knows that you always translate documents for this person. So that's an example of what AI can do. With automation, what's different is, let's say you have that judgment order and you submit it, then multiple steps would happen: participants are emailed or snail mailed a copy of the order based on configuration, your CMS case disposition is added, that closes your case, and a record is sent to your state data warehouse. Those are automated tasks, they always happen. And so it's important to understand there's a distinction between AI versus just automation.
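To make that distinction concrete, here is a minimal, hypothetical sketch, not equivant code and not from any real case management system; the function names, fields, and threshold are purely illustrative. It contrasts a fixed automation rule that always runs the same steps with a narrow-AI-style suggestion that changes as the participant's history changes:

```python
# Hypothetical illustration only; names, fields, and data are made up for this sketch.
from collections import Counter

# Automation: a fixed, programmed response. It runs the same steps every time
# a judgment order is submitted, and only changes if a human reprograms it.
def automated_post_filing_steps(order):
    return [
        f"email or mail order {order['id']} to participants",
        "add case disposition in the CMS",
        "send record to the state data warehouse",
    ]

# Narrow-AI style: a suggestion learned from past behavior. If most prior orders
# for this participant were translated, suggest translating this one too.
def suggest_translation(participant_history, order):
    languages = Counter(entry["translated_to"]
                        for entry in participant_history
                        if entry.get("translated_to"))
    if not languages:
        return None
    language, count = languages.most_common(1)[0]
    if count / len(participant_history) > 0.5:  # learned preference, not a fixed rule
        return f"translate order {order['id']} into {language}"
    return None

history = [{"translated_to": "Tagalog"}, {"translated_to": "Tagalog"}, {"translated_to": None}]
order = {"id": "2024-CV-0123"}
print(automated_post_filing_steps(order))   # always the same steps
print(suggest_translation(history, order))  # output changes as the history changes
```

The point of the sketch is only the shape of the behavior: the first function never varies, while the second adapts its output to what it has observed.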

(11:10):

Now let's talk a little bit about artificial intelligence types, or AI types, because there are quite a few. There are some that are capability based and some that are functionality based. Now, much of the AI that we see today is one of these types, which is narrow AI. It's designed to perform very specific tasks or very specific actions, and it doesn't independently learn. So it does learn, but it's not independently reacting the way you would see in maybe a self-aware AI. Examples would be self-driving cars and ChatGPT, where you're putting in inputs and it's answering your questions or producing material. This is the type of AI that we're going to be talking about today.

(12:12):

So as you see here, there are some types of AI that can do more human tasks, can do future tasks, and can respond to human emotions. Generally, they're less common. That's not what we're really seeing out in the news and in our apps. So I really want to make sure we understand we're talking about a very specific type of AI, which is narrow AI. Before we get further into this discussion, one of the things I wanted to talk about is security, trust, and human beings, because I think it's really important to stress this. We've seen that the justice industry is always slower to adopt new technology, and a lot of that is because of security. Security is a top priority, and it takes time to implement security measures and policies, which are essential to protecting not just justice data generally, but also personally identifiable information.

(13:16):

Your PII, it's essential to keep all of that private. And at equivant Court, we've been working on our own secure, self-made tool so that we can understand how AI works, and that can better inform how we develop our tools and our applications within the justice system. We understand that it's important that courts and justice agencies are able to trust and understand what's going on within their tools, so that knowledge can guide their general AI strategy. So this slide is just one example of some of the risks that you can see when it comes to AI.

(14:07):

The very top one is inaccuracy. So for the tool that I was using, I asked it to create some icons for different types of AI, and it got close. There are some cool icons there, but it certainly didn't spell it right; I was telling it to concentrate on narrow AI, and it didn't quite get there. So it's really important to understand that these three pillars need to work together: security, trust, and humans. Alright, so how do you invest in AI? Why should you be investing in AI? We believe that AI can improve court efficiency, it can change how courts function, very similar to how computers really changed the court process. And again, we are talking long term. We've been in the business for over 30 years, and I've been here for 25 years. I remember showing up, when implementing our case management system, we showed up with the computer. So it's important to understand that AI has that big of a difference, and I shouldn't say difference, I should say impact. AI has that much of an impact on how courts will be functioning.

(15:31):

So with the computers that we brought into the court function, we saw it go from paper files, and maybe a hybrid of files and a case management system, to a lot of our courts now using a complete electronic case-on-demand solution. And we've also seen this when it came to virtual court processes, or courtroom processes. Initially we saw an occasional video hearing when it came to in-custody defendants. Now we've seen more hybrid virtual courts, and we've even seen completely virtual courtroom processes. So AI has the ability to affect these judicial procedures by introducing different tools to simplify a lot of your tasks, like creating motions and proposed orders. For example, let's say that you wanted to create a proposed order or create a motion based on case data and case law; an AI tool can do that. An AI tool can produce plea agreements and judgments using case data, your minimum and maximum sentencing guidelines, bench cards, your court's local rules. It can pull all that together and produce documents.

(16:54):

Normally this would take maybe a legal assistant going through research to do. And now we are starting to see tools that can do all this research in a very short amount of time. So we really believe that AI is going to improve court efficiency, and now's the time to start investing in AI. So how do you do that? We should really understand what's available to use, and in order to do that, we really need to invest some time. We also need to create the environment to innovate, and what that means is allowing users to try the tools, to test the tools, to use the tools. It's also important to look at digitizing your workflows today using your CMS and any other software tools you currently have, because AI requires data, and it requires that data to be digital. And then start thinking about what simple tasks you can automate.

(18:04):

So from those digitized workflows, what can you pull out that you can simplify? And above all this, you should always be monitoring and evaluating your tools to see what's working for you and determine what needs to be retired. It could be a legacy case management system, or it could be a new tool that just didn't work out. So on this particular slide, and like Brendan said, we'll send you a copy of this, we always pay attention to the trends and what's going on across the different agencies. And we've put a link here that you can look at on how to navigate the AI revolution. It's a guideline document that we feel is important for you to look through. Okay, so we've set the stage for how AI can improve your court efficiency, and hopefully you're getting excited and you're ready to get started.

(19:10):

But what business use cases should you be applying these AI tools to? And the most important question is, what AI tools should you be using? So let's look at that: how do you get started? I'm going to recommend some initial steps that you can take to get started, and we're going to talk through these different steps. There are four of them. At equivant, we use this process not just for this particular technology piece, so not just for AI, but for any trend that comes our way in our software space or even in the justice community. Following a path and following a phased approach is really important in this process, especially when there could be risks involved. And that's why we've mentioned risks twice here in this particular discussion, because it's something that you really want to put your finger on while you're going through this process. So let's talk about creating an AI working group. What does that exactly mean? Like I said, we do this step at equivant Court. What we do is we pull in the subject matter experts from technology, from business, and even from leadership as well. We also like to pull in early adopters and influencers, people who like to talk about all the new things and convince people to use the new things. Those are great people to pull into a working group.

(21:03):

We find that you get a much bigger perspective when you mix this type of group of people. So again, people from technology, people from business, people who are leaders, people who are early adopters, people who are influencers. When you bring this group of people together, that's when you want to talk about your policies, your regulations. You can use this working group to develop your strategy and your plans for your AI governance, for your adoption process. Without a working group, many times what we see is that your court leadership makes these decisions. So your elected officials, your administrators, your CIOs, they're making these decisions. But what we've often seen, and we do this in our own software development process, is we pull in our customers, people outside of the developers and the implementers, and we create an advisory board, a working group, so we can discuss software enhancements and software changes. And we think that this is helpful because often we get a great idea, it's awesome, but maybe our customers don't think so, or maybe your end users don't think so. So it's good to pull in not just technologists, but people from all parts of your business to discuss this new tech. We always recommend starting with a small group of people first, because you can always build and bring in more people and bring in more opinions as you get more knowledgeable about where you want to take your

(22:50):

AI plan. So another thing that we

(23:00):

always discuss is risks. Technology does come with risks, and you'll probably talk about lots of these risks when it comes to policies and governance. But here's a short list of the types of risks that we talked about as we went through looking at AI tools and how we built our business case based on discussing these risks first. The first line that we talked about was accuracy and reliability. Here's another AI picture; again, it got close to what we were trying to do, but not quite. So it's really important that you're looking at accuracy and reliability, especially in the legal world. It's sometimes less important when you're doing something like this, producing a slideshow, and much more important when we're trying to use that tool to create legal proceedings. So accuracy and reliability are something that you really need to look at. The next one is data privacy and security. We are already looking at sensitive data. We've got court data, personal identifying information, person data that have lots of implications if they get out past our usual controls. So it's important to talk about and discuss data privacy and security. Then there's transparency and accountability. Any AI decisions that you make should be transparent, and they should also be explainable. You should be able to explain why you're using these tools, and then what that tool actually does with your data and how that is affecting your security.

(25:08):

It's important that when it comes to accountability, that's when you're bringing in your person, your human. That person's going to be reviewing your filings and making sure that all of that is super clear when it comes to using that AI tool to build anything that you are producing in your agency. The next risk that we talked about is dependence on technology. Very similar to what I was just talking about, but this is more about being overly reliant on the tool. Often we want to trust that the tool's going to do everything perfectly, because it's super exciting, but we want to make sure that we always have oversight, that humans always have oversight over the technology. That experience of a person, not just with your everyday processes but with the court and the justice industry, helps balance the technology with what you're trying to accomplish every day. And then the last risk that we talked about is legal and ethical considerations. We definitely want to make sure that we're following all the laws and all of the ethics that your jurisdiction or your company or your agency has put in place.

(26:51):

It's important to talk through what compliance means for using this tool. Alright, I talked a lot about risks. It's something that we always should be considering when it comes to bringing in a new technology tool, especially one that can see all the data that we work with every day. Alright, so we talked about an AI working group, we talked about the risks. The next thing that we should do when it comes to implementing AI is long-term planning. In essence, this is really visioning, and this slide is intentionally blank, because when it comes to long-term planning, we want to see the big picture. We don't want to be restricted to what's in front of you. We want to think about the ideal, best-in-class scenario. So you want to really define what that future state of business looks like and think beyond five years. We tend to suggest go 10 years, go 15 years, and think about it without any budget constraints. What does your ideal courthouse, your ideal agency look like? I'll give you some examples. What does your courthouse look like 10 years from now? Do you have less courtroom space? Maybe you have participant booths inside your courthouse instead, so people can go in and use these booths because they're equipped with technology that they may not necessarily have at their house, and so they can still participate in the court process, even if it's an online meeting.

(28:53):

Think about what kind of services would require a human versus what can be assigned to a tool. An example of that would be: today you might be doing alternative dispute resolution, so you can move that into an online tool. And maybe you're also using e-filing today, but you can implement a self-represented litigant tool that automatically submits the case and automatically produces the case documents that you need to initiate a case. You can also think about what kind of analytics or information you want to work with. I know we often use predictive analytics when it comes to financials and caseloads, but you can also look beyond that and use it for staffing. Maybe you have some analytics that tells you what your courtroom traffic is, and you can adjust staffing based on that. So think big picture, think 10 years out. And again, think without any budget constraints; just think about what that looks like for your court and for your agency.

(30:19):

Now the final step we're going to talk about is short-term steps. So what can you do now? What can you do next? We always recommend checking for any rules around AI, of course, any federal, state, or local rules around AI. We found that with the big explosion of publicly available AI, a lot of governments are declaring guidelines for how you should be using AI, or whether you should be using AI, and also about declaring AI, how to state that you've used AI for any of the material or content that you're producing. So I would definitely start there. And then we're going to talk about risks again, but in a different way. Write down your biggest risks, and then write down, if that scenario actually happened, what you would do to mitigate it. The reason why we go through that particular exercise is that it helps ease the uncertainty.

(31:28):

So if you can think about your worst-case scenario and create a plan for what you should do if that happens, then there's some predictability there. And then finally, start with one tool. There are a lot of tools out there, and it's kind of like when you're cooking and you're trying to adjust the recipe: if you change five things at once, you're never going to know what's really changing the taste. It's the same thing with technology. It's better to start with one tool so you can isolate that tool and see how it is affecting your daily activities. And if that works, great, move on to your next tool. If it doesn't work, you can retire that tool and start with something else. For equivant, we had the ability to develop an in-house tool, but we're a technology company, so we know that's not the case for a lot of the agencies and courts that we work with.

(32:39):

So that's okay. We suggest that you go out and look at tools like Copilot. A lot of our workstations are equipped with Windows, and Microsoft is already pushing out Copilot, which is an AI tool built into a lot of the Microsoft apps that you're using today. If you're a Google organization, the equivalent would be Gemini. And you can start with these tools using really simple tasks. What I mean by simple tasks is, think about these tools as a virtual assistant, a junior assistant, because you do have to review the output; again, risks, accuracy, privacy, you want to look at that. But have the tool write a status report based on raw meeting notes. You can also have the tool help you change the tone of an email. Maybe it needs to be more formal because you're sending it to a coworker, or maybe you want to take the legal jargon out of it because you want to send that email to a case participant.

(33:50):

These are the types of smaller tasks that you want to start with. They'll get you comfortable, and then you can build your skillset from there. One base guideline that I always mention when it comes to using AI tools, whether you're using them professionally or personally, is to understand what happens to your data. You may need to go to your IT staff, your technology people, to really understand what's happening with the tool that you're choosing, even if it's Copilot, even if it's Gemini, because many publicly available tools will store your data, and then it becomes public domain, out for everyone to use. So definitely go have a chat with the technologists at your agency and see what tools could be available to you. And again, start with that one tool and start with simple steps.

(35:03):

Alright, so we've talked about being overwhelmed by all the information that we're seeing about AI, and we've literally narrowed that down to talking about narrow AI, a type of artificial intelligence that lets you work on specific tasks and specific steps. We talked about why this is important to a court or to an agency in the judicial space, and we've offered some initial steps, some initial things that you can do in order to move forward. So let's talk about what you can do right now, the next steps after this webinar is over. What I always say is, pull together a group of your AI-curious staff members. A lot of you will already know who those people are; it's probably everyone who's here at this webinar, to start with. And start the discussion.

(36:18):

Start talking about AI, share both your ideas and your concerns, because it's good to talk through both, and start learning from each other. You may find that there are lots of people who are already using some type of AI tool; we are probably using more tools than we think we are. So pull a group together and, again, implement one tool. I challenge many of you to implement a tool this month, and I know this is only five days away from being next month, but go out and talk to your technologists. Talk to them and say, what can we implement tomorrow? What can we implement right now? Something that you can have most of your users try so they can test out the capabilities of what AI can do, and think simple tasks. And then finally, if you're still super overwhelmed, that's okay.

(37:23):

AI, again, is a big topic, and it can seem very overwhelming to go out and talk to your technologists and to your court leadership and say, hey, I want to do this new thing. But that's why we're here. We are doing this webinar to offer our knowledge and our experience on what we've done with AI and how we're helping our customers move further into that technology. After this webinar, you'll get some information on how to contact us. So whether you're a customer or not, feel free to reach out, because we can offer some one-on-one time with you. And so at this point, I want to go ahead and just pause and see if there are any questions in the Q&A, and I'll pass that over to Brendan.

Brendan Hughes (38:28):

Yeah, thanks Sherry. That was a great presentation, and no questions yet in the Q&A, probably because you did such an excellent job. It's all very straightforward, or everybody's heads are still spinning, one of the two. But as you said at the very end there, this is something that I know you're very passionate about, and there are others in our company that are passionate about this topic as well. Because I've heard you, and I've been a part of some of those discussions, you guys love talking about it and are willing to talk to anybody about these things, whether it's a hundred percent related to a court case management system or your court process, or something a little bit tangential to that. I know you guys are willing to talk to anybody about these things and are more than happy to help. So again, if anybody has any questions that come up after this event, feel free to reach out to us and we'll get you connected to somebody. And as Sherry said, and as I said at the start, we'll send everybody an email tomorrow with the recording. Any last words from you, Sherry?

Sherry Agcanas-Wolf (39:39):

No. I'm just going to pop out of this slideshow. Oh. So I'm really appreciative to have everyone here and listening to what we have to say about AI and AI tools. As I mentioned before, we've been in the AI space before, and we're doing our own internal work to better our processes and our procedures using AI tools. So we're not just looking at these AI tools to pull into our CMSs, we're looking at them as well for our internal processes. Like Brendan says, we're really passionate about new tech, and we like to guinea pig ourselves and test it out before we roll it out into our software and out to our customers. So definitely reach out if you have any questions, and hopefully you've learned a few new things today. I'm excited to have more conversations with all of you about AI and how it can affect the court and justice industry.

Brendan Hughes (40:57):

Great. Alright, well we’ll end it there. Thank you again, Sherry. And thanks for everybody joining us and you’ll see communication from us in the future, and I’m sure we’ll have more conversations and topics like this upcoming, and we’ll make sure to promote it so everybody knows when and where. So thanks again everybody.