Emerging Ethical Issues For Lawyers Using AI | Derek Bauman

November 16, 2023 | by D. Todd Smith

Artificial intelligence is rapidly evolving and is beginning to impact almost every aspect of our lives. The legal industry is no exception, and numerous ethical issues have emerged as a result. In this episode, Todd Smith and Jody Sanders explore the ups and downs of lawyers using generative AI with Derek Bauman of Feldman and Feldman in Houston. Derek emphasizes the importance of understanding AI’s limitations and the ethical implications of using it. Using real-life examples, he also delves into the consequences of relying too heavily on AI-generated content in court submissions.


Our guest in this episode is Derek Bauman of Feldman & Feldman in Houston. He’s an appellate lawyer there. Derek, thanks for joining us.

It’s my pleasure. I appreciate you having me here.

Could you tell us a little bit about your background, how you got into law, and how you got to where you are in your career?

I do have a story in the can that I bore many people with at parties, but I quite accidentally stumbled into law. I have to tell the short version. It’s your fault because you brought it up. I was working at a job, but it was a job where I would go in and turn off my brain to survive the day. I had a boss who was amazing. I will name him: his name is Jesus Gonzalez. When I was young and in my twenties, he was on a pressure campaign where he said, “Derek, you’re wasting your life. You need to figure out what you’re going to do with it.”

One day as a joke, and it was supposed to be taken as a joke where everybody laughed, I said, “How do you think I’d be as a lawyer?” Everybody I told this joke to didn’t laugh and said, “You’d be good at it. You should explore it.” That’s what set me on my path. In law school, I thought I’d be a transactions attorney, and then I took my first transactions class. I was like, “No. I’m not going to be a transactions attorney.”

By mere coincidence, I had signed up for an appellate class that I thought I would hate and found out that semester that I loved it. From that moment on, it became my goal to become an appellate lawyer. I spent some time as a briefing attorney. They had a full set of briefing attorneys at the First Court of Appeals; it was the last year that all nine justices had one. I was very fortunate to get that. That marked my career and allowed me to become an appellate specialist.

You clerked for Justice Higley. Is that correct?

That’s right. I clerked for her for one year and then I went into private practice. After some time, I went back to the court and was a staff attorney for Justice Higley. I spent many years with her. She was an amazing woman and a fantastic judge.

We also can’t leave out that you were part of the legendary 2005 class from the University of Houston Law Center. You were a staff attorney after being a briefing attorney and then went to the city attorney’s office.

I went to the City of Houston for a brief stint. I honestly loved it. It is a great group of people, and it was interesting work to have, but not long after that, I ended up making the jump to where I am at Feldman & Feldman. I am the appellate attorney, though there was another guy here for a while.

What kind of work does Feldman & Feldman do?

It’s any number of things. I get handed something that I didn’t even know was an area of law, and the instructions are, “Figure this out.” I’m like, “Good.” It’s civil, but we do a fair amount of governmental law, which is part of how I found out about the firm. The matters we take on are something new every time, and they often surprise me.

Did you do appellate work at the City of Houston?

I did. They had a small appellate group and I was a part of that group. I am labeled as the appellate guy here but a very good chunk of what I do is civil litigation.

It’s good to be labeled the appellate guy.

I wish that was my only label here. It’s great. I do get a lot of different things. This may sound corny, but in my legal career, I am at my happiest when I’m writing a brief. If I can block out all the other things that I’m supposed to be doing and sit there writing a brief, I’m a happy man.

A friend of mine refers to it as falling down the writing hole and it’s a good place to be.

I love it. I’d stay there if I could but then I don’t have enough briefs to write.

The fact that they do some government-related law over there, was that the natural fit between your prior time working in the government and moving into Feldman & Feldman?

It was, and it was also that they were looking for someone who had appellate experience, which was part of the attraction. I had a fair amount of governmental law under my belt, and also, I’m board-certified in appellate law. They were looking for that as well. If anybody knows the name Feldman & Feldman, they know Dave Feldman was the city attorney for a while. That was part of the connection that brought me in. I didn’t know him before I came here, but those two worlds intersect.

Let’s back up a little bit and talk about AI. You’ve had a long-standing interest in technology and AI. Where did that come from? How did that start?

I’ve always had a longstanding interest in technology and computers. My knowledge of AI specifically is more limited; my interest is in computers more broadly. If you’d asked anybody who knew me back then what I was going to study in college, everybody, without hesitating, would have said computer science was where I was heading. If you’d asked me that, I would have given you the same answer.

In my senior year of high school, I can’t explain why, I got a wild hair and took a theater class and loved it. In college, I said, “I’ll be a theater major,” which surprised everyone. I continued on that path. How that’s law-related is another story I have in the can for boring people at parties: my adventures once I graduated college and realized I didn’t want to move to LA or New York. What do I do then?

Computer science was always in the background of Derek’s interests, so I’ve kept this continuing interest in it, purely at an amateur level. I do not claim to be a professional in any way, but while I was a staff attorney at the First Court of Appeals, we had some software that helped create the shell document for our opinions, and at one point I was like, “This is rather rudimentary.”

I knew just enough that I could improve it, so I spent some time at the court improving the system that helped create the shell document everybody wrote their opinions in. You can look at any job that I’ve had and see the moments where Derek dips his foot back into the computer science and technology world, but only at an amateur level. I’ve always kept abreast of many things as far as computer science goes. I have a rudimentary knowledge of Visual Basic, for those of you who are computer programmers. It doesn’t go beyond that.

I’ve always kept in touch with those things. At any firm I’ve ever been at, including the court, I’ve always been the unofficial IT guy. They’ll go to me first. If I can’t find the answer, then they’ll go to the IT guy and give him money to do this stuff. At the court, the guy Eddie is great. We had a great relationship. There are things he didn’t know that I did and things that I didn’t know that he did. We would often refer to each other. I got a salary as a staff attorney, and they didn’t care if I was also helping somebody figure out how to do something, as long as I got my work turned in. I’ve always had, at an amateur level, an interest in computer science and all that stuff.

I want to know whether you have found your undergrad in theater to be useful as a lawyer. Some people think of law as performance art in its own way. I wonder if there’s some translation between what you did as an undergrad versus being a lawyer.

I want to start by saying that before we started recording, I promised that I wouldn’t cuss on the show, so I’m going to say exactly yes. I have a background in acting as well, but my primary background in theater and RTF, or Radio-Television-Film, was playwriting and screenwriting. I studied both as an undergrad. When people saw I had a theater degree, they thought that Derek was the guy who wanted to be in front of the judge and argue everything. I was like, “No. Derek is not that guy. Derek is the one who runs off to write your brief because that’s what I have experience in.”

Everybody has their style and approach, but I believe that the education I got in undergrad in storytelling and narrative has had a dramatic influence on my ability to write a brief. Everybody has their own thing that they bring to it, so it’s not like I’m better than everybody else, but if you read everything that I’ve written that goes before the appellate court, you can identify, “Derek has a background in drama.” Hopefully, that’s a beneficial thing. I believe it is. It very much influences how I write.

Artificial Intelligence: Every lawyer has their own style and approach in writing briefs, but having a background in storytelling and narrating certainly has a positive influence on your writing skills.

Let’s set the stage a little bit here. When we talk about AI or Artificial Intelligence, what are some background understandings or definitions that’ll help with that conversation?

For the computer programmers that are reading this, remember that I emphasized repeatedly that I’m an amateur. A lot of my definitions, for an actual computer programmer, are going to make their head spin.

These are for appellate lawyers so you can say whatever you want to and nobody is going to know the difference.

You’re among friends.

I know, but I’m just saying to any computer programmers: go find your own show to read. From my layman’s perspective, it helps to start back a bit. We’re all of a similar age, and we can remember that in the early 2000s, algorithm was the term du jour for anything electronic. If you had an algorithm, you were leading the pack. What algorithm means as far as computer science goes is a lot more detailed than what I can give you, but the way I’m using it here, it’s a very long and technical process developed into a system.

It’s a system within a computer program, and most often where you saw an algorithm was Google or Facebook. It’s a sorting algorithm to decide what to put up on top that’s going to keep you engaged the longest and that you’re going to find more relevant, so that you keep coming back to that website. Back in the early 2000s, if you said, “I have an algorithm,” people were like, “You’re special. I want to go to your website.” What that meant could be a wide variety of actual things when you get down to the details.
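
To make that concrete, here is a toy sketch in Python of the kind of sorting algorithm being described. The posts and scoring weights are invented for illustration; real feed-ranking algorithms are far more elaborate.

```python
# Toy "engagement" ranking: sort posts so the content predicted to keep a
# user on the site longest goes to the top. All data and weights are invented.
posts = [
    {"title": "Local news update", "clicks": 40, "seconds_viewed": 300},
    {"title": "Cute cat video", "clicks": 900, "seconds_viewed": 8000},
    {"title": "Tax law explainer", "clicks": 15, "seconds_viewed": 2000},
]

def engagement_score(post):
    # A crude stand-in for "what keeps you engaged the longest."
    return post["clicks"] * 2.0 + post["seconds_viewed"] * 0.1

# The "sorting algorithm": highest predicted engagement first.
for post in sorted(posts, key=engagement_score, reverse=True):
    print(post["title"], round(engagement_score(post)))
```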

Artificial intelligence is now the term du jour for technology. It’s probably broader and much harder to define, but for our purposes, let’s say it’s how I define it; you’ll find roughly the same thing online. It is a process done by a computer that, if it were done by a person, you would consider a demonstration of intelligence. It’s not just creating a list or adding numbers, unless they’re complex numbers; it’s any process sophisticated enough that if you saw a human doing it, you would describe it as intelligence, which is a vague definition.

What I focus on in the paper that I’ve been doing, and that we’re talking about here, is something more specific called generative AI. That can mean many things, but it’s a form of artificial intelligence that can create something new that did not exist before. The more common example is it creates a piece of art that did not exist before. You could find similar things, but you couldn’t find this one thing.

You can get a string of text or a story that you could not find before. That’s what generative AI is supposed to do. That’s part of why it’s considered artificial intelligence: creating this prose, artwork, or piece of music is something we had previously ascribed to human capacity and didn’t consider to be part of what computers could do. Those are the basics that lay the groundwork for what we need to go over.

You mentioned your paper and before I let it get away from me, Jody shared a copy of it with me and I have to admire the Philip K. Dick reference in the title.

Thank you. Just so everybody knows, the title here is Do Android Lawyers Dream of Electric Billable Hours? As you’ve referenced, it’s a reference to a Philip K. Dick novel called Do Androids Dream of Electric Sheep? That was very intentional. It’s a play on that title. That novel then became Blade Runner, so in my way it was an homage to Blade Runner. I’m not the world’s biggest sci-fi geek, but put me in front of Blade Runner and don’t talk to me after that. I will get mad at you if you talk to me during Blade Runner.

That’s one of my favorite genres, which is the only reason I knew the reference. When I saw the title, I was like, “This is clever.”

Thank you. I appreciate that. A friend of mine and Jody’s is Christina Crozier. She had helped me along at certain stages of preparing this. She did the first presentation on it because I was going to be out of town. I had to explain the meaning of the title to her so that if anybody asked, she wouldn’t be like, “I don’t know what you’re referring to.” That’s my creative side coming out. I had to find a way to wrap Philip K. Dick into this.

I’m going to preface this next question: it’s for appellate lawyers, not computer scientists. How, generally, does generative AI work? I’m talking more about the ChatGPT type than the DALL-E type.

Let me quickly describe those two things because both of those are run by OpenAI. There are others and predecessors, but the big name is OpenAI, which created DALL-E, which does artwork, and ChatGPT, which does conversations or prose, whatever you want to call it. We won’t focus on the artwork from DALL-E, but you give it a text prompt and say, “Draw me a picture of this,” and it will do that. You can refine that, so it’s taking text prompts and turning them into images. It’s quite amazing. It’s fascinating to see it work.

There are a lot of other things that are similar to that. In this conversation, when I refer to ChatGPT, there are a whole bunch of competitors that work similarly, but I’m using ChatGPT as a stand-in for all of them. If I can tangent real quick, there’s the Turing test, and that helps us describe it. To me, the two are related.

There’s Alan Turing. I don’t know that much about him, but I admire him. I don’t know if I could teach an entire CLE on him, but I would be willing to spend the time to learn enough for one. He was involved in World War II and was still working very heavily into the ’50s. In the ’50s, he developed an idea called The Imitation Game. The idea of The Imitation Game is you have someone who’s the interrogator, the term that we’re going to use. Let’s say you, the readers, are the interrogator.

You will be receiving communications, not one but two: one from a computer and one from an actual person. You communicate however you want. You can ask them anything you want. You can say anything you want and get a response back. You can have a conversation with them. The idea is: can you identify from these open-ended conversations which one is the human and which one is the computer? That’s what he called The Imitation Game.
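
For readers who want the structure of the game spelled out, here is a minimal sketch in Python. The canned respondents are invented stand-ins; in a real game, one channel would route to a person and the other to a chatbot, and the interrogator would see only the anonymous transcript.

```python
import random

def human(question: str) -> str:
    # Invented stand-in for a person typing replies.
    canned = {"What is 12 x 13?": "156, give me a second to check that."}
    return canned.get(question, "Hmm, let me think about that.")

def machine(question: str) -> str:
    # Invented stand-in for a chatbot.
    canned = {"What is 12 x 13?": "156."}
    return canned.get(question, "That is an interesting question.")

def imitation_game(questions):
    # Randomly hide which respondent sits behind label A and label B.
    respondents = {"A": human, "B": machine}
    if random.random() < 0.5:
        respondents = {"A": machine, "B": human}
    for q in questions:
        print(f"Interrogator: {q}")
        for label, respond in respondents.items():
            print(f"  {label}: {respond(q)}")
    # The interrogator's whole task: from the transcript alone, guess which
    # label is the computer. The answer key is revealed here only for demo.
    return {label: fn.__name__ for label, fn in respondents.items()}

print(imitation_game(["What is 12 x 13?", "Tell me about your childhood."]))
```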

His prediction, and it’s a lengthy prediction that involves numbers so I won’t read it all out, was that by the turn of the century, computers would be advanced enough that an average interrogator would have no more than a 70% chance of making the right identification after a few minutes of questioning. The conclusion that he drew from that was that he expected, around that same time, that people would be able to refer to computers as thinking in the normal sense of the word and nobody would object to that description.

Artificial Intelligence: Alan Turing believed that by the turn of the century, computers would be advanced enough that an average interrogator could correctly identify them no more than 70% of the time.

The reason I’m bringing it up is that ChatGPT fits into that. We know it is computers. “I’m talking to a computer,” but it feels like you were talking to a person. It’s very easy to fall into the trap of believing that you’re talking to a person. We can get more into that later on what the implications of that are but it can be anything.

You can ask it a question. It’ll give you an answer. You can ask it to give a philosophical response to a philosophical question. You can ask it to write a cover letter for you. It’s very broad and honestly impressive in what it can do. As opposed to DALL-E, which is images, this is all text-based. You give it a question and it gives you a response back. That’s the overview; we can get into the details of what that means and what it’s doing if you want.

People may have heard that phrase before, The Imitation Game. I knew it primarily from a movie a few years ago where Turing was played by Benedict Cumberbatch. If it sounds familiar to people, that’s the reason why.

If you want to learn more about Alan Turing, that’s a good movie to start with. It’s dramatized but it’s a good way to start off to learn about him.

One of the things you talk about in your paper which is a very simple example is predictive text on your phone or in Google.

To me, that’s a simpler way to look at it. If you are texting somebody on your smartphone, it has that little bar above the text with three ideas of what it thinks you want to say next. The example that I gave in my paper is if you type, “I’m going to make mac and,” cheese is going to be right there as one of the choices it offers, but if you put in, “board-certified in appellate,” it’s not going to know what to say after that. That is a simplified version.

That is the way the text prediction on your phone works. It takes a fairly large database of things that have been written before and compiles it: “When people have written this before, most of the time the next word is going to be one of these three things.” That’s what it’s trying to do. The more common a phrase is, the more likely it can predict what the next word is. The less common it is, the less likely it can predict what the next word you’re going to use is going to be.

That is a very simplified but accurate way to look at what ChatGPT is doing. ChatGPT is doing it on a much larger scale but what it’s doing and I wanted to emphasize this, I’m sure we’ll discuss this many times, is it’s doing a linguistic prediction of, “With this prompt that this person has given me looking at this huge database of text that I have to review, what is going to be the most likely thing that is going to fit linguistically in response to what this person has said?”
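
To make the analogy concrete, here is a minimal sketch in Python of a bigram next-word predictor, the phone-keyboard version of what Derek describes. It is a toy; the training text is invented, and ChatGPT’s actual model is vastly larger and more sophisticated.

```python
from collections import Counter, defaultdict

# A tiny corpus standing in for the "fairly large database" of prior text.
corpus = (
    "i am going to make mac and cheese . "
    "she is going to make mac and cheese . "
    "we are going to make a cake ."
).split()

# Count which word follows each word (a bigram model).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def suggest(word, k=3):
    """Return up to k likely next words, like the bar above a phone keyboard."""
    return [w for w, _ in following[word].most_common(k)]

print(suggest("and"))   # ['cheese'] -- common phrase, confident prediction
print(suggest("make"))  # ['mac', 'a'] -- rarer context, weaker prediction
```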

This might be one of your questions, and it’s worth jumping into. There are two things this means that are important to understand. First of all, the information going out is only as good as the information coming in. If there is bias in the information, there’s going to be bias in its response. As you might imagine, and this probably is not going to surprise too many of your readers, there’s a lot of biased information on the internet. I don’t know if you guys knew that.

I can’t say the internet is its primary source, but I know a lot of its information comes from there even if it’s not the primary source. If your primary source of information is the internet and you’re trying to draw conclusions based on that alone, you’re going to have a lot of bias come out in the responses you give. The other very strong implication of this is that ChatGPT cannot know if what it is saying is true or false, because all it is doing is looking at things that have been written in the past and trying to predict what to say at the moment.

Whether any of those things are true or anything that it is saying is true, it can’t know. This has led to what has come to be called a hallucination. That’s maybe another term that we need to go over. The shortest definition of a hallucination is something that would be considered a lie if a human were saying it. It’s not true.

The reason it’s called a hallucination is because again ChatGPT doesn’t know. It doesn’t know it’s “lying” because it doesn’t know what’s real. It doesn’t understand fact from fiction. That is a huge risk that we’re going to go over a lot more. We’ll go over that in pieces but that’s the huge downside in my opinion to using ChatGPT in the legal context. If you’re doing it for fun, why not? Go for it. It’s fun but it’s a real liability in the legal context.

One other background piece, speaking of terms used in a context outside of what people will think they mean: the LLM and the lock-in date. Can you explain what those concepts are and how they impact what goes in?

LLM means Large Language Model, and that’s what I’m referring to as this vast database of information that’s coming in. I don’t know a lot about how those things are structured, but I did read an article that was interesting. I don’t want to miscite it, but I think it was in The Atlantic. I could be wrong. Maybe this applies more to DALL-E, but a lot of these companies have low-wage workers in third-world countries drawing connections for pictures, which is the DALL-E side, describing the things that appear in each picture.

That information is then used as part of the large language model, or whatever the equivalent is called for DALL-E; it is something similar. There’s a lot of tagging going on by low-wage workers across the world. I don’t know the details of this, and I don’t want to get over my skis. I just read an article, I have to confess. The other thing you asked about was something like a date. I can’t remember what it’s called.

Is it the lock-in date?

With ChatGPT for example, the database has a cut-off where it can’t access information existing after a certain date. It’s October of ’21 I believe.

It’s something like that. It’s been updated a couple of times, but there is a limitation to its learning.

You can’t give it the most recent things.

It’ll acknowledge its limitations. You’re not going to get the most up-to-date information asking ChatGPT questions, but it acknowledges the limitation, so there’s at least some disclosure. After some recent events, it started giving more detailed disclaimers of what it could or couldn’t do. I wanted to ask a quick follow-up because we’ve been hearing for many years about this whole concept of big data. Is the LLM, the Large Language Model, related to that in any way, or is it more focused on a database that the people who built ChatGPT control and use to generate the GenAI responses?

I can’t say that I know a lot about it, but my sense is that what we speak of when we talk about big data is collection about us: getting our information as we use the web. To some degree, that’s brought into ChatGPT, because if you ask about historical figures or popular people, it’s going to know who you’re referring to in a lot of these circumstances.

However, I don’t think it’s big data in the sense of what I’m thinking of, where I have a conversation with a friend near an Alexa speaker and all of a sudden I’m seeing ads for that thing a few moments later, that sort of thing. That’s the way I envision it, but I don’t know how much those two interrelate, to be perfectly honest.

Let’s explore a little more a couple of the things that you talked about in the concept of bias. In your paper, you give a specific example of a defamation lawsuit. If you want to reference that, it’s pretty interesting.

It’s fascinating to me. I’ll go on and describe it rather than say how fascinated I am. There is a guy by the name of Mark Walters. He’s a fairly conservative guy. I don’t know a lot about him. His main job is to be a radio host in Georgia. Separate from that, there’s a guy named Fred Riehl. He’s a journalist and was doing an investigation into a lawsuit in the Western District of Washington, which the appellate lawyers all know what that means.

There was a complaint that was filed in this case in the Western District of Washington. He provided a link to that case to ChatGPT and then asked ChatGPT, “Can you provide me a summary of what this complaint is?” ChatGPT was more than happy to oblige and said that the suit was against Walters, a radio host in Georgia. It said that Walters had worked for the Second Amendment Foundation and, at some point in the past, had been the treasurer and CFO of that organization.

In this lawsuit that ChatGPT had been asked to review, the foundation was suing him for embezzling funds and manipulating financial records. The problem is that none of that is true; these are all what we’re referring to as hallucinations. The lawsuit did not involve Walters at all. Walters has never worked for the Second Amendment Foundation and has never been accused of embezzlement by anyone, let alone this organization.

The journalist informed Walters, and then Walters turned around and sued OpenAI. I should have done some updating to see if there’s been any activity on that, and I haven’t. We’re all attorneys here, not just appellate attorneys, so we could get into the ins and outs of that lawsuit. I don’t want to go too far astray, but he sued OpenAI, which is the entity that runs ChatGPT.

I agree that’s the correct entity to sue, but it wasn’t that entity that said that thing. They just created some software that said that thing. Can you make out defamation against the company? You have to have intent. How do you show intent for that company when ChatGPT is just spitting out random information? That’s a fascinating question. I could spend hours on my own turning that over in my head. How do you show that there’s a mental state?

Then there’s a fairly simple question that we all know: “What are your damages?” The only thing that happened was that Riehl saw it. From what we can tell, Riehl didn’t believe it, and then he told Walters. The damages are nothing, or limited if they exist at all. I don’t think anything can come of that lawsuit, but it raises the question of what sort of lawsuits we are going to see in the future that we just didn’t dream would exist in the past.

It’s also related to hallucinations. If you’ve done your research on ChatGPT, you know it can’t tell fact from fiction. The way I said it in one of my presentations is, “Can a Magic 8 Ball defame you?” If the Magic 8 Ball can’t defame you, can ChatGPT? It’s not much different; ChatGPT is just fancier. I don’t mean to offend anybody at OpenAI, but in my opinion, I consider ChatGPT to be this interesting parlor game and not anything else. I don’t know. Should you be able to trust it enough to be defamed by it? That’s the point I would drive home.

This is the kind of thing, though, because this is still a novel area, where we’re going to have test cases. It seems to me this has got “test case” written all over it.

It’s doomed to fail, but it’s an interesting test case. It’s like, “What sort of things are we going to see in the future that we can’t dream of now?” My wildest dream of a future lawsuit is not going to be able to encompass what is going to come down the road, probably sooner rather than later. This is a sign of things to come and what we are facing.

One of the things that might be worth covering for folks who are learning about ChatGPT and getting to know what it does is it has limitations on its ability to reach the internet to gather information. You said that what was provided here was a link to the actual lawsuit.

The thing is, I don’t know. I don’t claim to in any way know the inner workings of ChatGPT or how OpenAI created the software. The guy provided a link, but I would seriously doubt that ChatGPT was able to open that link and review it. It was told to give a summary of a case, so it started making stuff up. My guess is that it had no idea of the contents of that link. It just tried to provide a response that would fit linguistically with what it had been given to do.

Artificial Intelligence: ChatGPT may be unable to open links and review their content. It could just be coming up with a response that fits linguistically with the user’s commands.

I’m still thinking. Based on my experience in using ChatGPT, wouldn’t it be more beneficial if it had the capability to go and analyze the data behind a link? Maybe this is not possible in this model, but maybe it would not have hallucinated had it been able to access the information in the link. There are other models, like Claude, for example, that make it pretty simple. You can essentially upload a PDF into the interface and it’ll examine the document itself. Potentially, this could have been prevented if a different interface had been used.

It’s possible OpenAI can do that now. From its response, I’m assuming it couldn’t read a single thing off of that document. To be able to give a better response, it would also need to have other complaints and litigation-type documents in its database to compare against. Maybe it does. Maybe it doesn’t. I don’t know, but that would have to be part of its database as well.

I’m showing my ignorance of how the technology works.

I don’t claim to know it very well. On a certain level, I’ve gotten to understand it. I know the information that goes in and comes out. I don’t claim to know what happens in between, inside that black box.

That is one of the important takeaways here. You need to have some basic level of technical competence about what its abilities are or aren’t before you start to use it beyond having fun and playing around with it, for this exact reason. There are a lot of pitfalls. On the discussion of future implications here, are we all going to be out of work in a few years thanks to AI and ChatGPT?

I did some research on this. I don’t think this was part of the original scope that I had, but I knew that as soon as I started going down this road, people were going to ask, “Is ChatGPT going to take away my job?” The short answer is no. I could go on at length, but I’ll try to keep it short. I am not an economist. I don’t claim to be an economist. I have a background in computer science and theater; economics has never fit into either of those two.

I did try to do some research on this because I’ve heard some stuff on this idea before, and there is broad consistency among economists: “Technology doesn’t take away jobs.” It’s important to understand exactly what they mean when they say it. I did some research because I was curious and was being whimsical. I don’t know at what point, but before alarm clocks were things you could go to the store and buy, if you needed to wake up at a certain time, there was someone called a knocker-up. They would go by with a long pole and knock on your window at a certain time. That person was your alarm clock. They would go around the town knocking on people’s windows at the designated times because no alarm clocks had been invented yet.

We can all agree those jobs have been replaced, but what the economists are saying is that, on the net, technology more often takes away a portion of a job than eliminates a job entirely, and even when it does eliminate one, it creates more jobs that are needed to interface with that technology. On the net, any sort of technological advance is met with an increased demand for people to perform some job.

The question is, “Are there going to be aspects of our jobs as attorneys in the future that we are doing as humans that are going to be done by computers?” The answer is probably yes. There are aspects of our job that will probably be taken over. We already know that’s true. We already go to Westlaw or LexisNexis rather than pulling the books off of our shelves, which is how you used to do it a long time ago.

All the people are like, “Back in the day, we used books.” I’m like, “I am not missing that.” I welcome technology in that regard, but whether it’s going to take away your job, no. There will still be first-year associates in the future and fifth-year associates. What those jobs look like will be different, but it won’t eliminate the need for lawyers in that sense. I expect it will cause the need to grow, based on trends that we’ve seen in the past. I don’t see any reason why that shouldn’t happen here. I’m not an economist. That’s just what I’m seeing.

I don’t know if you have this as a question, but there’s one story that I put in the paper that has confounded me, and I’d like to get into it on this topic. Just because technology like ChatGPT won’t take away your job doesn’t mean people won’t try. We already have documentation of that, and it’s quite shocking to me. There was a group that ran a suicide hotline. If my memory serves, it dealt specifically with people who have weight issues. They operated the hotline with a select group of people who had experience in dealing with this specific type of suicide prevention.

This is a very serious and important issue. During COVID, it will not surprise anybody to hear that the hotline was ringing more often than it had been before. It was putting huge demands on the staff. This is the part that I was flabbergasted by. The organization that ran it was not showing much sympathy for its staff that were having to run this hotline.

If I remember right, the employees were trying to organize. Around the time this was going on, the organization had hired a software company to develop software that was supposed to work alongside the staff, giving prompts like, “Here’s what you can say here,” to help them in responding to the people calling the hotline.

What they decided to do when the employees started to try to organize, because they were being overworked, was fire all the employees and say, “This software that we have paid for, this ChatGPT-ish generative AI software, is going to be our hotline.” The software company that created the software said, “This is not what it was designed for.” They fired everybody anyway, and that went exactly as well as you would think it would.

If you’re contemplating suicide and you’re trying to call somebody who can relate to you, the last thing you want is to talk to a computer that’s doing predictive text to respond to whatever your concerns are. That went down in flames like it should have. Hopefully, that’s going to be a good enough warning that this doesn’t become a common occurrence of people saying, “I can fire everybody.” People will try. I expect that will happen, but those attempts are going to be bumbling failures, as they have already been shown to be.

In the context of legal practice particularly, there are a number of ethical issues that come up with a different overlay. We can talk about those a little bit, particularly the Mata case.

Let’s talk about that, and then we can get into the ethical implications that follow from that case, because it’s a good highlight of these issues. I’m going to give a little caveat, but I’ll wait a bit because I feel a little bad for the attorneys that got slapped in this one. We have a guy named Roberto Mata. In 2019, he was on an international flight from El Salvador to New York. Mata alleges in a lawsuit that came later that during the flight, an employee of the airline, which is Avianca, struck him in the knee with a metal serving cart, and he suffered injuries.

I don’t know the severity. I’ve never tried to understand exactly how badly he is hurt but he was hurt badly enough that he decided he needed representation. Before this lawsuit happened, in 2020, Avianca, the airline, filed for bankruptcy. While the bankruptcy was pending, Mata filed suit. All of our appellate lawyers who are reading know what that means. For the computer scientists, you’re on the wrong show.

The bankruptcy is pending, and Mata filed suit before his attorneys realized that. This is in New York, so they file what is called a stipulation to dismiss without prejudice. We can all take a good guess at what that means even if we don’t practice in New York. The bankruptcy concludes later; I don’t know exactly when it concluded, but Mata refiled the suit in early 2022. As a reminder, this injury occurred in 2019, so more than two years have passed.

He filed suit in state court. Avianca, the airline, removes to federal court and then moves to dismiss the suit under 12(b)(6). They raised a number of things, but one of their arguments is that the statute of limitations for international flights is two years, so he filed the suit too late. Mata’s attorneys filed a response to the motion and cited cases holding that the bankruptcy tolled the statute of limitations in question.

This is an international flight, and the Montreal Convention is the convention that controls the law for international flights and the limitations period for such claims. Under the Montreal Convention, it’s two years. He cited authorities to say, “The bankruptcy has tolled that, so I still get to file suit in 2022.” Avianca’s attorneys filed a reply. In the reply, they informed the court that they could not locate any of the relevant cases cited by Mata.

The court, after this, probably looked into it on its own accord and issued an order requiring Mata to produce ten of the opinions cited within his response to the 12(b)(6). Things are already bad at this point. We already see where this is going. Many of our readers probably already know where this is going, but this is where things start to go off the rails. At this point, Mata’s counsel does not come clean. What they do is file an affidavit and attach “excerpts.”

If you go look up the case and read these excerpts, you’re like, “This is not what a legal opinion looks like.” Attached to the document were “excerpts” of 8 of the 10 cases they were supposed to produce. Their explanation for why they only provided excerpts is that “that was only what was made available by online database.” The attorney says in his affidavit that 1 of the 10 cases could not be found at all. Another one was not included because “it is an unpublished opinion.”

Every appellate attorney is going to get a chuckle out of, “I can’t find it because it’s unpublished.” That’s not a good defense for why you can’t find an opinion. This will surprise no one. The court then issues a show-cause hearing. Appellate attorneys know that if you are invited to a show-cause hearing, bad things have happened. You don’t call your mom up and say, “Mom, I got invited to a show-cause hearing.”

The court says in the show-cause order that six of the submitted cases appear to be bogus judicial decisions with bogus quotes and internal citations. It’s not good. After that, Mata’s counsel filed another affidavit. This is the one that is a gut punch when you read it, because Mata’s counsel in that affidavit says, “I didn’t do any of the research or drafting that I signed my name to. Instead, it was done by another attorney in the firm.” It’s not something you want to call your mom and say you did. That’s bad, but this other attorney then files his own affidavit.

Let me put in the caveat real quick before I read it. There are two attorneys that were involved in this. Both of these attorneys have been dragged through the mud. Deservedly or not, I’m not going to get into it here, but their reputation has taken a hit, and this is not the only place; in legal communities, this has spread quite rapidly. I empathize with them. I am not trying to mock them, but unfortunately for them, this is also a very good learning point for every other attorney.

I have some joviality in my voice, but I also have a good deal of empathy for these attorneys and the harm to their reputations that they have suffered. I hope that I never find myself in this situation. The second attorney, who said, “Yeah, this was all me,” says in his affidavit, “As the use of generative artificial intelligence has evolved within law firms, your affiant consulted the artificial intelligence website ChatGPT in order to supplement the legal research performed.”

I’ll read another portion of this, but I want to pause because I was so curious about this case that I read more. Somehow I was able to read parts of the transcript of the show-cause hearing; maybe the entire transcript was in the record, I forget. At that hearing, the guy said that when he said “supplement the legal research,” what he meant was all of the legal research. That was another thing he got dinged for, because he was trying to diminish in his affidavit what he had done. He had gone on to Fastcase or one of those databases to do the research. He found nothing and then went to ChatGPT.

He did do some research, but he got nothing. He went to ChatGPT and it said, “Here are all these cases.” He said, “It’s good enough for me,” and then quoted them in the response. It wasn’t a supplement. It was all of it. The attorney also says in the affidavit that the citations were provided by ChatGPT, “which also provided its legal source and assured the reliability of its content.” He relied on it because ChatGPT was like, “This is legit.” He was like, “Good enough for me. I’m done.”

I don’t want to mock too much, but we all know that you cannot stop there. Anybody who is reading this and didn’t know about ChatGPT beforehand now knows you cannot stop there. That’s where he got burned. At the show-cause hearing, it came out that none of the ten cases exist. The holdings were made up. The citations within the holdings were made up. This is what I find interesting: the judges identified in these fake opinions are real judges. We can tie in later how that has its own problems, but that’s it.

The court sanctions these guys, which probably surprises no one. It talks about the harm of citing fake authority. One harm is that it wasted the time of opposing counsel and the court. It also deprives the client of a persuasive argument. The court gives a lot of examples, but to me, that’s the big issue: “You have harmed your client by doing this.” The other thing, though, is that the filings cited real judges, and the court was like, “This can harm the reputation of the judges and the parties cited within the fake opinion.” It identified actual judges.

I couldn’t tell you which one, but let’s say it claims to be an Eleventh Circuit opinion, yet one of the judges cited as being on the panel was actually from the Sixth Circuit. I don’t know if that’s exactly right, but unless you know the judges, which most of us honestly don’t, and I don’t know all the federal judges, I don’t know if that would have been a red flag for me like, “This isn’t correct.”

When this stuff does come to light, there is the potential that it can harm the judges. They’re being associated with things that are not their opinions and are not the law, which could be taken as true and end up hurting the reputations of the judges and the parties involved. The court also mentioned that it promotes cynicism about the judicial system. It is not wrong. The court ultimately sanctioned the guys.

One of the things the court emphasized is that at no point, even up to the point that the order awarding sanctions was issued, did they withdraw their response. Finally, they were like, “This came from ChatGPT,” but they never asked the court to withdraw or modify anything in the response, and that bothered the court. First of all, don’t do this, but if you get stuck in a situation like this, come clean as quickly as you can. If you have cited false authority, do more than say, “My bad.”

You have to say, “Court, I can no longer rely on this and I cannot represent this to be something that is advancing my position or my client’s position. Please allow me to withdraw it.” Bear both of those things in mind. Saying, “My bad,” is not enough. You also have to try to withdraw what you have filed that contained this false information. Judge Starr in the Northern District of Texas did some things right around this time. Can we jump into that and then get into the ethics of it?

That was one of the places I wanted to go, which talks about what courts have done in light of some of these issues.

Maybe others have happened, but this is what I found interesting. Judge Starr is in the Northern District of Texas, and this suit took place in New York, so this is a different area. He came out with some new orders in May of 2023. This was before the New York court had come out with its sanction order but after all of this stuff had come to light. There’s a lot of supposition here; I’m in the camp supposing that Judge Brantley Starr issued these new rules in response to this Mata case.

I don’t know that to be true, but he’s created a new requirement for appearing in his court, and it says all attorneys and pro se litigants appearing before the court must file on the docket a certificate attesting either that no portion of anything they file will be drafted by generative AI like ChatGPT, or that any language drafted by generative artificial intelligence will be checked for accuracy.

Let me emphasize this phrase: checked for accuracy “using print reporters or traditional legal databases,” by a human being. You have to say at the beginning of the litigation that you’re going to do one or the other: never use it, or do your double-checking of anything drafted by these tools against a traditional legal database, with a human doing that double-checking.

He gets into some of the ethics, and I disagree with nothing of what he says. He says these platforms can be powerful, but legal briefing is not one of the uses they should be applied to. He mentions that these platforms are prone to both hallucinations and bias. On the point of hallucination, they make stuff up, even quotes and citations, which reflects the Mata case. Then he focuses on bias, or reliability. He says, “Attorneys swear an oath to set aside their personal prejudices, biases, and beliefs to try and uphold the judicial system.”

We have taken an oath and have an ethical responsibility not just to represent our client but also the system at large. He says these computers are unbound: they don’t have a sense of duty, honor, or justice. They are not bound by any ethical restraints. They don’t know how to advance the rule of law or the ethics involved in the legal process, so they can’t be relied on for that process.

I emphasized that twice because he said in his restriction, “You can’t use this unless you’ve compared it against a traditional legal database.” For all of us who practice, we think of Westlaw and LexisNexis as the primary examples of that. However, as I’ve shown in my presentations, what concerns me is that if you go to Westlaw or LexisNexis, they are promoting, “We’ve got generative AI.” They are saying it in full force.

I have LexisNexis in my practice, and multiple times it has said, “Derek, do you want to try our generative AI stuff? We’ve got it over here.” Each time I hit that button, I’m like, “No.” I wish they had a button that said, “No. Don’t ever ask me again,” because I won’t touch it. The thing is, maybe it’s not that bad, but I’m too scared of it. I don’t trust it. It’s the hot term now. Back in the early 2000s, they said, “We have an algorithm,” and everybody got excited. They’re trying to tie into that. They’re like, “We have generative AI.”

I don’t know how strong or weak it is. I don’t know what it does but I keep coming back to Judge Starr’s order. If I use that on Westlaw or LexisNexis, have I complied with or violated his order? I understand the spirit of his order and all of the implications behind what he’s saying ethically of why this order exists but even if we are not at that train wreck, that order is heading towards a train wreck.

Generative AI, whether we want it or not, is going to become more a part of our lives every day. Whether you have violated or complied with that order is going to be, in my opinion, much harder to determine in the future. That covers a lot of the ethics related to it, but if more orders like this come down, am I complying or not? Am I allowed to use generative AI within Westlaw or not? A clarification is probably going to be in order very soon.

Artificial Intelligence: Whether we want it or not, generative AI will become part of our everyday lives.

I see a difference between what Lexis and Westlaw are offering with GenAI, in terms of the phrasing of legal research terms, summaries of searches, and things that are currently available, and a GenAI representation of what a case says. There’s a line you can draw there: GenAI that alters the text of a case or hallucinates an entire citation and authority is not the same thing. These orders are interesting. Judge Starr is not the only one to issue orders like this. There are some people who say, “This isn’t necessary because we’re all bound by the general rules of ethics.”

I tend to fall into that camp, but with that said, it’s a good idea to shine some light on the issue so maybe an order like this won’t be necessary in the future. It’s like when we first got the internet or email. Everybody was worried about breaching privilege by sending emails and things like that. I understand the dilemma, and these orders are going to be tested the way we spoke about. Mata is a test case in a little different way.

This is the Titanic for the legal world.

The other side of the coin that I have not heard discussed anywhere that I can think of is how you balance this emerging technology with the lawyer’s duty of technological competence. Our ethics rules also tell us we have a duty to our clients to use technology competently. We have these resources available to us that can potentially help us do our jobs more efficiently. It’s yet to be seen how that is all going to pan out, but I would flag that as another emerging issue that we have got to get a handle on.

For the moment, I have Derek Bauman’s theory of how you could use ChatGPT and other things similar to it. The example that I’ve given is that you have to be the guardrails when using ChatGPT. You can use ChatGPT when you are asking questions where you already know the law or you have already done your research. You might ask, “Why would you use it then?” What it can be beneficial for is spit-balling. It’s like, “I’m trying to think of the best legal arguments I can make concerning X.”

You’ve done your research. You know what the law is in that area. You can go on ChatGPT and say, “What are three good arguments for why X should be the policy under Texas law?” It starts spitting stuff out, and if one of them is complete nonsense, you’ve done the research and you know it. You can identify that one and set it aside, but it’s also possible that when you’re using ChatGPT, you’re like, “I didn’t think of that before.”

It could be something like, “I don’t think this is right, but now that it’s saying that, I realize I haven’t considered this other possibility.” It could help you to think of things in a new light, but you have to be the referee. You have to put the guardrails up and know when it’s feeding you nonsense and when it’s not. The other example I will give is one that none of our audience, except for the computer scientists, is going to find helpful.

I don’t mean to aggrandize the appellate lawyers, but they’re our audience, so they will all smile if I do. We are all particularly good at writing. I do not consider ChatGPT to be a better writer than me; I consider it to be much worse than I am. But I know a lot of people who are excellent lawyers for whom writing is not their skill. If you are a good lawyer, you know the law, and you’ve done all this other work, but you aren’t good at drafting, then you can go in and use ChatGPT to help you begin that process.

If you do that, let me add one more red flag for you. If you look at the frequently asked questions portion of OpenAI’s website, one of the questions is, “Will you use the information I provide?” They’re like, “Not everything, but we can. Anything you enter, we reserve the right to use for training purposes and show to our employees.” Another question is, “Can I delete my prior conversations?” The answer is no.

You cannot ethically put client information into ChatGPT without breaching your client confidentiality requirements. That limits how you can use ChatGPT as a drafting tool. That goes back to what Judge Starr said: these platforms are very useful, but legal briefing is not one of their uses. That’s one of the things where he and I disagree. Legal briefing is one of them. It’s legal research that is not.

However, you still can’t get around that ethical problem, so you have to feed it the law yourself: “Here’s the law. Help me state the generally applicable law on this topic,” and it can help you do that. As long as you know when it’s feeding you crap, you can use it, but you have to be the referee and know when it’s gone off the rails.
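
As a concrete illustration of that “be the guardrails” workflow, here is a minimal sketch assuming the OpenAI Python client (openai 1.x); the model name, prompts, and rule statement are illustrative choices, not a recommendation. Note that no client confidences go into the prompt, the law supplied was verified by a human first, and nothing that comes back gets used until a human checks it against a traditional legal database.

```python
# Sketch of the guardrails workflow: human-verified law in, brainstorming out,
# human verification before anything is used. Assumes the openai 1.x client.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Already researched and verified by a human -- no client confidences here.
verified_law = (
    "Under the Montreal Convention, claims arising from international "
    "carriage must be brought within two years."
)

response = client.chat.completions.create(
    model="gpt-4",  # illustrative model choice
    messages=[
        {"role": "system",
         "content": "You are a brainstorming aid, not a legal researcher. "
                    "Do not cite cases or invent authority."},
        {"role": "user",
         "content": f"Given this rule, which I have already verified: {verified_law} "
                    "Suggest three arguments a plaintiff might explore on tolling."},
    ],
)

# Spit-balling only: a lawyer must vet every suggestion before relying on it.
print(response.choices[0].message.content)
```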

To Todd’s point, part of the technological competence duty is understanding its limitations because that’s important. It doesn’t mean you have to use it. It means you have to understand how to use it responsibly. It seems from everything I’ve heard about the Mata case that it was an unintentional error upfront that snowballed and compounded.

That’s exactly what it was. The biggest hurdle we face is we’re going to start getting those calls from a client that’s like, “I was talking to ChatGPT and ChatGPT tells me I have a great case.”

That’s the next advancement past the Google School of Law.

“I plugged what happened into ChatGPT and it has assured me I can win my case and my damages are this.” The other part of competence is going to be, “I have to be aware so I can talk to my client and explain the limitations of this.” Just because ChatGPT has told you that you have a great case does not mean you have one. That also ties into competence. You have to know where its limitations are so you can talk your clients down off of the ivory tower that they’ve built for themselves about what their case is going to be able to do.

It seems like there’s going to be a need for many ethics CLE presentations on this topic; we’re just scratching the surface, to your point, Derek. This is super fascinating. You could make the case, I suppose, that one could comply with the duty of technological competence by saying that the use case for GenAI is not settled enough currently to be able to use it effectively and ethically. That’s something that will be fleshed out later, because I don’t think that will be the case for very long.

I’m not a computer scientist, but I have read an article by a computer scientist saying, “There’s no way around this. There is no solution to this.” We’ve said that about technology in the past, and then somebody finds a solution. Maybe there is a way to feed into these ChatGPT-style programs something akin to an understanding of reality that could help temper this problem. If that problem is solved, then maybe we can use it with less caution, but at that point we’re also back to asking, “Can it take my job?” We’ll have to measure our response whenever that should occur.

You have the commercial incentives for companies like Westlaw and LexisNexis to find a way to use it efficiently for attorneys. There is a market pressure that’s going to influence that as well.

It’s always going to be there and it’s going to become part of our lives. We need to get ready for that.

To that point, as we get close to wrapping up: we as lawyers speak about Westlaw and Lexis with, I wouldn’t call it reverence exactly, but we know what those products are and what their databases generally consist of. To the extent that those vendors are offering a product, you have some built-in assurance about the quality of that product to some degree. You know that it’s been vetted. There are millions of dollars being spent advancing this stuff. As we’ve already acknowledged, it’s going to be transformative in some way.

We can’t predict what law practice is going to look like specifically in many years. I would throw out there that, to the extent people are inclined to do this, you can talk to your Westlaw rep or Lexis rep. You can probably comfortably use those products in your law practice. I don’t think you’re going to find yourself at the wrong end of a sanctions order or a show-cause order like what happened here by relying on those authorities. Beyond that, it’s anybody’s guess.

If you do that and you appear before Judge Starr, I would encourage you to call the court and ask, “Am I allowed to do this?” I’m super nervous about those things, but it wouldn’t hurt you to call the court and say, “Am I running afoul of your order by using this?”

That’s good advice for all lawyers. Check the local rules on this stuff, because they are beginning to pop up out there on something that, years ago, we wouldn’t have known about.

I’m not saying Judge Starr’s order does, but maybe one order or another overreaches, and eventually you figure out the best way to formulate this. We’re still in the early stages. Even the judges are trying to figure this out and where the right balance is. You have to be careful about all of that at the same time.

That’s a great point where we can wrap up a substantive discussion. Derek, you’re aware that one of our traditions on our show here is to ask our guests for a tip, war story, or a parting thought. Do you have something you would like to share with us?

I have to learn how to keep this short because this is another one in the can that I bore people with at parties. With this one, they start to walk away if I go too long. I want to tell the story as briefly as I can, and then I’ll give you my advice. I was at the Court of Appeals, as I mentioned many times. I don’t mean to say this too cruelly, but I read a lot of briefs and found myself thinking a lot of times, “That’s a crappy brief.” I spent a lot of time trying to figure out why. “Why is this brief so bad?”

What I noticed is I had interns who would come while they were in law school. They had this starry-eyed idea of what appellate law is, so they come to the Court of Appeals as interns with this very romantic idea of what they’re going to be encountering. I give them some briefs and they’re like, “These briefs are terrible.” I was like, “I know. They are. Do your best research. Look at the record and you tell me what you think the answer should be.” They would write up something for me. Not meaning to be mean, but a number of times I’d go, “This is not that good.”

I can go on forever, but I will wrap it up here. My theory, which I have developed over time because I spend way too much time thinking about this, is this: no matter how good a writer you are, you can be the best writer around, but by the time you have done all the research, understood the law, understood the facts, and done all of the analysis to figure out how to put it all together, so that you have in your mind the brief you want to write, it is so obvious to you that you forget what the person who does not have this information sees.

What you’re writing makes sense to you because you are entrenched in it. It’s all in your mind right there, but it does not make sense to someone who does not have all of that information in their mind. What I would encounter a lot when I read these not-so-good briefs is that once I researched the law and came to understand the record, I would go back to that brief and think, “Oh, that’s what they’re talking about.” That’s what has been the support for my theory.

It remains a theory, but the advice I give to everybody related to it is this: when you have written something, it’s going to make sense to you. You cannot see the forest for the trees because you have spent so much time understanding all of this, so you serve yourself and your client massively by doing the following. You don’t take it to the partner who’s been working on the case alongside you the entire time. You go to someone who has no involvement in the case.

I’m sort of joking, but I’m not joking: find someone that doesn’t like you that much. If you give it to them and they read it and go, “That makes sense,” then you know it makes sense. If you send it to the friend you’ve been best buddies with since law school, they will be like, “You did great. I’m so proud of you,” but they’re not going to give you that objectivity.

Find someone who doesn’t care about you and is annoyed when you walk through their door. That person is going to give you the best advice: “This made zero sense to me. I don’t know why this is in here. This is missing something. Right here, you’ve got to fill that in.” When someone says that to you, you’re like, “Right.” You have to learn how to be the person who knows everything and then somehow forget it all to write that brief. That is my advice to everyone. You have to learn that skill, and it is such a hard skill to learn.

Artificial Intelligence: You have to learn how to be a lawyer who knows everything and then forgets everything in order to write an effective brief.

It’s time that we finally let the cat out of the bag. Derek Bauman is not a real person. He is an OpenAI platform designed to be like a lawyer. This entire episode has just passed the Turing test. Congratulations to us.

I have nothing to say in response to that other than, “That’s great.” Derek, we do appreciate it. Thanks for coming on and sharing the benefit of your experience and research. We will look forward to following this topic into the future. There is no doubt about it.

This is our lives from now on. Thank you so much. It is an absolute pleasure. I enjoyed all of this.



About Derek Bauman

Derek Bauman has over fifteen years of experience practicing appellate law. He has been board certified in civil appellate law since 2013.

He started off his legal career as a briefing attorney for Justice Higley at the First Court of Appeals from 2005 to 2006. In 2006, he joined Franklin, Cardwell & Jones as an associate in litigation. In 2010, he returned to the First Court of Appeals as a staff attorney for Justice Higley. He left the court again at the end of 2018 to join the City of Houston. He now serves as senior counsel for Feldman & Feldman, P.C.

He earned his law degree from the University of Houston Law Center in 2005. He earned two degrees from the University of Texas. He has a Bachelor of Science in Radio-Television-Film and a Bachelor of Arts in Drama. He graduated from UT in 1996.

Derek is married and a father of two. He enjoys cooking and bike riding.