You can listen to the episode here. Below is an auto-generated transcript, which includes time-stamps:
Mark D: [00:00:00] Hello and welcome to LODcast, a show for in-house lawyers and professionals around the world. My name is Mark Dodd and I'm the head of market insights at LOD, a pioneer of alternative legal services around the world. Today's show is an update to the generative AI episode we did five months ago. But in today's tech environment, five months feels a little bit like five years, so there's a lot to update you on.
And to bring you up to speed, I felt we needed two guests to get a proper grip on what's happening. My first guest is Josh Kubicki. Josh is the Co-Founder and Design Lead at Bold Duck Studio. He's the Director and Professor of Legal Design Business Hub and Entrepreneurship Program at Richmond Law School.
And he's the creator of BrainyActs, a newsletter for legal professionals on pragmatic and practical uses of ChatGPT and other generative AI tools. Josh and I consider the volume and depth of change to date, discuss the need to get your hands dirty and under the hood with these tools, and debate the second-order risks and downstream implications of generative [00:01:00] AI for in-house counsel. As the second guest on today's show, I was delighted to welcome back Jonny Badrock, the Chief Legal Engineer at SYKE. Jonny and I discussed the key developments in legal technology applications of generative AI, and we share what LOD and SYKE are doing in this space. So without further ado, let's get into it.
Welcome, Josh. Could you tell our listeners a bit about who you are?
Josh K: Yeah, so I'm Josh Kubicki. I'm the author of BrainyActs, which is a new daily newsletter on the pragmatic use of generative AI in the practice of law and the business of law. Prior to that, I was a lawyer many years ago in-house, but for the last 20 years, I've been really on the side of bringing the business of law, legal operations, more to the forefront to sort of power legal services and be recognized as an integral part of the domain of legal services.
Mark D: Brilliant. We're really excited to have you today because, well, how we got in contact was through the BrainyActs newsletter that you mentioned. For those listeners who are not yet aware, [00:02:00] you're coming up to your hundredth edition. And in fact, by the time people are listening, you'll probably already have done your hundredth edition. So it's a daily newsletter. I've been reading it since edition 30, I think, and it's crammed full of the latest news and updates relating to generative AI and the legal profession, or knowledge work generally. Can we start with why you started to write it? Because it's quite an undertaking.
Josh K: It is. And it's been 100 consecutive days, no days off, which I did deliberately. Without talking to my wife ahead of time, but that's another story. The real reason, I mean, honestly, the real reason was this. I'm not a tech geek in the sense that I lift the hood and get involved in the actual bits and bytes of technology.
As I mentioned in my intro, I've really been on a mission to uncover and disclose business models. Generative AI came out, you know, over the last year, and I'd really been playing with it. But then when ChatGPT and GPT-4 got released, you know, subsequently [00:03:00] within four months of one another, I immediately saw it as a business model play, a tool to help reconfigure our legal service business models to improve the life of lawyers, the employees working in them, and obviously clients.
And so I was like, well, how am I going to learn about this? I'm going to learn about it anyway, and it's going to move fast; I knew it was going to move fast. So I challenged myself to learn in public and build in public, which I'm a big proponent of. And I said, you know what? I'm just going to challenge myself to write for 100 consecutive days.
I figured 100 is good, right? After 100 days of consistent learning, and then communicating that learning, which is the challenge of the newsletter, I should be in a really solid place of understanding all the dimensions here. It really was a personal challenge that I invited whoever wanted to join me on as readers, and over 2,000 of you have joined me.
So that really was the catalyst that drove this.
Mark D: Yeah. [00:04:00] Brilliant. And as they say in show business, it's all killer, no filler. I'm surprised, actually. You would have thought that across a hundred consecutive days there would be a quiet day, but it really honestly seems there hasn't been one.
Did that surprise you?
Josh K: Let's just say this: some days are harder than others, right? I promised myself never to phone it in, never to get the newsletter out just for the newsletter's sake. And sometimes, honestly, and this is my own problem, it took four to five hours to produce one newsletter, because I didn't want to ship it if I didn't feel like there was some learning in it.
And some days are really slow, like the weekends; the news cycle tends to tamp down as we get to Friday, Saturday, and Sunday. And it's like, well, I don't wanna send a newsletter on the weekend. Readership is low on the weekends, right? 'Cause everyone's busy. So, okay, but I still have to have something in there that's useful.
It was a challenge, but again, I never [00:05:00] made it about just the newsletter. It's: okay, what can I learn today? There's gotta be something I can learn today about this, even on a Sunday. And then I've gotta package it in a newsletter. So some days were harder than others in doing that, but I feel confident if I look back. Today's going to be 95, but by the time this is broadcast, I'll be done with the hundred-day challenge.
I don't think there's much filler; like you said, what'd you say, it's all killer, no filler. I love it. I think a lot of it is very pragmatic and useful, and that's kind of cool. I'm proud of that.
Mark D: Definitely. I can attest to that fact. And it is so fast moving that, you know, we actually did a LODcast on this five months ago, but that feels like centuries ago now, given the pace of change. So we thought we'd do another follow-up on generative AI and how it impacts in-house legal in particular. I was also very pleased to hear in the intro that you were an in-house lawyer in a previous life.
So I'm [00:06:00] wondering for those of our listeners who are in house counsel and legal ops professionals, why do you think it's important for them to be following the developments in this area?
Josh K: Yes, it's a technology, right? But I think that doesn't do it justice. And I'm not here to make some broad, crazy claims about robot lawyers and all that kind of stuff, and we can unpack that. But this is a business tool. It's just that, and there are multiple dimensions to it. Any in-house team, whether one person or hundreds, is constantly dealing with demands from its internal business customers, right?
And they struggle to meaningfully address the business needs. Let's just start there: understanding the business of the company they're in. We know that often there's a disconnect between the legal function and the business function, and there's a gap there. A tool like this can help close that gap, and we can go into some instances of [00:07:00] how. But beyond that, this tool extends the talents and the skills and the knowledge of the in-house team and makes them scalable and approachable in a way that hiring additional headcount or bringing on a new technology simply can't. It's almost the perfect blend of capability that really can extend the insight, the decision making, and yes, absolutely, the services that legal teams can provide. I know that's very high level, and we can dig into specific instances, because there's a multitude of rabbit holes to go down here. But really it's scale creation for legal teams, which are always asked to do more with less. And beyond pure scale, it's actually meaningful, because the way generative AI exists and the way we can craft it creates an opportunity for more meaningful, intentional services and [00:08:00] interactions.
Mark D: Brilliant. Well, let's perhaps dive into a few specific examples, because I think that's really interesting. So we've seen examples where, you know, you could get a large language model to ingest a bunch of contracts, for example, or to ingest a bunch of documents and create a chatbot FAQ. That's been quite a common example. Do you think that's probably the most common example you've come across?
Josh K: Yeah, I think it is. And contracts are so broad, right? So we can talk about vendor contracts if you're in a large company, you know, like pharmaceuticals or healthcare, heavily regulated.
A perfect example is like the business marketing team, those teams that are responsible for pushing and creating awareness in the marketplace. We know that before any marketing copy goes out, it should be cross checked with someone in legal. And you can imagine in large companies, that's an enormous amount of volume.
And to put human eyes on something like [00:09:00] that is a huge task. The backlog is real, and marketing has to move at the pace of the market and the competition. So that's a real pinch point for a lot of legal teams. And yes, we've had semantic analysis, we've had NLP, natural language processing,
we've had engines that help reduce the amount of work for the lawyer or someone with that expertise (it doesn't always have to be a lawyer; it can be a paralegal doing the review). But that's very reactive. They're waiting for language to come into the queue, reviewing it, then communicating back out what's okay and what's not.
With generative AI, there is still the ability to be reactive, but now you can be proactive, because while you're reviewing these, the generative AI tools out there can actually be applying [00:10:00] your company's heuristics, culture, and tendencies, and be predictive, and really help scale that individual's ability to communicate back to the business: not just what changes you need or what can't be done, but how to do it.
They can give you three different versions of the language. And if you think about that: a human at a keyboard saying, okay, marketing team, I know what you want to say, but we can't for these reasons. A lot of legal teams just stop right there and pitch it back to the marketing team, right? And marketing says, okay, well, we know what we can't do;
can you help me out with what we can do? Well, if I'm a really good legal team member, I would give them an example. I would say it like this. Just think about the time it takes to generate that. With generative AI, you could generate an example. You could generate three examples.
You could say, okay, in this jurisdiction or this country, this is how we're able to say it; in this one, we're able to say it like this. Generative AI can do that automatically, right? So just think about the reduction of time, the reduction in the cognitive challenge of a [00:11:00] human trying to sort all that out.
And just think about the heightened service experience of that marketing team now as they interact with legal. It's like, oh my gosh, wow, not only was this faster, it's better and it's more responsive to my needs.
Mark D: Yeah, and I like that example because I think it's showing how the tool helps the in house team be more efficient at scale.
And at the other end, I guess one way of flipping it around is how AI is helping the non-legal members get across the legal positions quicker, because you can apply it to automatically summarize things. Let's say there's a change in, you know, GDPR, which was a really big example in the UK and Europe. Instead of getting an in-house counsel to go and manually tell all the teams about the changes, you could get a large language model
and apply it at scale to the whole company globally and say, these are some examples, this is what we expect. I think it's really interesting, all the different facets of the organization that it touches, because it doesn't really leave anyone untouched.
Josh K: No, it doesn't. [00:12:00] And here's another pragmatic example.
This applies in firms and in-house, right? We know that email rules our days, and think about how much knowledge is trapped in our inbox and in email threads. So, to go with your example, a change in GDPR: as we're responding to that as an in-house team, that discussion might be in a series of threads with seven different people
coming on and off the thread, expressing their opinions on how we as company X are going to adjust to those changes, right? And that's never really memorialized. We're relying on humans in that chain to both understand all the different threads in there and then dispense that information across the team
broadly. Okay, so say it was my job to update our GDPR policy. What would I have to do? I'd have to look at that email, [00:13:00] go in, look at our current policy, see what everyone said, probably have a meeting, probably have another meeting, blah, blah, blah. Now with an LLM, and I'm not talking the public-facing tools here, we can get into that, I mean proprietary LLMs:
We could basically take that chain of emails and say: okay, here's our current GDPR policy; here's a recent discussion amongst all stakeholders, via email, on changes; please update our policy. An LLM absolutely has the ability to completely rewrite that policy based on that unstructured conversation in email.
Think about the implications of that. It's an immense way to tap into the power of the inbox and take away from someone the burden of translating it into an updated policy. It's just astonishing. [00:14:00]
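[Editor's note: the workflow Josh describes, pointing a private LLM at an existing policy plus an email thread and asking for an updated draft, boils down to a prompt-assembly step. The sketch below is a hypothetical illustration, not a tool either guest endorses; the function name, the toy inputs, and the idea of sending the result to a company-approved private endpoint are all assumptions, and the actual model call is deliberately left out.]

```python
def build_policy_update_prompt(current_policy: str, email_thread: str) -> str:
    """Assemble a single prompt asking an LLM to redraft a policy
    based on an unstructured email discussion.

    Hypothetical sketch: in practice you would send this to a private,
    company-approved LLM endpoint rather than a public tool, since both
    the policy and the thread may be confidential.
    """
    return (
        "You are assisting an in-house legal team.\n\n"
        "Here is our current GDPR policy:\n"
        f"{current_policy}\n\n"
        "Here is a recent email discussion among stakeholders about "
        "required changes:\n"
        f"{email_thread}\n\n"
        "Please produce an updated draft of the policy that reflects the "
        "decisions in the discussion. Flag any point where the thread was "
        "ambiguous so a lawyer can review it."
    )

# Toy example: a retention-period change discussed over email.
prompt = build_policy_update_prompt(
    current_policy="Personal data is retained for 24 months.",
    email_thread="Alice: regulators now expect 12 months. Bob: agreed.",
)
# The prompt now carries the policy, the thread, and the instructions in
# one place; the model call itself is out of scope here, and a human
# still reviews the model's draft before anything is adopted.
```

The key point from the conversation survives in the last comment: the model drafts, but because it deals in plausibility rather than truth, a lawyer still signs off.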
Mark D: It is quite astonishing. And it leads me to think, having watched, you know, Microsoft Copilot and the Thomson Reuters plugins and the Lexis AI models:
do you think the biggest impact for lawyers will come in the form of bespoke changes to legal tools, or will it come in the ability to use Microsoft Copilot and get, you know, meetings summarized? Because it seems like that's almost going to be way more powerful for the lawyer. I can get rid of, you know, minute-taking;
I can get rid of working out what everyone said in a meeting and folding that into, you know, their working day.
Josh K: You're exactly right. It's probably one of my biggest fears. It's why I'm so passionate about everybody getting their hands on the tools right now, because we're at the raw front end of this.
And yes, those other tools that are coming online, like Copilot and all that, they're going to be remarkable, but they're not going to give you the raw, under-the-hood [00:15:00] look at what ChatGPT, GPT-4, or any other model like Bing Chat can actually do. Think of it like this: it's ugly, it's raw, it breaks, and it's beautiful because of that.
If you get your hands on it now, you can start to get really used to how these models work. That's a benefit, but it comes with some challenges, which we all know or should know. Copilot is going to clean a lot of that up. It's going to make it very safe, very seamless, but it's also going to hide that raw front end.
So once those models come online, if we as the human users haven't participated in the era we're in right now, it's not going to be as intuitive; we're not going to understand what's actually happening. And I think that's going to dilute some of the transformative power of this stuff.
Mark D: Yeah, that's really interesting. And I wonder if an analogue is, you know, I'm someone who learned how to drive a [00:16:00] manual car, and these days everyone's got an automatic and they don't know how to drive a manual. And that kind of freaks me out. I'm like, well, what if you have to drive a manual? What happens then?
Josh K: Take that forward with autonomous vehicles. I mean, we humans have become worse drivers, not better, over time, because now we don't have to be. I mean, cruise control, okay, fine, many decades ago; then assisted driving, where we've got to keep our hands on the wheel; and then autonomous. And I know we're going to get into this: what are the second-order effects of this stuff?
So I think you're right. Yes, it's a blessing, but will we forget certain things that are still core, just because we don't have to do them anymore, because the technology and the way we work is doing them for us?
Mark D: That's really interesting. And I'll take that as, I guess, a clarion call to get your hands dirty while you still can.
Josh K: Yes. And it's safe. I mean, you can't be a bozo about it, but it's free. I mean, it's literally free [00:17:00] technology right now, and it's cutting edge. I write BrainyActs every day, and every day there's something new, and I have to cut the signal from the noise. So for anything that makes it into BrainyActs, a hundred times more has happened.
I'm just trying to refine it for the legal market. So it's free, it's happening as we speak, and there's no reason not to participate in it. That's my obviously biased take.
Mark D: No, I like it. Well, speaking of BrainyActs, let's get back on track with that, because, you know, at the time of publication you will have done 100 editions.
I'm wondering, as you've been across this much more than the average person has had to be: has there been a particular bit of news that you found most surprising or counterintuitive, that you think is interesting for legal professionals?
Josh K: Yeah, so one sort of obvious thing that goes against the grain: this is not easy. That's really the counterintuitive part, and allow me to unpack it. Most people would be like, well, yeah, of [00:18:00] course, Josh, it's not easy, it's a new technology. But when anyone goes to ChatGPT, OpenAI's free version, or Bing Chat, and they're presented with the prompt window,
they tend to treat it like Google search, because Google has trained us: when you see a blank window with a prompt, you put something in, right? And someone puts something in, and everyone plays with rap lyrics and "write me a song," and that's great. That gets your hands dirty, and you absolutely should do that. But they treat it as a Google search. So lots of people, when they first encounter it, are like, oh, this
is kind of bogus, this is kind of dumb, it doesn't do anything. Or they'll ask it a very generic question. The counterintuitive thing is that this is not a fact engine, and we've got to change our minds about how we interact with this technology. We're so used to putting in a query and having something, Westlaw, right,
spit back: what is the fact? These are the facts. These are the cases, right? And we're so used to that trust [00:19:00] that we've inherently built in, that this is fundamentally a mind shift. This tool is really about helping you reason and make decisions and discover the questions you could be asking about an area you're thinking about.
It has nothing, literally nothing, to do with giving you facts, and that's the counterintuitive thing. Because it's been pitched as having ingested the internet and all that, we make this big assumption that it therefore knows the right answer. And I know this isn't a technical discussion, but I'll just say this for the listeners:
it has no idea what's factual and what's not. It does not work like that. There's no redundancy, nothing built in like there is in Google search that reinforces what's fact and what's fiction. It's not designed for that. So that's one of the big counterintuitive things people have to wrap their heads around: what is it, [00:20:00] and what is it not?
Mark D: That's brilliant. I really like it. You know, someone once said early on that it just deals in plausibility, not truth. And I think that's a really good way of thinking about it.
Josh K: It's a good way of saying it, much, much crisper than what I just said. Yes.
Mark D: Yeah, it's a similar point, and that is a mindset shift. I really like your point about how we've become trained by Google, and by Wikipedia too, to expect a fairly robust, truthful response.
So that's really interesting. And I think it is a, yeah, it's a mindset shift and particularly for lawyers where, you know, we are merchants of truth and it's very important in particular circumstances. So it's definitely worth knowing what it does and what it doesn't do. So thank you for that. Well, and that kind of leads us onto the next question around risks.
Because I think the average in-house lawyer probably would have seen a few things: how it can hallucinate; the court case in America where the lawyer didn't actually [00:21:00] check the references and the tool had just created bogus decisions. They're probably familiar with the dangers of bias in the data, and with IP issues and confidentiality.
But what I love about your work is that you're uncovering second-order risks. So, for example, one of the issues you cover is what happens when we lose cognitive friction in the workplace, because these tools are removing that from us. And we were kind of talking about that a bit earlier.
Did you want to expand on that particular point, and then maybe any other second-order risks that come to mind?
Josh K: So I'll just expand on the cognitive friction. Think about how new lawyers (we'll focus on lawyers) learn, right? I mean, let's be frank, and this is global, not just a U.S. challenge. We tend to just throw new lawyers in, and let's focus on firms for a moment, because they're the biggest employers. It's learning through osmosis. And this goes to the work-from-home versus the office debate we're having right now.
Unfortunately, the traditional mindset is that you have to be physically present in [00:22:00] order to be trained properly. Well, I'd ask why. It's because we're really horrible at training, so we just rely on learning by osmosis: just hang around the halls of the firm and you'll pick it up, right?
That's a bit ridiculous; it lacks intentionality completely. Compounding that now is generative AI, which absolutely has the capability to consume a lot of the work that might traditionally go to those new lawyers. So if we do remove that work, what are those new lawyers going to focus on? How are they going to cut their teeth, so to speak, learning what law school didn't teach them: the practicalities of law?
So that's what I think of with this lack of cognitive friction. It's beautiful that the technology can replace a lot of the work, but then you have to be intentional about how you train the next generation of lawyers. And I love [00:23:00] that generative AI is forcing that conversation, because I don't think we've reached a tipping point as a profession where we get honest that we are kind of horrendous at training and educating new lawyers. With as much displacement as this technology might do,
hopefully we'll get real about that. That leads into another second-order risk, and this one is absolutely present. So, I'm also a law professor; I didn't share that at the beginning. I teach at law schools here, the business of law and entrepreneurship, so I'm always with the next generation of lawyers.
And I don't want to get into generational divides, but we know how the current generation of lawyers grew up. Anyone who has children knows that with mobile phones, TikTok, Snapchat, all that kind of stuff, kids will sit next to each other and text each other rather than turning their heads and talking to one another.
And that's real. We've seen a decline in the ability to write intentionally, and interpersonal skills have been [00:24:00] degraded for kids who were in school during the pandemic; they were isolated for one to two years, and there are a bunch of studies on the impact on interpersonal and communication skills.
Well, now we have another one. Think about it: if I can just basically regurgitate notes or thoughts and say, write an eloquent email to Mark in a professional tone on this, and it spits it out, and I don't have to write or be thoughtful about that because generative AI does it, that will definitely degrade our own ability to communicate eloquently, sophisticatedly, meaningfully, intentionally, all those things.
So it has a very nasty potential side effect: if we're not careful, we just won't be able to communicate with each other anymore without the assistance of technology. That's a huge, real risk I'm already [00:25:00] seeing.
Mark D: That's really interesting. There's a quote somewhere, I won't do it justice, but it's something like: clear writing comes from clear thinking.
Josh K: Yes.
Mark D: If you're getting a large language model to do the clear writing, you're probably not clearly thinking about it. I do want to say, though, and I don't think you're suggesting this, but it's great for so many things where you know what you need to do and it's not a learning opportunity. It's just, I need it to do this one thing for me, and for that it's perfect. And, you know, neither of us is a Luddite; we're embracing the technology. But I still think we need to be cognizant of these second-order risks, because they're really interesting. And frankly, no one really knows yet how it all plays out, because the ramifications are happening right now.
We haven't got a full view. I'm sure there'll be some interesting PhDs in the coming few years on what actually happens, because I think it's a real live issue for universities and for schools. I mean, [00:26:00] you would know better than most how challenging that is, because critical thinking is difficult to teach.
Josh K: Critical thinking has to be built in the mind; it can't be displaced. You can teach by example, and that's part of the teacher's job: to set up paradigms where critical thinking has to be accessed by the student and unlocked, so they, quote unquote, learn. It raises the stakes for teachers and professors, definitely.
And on critical thinking, go back to this being a reasoning model; these are reasoning tools. You can outsource a lot of critical thinking, at least the preliminary critical thinking that you might be scribbling on notepads or have on whiteboards, you know what I mean? All our doodles and Post-its and all that kind of stuff.
The learning happens when you're connecting all those fragmented thoughts and bonehead ideas that we have throughout the day. And this tool (I have to be careful about how I frame this, because it can create the [00:27:00] appearance of thinking; it's not thinking) is really strong at taking abstract ideas, compiling them, and communicating them back in a very intuitive, easy-to-understand way.
You can see some danger here with students; that's why everyone's talking about cheating on papers. That's the obvious part. Going to second order, it's: what are the downstream implications? Fine, if they do that, they're cheating, they're not thinking, and the absence of thinking creeps into their work.
What are they actually learning by the time they exit? You know what I mean? How are they going to behave in a professional capacity? What didn't they learn, not through course material but through interaction with one another or with the professor, that generative AI has displaced?
And to your point, we don't know; it's happening right now. Someone will study it, and it's going to be interesting. There'll be upside and downside, as with everything.
Mark D: [00:28:00] Totally. It's so interesting. Josh, before we wrap up, I would like to bring it back to the in-house team for a second, and I'm thinking about leadership in particular now.
So if you're an in-house leader, how would you be thinking about managing generative AI? Apart from subscribing to BrainyActs, we'll take that as a given. What are you practically doing? Are you leaning on your tech partners to give you information? Are you setting up a steering group, a task force? What would you be doing as an in-house leader?
Josh K: So I think, for where the majority of people are right now, there are two things. First, some in-house teams have already had to create an internal AI use policy, right? I actually have a conversation in two hours with a large healthcare provider here in the States, talking with one of the deputy general counsels, who has had to grasp this notion of how we inform not just the legal team but all of our employees how and how not to use this.
So, one: [00:29:00] understand that this is not written in stone. You're going to be dealing with a living, breathing policy that must be updated. And it can't be you sitting in the legal seat dictating to the business what can and cannot be done. It is absolutely urgent that there is a two-way dialogue between the business side of your company and legal, because I can tell you, as many use cases as you think you've foreseen inside the legal team, out there in the company they've come up with 10x more, some good and some dreadfully stressful, right?
So maintain a constant flow of information without judgment, without slapping a risk statement on it right now, because you don't want to chill or cool the dialogue between the business and legal. You absolutely want to be very cozy, very comfortable; you want to make it a safe place [00:30:00]
to be having these conversations, without negative implications for employees. Because your job right now is to learn, then ingest that learning, update the policy, and communicate it in a way that works as well as it can, knowing it's absolutely going to be imperfect for a very long time, right? So it's an elastic, flexible AI use policy.
That's number one: just understanding that. Number two is awareness and education. We're inside the bubble, right? We're talking about this on this podcast, and because I spend so much time on it, I'm in the 1% of people who know what's going on. And that's okay; that's my job. Most people still haven't actually used it.
So: education. For a large global pharmaceutical company, I'll be training their whole in-house team in September, right? And I kind of dread this in some ways. I say dread because, come September, I have no idea what I'm going to be training them on. Like, I [00:31:00] can't design that today, in June, because it'll look completely different.
But the goal is not to make them generative AI experts. The goal is not to help them buy generative AI tech. It's simply: what is it? How might we use this? How should we not use this? And let's encourage you to experiment and get hands-on right now, in the raw front end, so you're all more acclimated to and comfortable with what this actually does. So it's that education and awareness piece.
That is essential. I wouldn't hoard it; I wouldn't sequester it. Yes, control it; yes, mitigate it. But you want people to use this and be aware of it, because there are so many myths, so many fears, so many stresses out there that are just nonsensical. Until you use it, you can't really understand what's happening.
Mark D: I love that. I think that's a brilliant summary. And it's also exciting to see all these large companies contacting you to do this. [00:32:00] It's definitely on big companies' radars, and they need to sort this out. So that's really encouraging to see, actually.
Josh K: It is, and firms too. And I think this has gotten, and we'll see this more, people understanding that we've got to understand what this is a little bit more. It doesn't help that we get headlines like the one in the US last week, where a US lawyer relied on ChatGPT. I wrote an open letter in BrainyActs to that lawyer, because it's like, okay, don't pile on to this guy.
It's a new technology. It was an easy mistake. Did he forgo some of his professional responsibilities by not checking the cases that were fabricated? Yes. But that wasn't the tool's fault. That's a lack of understanding of what this tool is. It goes back to where we began: this tool is not designed to create fact. It's going to make up cases because that's what it's designed to do.
Because this lawyer didn't understand that, he got in some trouble. Just understanding what the tool does is the starting point. And I think most people are starting to recognize [00:33:00] that.
Mark D: Brilliant. Well, I know there's so much we could talk about and so much to cover. But before we finish, do you think there's anything really major or key that we've missed that's relevant to our audience?
Josh K: I would just say this, and I'll try to be succinct. A lot of the industry is focused on the inherent threat to lawyers in this space, on taking legal work away, going back to this whole robot-lawyer thread that's out there. I think that's dramatically overplayed. I actually believe we're going to see an increase in legal demand because of this, and that benefits both consumers and lawyers.
We'll leave that for another discussion, perhaps. The thing I want to really stress to your audience is that the business of law, the legal operations community, is not as close to this as it should be. And I will say it: this is an existential threat to business-of-law professionals, because if you think about what the business functions of an in-house legal team [00:34:00] or a firm can do: marketing, communications, compliance, understanding the business better, training, onboarding. There's a whole host of things this can do; I could go on and on.
If you're a business professional who's not at the top of your game, who doesn't really view themselves as committed and truly skilled up in their area, just pretending, just phoning it in. And I'll say this on the record: lots of law firms across the globe, but certainly in the US, have a tendency to over-promote business-of-law talent. Big salaries, big titles, and thin on skills and experience. This technology absolutely has the power to put you in the spotlight and show how little you're actually doing on behalf of the firm. That should be a wake-up call. I see far too many business-of-law people ignoring this. And I'll tell you, I can come up with a marketing strategy for any law [00:35:00] firm out there, in about two hours of work, that far surpasses what is considered acceptable in the industry today, using the free tools. If you're not digging into this stuff to show the value you can deliver right now, I'm telling you, it's going to replace you.
That's what's being overlooked. Forget replacing lawyers. This technology is going to replace business-side services and skills inside firms and in-house teams. That should hopefully get the attention of folks out there.
Mark D: Well, that was certainly a blockbuster to finish the podcast. Josh, I just need to say thank you.
Thank you so much for taking time out of your very busy day. We really appreciate it. And we can't wait to have you back on, maybe in six months' time, when the world's totally changed again.
Josh K: I'm sure we'll talk about completely different things then. And that's the beauty of it. That's the beauty. But thank you very much for having me.
It's been great.[00:36:00]
Mark D: Welcome back, Jonny. It's great to have you on the show again. Could you remind our listeners a bit about who you are, your background and expertise?
Jonny B: Yeah, thanks Mark. Really good to see you again. So I'm Jonny Badrock. I'm the Chief Legal Engineer at SYKE and LOD. I'm a lawyer by trade and background, but I've been working in legal tech and transformation for the last 10 years or so. My role now is very much customer-facing: understanding our customers' biggest challenges and making sure that we're designing and bringing solutions and technology to meet those challenges.
Mark D: And so five months ago, you joined me on the show and we talked about generative AI, and we both agreed that five months is a long time in this space.
So we thought we'd get the band back together and cover what's been happening. Straight off the bat, we've got GPT-4. Last time we recorded it was 3.5, and already that's a pretty big change.
Jonny B: Yeah, massive change. And it's quite funny, isn't it, talking about bringing the band back together only five months later.
It's one of the most rapid areas of development [00:37:00] we've seen in technology and society generally. But yeah, GPT-4 is massive. It's been a huge improvement on GPT-3.5. To give some context, GPT-3.5 effectively took a bar exam in November last year and passed in roughly the bottom 10% of people who passed the exam, in terms of its score.
And GPT-4, which came out two and a half, three months later, was in the top 10% of all scores. So that's the kind of development we've seen, even on the bar exam, in just those few months. We've had some incredible developments, and the hype just continues to spiral as well.
You know, even my gran's talking about GPT-4 and generative AI. I mean, she doesn't have a laptop. I don't think she knows what it means, but she's aware of it. She's seen all the news. So the coverage is incredible.
Mark D: Yeah, absolutely. And that bar exam thing's funny, because I listened back to our episode and we talked about how it had passed the bar exam and that was impressive.
And now it's so much more impressive, in such a short space of time. So obviously this show is [00:38:00] aimed at in-house counsel and professionals around the world, and we know lots of them listen, legal ops professionals as well. Now, since we recorded, a lot of the main providers like LexisNexis and Thomson Reuters have come out with what they're doing in this space, because everyone's under intense pressure to release something. As you say, the hype cycle is so strong that there's an expectation that players will already be doing impressive things.
What's your view on what we've seen so far from those types of vendors?
Jonny B: Yeah, to be honest, I've been really impressed, going back to the earlier point, by the speed at which people are bringing generative AI and large language model products to market. When we spoke five months ago, it was a bit of fun.
Lots of people were using ChatGPT, asking interesting questions, having a bit of a play with it, having a bit of a laugh. But what we've seen in the space of five months is some really credible tools actually being brought to market that are making a difference to lawyers' lives every day.[00:39:00]
And we've just not seen that pace of development in any other area of legal tech in the last 15 or 20 years. So some really cool demos have come out, and lots of partnerships directly with Microsoft from some of the big vendors. Thomson Reuters have got their Copilot plugins, which allow users to ask natural-language questions and do easier legal research. We've seen a lot of CLM providers, Ironclad, DocuSign, Icertis, Thomson Reuters again, embedding tools inside their platforms already. A lot of them have got betas going, working live with customers, in the space of five months, which again is incredible.
I think Ironclad actually have a live production environment with some of their customers, using it to help them draft and redraft contracts. There's a really cool demo knocking around showing a business user interacting with a contract in fairly plain English, getting it to adapt clauses, changing a confidentiality clause from one-way to two-way. So it's really cool stuff going on, but very real and tangible.
Mark D: Amazing. And I think we talked [00:40:00] offline, before we got on, about how you have all these major players using this new technology and embedding it into existing products. But there are also vendors, like The Contract Network, who've built a product from the ground up using GPT-4, I think. Is that right?
Jonny B: Yeah, it is. There's a new product launched recently called The Contract Network, by some of the legal tech industry veterans. They built it from the ground up as a very data-focused platform to help organizations negotiate contracts quicker.
And all the way through the product, GPT-4 and large language models are integrated into the platform. So first, onboarding templates for a customer and helping to match them to their standard data models, all done automatically and with about 80% accuracy, which is just something we've never seen before.
And then all the way through, helping with that negotiation piece, suggesting clauses that can be dropped in. So it's right the way through the product. And again, that's been built in the last five months.
Mark D: Yeah, I think what's interesting is that, working in the legal technology space, historically it's probably fair to say the development and pace of change has been quite slow, and that's probably a favorable reading of it. And suddenly it's at warp speed. So I wonder, I mean, that would be causing nerves for a few people. I guess the tech cycle broadly is moving so fast that even the legal tech space is moving along at a similar pace. Is that a fair observation?
Jonny B: Yeah, I think it absolutely is. I think what we're seeing is just a lot more people talking about it. With some of the earlier breeds of AI in legal, it was very much the forward-thinking GCs and legal operations people, a new breed who were really into that subject matter, who would go to conferences to learn about it.
What I've seen in a lot of my customer engagements is that a lot more customers are talking about this, and are actually quite anxious about it as well as excited. They're coming [00:42:00] to us for advice and saying: look, we know it's coming, it's everywhere, how's it going to impact my team?
What should I be doing about it? And it's the first time we've had so many customers coming to us asking for proactive advice so early on, rather than it being the other way around and us needing to be the evangelists.
Mark D: that's really interesting. And I probably should say the pace of development has been very impressive, but we've also seen a few blunders already and one of the ones which comes to mind, which was well documented a couple of weeks ago was a lawyer in the U.
S. who had used chat GPT to make his submissions. And then the judge had to write and say, he said that six of the submitted cases were completely made up. They were bogus, with bogus quotes and bogus internal citations.
Jonny B: Yeah, I thought that was awesome. I think it's brilliant. To be honest, it reminded me a little of the old "dog ate my homework" excuses you used to give at school. What it does show, though, and the bit that concerned me, is that with any technology, any [00:43:00] research that you do, you'd check it first. When I was a practicing lawyer, if I asked one of the interns in from university over the summer to do a bit of legal research, there's no way it would have gone straight into my legal advice note for a client without me checking the citations first.
I find it really interesting to look at the psychology of what's going on for people. If I was doing a Google search for research, I'm not just going to take it at face value that it's correct. So it'd be really interesting to sit down with that lawyer and understand what the thought process was.
I'm guessing it was a little bit of last-minute panic, and not the best way to get results. So yeah, it does show the dangers, but I think we've got to be careful not to just point at that and say: that's a generative AI problem, we need to shut this down, it's not working. It comes back to a governance and a process problem, because you wouldn't have done that with any other type of research.
Mark D: Absolutely, and we're going to get on to the risks a bit later. But before we do, we've brought things up to speed on the past five months, and obviously a [00:44:00] lot's happened. I thought maybe we could cast our eyes forward to the next 12 months, if it's possible in this rapid environment.
We know from a few reports already, LexisNexis released one in March that interviewed over 4,000 legal professionals, that over a third of lawyers had already used ChatGPT, and nearly half of law students had used it. I'm sure those numbers are different now, as of June.
So it's already definitely mainstream, and it's only going to become more mainstream. From your perspective, as someone with really good insight into what in-house legal teams are doing with legal technology, what do you think is realistic for the next 12 months?
Jonny B: Yeah, it's a really good question, and it's interesting crystal ball time. But I think I've probably got more confidence in some of these predictions than any I've made around legal tech in the last 10 years. People who are using legal technology, contract technology already: this technology is [00:45:00] definitely going to be built into those products in the next 12 months. As I say, a lot of vendors are already doing live beta testing with clients, and we've already got live tools in production.
So I'm confident that teams already using technology for contract management will be using generative AI to help speed up the drafting of clauses, help the review and redlining of contracts, and increase the capability of doing analytics on contracts. That's definitely going to escalate massively.
And over the next 12 months, I think we'll see other organizations who haven't taken the leap yet take it a little bit quicker. But what's even more interesting around this kind of technology are tools like Microsoft's Copilot, which is also going to be launching in the next 12 months.
There's been a lot of coverage on that publicly: tools that are going to be embedded inside the software that lawyers and business users are using every single day. So it's baked into Microsoft Outlook, Word, PowerPoint and Excel, embedded in a [00:46:00] toolbar alongside them, and it just allows you to do your work quicker.
There are some really cool examples on the Microsoft website that I'd encourage people to go and have a look at. Simply: here's a link to a OneNote page where I made some meeting notes yesterday; please summarize that into an email for me, or turn it into a PowerPoint presentation so I can share it with my team.
You literally type in a command like that, and it will automatically generate an entire presentation, entire emails, or an entire document for you. And again, it might not be production-ready. You're probably going to have to review it and have a little look at it. But I've used it for a number of use cases like that.
Helping to write things like job specs, for example. It's not perfect, but I can get to an 80% good draft in two minutes, so writing a job spec now takes me 15 minutes rather than the hour or two it might have taken before.
Mark D: It's amazing. And I think the potential for not just lawyers but knowledge workers generally to increase efficiency is huge. And for lawyers, what's nice is you could potentially let lawyers [00:47:00] be lawyers. You can spend actual time on legal advice, on strategy, on quicker research, because you're not spending as much time making meeting notes or preparing slide presentations.
So it's a pretty exciting time, I think, and not one to be scared of. I was going to say particularly for the more senior lawyers, but actually it's probably relevant and interesting for all levels of lawyers.
I guess there is a larger industry question: if the AI becomes so good it can do basically everything a more junior lawyer could do, where does that leave junior lawyers, and where does that leave training up lawyers and professionals?
Jonny B: Yeah, it's a really interesting point, and again one of the hot topics at the minute. I think we are going to see a big paradigm shift, but this isn't just law.
You know, let's talk about this as wider society, wider jobs. There are lots of other knowledge workers and similar [00:48:00] industries that are going to be impacted in a similar way. But it also goes back to the earlier point: a lot of lawyers are close to burnout, working very long hours, and probably 30% of that time is spent doing fairly routine, everyday tasks.
So, and maybe it's the optimist in me, but I still see a world where most of these roles are still going to be needed. They're going to be focusing on all that high-value stuff you just spoke about, and with a bit of luck, we might get to see our families in the evening rather than burning the midnight oil.
So. I think, you know, I think it's going to be an enabler. I still don't think it's going to replace. And I think, particularly around law, I think people often forget just how much of a, of a human profession it actually is. You know, even things like litigation will be a legal answer that tells you whether you're going to win or not, and almost a decision tree that you could dictate off the back of that.
There's a lot of emotion that goes into it as well. There's a lot of negotiation tactics that become very human, and you've got to read the other side. [00:49:00] To work out what their strategy is and how you want to play it. And so there's a lot more than just the legal decisions that go into kind of a lawyer's role.
And there's a lot of commerciality in there. And I think, you know, AI is going to automate some of that decision making, but I'm not sure as a society, people are going to trust it and want to work with it. It's still going to want to make human decisions. I think
Mark D: I think it's a really interesting point. Only earlier this week, I was reading an FT article sharing research which shows that, on average, 57% of a professional's day is spent in this administrative space: in meetings, on emails, on responding to instant messages.
So any technology which moves the dial significantly in that area moves the wider dial significantly, because it's the biggest part of someone's day. The opportunity for efficiency is, I think, very exciting.
Jonny B: Completely agreed. I think the bit that excites me around that as well is the way it will change how humans do things.
I mean, I get frustrated at how [00:50:00] much time is spent on admin tasks and chats and meetings. If we're honest, a big portion of that is probably unnecessary as well; we've just got into the habit of doing it. And I almost think that having AI there to help with some of that will change how we think about doing that work anyway, and make us question whether it's even necessary.
So again, I'm getting quite philosophical in what we're talking about today, but I do think that changing some of these behaviors is going to change much more than just the tasks AI replaces, because it will change how we work altogether: how we think about work and about the activities that need to happen at work.
Mark D: Absolutely, I agree. So let's move from the philosophical to the more practical. SYKE and LOD are doing a bunch of stuff in this space. Did you want to give listeners a bit of an idea of what we're doing in the AI and large language model space?
Jonny B: Yeah, absolutely. There are lots of exciting experiments and prototypes going on at the minute.
A lot of it is what you'd probably expect from what we've spoken about today: lots of experimentation [00:51:00] with contracts, thinking about how we summarize contracts more easily for business users, how we interact with them, being able to recall clauses, being able to analyze a big bunch of contracts to get to results much more quickly, looking at them from a portfolio perspective. We're experimenting with things like RFPs as well, and how we can interpret those and speed up their review.
I think one of the flagship projects we're working on at the minute is LOD Match. That's a groundbreaking LLM-powered AI assistant, which is going to be fully operational in July, and it's going to be embedded inside our LOD business.
It essentially provides an instant match between our legal professionals and clients. So it's going to drive a huge jump in the speed of matching, for an easier, quicker process for everyone. It's going to make our people's lives easier, but it's also going to make our clients' lives a lot easier.
It's going to speed up that whole process and get people to the right results much more quickly. It's really exciting, and I can't wait for the launch in the next month [00:52:00] or so.
Mark D: Absolutely. I think LOD Match is a great example of how these large language models are empowering all sorts of efficiencies, not just within the contract space. I think contracts are where a lot of lawyers' and legal professionals' minds go immediately, because of the large sets of data.
But the ability of these generative AI products to do other things, like LOD Match, is really exciting, and I can't wait to see it hit the market in the coming months. Now, we touched briefly on the lawyer who made the error of relying too much on ChatGPT, which managed to hallucinate some cases, and there's been fairly wide coverage of the dangers of hallucination and bias, and of using confidential data. But my question is: what's the appropriate level of concern? Because as you said earlier, if you asked an intern to do some research, you wouldn't just rely on it blindly and submit it straight to the judge, as happened in that case.
Do you [00:53:00] think there's scope for us to have a more mature approach to how concerned we should be about the dangers of AI?
Jonny B: I think it's a really interesting topic. That case from the lawyer in the US shows that it's really important we do have concerns, that we do have some hesitation around it. I'm a bit concerned that he didn't, but examples like that are really good for making people suddenly pause and think: what is it that I'm actually doing here?
There was a similar case, I think it was Samsung, where some employees were using ChatGPT in their day-to-day work and were actually putting a lot of commercially sensitive information into it, and that got leaked. It's quite interesting: I imagine a lot of those people wouldn't put such sensitive data into any other technology out there.
So I'm not sure what's encouraging people to trust generative AI and ChatGPT so readily. I think a healthy level of caution is really important, and some of these horror stories are quite good for grounding people and bringing them back down to earth. But on the complete flip side of the argument, [00:54:00] nothing's perfect.
If you asked me a legal question, Mark, I could just make something up and come back to you with the answer, and you'd probably run with it. You wouldn't necessarily ask me how I knew it or where it came from; you might just trust it. We were talking, when you and I caught up earlier in the week, about Teslas and the coverage they got. The first Tesla that crashed was suddenly in every single newspaper around the world, saying Tesla is this god-awful company producing these terrible machines, without recognizing that, on the stats, the number of Teslas that crashed compared to most other cars made it probably one of the safest vehicles on the road. It's that coverage, that sudden panic, that often comes from people who feel threatened.
So it's all about that mindset: how do we put in place the right governance and the right questions to make people think about what they're doing, while recognizing that there are so many other risks in life? I don't think AI is necessarily a bigger risk than lots of other things. And when it comes down to hallucinations and bias, as I say, lots of the other methods for producing this kind of content are [00:55:00] also not 100% reliable. So it's about making sure you've got the right person in the room who's actually assessing the information that's coming out, putting their name to it, and saying they're happy for it to go out.
And particularly in the legal space, that's really important. Absolutely.
Mark D: And before we wrap up, Jonny, just thinking about what we've talked about, is there anything we've missed that's happened over the past five months, or anything big to think about as we look at the next 12?
Jonny B: Yeah, I mean, in reality there are probably millions of things we've missed in a short chat like this. But the key thing for me, having talked about these risks and hallucinations and everything that's gone wrong, is making sure that we're not dampening enthusiasm, that we're not dampening innovation.
You know, I'm still encouraging all the customers I speak to, all the lawyers I speak to, friends I speak to: go and have a play with it. Go and see what it's all about. See it for yourself, understand it, within guardrails of course. Don't go putting really confidential information in there; probably don't upload your passport, your credit card details, or commercially sensitive company information. But go and have a play and see what it can do. [00:56:00] I think that's what's going to drive the inspiration, and probably also drive the right level of adoption and using the tools in the right way.
Don't be scared of it, just go and play with it in a safe way. That's what I'm encouraging most of our legal customers to do.
Mark D: Absolutely, I think cautiously optimistic is the name of the game with generative AI. And I totally agree: getting your hands dirty with generative AI, so to speak, is very easy to do and very worthwhile, because you quickly come to understand what it is you're dealing with. It's hard to replace actually doing some work with it. Well, it just leaves me to say thanks so much. I know you're a busy guy, and thanks for taking the time to bring us up to speed.
Jonny B: That's great. Thanks, Mark.