FinRegRant #5: AI and Bank Regulation
Brian Knight Sat Down with Matt Mittelsteadt to Discuss Artificial Intelligence's Potential Impact on Banking
Transcript
Note: While transcripts are lightly edited, they are not rigorously proofed for accuracy. If you notice an error, please reach out to bbrophy@mercatus.gmu.edu
Brian Knight: Greetings. Welcome to another episode of the FinRegRant. My name is Brian Knight, and I'm a Senior Research Fellow at the Mercatus Center at George Mason University. I have the honor of being joined by my colleague, Matt Mittelsteadt, who is a scholar at Mercatus who studies artificial intelligence. We're going to be talking about the interplay and issues around artificial intelligence and financial services. Matt, let me just start with a quick bio question. When did you first get interested in artificial intelligence?
Matt Mittelsteadt: I've been in technology for a long time now, computer science-related technologies. I previously worked in healthcare software at a company called Epic, which most people probably haven't heard of, but it's software you have most likely used. Most medical records are run by this company. I got my start in technology there, but I jumped from there into cybersecurity, actually, which I studied for a number of years. From cybersecurity, just by nature of where the industry was going, I gradually pushed myself into AI.
I more formally got my start at Syracuse University, where I was working as a research fellow on a grant related to the intersection of AI policy, the law, and national security. A lot of things. This is back in roughly 2017 when, yes, AI was a conversation, but it wasn't quite the conversation we're having today. We were doing a lot of preliminary work, but not the in-depth policy analysis, issue analysis that we're doing today. From there, I jumped over to Mercatus, and I've been working on AI more or less ever since.
Brian: It's great to have you. Let me ask a question, sort of a fairly basic question. That is, okay, when people think of AI, a lot of people's minds immediately go to sort of the Terminator or HAL or some entity, usually, but not always malevolent, that is thinking for itself. It has a sense of self. It seems like when we actually talk about how AI is being used in the real world, it isn't that. It isn't this general intelligence. It's much more specific. With the caveat that by the time we're done recording this, this is going to be outdated because the technology is moving so quickly. As of right now, today, what is AI and what can it do and what can it not do? Where should people be thinking about where this actually plays?
Matt: I'll start off by talking about what's your average interaction with artificial intelligence. Honestly, you're touching it in so, so many ways throughout your day. Anytime you take a photo on your iPhone or Galaxy or whatever phone you have, more than likely that phone is using artificial intelligence in some way to improve the quality of that photo. That's just one very simple way that you're probably using AI most days.
Google search, that's another very common implementation of this technology. The way the algorithm works, it uses machine learning technologies to take in user data and steadily refine how it serves up information to you to hopefully do a better job at searching the internet. Beyond this, I guess a fairly financial services related example, if you use your bank's mobile phone app, more than likely it has a check cashing or check depositing module where you can use your phone's camera to scan a check and input that into your bank. That actually uses AI image recognition technology to scan the check and nab the information off of it.
I guess the point here is it's exceedingly common. Yes, this conversation has largely started in a broad sense since the release of ChatGPT in 2022, but this stuff is everywhere. We're really using more low-tech versions of artificial intelligence all the time in our daily lives.

Now, of course, I mentioned ChatGPT. That's the reason we're having this conversation today. That's the reason Washington has suddenly woken up to this technology that has been around for decades now. That is changing things. A lot has happened in the last one to two years. The release of ChatGPT in 2022, I think, was the first time that a lot of people realized that AI can actually perhaps do these things that people have predicted for decades that maybe computers could do, such as writing logically consistent essays, for example, or even writing computer code, or, in more malicious examples, writing malware or writing spear phishing emails to potentially attack individuals or companies.

We've seen rapid progress in this world ever since the release of ChatGPT. Like you said, every single day there seems to be a new innovation. Image generators have gotten near photorealistic. Video generation, too: just last week OpenAI released Sora, a new video generation model that doesn't produce full-length movies or anything like that, but it produces relatively short clips that, honestly, if I didn't know this technology existed, I would have thought were real. The technology has truly progressed and only seems to be progressing more. I think we're going to see what happens here, for better or for worse. There's going to be a lot of good and, of course, there's probably going to be a lot of bad, too. Our task as people thinking about this stuff is to try and find some sort of balance there.
Brian: Yes. In financial services, it seems to me that, because people have been talking about AI in financial services for a long time, during the first — well, not the first, but like the most recent big fintech boom of the late teens, which is where I cut my teeth, there was a lot of talk about the use of AI and machine learning for things like credit underwriting. Now, first let me pause. AI and machine learning, are they the same? Are they different? Is one a subset of another? What's the distinction between the two?
Matt: Yes. Usually, people categorize machine learning as a subset. The way I like to frame this is that AI is the goal: we are trying to artificially create intelligence. To do that, we use a lot of different mechanisms. Machine learning is a type of algorithm, essentially a type of technology, that we use to learn about the world, become intelligent, and hopefully reach this goal of artificially creating intelligence. Now, very briefly, there are other methods to build AI systems other than machine learning. Most of them are not popular. At this point, AI and machine learning are roughly synonymous, but they are technically distinct. We can use them roughly synonymously because that's just the state of play right now.
Brian: Okay, great. At least in the late teens, the pitch would go something like this. We, FinTech Company A, we make loans, and we are better able to assess creditworthiness because we take in a million different points of data, and then we use AI/machine learning to assess the creditworthiness of borrowers more accurately, perhaps more expansively. Maybe we can assess creditworthiness of borrowers that traditional methods can't. Therefore, that lets more people get loans and get loans at better prices. Now, I've also heard that a lot of that was marketing speak, but that there presumably is some truth to it. Now, how does that work, right?
Matt: Theoretically, and more than theoretically, and there's a lot of evidence to support this, AI systems can basically just spot patterns humans can't. Usually when you're looking at someone's credit history or something like that, there's a lot of data points in there. For any human to understand the holistic picture of all the patterns and nuances in that data, that's just really hard because we're all very limited. We all have one brain, and that brain is very distracted, and that brain has a limited set of knowledge and has a limited set of experiences.
Now, AI systems aren't limited by one brain. They have expansive memory banks. They have expansive processing capabilities, theoretically, and they can just sift through all of this data and potentially find much more discrete, nuanced patterns. They can connect data points that perhaps are far apart in the data set, and perhaps no human would catch any sort of connection between them. These systems can potentially connect those and draw new inferences that humans couldn't draw. Overall, by combining all of these potentially hard-to-discover patterns that they can ferret out in these credit histories or what have you, they can create a much more holistic picture of the risk you might incur by giving this person a loan, or perhaps on the other end of the scale, perhaps a person who might at first glance seem risky, well, maybe these nuances suggest they're not actually that risky, and they might actually be a worthy candidate for a loan, or for certain loans with certain restrictions.

Now, in terms of the truth of whether or not this is actually expanding mortgages or loans or what have you, I do agree that I think a lot of this is potentially marketing, but there is some modest evidence to suggest there is some truth to this. I was reading a recent press release, so this is still marketing. Take everything with a massive asterisk. US Bank and a few others were talking about their success at actually providing loans to people who had previously been denied or previously restricted in their ability to access money. They were actually using recent advances in machine learning technologies, AI technologies, to study their data a bit more closely, and they granted loans, and those loans did actually work out.
There is emerging evidence. Hopefully that is the pattern we see moving forward, but obviously in the FinReg world, there is concern that, first of all, maybe that's just not going to be the case, right? Maybe this is just all marketing, as you said. More concerningly, perhaps these systems will just systematize certain biases, such as racial biases or other negative biases, that might somehow be captured in this data and taught to these systems, which can then just reproduce those negatives.
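To make the mechanics a little more concrete, here is a minimal, purely illustrative sketch of the kind of pattern-finding model described above: a classifier trained on historical borrower data to estimate default risk. The features, data, and thresholds are synthetic placeholders for this example, not any lender's actual underwriting pipeline.

```python
# Illustrative sketch only: a toy credit-risk model of the kind discussed above.
# Feature names, data, and thresholds are hypothetical.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5000

# Synthetic applicant features a model might weigh together.
X = np.column_stack([
    rng.normal(60_000, 20_000, n),   # annual income
    rng.uniform(0, 1, n),            # credit utilization
    rng.integers(0, 30, n),          # years of credit history
    rng.poisson(0.3, n),             # recent delinquencies
])

# Synthetic "defaulted" label driven by a nonlinear mix of the features,
# standing in for the subtle patterns a learned model can pick up.
risk = 0.00002 * (80_000 - X[:, 0]) + 1.5 * X[:, 1] - 0.05 * X[:, 2] + 0.8 * X[:, 3]
y = (risk + rng.normal(0, 0.5, n) > 1.0).astype(int)

# Fit on historical borrowers, then check how well the learned patterns
# generalize to applicants the model has never seen.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)
print("holdout AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```

The point of the sketch is only that the model combines many weak signals into one risk estimate, which is the pattern-finding advantage Matt describes, for better or worse.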
Brian: I have two questions here, one I think fairly simple and one absolutely not. I'll do the simple one first.
Matt: I'll try my best.
Brian: Okay. Credit scoring is not new. Is the difference between AI machine learning credit scoring and more traditional credit scoring that the algorithm in the AI machine learning one is sort of like self-improving, like it in and of itself is iterative rather than relying on humans to come in and say, "Okay, well, now we're going to change the algorithm and now it's version 2.0."
Matt: Yes. There are a couple of elements here. If you look at traditional methods of analyzing these things, there's a lot of statistical methods and a lot of handwritten checklists and requirements that you need to meet in order to grant mortgages, grant loans, what have you. All of that is very much human-based, as you said. Humans need to be thinking about what red flags exist, for example, that should mean that you should deny someone a mortgage, or what green flags exist that mean that you should give someone a mortgage. All of that traditionally had to be relatively human determined.
Now with these systems, the computer can be looking for these flags. It can be, like I said, finding new flags, red flags that perhaps we didn't notice before, but also, as you said, iteratively changing itself over time. These things are learning systems. As the world changes, as the nature of the market changes and the nature of finance changes, these things can theoretically, and this is theoretical because what happens in practice is different from the theory, but theoretically they can be taking in new data about the real world, matching their decision functions, their statistical methods of analyzing people, to the current state of data so that they're really making well-tuned decisions for the current state of the market.
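As a rough illustration of that "keeps learning" idea, here is a small, hypothetical sketch of an online model that updates its decision function as new repayment outcomes arrive, rather than waiting for a human to rewrite the rulebook. The data stream and features are made up for the example.

```python
# Illustrative sketch: an online classifier that keeps adjusting as new data arrives.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(1)
model = SGDClassifier(loss="log_loss")  # logistic loss, so we can read out probabilities
classes = np.array([0, 1])              # 0 = repaid, 1 = defaulted

for month in range(12):
    # Each "month", a fresh batch of applicants and their eventual outcomes arrives.
    X_batch = rng.normal(size=(200, 4))
    y_batch = (X_batch[:, 0] + 0.5 * X_batch[:, 1] + rng.normal(0, 1, 200) > 0).astype(int)

    # partial_fit nudges the decision function toward the latest data, so the
    # model tracks the current state of the market instead of staying frozen.
    if month == 0:
        model.partial_fit(X_batch, y_batch, classes=classes)
    else:
        model.partial_fit(X_batch, y_batch)

new_applicant = rng.normal(size=(1, 4))
print("estimated default probability:", model.predict_proba(new_applicant)[0, 1])
```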
Brian: I'm going to ask the difficult question, but you prompted another hopefully not too difficult question, so I'm going to interject it. When you say learn, what are these things learning on? Because if I apply for a loan, you can pull my credit score. You can see which loans I've defaulted on or not, and I haven't defaulted on any to my knowledge. Fine, but that's just one data point. What if I'm a new borrower? What if I don't have a large credit history or whatever? What sort of data do these models rely on?
Matt: First of all, to train these models from the ground up, what you're doing is taking massive data sets of hundreds of thousands and sometimes millions of borrowers to train these things. The model uses a lot of data to get a basic picture of what everyone's credit histories look like. From those massive data sets, it draws sort of a generic picture of what should be allowed and what should not be allowed. Now, if it looks at you and you have a very limited credit history, that is actually a problem for these systems, because theoretically the more data they have about you, the better, more nuanced decisions they can make. If you don't have very much data, it's going to make a potentially very blunt decision, which is not too different from what human analysts might do. If all you have ever done is made one transaction in your life, I as a human can't do too much with that. Same with a machine. We should think about those very similarly. That is a challenge. If people have very limited credit history, they're potentially not going to get very good decisions. I will say that, technologically, that is changing. These systems are getting better and better at making inferences or decisions based off of limited data. There are limits to that, though, and you can only do so much with so little.
Brian: Okay. All right. Now we're going to get to the difficult question, because you raised the issue, which I know is a very strong concern among policymakers in DC currently, around bias. Can you tell me, when people talk about bias in this context, are they describing a scenario where the AI algorithm consistently gets the wrong answer with regard to a certain characteristic, race, gender, national origin, whatever it might be? Or is it that it is going to internalize pre-existing societal problems and provide an accurate but uncomfortable answer, that, well, yes, this characteristic correlates with being a poor credit risk for reasons that are societally inappropriate, but for the narrow question of creditworthiness the algorithm is coming up with the accurate answer? Is it one, the other, both? What are we talking about?
Matt: Yes. Probably both. It, of course, depends on use case. To the first point, here the classic example is facial recognition technology, where starting back in roughly 2014, 2015, we did see a very real failing of these technologies to appropriately identify people with darker skin. These systems are built based off of a lot of training data, and for some reason, the developers of these systems decided to train these things only on white faces, or a very limited set of faces that mostly included white faces.
The results were systems that performed very well at identifying white faces but were not able to accurately identify black faces. When applied in things like policing, as they are, or in surveillance, security, or anything really, those failures can become real problems. There is potentially this inaccurate, improper treatment of certain protected classes that we can see. That's the classic example. I will say that's an example that has mostly been solved at this point. These things do exist, but they are solvable. That same problem can extend to mortgages, mortgage vetting systems, or what have you.
Now, in terms of the second question, I think that is indeed possible. These things are trained on data about society. If there is a certain truth out in society, that maybe a certain group of people lives in some neighborhood, and that neighborhood is all low-income housing, I'm sure there are certain correlations the system will draw about whether or not to give that group of people a loan.
Whether or not that's appropriate from a policy lens is another question, of course. That's perhaps where we can inject certain restrictions on discrimination that we already have on the books. There are a lot of anti-discrimination laws with regard to the financial sector. Those continue to apply with AI technology. If you have a system that is making a certain recommendation about a loan, maybe it's accurate in a certain sense, but at the end of the day, humans do need to take the effort to compare that recommendation to existing regulations and determine whether or not it's compliant, which will always be a question.
Brian: Pivoting away from the credit context somewhat, just to be more universal here. For compliance, one of the big things that I hear talked about a lot is intelligibility. That at the end of the day, the human at the company using the AI needs to be able to articulate why the AI made the decision it made. Just saying, well, the great and powerful algorithm said so, is not adequate for regulatory purposes, if nothing else. A lot of people are concerned about intelligibility. Can you talk about what intelligibility is, why people might be concerned about it, and what can be done about that concern?
Matt: One of the big problems with most, though not all, AI technologies is this black box problem. In many, many cases, these systems are just these massive algorithms with all sorts of self-training. They have mysterious functions, which they themselves have determined, that govern how they make decisions. For us humans receiving those decisions, that can be confusing. They give you an answer, but they don't necessarily give you the how. How did they arrive at that? What data points went into determining that answer, determining whether or not you got a loan? What did it look at? Did it look at just your finances, or did it look at your racial characteristics? We don't necessarily know if we only get an answer.
In terms of regulation, in terms of trusting these outputs, that's a big question. Right now, one of the big efforts in the AI industry, and a big effort that regulators increasingly are demanding, is explainability. For critical decisions, such as whether or not you get a loan, there's a big effort to pair that decision with an explanation. What steps led to this decision? What data points did the system use? How were they combined to reach that decision, et cetera, et cetera.
This is an area that progress is being made on. It all depends on design. How you design the system determines whether or not you can get this explanation with an answer. Recently there have been examples of new products coming onto the market that are doing a pretty good job with this. One example, and it's not relevant to this credit conversation we've been having, but it is relevant to financial regulation: Google has an anti-money laundering suite. If you flag someone for money laundering, flag their account, lock that account up, you should probably have a pretty good reason for doing so, and authorities will indeed demand one because they have to do an investigation.
These systems are designed to produce a list of criteria that led to that decision so that a human can look at that and determine, "Does that actually match what we expect when we think about money laundering?" Later on, the authorities can look at that and determine, "Do we need to actually investigate this individual? Is there something bad happening here or was this a mistake, which is incredibly possible, and should we unlock this account or just leave this one aside?"
Progress is being made. That's one example of a system that is doing this, and it's actually in use by companies like HSBC. I think in coming years based off of developments just in the science, we can expect a lot more of that and a lot more detailed answers. This is something that people are worried about. The science isn't quite there across all domains. Hopefully, we can expect more, but this is critical for a lot of regulatory questions.
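This is not the Google product described above, but as a generic illustration, here is a small, hypothetical sketch of how a flagging system can hand a reviewer the criteria behind one specific decision: a simple linear model whose per-feature contributions can be listed for any single account. Feature names and data are invented for the example.

```python
# Illustrative sketch: pairing a single flagging decision with a list of criteria.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["cash_deposits_per_week", "cross_border_transfers",
                 "account_age_years", "round_number_transactions"]

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 4))
y = (1.2 * X[:, 0] + 0.8 * X[:, 3] - 0.5 * X[:, 2] + rng.normal(0, 1, 1000) > 1).astype(int)

model = LogisticRegression().fit(X, y)

def explain_flag(x):
    """Return each feature's contribution to this one account's risk score."""
    contributions = model.coef_[0] * x          # per-feature contribution (log-odds scale)
    order = np.argsort(-np.abs(contributions))  # biggest drivers first
    return [(feature_names[i], float(contributions[i])) for i in order]

# Pick one flagged account and print the criteria a human reviewer would see.
suspicious_account = X[y == 1][0]
for name, contribution in explain_flag(suspicious_account):
    print(f"{name:>30}: {contribution:+.2f}")
```

The design choice is the point: because the model is built to be decomposable, every decision arrives with something a human can check against what money laundering is expected to look like.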
Brian: What strikes me as a fairly basic question popped in my head when you're describing this — can AI do causation or is it just doing very sophisticated correlation? Is it, "Oh, well, based upon our training data, accounts that have these four characteristics are 60% more likely to be a money laundering risk?" Or is it, "Oh, no. This is evidence of money laundering in this particular account?” Which one is it?
Matt: Both. It depends on the system. I would say the vast majority of it is correlation, which I think is unfortunate because in things like money laundering, if we just have correlation, maybe that's not good enough. Also, maybe that can be wrong, but that is how it is right now. I suggested earlier that the science is indicating some progress in this area, and we are actually seeing developments that suggest that causation is indeed possible in certain use cases. For example, there's a recent research paper, I believe from OpenAI, about a system built to crunch math.
Traditionally, after you ask, "What is the answer to two plus two?" the system would just throw some number at you. It wouldn't tell you how it did that. This system actually goes through the steps. It gives you the answer and then tells you the exact mathematical steps that led to that, which gets at causation. If you go through those steps, show people those steps, and show the exact process that led you there, that's what's called 'process-based' explainability. That's something we could potentially see in some domains where it makes sense. For something like money laundering, though, I think there is a certain unfortunate reality where no matter what we do, there is going to be a degree of correlation. Some patterns are just patterns, and that's just the case. This looks like money laundering. Can we concretely determine whether this transaction at this laundromat is really legitimate or not? You probably have to actually be on the ground viewing the transaction, understanding who the person is concretely, to really get a good grip on that. These systems can only do so much, and I think probably we can only expect correlation, but there's reason to believe that progress is being made on causation.
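As a toy illustration of that "process-based" idea, here is a short sketch in which the answer comes packaged with the exact steps that produced it, so a reviewer can check the work instead of trusting a bare number. The mortgage-payment formula is just a stand-in example, not anything from the paper Matt mentions.

```python
# Illustrative sketch of process-based explainability: return the answer plus the trace.
def monthly_payment_with_steps(principal, annual_rate, years):
    """Standard amortization formula, returning the result and a worked trace."""
    steps = []
    r = annual_rate / 12
    steps.append(f"monthly rate r = {annual_rate} / 12 = {r:.6f}")
    n = years * 12
    steps.append(f"number of payments n = {years} * 12 = {n}")
    factor = (1 + r) ** n
    steps.append(f"growth factor (1 + r)^n = {factor:.4f}")
    payment = principal * r * factor / (factor - 1)
    steps.append(f"payment = P * r * factor / (factor - 1) = {payment:.2f}")
    return payment, steps

payment, trace = monthly_payment_with_steps(300_000, 0.06, 30)
for line in trace:
    print(line)
print(f"monthly payment: ${payment:,.2f}")
```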
Brian: All right. We've talked about regulation tangentially, and feel free to go beyond the financial space based upon your experience on this. What are you seeing as DC wrestles with AI? Are there any big meta themes or are there any persistent problems or is there anything that's making you feel optimistic? What is your take on this?
Matt: There's a lot. I think one of the big themes was that starting in 2022 and mostly 2023 when ChatGPT came out, I think most people in DC realized they didn't understand this stuff at all and definitely didn't understand this new wave of artificial intelligence technology. Over the last year, and I do believe continuing today, I'm still getting a lot of people asking very basic questions about artificial intelligence. A lot of people—
Brian: I asked a couple.
Matt: No, and that's totally fine, and that's exactly what we should do. In my view, this is a promising thing. If people are sitting down and asking the dumb questions, asking the basic questions, trying to get a grip on what this stuff is and how it's actually acting, that means they're trying to think beyond the Terminator. They're trying to actually ground themselves. That's really what you need to craft good regulation. You need to understand what you're regulating before you act. Now a second piece of this, I think unfortunately there's a little less action on, is actually understanding the nature of issues.
Right now there's a lot of swirl about what should be regulated. What are the real AI issues? Most of it at this point is talking points. There's discussion of job loss, for example, and there's reason to believe this might become an issue. I don't know. There's not any data to ground the conversation. People are just talking about something. They're not grounding themselves like they are with the technology. They're just sitting at this high level. While that hasn't manifested in any specific bill or anything, I think that could lead us to misfire with some regulation if we don't try and ground the discussion of the real problems as well.
Now, in terms of actual action, in Congress, we're seeing mostly a lot of nothing, a lot of discussion. In the executive and in the agencies, we are seeing some initial moves. Back in October, the Biden administration issued their big AI executive order. It's the longest executive order ever, which should tell you a lot about its scope.
It touches basically every agency, including some of the financial regulators, those that executive orders would apply to. Mostly it's just asking, though; it doesn't do anything terribly concrete. It says to consider regulations in most cases, including in the case of the financial regulators. There's a sense that something's going to happen, but we don't exactly know what that is on the executive side.
In the independent agencies, another pattern we're seeing is some modest, I guess, standard-setting regulatory attempts. There was recently a joint rule on automated valuation models, systems that can slap a price tag on, say, your house to inform a mortgage, or what have you. Regulators across a wide variety of agencies are worried that those might be somehow flawed. They don't really know how those decisions are being made, and whether or not those values are actually accurate. There is currently a pending rule to set standards on that to ensure that, in their view, it's functioning appropriately. We're seeing other rules of those types emerge from others. The SEC's got a pending rule on investment advisors, whether or not their systems have conflicts of interest, things like that.
In my view, the pattern here is that because AI is uncertain, because these problems are very big and we have yet to fully grasp them, these regulators are starting with very concrete, discrete things. I don't know if either of these rules is appropriate or accurate, but they are targeted. They have very specific targets, very specific things, looking at standards. I think that's the right approach. We should look at specific systems, look at specific problems, and solve them very specifically. I think that's hopefully a pattern that holds. Of course, hopefully we do that when it is warranted. I think that's promising. There isn't a clear attempt by these financial regulators to do some big AI regulatory push. They're looking at specific things.
Brian: Let's imagine that you could have the policymaker sitting at this table with us. What would you tell him or her? If you had 10 minutes of their time, what advice would you give them?
Matt: Let's figure out what's already on the books right now. One of the big issues, and increasingly people are voicing this, is that this technology touches almost everything. There are a lot of laws already on the books, a lot of rules in the Federal Register. Yet, we don't have a firm sense of what applies to these new and emerging technologies. I think this is a real problem in areas such as financial regulation, where there are a lot of competing regulators trying to regulate perhaps even the same companies. We could run into a lot of over-regulation risks. If OpenAI is contracting with all sorts of different types of financial entities, they could run into a situation where they're subject to all these regulators.
We need to figure out, we need to literally sit down, map out all these agencies, what powers they have, and compare that to the technologies that exist today. How do these things connect? I think only by doing that, can we get a good sense of who is going to be regulated. Are they over-regulated? Are they perhaps under-regulated if there's a real problem? What gaps exist? What conflicts exist? Oftentimes, there are duplicative rules, and that can really become an issue both for businesses, which are faced with uncertainty, but also if there's a real problem. You're not going to enforce regulations on that problem if you're stuck in the courts trying to hammer out this conflict.
I think right now, we really should be sitting down to map these things out. I was encouraged that the Senate, specifically for financial regulators, actually did pass a bill mandating this, mandating the regulators map this stuff out. Unfortunately, it failed in the House. I can't exactly say why, but that's the type of legislation I think at this juncture we need, because unless we do that, anything we try to pass, if there is a real issue, is more than likely to misfire, and I think we should avoid that at all costs. We have a certain amount of runway right now where this stuff is new, and problems have yet to fully emerge, if there are any, and we should use that time to really be thoughtful about this.
Brian: Is this something AI could help with?
Matt: I think so. Yes. We are seeing early examples of AI technologies used to analyze regulations. The governor of Montana actually just revealed, and I forget the exact regulations he analyzed, that he used the technology to analyze some subset of regulations to try and spot certain issues, and he did a big press conference displaying this. Beyond him, the Department of Defense is actually actively using this technology to try and distill their massive regulations, hundreds and hundreds of pages, into a usable human format. This could be very useful for spotting these things, analyzing these things, and breaking down these unwieldy rules, regulations, and laws that humans can't understand, per se.
Brian: I'm an attorney by training, and therefore I need to stick up for the Guild because my understanding, or at least what I've read, is that, at least in matters of law, hallucinations are a serious issue, so, what is an AI hallucination? Is someone giving the AI mushrooms, or, what is it? When we say AI hallucination, what is it?
Matt: It's just guessing. The example I give is, there's a very frequent piece of advice people early in their career get, which is, when you're in a meeting, if you don't know the answer, just say you don't know. For humans, we have to tell them and teach them that. The impulse we're being trained out of is, essentially, hallucinating. If I'm in a meeting, and I don't know an answer, and I just spout something out, which is the first reaction for a lot of people, that's a hallucination. AI does the same thing. These systems get a question, and they essentially freeze up. They want to give you an answer, and so they do give you an answer, even if they don't have any way of knowing whether that's the correct answer, if they haven't learned it. If you ask one who the czar of Russia was in 1892, and it hasn't learned Russian history, well, it won't know that answer, but it still might give you an answer, right? Because it just wants to answer you. It's that same impulse. That's what we're seeing. How to solve that is a very wicked problem. It's not something that's easily solvable. I think it's going to be one of those persistent issues. We can always do better, and work is in progress to do better, but that's something that is going to be around forever. We need to contend with it and respect it when we're using these technologies.
Brian: Look, this has been a great and very informative conversation. We're rounding third and heading for home. I got one more question for you, but it's a two-parter. Guessing right now, today, 2024, if we go forward 5, 10, 20 years, what is your most likely optimistic scenario for AI's effect on society, and what's your most likely pessimistic?
Matt: Optimistic, medicine. I think the biggest use cases by far are in medicine. We're already seeing AI systems develop drugs in roughly half the time it takes to develop a drug using traditional human-based methods. We're seeing systems discover new drugs. Actually, the first AI-discovered drug is in human trials right now, FDA Phase 2 human trials. That deals with a disease called IPF, idiopathic pulmonary fibrosis. It's a terminal disease.
5 million people potentially impacted by this one AI output. We could save 5 million lives, which is pretty amazing. That's just the tip of the iceberg. This thing was discovered in three years. The average time it takes to discover drugs like this is 10 years, and that's pretty amazing. We could see a huge wealth of new drugs, and treatments, what have you — a lot of lives saved. I'm very optimistic about that.
There's real evidence. Like I said, this is in human trials now. There's real evidence to suggest this is not hype. Now, on the pessimistic end, I would say that the fake versus real problem is terrifying in a way. I mentioned the Sora video application. This thing produces almost photorealistic video. There was one video of, literally, a human, and with traditionally animated human faces, no matter how good they are, you can usually tell.
There's that uncanny valley effect. With this one, I couldn't tell that it was a fake human. In terms of our grip on reality, that can be very concerning. If you can't trust video, you can't trust audio, and you can't trust text, what can you trust? What counts as evidence in a trial context? What counts as history if people can just change video to match whatever narrative they want? That's very concerning.
Especially concerning because this stuff can be created by anybody. These technologies are out there. Small underground actors who have malicious intent can craft some deep fake video of the president saying something bad the day before the election that could sway the results of elections. We don't really have a good way of thinking about that. People don't necessarily know that these things exist at this point.
There's a lot of room for scams and a lot of room for influence and it's just uncertain. I am a little bit worried about that, and I think especially because there's no clear silver bullets in terms of taming this thing. Yes, I'm optimistic about some things like healthcare, and I think we really should be optimistic about that and unleash that at all costs. We should also confront these issues head-on.
For things like AI-generated video, we need to be thinking about how, as best we can, to set up good norms about what we can trust, to create perhaps forensic tools to identify these things, and hopefully to innovate around the problem to a certain extent. Overall, just as a society, we need to get to grips with these problems, face them head-on, and really recognize that this is a general-purpose technology. It has very good use cases, and it has very bad use cases. Our task is to ensure that it has a net benefit moving forward.
Brian: Thank you very much for taking the time. I found this to be incredibly informative. I hope the listeners did as well. With that, I'm going to wrap up this episode of the FinRegRant.
[00:39:33] [END OF AUDIO]