
Six Ways AI Is Changing the Legal Landscape for Lenders

Date: Aug 03, 2023 @ 05:00 AM
Filed Under: Industry Insights

With the prevalence of Artificial Intelligence (AI) in the current world of business, many lenders may be wondering whether using this technology will get them into trouble with the law. Scrutiny and close monitoring are essential to the process of selecting AI software. Without diligent research and proper risk management, financiers may find themselves in the undesirable position of being the first to litigate the largely unmapped area of AI under the law. Litigation can be time-consuming and costly. To safely navigate the world of AI and protect their time, money, and reputation, lenders should be mindful of the six key legal issues discussed below.

1. AI May Discriminate in Labor and Employment

Financiers using AI to build a strong team should be acutely aware of the labor and employment issues that can arise. AI has already infiltrated most workplaces and is being used in many decision-making processes. According to Equal Employment Opportunity Commission (EEOC) Chairwoman Charlotte Burrows, more than 80 percent of employers are using AI in some form in their work and employment decision-making. While AI can expedite hiring and potentially reduce human error in the process, it can also replicate biases and result in systemic discrimination. Lenders need to be wary of any disparate impact caused by biased algorithms and should consider conducting bias audits.
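For illustration, one widely cited benchmark in bias audits is the EEOC's four-fifths (80 percent) rule of thumb: a selection rate for any group below 80 percent of the highest group's rate is treated as evidence of potential disparate impact. The minimal sketch below applies that check to hypothetical output from an AI screening tool; the group labels and counts are invented, and a real audit should be designed with counsel.

    # Minimal sketch of a four-fifths (80%) rule check on the output of
    # an AI screening tool. All counts and group labels are hypothetical.

    def selection_rate(selected: int, applicants: int) -> float:
        """Share of a group's applicants that the tool advanced."""
        return selected / applicants

    # Hypothetical audit numbers; in practice these come from the tool's logs.
    groups = {
        "group_a": {"applicants": 400, "selected": 120},  # rate = 0.30
        "group_b": {"applicants": 300, "selected": 60},   # rate = 0.20
    }

    rates = {name: selection_rate(g["selected"], g["applicants"])
             for name, g in groups.items()}
    highest = max(rates.values())

    for name, rate in rates.items():
        ratio = rate / highest
        # A ratio below 0.8 flags potential disparate impact under the
        # four-fifths rule of thumb and warrants closer review.
        flag = "REVIEW" if ratio < 0.8 else "ok"
        print(f"{name}: selection rate {rate:.2f}, impact ratio {ratio:.2f} -> {flag}")

Here group_b's impact ratio is 0.67, so the sketch would flag it for review; a flag is a prompt for human and legal scrutiny, not a legal conclusion in itself.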

Currently, no federal regulations specifically address the use of AI in employment and HR. However, the EEOC and the Department of Justice have published guidance to assist employers in navigating the use of AI while complying with the Americans with Disabilities Act (ADA). In addition, several states and cities have introduced legislation to address AI's impact. Staying informed, conducting bias audits, and continually adapting policies and procedures in compliance with emerging regulations are crucial steps in navigating the legal landscape surrounding AI in the workplace. In an area with very little legal precedent, financiers must be extra cautious to avoid being sued and thereby becoming the precedent.

2. AI Generated Content May Lack Intellectual Property (IP) Protection or Infringe Upon the Rights of Others

While engaging in marketing, a financier may be tempted to use AI programs to generate compelling images or copy to advertise their business. This could, however, be a dangerous move. AI, specifically generative AI, has raised questions about the fundamentals of IP law, including copyright authorship and patent inventorship. While AI offers innovative possibilities, its rapid evolution has outpaced the development of clear legal frameworks. Recent court decisions and administrative guidance have shed light on some key takeaways. For instance, courts and agencies have held that an AI system cannot be an "author" under copyright law or an "inventor" under patent law. Additionally, the protection of AI-generated works remains unsettled, with uncertainty about the level of human involvement required to meet copyright or patent thresholds. As a result, financiers who use generative AI may not have IP rights over the content that they pay to generate.

The use of generative AI also raises concerns about the potential infringement of copyrighted works and the legal distinction between originality and derivation. AI is often programmed to scrape or extract data from public sources or company websites. These resources, while publicly available, are frequently copyrighted. Consequently, the uninhibited use of generative AI can lead to inadvertent IP infringement, trade secret misappropriation, or unfair competition claims. For example, comedian and author Sarah Silverman recently filed suit against OpenAI and Meta for copyright infringement after her book was purportedly used to train AI without compensation. While that case was filed against AI companies in particular, lenders who use AI products should be aware that the content they use may include copyrighted language, images, or works. Regulatory bodies and courts are attempting to address the challenges posed by the rise of AI, but the lack of clear guidance demands ongoing vigilance and proactive measures from financiers to ensure compliance with existing laws and to mitigate legal risk in this rapidly evolving space. Financiers spend valuable time and money building their brand. If litigation ensues, they can be enjoined from using the IP at issue and forced to start over from scratch. Thus, it is important to secure proper rights to generated material from day one.

When using AI, financiers should take affirmative steps to protect their interests. First, a lender should closely review the terms of service of any AI software under consideration to ensure reasonable protections are in place; if the terms are unreasonable, the lender may ask for a revision or find another software provider. Another safeguard is to include protective clauses in contracts and transaction terms where possible. Businesses should demand terms of service from generative AI platforms confirming that any data used to train the AI was properly licensed, and should seek broad indemnification for potential IP infringement caused by the AI company's failure to license data. Additionally, lenders should incorporate disclosures in agreements to clarify IP rights and establish processes for authorship and ownership of AI-generated works. Confidentiality provisions can also be expanded to prohibit the inputting of confidential information into AI tools. Finally, financiers should stay informed and work with competent legal counsel to ensure AI-specific clauses are included in their current and future contracts. By adopting these measures, financiers can minimize unintended risks and ensure that their use of generative AI aligns with legal requirements.

3. AI Data Collection May Breach Privacy

Lenders should maintain data privacy when collecting information from website visitors, loan applicants, or other business-related sources. This requires ensuring compliance with relevant federal and state privacy regulations. AI systems rely on exposure to massive quantities of data for the algorithmic training of their programs. It is important to note that data inputted into AI systems may no longer remain confidential and may be subject to the system's terms of use. Financiers should exercise caution and refrain from inputting personal data, trade secrets, confidential or privileged information, or data that should not be disclosed to third parties into AI systems. In addition, to stay compliant with existing regulations, lenders must prioritize transparency with borrowers regarding data collection, usage, and protection, obtaining the appropriate consent for handling sensitive information. By adhering to data privacy regulations and implementing strong policies and procedures, lenders can safeguard customer information and distance themselves from potential lawsuits.
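As a minimal illustration of that precaution, the sketch below scrubs obvious identifiers from text before it leaves a lender's systems. The regex patterns and sample prompt are purely illustrative; a production system would rely on a vetted data-loss-prevention tool rather than ad hoc patterns, and plain regexes will miss identifiers such as names.

    import re

    # Minimal sketch: strip obvious identifiers before text is sent to a
    # third-party AI tool. The patterns are illustrative (US-style SSNs,
    # emails, phone numbers); a production system would use a vetted
    # data-loss-prevention library rather than ad hoc regexes.

    REDACTIONS = [
        (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
        (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
        (re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"), "[PHONE]"),
    ]

    def scrub(text: str) -> str:
        """Replace each matched identifier with a neutral placeholder."""
        for pattern, placeholder in REDACTIONS:
            text = pattern.sub(placeholder, text)
        return text

    prompt = "Applicant Jane Roe, SSN 123-45-6789, jroe@example.com, 555-867-5309."
    print(scrub(prompt))
    # -> Applicant Jane Roe, SSN [SSN], [EMAIL], [PHONE].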

Furthermore, when choosing which AI program to use, financiers should investigate specific information about the software’s AI models, data privacy and security standards, and risk-mitigation safeguards. For reference, here is a list of helpful questions to ask, provided by NASDAQ:

  • What are the AI data training practices used?
  • How are my enterprise’s and my client’s confidential data and IP protected?
  • What are the security frameworks and practices?
  • Is the provider using a custom proprietary AI model or a third-party bolt-on model?
  • If they have a third-party bolt-on provider, what is their data retention policy?
  • Will our enterprise’s sensitive data be used to train the public AI model?

4. AI-Assisted Funding May Sidestep Fair Lending Laws

When using AI tools, lenders should take additional precautionary measures to ensure compliance with fair lending laws. The legal frameworks that govern commercial finance, including the Equal Credit Opportunity Act (ECOA) and the ADA, are of utmost importance in light of the potential for discrimination by algorithms and AI. These statutes prohibit both "disparate treatment" and "disparate impact" discrimination. Disparate treatment involves intentionally treating someone differently based on a prohibited factor, while disparate impact occurs when a neutral policy disproportionately harms a protected group and is not necessary for a legitimate business purpose.

As mentioned above, AI has a reputation for replicating societal biases and contributing to systemic discrimination. With the increased use and sophistication of AI and machine learning (ML) models, the risk of discrimination also grows. Financiers must recognize how fair lending laws apply and ensure that their algorithms and AI systems are designed and implemented in a manner that upholds these principles, thoroughly researching and evaluating any AI platform before adopting it. By actively addressing bias and discrimination patterns, lenders can stay safe and solidify their reputations as leading lenders. In doing so, they will contribute to a more equitable financial system that provides fair access to credit for all, regardless of race or ethnicity.
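To make this concrete, the sketch below illustrates one hypothetical screen for proxy discrimination: even if a protected attribute is excluded from an underwriting model, an input that correlates strongly with it can reproduce disparate impact. All names, numbers, and the threshold are invented for illustration, and it assumes Python 3.10+ for statistics.correlation; a real fair lending review would be far more rigorous and designed with counsel.

    from statistics import correlation  # Python 3.10+

    # Minimal sketch of a proxy-variable screen: a model input that is
    # strongly correlated with a protected attribute can act as a proxy
    # for it. All values below are hypothetical.

    # 1 = member of a protected class, 0 = not; held out for auditing
    # only and never fed to the model itself.
    protected = [1, 1, 0, 0, 1, 0, 1, 0, 0, 1]

    # Candidate model inputs (hypothetical applicant features).
    features = {
        "debt_to_income": [0.41, 0.38, 0.22, 0.25, 0.44, 0.21, 0.39, 0.24, 0.23, 0.40],
        "years_at_job": [3, 7, 5, 2, 8, 4, 1, 6, 9, 2],
    }

    THRESHOLD = 0.5  # illustrative cutoff; a real audit would justify this

    for name, values in features.items():
        r = correlation(protected, values)
        flag = "possible proxy, review" if abs(r) > THRESHOLD else "ok"
        print(f"{name}: r = {r:+.2f} -> {flag}")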

5. AI May Enhance Smart Contracts but Comes with Downsides

Many financiers are interested in implementing new strategies to reduce costs and increase efficiency, and smart contracts often come up in this conversation. When doing deals with traditional smart contracts, financiers should be aware of the unique challenges these innovative tools present. Smart contracts are pre-defined computer programs stored on a blockchain and executed automatically when specific conditions are met, without the need for intermediaries. However, this automation limits customization and can create negotiation, modification, and execution issues.
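For illustration, the following is a plain-Python simulation of the kind of logic a simple smart contract encodes; actual smart contracts are written in blockchain languages such as Solidity and run on-chain. The class, fields, and numbers are hypothetical. The point is that every term must be objective, machine-checkable, and fixed at deployment.

    from dataclasses import dataclass

    # Plain-Python simulation of the logic a simple smart contract
    # encodes. Real smart contracts cannot be edited in place once
    # deployed; the frozen dataclass below mirrors that immutability.
    # All terms and numbers are hypothetical.

    @dataclass(frozen=True)
    class LoanEscrow:
        principal: float
        collateral_required: float
        maturity_day: int

        def release_funds(self, collateral_posted: float, day: int) -> bool:
            # Executes purely from objective, machine-checkable conditions;
            # there is no room for open terms or human judgment calls.
            return (collateral_posted >= self.collateral_required
                    and day <= self.maturity_day)

    contract = LoanEscrow(principal=100_000, collateral_required=120_000, maturity_day=90)
    print(contract.release_funds(collateral_posted=125_000, day=30))  # True: funds release
    print(contract.release_funds(collateral_posted=100_000, day=30))  # False: undercollateralized
    # contract.maturity_day = 120  # would raise FrozenInstanceError;
    # "amending" means canceling and deploying a new contract.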

In terms of contract negotiation, smart contracts require precision and objectivity, leaving no room for ongoing negotiations or open terms. The flexibility to fill in provisions later or negotiate solutions when issues arise, as is common with traditional contracts, is not present in smart contracts. And unlike traditional contracts, which can be modified upon mutual agreement on new terms, smart contracts cannot be changed once deployed to the blockchain; modification is possible only by canceling the old contract and creating an entirely new one with revised terms. For lenders who value a more traditional approach built on personalized relationships and agreements, this may not work well.

The convergence of AI and blockchain has the potential to significantly improve smart contract modification. Smart contracts, traditionally rigid in design, could benefit from AI and gain the ability to dynamically adjust contract terms based on real-time market conditions or asset performance. This adaptability would allow smart contracts to cater to evolving circumstances and the needs of the parties.

Furthermore, AI could improve smart contract execution through better contract negotiation, automation, and efficiency. By addressing scalability and efficiency challenges, AI algorithms could enhance transaction processing and prioritize tasks, leading to faster confirmation times and a more seamless user experience. Additionally, AI could strengthen the integrity of blockchain networks by detecting and preventing fraudulent activity, reinforcing the security and trustworthiness of a transaction. AI might also streamline dispute resolution in smart contracts by evaluating and interpreting contract terms and potentially resolving negotiation issues before the contract is executed. Through predictive analytics, AI could facilitate deeper analysis of vast datasets, helping to identify trends, patterns, and risks that could affect contract outcomes. This data-centric approach could minimize risk, reduce uncertainty, and support more informed decision-making.

There are, however, downsides to applying AI within smart contracts. Currently, not all smart contracts are enforceable; any time a wet-ink signature is required, a smart contract will not be a viable option. Furthermore, entrusting contract adjustments to AI, a technology currently known to have problems with bias, copyright infringement, and fabricating information, might not be the safest decision for your company. Including AI can also increase complexity and risk and require new skills and resources from the financier's team. While these tools could certainly help reduce costs and increase efficiency, the improvements may come at the expense of reliability and expertise.

The collaborative future of AI and blockchain holds promise, but financiers should take care not to use these tools recklessly. The ongoing development of these technologies will shape the future of smart contracts and the law, and you don't want to become the unexpected case that shapes it.

6. AI Complicates Liability Outcomes

While AI can create efficiencies, if a legal issue arises it can be difficult to say with certainty whether the lender may be held liable under the law. Because AI programs are designed to learn and evolve on their own, the decisions they make may not align with what the manufacturer originally intended. As a result, an AI software company would likely argue in court that it is not responsible for damages caused by its AI's errors. Some sources do suggest, however, that negligence liability and a strict product liability approach may provide a counter-argument. Under a strict liability approach, the manufacturer of an AI program could be held liable for defects without any inquiry into whether the defect arose from an identifiable failure, such as a design defect, a manufacturing defect, or manufacturer negligence. If the product is defective, the manufacturer could be held liable regardless of intent. Under this approach, the independent decisions an AI system makes would be attributable to the distributor of the AI system rather than the lender.

As previously mentioned, another way to protect yourself from liability may be to include broad indemnification clauses in your agreements with AI software companies. Although courts may construe such clauses narrowly, drafting the language to cover AI applications in various business scenarios (such as IP, data privacy, or fair lending) may drastically reduce the likelihood of liability if something goes wrong.

While there are potential solutions to the problem of liability in AI cases, it is vital to remember that we are in uncharted territory. Financiers must use AI systems with caution, as a mistake made by an AI program could result in legal repercussions for the financier.

Conclusion

In summary, AI is drastically changing the legal landscape in banking and finance. As the legal frameworks surrounding AI continue to evolve, lenders should stay informed, seek competent legal counsel, and implement proactive measures to mitigate legal risks and ensure compliance with existing laws. Litigation can be time-consuming and costly, and no financier wants to be the illustrative case that sets the legal precedent.



Tenor D. Ickes
Summer Intern | Oswald Law Firm
Tenor D. Ickes was born in Wurzburg, Germany, to U.S. military parents serving abroad. He is a summer intern at Oswald Law Firm, where he enjoys writing on trending business, banking & finance law, and litigation topics. He earned a B.A. from San Jose State University while working as a law clerk at a Silicon Valley banking litigation firm. He is currently pursuing a Juris Doctor degree, and he is a student member of the San Francisco Bank Attorneys Bar Association and The Bar Association of San Francisco.