State AG’s AI Crackdown Meets Trump Executive Orders – Roadmap for Best Practices in Consumer Finance?
The Massachusetts Attorney General's $2.5 million settlement with student loan lender Earnest Operations LLC marks a pivotal moment in AI regulation, demonstrating how existing consumer protection laws are being forcefully applied to algorithmic decision-making systems. Announced on July 10, 2025, the settlement addressed allegations of unfair and deceptive practices and fair lending violations, primarily stemming from the company's alleged misuse of artificial intelligence (AI) underwriting models. While the resolution was reached through an Assurance of Discontinuance (AOD) filed in Suffolk County Superior Court rather than through a court ruling after a contested trial, it establishes a significant enforcement benchmark for AI liability.
The settlement is a clear demonstration of how existing consumer protection and anti-discrimination laws are being applied to advanced algorithmic decision-making systems. Its key implications underscore the critical imperative of robust AI governance, rigorous fair lending testing, and transparent model explainability for any company utilizing AI in regulated sectors. The aggressive stance of the Office of the Attorney General of Massachusetts (AG) and the detailed mandates of the settlement may serve as an early blueprint for future compliance and risk mitigation in the rapidly evolving landscape of AI deployment. Note that, for this post, we leave aside the myriad legal questions surrounding fair lending laws and disparate impact analysis, which we have discussed in prior posts, such as here, here and here.
The Evolving Landscape of AI Liability
The rapid integration of AI across business operations, particularly within the financial services sector, has ushered in a new era of legal and ethical challenges. As AI systems become increasingly sophisticated and autonomous in their decision-making, regulatory bodies globally are intensifying their scrutiny, focusing on the potential for these technologies to perpetuate or exacerbate compliance issues, including discriminatory outcomes that may arise unintentionally. This growing concern has prompted regulators to adapt existing legal frameworks to address novel AI applications.
In April 2024, the AG issued an advisory opinion explicitly clarifying that existing state consumer protection, anti-discrimination, and data security laws apply to emerging technologies, including AI and algorithmic decision-making systems, just as they would in any other applicable context. This advisory opinion was a crucial precursor to the Earnest settlement, signaling the AG's intent to leverage established legal frameworks to hold companies accountable for AI-related harms without waiting for new, specific AI legislation. This approach suggests that the focus of regulatory oversight is increasingly on the impact of a technology rather than the technology itself, setting a precedent for what can be termed "technology-neutral" regulation. This means that the fundamental legal principles of fairness, non-discrimination, and consumer protection are considered universally applicable, irrespective of whether a decision is rendered by a human or an algorithm. Consequently, businesses arguably must translate existing legal obligations into their AI development and deployment processes, as they can expect that new AI applications may be judged by outcomes under expansive readings of current legal standards.
The Earnest Operations LLC Settlement: A Landmark Precedent
The settlement between the AG and Earnest Operations LLC (Earnest) may represent a pivotal moment in the regulatory oversight of artificial intelligence in consumer finance. It highlights the tangible risks associated with deploying AI without comprehensive safeguards and the regulatory expectations for responsible AI use.
Background of the Case
The AG filed an enforcement action against Earnest, a Delaware-based student loan company, following an investigation of its underwriting and advertising practices. Earnest specializes in providing education financing products, including student loans, and its operations extensively utilized AI models for underwriting loan applications. The AG's investigation specifically targeted Earnest's lending practices, scrutinizing both its "Algorithmic Underwriting" and "Judgmental Underwriting" processes. The AG's allegations centered on the company's purported failure to prevent disparate outcomes and adequately mitigate fair lending risks associated with its AI-powered tools. According to the AG, the investigation revealed that Earnest's AI models, defined as "machine-based systems that make predictions, recommendations, or decisions influencing lending outcomes," automated loan approval and pricing decisions across three distinct stages: "prescreen decline," "quick decline," and "risk score," each employing algorithmic assessments and "Knockout Rules" to screen applicants. The AG asserted that compliance obligations extend to every automated stage that influences an applicant's progression, not solely to the final underwriting decision.
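To make that point concrete, the sketch below (illustrative only, in Python) shows one way a multi-stage decisioning pipeline could log the outcome of each automated stage so that fair lending testing can later be run on every stage that affects an applicant's progression, not just the final decision. The stage names track the settlement's description; the rule logic, thresholds, and field names are hypothetical assumptions.

```python
from typing import Callable

# Hypothetical staged decision rules; stage names follow the AOD's description,
# while the criteria and field names are illustrative assumptions.
STAGES: list[tuple[str, Callable[[dict], bool]]] = [
    ("prescreen_decline", lambda app: app["meets_prescreen_criteria"]),
    ("quick_decline",     lambda app: app["meets_quick_criteria"]),
    ("risk_score",        lambda app: app["risk_score"] >= 0.6),
]

def decide_with_audit_trail(application: dict) -> dict:
    """Run each automated stage and log its outcome so every stage can later
    be tested for disparate impact, not just the final decision."""
    trail = {}
    for stage_name, passes in STAGES:
        trail[stage_name] = passes(application)
        if not trail[stage_name]:   # the applicant stops progressing at this stage
            return {"approved": False, "stopped_at": stage_name, "trail": trail}
    return {"approved": True, "stopped_at": None, "trail": trail}
```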
Allegations of AI Misuse and Discriminatory Impact
The settlement details several specific allegations of AI misuse that purportedly led to violations of consumer protection and fair lending laws, including the Equal Credit Opportunity Act (ECOA) and Massachusetts' Consumer Protection Act (G.L. c. 93A, § 2).
A primary concern was Earnest's incorporation of the U.S. Department of Education's "Cohort Default Rate" (CDR) data into its Student Loan Refinance (SLR) model as a weighted input. The AG asserted that this practice disproportionately penalized Black and Hispanic applicants, resulting in disparate impacts on approval rates and loan terms. This allegedly meant that these groups were more likely to face unfavorable loan terms or outright denials than White applicants, in violation of fair lending principles. To rectify this, the settlement explicitly mandated that Earnest discontinue the use of the CDR variable in its AI models. Furthermore, it required the company to conduct annual fair lending testing to ensure that no other variables or processes led to similar discriminatory outcomes.
It is crucial to note that the AG clarified that the issue was not the use of publicly available CDR data itself, but rather the alleged failure to test for disparate impact stemming from its application within the AI model. The AG's emphasis on "disparate impact" and the assertion that the company was held accountable "regardless of intent" indicate a critical shift in regulatory enforcement. This distinction highlights a regulatory focus on the outcomes of AI systems, rather than solely on the intent behind their design. As a result, companies arguably must proactively identify and mitigate unintended biases in the outcomes of their AI systems, even when the underlying data or algorithms appear neutral on the surface. The compliance paradigm is thus shifting from a focus on inputs to a focus on outputs.
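For readers who want a sense of what outcome-focused testing can look like in practice, here is a minimal sketch that compares approval rates across demographic groups and flags large gaps for further review. The field names, reference group, and the 80% (four-fifths) screening threshold are assumptions for illustration; the settlement does not prescribe a particular statistical methodology, and actual testing should be designed with counsel and fair lending experts.

```python
import pandas as pd

def approval_rate_disparity(df: pd.DataFrame,
                            group_col: str = "race_ethnicity",
                            outcome_col: str = "approved",
                            reference_group: str = "White") -> pd.DataFrame:
    """Compare each group's approval rate to the reference group's rate."""
    rates = df.groupby(group_col)[outcome_col].mean()
    ref_rate = rates[reference_group]
    report = pd.DataFrame({
        "approval_rate": rates,
        "ratio_to_reference": rates / ref_rate,
    })
    # Flag groups whose approval rate falls below 80% of the reference rate,
    # a common screening heuristic (not a legal standard or safe harbor).
    report["flag_for_review"] = report["ratio_to_reference"] < 0.80
    return report

# Example: df has one row per application with the applicant's group and a
# 0/1 "approved" outcome; approval_rate_disparity(df) produces the report.
```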
Another allegation involved the use of "Knockout Rules" based on immigration status. Earnest was accused of automatically denying applications from individuals who did not possess at least a green card during the "prescreen decline" stage of its underwriting process. This practice allegedly created a disparate impact risk against applicants on the basis of national origin, in violation of ECOA and Massachusetts state law. In response, the settlement strictly prohibited Earnest from continuing this practice, requiring the discontinuation of the "Knockout Rule" based on immigration status and reinforcing the need for comprehensive fair lending testing across all algorithmic rules.
Beyond these specific discriminatory practices, a broader systemic issue emerged: Earnest's alleged lack of fair lending testing and model explainability. The AG alleged that the company deployed sophisticated AI models "without taking reasonable measures to mitigate fair lending risks," specifically, failing to test its AI models for disparate impact. This alleged oversight meant that potential biases could go undetected and unaddressed. To remedy this, the settlement mandated that Earnest conduct annual fair lending testing for all its AI models, ensuring continuous vigilance against discriminatory effects. This mandate also included a requirement for additional testing upon "trigger events" like significant model updates.
Finally, the AG alleged that Earnest's adverse action notices frequently failed to provide specific reasons for credit denials, partly because the algorithmic models could not adequately explain their decision-making logic. This lack of transparency in AI models, often referred to as the "black box" problem, presents a substantial compliance risk, particularly when decisions impact consumers in legally protected ways. To address this, the settlement mandated that Earnest use interpretable models for adverse action notices, compelling the company to develop systems that can clearly articulate why an applicant was denied and ensuring consumers receive the specific reasons they are entitled to by law. The AG’s emphasis on explainability and auditability underscores that merely having an effective AI model is insufficient; companies must also be able to articulate how the model arrives at its decisions.
Collectively, these allegations and their corresponding mandates paint a clear picture of the AG's commitment to holding companies accountable for the responsible and ethical deployment of AI, ensuring that technological advancement does not come at the cost of fairness and consumer protection.
Algorithmic Governance Mandates: A Blueprint for Compliance
The AOD in the Earnest settlement establishes a comprehensive governance framework for the company's AI underwriting practices, framed in a way that may serve as a "blueprint" for other companies employing automated decision-making or AI in underwriting. The detailed nature of these mandates indicates a shift from purely punitive enforcement to a proactive, prescriptive regulatory strategy aimed at shaping industry best practices for AI. The requirements for policies, dedicated oversight teams, continuous testing protocols, and robust documentation are not merely punitive measures but rather a clear articulation of regulatory expectations for responsible AI deployment. This level of operational integration suggests that AI governance cannot be a mere afterthought; it must be embedded into the core business strategy and development lifecycle of AI systems.
What follows is a practical outline of the AOD’s apparent mandates for companies using or planning to use AI in lending decisions.
Comprehensive AI Policies and Procedures
Companies should develop and maintain robust written policies to ensure their AI models comply with anti-discrimination and fair lending laws. These policies must encompass the entire lifecycle of AI models, from their initial design and development through deployment, ongoing monitoring, and subsequent updates. This requirement emphasizes a proactive, end-to-end approach to AI risk management, ensuring that compliance considerations are integrated at every stage of an AI system's existence.
Establishment of an Algorithmic Oversight Team
A key structural component is the establishment of an internal algorithmic oversight team, which must have a designated chairperson. This team is assigned critical responsibilities, including managing fair lending testing, maintaining comprehensive model inventories, and actively responding to any identified bias concerns. This institutionalizes accountability and expertise within the organization specifically for AI governance, ensuring dedicated resources are allocated to these complex issues.
Rigorous Fair Lending Testing Protocols
The AOD imposes stringent requirements for fair lending testing. It mandates annual disparate impact testing for all algorithmic underwriting models and "knockout rules" utilized for loan application decisions. Furthermore, the settlement requires additional testing to be conducted upon the occurrence of "trigger events," such as significant model updates or the receipt of credible internal complaints. This goes beyond one-time assessments, necessitating continuous monitoring and adaptive measures to ensure ongoing compliance and address emerging biases.
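As a rough illustration of how an annual-plus-trigger-event cadence might be operationalized, the sketch below checks whether re-testing is due. The event names and the 365-day cycle are assumptions drawn from the description above, not the AOD's actual definitions.

```python
from datetime import date, timedelta

# Hypothetical trigger events; the AOD's actual definitions would govern.
TRIGGER_EVENTS = {"significant_model_update", "credible_internal_complaint"}

def retesting_required(last_test_date: date,
                       events_since_last_test: set[str],
                       annual_cycle_days: int = 365) -> bool:
    """Return True if the annual test is due or a trigger event has occurred."""
    annual_due = date.today() - last_test_date >= timedelta(days=annual_cycle_days)
    triggered = bool(events_since_last_test & TRIGGER_EVENTS)
    return annual_due or triggered
```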
Model Inventories and Documentation Standards
Companies should maintain detailed records for all their AI models. This includes documentation of algorithms, training data used, model parameters, active use dates, and the results of all fair lending testing conducted. This emphasis on comprehensive documentation is crucial for enhancing transparency, enabling effective auditability, and supporting the explainability and defendability of AI decisions, particularly for models that might otherwise be perceived as "black boxes."
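A simple way to picture this documentation standard is as a structured inventory record kept for each model. The sketch below is a hypothetical data structure reflecting the categories described above (algorithm, training data, parameters, active-use dates, and fair lending test results); the field names are our assumptions, not the AOD's.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class ModelInventoryRecord:
    model_name: str
    algorithm: str                        # e.g., "gradient boosted trees"
    training_data_sources: list[str]      # datasets used to build the model
    parameters: dict                      # key hyperparameters or weights
    active_from: date
    active_to: Optional[date] = None      # None while the model remains in use
    fair_lending_tests: list[dict] = field(default_factory=list)

    def log_test(self, test_date: date, method: str, findings: str) -> None:
        """Append the result of a fair lending test to the model's record."""
        self.fair_lending_tests.append(
            {"date": test_date, "method": method, "findings": findings}
        )
```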
Interpretable Models for Adverse Action Notices
To ensure compliance with ECOA and its implementing Regulation B, companies should utilize interpretable models or systems that enable the accurate identification and articulation of the reasons for credit denials. This requirement directly confronts the challenge posed by opaque or "black box" AI models, which present a substantial compliance risk when they cannot explain their decision-making logic, and it ensures that individuals receive understandable explanations for adverse credit outcomes.
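To illustrate what "interpretable" can mean operationally, the following sketch shows how a scorecard-style (logistic regression) model can rank the features that most lowered a denied applicant's score relative to a typical approved applicant, yielding candidate adverse action reasons. The features, coefficients, and ranking method are hypothetical assumptions, not Earnest's actual model or a statement of what Regulation B requires.

```python
import numpy as np

# Hypothetical scorecard: features and weights are purely illustrative.
FEATURES = ["debt_to_income", "months_delinquent", "credit_utilization"]
COEFFICIENTS = np.array([-2.1, -0.8, -1.5])

def adverse_action_reasons(applicant: np.ndarray,
                           approved_mean: np.ndarray,
                           top_n: int = 2) -> list[str]:
    """Rank features by how much they pulled this applicant's score below a
    typical approved applicant's, and return the strongest candidate reasons."""
    contributions = COEFFICIENTS * (applicant - approved_mean)
    worst = np.argsort(contributions)[:top_n]   # most negative contributions first
    return [FEATURES[i] for i in worst]

# Hypothetical usage: reasons for a denied applicant relative to approved averages.
# adverse_action_reasons(np.array([0.55, 3.0, 0.9]), np.array([0.30, 0.0, 0.4]))
# -> ["months_delinquent", "credit_utilization"]
```

Inherently interpretable model forms make it easier to tie each surfaced reason back to the model's actual logic, which is the kind of auditability the AOD emphasizes.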
Discontinuation of Problematic Variables
The settlement explicitly prohibits Earnest from using the "Cohort Default Rate" variable and the "Knockout Rule" based on immigration status in its models. More broadly, companies are expected to thoroughly understand how different data sets, including publicly available information, are weighted and utilized within their models. This understanding is critical to facilitate the identification and removal of any problematic data sets that could inadvertently lead to discriminatory outcomes.
Broader Implications for Businesses and Regulatory Compliance
The Earnest settlement extends far beyond the specific facts of the case, carrying profound implications for businesses across various sectors and for the broader landscape of regulatory compliance in the age of artificial intelligence.
Application of Existing Laws to Emerging AI Technologies
The settlement powerfully illustrates that regulators are not waiting for the passage of new, AI-specific legislation to address algorithmic harms. Instead, they are strategically leveraging existing consumer protection and anti-discrimination laws, such as Massachusetts' G.L. c. 93A and the federal Equal Credit Opportunity Act (ECOA), to hold companies accountable for AI misuse. This "no AI law, no problem" approach signifies that businesses must operate under the assumption that their AI systems may already be subject to existing legal frameworks. The AG's April 2024 advisory explicitly stated that, at least in the Commonwealth, existing laws apply to AI "just as they would in any other applicable context." This is a clear articulation of a technology-neutral regulatory philosophy: the legal principles of fairness, non-discrimination, and consumer protection are considered universally applicable, regardless of whether a decision is made by a human or an algorithm. These developments have potentially far-reaching implications for innovation, as they suggest that new AI applications will be judged by their outcomes under existing legal standards rather than requiring specific new laws for every technological advancement. Companies must therefore translate existing legal obligations into their AI development and deployment processes.
The Imperative of Proactive AI Governance
The Earnest settlement unequivocally underscores the critical need for companies to consider robust, proactive AI governance programs. This includes conducting rigorous fair lending testing at every stage of model development and deployment, maintaining comprehensive documentation for all AI systems, and establishing clear roles and responsibilities for compliance, legal, and data science teams. Mitigating the risks associated with "black box" AI models is paramount; companies must ensure their AI models are auditable, explainable, and transparent, particularly when those models are involved in making decisions that directly impact consumers.
The detailed mandates in the AOD elevate AI governance from a mere compliance checklist to a strategic imperative for businesses, especially those operating in regulated industries. The extensive requirements, covering policies, dedicated teams, continuous testing, and detailed documentation, suggest that AI governance cannot be an afterthought; it must be deeply embedded into the core business strategy and the entire development lifecycle of AI systems. This necessitates substantial investment in cross-functional teams (comprising legal, compliance, data science, and engineering expertise), the development of new internal processes, and the architecting of AI systems to ensure explainability and auditability from their inception. The ongoing evolution in regulatory treatment of AI demands attention and resource allocation from the highest levels of an organization, transforming AI governance into a competitive differentiator and a fundamental risk mitigator.
Anticipating Future Regulatory Scrutiny
The Earnest settlement is not an isolated incident but rather part of a growing body of regulatory actions addressing AI in consumer finance. It aligns with broader initiatives aimed at ensuring AI systems are fair, transparent, and accountable. Businesses should anticipate continued and heightened scrutiny from both state and federal regulators concerning their AI deployments. The potential for "steep monetary penalties and long-term regulatory oversight" is a clear signal of a trend toward more aggressive enforcement in the AI domain. Moreover, the trend toward the technology-neutral use of existing laws does not preclude the development of new, AI-specific regulations. This necessitates that companies remain vigilant and proactive in their AI risk management strategies.
The Federal Executive Order Backdrop … Conflict or Complement to State AI Regulation?
The Earnest settlement arrives at a time of transformation in the federal regulatory posture toward artificial intelligence. In July 2025, for example, President Trump issued a series of Executive Orders (EOs) that demonstrate the priorities underlying federal AI policy, priorities that may cause tension with the algorithmic accountability framework reflected in this state enforcement action.
Notably, the July 23, 2025 Executive Order “Preventing Woke AI in the Federal Government” prohibits the use of generative or decision-making AI models that embed or rely upon demographic-specific variables, particularly race or gender, in any capacity deemed to reflect ideological bias. This federal directive risks chilling the very type of disparate impact testing and identity-aware fairness analysis that the AG required Earnest to implement as a remedial and forward-looking compliance measure. The federal order’s call for neutrality, positioned as a rejection of so-called “woke” algorithmic content, is more than just a policy stance; it could fundamentally reshape federal procurement standards, regulatory oversight priorities, and how legal authorities interpret fairness in AI systems.
Likewise, Executive Orders promoting AI deregulation and innovation, such as the January 23, 2025 Executive Order "Removing Barriers to American Leadership in Artificial Intelligence," emphasize clearing the path for rapid AI development and revoking prior risk-based governance frameworks. These EOs stand in stark contrast to the Massachusetts settlement, which imposes rigorous documentation, explainability, and testing mandates that treat AI as a high-risk domain requiring sustained compliance infrastructure. For institutions deploying AI across jurisdictions, this divergence creates operational uncertainty: adherence to state-mandated fairness obligations may be viewed as incompatible with emerging federal requirements – or at least deprioritized in federally funded contexts.
However, the EOs do not preempt state consumer protection or civil rights laws, and the AG's authority to enforce fair lending law remains intact. The Earnest settlement reflects a broader legal principle: existing laws apply to algorithmic outcomes, regardless of whether federal agencies choose to adopt additional protections. That means state enforcement regimes may become a de facto patchwork standard for operational AI governance in consumer finance, even if the federal government embraces a more wait-and-see model.
What the Future May Hold
The growing bifurcation between federal and state regulatory philosophies raises deeper questions about the future of AI governance … such as:
Will a national AI compliance floor emerge organically through multi-state enforcement actions like this one?
Will federal AI policy lean toward preemption, potentially displacing state-driven mandates rooted in civil rights and consumer fairness?
Until those questions are resolved, companies subject to both state and federal oversight must tread carefully, as they can expect continued and heightened regulatory scrutiny of AI systems, both at state and federal levels.
The Earnest case, coupled with the AG's 2024 advisory opinion, underscores that AI liability is no longer theoretical and that state regulators will act, even in the absence of federal alignment. The AG's actions serve as both a warning and a practical guide for organizations seeking to harness the power of AI responsibly while mitigating legal risk. Businesses deploying AI must recognize that accountability for AI outcomes may now be a reality. Navigating this complex legal frontier will necessitate a strategic and integrated approach to AI ethics, compliance, and risk management, embedding these considerations into the very fabric of AI development and operational processes.
At the end of the day, forward-looking organizations should maintain AI governance programs that satisfy the strictest applicable standard, ensuring compliance not only with federal innovation policies but also with state-enforced principles of equity, explainability, and consumer protection.
Questions?
Our team stays at the forefront of the rapidly evolving regulatory landscape surrounding technological advancements in lending, including AI. For more information or to discuss how we can help you build and maintain AI governance compliance strategies with confidence, contact jlevonick@garrishorn.com.
Read the Earnest settlement here.
Read the July 23, 2025 EO, “Preventing Woke AI in the Federal Government” here.