Concept Paper

Algorithms for All: Can AI in the Mortgage Market Expand Access to Homeownership?

1 Department of Strategic Management and Public Policy, School of Business, The George Washington University, Washington, DC 20052, USA
2 William P. and Hazel B. White Center, Department of IT, Analytics, and Operations, Mendoza College of Business, University of Notre Dame, Notre Dame, IN 46556, USA
3 AB Schnare Associates, Washington, DC 20007, USA
* Author to whom correspondence should be addressed.
AI 2023, 4(4), 888-903; https://doi.org/10.3390/ai4040045
Submission received: 8 August 2023 / Revised: 18 September 2023 / Accepted: 27 September 2023 / Published: 11 October 2023
(This article belongs to the Special Issue Standards and Ethics in AI)

Abstract

Artificial intelligence (AI) is transforming the mortgage market at every stage of the value chain. In this paper, we examine the potential for the mortgage industry to leverage AI to overcome the historical and systemic barriers to homeownership for members of Black, Brown, and lower-income communities. We begin by proposing societal, ethical, legal, and practical criteria that should be considered in the development and implementation of AI models. Based on this framework, we discuss the applications of AI that are transforming the mortgage market, including digital marketing, the inclusion of non-traditional “big data” in credit scoring algorithms, AI property valuation, and loan underwriting models. We conclude that although the current AI models may reflect the same biases that have existed historically in the mortgage market, opportunities exist for proactive, responsible AI model development designed to remove the systemic barriers to mortgage credit access.

1. Introduction

In the mortgage market, digitalization, i.e., “the use of digital technologies to change a business model and provide new revenue and value-producing opportunities” [1], is already embedded and expanding throughout the mortgage value chain. For example, mortgage lenders use sophisticated digital marketing techniques to target prospective borrowers and artificially intelligent bots to communicate with customers. Credit scoring companies are using machine learning processes to evaluate credit risk. Property valuation algorithms integrate copious amounts of data on land titles, sales, market trends, and aspects of local infrastructure to produce digital appraisals. Digitized processes are replacing manual, paper-based workflows used for loan servicing and loss mitigation. Industry participants are experimenting with blockchain implementations to manage the origination process. Despite several potential benefits of digital transformation, including increased efficiency and accuracy and lower costs, institutions and regulators are finding it difficult to keep up with the rate of technological innovation. At the same time, evidence regarding the impact of digitalization processes, namely artificial intelligence (AI), on opportunities to expand mortgage credit to underserved communities, including lower-income and minority households, is lacking and inconclusive at best. In this article, we offer a framework to examine the effectiveness of AI applications in the mortgage industry.
The purpose of this paper is to examine how the mortgage industry is leveraging AI to help overcome the historical and systemic barriers to homeownership for members of Black, Brown, and lower-income communities. As shown in Figure 1, we examine five areas where AI is transforming the mortgage market: digital marketing; the inclusion of non-traditional “big data” in credit scoring; and the use of AI and machine learning in automated property valuation, underwriting, and fraud detection models. Building on prior research, we examine evidence of the potential of AI to transform market systems and outcomes. Based on this overview, we then describe how proactive, responsible technological transformation can be used to help overcome the systemic barriers to mortgage credit for historically underserved households.

2. AI and ML in the Mortgage Industry

Artificial intelligence (AI) is a technological advancement whereby a computer or computerized actor (e.g., a robot) mimics human decision processes. Traditionally, humans program computers to perform specific computational or predictive functions, and then programmers update and improve these programs [2]. AI systems perform complex tasks in ways that mimic how humans solve problems. Machine learning (ML) is a form of AI in which the computer program optimizes its performance based on information gathered during previous tasks [3]. AI and ML are important digital transformation tools because of their ability to analyze much larger amounts of data and discover complex relationships that transcend traditional statistical assumptions and analyses. These tools have been increasingly applied in the private and public sectors [4]. Complex, multivariate algorithms have been in place for mortgage underwriting and pricing for more than two decades, and AI and ML are being used to enhance these models. AI and ML techniques have also been applied to marketing, customer relationship management, fraud detection, and loan servicing activities [4].
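To make the distinction concrete, the following minimal sketch (on synthetic data, with hypothetical features) shows how an ML credit risk model learns a default-risk function from prior loan outcomes rather than from explicitly programmed rules.

```python
# Minimal sketch of ML-based credit risk scoring on synthetic data; the
# features and outcome are hypothetical stand-ins for a lender's historical
# loan files, not any production underwriting model.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.normal(680, 60, n),    # credit score
    rng.uniform(0.1, 0.6, n),  # debt-to-income ratio
    rng.uniform(0.5, 1.0, n),  # loan-to-value ratio
])
# Synthetic default outcome: riskier profiles default more often.
p_default = 1 / (1 + np.exp(0.02 * (X[:, 0] - 680) - 4 * (X[:, 1] - 0.35)))
y = rng.random(n) < p_default

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
# The model "learns" the risk function from prior outcomes rather than from
# explicitly programmed rules.
model = GradientBoostingClassifier().fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```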
One supposed advantage of AI models is that they are not subject to human biases and errors; as such, they are viewed as possibly producing more accurate, consistent, and efficient decisions. Depending on how AI models are designed and developed, these enhanced capabilities could potentially expand access to credit for groups currently underserved by extant credit systems, particularly Black, Hispanic, and low-income consumers. However, it is unclear how well these models can adapt to the changes in the market or the extent to which they would magnify the effects of past discrimination [5].
Since these models rely on historical data, critics in the academic and policy communities have raised concerns about the potential for these models to perpetuate historical discrimination and inequality [6,7]. The complexity of AI and ML tools makes it difficult but not impossible for non-developers to scrutinize and monitor their inputs [8,9].
In subsequent sections, we provide an overview of key technological and analytical advancements in the mortgage market and their potential for expanding access to credit for lower-income and minority households. We begin by establishing the criteria for evaluating the extent to which these tools and processes serve these goals. Drawing on the SCALE framework developed by Perry and Schnare [10], we propose five factors summarizing the societal, ethical, legal, and practical issues that should be considered in the development and implementation of AI.

3. A Framework for Evaluating the Impact of AI on Access to Homeownership for Underserved Communities

The SCALE criteria provide a parsimonious framework for evaluating the fairness and equity of AI and other technologies. While concerns about the sources of bias are not new, much of the discussion in the industry focuses on the legal risks and the challenges of monitoring and oversight. These criteria are intended to summarize and simplify the key issues that responsible practitioners and policymakers should be prepared to address. In particular, this tool is forward-looking and incorporates factors that should shape the design and implementation, as well as the post hoc evaluation, of AI models in this sector.
  • Societal values. A digitalized tool or process should be considered from the perspective of similar decisions, the larger context, and historical factors, and should align with the prevailing legal and ethical paradigms [11]. Recent political and social priorities in the U.S. have focused on racial equity and social justice, and the Biden Administration has directed regulatory agencies to increase fair access to homeownership. According to Kroll [8], credit scoring companies must “consider the context and impacts of their credit system and in particular to consider what outcomes are desired, how they might be reached, and how the deployment of a new system or changes to an existing system will alter the world.” New tools could be used to implement fair machine learning (FML) by deploying statistical algorithms to identify and correct unjust or biased outcomes [12].
  • Contextual integrity. The appropriateness of a technological tool depends on whether it conforms to contextual norms [13,14]. Regardless of its accuracy, a particular tool must be appropriate for the mortgage lending or housing domain. Walzer [15] described “spheres of justice” to underscore the importance of context in evaluating the fairness of outcomes by arguing that someone who excels in one sphere (e.g., education) should not be granted advantages without merit in another sphere (e.g., mortgage loan access). Certain social media advertising tactics, while appropriate for less consequential product categories, may result in unfair informational asymmetries in the mortgage lending context.
  • Accuracy. It is also important to evaluate the extent to which a tool is reliable, error-free, and widely applicable across all major demographic and economic groups and macroeconomic conditions. One advantage of AI is rapid, systematic, and consistent data collection and modeling. However, inaccuracies can result when certain types of data are systematically omitted or biases are built into algorithms. For example, given the varying assumptions about comparable property selection and the historical racial disparities in property values, can property valuation models produce the “accurate” measurements necessary to predict risk? What types of errors are acceptable? Accuracy also refers to the absence of the following forms of bias [5]:
    • Representation bias occurs when the sample upon which a model is based differs significantly from the characteristics of the population to which the model will be applied. For example, evidence suggests that the effect of credit scores on the likelihood of mortgage default differs for members of historically disadvantaged minority groups [16].
    • Historical bias occurs when the data are accurate and correctly sampled but nevertheless capture disparities due to past racism and discrimination. For example, Black and Hispanic borrowers pay higher rates and fees on average and were more likely to have received subprime loans, to have faced foreclosure, or to have sustained significant equity losses during the 2008 global financial crisis.
    • Omitted variable bias occurs when a model fails to include a factor that has a significant effect on the outcome. For example, it is illegal to include race or ethnicity as a factor in underwriting models. However, certain variables and combinations of variables are effective proxies for race/ethnicity. The omission of race as an explanatory variable can obfuscate the interpretation of these proxy variables and significantly underestimate the effects of racial/ethnic differences.
    • Selection bias occurs when certain categories of people or transactions are systematically excluded from the data upon which a model is based. For example, households with low or missing credit scores would likely be underrepresented in the samples of prospective borrowers.
    • Aggregation bias occurs when the characteristics of certain categories of people or transactions are erroneously applied to individual cases. One example is proxy methods, which rely on geographic and/or surname-based information to estimate the probability that a household belongs to a particular race and ethnicity when this information is not reported [17].
    • Measurement bias occurs when a variable is systematically inaccurate, missing, or inconsistently measured. For example, a recent study found that credit scores for minority and low-income applicants were less predictive in mortgage default models due to the variations in underlying credit files [18].
  • Legality. It is also important to assess whether adopting an AI application will have a negative and disparate impact on protected classes. The disparate impact standard prohibits any practice, including the use of a statistical algorithm, that has a negative, disparate impact on a particular racial/ethnic group when implemented. If a disparate impact occurs, the lender must provide a legitimate business justification and be able to rule out any less discriminatory alternative. The data and algorithms used for AI credit scoring, mortgage underwriting, and property valuation may run afoul of this standard (a simple screen for such disparities is sketched after this list).
  • Expanded opportunity. An AI solution should also significantly increase access to credit in addition to cost, efficiency, or risk assessment benefits. Whereas digitalization and AI have facilitated access to credit scores for previously unscorable or “credit invisible” households, it is unclear whether they increase financing opportunities for a larger group of consumers with poor credit histories [19].
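As a concrete illustration of the Legality criterion, the following sketch computes approval rates by group and an adverse impact ratio, one simple screen used in disparate impact analysis. The data, column names, and the four-fifths threshold are illustrative assumptions; actual fair lending reviews involve far more rigorous statistical and legal analysis.

```python
# Hedged sketch of a disparate impact screen: compare approval rates across
# groups via an adverse impact ratio (AIR). Data are hypothetical; the 0.8
# threshold echoes the EEOC "four-fifths" rule of thumb, not a definitive
# fair lending standard.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0,   1],
})

rates = decisions.groupby("group")["approved"].mean()
print(rates)

# Ratio of the lowest group approval rate to the highest.
air = rates.min() / rates.max()
print(f"Adverse impact ratio: {air:.2f}")
if air < 0.8:
    print("Potential disparate impact: a business justification and a search")
    print("for less discriminatory alternatives would be warranted.")
```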
It is important to note that while multiple criteria are likely to affect a particular AI application, these criteria are not mutually exclusive and are unweighted in terms of importance. Presumably, the relative importance of these factors would depend on the outcome priorities of the organization or stakeholder.
Table 1 provides examples of how the SCALE criteria apply to AI processes in mortgage lending and how these may affect access to mortgage credit to support minority homeownership.
SCALE is not the only framework designed to evaluate the impact of AI models. For example, the AI Act, which is currently under legislative review in the European Union, designates applications with significant socioeconomic implications as high-risk, thereby requiring these AI systems to meet standards of data quality, accuracy, robustness, and non-discrimination. The Act also specifies standards for documentation, transparency, and human oversight [20]. While consistent with the EU approach, the SCALE criteria assume that AI models are far too complex, dynamic, and context-specific for substantive regulatory oversight. In addition, SCALE attempts to go beyond mere regulatory compliance with anti-discrimination laws by stipulating that these models align with societal values and expand opportunity.
The National Fair Housing Alliance (NFHA), a major civil rights and consumer advocacy organization, has put forth the Purpose, Process, and Monitoring (PPM) framework, which focuses on the stages of algorithmic model development and monitoring. The authors of this framework address the key considerations necessary for equity audits based on “fairness, accountability, transparency, explainability, and interpretability” [21]. The SCALE framework aligns with the PPM framework and could be applied throughout the PPM process; it has the additional advantage of encompassing aspirations intended to ensure fairness while proactively advancing equity.
In the following sections, we apply the SCALE factors to understand the effects of “big” data, artificial intelligence, and machine learning on minority households in the mortgage market. We conclude with a discussion of the implications for ethical and socially responsible AI and opportunities to alleviate the existing barriers to mortgage access.

4. SCALE Criteria and AI-Driven Marketing

AI is one of several technological advancements in the mortgage industry that have been leveraged to increase access to mortgage credit for underserved consumers at lower cost and with increased efficiency. However, recent research in this area has highlighted persistent disparities in financial market access despite the introduction of AI.
Evidence suggests that Black, Hispanic, and lower-income households have lower rates of online financial service usage. Based on the analyses of the 2019 Survey of Consumer Finances [22], Black and Hispanic households were less willing to engage in online banking transactions and were less reliant on online information for borrowing decisions relative to White and other households. Black and Hispanic households were also more likely to have been denied credit or feared being denied credit in the past 5 years. Those who had been turned down or feared being turned down were significantly less likely to access online financial services regardless of race or ethnicity; this relationship was significantly more acute for Black and Hispanic households. According to Fannie Mae [23], the use of digital mortgage services increased significantly during the pandemic, but less so among certain groups of homebuyers. Higher-income, Asian, and Black recent homebuyers indicated a slightly higher preference for online mortgage-related activities, while lower-income and Hispanic consumers showed a stronger preference for in-person or telephone interactions [24,25]. Thus, it appears that some of the same racial and ethnic indicators of a digital divide in financial services documented before the pandemic persist today.
The term “fintech” has been defined as “technology innovations used to support or enable banking or financial services” [26,27], such as smartphone applications, wi-fi, online and mobile banking, electronic payment transactions, direct deposits, transactions on peer-to-peer platforms, and access to blockchain and cryptocurrencies. Friedline and Chen [28] noted that the proliferation of fintech has coincided with a decline in banking activities at brick-and-mortar institutions, and that “these trends have the potential to replicate and reinforce redlining by amplifying the existing racialized geography of financial services and exacerbating consumers’ marginalization from the financial marketplace”. They found that fintech rates among high-poverty communities are generally low, and they are even lower in areas with larger shares of Black, Latinx, and American Indian/Alaskan Native populations. Controlling for high-speed internet access, smartphone ownership, and checking account ownership, fintech usage is higher in areas with larger Hispanic and Asian populations; this is not the case in high-poverty areas with higher proportions of Black residents. Fintech companies, as new entrants with non-traditional business models, are more likely to introduce AI systems because they have been subject to less regulatory scrutiny.
Haupert [29] found small yet significant racial disparities in loan approvals between similarly qualified White and non-White applicants and fewer disparities in approvals from fintech lenders versus traditional lenders. However, relative to similarly qualified White applicants, non-White applicants are more likely to receive subprime terms from both types of lenders, and the disparities in subprime lending between Black and White applicants are greater among fintech lenders than traditional lenders. The author thus recommended more careful regulation of fintech lending.
In terms of the SCALE framework, these fintech services have the potential to expand opportunities for minority homeownership, and many of these companies are innovators in AI. As it stands, however, they have had limited impact due to the significant racial and ethnic gaps in access and adoption.
Anecdotal evidence and recent legal activity suggest that AI used for targeted digital advertising may contribute to a less inclusive informational environment for members of traditionally underserved groups. AI marketing applications are used to harvest customer information to identify existing preferences and to recommend new product and service alternatives [30,31]. In doing so, tactics such as psychological targeting [30], customer prioritization based on income or profitability [32], and the targeting of vulnerable groups can result in discrimination based on gender, age, and race. These practices have also raised concerns about the algorithms designed to optimize user acceptance in the context of social media [33]. Evans and Miller [34] argued that digital marketing techniques based on AI and machine learning (AI/ML) can increase the incidence of bias and consumer exploitation due to a lack of transparency in how they identify potential customers. Another concern related to targeted digital advertising by mortgage lenders is that advertisements may steer consumers toward particular products [34]. Enabled by AI, the customization of product and service offerings can have the unintended consequence of limiting access to information and opportunity [35], particularly in a category of profound socio-economic significance, such as housing. There is also evidence that AI systems, because they are based on current and historical data, can reinforce prejudices, stereotypes, and historical inequities. These strategies could easily evade regulatory oversight.
These techniques also raise concerns about data privacy and contextual integrity [13,36,37]. Prior research has found that consumers can feel exploited by unauthorized uses of their data and when their data is used to categorize them in an inaccurate or biased manner [38].
Specifically, digital marketers purchase data from third-party vendors that track users and their browsing behaviors across websites. Lenders also rely on third-party lead generators, which provide lists of potential customers based on data collected from website users who have shown interest in a particular product or category, e.g., people searching for homes or real estate agents. Additionally, lenders’ digital marketing teams apply algorithms to data extracted from various sources to estimate “e-scores” that predict future usage behavior. Each of these techniques could exclude certain groups of borrowers from the market, particularly those who are currently underrepresented [34].
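To illustrate the mechanics, the following sketch shows how an “e-score” might be constructed to rank prospects for ad targeting; the features, weights, and cutoff are entirely hypothetical. The point is that browsing-derived inputs correlated with race or income can silently determine who sees mortgage ads at all.

```python
# Hedged sketch of an "e-score" used to prioritize ad audiences. Features,
# weights, and the cutoff are hypothetical; the takeaway is that inputs
# correlated with protected characteristics can quietly filter who is ever
# shown a mortgage offer.
import pandas as pd

prospects = pd.DataFrame({
    "visited_luxury_listings": [1, 0, 1, 0],
    "estimated_income_decile": [9, 3, 7, 2],
    "prior_bank_site_visits":  [5, 1, 4, 0],
})

weights = {"visited_luxury_listings": 0.5,
           "estimated_income_decile": 0.3,
           "prior_bank_site_visits":  0.2}

# Weighted sum of browsing-derived signals produces the e-score.
prospects["e_score"] = sum(prospects[c] * w for c, w in weights.items())

# Only prospects above the cutoff ever see the ad campaign.
audience = prospects[prospects["e_score"] > 2.0]
print(audience)
```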
Recent cases against Facebook for Fair Housing Act (FHA) violations focused on ads for housing, but they also apply to ads for mortgages [39]. Cases filed by the National Fair Housing Alliance, other civil rights groups, and HUD alleged that Facebook enabled housing advertisers to screen viewers based on protected characteristics, such as race, sex, and disability, and to exclude parents, foreign-born individuals, and those seeking accessible units. In response, Facebook created a separate advertising platform that allows users to view all housing ads. The company also agreed to require advertisers to certify compliance with fair housing laws [40]. This example has prompted mortgage lenders to assess the fair lending risk in their AI marketing strategies and to carefully examine the criteria used to exclude groups based on prohibited characteristics [41].
In another example, the DOJ and CFPB settled a suit against Trustmark Bank in 2021 for using a digital marketing strategy designed for businesses in majority-White neighborhoods to generate mortgage business from majority-Black and Hispanic neighborhoods in the Memphis area [42]. The legal implications of AI targeting practices by mortgage lenders under the FHA and the Equal Credit Opportunity Act [43] raise important questions about fair access to information about mortgage loans.
To be sure, AI marketing tools are essential for reaching consumers in today’s marketplace. At the same time, these practices may exacerbate information gaps and steering, reduce competition, and further entrench the “dual” mortgage market in which minority homebuyers pay more for mortgage credit [44].
Based on the SCALE framework criteria, targeted advertisements based on demographic categories or correlated attributes may not align with societal priorities aimed at increasing racial equity and inclusion, and it is unclear whether these practices contribute to expanded opportunity.

5. SCALE Criteria and AI Credit Scoring and Underwriting Algorithms

Policymakers and credit experts have touted the potential for the inclusion of alternative data sources to expand access to credit scores (which are necessary to access the mortgage market) for those who currently have sparse or missing credit files [45]. Current credit scoring models rely exclusively on the timeliness of past payments on consumer credit lines, i.e., credit cards, car loans, student loans, mortgages, and other consumer loans. Proponents believe that using alternative data such as rental payments, utility payments, and digital transactions in credit scoring models will expand opportunities to consumers who are currently “credit invisible” or unscorable [46,47]. One study estimated that the inclusion of telecommunications and utility payment data in traditional scoring models would increase acceptance rates by about 10 percent for the overall population, and by more than 20 percent for Black and Latinx individuals and consumers making less than $20,000 a year [48]. Another analysis showed that rent and utility payments had a positive impact on consumers’ access to credit, although the opposite was true for remittance payments [47,49].
However, critics suggest that utility, telecommunications, and rental (UTR) payment history data could inadvertently increase financial challenges for families who are struggling to recover from the pandemic downturn or from seasonal fluctuations in energy costs. In addition, there is evidence that Black, Hispanic, and low-income households pay more not only in energy costs as a share of their incomes but also per square foot of their residences [50], and that these households are also particularly susceptible to the negative effects of extreme weather events and global warming [51,52]. Another concern is that the impact of the COVID-19 pandemic on UTR reporting, particularly “full files” of all UTR payments, would disproportionately disadvantage lower-income consumers and minority communities [53]. A recent study found that 25 to 50 percent of consumers who experienced delinquencies did so on utility or telecom tradelines, but not on credit tradelines [47]. Thus, adding these data to consumers’ credit files could simply expand the population of consumers with lower credit scores.
The inclusion of rental payments to expand access to credit scores for consumers with absent or sparse credit files has garnered a great deal of recent attention. The Federal Housing Finance Agency recently approved the use of rental payments to bolster credit files used in government-sponsored enterprise (GSE) underwriting models. California, Colorado, and the District of Columbia have enacted laws requiring government-subsidized landlords to report rental payments to credit bureaus, which are developing reporting standards. However, the inclusion of rental payments poses significant potential challenges. There is wide variation in the timing, consistency, and quality of rental payment and eviction data. Rental payment data are more likely to be collected from large-scale property management companies, yet Black and Latinx renters reside in only 35 percent of the units in buildings with 50 or more units and 44 percent of all units in two-to-four-unit buildings [54].
Several fintech initiatives to provide digital cash-flow data have been implemented to overcome these challenges. FinRegLab [55] analyzed the data from several non-bank financial companies that have adopted cash-flow variables in AI-driven credit decisions instead of traditional indicators and found that cash-flow variables improve predictiveness when used in tandem with traditional credit history information, and, in some cases, can predict default risk with similar effectiveness. Model developers should take note that the data and models reflect value judgments that may include certain biases [13,56]. For example, in existing scoring models, mortgage payments are weighted more heavily than other forms of credit. Blattner and Nelson [16] found that credit scores are less predictive of the default for racial and ethnic minority and low-income mortgage loan applicants, and that these errors have a significant negative impact on mortgage approvals. The authors linked these disparities to the differences in the underlying credit files rather than the biases embedded in the model specification.
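The following sketch illustrates, on synthetic data, the kind of comparison FinRegLab describes: fit one model on traditional credit variables and another that adds cash-flow variables, then compare out-of-sample predictiveness. The variables and data-generating assumptions are hypothetical.

```python
# Hedged sketch of testing whether cash-flow variables add predictive power
# beyond traditional credit variables, loosely in the spirit of the FinRegLab
# comparison; data are synthetic and column choices are hypothetical.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 3000
df = pd.DataFrame({
    "credit_score":        rng.normal(660, 70, n),
    "utilization":         rng.uniform(0, 1, n),
    "avg_monthly_balance": rng.lognormal(7, 1, n),
    "overdraft_count":     rng.poisson(1.5, n),
})
# Synthetic outcome in which cash-flow behavior carries real signal.
logit = (-0.01 * (df["credit_score"] - 660) + 1.5 * df["utilization"]
         + 0.4 * df["overdraft_count"] - 0.0005 * df["avg_monthly_balance"])
df["defaulted"] = rng.random(n) < 1 / (1 + np.exp(-logit))

traditional = ["credit_score", "utilization"]
cash_flow = traditional + ["avg_monthly_balance", "overdraft_count"]

train, test = train_test_split(df, test_size=0.3, random_state=0)
for name, cols in [("traditional only", traditional), ("plus cash flow", cash_flow)]:
    model = LogisticRegression(max_iter=1000).fit(train[cols], train["defaulted"])
    auc = roc_auc_score(test["defaulted"], model.predict_proba(test[cols])[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```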
Other potential predictors of credit risk include a consumer’s GPS location, social media activity, health records, club memberships, educational history, academic performance, and digital footprint. Critics of these approaches have raised concerns that the alternative factors are proxies for demographic characteristics (e.g., race, ethnicity, gender, and family status) that bias credit decisions, and thus, are likely to exacerbate the effects of past marketplace discrimination [29,57]. Moreover, research suggests that the inclusion of non-financial personal data in lending decisions can pose several ethical and legal risks [10]. Models that rely on these data may do little more than automate historical discriminatory practices in mortgage markets, harkening back to the days when Federal Housing Administration guidelines explicitly advised underwriters to consider whether a borrower intends to reside “in a location inhabited by a class or race of people that may impair his interest in the property and thereby affect his motivation [to repay the loan]” [58].
Digitalization in the mortgage industry has introduced opportunities to expand the types of data used in underwriting models, thereby expanding opportunities for homeownership to historically underserved households. However, the use of UTR payment data, cash-flow (i.e., aggregated banking) data, and non-financial personal data in underwriting raises important ethical and legal questions for those who develop and apply AI in credit scoring. While potentially predictive of repayment and default, these data raise questions based on the SCALE criteria of contextual integrity, accuracy, and perhaps even legality. Although there is overwhelming evidence that these new data sources will expand access to credit scores, it remains to be seen whether this will simply produce a larger pool of consumers with high-risk credit profiles who are more likely to be denied mortgage credit or targeted by subprime lenders [19].

6. SCALE Criteria and Automated Property Valuation Models

Digitalization of the appraisal process has improved the efficiency of loan origination, and proponents argue that the accuracy of risk assessment has improved as well. Recent innovations include digitalized appraisal inspections, whereby appraisers collect certain property data elements, in some cases without in-person inspections. This information is then submitted to AI-based automated valuation models (AVMs) that replace traditional, more subjective procedures.
Meanwhile, appraisal bias has emerged as one of the most controversial issues in the mortgage industry, and several studies have documented systematic biases in traditional appraisals that result in lower values for Black and Hispanic homebuyers and neighborhoods [59]. One widely cited study, for example, revealed that homes owned by Black and Hispanic individuals are more likely to be appraised at a lower value than the sales price [60]. In another recent study, researchers compared traditional appraisals with those conducted via AVMs and found that homes owned by White borrowers are more likely to have an appraised value that is at least 10 percent higher than the AVM’s estimated value compared to homes owned by Black borrowers; these overvaluations are also more likely to occur when White borrowers live in majority-Black neighborhoods [61]. Additional evidence suggests that AVM models are less likely to produce biased results, and as such, can be used to advance more equitable outcomes in appraisals for minority homebuyers and homeowners [62].
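A simplified version of the appraisal-versus-AVM comparison is sketched below, with hypothetical data: flag appraisals that exceed the AVM estimate by at least 10 percent, then compare flag rates across borrower groups.

```python
# Hedged sketch of the appraisal-vs-AVM comparison described above: flag
# appraisals at least 10% above the AVM estimate and compare flag rates by
# borrower group. Values and column names are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "group":     ["White", "White", "Black", "Black"],
    "appraisal": [330_000, 250_000, 300_000, 210_000],
    "avm_value": [290_000, 245_000, 295_000, 205_000],
})

# An appraisal counts as an overvaluation if it is >= 110% of the AVM value.
df["overvalued"] = df["appraisal"] >= 1.10 * df["avm_value"]
print(df.groupby("group")["overvalued"].mean())
```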
Concerns that plague credit scoring algorithms also apply in the case of AVMs—namely, the potential for these models to capture and amplify latent discrimination and redlining. Homes owned by Black and Hispanic families as well as homes located in minority neighborhoods have historically and consistently had lower values and rates of house price appreciation than homes owned by similarly situated White counterparts [63]. AI models could be developed to remove the barriers to equitable outcomes and offset the effects of bias and discrimination in AVMs by assimilating a wider range of data.
In another recent analysis, researchers argued that due to the complexity and dynamic nature of AI models, it would be difficult to identify the specific cause of disparities affecting underrepresented groups or to perform standard fair lending analyses [64]. The authors suggested that existing legal, policy, and regulatory frameworks lag woefully behind in understanding these technologies or how best to oversee their application [64]. To increase transparency, some modelers develop “inherently interpretable” models, while others combine complex models with post hoc explainability methods, i.e., supplemental information. Kluttz et al. [65] argued that in addition to transparency and explainability, AI models should be subjected to a higher standard of “contestability”—that is, the extent to which sufficient information is available to meaningfully challenge the model’s outcomes. In contexts involving AI/ML applications, contestability would be analogous to consumer protection laws that require, for example, disclosure of the reasons for a mortgage loan denial to the applicant.
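As one example of a post hoc explainability method (a sketch on synthetic data, not the approach of any particular modeler), permutation importance estimates how much each input drives a fitted model’s predictions, the kind of supplemental information that transparency, and ultimately contestability, would build upon.

```python
# Hedged sketch of a post hoc explainability method: permutation importance
# measures how much shuffling each feature degrades a fitted model's
# performance. Data are synthetic; real contestability would require far
# richer, applicant-level explanations (e.g., the reasons for a denial).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))  # stand-ins for, e.g., sqft, lot size, comp prices
y = 3 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=500)

model = RandomForestRegressor(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, imp in zip(["sqft", "lot_size", "comp_price"], result.importances_mean):
    print(f"{name}: importance = {imp:.2f}")
```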
Despite the SCALE framework’s concerns about accuracy, potential bias, and legality, AI/ML applications have significant potential to expand homeownership opportunities. If calibrated to do so, these models could be deployed to identify sources of bias and discrimination, as well as non-discriminatory alternatives [4]. Davis et al. [12] recently proposed an “algorithmic reparation” approach whereby AI techniques are explicitly designed to minimize or eliminate the effects of historical disadvantages (e.g., structural racism), rather than to attempt to remove bias from existing algorithms.

7. SCALE Criteria Applied to AI Fraud Detection Models

An advantage of the SCALE framework is that it can be used to evaluate cases where there might or might not be bias. An example is the use of AI models for mortgage fraud detection. Fraud in the form of misrepresentation of borrower identity, reported income or assets, and deed or property-related information poses significant risks to mortgage lenders and housing market intermediaries. Companies increasingly rely on AI and ML techniques to predict the likelihood of misrepresentation based on analyzing patterns in previous fraudulent cases, third-party validation of employment data, account activity, and consumer behavior data from website analytics, digital marketing, and social media activity. High-risk cases, once identified, are referred for further investigation [66].
The SCALE framework could be used to evaluate the impact of AI fraud detection models on racial equity. Examining image patterns and comparing applicant data with verified fraudulent cases aligns with the Societal Values criterion of integrity, and since legitimate applicants grant their approval for data such as credit score, employment, and banking information to be accessed from third-party sources, societal norms for data privacy are also met. Social media activity and other personal behavioral patterns may pose questions about Contextual Integrity, but otherwise, the factors used to predict fraudulent cases are likely appropriate because they mirror those used in underwriting decisions. The primary goal of fraud detection is to increase Accuracy, which depends on the ability of these models to reduce the rate of prediction errors and the likelihood and severity of fraud incidents. A key question about Accuracy is how reliable the data used for validation and/or prediction are, and to what extent these data are consistently available across demographic groups. The Legality question pertains to whether AI fraud detection models have a negative, disparate impact on under-represented groups, i.e., if certain applicant profiles are correlated with race or ethnicity, will members of protected classes of prospective borrowers be subjected to more scrutiny than others? This is especially relevant in the case of fraud detection, which is largely unregulated relative to the activities fundamental to underwriting decisions. Lastly, the Expanded Opportunity factor implies that AI fraud detection models should ultimately either enable previously underserved mortgage borrowers to enter the market or lower the costs of mortgage credit for members of these groups.
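A minimal sketch of such a system, on synthetic data, appears below: an anomaly detector flags high-risk applications for manual review, after which referral rates can be compared across groups, addressing the Legality question raised above. The features and model are illustrative stand-ins for production fraud systems.

```python
# Hedged sketch of AI fraud screening plus an equity check: an anomaly
# detector flags applications for manual review, and referral rates are then
# compared across demographic groups. Data are synthetic and the model is a
# stand-in for production fraud systems.
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
apps = pd.DataFrame({
    "stated_income":    rng.lognormal(11, 0.4, 1000),
    "account_age_days": rng.integers(30, 5000, 1000),
    "ip_distance_km":   rng.exponential(50, 1000),
    "group":            rng.choice(["A", "B"], 1000),
})

# fit_predict returns -1 for anomalies; those are referred for investigation.
detector = IsolationForest(contamination=0.05, random_state=0)
apps["flagged"] = detector.fit_predict(
    apps[["stated_income", "account_age_days", "ip_distance_km"]]) == -1

# Equity check: are members of one group referred for scrutiny more often?
print(apps.groupby("group")["flagged"].mean())
```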

8. Discussion and Conclusions

The emerging use of AI in the origination and servicing of mortgages promises to transform the housing industry and potentially open doors to segments of the population that have previously been underserved. At a minimum, enhanced technologies and the harvesting of non-traditional data should make the process more efficient, consumer-friendly, and less costly. However, unless properly designed, AI could also serve to solidify the historic inequities that currently characterize the housing market and conflict with the stated public policy goal of increasing the homeownership rates of Black, Brown, and lower-income families.
Perhaps the most fundamental issue relates to the types of data that can or should be used in the creation of the underlying models; for example, the assessment of credit risk. As noted earlier, the fact that a given variable is predictive of future loan performance—for example, an individual’s educational background or expenditure patterns—is not enough to justify its use. The SCALE typology presented above offers a framework that the industry, its regulators, and Congress could use to assess the pros and cons of deploying various kinds of data in the origination or servicing of mortgages. In the end, such analysis could serve as a basis for drafting new regulations, expanding fair lending laws, and/or enacting more general privacy legislation that explicitly prohibits the use of certain types of data—for example, an individual’s social media profile—for certain purposes, including but not necessarily limited to the granting of mortgage credit.
At a minimum, AI models have raised concerns in light of socio-political priorities to advance racial equity. The use of such tools in other applications has been shown to embed or exacerbate some of the biases that plague human decision-makers. For example, Microsoft’s AI chatbot “learned” to respond using racist language gathered from social media users [67], and reports claim that a Twitter algorithm automatically edited out images of Black faces [68]. Racial bias has also been found in popular facial recognition programs and in tenant screening algorithms adopted by landlords [69,70]. According to a recent Brookings paper, the use of AI in the mortgage context can embed “biased feedback loops” whereby consumers who previously encountered barriers to traditional forms of credit and obtained financing via higher-risk, more expensive subprime loans have lower credit scores, thereby carrying these circumstances forward into models for future credit decisions and pricing [59]. Based on the SCALE framework, these approaches also raise concerns about accuracy due to the potential for bias in the representation and selection of the samples upon which models are based, in addition to omitted variables and historical factors that could also contribute to systematic errors.
Mortgage lenders, policymakers, and other industry stakeholders should also consider the elements of the SCALE framework when designing, adapting, evaluating, and monitoring digitalized tools. These perspectives could help inform recently proposed legislation, such as the Algorithmic Accountability Act (H.R. 6580), proposed by Senators Wyden and Booker and Representative Yvette Clarke, intended to expand FTC enforcement of AI in housing, financial services, and other industries [71] (https://www.wyden.senate.gov/news/press-releases/wyden-booker-and-clarke-introduce-bill-to-regulate-use-of-artificial-intelligence-to-make-critical-decisions-like-housing-employment-and-education, accessed on 26 September 2023). Trade publications and the blogosphere are replete with examples of digitalized solutions claiming to increase efficiency in marketing, operations, risk assessment, regulatory compliance, and servicing. However, there are far fewer frameworks for proactive, responsible digital transformation that could provide solutions to the systemic barriers to mortgage credit in the current market structures.
The proposed Algorithmic Accountability Act would direct the FTC to require impact assessments of AI systems and “augmented critical decision processes”. Although this proposal acknowledges the issues and would increase resources for the FTC and other agencies to evaluate AI models, it is unclear how audits would account for the iterative, dynamic, and rapidly changing nature of model development. How often should models be evaluated, and at what stage of development or implementation? Given the concerns about transparency and interpretability described above, as well as the pervasiveness of AI modeling in these industries, it is hard to imagine that the government would ever have the resources to meaningfully evaluate these practices. In the case of AI, industry self-regulation might be a more viable, less costly alternative to traditional regulatory oversight. In partnership with FinRegLab, Blattner and Spiess compared ML credit decision models in terms of explainability and fairness and found that these models vary in the extent to which they can identify characteristics that have a negative and disparate impact on protected classes. These authors proposed an approach for “evaluating the quality and usability of information produced about machine learning models’ behavior”, which could be adopted by lenders and regulators seeking transparency in the context of fair lending [64].
Another policy recommendation is to revisit HUD’s 2020 “disparate impact rule” that requires “a robust causal link between the challenged policy or practice and the adverse effect on members of a protected class”. The “robust causal link” standard has been difficult to prove or enforce and harkens back to a time when manual underwriting decisions based on a few discrete factors were the norm. Due to the substantial number of factors and combinations thereof in AI/ML models, causal links, including those that unduly harm disadvantaged groups, are difficult to uncover. New language and interpretation of this standard would foster more effective enforcement of the disparate impact legal standard.
As described above, digitalization and AI in the mortgage market can help advance the social and political goals of eradicating racism and discrimination, as captured in Davis et al.’s [12] notion of algorithmic reparation. Digitalization strategies could improve accuracy and remove rather than introduce bias; however, such strategies require thoughtful design, development, and implementation. Theoretical and empirical research on the effects of AI and ML, for example, suggests that if designed to do so at the outset, these tools have the potential to identify and eradicate the effects of systemic discrimination while simultaneously increasing predictive accuracy and efficiency in the mortgage value chain. It is important, however, to ensure that tools and approaches adapted from other contexts are appropriate for mortgage lending. The developers of these tools should address potential legal and regulatory issues, such as the potential for discrimination in the form of disparate impact. Lastly, in addition to increased efficiency, lower transaction costs, and/or improved predictiveness, digitalization strategies should be designed to expand opportunities by reducing the barriers associated with the manual, more subjective, and biased processes used by many traditional brick-and-mortar institutions.
The Black–White homeownership gap persists due to economic and social disadvantages that have accumulated over generations. The effects of “color-blind” regulations are a subject of heated debate among scholars and policymakers, and some argue that to account for racial effects, race must be explicitly included in models that predict outcomes such as loan defaults. As captured by Samuel Myers’s [72] “Minnesota paradox”, a reliance on race-neutral metrics of homeownership and other economic outcomes can obfuscate the segregation, poverty, and other conditions that exist for Black communities. Ifeoma Ajunwa [73] more broadly described the “paradox of automation”, whereby automated decision-making is positioned as an anti-bias intervention yet “has served to replicate and amplify bias”. Recent research on ethics in AI and ML suggests that models need to include race at the design stage, rather than simply as a test for bias on the back end. Existing, well-intentioned public policies prohibit the inclusion of race as a factor in credit or valuation models. This paradigm fails to acknowledge that race is an endogenous and recursive measure of systematic and institutional discrimination. To address the societal goals of advancing equity and expanding homeownership opportunities, these same models could be used to measure and potentially offset the effects of race in estimates of credit costs and risks.
Several important unanswered questions remain. For example, what are the appropriate goals for adopting AI/ML approaches, particularly those used to inform lending decisions? Is replicating human decisions a sufficient goal, or should the outcomes of these models (i.e., the ability to assess and price risk) be superior, avoiding the bias and other errors often associated with human decisions? We assume firms should use established criteria to assess whether digitalization projects, and any innovative programs or projects in mortgage lending, expand opportunities for homeownership in underserved communities. More work should be conducted to translate the established criteria of success so they can be applied to the outcomes of these tools. As Thomas and Uminsky [74] noted, defining the outcomes or metrics of success (accuracy, effectiveness, etc.) narrowly, and without regard to the context of the decision, exacerbates underlying problems. Metrics should be broad, multi-faceted, and informed by an understanding of the stakeholders most impacted by the program: in this case, mortgage applicants from historically disadvantaged groups.
Another key issue is how AI/ML can be used to verify fair and equitable treatment of individuals for each of these types of decisions, as prescribed by Davis et al.’s [12] notion of algorithmic reparation. Rather than simply making the decisions, these tools could be used to support or validate decisions being made by humans and/or AI. This suggests an additional, responsible use of AI in the mortgage market: using novel data analytic techniques to monitor, assess, and verify the fair and equitable treatment of mortgage applicants.

Author Contributions

Conceptualization, V.G.P., K.M. and A.S.; investigation, V.G.P., K.M. and A.S.; writing—original draft preparation, V.G.P., K.M. and A.S.; writing—review and editing, V.G.P., K.M. and A.S.; project administration, V.G.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created for this research.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Gartner (n.d.). Digitalization. Available online: https://www.gartner.com/en/information-technology/glossary/digitalization (accessed on 7 February 2023).
  2. IBM Cloud Education (n.d.). What Is Artificial Intelligence (AI)? Available online: https://www.ibm.com/cloud/learn/what-is-artificial-intelligence?lnk=fle (accessed on 7 February 2023).
  3. Brown, S. Machine Learning, Explained. 2021. Available online: https://mitsloan.mit.edu/ideas-made-to-matter/machine-learning-explained#:~:text=What%20is%20machine%20learning%3F,to%20how%20humans%20solve%20problems (accessed on 7 February 2023).
  4. Akinwumi, M.; Merrill, J.; Rice, L.; Saleh, K.; Yap, M. An AI Fair Lending Policy Agenda for the Federal Financial Regulators. Policy Brief. 2021. Brookings Center on Regulation and Markets, Brookings Institution. Available online: https://www-brookings-edu.cdn.ampproject.org/c/s/www.brookings.edu/research/an-ai-fair-lending-policy-agenda-for-the-federal-financial-regulators/?amp (accessed on 7 February 2023).
  5. FinRegLab. The Use of Machine Learning for Credit Underwriting. The Milken Institute. 2021. Available online: https://finreglab.org/wp-content/uploads/2021/09/the-Use-of-ML-for-Credit-Underwriting-Market-and-Data-Science-Context_09-16-2021.pdf (accessed on 7 February 2023).
  6. Arnold, D.; Dobbie, W.; Hull, P. Measuring Racial Discrimination in Algorithms. AEA Pap. Proc. 2021, 111, 49–54. [Google Scholar] [CrossRef]
  7. Martin, K. Designing Ethical Algorithms. MISQ Exec. 2019, 18, 129–142. [Google Scholar] [CrossRef]
  8. Kroll, J.A. The Fallacy of Inscrutability. Philos. Trans. R. Soc. A 2018, 376, 20180084. [Google Scholar] [CrossRef]
  9. Johnson, K.N.; Pasquale, F.A.; Chapman, J.E. Artificial Intelligence, Machine Learning, and Bias in Finance: Toward Responsible Innovation. Fordham Law Rev. 2019, 88, 499. [Google Scholar]
  10. Perry, V.G.; Schnare, A.B. Tipping the SCALE: Will Alternative Data in Credit Scoring Promote Or Impede Fair Lending Goals? Presentation at the National Association of Realtors Public Policy Forum. 2021. Available online: https://www.nar.realtor/events/public-policy-forum/tipping-the-scale-will-alternative-data-in-credit-scoring-promote-or-impede-fair-lending-goals (accessed on 7 February 2023).
  11. Martin, K. Ethical Implications and Accountability of Algorithms. J. Bus. Ethics 2019, 160, 835–850. [Google Scholar] [CrossRef]
  12. Davis, J.L.; Williams, A.; Yang, M.W. Algorithmic Reparation. Big Data Soc. 2021, 8. [Google Scholar] [CrossRef]
  13. Martin, K.; Nissenbaum, H. What Is It About Location? Berkeley Technol. Law J. 2020, 35, 251–326. [Google Scholar] [CrossRef]
  14. Nissenbaum, H. Privacy in Context: Technology, Policy, and the Integrity of Social Life; Stanford Law Books; Stanford University Press: Stanford, CA, USA, 2009. [Google Scholar]
  15. Walzer, M. Spheres of Justice: A Defense of Pluralism; Basic Books: New York, NY, USA, 1984. [Google Scholar]
  16. Blattner, L.; Nelson, S. How Costly Is Noise? Data and Disparities in Consumer Credit [Working Paper]. 2021. Available online: https://arxiv.org/abs/2105.07554v1 (accessed on 7 February 2023).
  17. Consumer Financial Protection Bureau (CFPB). Using Publicly Available Information to Proxy for Unidentified Race and Ethnicity. Consumer Financial Protection Bureau; 2014. Available online: https://files.consumerfinance.gov/f/201409_cfpb_report_proxy-methodology.pdf (accessed on 7 February 2023).
  18. Heaven, W.D. Bias Isn’t the Only Problem With Credit Scores—And No, AI Can’t Help. MIT Technology Review, 17 June 2021. Available online: https://www.technologyreview.com/2021/06/17/1026519/racial-bias-noisy-data-credit-scores-mortgage-loans-fairness-machine-learning/ (accessed on 7 February 2023).
  19. Schnare, A.B. Alternative Credit Scores and the Mortgage Market: Opportunities and Limitations. Progressive Policy Institute. 2017. Available online: http://www.progressivepolicy.org/wp-content/uploads/2017/12/UpdatedCreditScoring_2017.pdf (accessed on 7 February 2023).
  20. Engler, A. The EU and US Diverge on AI Regulation: A Transatlantic Comparison and Steps to Alignment. Brookings Institution. 2023. Available online: https://www.brookings.edu/articles/the-eu-and-us-diverge-on-ai-regulation-a-transatlantic-comparison-and-steps-to-alignment/ (accessed on 26 August 2023).
  21. Akinwumi, M.; Rice, L.; Sharma, S. Purpose, Process, and Monitoring: A New Framework for Auditing Algorithmic Bias in Housing and Lending. National Fair Housing Alliance. 2022. Available online: https://nationalfairhousing.org/wp-content/uploads/2022/02/PPM_Framework_02_17_2022.pdf (accessed on 26 August 2023).
  22. Federal Reserve. The 2019 Survey of Consumer Finances. 2020. Available online: https://www.federalreserve.gov/econres/scfindex.htm (accessed on 28 August 2023).
  23. Fannie Mae. COVID-19, Mortgage Digitization, and Borrower Satisfaction. 2021. Available online: https://www.fanniemae.com/media/40491/display (accessed on 7 February 2023).
  24. Atske, S.; Perrin, A. Home Broadband Adoption, Computer Ownership Vary by Race, Ethnicity in the U.S. Pew Research Center. 2021. Available online: https://www.pewresearch.org/fact-tank/2021/07/16/home-broadband-adoption-computer-ownership-vary-by-race-ethnicity-in-the-u-s/ (accessed on 7 February 2023).
  25. Vogels, E.A. Digital Divide Persists Even as Americans with Lower Incomes Make Gains in Tech Adoption. Pew Research Center. 2021. Available online: https://www.pewresearch.org/fact-tank/2021/06/22/digital-divide-persists-even-as-americans-with-lower-incomes-make-gains-in-tech-adoption/ (accessed on 7 February 2023).
  26. Ehrentraud, J.; Garcia Ocampo, D.; Quevedo Vega, C. Regulating Fintech Financing: Digital Banks and Fintech Platforms. FSI Insights on Policy Implementation, 2020, No. 27, Financial Stability Institute, BIS. Available online: https://www.bis.org/fsi/publ/insights27.pdf (accessed on 7 February 2023).
  27. nLIFT. Fulfilling the Promise of Fintech: The Case for A Nonprofit Vision and Leadership. The Aspen Institute. 2018. Available online: https://www.aspeninstitute.org/wp-content/uploads/2018/09/nLIFT-Manifesto-FINAL-1.pdf?_ga=2.176913637.1579357171.1541431738-807926508.1541431738 (accessed on 7 February 2023).
  28. Friedline, T.; Chen, Z. Digital Redlining and the Fintech Marketplace: Evidence From US Zip Codes. J. Consum. Aff. 2021, 55, 366–388. [Google Scholar] [CrossRef]
  29. Haupert, T. The Racial Landscape of Fintech Mortgage Lending. Hous. Policy Debate 2022, 32, 337–368. [Google Scholar] [CrossRef]
  30. Matz, S.C.; Menges, J.I.; Stillwell, D.J.; Schwartz, H.A. Predicting Individual-Level Income from Facebook Profiles. PLoS ONE 2019, 14, e0214369. [Google Scholar] [CrossRef]
  31. Matz, S.C.; Netzer, O. Using Big Data as a Window into Consumers’ Psychology. Curr. Opin. Behav. Sci. 2017, 18, 7–12. [Google Scholar] [CrossRef]
  32. Libai, B.; Bart, Y.; Gensler, S.; Hofacker, C.; Kaplan, A.; Kötterheinrich, K.; Kroll, E.B. Brave New World? On AI and the Management of Customer Relationships. J. Interact. Mark. 2020, 51, 44–56. [Google Scholar] [CrossRef]
  33. Ali, M.; Sapiezynski, P.; Bogen, M.; Korolova, A.; Mislove, A.; Rieke, A. Discrimination Through Optimization: How Facebook’s Ad Delivery Can Lead To Biased Outcomes. Proc. ACM Hum.-Comput. Interact. 2019, 3, 199. [Google Scholar] [CrossRef]
  34. Evans, C.; Miller, W. From Catalogs to Clicks: The Fair Lending Implications of Targeted Internet Marketing. Consum. Compliance Outlook 2019, 3. Available online: https://consumercomplianceoutlook.org/2019/third-issue/from-catalogs-to-clicks-the-fair-lending-implications-of-targeted-internet-marketing/ (accessed on 7 February 2023).
  35. Hermann, E. Leveraging Artificial Intelligence in Marketing for Social Good—An Ethical Perspective. J. Bus. Ethics 2022, 179, 43–46. [Google Scholar] [CrossRef]
  36. Martin, K.D.; Murphy, P.E. The Role of Data Privacy in Marketing. J. Acad. Mark. Sci. 2017, 45, 135–155. [Google Scholar] [CrossRef]
  37. Martin, K.D.; Palmatier, R.W. Data Privacy in Retail: Navigating Tensions and Directing Future Research. J. Retail. 2020, 96, 449–457. [Google Scholar] [CrossRef]
  38. Stefano, P.; Reczek, R.M.; Giesler, M.; Botti, S. Consumer Experiences With Marketing Technology: Solving the Tensions Between Benefits and Costs. NIM Mark. Intell. Rev. 2022, 14, 25–29. [Google Scholar]
  39. National Fair Housing Alliance. Fair Housing Groups Settle Lawsuit with Facebook: Transforms Facebook’s Ad Platform Impacting Millions of Users. 2019. Available online: https://nationalfairhousing.org/national-fair-housing-alliance-settles-lawsuit-with-facebook-transforms-facebooks-ad-platform-impacting-millions-of-users/ (accessed on 7 February 2023).
40. Jan, T.; Dwoskin, E. Facebook Agrees to Overhaul Targeted Advertising System for Job, Housing and Loan Ads After Discrimination Complaints. The Washington Post, 19 March 2019. Available online: https://www.washingtonpost.com/business/economy/facebook-agrees-to-dismantle-targeted-advertising-system-for-job-housing-and-loan-ads-after-discrimination-complaints/2019/03/19/7dc9b5fa-4983-11e9-b79a-961983b7e0cd_story.html (accessed on 7 February 2023).
  41. Brown, A.K. Fair Lending—Digital Marketing and HMDA 2018. Compliance Session at the Marquis User’s Conference, Skadden Foundation. 2019. Available online: https://gomarquis.com/wp-content/uploads/2019/10/Austin-Brown-Fair-Lending-Digital-Marketing-and-HMDA-2018.pdf (accessed on 7 February 2023).
  42. Ballard Spahr. DOJ/CFPB/OCC Settle Redlining Lawsuit Against Mississippi-Based National Bank. Consumer Finance Monitor. 2021. Available online: https://www.jdsupra.com/legalnews/doj-announces-major-new-initiative-2017442/ (accessed on 7 February 2023).
  43. Humber, N.J.; Matthews, J. Fair Housing Enforcement in the Age of Digital Advertising: A Closer Look at Facebook’s Marketing; Working Paper; Roger Williams University: Bristol, RI, USA, 2020; Available online: https://docs.rwu.edu/cgi/viewcontent.cgi?article=1308&context=law_fac_fs (accessed on 7 February 2023).
  44. Aronowitz, M.; Golding, E.; Choi, J. The Unequal Costs of Homeownership. MIT Golub Center for Finance and Policy. 2020. Available online: https://gcfp.mit.edu/wp-content/uploads/2020/10/Mortgage-Cost-for-Black-Homeowners-10.1.pdf (accessed on 7 February 2023).
  45. Ramirez, E.; Brill, J.; Ohlhausen, M.K.; McSweeny, T. Big Data: A Tool for Inclusion or Exclusion? Understanding the Issues, FTC Report, Federal Trade Commission; 2016. Available online: https://www.ftc.gov/system/files/documents/reports/big-data-tool-inclusion-or-exclusion-understanding-issues/160106big-data-rpt.pdf (accessed on 7 February 2023).
  46. Kreiswirth, B.; Schoenrock, P.; Singh, P. Using Alternative Data to Evaluate Creditworthiness. Consumer Financial Protection Bureau; 2017. Available online: https://www.consumerfinance.gov/about-us/blog/using-alternative-data-evaluate-creditworthiness/ (accessed on 7 February 2023).
47. Cochran, K.T.; Stegman, M.; Foos, C. Utility, Telecommunications, and Rental Data in Underwriting Credit. Urban Institute Research Report. 2021. Available online: https://www.urban.org/research/publication/utility-telecommunications-and-rental-data-underwriting-credit/view/full_report (accessed on 7 February 2023).
48. Lee, A.S.; Schnare, A.; Turner, M.A.; Walker, P.D.; Varghese, R. Give Credit Where Credit Is Due: Increasing Access to Affordable Mainstream Credit Using Alternative Data. Urban Markets Initiative Report. Policy and Economic Research Council and the Brookings Institution. 2006. Available online: https://www.brookings.edu/research/give-credit-where-credit-is-due-increasing-access-to-affordable-mainstream-credit-using-alternative-data/ (accessed on 7 February 2023).
  49. CFPB. Report on the Use of Remittance Histories in Credit Scoring. Consumer Financial Protection Bureau; 2014. Available online: https://www.consumerfinance.gov/data-research/research-reports/report-on-the-use-of-remittance-histories-in-credit-scoring/ (accessed on 7 February 2023).
  50. Drehobl, A.; Ross, L. Lifting the High Energy Burden in America’s Largest Cities: How Energy Efficiency Can Improve Low Income and Underserved Communities. ACEEE. 2016. Available online: https://www.aceee.org/sites/default/files/publications/researchreports/u1602.pdf (accessed on 7 February 2023).
51. Byrne, J.; Portanger, C. Climate Change, Energy Policy, and Justice: A Systematic Review. Anal. Krit. 2014, 36, 315–343.
52. Carley, S.; Konisky, D.M. The Justice and Equity Implications of the Clean Energy Transition. Nat. Energy 2020, 5, 569–577.
53. National Consumer Law Center. The Credit Score Pandemic Paradox. 2022. Available online: https://www.nclc.org/images/pdf/special_projects/covid-19/IB_Pandemic_Paradox_Credit_invisibility.pdf (accessed on 7 February 2023).
  54. Choi, J.H.; Young, C. Owners and Renters of 6.2 Million Units in Small Buildings Are Particularly Vulnerable During the Pandemic. 2020. Available online: https://www.urban.org/urban-wire/owners-and-renters-62-million-units-small-buildings-are-particularly-vulnerable-during-pandemic (accessed on 7 February 2023).
  55. FinRegLab. The Use of Cash-Flow Data in Underwriting Credit: Empirical Research Findings. The Milken Institute. 2019. Available online: https://finreglab.org/wp-content/uploads/2019/07/FRL_Research-Report_Final.pdf (accessed on 7 February 2023).
56. Friedman, B.; Nissenbaum, H. Bias in Computer Systems. ACM Trans. Inf. Syst. 1996, 14, 330–347.
57. Odinet, C.K. Predatory Fintech and the Politics of Banking. Iowa Law Rev. 2021, 106, 1739–1800.
  58. Federal Housing Administration. Underwriting Manual: Underwriting and Valuation Procedure Under Title II of the National Housing Act; Federal Housing Administration: Washington, DC, USA, 1938. Available online: https://www.huduser.gov/portal/sites/default/files/pdf/Federal-Housing-Administration-Underwriting-Manual.pdf (accessed on 7 February 2023).
59. Rothwell, J.; Perry, A.M. Biased Appraisals and the Devaluation of Housing in Black Neighborhoods. Brookings Institution. 2021. Available online: https://www.brookings.edu/research/biased-appraisals-and-the-devaluation-of-housing-in-black-neighborhoods/ (accessed on 7 February 2023).
  60. Folk, J.; Chen, K. Avoiding Overvaluation Risk and Appraisal Bias in Today’s Uniquely Challenging Market Session. Clear Capital. 2021. Available online: https://www.clearcapital.com/avoiding-overvaluation-risk-and-appraisal-bias-in-todays-uniquely-challenging-market-session (accessed on 7 February 2023).
  61. Williamson, J.; Palim, M. Appraising the Appraisal: A Closer Look at Divergent Appraisal Values for Black and White Borrowers Financing Their Home. Fannie Mae. 2022. Available online: https://www.fanniemae.com/media/42541/display (accessed on 7 February 2023).
62. HouseCanary. Reducing Racial Bias in Home Appraisals Using Automated Valuation Technology. 2021. Available online: https://www.housecanary.com/wp-content/uploads/2021/12/Reducing-Racial-Bias-in-Home-Appraisals-Using-Automated-Valuation-Technology-December-2021.pdf (accessed on 7 February 2023).
  63. Perry, V.G.; Aronowitz, M.; Choi, J.H.; Golding, E.; Green, M.; Green, R.K.; Jourdain-Earl, M.; Aiko Nelson, A.; Rhue, L.; Rice, L. 2020 State of Homeownership in Black America. 2020. Available online: https://www.shiba2020.com/ (accessed on 7 February 2023).
64. FinRegLab; Blattner, L.; Spiess, J. Machine Learning Explainability & Fairness: Insights from Consumer Lending, Working Paper. 2022. Available online: https://finreglab.org/wp-content/uploads/2022/04/FinRegLab_Stanford_ML-Explainability-and-Fairness_insights-from-Consumer-Lending-April-2022.pdf (accessed on 7 February 2023).
65. Kluttz, D.N.; Kohli, N.; Mulligan, D.K. Shaping Our Tools: Contestability as a Means to Promote Responsible Algorithmic Decision Making in the Professions. In After the Digital Tornado: Networks, Algorithms, Humanity; Werbach, K., Ed.; Cambridge University Press: Cambridge, UK, 2020; pp. 137–152.
66. Huard, F. How Machine Learning Is Making It Easier to Spot Fraud and Mitigate Risk for Underwriters. CoreLogic. 2023. Available online: https://www.corelogic.com/culture-stories/how-machine-learning-is-making-it-easier-to-spot-fraud-and-mitigate-risk-for-underwriters/ (accessed on 26 August 2023).
  67. Schwartz, O. In 2016, Microsoft’s Racist Chatbot Revealed the Dangers of Online Conversation. IEEE Spectr. 2019. Available online: https://spectrum.ieee.org/in-2016-microsofts-racist-chatbot-revealed-the-dangers-of-online-conversation (accessed on 7 February 2023).
  68. Collier, K. Twitter’s Racist Algorithm Is Also Ageist, Ableist and Islamaphobic, Researchers Find. NBC News 2021. Available online: https://www.nbcnews.com/tech/tech-news/twitters-racist-algorithm-also-ageist-ableist-islamaphobic-researchers-rcna1632 (accessed on 7 February 2023).
  69. Guynn, J. Google Photos Labeled Black People “Gorillas”. USA Today. 2015. Available online: https://www.usatoday.com/story/tech/2015/07/01/google-apologizes-after-photos-identify-black-people-as-gorillas/29567465/ (accessed on 7 February 2023).
70. Rosen, E.; Garboden, P.M.E.; Cossyleon, J.E. Racial Discrimination in Housing: How Landlords Use Algorithms and Home Visits to Screen Tenants. Am. Sociol. Rev. 2021, 86, 787–822.
  71. Kaye, K. This Senate Bill Would Force Companies to Audit AI Used for Housing and Loans. Protocol. 8 February 2022. Available online: https://www.protocol.com/enterprise/revised-algorithmic-accountability-bill-ai (accessed on 7 February 2023).
72. Myers, S.L., Jr. On the Surprise of the Minnesota Paradox and Large Racial Disparities. Video. Humphrey School of Public Affairs, University of Minnesota. 2020. Available online: https://www.youtube.com/watch?v=6TsLSv1KCWM (accessed on 7 February 2023).
73. Ajunwa, I. The Paradox of Automation as Anti-Bias Intervention. Cardozo Law Rev. 2019, 41, 1671–1742.
74. Thomas, R.; Uminsky, D. The Problem with Metrics Is a Fundamental Problem for AI. arXiv 2020, arXiv:2002.08512.
Figure 1. AI is embedded and expanding throughout the mortgage value chain.
Table 1. SCALE framework [10] and AI applications in the mortgage market.
Criteria | Digitalized Tool/Process | Impact on Minority Homeownership
Societal values | AI/ML use of GPS location | These data violate privacy norms and magnify the income and wealth disparities that have resulted from historical racism and discrimination.
Contextual integrity | Targeted digital advertising that filters content based on demographic or psychographic profiles | Although these tactics work well in the context of apparel or automobiles, targeted digital advertising may be less appropriate for mortgage lending.
Accuracy | Property valuation algorithms | On average, Black and Hispanic borrowers pay higher rates and fees and are more likely to have received high-cost subprime loans, faced foreclosure, or sustained significant equity losses during the 2008 crisis. Models based on “comparable” home values may unfairly penalize minority communities.
Legality | AI/ML mortgage underwriting algorithms | AI models may have negative, disparate impacts on certain racial/ethnic groups; due to model complexity, sources of bias may be difficult to detect.
Expanded opportunity | AI/ML using non-financial data in credit scoring algorithms | Expanded data used for credit scoring may reduce the population of unscorable households, but may also increase the number of households with high-risk (i.e., low) credit scores.
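To make the mapping in Table 1 concrete, the sketch below encodes the SCALE criteria as a simple audit checklist that a reviewer might populate for a given AI application. This is a minimal illustrative sketch only: the names (ScaleCriterion, ScaleAudit, report) are hypothetical and do not correspond to any tool described in this paper or to an existing library.

```python
# Hypothetical sketch: representing a SCALE review (per Table 1) as a
# structured checklist. All class and field names are illustrative only.
from dataclasses import dataclass, field
from typing import List


@dataclass
class ScaleCriterion:
    name: str            # one of the five SCALE criteria, e.g., "Legality"
    ai_application: str  # the digitalized tool/process under review
    concern: str         # the potential impact on minority homeownership


@dataclass
class ScaleAudit:
    model_name: str
    criteria: List[ScaleCriterion] = field(default_factory=list)

    def report(self) -> str:
        # Render the checklist as plain text, one line per criterion.
        lines = [f"SCALE review of: {self.model_name}"]
        for c in self.criteria:
            lines.append(f"- {c.name}: {c.ai_application} -> {c.concern}")
        return "\n".join(lines)


# Example usage, mirroring two rows of Table 1.
audit = ScaleAudit(
    model_name="AI/ML mortgage underwriting model",
    criteria=[
        ScaleCriterion(
            name="Legality",
            ai_application="AI/ML mortgage underwriting algorithms",
            concern="Possible disparate impact; complexity can obscure bias",
        ),
        ScaleCriterion(
            name="Expanded opportunity",
            ai_application="Non-financial data in credit scoring",
            concern="Fewer unscorables, but more high-risk (low) scores",
        ),
    ],
)
print(audit.report())
```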