How to Fix Non-Agency Lending Part III: Loan Delivery/Data Integrity
Note: This will be the most ignored essay of the five I'm writing, and I would argue it is the most important of all if you want to fix both agency and non-agency operations.
In Part II we discussed underwriting and the AUS. To fully comprehend how important the AUS is, please re-read the AUS section of that essay; it will help you grasp the magic.
When you think about “shipping” or “loan delivery” functions inside a large agency lender, do you picture someone creating a digital loan file (with all the documents), sending a data tape to an investor, the investor performing due diligence on the file (reviewing the data, reviewing the document file, etc.), and then, only once everything is aligned, the investor purchasing the loan? If you do, then you won’t understand why it is so difficult to transition agency lenders to non-agency. I’ll explain why.
First, we need to revisit the AUS (automated underwriting system). In part II, I discussed:
“In agency loans, a file goes through a loan application, credit is pulled and the file is run through the agency’s AUS (DU in our example). From there the file is essentially approved and the lender must now just prove (via documentation) that what the borrower stated on their application was accurate. Essentially, 90%+ of the heavy lifting is done by the AUS system in agency and it provides significant guardrails for a lender. This makes it virtually impossible for a file to be ‘accidentally approved’ by the underwriter if the AUS does not provide an ‘approval’. The collateral doesn’t need to be reviewed in many cases (if a PIW is granted), the credit report is already read and reviewed by the AUS, the eligibility is already determined based on the Fannie Mae guidelines, and the file is partially conditioned by the AUS (this is a fancy way of saying your loan is approved subject to conditions, such as provide a W-2 to prove you make what you said you did on the loan application).”
The first step in making the above happen requires sending a uniform dataset to the GSEs called the MISMO standard 3.4 file. Once this file is read by the AUS, it is stored along with additional expected future details based on AUS responses. Secret tip: it’s this step that “allows” the GSE to have an “expected” result in terms of data. What do I mean? Based on this AUS approval, without an underwriter ever having looked at the loan, Fannie already knows what to expect when post-closing uploads the file data (ULDD dataset) after the loan is funded.
To say this another way: to get the GSE to “accept” the loan file (purchase the loan), the lender needs to upload a standardized data tape to the GSE after the loan is funded. That data (ULDD) will be compared to the expected data (based on the AUS run) and the file will either be rejected or accepted. You read that right. There is no file review, there is no making sure loan documents match the data (or vice versa), there is no human being reviewing anything to make the purchase decision. If a loan is sold to the GSE (either via cash window or via MBS), the only requirement is that the data matches. That’s it. Period.
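The purchase decision described above really is just a field-by-field data comparison. Here is a minimal sketch of that logic; the field names and values are invented for illustration (the real ULDD specification defines hundreds of data points):

```python
# Sketch of the GSE purchase decision as a pure data comparison.
# Field names and sample values are illustrative, not the actual ULDD spec.

def compare_delivery(expected: dict, delivered: dict) -> list[str]:
    """Return mismatches between AUS-expected data and delivered ULDD data."""
    mismatches = []
    for field, expected_value in expected.items():
        delivered_value = delivered.get(field)
        if delivered_value != expected_value:
            mismatches.append(
                f"{field}: expected {expected_value!r}, got {delivered_value!r}"
            )
    return mismatches

# Expected data captured at the AUS run (DU, in the agency example)
expected = {"note_rate": 6.875, "loan_amount": 400000, "property_type": "SFR"}
# Data the lender's post-closing team uploads after the loan funds
delivered = {"note_rate": 6.875, "loan_amount": 400000, "property_type": "Condo"}

errors = compare_delivery(expected, delivered)
decision = "ACCEPTED" if not errors else "REJECTED"  # here: REJECTED
```

No document ever enters the function; one mismatched field is enough to bounce the whole delivery, which is exactly why the rejection rates discussed below are so striking.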
Now, here is the ultimate issue with mortgage lending, if it could be boiled down to one thing: operations works off documents (paper or digital) while Sales, Capital Markets, Post-Closing, Investors, and the GSEs themselves work off data. Despite all the money thrown at “fintech” and “mortgage tech,” there simply has been no solid solution for marrying data to documents (or vice versa). In fact, there has been little implementation of already-existing tech for marrying expected data results (from the GSE tools) to the actual data (inside the lender’s loan origination system). How do I know that? The GSEs share information on loan delivery data quality relative to quartiles. Did you know that the top 25% of lenders that deliver loans to one of the GSEs have their file rejected one out of every two times (at first delivery attempt)?

Let me rephrase. Despite the fact that the tools exist to mimic loan delivery throughout the loan origination process, despite 30+ days spent inside a lender gathering documents, inputting information, and going through random quality control audits, after the file is already funded, after the borrower has already closed on the loan, and after the operations function has long forgotten about the file, when the lender extracts the information from their LOS and inputs it into the GSE loan delivery system, a machine (not a person, not someone thoroughly reviewing a loan file, not a chief credit officer digging through a file with a fine-tooth comb) simply says “hey, the information you just uploaded doesn’t match what we approved.” And this happens 50% of the time, at the “best” companies in lending. 50%!
Now at this point, when the loan delivery team gets this error message (which appears on the GSE’s website, not in their LOS), they have two choices:
1. Take this information back to operations, have operations update the LOS and then reimport the information to the GSE
2. Type in the expected information into the loan delivery system directly.
If they do number 1, it basically delays the loan delivery process (think Lucy on the chocolate factory line trying to eat all the chocolates), but the data in their LOS will match the loan delivery system. If they do number 2, it will speed up the process, get the loan purchased faster, and won’t “bug” the operations people upstream, BUT the data in the LOS won’t match the loan delivery system. Which one do you think they do 90% of the time? That’s right, number 2.
So to recap: if a loan is “sold” directly to the GSE, loan files are not reviewed (at time of delivery); data is compared to expected data, and the purchase moves forward (or doesn’t) based on that comparison; the data upload is rejected 50%+ of the time; the data is generally updated in the GSE system and not the LOS; and so, in the end, the loan is purchased with data discrepancies between the LOS and the GSE that are virtually never reconciled.
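The discrepancies described above would be trivial to detect if anyone bothered to look: simply diff the LOS record against what was actually keyed into the GSE delivery system. A hypothetical sketch (field names invented for illustration):

```python
# Hypothetical reconciliation between the LOS record and what was actually
# keyed into the GSE loan delivery system (choice number 2 above).
# Field names and values are invented for illustration.

def reconcile(los_record: dict, gse_record: dict) -> dict:
    """Return fields where the LOS and the GSE delivery system disagree,
    mapped to a (los_value, gse_value) pair."""
    return {
        field: (los_record.get(field), gse_record.get(field))
        for field in set(los_record) | set(gse_record)
        if los_record.get(field) != gse_record.get(field)
    }

los = {"loan_amount": 400000, "dti": 0.43}  # what operations left in the LOS
gse = {"loan_amount": 400000, "dti": 0.41}  # what loan delivery typed to get the loan purchased

drift = reconcile(los, gse)  # the discrepancy nobody ever goes back to fix
```

Running a report like this after every purchase would surface the drift immediately; the point of the essay is that virtually no lender does.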
Because of this, I always recommend that every lender sell at least some portion of their loans to an aggregator (a middleman/correspondent lender). Why? Because those buyers will actually review loan files and provide feedback. This helps the lender keep a good pulse on their file delivery quality, but many lenders don’t do it because it “delays” loan purchase timelines (well, sure, especially if the data is wrong, the documents don’t match the data, or the lender’s stacking order is a mess [most are]).
So now that we’ve discussed how agency loan delivery works, let’s juxtapose that against non-agency. For non-agency, as discussed in Part II, there is little to no use for the AUS (it’s not going to capture this data and compare it to loan delivery the way Fannie/Freddie do, which, ironically, is one of the major points of having an AUS and arguably the most important from a scale perspective). The underwriter will manually underwrite the file; the operations team will still do their document herding and reviewing; and capital markets, investors, and, to some extent, sales will still do their jobs using data. However, when it comes time to get a loan purchased, the lender will upload a non-standardized data tape (with additional fields required for non-agency loans that don’t exist for agency) along with the document file, and all of this will be reviewed by a TPR firm (we will discuss this in Part IV) and by the firm purchasing the loan.
At this point, I could write several more pages on the topic, but what I want the reader to take away is this: the entire process of delivering a loan to a non-agency investor exposes the parts of the process most lenders are worst at, AND it doesn’t include the feedback (data comparison) that a lender is used to getting from the GSEs, AND it should give the buyer of the loan a bit of heartburn when they realize that if a lender can’t get the data right on an agency loan one out of two times, how will they get it right on a non-agency loan, where all the tools an agency loan has (AUS, ULDD, data delivery feedback, etc.) don’t exist? In other words, on agency loans, if the lender says the data is X and an auditor says the data is Y, the gold standard is the AUS/ULDD, which may say the data needs to be Z. This standard allows for excellent audit results. However, on non-agency loans, if a lender says the data is X and the auditor says the data is Y, how does anyone know if the data should have been Z?
In order to fully appreciate the brilliance of the AUS, you must start at the end of the loan origination process. Unfortunately, most of the investment in mortgage tech is on the front end. If the investment started at the back, it would solve more problems, increase tech ROI, reduce errors and cycle times, AND create better solutions for the front end. If non-agency wants to scale, it needs to understand how the AUS and ULDD work hand-in-glove (and how that pairing can replace/reduce reliance on TPR), and this will lead to an obvious conclusion which I will discuss in Part V as one of the solutions to fixing the non-agency market.
Pro-Tip (if you work at a lender you should advocate for this, no matter what your position in the firm, if the below is implemented it will improve your life):
Having been a capital markets professional in the agency mortgage space at multiple lenders, I cannot emphasize enough how important it is for lenders to grasp what I’m about to say. Despite numerous tools available (and integrated into most third-party LOS systems), free, readily available documentation online, and very responsive reps at the GSEs, almost no lender does this right. If there were a single item that would materially improve cycle times, post-closing backlogs, QC, cost per loan, etc., it would be doing what I’m about to say properly: understand and implement the “early delivery” toolset from Fannie Mae.
Here is required reading on the subject:
How the original AUS data gets to Fannie:
How loans are delivered to Fannie (data not documents)
https://singlefamily.fanniemae.com/media/5761/display
How to make sure your data is correct (at any point in the origination process, not just after funding):
If you build the above into your origination process, you will be shocked at the improvements you see in customer experience and operational efficiency, and post-closing/secondary will thank you for saving them 50% of their time. Everywhere I’ve implemented this, we have achieved a 99% loan delivery success rate on attempt one. This project takes less than three months to complete and can be done in as little as 30 days, depending on how well the LOS is set up.
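The core idea of “early delivery” is to run the same style of data edit-checks the GSE will run at purchase, but at each origination milestone instead of once after funding. A hypothetical sketch of wiring such checkpoints into a pipeline (the milestone names and toy validation rules are invented; the real checks come from the GSE’s own tools):

```python
# Hypothetical sketch of "early delivery" checkpoints: run delivery-style
# validation at every origination milestone rather than once after funding.
# Milestone names and rules are invented; real edit-checks come from GSE tools.

def validate(loan: dict) -> list[str]:
    """Toy stand-in for a GSE delivery edit-check run."""
    findings = []
    if loan.get("loan_amount", 0) <= 0:
        findings.append("loan_amount missing or non-positive")
    if "property_type" not in loan:
        findings.append("property_type missing")
    return findings

def check_at_milestone(loan: dict, milestone: str) -> bool:
    """Validate at a milestone so errors surface while the file is still open."""
    findings = validate(loan)
    for finding in findings:
        print(f"[{milestone}] {finding}")
    return not findings

loan = {"loan_amount": 400000}
clean = check_at_milestone(loan, "underwriting")  # False: property_type missing
loan["property_type"] = "SFR"                     # fixed while the file is open
clean = check_at_milestone(loan, "funded")        # True: file would pass delivery
```

The design point is when the check runs, not what it checks: the same comparison that rejects half of first delivery attempts costs almost nothing when it fires at underwriting, while the file is still in someone's queue.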