Market Condition Adjustments – A word of caution on filtering by price for Real Estate Appraisers
Paul Rayburn, CNAREA, DAR
Real Estate Appraiser, Data Analyst, Certified Appraisal Reviewer
January 29, 2025 (*an update to a post from April 2023)
In a recent discussion, the subject of how we select data for analysis came up, in the context of removing price outliers from our subject market analysis. The point made was that we should think more like a buyer: if the subject is worth just under a million dollars, buyers probably are not looking at two-million-dollar houses. The easiest, most apparent way to avoid irrelevant data would seem to be filtering by price. Or is it?
Filtering by price can be acceptable if you have already identified the market segment and can confidently remove irrelevant outlier sales from the set. What was not apparent from the conversation is how important it is not to simply filter by price when gathering the initial data used to develop the market condition analysis and adjustment calculations.
Depending on the market an appraiser works in, there may be many sales or only a few. Sifting through thousands of sales may not be practical, so the obvious answer is to filter the data, and price seems like a reasonable place to start.
"Most anybody, even remotely familiar with a market, can come up with a probable price range for a subject competing market."
Let us say we are looking at a property worth about a million dollars. Indeed, most buyers of a million-dollar home would not be looking seriously at two-million-dollar homes. So, if we are reflecting what reasonable market participants contemplate, we should be producing credible results. Would that million-dollar buyer look between $900,000 and $1,100,000? Well, that's reasonably probable.
Seems logical so far, or maybe not. You may already be starting to see the problem. Frankly, unless you have seen the data plotted out, you probably could not visualize what is occurring; I'd be surprised if anyone could. But once you've seen it, hopefully you will never unsee it.
Here’s the problem with filtering by price before making market condition adjustments: that decision can significantly skew the relevant date of sale market trend/condition adjustments, which is an analytical bias. In the following example, I chose a large market segment. This data set was generated purely for this discussion and not specifically for how I would analyze a particular subject market. Generally, the more data points available, the more consistent the trends.
For this example, to avoid the complexity of shifting market trajectories, I chose a date range where values were consistently trending, to keep the example reasonably simple. *I am also showing only linear trends for the sake of simplicity; given the intentionally chosen period and data, best-fit trend lines may not provide any more relevant results.
Market condition adjustments, also referred to as date of sale adjustments, can be among the most reliable adjustments when appropriately developed. Whether you have a reasonable amount of recent data or need to look years back in history for more complex assignments, it is equally important to get this right.
In the following, Chart 1 contains all of the data even remotely indicative of a market. The formula in the lower right corner of each chart gives the price change per day from the slope of the data. In Chart 1, the slope is $693.03 per day over a data date range of 850 days; the total dollar change that implies, measured against the value at the beginning of the period, works out to the 82% shown. That indicates an 82% increase over the period, but this is a very broad market that may not really represent a conforming market; therefore, we want to refine it.
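As a quick sketch of that arithmetic: the slope and day count below are from Chart 1, while the beginning-of-period value of roughly $718,000 is simply the figure implied by those numbers, used here for illustration only.

```python
# Minimal sketch of the percent-change arithmetic behind the chart annotations.
# slope_per_day and days come from Chart 1; start_value is the approximate
# beginning-of-period value implied by those figures (hypothetical, for illustration).

def market_trend_pct(slope_per_day: float, days: int, start_value: float) -> float:
    """Total dollar change over the period as a percentage of the starting value."""
    return slope_per_day * days / start_value * 100

print(round(market_trend_pct(693.03, 850, 718_000), 1))  # ~82.0
```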
In the search field boxes, simply entering a maximum price of $1,100,000 and a minimum price of $900,000 seems innocent enough; nobody is trying to bias the results, are they? The issue becomes clear when the results are presented visually. In Plot 1a, the triangles contain significant and relevant data that would be excluded by initially searching this tight price range, which arbitrarily results in a nearly linear data set. You would not see this unless you plotted it in a scatterplot, such as below.
In Chart 2 we see the results of that price filtering presented graphically: the calculation works out to 2% over the 850 days, which is completely inaccurate and not an actual reflection of the market.
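To see how a tight price band can do that, here is a toy simulation; the data is synthetic, not the data behind the charts, and assumes a market appreciating at a steady rate of about $693 per day. Fitting a trend to only the sales that happen to fall inside a $900,000 to $1,100,000 band flattens the measured slope dramatically.

```python
# Toy simulation of the price-band bias: synthetic sales in a steadily
# appreciating market, with a trend fit to all sales vs. only the sales
# that happen to fall inside a $900k-$1.1M price filter.
import numpy as np

rng = np.random.default_rng(0)
n = 3000
days = rng.integers(0, 850, size=n)                 # sale date, in days from period start
base = rng.normal(850_000, 250_000, size=n)         # dispersion across the broad market
prices = base + 693.0 * days                        # steady appreciation of ~$693/day

slope_all = np.polyfit(days, prices, 1)[0]

band = (prices >= 900_000) & (prices <= 1_100_000)  # the "innocent" price filter
slope_band = np.polyfit(days[band], prices[band], 1)[0]

print(f"slope, all sales:          ${slope_all:,.0f}/day")   # close to the true $693/day
print(f"slope, price-filtered set: ${slope_band:,.0f}/day")  # far flatter
```

The price band acts like a horizontal window on the scatterplot: early low-priced sales and late high-priced sales are both cut off, so whatever survives looks nearly flat regardless of what the market actually did.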
Chart 2a shows a modified, broader price filter, which yields 37%.
Chart 2b shows the data filtered by price to $750,000 to $1,500,000 and then narrowed to a supposedly more competitive market segment of 2,000 to 3,000 square feet, lot sizes of 0.15 to 0.5 acre, and year built 2005 and newer. Those additional parameters may or may not be valid for a given market, but for the simplicity of this demonstration we can use them as at least one method of paring down the data. The result is still skewed by the initial price filter but does indicate a more plausible adjustment of 60%.
If we instead filter the data first by those parameters of physical similarity, we get a more reliable initial data set and can then, if need be, remove outliers with a more surgical approach. Chart 3 is filtered by building size and lot acreage, producing a 75% adjustment.
That is probably a fine set for a date of sale adjustment, but there is still the issue of brand-new homes pulling up the rate because they did not exist at the beginning of the period, so filtering by year built might give us a more representative market trend adjustment and also eliminate a few more sales, making the final selection of comparables simpler. The final set in Chart 4 is filtered to homes built between 2010 and 2017, bringing the adjustment down to 73%. This now gives us a more manageable set for individual analysis of specific properties and their similarities to the subject. In this case the outliers were also eliminated by the filtering and did not need to be removed individually, although on occasion that may still be required. It is possible the subject is similar to one of our outliers, but we can still analyze those in conjunction with this more reliable, date-of-sale-adjusted data. We may also search for more specific characteristics and even include some comps that were left out of the market trend analysis, since we can now compare them to our adjusted data.
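For those who work in something like pandas, the workflow above might look roughly like this. This is a sketch only: the CSV file, column names, and the helper that turns a fitted slope into a percentage are all hypothetical, and the physical parameters are simply the ones used for Charts 3 and 4.

```python
# Sketch of the "physical similarity first, price later (if at all)" filtering
# approach described above. DataFrame columns and the CSV file are hypothetical.
import numpy as np
import pandas as pd

def trend_pct(df: pd.DataFrame) -> float:
    """Fit a linear price trend and express the total change over the window
    as a percentage of the fitted beginning-of-period value."""
    days = (df["sale_date"] - df["sale_date"].min()).dt.days
    slope, intercept = np.polyfit(days, df["sale_price"], 1)
    return slope * days.max() / intercept * 100

sales = pd.read_csv("sales_export.csv", parse_dates=["sale_date"])

# Chart 3 style: physical parameters first, no price filter.
similar = sales[
    sales["sqft"].between(2_000, 3_000)
    & sales["lot_acres"].between(0.15, 0.5)
]

# Chart 4 style: further narrowed by year built.
refined = similar[similar["year_built"].between(2010, 2017)]

print(f"building size + lot filter: {trend_pct(similar):.0f}%")
print(f"plus year-built filter:     {trend_pct(refined):.0f}%")
```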
The HPI for this same period is 60%. An appraiser may be challenged to decide whether the HPI is reasonably accurate for the subject market. The average HPI price for this broad market is $971,000, but does it represent the relevant market for the subject? Looking at more refined HPI markets may be too limiting. The HPI is not adequately adjusted for quality, condition, and other relevant components not reported within the aggregated data, and it only provides an adjustment for the most common homes in that market. A $600,000 property, a two-million-dollar lakeshore home, or vacant land would not be well represented by the HPI, and on occasion the HPI can produce erroneous results; after all, it is a program designed by humans, and errors or inappropriate model applications are possible. Therefore, we should have our own additional methods for comparison.
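To illustrate in dollars why the gap between the index figure and the market-derived figure matters, here is a toy comparison applying the HPI-implied 60% versus the filtered-market 73% pro rata to a hypothetical comparable. The comp price and elapsed days are made up, and the straight-line, pro rata application simply mirrors the linear trends used above.

```python
# Toy comparison of how the choice of trend changes a time adjustment.
# The comp's price and days-elapsed figure are hypothetical; 60% and 73%
# are the HPI-implied and filtered-market rates over the 850-day window.
def time_adjusted(price: float, total_pct: float, window_days: int, days_elapsed: int) -> float:
    """Apply a total percentage change pro rata (straight line) over the window."""
    return price * (1 + total_pct / 100 * days_elapsed / window_days)

comp_price, days_back = 950_000, 300
print(round(time_adjusted(comp_price, 60, 850, days_back)))  # ~$1,151,176
print(round(time_adjusted(comp_price, 73, 850, days_back)))  # ~$1,194,765
```

On this hypothetical comp, the two rates differ by more than $40,000, which is why it is worth verifying the index against a trend developed from the subject's own competitive data.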
With advances in automated valuation models (AVMs) taking away much of the lending business for typical properties, appraisers increasingly face the challenge of analyzing more complex markets. However, we also have, or should have, increased access to data, and our ability to meaningfully interact with that data has become increasingly important.
As always, I encourage everyone to take George Dell's Stats, Graphs, and Data Science (SGDS) courses and join the Community of Asset Analysts (CAA).
You might like my YouTube channel; you can check it out here. If you want to start with a bit of a blooper outtake featuring "not Shiny Psaul" and my CAA friends, you could start at about the 22-minute mark HERE.
Comments

Certified Residential Appraiser in Washington and Oregon (FHA / USDA / VA / Luxury / Jumbo / Investments / Green), 2 weeks ago:
Seems we think alike.

Providing science-based real estate appraisals that help you keep your best clients, 1 month ago:
Great article, Paul Rayburn, CNAREA, DAR. Very clear rationale to filter the data first by parameters of physical similarity. Thanks for breaking it down.

Certified General Appraiser (retired), 1 month ago:
Great advice, Paul. When I’m cleaning up data, most outliers can usually be explained by physical, locational, or transactional differences—things like major remodeling, views, busy streets, or VA assumptions of low-interest-rate mortgages (etc.).

Valuation Expert Witness - Litigation Consultant - Real Property Data Analyst - Estates & Trusts - Fractional Interests in R.E. Holding Entities - All Property Types & Interests - Certified General Appraiser, 1 month ago:
Thanks Paul. You’re always on the cutting edge.

Seek Knowledge, Bring Positivity, Innovate & Uphold the Public Trust, 1 month ago:
Paul Rayburn, CNAREA, DAR, appreciate the article, my friend! I've been wrestling with this question for a while: What are the best practices for compiling a data set for adjustments? My approach is typically broad—I start with an all-encompassing dataset, pulling everything within the defined market, and run some trends. From there, I refine it by applying physical characteristic parameters to narrow it down to more competitive sales in my defined market and run trends a second time. Honestly, I'm typically trying to get "adjustment opportunities" so that I can see how they impact my final data set in the sales comparison approach. What's your take?