Getting Rid of Issuer Pay Will Not Improve Credit Ratings, But Maybe Investor Revenue Will
21 April 2024
Douglas J. Lucas
ABSTRACT
Critics of credit rating agencies’ issuer-pay revenue practice note that it allows debt issuers and arrangers to shop among agencies for the highest rating. There is ample evidence that “rating shopping” prompts rating agencies to win business by rating debt higher than warranted by actual credit risk. To eliminate rating shopping and inflated ratings, issuer-pay critics suggest barring debt issuers from selecting and paying rating agencies.
But issuer-pay critics are wrong to assume that credit ratings would be more accurate if only the issuer-pay incentive for inflated ratings were extinguished. We document a historical test of this assumption, an instance when issuer pay and rating shopping were irrelevant concerns and ratings were still inappropriately high. With issuance effectively zero 2007-09, S&P Global Ratings had no commercial incentive to maintain the inflated ratings it nevertheless maintained on subprime mortgage-related securities.
S&P’s poor rating accuracy 2007-09 can only be explained by its analysts lacking credit skills and judgment. Which is not to say that the issuer-pay revenue practice played no role. Issuer pay, combined with regulators’ use of ratings in financial regulation, allowed S&P to make money without being very good at analyzing credit risk. After decades of not needing to be good at analyzing credit, S&P became poor at analyzing credit. A more recent example in the US corporate debt sector supports this assertion about the atrophy of S&P’s credit skills and judgment.
New rating agency regulations 2006-10, intended to improve rating quality, failed to do so and one new rule made rating shopping worse. Meanwhile, rules intended to promote financial institution solvency eliminated or decreased the use of credit ratings in bank and insurance company capital calculations. Reducing the use of ratings in regulation reduces a perverse incentive encouraging inflated ratings. To take advantage of the opportunity to improve rating accuracy, we would require rating agencies to balance issuer-paid rating revenue with investor-paid credit research revenue.
Getting Rid of Issuer Pay Will Not Improve Credit Ratings, But Maybe Investor Revenue Will
The credit rating agencies’ issuer-pay revenue practice, where debt issuers choose and pay the agency that rates their debt, has been under scrutiny especially since the 2007-08 financial crisis (Grassley et al 2020, Podkul 2019b, Podkul 2020a, Podkul 2020b, and Warren 2019). Critics note that the practice allows debt issuers and arrangers to shop among agencies for the highest rating. Such “rating shopping” incents agencies to compete for business by offering ratings higher than warranted by actual credit risk. Issuer-pay critics also show that since Congress acted in 2006 to increase the number of credit rating agencies, the competition to supply inflated ratings has increased (Podkul and Banerji 2019).
To stop rating shopping and eliminate inflated ratings, some issuer-pay critics suggest that rating agencies be randomly assigned to rate debt issues, rather than chosen by issuers. The randomly selected agency would be paid directly from debt proceeds (Kotecha et al 2012). Issuer-pay critics assume that ratings would be accurate if only the issuer-pay incentive for inflated ratings were extinguished. But eliminating an incentive to produce inflated ratings creates neither the incentive nor the ability to produce accurate ratings. The subprime mortgage crisis provided a unique opportunity to test rating agency performance in the absence of perverse issuer-pay incentives.
We detail a two-year period, July 2007 to June 2009, when S&P Global Ratings had no commercial incentive to inflate the ratings of subprime mortgage-backed securities (subprime bonds) and collateralized debt obligations backed by subprime bonds (subprime CDOs).[1] Because issuance of subprime bonds and subprime CDOs was effectively zero during those two years, angling for new-issue rating fees was not a consideration.
No issuer-pay incentive encouraged S&P to maintain erroneously high ratings on the subprime bond and subprime CDO ratings it monitored. In fact, given intense public interest in subprime sector ratings at the time, S&P had significant reputational incentives to get ratings right. Yet, we show that S&P’s ratings were inflated by comparing them to market prices and the credit analyses of market observers. In the absence of issuer-pay incentives, S&P still inflated subprime credit ratings. This demonstrates that banning issuer-pay revenue would not improve credit rating accuracy.
S&P’s poor rating accuracy 2007-09 can only be explained by its analysts lacking credit skills and judgment. Which is not to say that the issuer-pay revenue practice played no role. Issuer pay, combined with financial regulators’ use of ratings, incented inflated ratings. When ratings are used in regulation, investors join issuers and arrangers in wanting inflated ratings. Regulated financial institutions want high ratings to lower capital requirements and fixed income funds want freedom to invest in high-yielding assets their investment guidelines would prohibit if the debts were rated accurately. When ratings are used in regulation, issuers and arrangers can rating shop without investor complaint.
Given the dynamics favoring inflated ratings, S&P discovered that being particularly good at identifying credit risk was an impediment to making money. But in 2007-09, it behooved S&P to get subprime ratings right. The fact that S&P couldn’t get those ratings right shows that after decades of not needing to be good at credit analysis, S&P couldn’t be good at credit analysis. A more recent example in the US corporate debt sector supports this assertion about the atrophy of S&P’s credit skills and judgment.
Issuer-pay critics make a mistake by assuming that doing away with the practice will automatically improve rating accuracy. It’s not the case that good credit skills exist at S&P and would suddenly bubble to the surface if some new rule is implemented. Those good credit skills don’t exist. Regulators’ attempts to improve credit rating quality 2006-10 failed because they made the same assumption as issuer-pay critics.
Ratings are no longer used in bank regulation and are not used in insurance company regulation for mortgage-backed securities. Ratings are no longer used to set registration, disclosure, and distribution requirements. This creates the possibility that ratings can reassume their original purpose of supplying credit guidance to investors. Our suggestion for improving credit rating quality recognizes that credit skills and judgment at S&P are lacking. Our proposal requires rating agencies to balance issuer-paid ratings revenue with investor-paid credit research revenue. Forcing S&P to earn more money from investors will require S&P to do a better job analyzing credit risk for investors, and this analysis will be reflected in more accurate ratings.
Rating accuracy is important. The US Department of Justice drew a line from rating agency errors to the 2007-08 financial crisis when it sued S&P in 2013. The DOJ alleged (US Department of Justice 2013) that in years preceding the crisis, “S&P’s concerns about market share, revenues and profits drove them to issue inflated ratings, thereby misleading the public and defrauding investors. In so doing, we believe that S&P played an important role in helping to bring our economy to the brink of collapse.” S&P settled the suit for $1.5 billion in 2015 (Settlement Agreement 2015).
This paper has four sections and two appendices. “The Demise of Subprime Issuance” shows how little issuance there was after June 2007, thus eliminating any issuer-pay incentive for S&P to maintain inflated ratings on outstanding debt to gain new-issue rating mandates. “Subprime Credit Conditions 2007” uses market prices and sell-side analyst research to show that the abysmal credit quality of subprime bonds and subprime CDOs was well known. “S&P’s Subprime Ratings 2008-09” shows that S&P maintained inflated ratings when there was no new issuance of subprime bonds and it had no issuer-pay incentive to inflate ratings. Thus, S&P’s inflated ratings can only be explained by its analysts lacking credit skills and judgment.
“How S&P’s Credit Ratings Can Be Improved” summarizes regulators’ failed attempts to improve credit rating quality, attempts that made the same mistake as issuer-pay critics: assuming that good credit analysis exists at S&P and will bubble to the surface once issuer-pay incentives are removed. We make our own suggestion for improving credit rating quality, aware that credit analysis skills at S&P are lacking and must be improved.
In answer to questions about singling out S&P, Appendix I, “The Best Rating Agency During the Subprime Meltdown,” shows that Fitch, ironically, had the lowest, and therefore most accurate, ratings on subprime bonds, although its ratings were still too high. Appendix II, “Incorrectly Calculating Leverage Ratios,” gives a more recent example of S&P’s credit-skills failure, showing that its 2007-09 subprime failures were not an anomaly.
THE DEMISE OF SUBPRIME ISSUANCE
Subprime bond issuance declined from $509 billion in 2006 and $203 billion in the first half of 2007 to $27 billion in the second half of 2007 and virtually zero thereafter. There was a $303 million issue in January 2008, a $48 million issue in October 2008, and nothing in 2009. Monthly issuance is shown in Exhibit 1 (Flanagan et al 2004-09). In the second half of 2007, the financial infrastructure to originate subprime mortgage loans was effectively dismantled as originators went bankrupt, sold themselves to prime originators, laid off employees, or otherwise shut down subprime lending (Sharif 2007).
Meanwhile, subprime CDO issuance declined from $182 billion in 2006 and $113 billion in the first half of 2007 to $916 million in the second half of 2007 and virtually zero thereafter. There was a $20 million issue in February 2008, a $75 million issue in December 2008, and two issues totaling $821 million in February 2009. Monthly issuance is shown in Exhibit 2.
The $113 billion subprime CDO issuance in the first half of 2007 overstates the health of that market. Many of those CDOs resulted from broker-dealers preemptively liquidating the subprime bond warehouses they financed for CDO managers. The dealers packaged subprime bonds from various manager warehouses together into unmanaged, static CDOs they issued at a loss. Likewise, the blip of issuance in February 2009 does not indicate a revival of CDO issuance. Those CDOs were not created from newly issued subprime bonds and they were not sold to investors. Rather, the CDOs packaged together previously issued distressed subprime bonds so that the CDO’s AAA tranches could be posted as collateral for loans from the European Central Bank. As the subprime bonds underlying these CDOs deteriorated further in 2009, the banks took the CDOs back from the European Central Bank and dissolved them before they could default.
The amount of new ratings business to be gained in subprime bonds and subprime CDOs after June 2007 was trifling. S&P effectively had zero issuer-pay incentive to maintain inflated ratings on outstanding subprime bonds and subprime CDOs to gain new rating mandates.
SUBPRIME CREDIT CONDITIONS 2007
Market consensus on the credit quality of subprime bonds over 2007-09 can be gleaned from ABX index prices. ABX indices were created every six months from 20 subprime bond transactions issued the previous six months. Thus, 2006-1 indices referenced 20 subprime bond deals closing July to December 2005, 2006-2 indices 20 deals closing January to June 2006, 2007-1 indices 20 deals closing July to December 2006, and 2007-2 indices 20 deals closing January to June 2007. Each of these ABX “rolls” comprised six indices, each referencing 20 like-rated bonds, one from each of the 20 deals. There were indices for penultimate-pay AAA bonds, last-pay AAA bonds, and ones for bonds rated AA, A, BBB, and BBB-.[2]
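To keep the rolls and sub-indices straight, the naming convention can be restated as data, per the following sketch (the structure and labels are ours, for illustration only):

# Each ABX roll references 20 subprime bond deals closed in the prior half-year.
abx_rolls = {
    "2006-1": "20 deals closed July-December 2005",
    "2006-2": "20 deals closed January-June 2006",
    "2007-1": "20 deals closed July-December 2006",
    "2007-2": "20 deals closed January-June 2007",
}

# Each roll comprises six sub-indices, one per original rating level, each
# referencing one like-rated bond from each of the roll's 20 deals.
abx_subindices = ["penultimate-pay AAA", "last-pay AAA", "AA", "A", "BBB", "BBB-"]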
Each ABX index formed the basis of a credit default swap. Semiannual protection payments, from protection buyer to protection seller, were set at the index’s inception and never varied, not even for new contracts entered into much later. Instead, to account for changes in perceived credit risk, an upfront payment, expressed as a percent of par, was made at the time the swap was entered into.
Loosely speaking, these upfront payments can be viewed as the market’s best guess as to future losses, in a gambling sense, the “over-under.” Thus, an upfront payment of 50% of par means that protection buyers are willing to buy protection at that price because they think losses will be greater than 50% of par. Protection sellers are willing to take 50% of par upfront because they don’t expect protection payments to exceed 50%.[3]
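To make the over-under concrete, the following minimal sketch computes the protection buyer’s payoff, ignoring the small fixed running payments (our simplification, in the same loose spirit as the explanation above):

def protection_buyer_pnl(upfront_pct, realized_loss_pct, notional=100.0):
    """P&L to an ABX protection buyer, ignoring the fixed running payments
    set at the index's inception (an illustrative simplification)."""
    protection_received = realized_loss_pct / 100.0 * notional  # writedown payments
    upfront_paid = upfront_pct / 100.0 * notional
    return protection_received - upfront_paid

# With a 50%-of-par upfront payment, the buyer profits only if losses exceed 50%:
print(protection_buyer_pnl(50, 60))  # +10.0 per 100 of notional
print(protection_buyer_pnl(50, 40))  # -10.0 per 100 of notional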
Exhibit 3 shows upfront payments, or, by our explanation, loss expectations on BBB- ABX indices. Loss expectations for 2006-2, 2007-1, and 2007-2 were 58%-62% of par at the end of July 2007. ABX 2006-1 was relatively creditworthy with a 41% loss expectation. By the end of 2007, loss expectations for all four indices were 71%-81%. Loss expectations increased further in 2008, when we begin comparing ABX prices to S&P’s ratings.
Exhibit 4 shows upfront payments, or loss expectations, on last-pay AAA ABX indices. By the end of 2007, loss expectations for all four indices were 6%-25%. Loss expectations increased further in 2008, when we will begin comparing ABX prices to S&P’s ratings.
The pessimism reflected in ABX prices was shared by sell-side analysts at UBS and JP Morgan. In February 2007, UBS named six bonds in the 2006-2 BBB- index it said would default (Zimmerman et al 2007). JP Morgan went further that same month, naming 11 bonds in the 2006-2 BBB- index and nine bonds in the 2007-1 BBB- index it thought would default (Flanagan 2007a). But by September 2007, the view was even more pessimistic. UBS’ opinion that month was for subprime bond losses to extend up to most A tranches and losses on CDOs backed by BBB- to A+ subprime bonds to extend up to senior AAA tranches (US Senate Permanent Subcommittee on Investigations 2010 and Lucas 2007a). Analysts’ pessimism stemmed from their consideration of an increasing body of evidence.
In September 2007, the latest Case-Shiller Home Price Index for 20 US metropolitan areas showed the greatest year-over-year decline ever, 6.2%. Among the cities with the largest year-over-year declines were Detroit (9.7%), Tampa (8.8%), San Diego (7.8%), Phoenix (7.3%), and Washington DC (7.2%) (Sharif 2007). Back in December 2006, UBS’ chief economist predicted a 15% decline in home prices because the high proportion of vacant houses among houses for sale portended desperate sales.
The effect of home price depreciation on subprime credit quality is dramatic. Besides creating greater losses in the event of a default, home price depreciation increases default frequency, particularly among investor properties. When JP Morgan analysts predicted in February 2007 the default of 20 of the 40 bonds making up the BBB- tranches of 2006-2 and 2007-1, they did so under their base case expectation of 0% home price appreciation in 2007. At the same time, they said that if home prices declined 3% in 2007, 36 of those 40 BBB- bonds would default. Under the same 3% decline, JP Morgan predicted that 31 of the 40 bonds in the 2006-2 and 2007-1 BBB tranches, one rating notch higher, would also default (Flanagan 2007b). In the seven months since JP Morgan made those predictions, the Case-Shiller 20 metro home price index had fallen 6.3%.
Loan delinquencies, defaults, and losses kept mounting. Rather than curing, or remaining in the same “delinquency bucket,” a greater percentage of delinquent loans were “rolling” into greater delinquency. For example, a greater percentage of 30-day past due loans were becoming 60-day past due rather than either remaining 30 days past due (the borrower having made one payment and remaining one payment behind) or becoming completely cured. But besides home price depreciation and mounting delinquencies and defaults, another factor suggested that things were going to get worse.
In the past, most subprime borrowers refinanced their adjustable-rate loan before its interest rate reset, when the loan came out of its low interest rate teaser period, jumped to a higher interest rate, and incurred higher monthly mortgage payments. Homeowners who couldn’t refinance often defaulted because of increased payments. But new subprime loan origination was effectively shut down by September 2007 and borrowers’ refinancing opportunities narrowed to Federal Housing Administration (FHA) loans or Federal Home Loan Mortgage Corporation (Freddie Mac) and Federal National Mortgage Association (Fannie Mae) conforming loans under the agencies’ expanded subprime programs. However, these alternative refinancings were only available to the most creditworthy among subprime borrowers. Homeowners who could not refinance and who could not afford higher payments were compelled to either negotiate a loan modification with their loan servicer, entailing a loss to the mortgage holder, or default outright (Zimmerman 2008).
By September 2007, home price depreciation had also caused an increase in strategic defaults, where the borrower can make mortgage payments but decides not to because he owes more on the house than it is worth. The social stigma against defaulting had weakened and it was becoming more acceptable to default on one’s mortgage obligation. Consulting services sprang up to help homeowners navigate the process of handing house keys back to their mortgage lender. Delaying tactics could garner months of free rent for the homeowner, and thus create higher loan losses. The consulting firm whose name most characterized the new attitude toward default was “YouWalkAway.com.” Later, politicians started using the term “predatory lending” to explain the mortgage crisis, giving homeowners a moral justification for defaulting.
S&P’s SUBPRIME RATINGS 2008-09
The rating history we will relate shows that when S&P had no issuer-pay incentive it still maintained obviously inflated ratings on subprime bonds and subprime CDOs. This history convinces us that banning issuer pay, to eliminate commercial incentives for inflated ratings, is not sufficient to improve rating accuracy.
We’ve shown that subprime bond and subprime CDO issuance crashed in July 2007, and we’ve shown the market’s pessimistic view of subprime credit quality in 2007 as embodied in falling ABX prices. We’ve also recounted market conditions and sell-side analyst opinion as of September 2007. But let’s generously assume it would take a while for S&P to realize that the subprime market was dead and that there would be no more revenue from new subprime securitizations. Let’s further assume that S&P wasn’t concerned that subprime debt was being traded in the secondary market, informed by its ratings. Finally, let’s assume that it would take a while for S&P to realize that downgrades were the only way for it to salvage some of its reputation. So, we won’t begin looking at S&P’s subprime ratings until April 2008, seven months after the dire September 2007 credit conditions we described in this paper’s previous section.
Subprime Bond Ratings in April 2008
In April 2008, a subprime bond hadn’t been issued since the $303 million issue in January. The Case-Shiller 20 metro home price index had fallen 15% in 15 months, fulfilling the UBS economist’s prediction. Upfront payments on ABX BBB- indices were 85%-94%, depending on the vintage (Exhibit 3). Upfront payments on last-pay AAA indices were 5% to 42% (Exhibit 4). In line with these ABX prices, UBS subprime analysts predicted that 292 of the 400[4] bonds composing the four rolls of the ABX index would be at least partially written down, with some loss of principal (Lucas 2008c). In fact, more bonds eventually defaulted.
S&P rated all the bonds UBS said were going to default. By April 2008 it had downgraded 219 bonds underlying ABX indices. It had even downgraded seven ABX-included bonds originally rated AA all the way down to CCC. But it had not downgraded enough bonds and most of the ones it downgraded were not downgraded far enough.
Exhibit 5 shows the then-current S&P ratings of subprime bonds that UBS predicted would be at least partially written down. For example, the second row, second column shows that S&P rated 24 of these bonds AAA on 1 April 2008. As shown in the same row third column, the 24 AAA bonds equate to 8% of the bonds UBS said would default. So 8% of the bonds UBS said were going to default enjoyed S&P’s highest rating.
Adding up the exhibit’s second and third columns, S&P rated 54 bonds, or 18% of the bonds UBS said would default, AA- or higher. It rated 96 bonds, a third of UBS-predicted defaults, investment grade (BBB- or higher). The rating agency rated only 29 bonds, or 10% of the bonds UBS said would default, CC. This was the only appropriate rating for bonds destined to default.
Subprime Bond Ratings in July 2008
By July 2008, a subprime bond still hadn’t been issued since the $303 million issue in January 2008. The Case-Shiller 20 metro home price index had fallen 19% in 18 months. Upfront payments on ABX BBB- indices were 91%-95%, depending on the vintage (Exhibit 3). Upfront payments on last-pay AAA indices were 11% to 56% (Exhibit 4) and upfront payments on the new penultimate-pay AAA indices were 5% to 47%. UBS subprime analysts repeated the April analysis, but this time identifying 257 of 480 ABX bonds they believed would be written down completely, with all principal on the bonds lost (Lucas 2008d). In fact, all 257 bonds eventually defaulted with total principal loss, and higher-rated tranches above them in the deals’ capital structure also defaulted.
S&P rated all the bonds UBS said were going to default with no principal return. It had downgraded 52 more bonds underlying ABX indices since April, rating seven more bonds CC. But it had not downgraded enough bonds and most of the ones it downgraded were not downgraded far enough. Exhibit 6 shows the then-current S&P ratings of subprime bonds that UBS predicted would be written down completely. For example, the second row, second column shows that S&P rated five of these bonds AA on 14 July 2008. As shown in the same row third column, the five bonds were 2% of the bonds UBS said would default.
Adding up the exhibit’s second and third columns, S&P rated 39 bonds, or 15% of the bonds UBS said would suffer a complete principal loss, investment grade (BBB- or higher). The rating agency rated only 36 bonds, or 14% of the bonds UBS said would be completely written down, CC. This was the only appropriate rating for bonds destined to default.
Subprime CDO Ratings 2008-09
Like subprime bond issuance, subprime CDO issuance crashed in July 2007. In all of 2008, only $95 million was issued. Back in August 2007, UBS had predicted severe losses on senior AAA tranches of subprime CDOs including ACA ABS CDO 2006-2 (Lucas, Li, Shinozuka, and Mladinich 2007). Exhibit 7 shows a timeline of S&P’s rating actions in 2008. S&P first downgraded the tranche into a distressed rating category in July 2008, 11 months after UBS had declared it destined for severe losses.
Ambac Ratings 2008-09
In November 2007, UBS estimated $3.75 billion of losses just on the credit protection Ambac Assurance Corporation had written on subprime CDOs and CDOs of subprime CDOs (“CDO squared”). Its prediction did not include any other exposure Ambac had to mortgages. At the time, Ambac had only $284 million of loss reserves and $5.6 billion of equity. The timeline in Exhibit 8 shows it took 20 months after UBS’ estimate for S&P to downgrade the insurer to speculative grade. But long before that it was obvious that Ambac could not pay all insurance claims and later, in fact, it did not.
Subprime Loss Estimates in 2009
S&P continued to hold an optimistic view of subprime credit quality into 2009. Exhibit 9 shows subprime loss predictions from UBS, JP Morgan, Barclays, and Citigroup in November and December 2008 and S&P’s base case subprime mortgage loss assumptions in 2009. The sell-side analysts made their predictions by ABX roll or calendar year. In February 2009, S&P’s loss predictions were substantially less than any of the sell-side analysts’ predictions from two and three months previous. The rating agency still lagged sell-side analysts’ 2008 predictions in July 2009.
Did Issuer-Pay Incentives Still Somehow Prevent Timely Downgrades?
We’ve shown that when S&P had no issuer-pay incentive, it still maintained obviously inflated ratings on subprime bonds and subprime CDOs. S&P did this despite reputational incentives to get its ratings right. This history convinces us that banning issuer pay, to eliminate commercial incentives for inflated ratings, is not sufficient to improve rating accuracy. But commentators wonder if there was not still some way S&P’s issuer-pay revenue practice discouraged timely downgrades. The most frequent suggestion is that arrangers might have taken other ratings business away from S&P if the rating agency had downgraded subprime debts as fast as it should have downgraded them. But this isn’t a plausible explanation for the slow pace of S&P’s subprime downgrades.
Structured products arrangers would be reluctant to boycott S&P. Arrangers must offer the best rating execution to structured products issuers to retain their business. Boycotting S&P because of subprime downgrades would reduce an arranger’s ability to rating shop and might cause the arranger to lose business to an arranger not boycotting S&P. Outside the structured products area, most ratings are purchased by corporate and financial institution CFOs who don’t care how S&P rates subprime-related credits. Further, many corporate and financial institution debt investors required S&P ratings 2007-09 and issuers and arrangers in those sectors could not boycott S&P. S&P would have understood these constraints and not been afraid of a ratings boycott.
More importantly, arrangers had given up on subprime and had no reason to push S&P to maintain inappropriately high ratings. As we showed, structuring and sales of new subprime bonds and subprime CDOs were virtually non-existent after July 2007. Layoffs in subprime departments began in 2007. By April 2008, when we began looking at S&P’s subprime ratings, UBS had recognized $37.4 billion of subprime-related losses (on its way to recognizing $45 billion), to take one example of Wall Street subprime write-downs. As we showed, sell-side researchers at the major arrangers were increasingly negative on subprime bonds and subprime CDOs 2007-08, as reflected in their published loss estimates. If arrangers cared about subprime ratings, they would not have allowed their analysts to publish loss predictions so much higher than the rating agencies’ forecasts.
But in an important way, S&P’s issuer-pay revenue practice did cause its 2007-08 subprime rating failures. Issuer pay meant that S&P could make money without being very good at analyzing credit risk. The issuer-pay revenue practice encouraged it to produce a rating product for issuers and arrangers, not investors. In this environment, being good at credit analysis wasn’t necessarily a business advantage. In fact, poor credit skills leading to inflated ratings were a business advantage. After decades of not needing (or not wanting!) to be good at credit analysis, S&P’s credit skills and common sense atrophied. Appendix II shows that poor credit skills are not just a bygone historical phenomenon in a single debt sector, but a condition that continues today.
So, while there was no business reason for S&P to maintain inflated subprime ratings 2007-08, and reputational reasons to get the ratings right, the issuer-pay revenue practice had so damaged S&P’s credit skills that its analysts did not have the ability or culture to properly assess subprime credit quality. S&P analysts ignored security prices, home price depreciation, mortgage delinquencies and defaults, the loss of mortgage refinancing options, the increase in strategic defaults, and the loss estimates of sell-side analysts. Issuer-pay critics make a mistake by assuming that doing away with the practice will automatically improve rating accuracy. Good credit skills at S&P won’t suddenly bubble to the surface if the issuer-pay revenue practice is banned because good credit skills aren’t there.
HOW S&P’s CREDIT RATINGS CAN BE IMPROVED
We summarize regulators’ failed attempts to improve credit rating quality, attempts that presume S&P, in its current state, can do a better job. In contrast, our proposal recognizes that credit analysis skills at S&P are lacking and must be improved. We would require rating agencies to balance issuer-paid rating revenue with investor-paid credit research revenue. Attracting investor revenue would require S&P to improve its analyst ranks and its rating accuracy.
Rating Agency Regulatory Failures
The most well-thought-out schemes to implement a non-issuer-pay revenue structure call for rating mandates to be assigned to rating agencies randomly and for rating agencies to receive regulatory-determined fees. Rating agency quality does not factor into these plans to assign mandates and set fees. Issuer-pay opponents just assume that in the absence of issuer-pay incentives, rating quality will improve. We think it more likely that if rating agency revenue is fixed, agencies will focus on reducing costs rather than improving the quality of their analysis.
But if regulators do ban issuer pay and institute some alternative revenue scheme, and it proves ineffectual or even counter-productive, it would only add to the history of flawed rating agency regulation.
The first regulatory blow to credit rating quality occurred decades before rating agencies began charging issuers for ratings. Regulators began appropriating credit ratings for their own use in 1933, when the Office of the Comptroller of the Currency and the Federal Reserve required banks to hold extra capital against speculative-grade bonds. Later in the Great Depression, those regulators banned banks from holding speculative-grade bonds altogether.
The regulatory use of ratings grew over the decades. To provide a few examples, the National Association of Insurance Commissioners began using ratings to assess insurance company investment portfolios in 1951. The Securities and Exchange Commission started using ratings to assess broker-dealer capital in 1975. The Department of Labor started using ratings with respect to pension funds in 1989. Congress applied ratings to Savings and Loan regulation in 1989. The SEC applied ratings to money market funds in 1991. Later, the SEC used ratings as the basis for lowering registration, disclosure, and distribution requirements. Regulators saw credit ratings as a useful and free tool in their efforts to protect the safety and soundness of the financial system (Lucas 2008a).
The head of Moody’s rating practice explained how the regulatory use of ratings threatened rating quality in a speech before the SEC in 1995 (McGuire 1995). He argued that if ratings are exclusively used by investors to make investment decisions, rating agencies will strive to make their ratings accurate, even in the presence of issuer-pay incentives. “As long as the product … being sold to issuers [is] credibility with investors, there [is] a natural force, the need to retain investor confidence, to countervail the pressure of rating fees.”
But when ratings are used in regulation, investors’ desires become muddled. Regulated entities like high ratings on the investments they purchase to please regulators and decrease capital requirements. After purchase, debt investors don’t want to see their investments downgraded, and if they are to be downgraded, they want plenty of warning beforehand.
“By using securities ratings as a tool of regulation, governments fundamentally change the nature of the product agencies sell. Issuers then pay rating fees to purchase, not credibility with the investor community, but a license from a government. As a result, officially recognized rating agencies have a product to sell even when they fail to maintain credibility with the investor community,” said McGuire.
In his plea to stop the use of ratings in regulation, he described the regulatory use of ratings as “a chronic sickness … eroding the integrity and objectivity of the credit rating system ... like a cancer, it slowly and silently kills the natural defenses the rating agencies need to protect themselves against the economic leverage of issuers on their rating decisions.”
Notwithstanding McGuire’s analysis of the problem, Congress passed the Credit Rating Agency Reform Act of 2006 because “additional competition [in the rating agency business] is in the public interest” (Lucas 2008e). The legislation and subsequent SEC rulemaking opened Nationally Recognized Statistical Rating Organization (NRSRO) status to any firm with a three-year track record, 20 customers, and 10 letters of recommendation. Congress effectively forbade the SEC from applying rigorous requirements for NRSRO designation and now there are ten NRSROs.[5]
Market observers note that the additional competition has not worked as intended because the new rating agencies are competing against older agencies by offering higher ratings and, for structured securities, lower credit enhancement requirements. Of course, what can a new credit rating entrant, without name recognition or track record, offer issuers other than a higher rating? In an analysis of 30,000 ratings on $3 trillion of debt, the Wall Street Journal found that “The challengers tended to rate bonds higher than the major firms. Across most structured-finance segments, DBRS, Kroll, and Morningstar were more likely to give higher grades than Moody’s, S&P, and Fitch on the same bonds. Sometimes one firm called a security junk and another gave it a triple-A rating deeming it super-safe” (Podkul and Banerji 2019).
In 2008, New York Attorney General Andrew Cuomo believed he had stopped the practice of rating shopping (Lucas 2008e). His website described the problem with mortgage-backed security ratings and its solution: “The agencies were paid no fees during their initial reviews of the loan pools or during their discussions and negotiations with the investment banks about the structuring of the loan pools. Investment banks were thus able to get free previews of RMBS [residential mortgage-backed security] assessments from multiple credit rating agencies, enabling the investment banks to hire the agency that provided the best rating.” To end this practice, his website announced an agreement with Moody’s, S&P, and Fitch in return for forbearance of further investigation against them: “Credit rating agencies will now establish a fee-for-service structure, where they will be compensated regardless of whether the investment bank ultimately selects them to rate a RMBS.”
The idea was that by providing a conservative rating agency with a pre-rating analysis fee, that rating agency would be less dependent on revenue from being chosen to rate the debt publicly. In theory, the income from these analysis fees would allow a conservative rating agency to stick to its rating opinions without suffering economically. But it was never clear why issuers and arrangers would commission pre-rating analyses from rating agencies they knew to have conservative rating standards. They wouldn’t and they don’t.
In 2010, the SEC amended Rule 17g-5 to specify that structured finance transaction data be shared with rating agencies not asked to rate the transaction’s debt. The idea was to promote rating discipline by enabling un-hired agencies to issue unsolicited, and presumably lower, ratings. But it has most likely resulted in zero unsolicited ratings (Podkul 2019a). Rating agencies have no incentive to perform the work required to issue and monitor ratings for free.
From 1933 to 2010, regulators misunderstood how their regulations incent rating agencies and the entities that buy and use ratings. Regulations have been ineffectual or counter-productive in improving rating quality. Except by accident. The 2010 Dodd-Frank Act, in response to the 2007-08 financial crisis, made many changes to US financial regulation. Among them was eliminating the use of credit ratings in federal financial regulation. Likewise, the National Association of Insurance Commissioners has taken credit ratings out of insurance company capital calculations for mortgage-backed securities. The NAIC is proposing the same for asset-backed securities, and may also limit the use of ratings for corporate debt.
These provisions weren’t made to protect the quality of credit ratings, as McGuire advocated, but to preserve the credit quality of US financial institutions. Nonetheless, eliminating credit ratings from regulation makes it possible to improve rating quality.
Our Suggestion: Balance Issuer-Paid Rating Revenue and Investor-Paid Research Revenue
The idea behind getting rid of issuer pay and eliminating ratings in regulation is to better align rating agencies with their original purpose of helping investors make investment decisions. The healthy situation that McGuire envisioned was that rating agencies rely on and therefore protect their credibility with investors. “As long as the product … being sold to issuers [is] credibility with investors, there [is] a natural force, the need to retain investor confidence, to countervail the pressure of rating fees.” Our suggestion is to demand that rating agencies become more credible to investors. They will have to improve their credit analysis and rating accuracy to do so.
We would require that a rating agency’s issuer-pay ratings revenue be no greater than some percentage of its investor-pay research revenue, say 100%. In this case, the rating agency must get at least half its revenue from investors for the rating agency’s credit research. Currently, issuer rating revenue is around 20 times investor research revenue, so rating agencies would need to massively increase their investor research revenue to retain their existing issuer ratings revenue. To do so, rating agencies will have to cater to investors and improve their credit analyses and ratings. Rating agencies would be forced to get better.
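As a back-of-the-envelope illustration of the proposed constraint (the function and dollar figures are ours, hypothetical, and consistent only with the rough 20:1 ratio cited above):

def complies(issuer_revenue, investor_revenue, cap_pct=100.0):
    """True if issuer-paid ratings revenue is no greater than cap_pct percent
    of investor-paid research revenue (the rule proposed above)."""
    return issuer_revenue <= cap_pct / 100.0 * investor_revenue

# Hypothetical figures in $ billions, reflecting today's rough 20:1 ratio:
print(complies(2.0, 0.1))  # False: issuer revenue is 20x investor revenue
# With a 100% cap, keeping $2.0 billion of issuer revenue requires growing
# investor research revenue to at least $2.0 billion, half of total revenue:
print(complies(2.0, 2.0))  # True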
Even without a requirement to do so, we think rating agencies should want to gain investor revenue. According to one estimate, there are 74,000 credit analysts in the United States making an aggregate $5 billion a year (Sokanu 2020). These numbers show that rating agencies are falling short of meeting the demand for credit analysis and have a significant revenue opportunity. Rating agencies should exploit their economies of scale, produce superior credit analysis, and fill the gap between the credit analysis they currently provide and the credit analysis the market demands and for which it already pays.
If their ratings were accurate, the rating agencies could produce well-appreciated research explaining their rating rationales. Agencies could delve into each factor contributing to an issuer’s credit quality such as financial statement strength, industry characteristics, market and competitive positions, regulatory risks and opportunities, risks from the general economy, and management quality. Insightful explanations of factors affecting credit quality are more useful than a summary rating symbol. An investor can disagree with an analyst’s conclusion, but still find value in the analyst’s opinion if credit factors are individually addressed.
Right now, rating agency issuer reports are often laudatory and seem written for the issuer’s CFO rather than for investors. In fact, these reports are often edited by the issuer’s CFO, but even before that review, analysts are reluctant to boldly state unflattering facts or opinions about an issuer or its management. Meanwhile, if a rating agency industry report does highlight industry-wide risks, the reports often do not describe how particular issuers in the industry are affected. Both these traditional rating agency reports could be greatly improved if analysts wrote them with investors in mind.
A new research series that would be a hit with investors is one explaining discrepancies between ratings and credit spreads, such as when a higher-rated credit has a wider credit spread than a lower-rated credit. Investors love to know about price dislocations. And if an analyst can’t convincingly explain why his ratings are accurate even though the market disagrees, it’s time for a rating committee on the issuer!
Another research hit would be a credit news publication that covers how current events affect the credit quality of issuers or industries, covering all important credit news, not just the news analysts feel like discussing. Right now, it’s evident that ratings analysts prefer to write about good credit news rather than bad.
An investor focus would have encouraged S&P to downgrade subprime-related credits more rapidly 2007-09. An investor focus also would have prevented S&P from rating those credits so high to begin with. But how many investors do rating analysts talk to or even know? The issuer-focused product they produce puts them in contact with many more CFOs and debt arrangers than investors. An investor focus would also have prompted research on topics S&P ignored 2007-08. One was whether there were any good values among distressed subprime debts. Another was which subprime CDOs had documentation flaws that made the proper cash distributions to tranches ambiguous. There is a complementary aspect between issuer ratings revenue and investor research revenue: as new subprime rating fees declined because issuance dried up, investors’ need for credit research increased.
Implementing this rule would require care to prevent gaming. To make sure that ratings are the work of the best and brightest, the same analysts who produce credit research must also produce ratings.[6] The treatment of investor research revenue from entities that are also issuers must be determined. Fees for database distribution of ratings should not be counted as research revenue.
Two other changes would help rating agencies improve. One would be to abolish long-term rating outlooks. These hints at the direction a rating is heading slow down the process of making needed rating changes. Analysts feel they must put a credit on long-term outlook before they can change the actual rating.[7] Another beneficial change would be to abandon sovereign ratings. The regulations that some sovereigns have introduced to protect their own ratings have harmed rating agencies’ effectiveness in rating issuers in the sovereign’s jurisdiction. Besides demanding more of existing ratings analysts, these changes would make rating agencies the premier credit analysis entities and exciting places to work. S&P would attract the best credit analysts.
CONCLUSION
Critics of credit rating agencies’ issuer-pay revenue practice make the erroneous assumption that rating accuracy would be improved if the agencies’ issuer-pay revenue practice was banned. But S&P inflated 2007-08 subprime ratings in the absence of new issuance and issuer-pay incentives. There is not a reservoir of good credit analysis at S&P ready to bubble to the surface if only commercial incentives for inflated credit ratings were eliminated. Decades of the issuer-pay revenue practice and ratings in regulation have destroyed the ability of S&P to produce accurate ratings.
If regulators do ban issuer pay and institute some alternative revenue scheme, it would only add to the long history of flawed rating agency regulation that ignored rating agency, arranger, and investor incentives. The regulatory use of ratings prompts investors to desire high ratings, dis-incenting them from policing against inflated ratings. Allowing more entities NRSRO status has increased the competition among rating agencies to supply high ratings. New York State’s pre-rating analysis fees failed to reward conservative rating agencies and the sharing of structured finance rating data has resulted in few, and probably zero, unsolicited ratings.
But the Dodd-Frank Act’s and the National Association of Insurance Commissioners’ exclusion of ratings from financial regulation make it possible for S&P to improve. To take advantage of this opportunity, we would require rating agencies to earn significant revenue from investors for credit research. To do so, rating agencies will have to cater to investors and improve their credit analyses and ratings. Rating agencies would be forced to get better.
ABOUT THE AUTHOR
In Moody’s Investors Service’s structured finance group in the late 1980s, Doug produced the first rating agency default and rating transition studies, quantified the expected-loss rating approach, and created the agency’s approach to rating CLOs (inventing the WARF and diversity score portfolio metrics) and triple-A structured derivative dealers. In Moody’s financial institutions group in the early 1990s, he developed rating approaches for debt funds and asset managers, assessed derivative instrument risk at US financial institutions, and rated US security firms. From 1994 to 2000, he was co-CEO of Salomon Swapco, the most successful structured derivative dealer. As a sell-side analyst at JPMorgan and UBS 2000-08, he was voted #1 for CDO research in Institutional Investor’s poll. Back at Moody’s 2008-18, he managed the agency’s most-read credit research publication, Moody’s Credit Outlook, responsible for 18% of Moody’s total research readership.
In retirement, Doug publishes the stories of finance professionals at Stories.Finance, fund-raises to stage George Balanchine’s seldom-performed ballets at BalanchinePatrons.org, and undertakes consulting engagements. Some of his articles on CLOs, CDOs, other structured finance products, default correlation, credit analysis, rating agencies, and George Balanchine can be found at Academia.edu and SSRN.
APPENDIX I
THE BEST RATING AGENCY DURING THE SUBPRIME MELTDOWN
A draft of this paper elicited questions about how bad S&P’s subprime ratings 2007-09 were relative to other credit rating agencies. In this appendix, we compare S&P’s ratings to those of the most accurate credit rating agency for subprime-related ratings 2007-09. But the most accurate rating agency’s ratings were still too high.
Ironically, the credit rating agency that downgraded subprime bonds and subprime-backed CDOs the most 2007-08, and was thus the most accurate, was Fitch Ratings. We say “ironically” because no other rating agency has done more to promote rating shopping and inflated ratings. In 1989, a recapitalized Fitch wanted to increase its structured products rating business. Having poor name recognition and reputation, the only relevant things Fitch could offer issuers were higher ratings and lower credit enhancement requirements. There was no other reason for an issuer to buy a Fitch rating.
But the structured products market was ideal for a rating agency willing to inflate ratings to gain business. Instead of needing to market to hundreds of corporate CFOs, Fitch only had to engage with the handful of arrangers who structured, underwrote, and bought the credit ratings for mortgage-backed and asset-backed securities.
Fitch would undercut other rating agencies’ credit enhancement requirements and arrangers would try to get S&P or Moody’s to match Fitch’s standards. If there was a discrepancy between S&P’s and Moody’s requirements, Fitch would match the higher rating of the two major agencies. If a single rating from Fitch was acceptable to sell a security, Fitch would rate the issue still higher. The strategy was simple, easy to execute, and might have been frustrated if S&P and Moody’s had aligned their ratings and credit enhancement requirements, leaving Fitch with no discrepancy to exploit. But the big legacy rating agencies never organized themselves to combat Fitch (Rubinstein 1997 and Adelson and Bartlett 2004, 10-14).
Fitch was accommodating, but its analysts weren’t stupid. While Fitch’s original-issue ratings were as high or higher than S&P’s, the comparison below shows that Fitch lowered ratings sooner and further than S&P. Also, in contrast to S&P, Fitch fired most of its structured products rating managers. S&P merely shuffled its structured products rating managers around to new positions within the firm. Wags at S&P called this “the witness protection program” or, even more sardonically, “the witness prevention program.” While S&P settled the Department of Justice’s structured-products suit for $1.5 billion in 2015, the DOJ never went after Fitch.
December 2007 Subprime CDO Ratings
On 7 December 2007, among the 815 subprime CDO tranches that S&P and Fitch rated in common, Fitch had downgraded the tranches an average of 8.4 rating notches while S&P had downgraded them only 1.9 rating notches. Among tranches that both agencies had initially rated AAA, Fitch had downgraded 23 to CCC or below while S&P had downgraded only four to those ratings (Lucas 2007c).
January 2008 Ambac Ratings
Fitch downgraded Ambac Assurance Corporation to AA and put it on watch for further downgrade on 18 January 2008. In response, Ambac ceased providing information to Fitch and Fitch had to withdraw its rating. S&P downgraded Ambac to AA five months later on 5 June 2008.
April 2008 Subprime Bond Ratings
In April 2008, UBS subprime analysts predicted that 292 of the 400[8] subprime bonds composing the four rolls of the ABX index would be at least partially written down, with some loss of principal (Lucas 2008c). In fact, more bonds eventually defaulted.
S&P and Fitch both rated 134 of the 292 bonds UBS said were going to default. Exhibit 10 shows the then-current S&P and Fitch ratings of these subprime bonds. For example, the second row, second column of the exhibit shows that S&P rated nine of these bonds AAA. The same row, fourth column, shows that this was 7% of the bonds UBS thought would default. The same row, third column, shows that Fitch rated no bonds UBS thought would default AAA. The exhibit also shows that S&P rated 21, or 16%, of the bonds UBS said would default AA- or higher while Fitch gave only three bonds, or 3%, such high ratings.
At the other end of the rating spectrum, Fitch rated 51, or 38%, of the bonds UBS said would default CC or C. These CC and C ratings were the only appropriate ratings for bonds destined to default. Meanwhile, S&P only rated 20, or 15%, of those bonds CC.
July 2008 Subprime Bond Ratings
In July 2008, UBS subprime analysts predicted that 257 of the 480 bonds composing the four rolls of the ABX index would be completely written down, with total loss of principal (Lucas 2008d). In fact, all 257 bonds were completely written down and higher-rated tranches above them in the deals’ capital structure also defaulted.
S&P and Fitch both rated 121 of the 257 bonds UBS said were going to be completely written down. Exhibit 11 shows the then-current S&P and Fitch ratings of these subprime bonds. For example, the second row, second column of the exhibit shows that S&P rated two of these bonds AA. The same row, fourth column, shows that this was 2% of the bonds UBS thought would be completely written down. The same row shows that Fitch rated no bonds UBS thought would be completely written down AA. Similarly, S&P rated 14% of the bonds UBS thought would be completely written down investment grade (BBB- or higher). Fitch rated 3% that high.
At the other end of the rating spectrum, Fitch rated 59, or 49%, of the bonds UBS said would default CC or C. These CC and C ratings were the only appropriate ratings for bonds destined to default. Meanwhile, S&P only rated 23, or 19%, of those bonds CC.
APPENDIX II
INCORRECTLY CALCULATING LEVERAGE RATIOS
A more recent example of the deterioration of S&P’s credit skills is its calculation of leverage ratios. Leverage ratios are financial-statement-derived credit statistics that every corporate credit analyst knows and uses. Even KMV, which assessed default probability using the Merton model and said the equity market was a complete source of default information, used leverage ratios in its methodology. The leverage ratio that credit rating agencies usually use is some measure of debt divided by some measure of earnings, most often:
Total Debt/EBITDA
But any credit analyst worthy of the title thinks about how this ratio should be applied to a particular company to create a forward-looking indication of credit risk. For example, what if the entity holds a large cash reserve it will soon use to pay down a significant amount of its debt? Certainly, not deducting the amount from total debt in the ratio would overstate the entity’s credit weakness and a measure so calculated would not be a forward-looking indication of credit risk.
Most leverage ratio adjustments are required because, while the numerator of the ratio is a balance-sheet point-in-time statistic, its denominator is an income-statement flow-over-time statistic looking back over the previous 12 months. In our career, we saw poor credit analysts include EBITDA from operating units a company had closed or sold over the previous 12 months. Including EBITDA from a source that is no longer available erroneously lowers the leverage ratio and a ratio so calculated is not a forward-looking indication of credit risk.
We also saw poor credit analysts do essentially the opposite. If a company had purchased a new operating unit within the last 12 months, they put just that partial year’s EBITDA into the leverage ratio, without annualizing it. That underweights an EBITDA source that is available going forward and erroneously increases the leverage ratio. A ratio so calculated is not a forward-looking indication of credit risk.
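A minimal sketch of the adjustments just described, applied to an invented company (the inputs and logic are ours for illustration, not any rating agency’s actual methodology):

def adjusted_leverage_ratio(total_debt, cash_for_paydown, ltm_ebitda,
                            divested_ltm_ebitda, acquired_ebitda, months_owned):
    """Forward-looking Total Debt / EBITDA with the three adjustments
    described in the text (illustrative only)."""
    # Net out cash the company will imminently use to repay debt.
    debt = total_debt - cash_for_paydown
    # Remove trailing-twelve-month EBITDA from units sold or closed.
    ebitda = ltm_ebitda - divested_ltm_ebitda
    # Annualize EBITDA from a unit owned for only part of the year.
    if 0 < months_owned < 12:
        ebitda += acquired_ebitda * (12.0 / months_owned - 1.0)
    return debt / ebitda

# Naive ratio: 1000 / 200 = 5.0x. Adjusted: (1000 - 100) / (200 - 30 + 20) = 4.7x.
print(adjusted_leverage_ratio(1000, 100, 200, 30, 10, 4))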
It was shocking to see Wall Street Journal reporters catch S&P rating analysts making such fundamental mistakes in calculating leverage ratios (Banerji and Podkul 2019). Cynics among readers might guess that the mistakes S&P analysts made lowered leverage ratios and made companies look like better credits than they were. Such cynics would be correct. S&P included earnings from operations the company had sold in the leverage ratio!
That the mistakes happened in the US corporate sector is ominous. US corporates is the sector and geography where S&P faces the least competitive pressure to inflate ratings, because many investors in that market want an S&P rating on the debt they buy. If mistakes like this are made in the sector and geography where good credit analysis has the best chance of existing, it’s unlikely that better credit analysis exists in sectors and geographies where there is much more competitive pressure for inflated ratings.
REFERENCES
Adelson, Mark, and Elizabeth Bartlett. 2004. ABS Credit Migrations 2004. New York: Nomura Fixed Income Research, 7 December 2004.
Banerji, Gunjan, and Cezary Podkul. 2019. “Bond Ratings Firms Go Easy on Some Heavily Indebted Companies.” Wall Street Journal, 20 October 2019. https://www.wsj.com/articles/bond-ratings-firms-go-easy-on-some-heavily-indebted-companies-11571563801?mod=article_inline
Flanagan, Chris, et al. 2004-09. Global ABS/CDO Weekly Market Snapshot. New York: J.P. Morgan Securities Inc., 2004-09.
Flanagan, Chris. 2007a. “Asset-Backed Securities.” US Fixed Income Markets Weekly. New York: J.P. Morgan Securities Inc., 23 February 2007.
Flanagan, Chris. 2007b. “Asset-Backed Securities.” US Fixed Income Markets Weekly. New York: J.P. Morgan Securities Inc., 2 March 2007.
Grassley, Charles E., John Kennedy, Roger F. Wicker, and Sheldon Whitehouse. 2020. “Letter to SEC.” 3 February 2020. https://s.wsj.net/public/resources/documents/Senate%20SEC%20letter%202-3-2020.pdf?mod=article_inline
Hamilton, David T. and Richard Cantor. 2005. Rating Transitions and Defaults Conditional on Rating Outlooks Revisited: 1995-2005. New York: Moody’s Investors Service, December 2005.
Kotecha, Mahesh, Sharon Ryan, Roy Weinberger, and Michael DiGiacomo. 2012. “Proposed Reform of the Rating Agency Compensation Model.” The Journal of Structured Finance 18 (1): 71-75.
Lucas, Douglas J. 2006. “Using Rating Watches and Outlooks to Improve the Default Prediction Power of Ratings.” CDO Insight. New York: UBS, 31 January 2006.
Lucas, Douglas J. 2007a. “A Break in the Clouds?” CDO Insight. New York: UBS, 3 October 2007.
Lucas, Douglas J. 2007b. “Monoline CDO Losses.” CDO Insight. New York: UBS, 13 December 2007.
Lucas, Douglas J. 2007c. “Market Commentary.” CDO Insight. New York: UBS, 13 December 2007.
Lucas, Douglas J. 2008a. “How to Save the Rating Agencies.” CDO Insight. New York: UBS, 11 March 2008.
Lucas, Douglas J. 2008b. “How Bad Are ABS CDOs? How Bad Can CLOs Get?” UBS CDO Conference Handout. New York: UBS, 24 March 2008.
Lucas, Douglas J. 2008c. “Rating the Rating Agencies on Subprime.” Mortgage Strategist. New York: UBS, 1 April 2008.
Lucas, Douglas J. 2008d. “Rating Agency Optimism and the ABX.” Mortgage Strategist. New York: UBS, 15 July 2008.
Lucas, Douglas J. 2008e. “Wishful Thinking on Securitized Product Ratings.” CDO Insight. New York: UBS, 26 August 2008.
Lucas, Douglas J., Shumin Li, Rei Shinozuka and Charles Mladinich. 2007. “ABS CDO Collateral Losses Version 3.0.” CDO Insight. New York: UBS, 3 October 2007.
McGuire, Thomas J. 1995. Ratings in Regulation: A Petition to the Gorillas. New York: Moody’s Investors Service, June 1995.
Podkul, Cezary. 2019a. “SEC Fix for Conflicts of Interest at Credit-Ratings Firms Has Failed: Postcrisis plan enabled unsolicited ratings, meant to limit bond issuers’ ability to exert influence over ratings firms. Few, if any, such ratings have been published.” Wall Street Journal, 29 October 2019. https://www.wsj.com/articles/sec-fix-for-conflicts-of-interest-at-credit-ratings-firms-has-failed-11572341401?mod=article_inline
Podkul, Cezary. 2019b. “SEC Urged to End Ratings Firms’ Conflicted Business Model: An advisory panel is considering whether to recommend a new dynamic.” Wall Street Journal, 4 November 2019. https://www.wsj.com/articles/sec-urged-to-end-ratings-firms-conflicted-business-model-11572911797
Podkul, Cezary and Gunjan Banerjo. 2019. “Inflated Bond Ratings Helped Spur the Financial Crisis. They’re Back: Credit-grading firms are giving out increasingly optimistic appraisals as they fight for market share in booming debt-securities markets.” Wall Street Journal, 7 August 2019. https://www.wsj.com/articles/inflated-bond-ratings-helped-spur-the-financial-crisis-theyre-back-11565194951
Podkul, Cezary. 2020a. “Lawmakers Push for Changes in Credit-Ratings Industry: Bipartisan effort focuses on longstanding conflict of interest in industry and could lead to revamp of bond-ratings firms’ business models.” Wall Street Journal, 5 February 2020. https://www.wsj.com/articles/lawmakers-push-for-changes-in-credit-ratings-industry-11580908541?mod=searchresults&page=1&pos=1
Podkul, Cezary. 2020b. “SEC Rethinks Approach to Conflicts Among Bond-Rating Firms: Agency is seeking industry input on how to combat rating inflation as 2010 fix falters.” Wall Street Journal, 24 February 2020. https://www.wsj.com/articles/sec-rethinks-approach-to-conflicts-among-bond-rating-firms-11582589644?mod=searchresults&page=2&pos=2
Rubinstein, Peter. 1997. “The Changing Whole Loan Landscape.” Mortgage Strategist. New York: UBS, 3 June 1997.
Settlement Agreement. “Statement of Facts” beginning page 46, 2 February 2015. The case is United States v. McGraw-Hill Companies, Inc., and Standard & Poor’s Financial Services LLC, No. CV 13-00779-DOC, filed in US District Court for the Central District of California on 4 February 2013. https://www.sec.gov/Archives/edgar/data/64040/000006404015000004/mhfi-ex1034x20141231xq4.htm
Sharif, Dipa. 2007. “Subprime Market Events.” Mortgage Strategist. New York: UBS, 25 September 2007.
Sokanu. 2020. Sokanu is “the Internet’s largest career advancement platform”; the cited statistics are from its page on credit analyst jobs. https://www.careerexplorer.com/careers/credit-analyst/job-market/
US Department of Justice. 2013. “Department of Justice Sues Standard & Poor’s for Fraud in Rating Mortgage-Backed Securities in the Years Leading Up to the Financial Crisis: Complaint Alleges that S&P Lied About Its Objectivity and Independence and Issued Inflated Ratings for Certain Structured Debt Securities.” Press release 5 February 2013. https://www.justice.gov/opa/pr/department-justice-sues-standard-poor-s-fraud-rating-mortgage-backed-securities-years-leading
US Senate Permanent Subcommittee on Investigations. 2010. “Exhibits: Hearing on Wall Street and the Financial Crisis: The Role of Credit Rating Agencies.” Washington DC: 23 April 2010, page 149.
Warren, Elizabeth. 2019. “Letter to the SEC.” Washington DC: 26 September 2019. https://www.warren.senate.gov/imo/media/doc/2019.09.%2026%20Letter%20to%20SEC%20re%20inflated%20bond%20ratings.pdf
Zimmerman, Tom et al. 2007. “A Simple ABX Loss Projection Model.” Mortgage Strategist. New York: UBS, 27 February 2007.
Zimmerman, Tom et al. 2008. “ABX Implied Writedowns Similar to Shutdown Scenario.” Mortgage Strategist. New York: UBS, 25 September 2008.
[1] Historically, subprime mortgage loans were residential mortgage loans made to individuals who, because of their credit histories, were deemed not to be in the best or “prime” category of credit risk. By 2005, the term “subprime mortgage” was also used to describe “affordability” mortgage products not necessarily made to borrowers with poor credit histories. High loan-to-value loans and loans made simultaneously with a second mortgage lowered required down payments. Loans with introductory teaser interest rates lowered monthly payments, at least until the mortgage rate reset at a higher rate. Finally, “subprime” was used to describe loans made without underwriting rigor, for example “low doc” and “no doc” loans made with little or no verification of a borrower’s loan application. We refer to securitizations of subprime mortgage loans as “subprime bonds,” the terminology of the 2000s. But instead of referring to securitizations of subprime mortgage bonds as “asset-backed security collateralized debt obligations” or “ABS CDOs,” the misleading terms used in the 2000s, we use the accurate description “subprime CDOs.”
[2] The rule for a bond to qualify for an ABX rating class was that it had to be rated at least that level by both S&P and Moody’s. Thus, when Moody’s took a more conservative rating approach to subprime bonds, some bonds in the BBB-, BBB, A, and AA ABX indices had higher ratings from S&P.
[3] There is certainly room to quibble with this simple, hopefully intuitive, explanation of ABX pricing, which among other things ignores present value. But we don’t have to argue that upfront payments exactly equal the market consensus of future losses to show that market prices were incompatible with S&P’s subprime ratings. We need only point out the massive difference in perceived credit quality when an ABX index clears with, say, a 50% upfront payment on securities that S&P rates investment grade.
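A worked toy example of that arithmetic, with hypothetical round numbers and, as in the footnote above, ignoring present value and the running premium:

```python
# Hypothetical back-of-the-envelope reading of an ABX upfront payment.
# Ignores present value and running premium, as the footnote above does.

face = 100.0     # notional face amount of protection on subprime bonds
upfront = 50.0   # upfront payment the protection seller demands

# If the seller charges upfront roughly what it expects to pay out in
# principal writedowns, the market-implied loss is about the upfront itself.
implied_loss = upfront / face
print(f"market-implied principal loss: ~{implied_loss:.0%}")  # ~50%

# Cumulative losses on investment-grade bonds have historically run in the
# low single digits, so a ~50% implied loss cannot be reconciled with
# investment-grade ratings.
```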
[4] The penultimate-pay AAA index had not yet been launched, so there were only 400 underlying bonds in all the ABX indices.
[5] Most US financial regulators never imposed any eligibility requirement upon the rating agencies whose ratings they used in their regulations other than that the agencies be SEC-designated NRSROs. If regulators had monitored rating agency quality and disapproved lax agencies, regulators might have prevented inflated ratings.
[6] For example, Fitch bought CreditSights, a respected provider of credit research, in 2021, but Fitch keeps it as a separate entity and its analysts have no input into Fitch’s ratings. CreditSights’ revenue therefore does not indicate investor appreciation of Fitch’s credit ratings.
[7] They also confuse. Which is a better credit: a higher rated issuer with a negative outlook or a lower rated issuer with a positive outlook? Often, the lower-rated credit with the better outlook (Hamilton and Cantor 2005 and Lucas 2006).