According to Mr. Raiter, even though S&P revenues had increased dramatically and were
generated in large part by the RMBS Group, senior management had no interest in committing
the resources needed to update the RMBS model with improved criteria from the new loan data.
Mr. Raiter said that S&P did not spend sufficient money on better analytics, because S&P
already dominated the RMBS ratings market: “[T]he RMBS group enjoyed the largest ratings
market share among the three major rating agencies (often 92% or better), and improving the
model would not add to S&P’s revenues.”1136
Poor Correlation Risk Assumptions. In addition to using inadequate loan performance
data, the S&P and Moody’s credit rating models also incorporated inadequate assumptions about
the correlative risk of mortgage backed securities. Correlative risk measures the likelihood of
multiple negative events happening simultaneously, such as the likelihood of RMBS assets
defaulting together. It examines, for example, the likelihood that the mortgages on two houses in the same neighborhood will default together, compared to the mortgages on two houses in different states. If the neighborhood mortgages are more likely to default together, they carry a higher correlation risk than the mortgages in different states.
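The effect of that difference can be made concrete with a short simulation. The sketch below is purely illustrative: the 10% default probability and the two correlation levels are hypothetical figures, not parameters from any rating agency model, and the one-factor Gaussian copula used here is simply a standard textbook way to model default correlation.

```python
import numpy as np
from statistics import NormalDist

def joint_default_rate(p_default, correlation, n_trials=200_000, seed=0):
    """Estimate the probability that two mortgages default together.

    Each mortgage defaults when its latent Gaussian variable falls below the
    threshold implied by its standalone default probability; the two latent
    variables share a common factor (e.g., the local housing market)."""
    rng = np.random.default_rng(seed)
    threshold = NormalDist().inv_cdf(p_default)
    common = rng.standard_normal(n_trials)
    z1 = np.sqrt(correlation) * common + np.sqrt(1 - correlation) * rng.standard_normal(n_trials)
    z2 = np.sqrt(correlation) * common + np.sqrt(1 - correlation) * rng.standard_normal(n_trials)
    return float(np.mean((z1 < threshold) & (z2 < threshold)))

# Hypothetical 10% standalone default probability for each mortgage:
print("same neighborhood (corr 0.80):", joint_default_rate(0.10, 0.80))
print("different states  (corr 0.05):", joint_default_rate(0.10, 0.05))
# The first joint default probability is several times larger than the second.
```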
The former head of S&P’s Global CDO Group, Richard Gugliada, told the Subcommittee
that the inaccurate RMBS and CDO ratings issued by his company were due, in part, to wrong
assumptions in the S&P models about correlative risk.1137 Mr. Gugliada explained that, because
CDOs held fewer assets than RMBS, statistical analysis was less helpful, and the modeling
instead required use of significant performance assumptions, including on correlative risk. He
explained that the primary S&P CDO model, the “CDO Evaluator,” ran 1,000 simulations to
determine how a pool would perform. These simulations ran on a set of assumptions that took
the place of historical performance data, according to Mr. Gugliada, and included assumptions
on the probability of default and the correlation between assets if one or more assets began to
default. He said that S&P believed that RMBS assets were more likely to default together than,
for example, corporate bonds held in a CDO. He said that S&P had set the probability of
corporate correlated defaults at 30 out of 100, and set the probability of RMBS correlated
defaults at 40 out of 100. He said that the financial crisis has now shown that the RMBS
correlative assumptions were far too low and should have been set closer to 80 or 90 out of
100.1138
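Mr. Gugliada’s description corresponds to a standard Monte Carlo exercise. The sketch below is an illustrative stand-in, not the proprietary CDO Evaluator: it assumes a homogeneous pool, a one-factor Gaussian copula, and hypothetical default probabilities, but it shows why the correlation setting mattered so much. Moving the correlation from 0.4 to 0.85 dramatically fattens the tail of the loss distribution that a AAA tranche has to survive.

```python
import numpy as np
from statistics import NormalDist

def pool_default_rates(n_assets, p_default, correlation, n_sims=1000, seed=1):
    """Simulate a homogeneous asset pool under a one-factor Gaussian copula
    and return the fraction of assets defaulting in each of n_sims trials.
    (Illustrative stand-in for a CDO Evaluator-style simulation; the real
    model used asset-level probabilities and a full correlation matrix.)"""
    rng = np.random.default_rng(seed)
    threshold = NormalDist().inv_cdf(p_default)
    common = rng.standard_normal((n_sims, 1))       # one systematic factor per trial
    idio = rng.standard_normal((n_sims, n_assets))  # asset-specific shocks
    latent = np.sqrt(correlation) * common + np.sqrt(1 - correlation) * idio
    return (latent < threshold).mean(axis=1)        # per-trial pool default rate

# Hypothetical 10% standalone default probability for every asset:
for rho in (0.40, 0.85):  # S&P's RMBS assumption vs. the hindsight figure Gugliada cited
    rates = pool_default_rates(n_assets=100, p_default=0.10, correlation=rho)
    print(f"correlation {rho}: 99th-percentile pool default rate = {np.quantile(rates, 0.99):.0%}")
```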
On one occasion in 2006, an outside party also highlighted a problem with the S&P
model’s consideration of correlative risk. On March 20, 2006, a senior managing director at
Aladdin Capital Management, LLC sent an email to S&P expressing concern about a later
version of its CDO model, Evaluator 3:
“Thanks for a terrific presentation at the UBS conference. I mentioned to you a possible
error in the new Evaluator 3.0 assumptions:
Two companies in the same Region belonging to two different local Sectors are assumed
to be correlated (by 5%), while if they belong to the same local Sector then they are
uncorrelated.
I think you probably didn’t mean that.”1139
1136 Id. at 6.
1137 Subcommittee interview of Richard Gugliada, Former Head of S&P’s CDO Ratings Group (10/9/2009).
1138 Id.
Apparently, this problem with the model had already been identified within S&P. Two S&P
employees discussed the problem on the same email, with one saying:
“I have already brought this issue up and it was decided that it would be changed in the
future, the next time we update the criteria. … [T]he correlation matrix is inconsistent.”
Despite this clear problem, which understated the correlative risk of assets in the same region and sector, S&P did not immediately take the steps needed to repair its CDO model.
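The flaw described in the email amounts to a failed sanity check: a pair of assets that share more characteristics should never be assumed less correlated than a pair that shares fewer. The fragment below is an illustrative reconstruction of the rule as the email describes it, not S&P’s actual rule table:

```python
def assumed_correlation(same_region, same_sector):
    """Pairwise asset correlation as described in the Aladdin email
    (an illustrative reconstruction, not the real Evaluator 3.0 table)."""
    if same_region and not same_sector:
        return 0.05   # same region, different local sectors: 5%
    if same_region and same_sector:
        return 0.00   # same region, same local sector: treated as uncorrelated
    return 0.00       # different regions: uncorrelated (unrelated to the bug)

# Sanity check: sharing MORE characteristics should never imply LESS
# correlation. The assumption above fails, confirming the email's point.
assert assumed_correlation(True, True) >= assumed_correlation(True, False), \
    "inconsistent matrix: same-sector pairs less correlated than cross-sector pairs"
```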
At Moody’s, a former Managing Director of the CDO Group, Gary Witt, observed a
different set of correlation problems with Moody’s CDO model. Mr. Witt, who was responsible
for managing Moody’s CDO analysts as well as its CDO modeling, told the Subcommittee that
he had become uncomfortable with the lack of correlation built into the company’s
methodology.1140 According to Mr. Witt, Moody’s model, which then used the “Binomial Expansion Technique” (BET), addressed correlation through a diversity score, an approach that worked when CDOs held diverse assets such as credit cards or aircraft lease revenues in addition to RMBS securities. By 2004, however, Mr. Witt said, most CDOs contained primarily RMBS assets, lacked diversity, and made little use of the diversity score.
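The core of BET can be sketched in a few lines: the correlated pool is replaced by a number of independent, identical assets equal to the diversity score, so the pool loss follows a binomial distribution. The figures below are hypothetical, and the sketch omits the recovery and weighting machinery of Moody’s full model, but it illustrates Mr. Witt’s point: as diversity collapses, the loss tail fattens.

```python
from math import comb

def bet_tail_probability(diversity_score, p_default, loss_threshold):
    """Binomial Expansion Technique sketch: approximate a correlated pool by
    `diversity_score` independent, identical assets and return the probability
    that the pool default rate exceeds `loss_threshold`. (A simplified
    illustration of BET's central idea, not Moody's full model.)"""
    d, p = diversity_score, p_default
    return sum(comb(d, j) * p**j * (1 - p)**(d - j)
               for j in range(d + 1) if j / d > loss_threshold)

# Same hypothetical 10% default rate, very different tails:
print(bet_tail_probability(diversity_score=60, p_default=0.10, loss_threshold=0.25))  # diverse multi-sector pool: tiny tail
print(bet_tail_probability(diversity_score=6,  p_default=0.10, loss_threshold=0.25))  # low-diversity pool: large tail
```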
Mr. Witt told the Subcommittee that, from 2004 to 2005, he worked on modifying the
BET model to improve its consideration of correlation factors. According to Mr. Witt, modeling
changes like the one he worked on had to be done on an employee’s own time – late nights and
weekends – because there was no time during the work week due to the volume of deals. Indeed,
during his eighteen month tenure as a Managing Director in the CDO Group, Mr. Witt “spent a
huge amount of time working on methodology because the ABS CDO market especially was in
transition from multi-sector to single sector transactions [RMBS]” which he felt necessitated an
update of Moody’s model.1141 Mr. Witt indicated that, in June 2005, Moody’s CDO model was
changed to incorporate part of his suggested improvements, but did not go as far as he had
proposed. When asked about this 2005 decision, Mr. Witt indicated that he did not feel that
Moody’s was getting the ratings wrong for CDOs with RMBS assets, but he did “think that we
[Moody’s] were not allocating nearly enough resources to get the ratings right.”1142
1139 3/20/2006 email from Isaac Efrat (Aladdin Capital Management LLC) to David Tesher (S&P), Hearing Exhibit
4/23-26 [emphasis in original].
1140 Subcommittee interview of Gary Witt, Former Managing Director, Moody’s Investors Service (10/29/2009).
1141 6/2/2010 Statement of Gary Witt, Former Managing Director, Moody’s Investors Service, submitted by request
to the Financial Crisis Inquiry Commission, at 11.
1142 Id. at 21.
The lack of performance data for high risk residential mortgage products, the lack of
mortgage performance data in an era of stagnating or declining housing prices, the failure to
expend resources to improve their model analytics, and incorrect correlation assumptions meant
that the RMBS and CDO models used by Moody’s and S&P were out of date, technically
deficient, and could not provide accurate default and loss predictions to support the credit ratings
being issued. Yet Moody’s and S&P told the Subcommittee that their analysts relied heavily on the model outputs to project the default and loss rates for RMBS and CDO pools and to rate RMBS and CDO securities.
(b) Unclear and Subjective Ratings Process
Obtaining expected default and loss analysis from the Moody’s and S&P credit rating
models was only one aspect of the work performed by RMBS and CDO analysts. Equally
important was their effort to analyze a proposed transaction’s legal structure, cash flow,
allocation of revenues, the size and nature of its tranches, and its credit enhancements.
Analyzing each of these elements involved often complex judgments about how a transaction
would work and what impact various factors would have on credit risk. Although both Moody’s
and S&P published a number of criteria, methodologies, and guidance on how to handle a variety
of credit risk factors, the novelty and complexity of the RMBS and CDO transactions, the volume and speed of the ratings process, and inconsistent application of the various rules meant that CRA analysts continuously faced difficult questions about how to analyze a transaction and apply the company’s standards. Evidence obtained by the
Subcommittee indicates that, at times, ratings personnel acted with limited guidance, unclear
criteria, and a limited understanding of the complex deals they were asked to rate.
Many documents obtained by the Subcommittee disclosed confusion and a high level of
frustration from RMBS and CDO analysts about how to handle ratings issues and how the
ratings process actually worked. In May 2007, for example, one S&P employee wrote: “[N]o
body gives a straight answer about anything around here …. [H]ow about we come out with new
[criteria] or a new stress and ac[tu]ally have clear cut parameters on what the hell we are
supposed to do.”1143 Two years earlier, in May 2005, an S&P analyst complaining about a rating
decision wrote:
“Chui told me that while the three of us voted ‘no’, in writing, that there were 4 other
‘yes’ votes. … [T]his is a great example of how the criteria process is NOT supposed to
work. Being out-voted is one thing (and a good thing, in my view), but being out-voted
by mystery voters with no ‘logic trail’ to refer to is another. ... Again, this is exactly the
kind of backroom decision-making that leads to inconsistent criteria, confused analysts,
and pissed-off clients.”1144
1143 5/8/2007 instant message exchange between Shannon Mooney and Andrew Loken, Hearing Exhibit 4/23-30b.
1144 5/12/2005 email from Michael Drexler to Kenneth Cheng and others, Hearing Exhibit 4/23-10c. In a similar
email, S&P employees discuss questionable and inconsistent application of criteria. 8/7/2007 email from Andrew
Loken to Shannon Mooney, Hearing Exhibit 4/23-96a (“Back in May, the deal had 2 assets default, which caused it
to fail. We tried some things, and it never passed anything I ran. Next thing I know, I’m told that because it had gone effective already, it was surveillance’s responsibility, and I never heard about it again. Anyway, because of that, I never created a new monitor.”).
When asked by the SEC to compile a list of its rating criteria in 2007, S&P was unable to
identify all of its criteria for making rating decisions. The head of criteria for the structured
finance department, for example, who was tasked with gathering information for the SEC, wrote
in an email to colleagues:
“[O]ur published criteria as it currently stands is a bit too unwieldy and all over the map
in terms of being current or comprehensive. ... [O]ur SF [Structured Finance] rating
approach is inherently flexible and subjective, while much of our written criteria is
detailed and prescriptive. Doing a complete inventory of our criteria and documenting all
of the areas where it is out of date or inaccurate would appear to be a huge job ....”1145
The confused and subjective state of S&P criteria, including when the criteria had to be
applied, is also evident in a May 2007 email sent by an S&P senior director to colleagues
discussing whether to apply a default stress test to certain CDOs:
“[T]he cash-flow criteria from 2004 (see below), actually states [using a default stress test
when additional concerns about the CDO are raised] ... in the usual vague S&P’s way ....
Still, consistency is key for me and if we decide we do not need that, fine but I would
recommend we do something. Unless we have too many deals in [the] US where this
could hurt.”1146
Moody’s ratings criteria were equally subjective, changeable, and inconsistent. In an
October 2007 internal email, for example, Moody’s Chief Risk Officer wrote:
“Methodologies & criteria are published and thus put boundaries on rating committee
discretion. (However, there is usually plenty of latitude within those boundaries to
register market influence.)”1147
Another factor was that ratings analysts were also under constant pressure to quickly
analyze and rate complex RMBS and CDO transactions. To enable RMBS or CDO transactions
to meet projected closing dates, it was not uncommon, as shown above, for CRA analysts to
grant exceptions to established methodologies and criteria, put off analysis of complex issues to
later transactions, and create precedents that investment banks invoked in subsequent
1145 3/14/2007 email from Calvin Wong to Tom Gillis, Hearing Exhibit 4/23-29. See also 2008 SEC Examination
Report for Standard and Poor’s Ratings Services, Inc., PSI-SEC (S&P Exam Report)-14-0001-24, at 6-7 (“[C]ertain
significant aspects of the rating processes and the methodologies used to rate RMBS and CDOs were not always
disclosed, or were not fully disclosed …. [S]everal communications by S&P employees to outside parties related to
the application of unpublished criteria, such as ‘not all our criteria is published. [F]or example, we have no
published criteria on hybrid deals, which doesn’t mean that we have no criteria,’” citing an 8/2006 email from the
S&P Director of the Analytical Pool for the Global CDO Group.).
1146 5/24/2007 email from Lapo Guadagnuolo to Belinda Ghetti, and others, Hearing Exhibit 4/23-31.
1147 10/21/2007 Moody’s internal email, Hearing Exhibit 4/23-24b. Although this email is addressed to and from the
CEO, the Chief Credit Officer told the Subcommittee that he wrote the memorandum attached to the email.
Subcommittee interview of Andy Kimball (4/15/2010).
securitizations. CRA analysts were then compelled to decide whether to follow an earlier
exception, revert to the published methodology and criteria, or devise still another compromise.
The result was additional confusion over how to rate complex RMBS and CDO securities.
Publication of the CRAs’ ratings methodologies and criteria was also inconsistent.
According to an October 2006 email sent by an investment banker at Morgan Stanley to an
analyst at Moody’s, for example, key methodology changes had not been made public: “Our
problem here is that nobody has told us about the changes that we are later expected to adhere to.
Since there is no published criteria outlining the change in methodology how are we supposed to
find out about it?”1148 On another occasion, a Moody’s analyst sought guidance from senior
managers because of the lack of consistency in applying certain criteria. He wrote: “Over time,
different chairs have been giving different guidelines at different point[s] of time on how much
over-enhancement we need for a bond to be notched up to Aaa.”1149 In a November 2007 email,
another senior executive described the criteria problem this way: “It seems, though, that the
more of the ad hoc rules we add, the further away from the data and models we move and the
closer we move to building models that ape analysts expectations, no?”1150
The rating agency models were called by some the “black box,” because they were difficult to understand and not always predictable. Issuers and investors alike vented frustration that they had to base their decisions on a computer program that few understood or could replicate. A June 20, 2006 email recounts a conversation between two Moody’s employees about frustrations they had heard from an outside issuer:
“Managers are tired of large ‘grids.’ They would rather prefer a model based test like what S&P and Fitch do. Pascale disagrees with these managers. As a wrapper, she hates that the credit quality of what she wraps is linked to a black box. Also, she hates the fact that the black box can change from time to time.”1151
A January 2007 email from BlackRock to S&P (and other rating agencies) also complained
about the “black box” problem:
“What steps are you taking to better communicate and comfort investors about your
ratings process? In other w[o]rds, how do we break the ‘black box’ that determines
enhancement levels?”1152
1148 10/19/2006 email from Graham Jones (Morgan Stanley) to Yuri Yoshizawa (Moody’s) and others, Hearing
Exhibit 4/23-37. See 2008 SEC Examination Report for Moody’s Investor Services Inc., PSI-SEC (Moodys Exam
Report)-14-0001-16, at 5 (“[C]ertain significant aspects of the rating processes and the methodologies used to rate
RMBS and CDOs were not always disclosed, or were not fully disclosed .…”).
1149 6/28/2007 email from Yi Zhang to Warren Kornfeld and others, Hearing Exhibit 4/23-39.
1150 11/28/2007 email from Roger Stein to Andrew Kimball and Michael Kanef, Hearing Exhibit 4/23-44.
1151 6/20/2006 email from Paul Mazataud to Noel Kirnon, MIS-OCIE-RMBS-0035460 [emphasis in original].
1152 1/16/2007 email from Kishore Yalamanchili (BlackRock) to Scott Mason (S&P), Glenn Costello (Fitch
Ratings), and others, PSI-S&P-RFN-000044.
At times, some CRA analysts openly questioned their ability to rate some complex
securities. In a December 2006 email chain regarding a synthetic CDO squared, for example,
S&P analysts appeared challenged by a modeling problem and questioned their ability to rate the
product. One analyst wrote: “Rating agencies continue to create and [sic] even bigger monster -
- the CDO market. Let’s hope we are all wealthy and retired by the time this house of cards
falters.”1153 In an email written in a similar vein, an S&P manager preparing for a presentation
wrote to her colleagues: “Can anyone give me a crash course on the ‘hidden risks in CDO’s of
RMBS’?”1154 In an April 2007 instant message, an S&P analyst offered this cynical comment:
“[W]e rate every deal[.] [I]t could be structured by cows and we would rate it.”1155
(4) Failure to Retest After Model Changes
Another key factor that contributed to inaccurate credit ratings was the failure of
Moody’s and S&P to retest outstanding RMBS and CDO securities after improvements were
made to their credit rating models. These model improvements generally did not derive from data on new types of high risk mortgages, but were intended to improve the models’ predictive capability.1156 Even after the improvements were made, however, CRA analysts failed to use them to downgrade artificially high RMBS and CDO credit ratings.
Key model adjustments were made in 2006 to both the RMBS and CDO models to
improve their ability to predict expected default and loss rates for higher risk mortgages. Both
Moody’s and S&P decided to apply the revised models to rate new RMBS and CDO
transactions, but not to retest outstanding subprime RMBS and CDO securities, even though
many of those securities contained the same types of mortgages and risks that the models were
recalibrated to evaluate. Had they retested the existing RMBS and CDO securities and issued
appropriate rating downgrades starting in 2006, the CRAs could have signaled investors about
the increasing risk in the mortgage market, possibly dampened the rate of securitizations, and
possibly reduced the impact of the financial crisis.
Surveillance Obligations. Both Moody’s and S&P were obligated by contract to
conduct ongoing surveillance of the RMBS and CDO securities they rated to ensure the ratings
remained valid over the life of the rated securities. In fact, both companies charged annual
surveillance fees to the issuers of the securities to pay for the surveillance costs, and each had
established a separate division to carry out surveillance duties. Due to the huge numbers of
RMBS and CDO securities issued in the years leading up to the financial crisis, those
surveillance divisions were responsible for reviewing tens of thousands of securities. The issue
of whether to retest the outstanding securities using the revised credit rating models was, thus, a
significant issue affecting numerous securities and substantial company resources.
1153 12/15/2006 email from Chris Meyer to Belinda Ghetti and Nicole Billick, Hearing Exhibit 4/23-27.
1154 1/17/2007 email from Monica Perelmuter to Kyle Beauchamp, and others, Hearing Exhibit 4/23-28.
1155 4/5/2007 instant message exchange between Shannon Mooney and Rahul Dilip Shah, Hearing Exhibit 4/23-30a.
1156 These model improvements still significantly underestimated subprime risk, as evidenced by the sheer number of downgrades of securities issued in 2006 and 2007 that occurred after the model improvements.
Increased Loss Protection. In July 2006, S&P made significant adjustments to its
subprime RMBS model. S&P had determined that, to avoid an increasing risk of default,
subprime RMBS securities required additional credit enhancements that would provide 40%
more protection to keep the investment grade securities from experiencing losses.1157 Moody’s
made similar adjustments to its RMBS model around the same time, settling on parameters that
required 30% more loss protection. As Moody’s explained to the Senate Banking Committee in
September 2007:
“In response to the increase in the riskiness of loans made during the last few years and
the changing economic environment, Moody’s steadily increased its loss expectations
and subsequent levels of credit protection on pools of subprime loans. Our loss
expectations and enhancement levels rose by about 30% over the 2003 to 2006 time
period, and as a result, bonds issued in 2006 and rated by Moody’s had more credit
protection than bonds issued in earlier years.”1158
The determination that RMBS pools required 30-40% more credit enhancements to
protect higher rated tranches from loss reflected calculations by the updated CRA models that
these asset pools were exposed to significantly more risk of delinquencies and defaults.
Requiring increased loss protection meant that Moody’s and S&P analysts had to require more revenues to be set aside in each pool to give AAA rated securities greater protection than before the model adjustments. It also meant that each RMBS pool would support a smaller tranche of AAA securities to sell to investors, which in turn meant that RMBS pools would produce fewer profits for issuers and arrangers. Requiring increased loss protection had a similar impact on CDOs that included RMBS assets.
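The underlying arithmetic is simple. In the sketch below, the 20% base subordination level is a hypothetical figure chosen for illustration; only the 40% increase reflects the S&P adjustment described above.

```python
def aaa_tranche_size(pool_size, base_enhancement, enhancement_increase):
    """Size of the AAA tranche after raising required credit enhancement.
    Enhancement is modeled simply as subordination: the share of the pool
    that absorbs losses before the AAA tranche is touched. (Hypothetical
    setup; actual deals layered several types of enhancement.)"""
    enhancement = base_enhancement * (1 + enhancement_increase)
    return pool_size * (1 - enhancement)

pool = 1_000_000_000                          # a $1 billion subprime RMBS pool
before = aaa_tranche_size(pool, 0.20, 0.00)   # assumed 20% subordination pre-2006
after = aaa_tranche_size(pool, 0.20, 0.40)    # S&P's 40% increase -> 28% subordination
print(f"AAA before: ${before:,.0f}  after: ${after:,.0f}  lost: ${before - after:,.0f}")
# The sellable AAA tranche shrinks from $800 million to $720 million.
```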
Retesting RMBS Securities. Even though S&P and Moody’s had independently revised
their RMBS models and, by 2006, determined that additional credit enhancements of 30-40%
were needed to protect investment grade tranches from loss, in 2006 and the first half of 2007,
neither company used its revised models to evaluate existing rated subprime RMBS securities as
part of its surveillance efforts.1159 Instead S&P, for example, sent out a June 2006 email
announcing that no retests would be done:
“Simply put – although the RMBS Group does not ‘grandfather’ existing deals, there is
not an absolute and direct link between changes to our new ratings models and
subsequent rating actions taken by the RMBS Surveillance Group. As a result, there will
not be wholesale rating actions taken in July or shortly thereafter on outstanding RMBS
transactions, absent a deterioration in performance and projected credit support on any
individual transaction.”1160
1157 3/19/2007 “Structured Finance Ratings - Overview and Impact of the Residential Subprime Market,” S&P Monthly Review Meeting, at S&P SEC-PSI 0001473, Hearing Exhibit 4/23-52b.
1158 Prepared statement of Michael Kanef, Group Managing Director of Moody’s Asset Backed Finance Rating Group, “The Role and Impact of Credit Rating Agencies on the Subprime Credit Markets,” before the U.S. Senate Committee on Banking, Housing, and Urban Affairs, S.Hrg. 110-931 (9/26/2007), at 17.
1159 6/2006 S&P internal email exchange, Hearing Exhibit 4/23-72; and 3/31/2008 Moody’s Structured Finance Credit Committee Meeting Notes, Hearing Exhibit 4/23-80. See also 7/16/2007 Moody’s email from Joseph Snailer to Qingyu Liu, and others, PSI-MOODYS-RFN-000029 (when an analyst sought guidance on whether to use the new or old methodology for testing unrated tranches of outstanding deals, she was advised: “The ratings you are generating should reflect what we would have rated the deals when they were issued knowing what we knew then and using the methodology in effect then (ie, using the OC model we built then.”); 6/1/2007 email from Moody’s Senior Director in Structured Finance, “RE: Financial Times inquiry on transparency of assumptions,” MIS-OCIE-RMBS-0364942-46, at 43.
Moody’s and S&P each advised the Subcommittee that it had decided not to retest any
existing rated RMBS securities, because it felt that actual performance data for the pools in
question would provide a better indicator of future defaults and loss than application of its
statistical model. But actual loan performance data for the subprime mortgages in the pools – the
fact that, for example, timely loan payments had been made in the past on most of those loans –
provided an incomplete picture regarding whether those payments would continue to be made
after home prices stopped climbing, refinancings became difficult, and higher interest rates took
effect in many of the mortgages. By focusing only on actual past performance, the ratings
ignored foreseeable problems and gave investors false assurances about the creditworthiness of
the RMBS and CDO securities.
Some CRA employees expressed concern about the limitations placed on their ability to
alter ratings to reflect expected performance of the rated securities. In a July 2007 email just
before the mass ratings downgrades began, for example, an S&P senior executive raised concerns about these limitations to the head of the RMBS Surveillance Group:
“Overall, our ratings should be based on our expectations of performance, not
solely the month to month performance record, which will only be backward
looking. ... Up to this point, Surveillance has been ‘limited’ in when we can
downgrade a rating (only after it has experienced realized losses), how far we can
adjust the rating (no more than 3 notches at a time is preferred), and how high up
the capital structure we can go (not downgrading higher rated classes, if they
‘pass’ our stressed cash flow runs).”1161
In addition, many of the RMBS loans were less than a year old, making any performance data less meaningful and more difficult to analyze. In other words, the loans were too unseasoned to offer any real predictive value.
1160 6/23/2006 email from Thomas Warrack to Pat Jordan and Rosario Buendia, Hearing Exhibit 4/23-72 [emphasis
in original]. Despite this 2006 email, the former head of S&P’s RMBS Group, Frank Raiter, told a House
Committee: “At S&P, there was an ongoing, often heated discussion that using the ratings model in surveillance
would allow for re-rating every deal monthly and provide significantly improved measures of current and future
performance.” Prepared statement of Frank L. Raiter, “Credit Rating Agencies and the Financial Crisis,” before the
U.S. House of Representatives Committee on Oversight and Government Reform, Cong.Hrg. 110-155 (10/22/2008),
at 7.
1161 7/3/2007 S&P email from Cliff Griep to Ernestine Warner and Stephen Anderberg, Hearing Exhibit 4/23-32
[emphasis in original].
Some internal S&P emails suggest alternative explanations for the decision not to retest.
In October 2005, for example, an S&P analytic manager in the Structured Finance Ratings Group
sent an email to his colleagues asking: “How do we handle existing deals especially if there are
material changes [to a model] that can cause existing ratings to change?” His email then laid out
what he believed was S&P’s position at that time:
• “I think the history has been to only re-review a deal under new assumptions/criteria
when the deal is flagged for some performance reason. I do not know of a situation
where there were wholesale changes to existing ratings when the primary group
changed assumptions or even instituted new criteria. The two major reasons why we
have taken the approach is (i) lack of sufficient personnel resources and (ii) not
having the same models/information available for surveillance to relook at an existing
deal with the new assumptions (i.e. no cash flow models for a number of assets). The
third reason is concerns of how disruptive wholesale rating changes, based on a
criteria change, can be to the market.
• CDO is current[ly] debating the issue and appropriate approach as they change the
methodology.”1162
This email suggests that retesting did not occur, not because S&P thought actual performance data would produce more accurate ratings for existing pools, but because S&P did not have the resources to retest and because lower ratings on existing deals might have disrupted the marketplace, upsetting investment banks and investors. Several S&P managers and
analysts confirmed in Subcommittee interviews that these were the real reasons for the decision
not to retest existing RMBS securities.1163 Moody’s documents also suggest that resource constraints may have lain behind its decision not to retest.1164
The Subcommittee also found evidence suggesting that investment banks may have
rushed to have deals rated before the CRAs implemented more stringent revised models. In an
attempt to explain why one RMBS security from the same vintage and originator was pricing
better than another, a CDO trader wrote:
“Only reasons I can think for my guys showing you a tighter level is that we are very
short this one and that the June 06 deals have a taint that earlier months don[’]t due to the
theory that late June deals were crammed with bad stuff in order to beat the S & P
[model] revisions.”1165
1162 10/6/2005 email from Roy Chun, Hearing Exhibit 4/23-62.
1163 Prepared statement of Frank Raiter, Former Managing Director at Standard & Poor’s, April 23, 2010
Subcommittee Hearing, at 2; and Subcommittee interviews of S&P confidential sources (2/24/2010) and (4/9/2010).
1164 See, e.g., 3/31/2008 Moody’s Structured Finance Credit Committee Meeting Notes, Hearing Exhibit 4/23-80
(“Currently, following a methodology change, Moody’s does not re-evaluate every outstanding, affected rating.
Instead, it reviews only those obligations that it considers most prone to multi-notch rating changes, in light of the
revised rating approach. This decision to selectively review certain ratings is made due to resource constraints.”).
1165 10/20/2006 email from Greg Lippmann (Deutsche Bank) to Craig Carlozzi (Mast Capital),
DBSI_PSI_EMAIL01774820.
Retesting CDO Securities. The debate over retesting existing CDO securities followed
a similar path to the debate over retesting RMBS securities. The CDO Group at S&P first faced
the retest question in the summer of 2005, when it made a major change to its CDO model, then
Evaluator 3 (E3).1166 The S&P CDO Group appeared ready to announce and implement the
improved model that summer, but then took more than a year to implement it as the group
struggled to rationalize why it would not retest existing CDO securities with the improved
assumptions.1167 Internal S&P emails indicate that the primary considerations were, again,
resource limitations and possible disruption to the CDO market, rather than concerns over
accuracy. For instance, in a June 2005 email sent to an S&P senior executive, the head of the
CDO Group wrote:
“The overarching issue at this point is what to do with currently rated transactions if we
do release a new version of Evaluator. Some of [us] believe for both logistical and
market reasons that the existing deals should mainly be ‘grand fathered’. Others believe
that we should run all deals using the new Evaluator. The problem with running all deals
using E3 is twofold: we don’t have the model or resource capacity to do so, nor do we all
believe that even if we did have the capability, it would be the responsible thing to do to
the market.”1168
Several months later the S&P CDO Ratings Group was still deliberating the issue. In
November 2005, an investment banker at Morgan Stanley who had concerns about whether E3
would be used to retest existing deals and those in the pipeline expressed frustration at the delay:
“We are in a bit of a pickle here. My legal staff is not letting me send anything out to any
investor on anything with an S&P rating right now. We are waiting for you to tell us …
that you approve the disclaimer or are grandfathering [not retesting with E3] our existing
and pipeline deals. My business is on ‘pause’ right now.”1169
One S&P senior manager, frustrated by an inability to get an answer on the retesting issue, sent
an email to a colleague complaining: “Lord help our f**king scam .... this has to be the stupidest
place I have worked at.”1170
1166 The CDO models were simulation models dependent upon past credit ratings for the assets they included plus
various performance and correlation assumptions. See earlier discussion of these models.
1167 7/12/2005 S&P internal email, “Delay in Evaluator 3.0 incorporation in EOD/CDOi platform,” PSI-S&P-RFN-
000017.
1168 6/21/2005 email from Pat Jordan to Cliff Griep, “RE: new CDO criteria,” Hearing Exhibit 4/23-60. See also
3/21/2006 email from an S&P senior official, Hearing Exhibit 4/23-71 (“FYI. Just sat on a panel with Frderic
Drevon, my opposite number at Moody’s who fielded a question on what happens to old transactions when there is a
change to rating methodologie[s]. The official Moody’s line is that there is no ‘grandfathering’ and that old
transactions are reviewed using the new criteria. However, the ‘truth is that we do not have the resources to review
thousands of transactions, so we focus on those that we feel are more at risk.’”).
1169 11/23/2005 email from Brian Neer (Morgan Stanley) to Elwyn Wong (S&P), Hearing Exhibit 4/23-64.
1170 11/23/2005 email from Elwyn Wong to Andrea Bryan, Hearing Exhibit 4/23-64.
In May 2006, S&P circulated a draft policy setting up what seemed to be an informal
screening process “prior to transition date” to see how existing CDOs would be affected by the
revised CDO model. The draft offered a convoluted approach in an apparent attempt to avoid
retesting all existing CDOs, which included allowing the use of the prior “E2” model and review
“by a special E3 committee.” The draft policy read in part as follows:
“***PRIVILEGED AND CONFIDENTIAL - S&P DISCUSSION PURPOSES ONLY***
Prior to Transition Date (in preparation for final implementation of E3 for cash CDOs):
• A large majority of the pre-E3 cash flow CDOs will be run through E3 in batch processes to see how the ratings look within the new model …
• Ratings falling more than 3 notches +/- from the current tranche rating in the batch process will be reviewed in detail for any modeling, data, performance or other issues
• If any transactions are found to be passing/failing E3 by more than 3 notches due to performance reasons they will be handled through the regular surveillance process to see if the ratings are stable under current criteria (i.e., if they pass E2.4.3 using current cash flow assumptions the ratings will remain unchanged)
• If any transactions are found to be passing/failing E3 by more than 3 notches due to a model gap between E2.4.3 and E3, they will be reviewed by a special E3 committee ....”1171
It is unclear whether this screening actually took place.
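Read literally, though, the draft describes a simple triage. The sketch below renders the quoted routing logic in code solely to make its structure explicit; it is a rendering of the draft language, not a confirmed S&P process.

```python
def screen_existing_cdo(e3_notch_change, cause_of_gap, passes_e2_4_3):
    """Triage an existing CDO tranche per S&P's May 2006 draft screening policy
    (a literal rendering of the quoted draft, not a confirmed S&P workflow).

    e3_notch_change: rating move implied by re-running the tranche through E3.
    cause_of_gap:    'performance' or 'model gap', per the detailed review.
    passes_e2_4_3:   whether the tranche still passes the old E2.4.3 model."""
    if abs(e3_notch_change) <= 3:
        return "outside the screen; no review triggered"
    if cause_of_gap == "performance":
        # Routed to regular surveillance: the rating stands if it still passes
        # E2.4.3 under current cash flow assumptions.
        return "rating unchanged" if passes_e2_4_3 else "regular surveillance review"
    return "special E3 committee review"   # model gap between E2.4.3 and E3
```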
Questions continued to be raised internally at S&P about the retesting issue. In March
2007, almost a year after the change was made in the CDO model, an S&P senior executive
wrote to the Chief Criteria Officer in the structured finance department:
“Why did the criteria change made in mid 2006 not impact any outstanding transactions
at the time we changed it, especially given the magnitude of the change we are
highlighting in the article? Should we apply the new criteria now, given what we now
know? If we did, what would be the impact?”1172
In July 2007, the same senior executive raised the issue again in an email asking the S&P
Analytic Policy Board to address the alignment of surveillance methodology and new model
changes at a “special meeting.”1173 But by then, residential mortgages were already defaulting in
record numbers, and the mass downgrades of RMBS and CDO ratings had begun.
1171 5/19/2006 email from Stephen Anderberg to Pat Jordan, David Tesher, and others, PSI-S&P-RFN-000021
[emphasis in original].
1172 3/12/2007 email from Cliff Griep to Tom Gillis, and others, PSI-S&P-RFN-000015.
1173 7/15/2007 email from Tom Gillis to Valencia Daniels, “Special APB meeting,” Hearing Exhibit 4/23-74.
Consequences for Investors. During the April 23 Subcommittee hearing, credit ratings expert Professor Arturo Cifuentes explained the importance of retesting existing rated deals when a model changes:
Senator Levin: If a ratings model changes its assumptions or criteria, for instance, if it
becomes materially more conservative, how important is it that the credit rating agency
use the new assumptions or criteria to re-test or re-evaluate securities that are under
surveillance?
Mr. Cifuentes: Well, it is very important for two reasons: Because if you do not do that,
you are basically creating two classes of securities, a low class and an upper class, and
that creates a discrepancy in the market. At the same time, you are not being fair because
you are giving an inflated rating then to a security or you are not communicating to the
market that the ratings given before were of a different class.1174
Moody’s and S&P updated their RMBS and CDO models with more conservative criteria
in 2006, but then used the revised models to evaluate only new RMBS and CDO transactions,
bypassing the existing RMBS and CDO securities that could have benefited from the new credit
analysis. Even with respect to the new RMBS and CDOs, investment banks sought to delay use
of the revised models that required additional credit enhancements to protect investment grade
tranches from loss. For example, in May 2007, Morgan Stanley sent an email to a Moody’s
Managing Director with the following:
“Thanks again for your help (and Mark’s) in getting Morgan Stanley up-to-speed with
your new methodology. As we discussed last Friday, please find below a list of
transactions with which Morgan Stanley is significantly engaged already (assets in
warehouses, some liabilities placed). We appreciate your willingness to grandfather
these transactions [under] Moody’s old methodology.”1175
When asked about the failure of Moody’s and S&P to retest existing securities after their
model updates in 2006, the global head trader for CDOs from Deutsche Bank told the
Subcommittee that he believed the credit rating agencies did not retest them, because to do so
would have meant significant downgrades and “they did not want to upset the apple cart.”1176
Instead, the credit rating agencies waited until 2007, when the high risk mortgages underlying the outstanding RMBS and CDO securities incurred record delinquencies and defaults and then, based upon the actual loan performance, instituted mass ratings downgrades. Those sudden mass downgrades caught many financial institutions and other investors by surprise, leaving them with billions of dollars of suddenly unmarketable securities. The RMBS secondary market collapsed soon after, and the CDO secondary market followed.
1174 April 23, 2010 Subcommittee Hearing at 35.
1175 5/2/2007 email from Zach Buchwald (Morgan Stanley Executive Director) to William May (Moody’s Managing Director), and others, Hearing Exhibit 4/23-76. See also 4/11/2007 email from Moody’s Managing Director to Calyon, PSI-MOODYS-RFN-000040.
1176 Subcommittee interview of Greg Lippmann, Former Managing Director and Global Head of Trading of CDOs for Deutsche Bank (10/18/2010). Mr. Lippmann said he thought the agencies’ decision not to retest existing securities was “ridiculous.”
(5) Inadequate Resources
In addition to operating with conflicts of interest, models containing inadequate performance data, subjective and inconsistent rating criteria, and a policy against using improved models to retest outstanding RMBS and CDO securities, neither Moody’s nor S&P, despite the increasing numbers of ratings issued each year and the record revenues that resulted, hired sufficient staff or devoted sufficient resources to ensure that the initial rating process and the subsequent surveillance process produced accurate credit ratings.
Instead, both Moody’s and S&P forced their staffs to churn out new ratings and conduct required surveillance with limited resources. Over time, the credit rating agencies’ profits became increasingly connected to issuing a high volume of ratings. Because neither company devoted sufficient resources to handle that volume, the resulting strain negatively impacted the quality of both the ratings and their surveillance.
High Speed Ratings. From 2000 to 2007, Moody’s and S&P issued record numbers of
RMBS and CDO ratings. Each year the number of ratings issued by each firm increased.
According to SEC examinations of the firms, from 2002 to 2006, “the volume of RMBS deals
rated by Moody’s increased by 137%, and the number of CDO deals … increased by 700%.”1177
At S&P, the SEC determined that over the same time period, “the volume of RMBS deals rated
by S&P increased by 130%, and the number of CDO deals … increased by over 900%.”1178 In
addition to the rapid growth in numbers, the transactions themselves grew in complexity,
requiring more time and talent to analyze.
The former head of the S&P RMBS Group, Frank Raiter, described the tension between
profits and resources this way: “Management wanted increased revenues and profit while
analysts wanted more staff, data and IT support which increased expenses and obviously reduced
profit.”1179
Moody’s CEO, Ray McDaniel, readily acknowledged during the Subcommittee’s April 23 hearing that resources were stressed and that Moody’s was short staffed.1180 He testified: “People were working longer hours than we wanted them to, working more days of the week than we wanted them to.” He continued: “It was not for lack of having open positions, but with the pace at which the market was growing, it was difficult to fill positions as quickly as we would have liked.”1181
1177 2008 SEC Examination Report for Moody’s Investor Services Inc., PSI-SEC (Moodys Exam Report)-14-0001-16, at 4.
1178 2008 SEC Examination Report for Standard and Poor’s Ratings Services, Inc., PSI-SEC (S&P Exam Report)-14-0001-24, at 3.
1179 Prepared statement of Frank Raiter, Former Managing Director at Standard & Poor’s, April 23, 2010 Subcommittee Hearing, at 1-2.
1180 April 23, 2010 Subcommittee Hearing at 96-97.
Moody’s staff, however, had raised concerns about personnel shortages impacting their
work quality as early as 2002. A 2002 survey of the Structured Finance Group staff reported,
for example:
“[T]here is some concern about workload and its impact on operating effectiveness. …
Most acknowledge that Moody’s intends to run lean, but there is some question of
whether effectiveness is compromised by the current deployment of staff.”1182
Similar concerns were expressed three years later in a 2005 employee survey:
“We are over worked. Too many demands are placed on us for admin[istrative] tasks ...
and are detracting from primary workflow .... We need better technology to meet the
demand of running increasingly sophisticated models.”1183
In 2006, Moody’s analyst Richard Michalek worried that investment bankers were taking
advantage of the fact that analysts did not have the time to understand complex deals. He wrote:
“I am worried that we are not able to give these complicated deals the attention they
really deserve, and that they (CS) [Credit Suisse] are taking advantage of the ‘light’
review and the growing sense of ‘precedent’.”1184
Moody’s managers and analysts interviewed by the Subcommittee stated that staff
shortages impacted how much time could be spent analyzing a transaction. One analyst
responsible for rating CDOs told the Subcommittee that, during the height of the boom, Moody’s
analysts didn’t have time to understand the complex deals being rated and had to set priorities on
what issues would be examined:
“When I joined the [CDO] Group in 1999 there were seven lawyers and the Group rated
something on the order of 40 – 60 transactions annually. In 2006, the Group rated over
600 transactions, using the resources of approximately 12 lawyers. The hyper-growth
years from the second half of 2004 through 2006 represented a steady and constant
adjustment to the amount of time that could be allotted to any particular deal’s analysis,
and with that adjustment, a constant re-ordering of the priority assigned to the issues to be
raised at rating Committees.”1185
1181 Id. at 97.
1182 5/2/2002 “Moody’s SFG 2002 Associate Survey: Highlights of Focus Groups and Interviews,” Hearing Exhibit
4/23-92a at 6.
1183 4/7/2006 “Moody’s Investor Service, BES-2005: Presentation to Derivatives Team,” Hearing Exhibit 4/23-92b.
1184 5/1/2006 email from Richard Michalek to Yuri Yoshizawa, Hearing Exhibit 4/23-19.
1185 Michalek prepared statement at 20, n.29.
A Moody’s managing director responsible for supervising CDO analysts put it this way in a 2007
email: “Unfortunately, our analysts are o[v]erwhelmed…”1186 Moody’s CEO testified at the
Subcommittee’s hearing “[w]e had stress on our resources in this period, absolutely.”1187
Senator Levin asked him if Moody’s was profitable at the time, and he responded “[w]e were
profitable, yes.”1188
S&P also experienced significant resource shortages. In 2004, for example, a managing
director in the RMBS Group wrote a lengthy email about the resource problems impacting credit
analysis:
“I am trying to put my hat on not only for ABS/RMBS but for the department and be
helpful but feel that it is necessary to re-iterate that there is a shortage in resources in
RMBS. If I did not convey this to each of you I would be doing a disservice to each of
you and the department. As an update, December is going to be our busiest month ever
in RMBS. I am also concerned that there is a perception that we have been getting all the
work done up until now and therefore can continue to do so.
“We ran our Staffing model assuming the analysts are working 60 hours a week and we
are short resources. We could talk about the assumptions and make modifications but the
results would be similar. The analysts on average are working longer than this and we
are burning them out. We have had a couple of resignations and expect more. It has
come to my attention in the last couple of days that we have a number of staff members
that are experiencing health issues.”1189
A May 2006 internal email from an S&P senior manager in the Structured Finance Real
Estate Ratings Group expressed similar concerns:
“We spend most of our time keeping each other and our staff calm. Tensions are high.
Just too much work, not enough people, pressure from company, quite a bit of turnover
and no coordination of the non-deal ‘stuff’ they want us and our staff to do ....”1190
The head of the S&P CDO Ratings Group sent a 2006 email to the head of the Structured
Finance Department to make a similar point. She wrote:
“While I realize that our revenues and client service numbers don’t indicate any ill
[e]ffects from our severe understaffing situation, I am more concerned than ever that we
are on a downward spiral of morale, analytical leadership/quality and client service.”1191
1186 5/23/2007 email from Eric Kolchinsky to Yvonne Fu and Yuri Yoshizawa, Hearing Exhibit 4/23-91.
1187 April 23, 2010 Subcommittee Hearing at 97.
1188 Id.
1189 12/3/2004 email from Gail McDermott to Abe Losice and Pat Jordan, PSI-S&P-RFN-000034.
1190 5/2/2006 email from Gale Scott to Diane Cory, “RE: Change in scheduling/Coaching sessions/Other stuff,” PSI-S&P-RFN-000012.
1191 10/31/2006 S&P internal email, “A CDO Director resignation,” PSI-S&P-RFN-000001.
Some of the groups came up with creative ways to address their staffing shortages. For
example, the head of the S&P RMBS Ratings Group between 2005 and 2007, Susan Barnes,
advised the Subcommittee that her group regularly borrowed staff from the S&P Surveillance
Group to assist with new ratings. She said that almost half the surveillance staff provided
assistance on issuing new ratings during her tenure, and estimated that each person in the
surveillance group might have contributed up to 25% of his or her time to issuing new
ratings.1192
The Subcommittee investigation discovered a cadre of professional RMBS and CDO
rating analysts who were rushed, overworked, and demoralized. They were asked to evaluate
increasing numbers of increasingly complex financial instruments at high speed, using out-of-date rating models and unclear ratings criteria, while acting under pressure from management to
increase market share and revenues and pressure from investment banks to ignore credit risk.
These analysts were short staffed even as their employers collected record revenues.
Resource-Starved Surveillance. Resource shortages also impacted the ability of the
credit rating agencies to conduct surveillance on outstanding rated RMBS and CDO securities to
evaluate their credit risk. The credit rating agencies were contractually obligated to monitor the
accuracy of the ratings they issued over the life of the rated transactions. CRA surveillance
analysts were supposed to evaluate each rating on an ongoing basis to determine whether the
rating should be affirmed, upgraded, or downgraded. To support this analysis, both companies
collected substantial annual surveillance fees from the issuers of the financial instruments they
rated, and set up surveillance groups to review the ratings. In the case of RMBS and CDO
securities, the Subcommittee investigation found evidence that these surveillance groups may
have lacked the resources to properly monitor the thousands of rated products.
At Moody’s, for example, a 2007 email disclosed that about 26 surveillance analysts
were responsible for tracking over 13,000 rated CDO securities:
“Thanks for sharing the draft of the CDO surveillance piece you’re planning to publish
later this week. … In the section about your CDO surveillance infrastructure, we were
struck by the data point about the 26 professionals who are dedicated to monitoring CDO
ratings. While this is, no doubt, a strong team, we wanted to at least raise the question
about whether the company’s critics could twist that number – e.g., by comparing it to the
13,000+ CDOs you’re monitoring – and once again question if you have adequate
resources to do your job effectively. Given that potential risk, we thought you might
consider removing any specific reference to the number of people on the CDO
surveillance team.”1193
The evidence of surveillance shortages at S&P was particularly telling. Although during
an interview with the Subcommittee, the head of S&P’s RMBS Surveillance Group from 2001 to
2008, Ernestine Warner, said she had adequate resources to conduct surveillance of rated RMBS
1192 Subcommittee interview of Susan Barnes (3/18/2010).
1193 7/9/2007 email to Yuri Yoshizawa, “FW: CDO Surveillance Note 7_071.doc,” PSI-MOODYS-RFN-000022.
securities during her tenure, her emails indicate otherwise.1194 In emails sent over a two-year
period, she repeatedly described and complained about a lack of resources that was impeding her
group’s ability to complete its work. In the spring of 2006, she emailed her colleague about her
growing anxiety:
“RMBS has an all time high of 5900 transactions. Each time I consider what my group is
faced with, I become more and more anxious. The situation with Lal [a surveillance
analyst], being off line or out of the group, is having a huge impact.”1195
In June 2006, she wrote that the problems were not getting better:
“It really feels like I am repeating myself when it comes to completing a very simple
project and addressing some of the other surveillance needs. … The inability to make a
decision about how the project is going to be resourced is causing undue stress. I have
talked to you and Peter [D’Erchia, head of global structured finance surveillance,] about
each of the issues below and at this point I am not sure what else you need from me. …
To rehash the points below:
In addition to the project above that involves some 863 deals, I have a back log of deals
that are out of date with regard to ratings. … We recognize that I am still
understaffed with these two additional bodies. … [W]e may be falling further behind at
the rate the deals are closing. If we do not agree on the actual number, certainly we can
agree that I need more recourse if I am ever going to be near compliance.”1196
In December 2006, she wrote:
“In light of the current state of residential mortgage performance, especially sub-prime, I
think it would be very beneficial for the RMBS surveillance team to have the work being
done by the temps to continue. It is still very important that performance data is loaded
on a timely basis as this has an impact on our exception reports. Currently, there are
nearly 1,000 deals with data loads aged beyond one month.”1197
In February 2007, she expressed concerns about having adequate resources to address
potential downgrades in RMBS:
“I talked to Tommy yesterday and he thinks that the [RMBS] ratings are not going to
hold through 2007. He asked me to begin discussing taking rating actions earlier on the
1194 During an interview, the head of RMBS surveillance advised that she believed she was adequately resourced and
prioritized her review of outstanding securities by focusing on 2006 and 2007 vintages that had performance
problems. Subcommittee interview of Ernestine Warner (3/11/2010).
1195 4/28/2006 email from Ernestine Warner to Roy Chun, and others, Hearing Exhibit 4/23-82.
1196 6/1/2006 emails from Ernestine Warner to Roy Chun, Hearing Exhibit 4/23-83.
1197 12/20/2006 email from Ernestine Warner to Gail Houston, Roy Chun, others, Hearing Exhibit 4/23-84.
poor performing deals. I have been thinking about this for much of the night. We do not
have the resources to support what we are doing now. A new process, without the right
support, would be overwhelming. ... My group is under serious pressure to respond to the
burgeoning poor performance of sub-prime deals. … we are really falling behind.… I am
seeing evidence that I really need to add staff to keep up with what is going on with sub
prime and mortgage performance in general, NOW.”1198
In April 2007, a managing director at S&P in the Structured Finance Group wrote an
email confirming the staffing shortages in the RMBS Surveillance Group:
“We have worked together with Ernestine Warner (EW) to produce a staffing model for
RMBS Surveillance (R-Surv). It is intended to measure the staffing needed for detailed
surveillance of the 2006 vintage and also everything issued prior to that. This model
shows that the R-Surv staff is short by 7 FTE [Full Time Employees] - about 3 Directors,
2 AD’s, and 2 Associates. The model suggests that the current staff may have been right
sized if we excluded coverage of the 2006 vintage, but was under titled lacking sufficient
seniority, skill, and experience.”1199
The global head of the S&P Structured Finance Surveillance Group, Peter D’Erchia, told
the Subcommittee that, in late 2006, he expressed concerns to senior management about
surveillance resources and the need to downgrade subprime in more significant numbers in light
of the deteriorating subprime market.1200 According to Mr. D’Erchia, the executive managing
director of the Global Structured Finance Ratings Group, Joanne Rose, disagreed with him about
the need to issue significantly more downgrades in subprime RMBS and this disagreement
continued into the next year. He also told the Subcommittee that after this disagreement with
her, he received a disappointing 2007 performance evaluation. He wrote the following in the
employee comment section of his evaluation:
“Even more offensive – and flatly wrong – is the statement that I am not working for a
good outcome for S&P. That is all I am working towards and have been for 26 years. It
is hard to respond to such comments, which I think reflect Joanne’s [Rose] personal
feelings arising from our disagreement over subprime debt deterioration, not professional
assessment. … Such comments, and others like it, suggest to me that this year-end
appraisal, in contrast to the mid-year appraisal, has more to do with our differences over
subprime deterioration than an objective assessment of my overall performance.”1201
In 2008, Mr. D’Erchia was removed from his surveillance position, where he oversaw
more than 314 employees, as part of a reduction in force. He was subsequently rehired as a
managing director in U.S. Public Finance at S&P, a position without staff to supervise.
1198 2/3/2007 email from Ernestine Warner to Peter D’Erchia, Hearing Exhibit 4/23-86 [emphasis in original].
1199 4/24/2007 email from Abe Losice to Susan Barnes, “Staffing for RMBS Surveillance,” Hearing Exhibit 4/23-88.
1200 Subcommittee interview of Peter D’Erchia (4/13/2010).
1201 2007 Performance Evaluation for Peter D’Erchia, S&P SEN-PSI 0007442. See also April 23, 2010
Subcommittee Hearing at 74-75.
Similarly, Ernestine Warner, the head of RMBS Surveillance, lost her managerial position and
was reassigned to investor relations in the Structured Finance Group.
On July 10, 2007, amid record mortgage defaults, S&P abruptly began downgrading its
outstanding RMBS and CDO ratings. In July alone, it downgraded the ratings of more than
1,000 RMBS and 100 CDO securities. Both S&P and Moody’s continued to issue significant
downgrades throughout the remainder of 2007. On January 30, 2008, S&P took action on over
8,200 RMBS and CDO ratings – meaning it either downgraded their ratings or placed the
securities on credit watch with negative implications. These and other downgrades, matched by
equally substantial numbers at Moody’s, paint a picture of CRA surveillance teams acting at top
speed in overwhelming circumstances to correct thousands of inaccurate RMBS and CDO
ratings. When asked to produce contemporaneous decision-making documents indicating how
and when the ratings were selected for downgrade, neither S&P nor Moody’s produced
meaningful documentation. The facts suggest that CRA surveillance analysts with already
substantial responsibilities and limited resources were forced to go into overdrive to clean up
ratings that could not “hold.”
(6) Mortgage Fraud
A final factor that contributed to inaccurate credit ratings involves mortgage fraud.
Although the credit rating agencies were clearly aware of increased levels of mortgage fraud,
they did not factor that credit risk into their quantitative models or adequately factor it into their
qualitative analyses. Because that credit risk went unaccounted for, the credit enhancements they
required were insufficient, the tranches bearing AAA ratings were too large, and the ratings they
issued were too optimistic.
Reports of mortgage fraud were frequent and mounted yearly prior to the financial crisis.
As noted above, as early as 2004, the FBI began issuing reports on increased mortgage fraud.1202
The FBI also spoke about the mortgage fraud problem in Congressional testimony and in the
popular press. CNN reported that “[r]ampant fraud in the mortgage industry has increased so
sharply that the FBI warned Friday of an ‘epidemic’ of financial crimes which, if not curtailed,
could become ‘the next S&L crisis.’”1203 In 2006, the FBI reported that the number of
Suspicious Activity Reports on mortgage fraud had increased more than fivefold, from about 6,800 in 2002,
to about 36,800 in 2006, while pending mortgage fraud cases nearly doubled from 436 in FY
2003 to 818 in FY 2006.1204 The Mortgage Asset Research Institute, LLC (MARI) also reported
increasing mortgage fraud over several years, including a 30% increase in 2006 alone.1205
1202 FY 2004 “Financial Institution Fraud and Failure Report,” prepared by the Federal Bureau of Investigation,
available at http://www.fbi.gov/stats-services/publications/fiff_04.
1203 “FBI warns of mortgage fraud ‘epidemic’,” CNN.com (9/17/2004),
http://articles.cnn.com/2004-09-17/justice/mortgage.fraud_1_mortgage-fraud-mortgage-industry-s-1-crisis?_s=PM:LAW.
1204 “Financial Crimes Report to the Public: Fiscal Year 2006, October 1, 2005 – September 30, 2006,” prepared
by the Federal Bureau of Investigation, available at
http://www.fbi.gov/stats-services/publications/fcs_report2006/financial-crimes-report-to-the-public-2006-pdf/view.
1205 4/2007 “Ninth Periodic Mortgage Fraud Case Report to Mortgage Bankers Association,” prepared by Mortgage
Asset Research Institute, LLC.
Published reports, as well as internal emails, demonstrate that analysts within both
Moody’s and S&P were aware of the serious mortgage fraud problem in the industry.1206
Despite being on notice about the problem and despite assertions about the importance of loan
data quality in the ratings process for structured finance securities,1207 neither Moody’s nor S&P
established procedures to account for the possibility of fraud in its ratings process. For example,
neither company took any steps to ensure that the loan data provided for specific RMBS loan
pools had been reviewed for accuracy.1208 The former head of S&P’s RMBS Group, Frank
Raiter, stated in his prepared testimony for the Subcommittee hearing that the S&P rating
process did not include any “due diligence” review of the loan tape or any requirement for the
provider of the loan tape to certify its accuracy. He stated: “We were discouraged from even
using the term ‘due diligence’ as it was believed to expose S&P to liability.”1209 Fraud was also
not factored into the RMBS or CDO quantitative models.1210
Yet when Moody’s and S&P initiated the mass downgrades of RMBS and CDO
securities in July 2007, they placed some of the blame for the rating errors on the volume of
mortgage fraud. On July 10, 2007, when S&P announced that it was placing 612 U.S. subprime
RMBS on negative credit watch, S&P noted the high incidence of fraud reported by MARI,
“misrepresentations on credit reports,” and that “[d]ata quality concerning some of the borrower
and loan characteristics provided during the rating process [had] also come under question.”1211
In October 2007, the CEO of Fitch Ratings, another ratings firm, said in an interview that “the
blame may lie with fraudulent lending practices, not his industry.”1212 Moody’s made similar
observations. In 2008, Moody’s CEO Ray McDaniel told a panel at the World Economic
Forum:
“In hindsight, it is pretty clear that there was a failure in some key assumptions that were
supporting our analytics and our models.… [One reason for the failure was that the]
1206 See, e.g., 9/2/2006 email chain between Richard Koch, Robert Mackey, and Michael Gutierrez, “Nightmare
Mortgages,” Hearing Exhibit 4/23-46a; 9/5/2006 email chain between Edward Highland, Michael Gutierrez, and
Richard Koch, “Nightmare Mortgages,” Hearing Exhibit 4/23-46b; and 9/29/2006 email from Michael Gutierrez,
Director of S&P, PSI-S&P-RFN-000029.
1207 See, e.g., 6/24/2010 supplemental response from S&P to the Subcommittee, Exhibit H, Hearing Exhibit
4/23-108 (7/11/2007 “S&PCORRECT: 612 U.S. Subprime RMBS Classes Put On Watch Neg; Methodology
Revisions Announced,” S&P’s RatingsDirect (correcting the original version issued on 7/10/2007)).
1208 See, e.g., 2008 SEC Examination Report for Moody’s Investor Services Inc.,
PSI-SEC (Moodys Exam Report)-14-0001-16, at 7; and 2008 SEC Examination Report for Standard and Poor’s
Ratings Services, Inc., PSI-SEC (S&P Exam Report)-14-0001-24, at 11 (finding with respect to each credit rating
agency that it “did not engage in any due diligence or otherwise seek to verify the accuracy and quality of the loan
data underlying the RMBS pools it rated”).
1209 Prepared statement of Frank Raiter, Former Managing Director at Standard & Poor’s, April 23, 2010
Subcommittee Hearing, at 3.
1210 Subcommittee interviews of Susan Barnes (3/18/2010) and Richard Gugliada (10/9/2009).
1211 6/24/2010 supplemental response from S&P to the Subcommittee, Exhibit H, Hearing Exhibit 4/23-108
(7/11/2007 “S&PCORRECT: 612 U.S. Subprime RMBS Classes Put On Watch Neg; Methodology Revisions
Announced,” S&P’s RatingsDirect (correcting the original version issued on 7/10/2007)).
1212 10/12/2007 Moody’s internal email, PSI-MOODYS-RFN-000035 (citing “Fitch CEO says fraudulent lending
practices may have contributed to problems with ratings,” Associated Press, and noting: “After S&P, Fitch is now
blaming fraud for the impact on RMBS, at least partially.”).
‘information quality’ [given to Moody’s,] both the complete[ness] and veracity, was
deteriorating.”1213
In 2007, Fitch Ratings decided to conduct a review of some mortgage loan files to
evaluate the impact of poor lending standards on loan quality. On November 28, 2007, Fitch
issued a report entitled, “The Impact of Poor Underwriting Practices and Fraud in Subprime
RMBS Performance.” After reviewing a “sample of 45 subprime loans, targeting high CLTV
[combined loan to value] [and] stated documentation loans, including many with early missed
payments,” Fitch summarized its findings on the impact of fraud, as well as lax lending
standards, on the mortgages. Fitch explained: “[t]he result of the analysis
was disconcerting at best, as there was the appearance of fraud or misrepresentation in almost
every file.”1214
To address concerns about fraud and lax underwriting standards generally, S&P
considered a policy change in November 2007 that would give evaluations of the quality of
services provided by third parties more influence in the ratings process. An S&P
managing director wrote:
“We believe our analytical process and rating opinions will be enhanced by an increased
focus on the role third parties can play in influencing loan default and loss performance.
… [W]e’d like to set up meetings where specific mortgage originators, investment banks
and mortgage servicers are discussed. We would like to use these meetings to share ideas
with a goal of determining whether loss estimates should be altered based upon your
collective input.”1215
An S&P employee who received this announcement wrote to a colleague: “Should have been
doing this all along.”1216
S&P later decided that its analysts would also review the specific loan originators that
supplied loans for a rated pool. Loans issued by originators with a reputation for issuing poor
quality loans, including loans marked by fraud, would be considered a greater credit risk and
ratings for the pool containing the loans would reflect that risk. S&P finalized that policy in
November 2008.1217 As part of its ratings analysis, S&P now ranks mortgage originators based
on the historical performance of their loans and factors the assessment of the originator into
credit enhancement levels for RMBS.1218
1213 “Moody’s: They Lied to Us,” New York Times (1/25/2008),
http://norris.blogs.nytimes.com/2008/01/25/moodys-they-lied-to-us/.
1214 11/28/2007 “The Impact of Poor Underwriting Practices and Fraud in Subprime RMBS Performance,” report
prepared by Fitch Ratings, at 4, Hearing Exhibit 4/23-100.
1215 11/15/2007 email from Thomas Warrack to Michael Gutierrez, and others, Hearing Exhibit 4/23-34.
1216 11/15/2007 email from Robert Mackey to Michael Gutierrez, and others, Hearing Exhibit 4/23-34.
1217 6/24/2010 supplemental letter from S&P to the Subcommittee, Exhibit W, Hearing Exhibit 4/23-108
(11/25/2008 “Standard & Poor’s Enhanced Mortgage Originator and Underwriting Review Criteria for U.S.
RMBS,” S&P’s RatingsDirect).
1218 Id.
In September 2007, Moody’s solicited industry feedback on proposed enhancements to
its evaluation of nonprime RMBS securitizations, including the need for third-party due
diligence reviews of the loans in a securitization. Moody’s wrote: “To improve the accuracy of
loan information upon which it relies, Moody’s will look for additional oversight by a qualified
third party.”1219 In November 2008, Moody’s issued a report detailing its enhanced approach to
RMBS originator assessments.1220
E. Preventing Inflated Credit Ratings
Weak credit rating agency performance has long been a source of concern to financial
regulators. Many investors rely on credit ratings to identify “safe” investments. Many regulated
financial institutions, including banks, broker-dealers, insurance companies, pension funds,
mutual funds, money market funds, and others have been required to operate under restrictions
related to their purchase of “investment grade” versus “noninvestment grade” financial
instruments. When credit rating agencies issue inaccurate ratings, both retail investors and
regulated financial institutions may mistakenly purchase financial instruments that are riskier
than they intended or are permitted to buy. The recent financial crisis has demonstrated how the
unintended purchase of high risk financial products by multiple investors and financial
institutions can create systemic risk and endanger not only U.S. financial markets, but the entire
U.S. economy.
(1) Past Credit Rating Agency Oversight
Even before the recent financial crisis, the SEC and Congress had been reviewing the
need for increased regulatory oversight of the credit rating industry. In 1994, for example, the
SEC “issued a Concept Release soliciting public comment on the appropriate role of ratings in
the federal securities laws, and the need to establish formal procedures for recognizing and
monitoring the activities of [credit rating agencies].”1221
In 2002, the Senate Committee on Governmental Affairs examined the collapse of the
Enron Corporation, focusing in part on how the credit rating agencies assigned investment grade
credit ratings to the company “until a mere four days before Enron declared bankruptcy.”1222
The Committee issued a report finding, among other things, that the credit rating agencies:
1219 “Moody’s Proposes Enhancements to Non-Prime RMBS Securitization,” Moody’s (9/25/2007).
1220 “Moody’s Enhanced Approach to Originator Assessments for U.S. Residential Mortgage Backed Securities
(RMBS),” Moody’s, Hearing Exhibit 4/23-106 (originally issued 11/24/2008 but due to minor changes was
republished on 10/5/2009).
1221 1/2003 “Report on the Role and Function of Credit Rating Agencies in the Operation of the Securities Markets,”
prepared by the SEC, at 5.
1222 10/8/2002 “Financial Oversight of Enron: The SEC and Private-Sector Watchdogs,” prepared by the U.S.
Senate Committee on Governmental Affairs, at 6. See also “Rating the Raters: Enron and the Credit Rating
Agencies,” before the U.S. Senate Committee on Governmental Affairs, S.Hrg. 107-471 (3/20/2002). The
Committee has since been renamed as the Committee on Homeland Security and Governmental Affairs.
“failed to detect Enron’s problems – or take sufficiently seriously the problems they were
aware of – until it was too late because they did not exercise the proper diligence. …
[T]he agencies did not perform a thorough analysis of Enron’s public filings; did not pay
appropriate attention to allegations of financial fraud; and repeatedly took company
officials at their word … despite indications that the company had misled the rating
agencies in the past.”1223
The report also found the credit rating “analysts [did] not view themselves as accountable for
their actions,” since the rating agencies were subject to little regulation or oversight, and their
liability for poor quality ratings was limited by regulatory exemptions and First Amendment
protections.1224 The report recommended “increased oversight for these rating agencies in order
to ensure that the public’s trust in these firms is well-placed.”1225
In 2002, the Sarbanes-Oxley Act required the SEC to conduct a study of the role of
credit rating agencies in the securities markets, including any barriers to accurately evaluating
the financial condition of the issuers of securities they rate.1226 In response, the SEC initiated an
in-depth study of the credit rating industry and released its findings in a 2003 report. The SEC’s
oversight efforts “included informal discussions with credit rating agencies and market
participants, formal examinations of credit rating agencies, and public hearings, where market
participants were given the opportunity to offer their views on credit rating agencies and their
role in the capital markets.”1227 The report expressed a number of concerns about CRA
operations, including “potential conflicts of interest caused by the [issuer-pays model].”1228
The Credit Rating Agency Reform Act, which was signed into law in September 2006,
was designed to address some of the shortcomings identified by Congress and the SEC. The Act
made it clear that the SEC had jurisdiction to conduct oversight of the credit rating industry, and
formally charged the agency with designating companies as NRSROs.1229 The statute also
required NRSROs to meet certain criteria before registering with the SEC. In addition, the
statute instructed the SEC to promulgate regulations requiring NRSROs to establish policies and
procedures to prevent the misuse of nonpublic information and to disclose and manage conflicts
of interest.1230 Those regulations were designed to take effect in September 2007.
In the summer of 2007, after the mass downgrades of RMBS and CDO ratings had begun
and as the financial crisis began to intensify, the SEC initiated its first examinations of the major
1223 10/8/2002 “Financial Oversight of Enron: The SEC and Private-Sector Watchdogs,” prepared by the U.S.
Senate Committee on Governmental Affairs, at 6, 108.
1224 Id. at 122.
1225 Id. at 6.
1226 Section 702 of the Sarbanes-Oxley Act of 2002.
1227 1/2003 “Report on the Role and Function of Credit Rating Agencies in the Operation of the Securities Markets,”
prepared by the SEC, at 4.
1228 Id. at 19.
1229 9/3/2009 “Credit Rating Agencies and Their Regulation,” report prepared by the Congressional Research
Service, Report No. R40613 (revised report issued 4/9/2010).
1230 Id.
credit rating agencies. According to the SEC, “[t]he purpose of the examinations was to develop
an understanding of the practices of the rating agencies surrounding the rating of RMBS and
CDOs.”1231 The examinations reviewed CRA practices from January 2004 to December 2007.
In 2008, the SEC issued a report summarizing its findings. The report found that “there was a
substantial increase in the number and in the complexity of RMBS and CDO deals,” “significant
aspects of the ratings process were not always disclosed,” the ratings policies and procedures
were not fully documented, “the surveillance processes used by the rating agencies appear to
have been less robust than the processes used for initial ratings,” and the “rating agencies’
internal audit processes varied significantly.”1232 In addition, the report raised a number of
conflict of interest issues that influenced the ratings process, noted that the rating agencies failed
to verify the accuracy or quality of the loan data used to derive their ratings, and raised questions
about the factors that were or were not used to derive the credit ratings.1233
(2) New Developments
Although the Credit Rating Agency Reform Act of 2006 strengthened oversight of the
credit rating agencies, the financial crisis exposed remaining weaknesses in regulatory oversight
of the credit rating industry, and Congress responded with further reforms. The Dodd-Frank Act
dedicated an entire subtitle to those credit rating reforms, which substantially broadened the
powers of the SEC to oversee and regulate the credit rating industry and explicitly allowed
investors, for the first time, to file civil suits against credit rating agencies.1234 The major
reforms include the following:
a. establishment of a new SEC Office of Credit Ratings charged with overseeing the
credit rating industry, including by conducting at least annual NRSRO examinations
whose reports must be made public;
b. SEC authority to discipline, fine, and deregister a credit rating agency and associated
personnel for violating the law;
c. SEC authority to deregister a credit rating agency for issuing poor ratings;
d. authority for investors to file private causes of action against credit rating agencies
that knowingly or recklessly fail to conduct a reasonable investigation of a rated
product;
e. requirements for credit rating agencies to establish internal controls to ensure high
quality ratings and disclose information about their rating methodologies and about
each issued rating;
1231 7/2008 “Summary Report of Issues Identified in the Commission Staff’s Examinations of Select Credit Rating
Agencies,” prepared by the SEC, at 1. The CRAs examined by the SEC were not formally subject to the Credit
Rating Agency Reform Act of 2006 or its implementing SEC regulations until September 2007.
1232 Id. at 1-2.
1233 Id. at 14, 17-18, 23-29, 31-37.
1234 See Title IX, Subtitle C – Improvements to the Regulation of Credit Rating Agencies of the Dodd-Frank Act.
f. amendments to federal statutes removing references to credit ratings and credit rating
agencies in order to reduce reliance on ratings;
g. a GAO study to evaluate alternative compensation models for ratings that would
create financial incentives to issue more accurate ratings; and
h. an SEC study of the conflicts of interest affecting ratings of structured finance
products, followed by the mandatory development of a plan to reduce ratings
shopping.1235
The Act stated that these reforms were needed, “[b]ecause of the systemic importance of
credit ratings and the reliance placed on credit ratings by individual and institutional investors
and financial regulators,” and because “credit rating agencies are central to capital formation,
investor confidence, and the efficient performance of the United States economy.”1236
(3) Recommendations
To further strengthen the accuracy of credit ratings and reduce systemic risk, this Report
makes the following recommendations.
1. Rank Credit Rating Agencies by Accuracy. The SEC should use its regulatory
authority to rank the Nationally Recognized Statistical Rating Organizations in terms
of performance, in particular the accuracy of their ratings.
2. Help Investors Hold CRAs Accountable. The SEC should use its regulatory
authority to facilitate the ability of investors to hold credit rating agencies accountable
in civil lawsuits for inflated credit ratings, when a credit rating agency knowingly or
recklessly fails to conduct a reasonable investigation of the rated security.
3. Strengthen CRA Operations. The SEC should use its inspection, examination, and
regulatory authority to ensure credit rating agencies institute internal controls, credit
rating methodologies, and employee conflict of interest safeguards that advance
rating accuracy.
4. Ensure CRAs Recognize Risk. The SEC should use its inspection, examination, and
regulatory authority to ensure credit rating agencies assign higher risk to financial
instruments whose performance cannot be reliably predicted due to their novelty or
complexity, or that rely on assets from parties with a record for issuing poor quality
assets.
1235 See id. at §§ 931-939H; “Conference report to accompany H.R. 4173,” Cong. Report No. 111-517 (June 29,
2010).
1236 See Section 931 of the Dodd-Frank Act.
5. Strengthen Disclosure. The SEC should exercise its authority under the new Section
78o-7(s) of Title 15 to ensure that the credit rating agencies complete the required
new ratings forms by the end of the year and that the new forms provide
comprehensible, consistent, and useful ratings information to investors, including by
testing the proposed forms with actual investors.
6. Reduce Ratings Reliance. Federal regulators should reduce the federal government’s
reliance on privately issued credit ratings.
VI. INVESTMENT BANK ABUSES:
CASE STUDY OF GOLDMAN SACHS AND DEUTSCHE BANK
A key factor in the recent financial crisis was the role played by complex financial
instruments, often referred to as structured finance products, such as residential mortgage backed
securities (RMBS), collateralized debt obligations (CDOs), and credit default swaps (CDS),
including CDS contracts linked to the ABX Index. These financial products were envisioned,
engineered, sold, and traded by major U.S. investment banks.
From 2004 to 2008, U.S. financial institutions issued nearly $2.5 trillion in RMBS
securities and over $1.4 trillion in CDOs securitizing primarily mortgage related products.1237
Investment banks charged fees ranging from $1 to $8 million to act as the underwriter of an
RMBS securitization,1238 and from $5 to $10 million to act as the placement agent for a CDO
securitization.1239 Those fees contributed substantial revenues to the investment banks which set
up structured finance groups, and a variety of RMBS and CDO origination and trading desks
within those groups, to handle mortgage related securitizations. Investment banks placed these
securities with investors around the world, and helped develop a secondary market where private
RMBS and CDO securities could be bought and sold. The investment banks’ trading desks
participated in those secondary markets, buying and selling RMBS and CDO securities either for
their customers or for themselves.
1237 3/4/2011 “U.S. Mortgage-Related Securities Issuance” and 1/1/2011 “Global CDO Issuance,” charts prepared
by the Securities Industry and Financial Markets Association, www.sifma.org/research/statistics.aspx. The RMBS
total does not include about $6.6 trillion in RMBS securities issued by government sponsored enterprises like
Fannie Mae and Freddie Mac.
1238 See, e.g., 2/2011 chart, “Goldman Sachs Expected Profit from RMBS Securitizations,” prepared by the U.S.
Senate Permanent Subcommittee on Investigations using Goldman-produced documents for securitizations from
2005-2007 (underlying documents retained in Subcommittee file); 3/21/2011 letter from Deutsche Bank counsel,
PSI-Deutsche_Bank-32-0001.
1239 See “Banks’ Self-Dealing Super-Charged Financial Crisis,” ProPublica (8/26/2010),
http://www.propublica.org/article/banks-self-dealing-super-charged-financial-crisis (“A typical CDO could net the
bank that created it between $5 million and $10 million – about half of which usually ended up as employee
bonuses. Indeed, Wall Street awarded record bonuses in 2006, a hefty chunk of which came from the CDO
business.”). Fee information obtained by the Subcommittee is consistent with this range of CDO fees. For example,
Deutsche Bank received nearly $5 million in fees for Gemstone 7, and the head of its CDO Group said that
Deutsche Bank typically received between $5 and $10 million in fees, while Goldman Sachs charged a range of $5
to $30 million in fees for Camber 7, Fort Denison, and the Hudson Mezzanine 1 and 2 CDOs. 12/20/2006
Gemstone 7 Securitization Credit Report, DB_PSI_00237655-71 and 3/15/2007 Gemstone CDO VII Ltd. Closing
Memorandum, DB_PSI_00133536-41; Subcommittee interview of Michael Lamont (9/29/2010); and Goldman
Sachs response to Subcommittee QFRs at PSI-QFR-GS0249.
Some of these financial products allowed investors to profit, not only from the success of
an RMBS or CDO securitization, but also from its failure. CDS contracts, for example, allowed
counterparties to wager on the rise or fall in the value of a specific RMBS security or on a
collection of RMBS and other assets contained or referenced in a CDO. Major investment banks
also developed standardized CDS contracts that could be traded on a secondary market. In
addition, they established the ABX Index which allowed counterparties to wager on the rise or
fall in the value of a basket of subprime RMBS securities, and which could be used to reflect the
state of the subprime mortgage market as a whole.
Investment banks sometimes matched up parties who wanted to take opposite sides in a
structured finance transaction, and other times took one or the other side of a transaction to
accommodate a client. At still other times, investment banks used these financial instruments to
make their own proprietary wagers. In extreme cases, some investment banks set up structured
finance transactions which enabled them to profit at the expense of their clients.
Two case studies, involving Goldman Sachs and Deutsche Bank, illustrate a variety of
troubling and sometimes abusive practices involving the origination or use of RMBS, CDO,
CDS, and ABX financial instruments. Those practices included at times constructing RMBS or
CDOs with assets that senior employees within the investment banks knew were of poor quality;
underwriting securitizations for lenders known within the industry for issuing high risk, poor
quality mortgages or RMBS securities; selling RMBS or CDO securities without full disclosure
of the investment bank’s own adverse interests; and causing investors to whom they sold the
securities to incur substantial losses.
In the case of Goldman Sachs, the practices included exploiting conflicts of interest with
the firm’s clients. For example, Goldman used CDS and ABX contracts to place billions of
dollars of bets that specific RMBS securities, baskets of RMBS securities, or collections of assets
in CDOs would fall in value, while at the same time convincing customers to invest in new
RMBS and CDO securities. In one instance, Goldman took the entire short side of a $2 billion
CDO known as Hudson 1, selected assets for the CDO to transfer risk from Goldman’s own
holdings, allowed investors to buy the CDO securities without fully disclosing its own short
position, and when the CDO lost value, made a $1.7 billion gain at the expense of the clients to
whom it had sold the securities. While Goldman Sachs sometimes told customers that it might
take an adverse investment position to the RMBS or CDO securities it was selling them,
Goldman did not disclose that, in fact, it already had significant proprietary investments that
would pay off if the particular security it was selling, or RMBS and CDO securities in general,
fell in value. In another instance, Goldman marketed a CDO known as Abacus 2007-AC1 to
clients without disclosing that it had allowed the sole short party in the CDO, a hedge fund, to
play a major role in selecting the assets. The Abacus securities quickly lost value, and the three
long investors together lost $1 billion, while the hedge fund profited by about the same amount.
In still other instances, Goldman took on the role of a collateral put provider or liquidation agent
in a CDO, and leveraged that role to obtain added financial benefits to the fiscal detriment of the
clients to whom it sold the CDO securities.
In the case of Deutsche Bank, during 2006 and 2007, the bank’s top CDO trader, Greg
Lippmann, repeatedly warned and advised his Deutsche Bank colleagues and some of his clients
seeking to buy short positions about the poor quality of the RMBS securities underlying many
CDOs, describing some of those securities as “crap” and “pigs.” At one point, Mr. Lippmann
was asked to buy a specific CDO security and responded that it “rarely trades,” but he “would
take it and try to dupe someone” into buying it. He also disparaged RMBS securities that, at the
same time, were being included in Gemstone 7, a CDO being assembled by the bank for sale to
investors. Gemstone 7 included or referenced 115 RMBS securities, many of which carried
BBB, BBB-, or even BB credit ratings, making them among the highest risk RMBS securities
sold to the public; yet Gemstone 7 received AAA ratings for its top three tranches. Deutsche Bank sold $700
million in Gemstone securities to eight investors who saw their investments rapidly incur
delinquencies, rating downgrades, and losses. Mr. Lippmann at times referred to the industry’s
ongoing CDO marketing efforts as a “CDO machine” or “ponzi scheme,” and predicted that the
U.S. mortgage market as a whole would eventually plummet in value. Deutsche Bank’s senior
