INTRODUCTION
In my report to you last year, I discussed four initiatives that I am undertaking as Director, Operational Test and Evaluation:
field new capability rapidly; engage early to improve requirements; integrate developmental, live fire, and operational
testing; and substantially improve suitability before initial operational test and evaluation (IOT&E). In this Introduction,
I report on the progress made implementing these initiatives, discussing several success stories as well as areas requiring
additional effort. I will first discuss key issues causing program delays in defense acquisition and the marginal cost of
operational testing. I will also include a discussion of operational test and evaluation (OT&E) interest areas, as well as a
summary of my monitoring and reporting activities on OT&E.
Additionally, I have included a new discussion in the Activity and Oversight chapter of this annual report containing my
assessment of significant issues observed in operational testing of systems under my oversight in 2010-2011. These issues, in
my view, should have been discovered and resolved prior to the commencement of operational testing. This new section also
provides my identification of significant issues observed in early testing of systems during 2010-2011 that, if not corrected,
could adversely affect my assessment of those systems’ effectiveness, suitability, and survivability during IOT&E.
PROGRAM DELAYS
In response to continuing comments by the acquisition community that testing drives undue requirements, excessive cost, and
added schedule into programs, I conducted a systematic review of recent major acquisition programs that experienced delays.
I examined these programs to determine the causes and lengths of program delays, and the marginal cost of operational test
and evaluation. The Under Secretary of Defense (Acquisition, Technology, and Logistics) (USD(AT&L)) had also chartered
a team to assess the acquisition community’s concerns regarding testing. The results of both studies indicated that testing
and test requirements do not cause major program delays or drive undue costs. Dr. Carter and I signed a joint memorandum
addressing these issues as well as other problems that were identified in the two studies, summarized below.
The USD(AT&L) study team found that tensions are often evident between programs and the test community; for the most part, these tensions are normal and healthy. However, there is room for improvement in these relationships and interactions. Four potential mitigations were identified:
• Stronger mechanisms for a more rapid adaptation to emerging facts
• A requirements process that produces well-defined and testable requirements
• Alignment of acquisition and test strategies (i.e., programs lack the budgetary and contract flexibility necessary to
accommodate discovery)
• Open communications between programs and testers, early and often, with constructive involvement of senior leaders
Causes of program delays
My review examined 67 major programs that experienced significant
delays and/or a Nunn-McCurdy breach. Thirty-six of these programs
experienced a Nunn-McCurdy breach and six of these programs
were ultimately canceled. (Two of the 36 Nunn-McCurdy programs
experienced no delays to their schedule.) We identified five categories of
problems that resulted in delays:
• Manufacturing and development (to include quality control, software
development, and integration issues)
• Programmatic (scheduling or funding problems)
• Poor performance in developmental testing (DT)
• Poor performance in operational testing (OT)
• Difficulties conducting the test (such as range availability, test
instrumentation problems, and other test execution problems)
Of the 67 programs, we found that 56 programs (or 84 percent) had performance problems in testing (either DT, OT, or both), while only eight programs (or 12 percent) had issues conducting the tests that led to delays.
Figure 1. Reasons behind program delays