Introduction
FY 2012 Annual Report
Since my confirmation as Director of Operational Test and Evaluation (DOT&E) in 2009, I have implemented initiatives to improve the quality of test and evaluation (T&E) within the Department of Defense. I have emphasized early engagement of testers in the requirements process, improving system suitability by designing reliability into systems from the outset, and integrating developmental, operational, and live fire testing. Implementing these initiatives has revealed the need for an additional area of focus – the requirement to incorporate statistical rigor in planning, executing, and evaluating the results of testing.
There are significant opportunities to improve the efficiency and the outcomes of testing by increasing interactions between the testing and requirements communities. In particular, there should be early focus on the development of operationally relevant, technically feasible, and testable requirements. In this Introduction, I discuss the crucial role the T&E community can and should play as requirements are developed. Additionally, I describe DOT&E efforts to institutionalize the use of statistical rigor as part of determining requirements and in T&E. I also provide an update on the Department's efforts to implement reliability growth planning and improve the reliability and overall suitability of our weapon systems. And lastly, I describe challenges and new developments in the area of software T&E.
Last year, I added a new section to my Annual Report assessing systems under my oversight in 2010 – 2011 with regard to problem discovery during testing. My assessment fell into two categories: systems with significant issues observed in operational testing that should, in my view, have been discovered and resolved prior to the commencement of operational testing, and systems with significant issues observed during early testing that, if not corrected, could adversely affect my evaluation of those systems' effectiveness, suitability, and survivability during Initial Operational Test and Evaluation (IOT&E). This year, I am providing an update on the status of those systems identified last year, as well as my assessment of systems under my oversight in 2012 within those two categories.
THE ROLE OF T&E IN REQUIREMENTS
There is an inherent and necessary link between the requirements and the test communities. The requirements community must state our fighting force's needs in the form of concrete, discrete capabilities or requirements. The testing community must then assess a system that is developed and produced to meet those requirements to determine whether it provides the military capability being sought; that is, we evaluate the system's operational effectiveness and suitability when used by our forces in combat. In my opinion, the collaboration between the requirements and the test communities needed to discharge these responsibilities must be strengthened.
In my report last year, I discussed the Defense Acquisition Executive (DAE) independent assessment of concerns that the Department's developmental and operational test communities' approach to testing drives undue requirements, excessive cost, and added schedule into programs. The DAE assessment team "found no significant evidence that the testing community typically drives unplanned requirements, cost, or schedule into programs." However, they did note four specific areas that needed attention:
"The need for closer coordination and cooperation among the requirements, acquisition, and testing communities; the need for well-defined testable requirements; the alignment of acquisition strategies and test plans; and the need to manage the tension between the communities."
The lack of critically needed collaboration among the technical, test, and requirements communities is not new. The 1986 Packard Commission found that success in new programs depends on "an informed trade-off between user requirements, on one hand, and schedule and cost, on the other." It therefore recommended creation of a new body representing both military users and acquisition/technology experts. This ultimately led to the creation of the Joint Requirements Oversight Council (JROC), which includes the military operators as formal members but includes, as advisors only, the acquisition and test communities. In 1998, the National Research Council (NRC) identified the need for greater interaction between the test and the requirements communities; the NRC pointed out that operational test personnel should be included in the requirements