CMC ELECTRONICS INC.
File No. PR-2001-052

Ottawa, Thursday, May 2, 2002

File No. PR-2001-052

IN THE MATTER OF a complaint filed by CMC Electronics Inc. under subsection 30.11(1) of the Canadian International Trade Tribunal Act, R.S.C. 1985 (4th Supp.), c. 47;

AND FURTHER TO a decision to conduct an inquiry into the complaint under subsection 30.13(1) of the Canadian International Trade Tribunal Act.

DETERMINATION OF THE TRIBUNAL

Pursuant to subsection 30.14(2) of the Canadian International Trade Tribunal Act, the Canadian International Trade Tribunal determines that the complaint is valid in part.

Pursuant to subsections 30.15(2) and (3) of the Canadian International Trade Tribunal Act, the Canadian International Trade Tribunal recommends that the results of the evaluation of all rated requirements of the Request for Proposal, in relation to the consequence of failure factor for the three proposals received, be set aside. Furthermore, it recommends that the Department of Public Works and Government Services and the Department of National Defence re-evaluate the consequence of failure factor, including the key risk categories upon which it depends, for all rated requirements, essential or not, in both the technical and management proposals for the three proposals submitted. The re-evaluation will be conducted strictly according to the criteria and methodology set out in the Request for Proposal, including the Evaluation Plan. This will require the removal, from the electronic proposal evaluation software, of the consequence of failure categories selected by the subject matter experts, which are inserted in the evaluation screens under the tab "Guidance to Evaluators", and of the estimated scores for the consequence of failure factor. The estimated scores for the consequence of failure factor will, in no way, be used in the re-evaluation. The procurement process will proceed as provided for in the above-mentioned solicitation documents and the Agreement on Internal Trade.

The re-evaluation will be conducted by a new evaluation team that will be composed of members other than those involved in the original evaluation and will exclude the fourth subject matter expert who was involved in determining the estimated scores for the consequence of failure factor prior to the evaluation of proposals.

Pursuant to subsection 30.16(1) of the Canadian International Trade Tribunal Act, the Canadian International Trade Tribunal awards CMC Electronics Inc. its reasonable costs incurred in preparing and proceeding with the complaint.



Patricia M. Close
Presiding Member

Richard Lafontaine
Member

Ellen Fry
Member


Michel P. Granger
Secretary

The statement of reasons will follow at a later date.
 
 

Date of Determination: May 2, 2002
Date of Reasons: May 23, 2002
Tribunal Members: Patricia M. Close, Presiding Member; Richard Lafontaine, Member; Ellen Fry, Member
Investigation Manager: Paule Couët
Investigation Officer: Ronald B. Harrigan
Counsel for the Tribunal: John Dodsworth
Complainant: CMC Electronics Inc.
Counsel for the Complainant: Gregory O. Somers; Paul D. Conlin
Interveners: Thales Systems Canada; DRS Technologies Inc.
Counsel for the Interveners: Barbara A. McIsaac, Q.C.; Kris Klein; Ronald D. Lunau; Phuong T.V. Ngo
Government Institution: Department of Public Works and Government Services
Counsel for the Government Institution: David M. Attwater

Ottawa, Thursday, May 23, 2002

File No. PR-2001-052

IN THE MATTER OF a complaint filed by CMC Electronics Inc. under subsection 30.11(1) of the Canadian International Trade Tribunal Act, R.S.C. 1985 (4th Supp.), c. 47;

AND FURTHER TO a decision to conduct an inquiry into the complaint under subsection 30.13(1) of the Canadian International Trade Tribunal Act.

STATEMENT OF REASONS

COMPLAINT

On December 18, 2001, CMC Electronics Inc.1 (CMC) filed a complaint with the Canadian International Trade Tribunal (the Tribunal) under subsection 30.11(1) of the Canadian International Trade Tribunal Act.2 The complaint concerns a procurement (Solicitation No. W8485-01NA20/B) by the Department of Public Works and Government Services (PWGSC), on behalf of the Department of National Defence (DND), for the supply of a communications management system (CMS) for the Canadian Forces' CP-140 Aurora aircraft. The work includes, but is not limited to, the design, building, installation and integration of a new CMS solution for the Aurora aircraft fleet;3 the modification of various aircraft sub-systems; the provision of a modified systems integration lab and a modified integrated avionics trainer; the provision of an integrated logistics support program; the provision of program management activities to deliver the requirements; and the provision of necessary data, rights and documentation as are required to ensure the successful delivery and completion of the CMS solution.

CMC alleged that, contrary to Articles 501 and 506(6) of the Agreement on Internal Trade,4 PWGSC disregarded the Evaluation Plan (EP) by applying default "consequence of failure"5 scores rather than individual assessments when evaluating the consequence of failure, as provided in the Request for Proposal (RFP); evaluated proposals using unpublished evaluation weighting factors; erroneously calculated the total scores of bidders; prevented bidders from knowing the severe evaluation point loss associated with low-weighted proposed elements; and applied undisclosed evaluation criteria, i.e. aircraft-installed experience, in the evaluation of proposals.

CMC requested that the Tribunal issue an order postponing the award of any contract in relation to this solicitation until the Tribunal determined the validity of the complaint. CMC further requested, as a remedy, that the Tribunal recommend that all bidders that submitted bids in this solicitation be permitted to obtain disclosure of all applicable evaluation criteria, including scores for consequence of failure, that they be given the opportunity to repair bids and adequate time to do so, and that such bids be re-evaluated exclusively in accordance with disclosed criteria. CMC requested that the Tribunal recommend that it be compensated by an amount equal to its lost profits and lost opportunity in being wrongly deemed non-compliant. In addition, CMC requested that the Tribunal award it its costs incurred in preparing a response to the solicitation and in proceeding with this complaint.

On December 24, 2001, the Tribunal informed the parties that the complaint had been accepted for inquiry, as it met the requirements of subsection 30.11(2) of the CITT Act and the conditions set out in subsection 7(1) of the Canadian International Trade Tribunal Procurement Inquiry Regulations.6 That same day, the Tribunal issued an order postponing the award of any contract in relation to this solicitation until the Tribunal determined the validity of the complaint. On January 4 and 16, 2002, the Tribunal informed the parties that Thales Systems Canada7 (Thales) and DRS Technologies Inc. (DRS), respectively, had been granted intervener status in the matter.

On January 17, 2002, CMC filed a motion with the Tribunal for the production of documents by PWGSC. On January 28, 2002, PWGSC filed a Government Institution Report (GIR) with the Tribunal in accordance with rule 103 of the Canadian International Trade Tribunal Rules.8 The same day, PWGSC filed comments on CMC's motion of January 17, 2002, submitting that it should not be required to respond to a motion for the production of documents until CMC had established a prima facie case of their relevance to the issues raised in the complaint. On February 1, 2002, CMC filed with the Tribunal a rationale in support of the production of additional documents by PWGSC. On February 13, 2002, the Tribunal ordered that PWGSC file additional documents relating to the evaluation of proposals. On February 18, 2002, PWGSC filed additional documents with the Tribunal. On March 1, 2002, CMC and Thales filed comments on the GIR with the Tribunal. DRS, an intervener in the matter, did not file submissions in response to the GIR.9 On March 5, 2002, Thales filed comments in response to CMC's comments. On March 7, 2002, PWGSC wrote to the Tribunal indicating that it would not request the right to make further submissions and requesting that the Tribunal expedite its inquiry.

On March 13, 2002, the Tribunal requested that PWGSC provide additional information. PWGSC filed the additional information with the Tribunal on March 20, 2002. Thales filed comments in response on March 27, 2002, and CMC and DRS did likewise on March 28, 2002.

Given that there was sufficient information on the record to determine the validity of the complaint, the Tribunal decided that a hearing was not required and disposed of the complaint on the basis of the information on the record.

PROCUREMENT PROCESS

On October 26, 2000, a Letter of Interest and draft RFP10 were posted on MERX.11 This was followed by a site visit and briefing sessions at Canadian Forces Base Greenwood on November 29 and 30, 2000. Eight companies, including CMC, participated in these events. On January 23, 2001, PWGSC issued the RFP for this solicitation. The RFP closed on April 19, 2001. Three bids were received, one each from CMC, DRS and Thales. The RFP included, at Appendix H, the EP,12 with the project evaluation tree attached as Annex A.

The RFP includes the following provisions that are relevant to this case:

[Article 26.2.1]
It is the Bidders' sole responsibility to provide sufficient information to permit a full understanding of what is proposed.
[Article 27.1.1]
The evaluation of the Bidder's offer shall be based solely on the contents of its proposal in response to the RFP, the SOW, the Specifications, and the documents called up therein. Failure to provide sufficient information in any area may result in the assumption of non-compliance in that area.
[Article 27.2.3]
Those bids which are deemed Technically Compliant with the Mandatories will then be evaluated in accordance with the Evaluation Plan included herewith, to determine compliance with the Rated Essential Management & Technical Requirements of the SOW and the Specification, and must receive a pass mark of 70% in each of the Management and Technical requirements in order to be evaluated further.

The following provisions of the EP13 are relevant to this case (an illustrative sketch of the weighting and scoring mechanics appears after the excerpt):

3.4 STAGE 3 - DETAILED EVALUATION OF RATED REQUIREMENTS, PROPOSAL SCORE ROLL-UPS AND RANKINGS
Compliant proposals from stage 2 will be evaluated in detail against the Rated requirements in terms of compliance, capability and risk. Upon completion of the detailed evaluations (including any re-adjustment for individual score divergences) the ETL [Evaluation Team14 Leader] shall use the automated features of the Project Evaluation Software (PES)[15] to arrive at scores for all proposals in accordance with the methodology described in this plan. . . . Rated requirements fall into two categories; essential and desirable.
3.9 MANDATORY VERSUS RATED REQUIREMENTS
To safeguard against a proposal not delivering the essential requirements of the CMS work, DND has identified certain items to be Mandatory Requirements. These are identified in Appendix I of the RFP.
A Mandatory Requirement is defined as a requirement that must be met in order for the Bidders' proposals to be further considered for evaluation. Mandatory Requirements are assessed as either compliant or non-compliant prior to scoring and any non-compliant proposals will be eliminated.
Rated requirements fall into two categories - essential requirements and desirable requirements. Rated essential requirements are designated through the use of the word "shall". Rated desirable requirements are designated through the use of the words "should" or "may".
For each proposal, these rated requirements are assessed and scored to determine the proposal's degree of compliance, the bidder's capability to meet the requirement, and any associated risks.
3.10 EVALUATION INTEGRITY & CONSISTENCY
The evaluation process, procedures and interpretation of requirements shall not be changed once the RFP has been released to industry unless an RFP amendment is issued by PWGSC.
6. TECHNICAL AND MANAGEMENT SCORING METHODOLOGY
. . . Each proposal that meets the Mandatory Requirements will be evaluated using the methodology outlined below. Any bidder's proposal that fails to achieve an overall score of 70 percent (against the rated essential requirements only), in each of the CMS SOW and CMS Functional Performance Specification, shall be rejected on the basis that it fails to meet the Technical and Management requirement. The pass or fail calculation shall be done against the rated essential requirements only. The calculation for the final cost per point shall include the rated essential requirements and the rated desirable requirements.
6.1 REVIEW OF MANDATORY REQUIREMENTS
Proposals to meet Mandatory Requirements shall be rated as either "Compliant" or "Non-compliant." Proposals shall be eliminated based on any non-compliance with Mandatory Requirements.
6.2 EVALUATION TREE CRITERIA AND ASSIGNED WEIGHTS
For the Management and Technical portions of the evaluation, this EP is built around an Evaluation Tree (ET). The ET is provided at Annex A.
The Management Proposal is weighted at 35% and includes the Bidders' responses to the SOW requirements in the RFP. The Technical Proposal is weighted at 65% and includes the Bidders' responses to the CMS Functional Performance Specification requirements in the RFP.
The Technical and Management main branch of the ET is broken down into major sub-branches organized along the functional lines found in the project RFP documents. Weights have been designed to ensure appropriate balance and relative importance has been catered to. Note that the weighting factors determine the amount of influence that every proposal element contributes to the overall scores.
6.3 WEIGHTING FACTORS
The numerical weights apportioned to the evaluation criteria (i.e. the evaluated SOW and Specification package) provide a measure of their relative importance in deciding on the winning Proposal. They are computed using the Value Tree method, whereby criteria are organized into a hierarchy tree, with branches and sub-branches. Scoring is then done against the lowest level criteria.
Weights are assigned to branches and sub-branches until all branches have been assigned weights. Branch weights are assigned such that they sum to a normalised value, such as 100. The Final Criteria Weights are calculated as the products of the criterion branch weights times the weights assigned to all of its ancestor branches.
6.4 EVALUATION TECHNICAL AND MANAGEMENT SCORING
This EP presents a standard method for evaluating the Bidders' proposals against each of the Technical and Management branch elements. Element scores will be based on the following factors:

a. Compliance Score. The evaluation by the evaluator of the degree which the Bidder states that the requirement will be met, as proposed. It is a measure of how well the proposal meets (or states it meets) the requirement.
b. Capability Score. The evaluation by the evaluator of the Bidders' capability to meet requirements, as proposed. It is a measure of the capability of the Bidder to meet the requirement.
c. Risk Factors. The evaluation by the evaluator of the measure of the risk associated with meeting requirements, as proposed. It is a measure of the level of risk associated with the Bidder meeting the criterion. The key risk categories considered are risks associated with operations, support, or schedule. Other categories of risk may be accommodated.

It is the bidder's responsibility to prove compliance and demonstrate capability and the level of risk. If this requires the delivery of supporting documentation in the proposal, the burden of proof is their responsibility, without DND prompting.
6.5 INDIVIDUAL SCORING
All proposals meeting the Mandatory Requirements will be subject to Technical and Management scoring by assigned evaluation team members. Technical and Management scoring will be conducted against each individual evaluation element or criteria. For each proposal, each evaluation element shall be independently assessed in order to assign a Compliance Score, a Capability Score and Risk Score.
6.5.2 Capability Scoring
Capability scoring of each proposal shall be based on the following "word pictures" and scores.
The question that shall be asked is "Based on the claims in the proposal, what is the Bidder's capability to meet this requirement?"
Capability

· Previously Proven/Demonstrated Capable 10
· Claimed Capable (with credible approach and reasonable evidence) 8
· Probably Capable (credible approach, limited direct evidence) 7
· Perhaps Capable (insufficient evidence) 3
· Not Likely Capable 1
· . . .
· No Information or Clearly Not Capable 0

6.6 RISK SCORING
Risk scoring shall consider "Probability" and "Consequence" (of failure) separately and shall be based on the "word pictures" and scores in the following sections.
Using the PES, the evaluators select from a single16 Consequence score scale (Major, Very Significant, etc.) and enter a single Consequence assessment (i.e., based on operational, support or schedule related consequence categories).
6.6.1 Scoring Probability
When scoring the probability of failure associated with proposal elements, the question that will be asked is:
"Based on the proposal, what is the likelihood that this Bidder will fail to meet this requirement?"
Answer: The likelihood is: Score
Highly Probable - almost certain to fail 1
Very Probable - strongly suspect will fail 3
Probable - good chance will fail 5
Might Happen - may or perhaps will fail 8
Improbable - not likely to fail 10
When scoring "Probability", only "likelihood" should be considered, independent of any other aspect of risk.
6.6.2 Scoring Consequence for Operational Risks
When considering the Operational consequence of failure associated with proposal elements, the question that will be asked is:
"If the Bidder fails to meet this requirement, what would the impact on operations be?"
Answer: The most likely impact would be: Score
Major - Jeopardise Project 1
Very Significant - Potentially Compromise Airworthiness or Capability 3
Significant - Significantly Reduce Capability/Availability 5
Limited - Slightly Reduce Capability/Availability 8
None/Minimal - Minimal to No significant impact 10
6.6.3 Scoring Consequence for Support Risks
When considering the Support related consequence of failure associated with proposal elements, the question that will be asked is "If the Bidder fails to meet this requirement, what would the impact on support be?"
Answer: The most likely impact would be: Score
Major - Make System Generally Unsupportable 1
Very Significant - Major reduction in Supportability 3
Significant - Significant Reduction in Capability/Readiness 5
Limited - Slight Reduction in Supportability 8
None/Minimal - Minimal to No significant impact 10
6.6.4 Scoring Consequence for Schedule Risks
When considering the Schedule related consequence of failure associated with proposal elements, the question that will be asked is "If the Bidder fails to meet this requirement, what would the impact on schedule be?"
The impact would be: Score
Major - Potential to Stop Project or Delay for Years 1
Very Significant - Potential Major Delay (> 1 year) 3
Significant - Potential Major Delay (3-12 months) 5
Limited - Slight Delay (1-2 months) 8
None/Minimal - Minimal Delay (< 1 mo) 10
6.6.5 Scoring Consequence for Programmatic or Other Risks
Consequence scores may be assigned for elements where the consequence of failure cannot easily be categorised as being operational, support or schedule related. For these elements, the ETL may define appropriate word pictures using a similar scoring scale.
6.7 CONSENSUS SCORING
As directed by the ETL, the individual scoring of Compliance, Capability and Risk Factors will be reviewed to achieve consensus by the evaluation team. The ETL is responsible for determining and documenting that consensus scores have been reached. Once consensus is achieved, the rolled up evaluation results are based on these consensus scores.
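
The preceding excerpt describes the weighting and scoring mechanics in prose. The following minimal Python sketch shows how final criterion weights roll up under the Value Tree method of section 6.3. Only the product rule and the published 35/65 Management/Technical split (section 6.2) come from the EP; the tree shape, branch names and sub-branch weights below are invented for illustration.

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Branch:
    name: str
    weight: float                       # weight relative to sibling branches (normalised to 100 per level)
    children: List["Branch"] = field(default_factory=list)

def final_weights(branch: Branch, ancestors: float = 1.0, path: str = "") -> Dict[str, float]:
    # Section 6.3: the final criterion weight is the product of the criterion's
    # branch weight and the weights assigned to all of its ancestor branches.
    product = ancestors * branch.weight / 100.0
    if not branch.children:             # lowest-level criterion: scoring is done here
        return {path + branch.name: product}
    result: Dict[str, float] = {}
    for child in branch.children:
        result.update(final_weights(child, product, path + branch.name + "/"))
    return result

# Hypothetical tree using the published top-level split (Management 35%,
# Technical 65%, per section 6.2); the sub-branches are invented.
tree = Branch("Proposal", 100.0, [
    Branch("Management", 35.0, [Branch("Program Management", 60.0), Branch("ILS", 40.0)]),
    Branch("Technical", 65.0, [Branch("CMS Core", 70.0), Branch("Integration", 30.0)]),
])

for criterion, weight in final_weights(tree).items():
    print(f"{criterion}: {weight:.3f}")   # e.g. Proposal/Technical/CMS Core: 0.455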

On October 2, 2001, PWGSC advised CMC, in writing, of its scores determined by the PES for the rated essential requirements of the Statement of Work (SOW) and the Functional Performance Specification (the Specification). CMC was also advised that it had not achieved the 70 percent passing mark, as required by the Specification.

In correspondence dated December 4, 2001, PWGSC advised CMC that the scores for consequence of failure for each requirement were considered by the individual evaluators and again by the evaluation team when determining consensus scores. According to the GIR, individual evaluators actually changed some estimated scores.17 However, the evaluation team determined that the consensus score should be the estimated score.
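
The dispute turns on how a preloaded "estimated" score interacts with individual and consensus scoring. The following is a minimal sketch of that mechanism under an assumed data model; the structure and field names are hypothetical and are not drawn from the actual PES.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ConsequenceEntry:
    requirement_id: str
    default_score: int                      # estimated score preloaded by the subject matter experts
    evaluator_score: Optional[int] = None   # set only if the evaluator overrides the default

    @property
    def effective_score(self) -> int:
        # The crux of the complaint: where no evaluator overrides, the
        # preloaded estimate silently becomes the awarded score.
        return self.default_score if self.evaluator_score is None else self.evaluator_score

def consensus(scores: List[int]) -> int:
    # Section 6.7 reaches consensus by discussion; modelling unanimity as
    # "the score stands" mirrors the process PWGSC described in the GIR.
    if len(set(scores)) == 1:
        return scores[0]
    return round(sum(scores) / len(scores))  # placeholder for the discussion step

# Six evaluators, none of whom overrides a default of 5 ("Significant"):
entries = [ConsequenceEntry("SPEC-3.3.21", default_score=5) for _ in range(6)]
print(consensus([e.effective_score for e in entries]))   # -> 5, i.e. the default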

POSITION OF PARTIES

PWGSC's Position

PWGSC submitted that CMC failed not because of its scores for consequence of failure but because it was declared non-compliant on some of the 249 rated essential requirements; compliance was the predominant evaluation factor, modulated by the factors of capability, probability and consequence.

PWGSC admitted that estimated scores were used in the PES for assessing the risk associated with the consequence of failure. Estimated scores were decided over several days by a team of four subject matter experts from DND, three of whom were subsequently assigned to the evaluation team. Each requirement of the Specification and SOW was reviewed to discuss the impact of a requirement not being met, using the four published consequence categories, i.e. operational, support, schedule and programmatic. The category most impacted was identified and, using the word pictures and definitions for consequence of failure set out in the EP and the anticipated proposals from bidders, a default score was assigned to each requirement representing the best estimate of the most likely consequence of a bidder failing to meet that requirement. The consequence category preselected by the subject matter experts to estimate the consequence of failure was identified in the PES under the tab "Guidance to Evaluators".

PWGSC added that DND used its best estimate of the consequence of failure to determine the most appropriate passing mark for stage 3 of the evaluation process. Using estimated scores for consequence of failure and minimum acceptable scores for compliance, capability and probability of risk, DND determined that the minimum passing mark was 70 percent, based on a sensitivity analysis. This scoring would pass a proposal that, at a minimum, was compliant, had claimed to be capable of meeting the requirements and provided evidence to support that claim, and for which the probability of failing the requirement was somewhere between "improbable" and "might happen".
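
The EP's roll-up formula (section 6.8) is not reproduced in this document, so the following sketch only illustrates the shape of such a sensitivity analysis; the equal-weight average is an assumed combining rule, not DND's actual method, and its output is not expected to reproduce the 70 percent figure.

def element_score(compliance: float, capability: float,
                  probability: float, consequence: float) -> float:
    # Assumed combining rule (for illustration only): a simple average of
    # the four factor scores, each on the EP's 0-10 word-picture scales.
    return (compliance + capability + probability + consequence) / 4.0

# A minimally acceptable proposal per the GIR's description: compliant (10),
# "Claimed Capable" (8), probability of failure between "Might Happen" (8)
# and "Improbable" (10). Sweeping the estimated consequence scores shows how
# the chosen defaults shift the percentage such a proposal achieves, which is
# the sense in which the estimates fed the pass-mark sensitivity analysis.
for consequence in (1, 3, 5, 8, 10):
    for probability in (8, 10):
        s = element_score(10, 8, probability, consequence)
        print(f"consequence={consequence:2d} probability={probability:2d} "
              f"score={s:4.1f}/10 ({s * 10:.0f}%)")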

PWGSC submitted that most of CMC's grounds of complaint are based on the false assumption that consequence of failure was not individually scored and that the scores attributed were merely values assigned to bids, which nothing that bidders proposed could change. PWGSC submitted that, on this basis, CMC alleged that the Crown failed to apply the evaluation criteria specified in the RFP.

In response, PWGSC submitted that the facts of this case indicate that: (1) estimated scores for consequence of failure were included in the PES for practical reasons; (2) scores for consequence of failure were a variable within the PES; (3) evaluators were at complete liberty to change the estimated scores for consequence of failure; (4) each evaluator independently scored each of the 709 rated requirements using the four factors specified in the RFP, including consequence of failure; (5) individual evaluators awarded scores for consequence of failure different from those estimated by DND; (6) the evaluation team considered the scores set by the evaluators for the purpose of coming to consensus scores; and (7) the evaluation team awarded consensus scores for consequence of failure similar to those estimated by the subject matter experts from DND.

PWGSC further elaborated that, during the pre-evaluation briefing, the evaluation team leader verbally briefed all evaluators on the process by which scores for consequence of failure were estimated and the reason for displaying these data as default scores in the PES. It submitted that the evidence18 shows that consequence of failure was a live evaluation factor. Evaluators were informed that, for practical reasons, estimated scores for consequence of failure had been imported as default scores in the PES, but that consequence of failure remained an evaluated measure and that scores for consequence of failure had to be changed if, for any reason, an evaluator's assessment of consequence of failure differed from the estimated scores.

For these reasons, PWGSC concluded that the scores for consequence of failure received by bidders were not values immutably assigned and that the use of estimated scores in the PES was not inconsistent with the EP or with the evaluation criteria specified in the RFP.

PWGSC acknowledged that, if the estimated scores for consequence of failure were essentially fixed prior to the evaluation of proposals, they could constitute "weights". However, in this instance, estimated scores were not immutably fixed. Therefore, the consensus scores awarded by the evaluation team constitute scores and not weights.

PWGSC submitted that, in assessing risk, the probability of failure must be distinguished from the consequence of failure. It submitted that consequence is a function of a requirement not being met. Given that all bids had similar architectures, the scores for consequence of failure for all 709 rated requirements should be similar for all bidders. PWGSC argued that, while different products may result in different scores for probability of failure, the consequence of not meeting a particular requirement should be similar for all bidders with similar architectures. In this instance, the consensus scores awarded by the evaluation team for consequence of failure did not vary between bids because the three bidders made similar proposals, consistent with what was anticipated by DND. Furthermore, no bidder proposed features to mitigate the consequence of non-compliance with a rated requirement. PWGSC submitted that most of the mitigating features proposed by bidders were geared towards probability of failure and, therefore, did not change the scores for consequence of failure. Moreover, taking into consideration the granularity of the scoring scale for consequence of failure (i.e. possible scores of 1, 3, 5, 8 and 10), PWGSC submitted that the proposals did not introduce anything significant enough to justify a change to the estimated scores for consequence of failure.

With respect to the role of the bidders' initial assessment of risk in evaluating consequence of failure, PWGSC submitted that the initial assessment of risk, required of bidders at article 10.11 of the SOW, describes a process to identify, control and track risks throughout the project life. It submitted that this initial assessment of risk must be contrasted to features that may have been proposed by a bidder to mitigate the consequence of non-compliance with a rated requirement of the RFP.

PWGSC submitted that CMC's allegation that its score is well above the minimum compliance level rests on misleading arithmetic. CMC's claim is based on its scoring 100 percent of the points available for consequence of failure under its revised scenario,19 which is inconsistent with the evaluation team's assessment on this point. PWGSC further indicated that, as all points available for consequence of failure could have been awarded under the EP specified in the RFP, a reduction from 61.227 to 55.4014 points for the rated essential requirements of the Specification is not justified.

Furthermore, PWGSC argued that removing consequence of failure from the evaluation formula specified in section 6.8 of the EP would result in an inaccurate assessment of risk and, assuming all else remained the same, would not change CMC's situation, as its revised score would remain below the revised passing mark. PWGSC asserted that at no time had it admitted that all bidders received identical default scores for consequence of failure that bore no relation to any particular bidder's proposal, reputation, experience or any other individually assessed variable. PWGSC submitted that, contrary to CMC's assertion, bidders were aware of the risk factors in the RFP. Rated requirements were clearly distinguished between essential and desirable, and weights for every rated requirement were published in the RFP. This, PWGSC submitted, informed bidders of the relative importance of each of the 709 rated requirements and allowed them to structure their proposals accordingly.

PWGSC argued that the allegedly unanticipated results of manipulating information in the RFP were available prior to CMC's bid being declared non-compliant. PWGSC submitted that estimated and actual scores for consequence of failure simply did not have the perverse effect of greatly increasing the loss-of-points impact of low-weighted but "higher probability of failure" elements, as alleged by CMC. Furthermore, the proposals passed or failed stage 3 of the evaluation process based on their element scores and not on their normalized scores.20

As to CMC's allegation of undisclosed preferences, PWGSC submitted that there were none used in the evaluation of any proposal. It is clear from the RFP that DND requires a CMS for operational CP-140 Aurora aircraft and not for laboratory testing. The vast majority of the 709 rated requirements of the CMS are sensitive to DND's needs for an airworthy communications system.

PWGSC submitted that, when assessing capability, "Previously Proven/Demonstrated Capable" was the highest score that a bidder could achieve. This status, PWGSC argued, connotes that the capability of achieving a requirement has previously been proven or has been "demonstrated". In addition, the RFP (articles 1.2.2 and 3.2.1 of the SOW) encouraged bidders to use commercial off-the-shelf subsystems and non-developmental hardware and software components. PWGSC submitted that, where an evaluator was assessing the capability of a proposal to meet a specific technical requirement of the CMS for an operational CP-140 Aurora aircraft, it was reasonable for the evaluator to require appropriate evidence of the highest possible capability of meeting the requirement before awarding the highest possible score for capability. Bidders had the onus of providing sufficient evidence to permit a full understanding of their proposals, to prove compliance and demonstrate capability and the level of risk. PWGSC submitted that these requirements were clearly stated in several provisions of the RFP.

PWGSC submitted that the inter-communication system21 proposed by CMC was not scored as a whole during stage 3 of the evaluation process. Rather, numerous aspects of the system proposed by CMC were each scored against the rated requirements comprising the inter-communication system, using the four evaluation factors. PWGSC further submitted that, of the 48 requirements under article 3.3.21 of the Specification comprising some aspects of the system, a significant number obtained the highest available capability score. On this basis, PWGSC denied that only aircraft-installed systems would be deemed previously proven and obtain a score of 10.

PWGSC submitted that, for those requirements that were sensitive to the environmental conditions of an operational CP-140 Aurora aircraft, such as temperature, pressure, noise, humidity, vibration, airborne particles, gravitational forces, electro-magnetic interference, etc., it was open to evaluators to require evidence of the system performance level before according the highest possible score for capability. PWGSC argued that, because some of these requirements cannot be fully simulated,22 it could reasonably be inferred from the EP and the technical requirements of the RFP that evaluators could require evidence from flight performance before awarding the highest score for capability of certain proposed features of the CMS.

The EP did not dictate the type of evidence that an evaluator could require to establish to his/her satisfaction that a capability was previously proven or demonstrated. It was open to evaluators, based on the word pictures included in the EP, to award scores as they considered appropriate in the circumstances. PWGSC submitted that what CMC is asking, in effect, is that the Tribunal substitute its judgement on scores for that of the evaluation team. PWGSC submitted that the evaluation team is best qualified and placed to award scores.

PWGSC further submitted that the numerical weight assigned for each element of the CMS had no bearing on the scores for consequence of failure. It submitted that weight is a measure of importance of a requirement to DND, while consequence of failure is a measure of the impact to the project of a bidder failing to meet a requirement. The assignment of weights and the estimation of scores for consequence of failure were done at different times. Weights were assigned prior to the RFP being issued, while scores for consequence of failure were estimated after the RFP was issued.

PWGSC indicated that the process by which consensus scoring was achieved varied depending on the individual scores. Where all evaluators had the same score, the score remained unchanged unless new information came to light, e.g. response to a clarification question. Where individual scores differed slightly, there was a discussion until the evaluation team unanimously agreed on a consensus score that reflected the best evaluation of the bidder's response. Where one individual score differed significantly from the rest, the evaluator with the different score explained the basis for the score and, after discussion, a consensus was reached.23 Where individual scores were divided, there was a discussion until all evaluators reached an agreement.

PWGSC reserved the right to make further submissions on the award of costs.

Thales' Position

Thales submitted that there was clearly an individual assessment of the consequence of failure portion of each proposal.

Thales submitted that it is unreasonable to expect the government to publish values for consequence of failure without knowing for sure what approach the bidders would take to meet the requirements. However, first establishing a set of estimated values for consequence of failure and then comparing the bidders' approaches to that set of values is an entirely reasonable way to proceed and the only way to ensure some consistency in the values. Thales argued that the fact that the estimated set of scores ultimately remained unchanged is an indication that DND found the bidders to have similar approaches to meeting the requirements.

With respect to the application of undisclosed evaluation criteria during the evaluation of proposals, Thales submitted that it was clear from the outset that, from the point of view of technical evaluation, the project management office preferred off-the-shelf, non-developmental items for the CMS with previously proven/demonstrated capability. This was fully disclosed through briefings with the proponents and through the structure of the EP. Comparing its proposed, already flying, Palomar system to the laboratory-demonstrated Telephonics system proposed by CMC, Thales argued that it would be unreasonable to assess the relative maturity of the two systems with the same score. Furthermore, Thales submitted that the distinction in maturity between the two systems was fairly and reasonably reflected in a score of 10 for the former and a score of 8 or less for the latter.

Thales requested its costs incurred in participating in this complaint. It submitted that it was reasonable and necessary that it participate in these proceedings to protect its position as the successful proponent.

Should the Tribunal find that the complaint is valid, Thales submitted that it would not be possible to grant the relief sought by CMC. Thales submitted that the primary relief sought in paragraph 8 of CMC's complaint is not available because CMC has now received disclosure of details24 of Thales' bid and that it would be unfair and inappropriate to now allow CMC an opportunity to repair its bid as part of a re-evaluation exercise conducted to correct the situation.

CMC's Position

CMC submitted that a fundamental issue found in the complaint and the GIR arises from disagreement as to whether the Crown was required, under the AIT, to disclose evaluative information that impacted the scores received by bidders. CMC submitted that the failure to disclose the scores for consequence of failure denied bidders the opportunity to effectively respond to the solicitation. This omission undermined the ability of bidders to put forward bids that reflected best value because the relative importance of the requirements of the RFP was not disclosed to bidders. These scores could not have been anticipated by bidders, nor derived from the RFP. They were an integral part of the evaluation methodology and, as such, needed to be disclosed in the tender documentation in accordance with Articles 501 and 506(6) of the AIT.

CMC submitted that, for the Crown to achieve "best value" and for potential suppliers to have a fair opportunity to supply the Crown's needs, it is critical that tender documents clearly and accurately reflect the importance ascribed to the various elements of a given solicitation. In solicitations that are ultimately judged on a cost-per-point basis, such as this one, potential suppliers are required to weigh the cost of each element of their proposals against its perceived importance to the Crown. CMC submitted that the failure to disclose the scores for consequence of failure assigned by DND and accepted by the evaluation team and the requirement for flight performance of certain components frustrated its ability to respond to the solicitation and breached the provisions of the AIT.

CMC further submitted that, contrary to paragraph 6.3 of the EP and PWGSC's assertion in the GIR,25 the actual weights and scores for consequence of failure assigned to requirements do not support the assertion that "[m]ore important requirements were assigned a higher numerical weight." When viewed in light of the estimated scores for consequence of failure, there appear to be many inconsistencies and contradictions in the numerical weights.26

CMC submitted that, by withholding the estimated scores for consequence of failure, the government did not clearly identify the methods of weighting and evaluating the criteria, as required by Article 506(6) of the AIT. CMC submitted that the scores for consequence of failure were, in fact, predetermined and not scored by the evaluation team and, consequently, had the impact of a weighting methodology.

CMC submitted that a proper interpretation of the EP is that bidders were required, inter alia, to achieve 70 percent of the maximum available points against the rated essential requirements only in order to be compliant. However, CMC argued, it was impossible for any bidder to achieve the maximum score for consequence of failure for every requirement because DND's pre-assigned scores for consequence of failure represented approximately 70 percent of the maximum score set out in the RFP, thereby reducing the number of available points from 62.02 to 55.40. In this context, CMC submitted, the percentage achieved by a bidder should have been calculated by dividing the bidder's points by the maximum available points (55.40) to determine if the required overall score of 70 percent was achieved.
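
To make CMC's arithmetic concrete with a hypothetical figure: a bidder awarded, say, 42 points against the rated essential requirements of the Specification would stand at 42 / 62.02 ≈ 67.7 percent against the published maximum (a fail) but at 42 / 55.40 ≈ 75.8 percent against the maximum that, on CMC's argument, was effectively attainable (a pass). The 42-point figure is illustrative only and does not come from the record.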

CMC did not suggest that consequence of failure be eliminated from the evaluation of proposals. Rather, the scores for consequence of failure should have been either determined in accordance with the EP or, if pre-assigned scores were used, released to bidders, since consequence of failure was clearly not a variable dependent upon the contents of bids. CMC argued that pre-assigning scores to consequence of failure and then leaving them the same for all bidders is not the same as scoring each bidder on consequence of failure. The equal application of scores for consequence of failure to all bidders, in effect, modified the weights assigned to each requirement in the RFP.

CMC asserted that, because the estimated scores for consequence of failure were established on the basis of "anticipated proposals from bidders", the pre-assignment of scores by DND occurred prior to the receipt of bids and, in fact, prior to the issuance of the RFP itself, since, according to the GIR, these scores were used to test the sensitivity of the 70 percent pass mark for rated essential requirements at stage 3 of the evaluation process. This is contrary to article 27.1.1 of the RFP, which required that the evaluation of a bidder's offer be based solely on the contents of its proposal in response to the RFP, the SOW, the Specification and the documents called up therein.

CMC argued that, since the Crown employed estimated scoring prior to the submission of proposals or the issuance of the RFP, the EP should have described the use of pre-assigned scores for consequence of failure and their intended use.

CMC submitted that the establishment of expert-derived scores for consequence of failure to reduce subjectivity in evaluation inherently limited, if not effectively eliminated, the opportunity for individual scoring by a member of the evaluation team. In essence, CMC argued, the two aims are mutually exclusive: under even the most benign of circumstances, an evaluator would believe that he/she would have to justify a deviation from a score established by a group of subject matter experts. In fact, CMC emphasized, over 709 requirements and three competing proposals, the scores for consequence of failure "awarded" did not deviate one iota from those estimated prior to the issuance of the RFP.

Commenting on the suggestion in the GIR that "estimated" scores for consequence of failure were issued in order to make the evaluation process more efficient, to reduce the effort required by individual evaluators or to facilitate the evaluation process, CMC submitted that the evaluation should have been carried out on an element-by-element, score-by-score basis, regardless of how many actions were required. Each element should have been scored without any knowledge of predetermined values. CMC submitted that the notion of facilitating the process suggests a process whereby pre-assigned scores for consequence of failure were, at best, reviewed and not, in fact, scored at all, a process described in other pronouncements in the GIR.27 CMC submitted that the scores for consequence of failure were pre-assigned and, therefore, influenced, consciously or subconsciously, the evaluation team to the point that, given 12,762 opportunities28 to change the scores for consequence of failure, only two modifications were proposed and, ultimately, not a single score was changed.
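
The 12,762 figure is consistent with 709 rated requirements × 3 proposals × 6 evaluators = 12,762 individual consequence of failure scores, on the assumption that each evaluator scored every requirement for every proposal.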

CMC indicated that it did not agree that scores for consequence of failure are the scores least affected by proposals, although it admitted that this is certainly true if the scores for consequence of failure are pre-assigned. Had it been aware of the importance given to some elements through the assignment of serious consequences of failure to minor requirements, CMC submitted that it would have proposed either risk-mitigating features or different solutions.

Even though the SOW and Specification were very "tight", i.e. left few alternatives to bidders, CMC submitted that the three proposals were significantly different. For example, CMC indicated that it understands that one bidder proposed an inter-communication system based on a closed architecture analog technology; another, a developmental airborne solution; and the third, a digital open architecture solution. CMC submitted that the differences in technology and adaptability of these three solutions suggest a range of overall solutions broader than that acknowledged in the GIR.

CMC submitted that paragraph 46 of the GIR purports to link the method by which normalized scores are calculated and the low average weights assigned to each of the 249 rated essential requirements for the Specification to CMC's failure to meet the 70 percent passing mark. The reality, CMC submitted, is that these two parameters are entirely independent of bidders' proposals, as they are defined by DND in the EP and are unaffected by the content of bidders' proposals. The real reason for its low score, CMC argued, is that it bid specific responses based on a best-value approach, i.e. maximizing points while minimizing price. CMC submitted that it would and could have made different choices, had it been aware of the undisclosed scores for consequence of failure used during the evaluation of proposals.

CMC submitted that the published weights do not provide a clear indication of the importance of each rated requirement. It is only when the pre-assigned scores for consequence of failure are disclosed that the importance of each requirement becomes clearly understood by bidders.

On the question of whether higher-weighted requirements would have a major consequence, CMC submitted that there is no correlation between the weight of a requirement and the score for consequence of failure assigned to that requirement. Therefore, CMC submitted, without full knowledge of both, it could not judge which requirements were of higher importance. More specifically, CMC submitted that those requirements for which it was most heavily penalized had weights assigned that did not reflect their assigned scores for consequence of failure. The weights assigned to these requirements, when compared with other requirements in the RFP, suggest that non-compliance with the former would have little or no consequence on scoring. Since the only variable known to bidders was weight, CMC submitted that any attempt to predict consequence of failure based on that variable was essentially impossible.29

With respect to the standard required to achieve a score of previously proven capability, CMC submitted that the use of the oblique in section 6.5.2 of the EP in the term "Previously Proven/Demonstrated Capable" connotes that, in order to receive a score of 10, the proposal must be either "Previously Proven" or "Demonstrated Capable". The term "Previously Proven" suggests that proof of successful previous deployment of the proposed solution must be furnished to satisfy this standard. On the other hand, the term "Demonstrated Capable" implies that the bidder can demonstrate a proposed solution that is capable of meeting the requirement.

CMC submitted that "Demonstrated Capable" can reasonably be assumed to require, for example, established performance of the proposed solution in a laboratory setting. CMC submitted that the GIR, at paragraph 70, appears to support this interpretation. Therefore, CMC argued, the introduction of the requirement that a particular solution be aircraft-installed represents a fundamental departure from the evaluation criteria and a violation of Article 506(6) of the AIT. CMC submitted that, while it may be reasonable to expect "Previously Proven" to require the proof of previously installed performance in an aircraft, assuming as one must assume that the terms "Previously Proven" and "Demonstrated Capable" are not redundant, it is unreasonable to place the same meaning on "Demonstrated Capable".

CMC submitted that PWGSC's argument at paragraph 82 of the GIR that bidders were to draw "inferences" from the EP regarding the level of proof or demonstration of capability is direct evidence that the EP did not clearly define the evaluation criteria.

CMC further submitted that the draft integrated test management plan was included in its proposal, as required by article 26.2 of the RFP, and that it was, therefore, improper for the evaluators to treat this aspect of its proposal as showing a lack of capability. Indeed, CMC submitted, all bidders, regardless of previous experience or proven capability, were required to perform proof of compliance using prototype flight testing, in accordance with the SOW.

DRS's Position

DRS wrote to the Tribunal, asking it to refer to its reply to the GIR in File No. PR-2001-05130 as its intervener's response to the GIR in this case.

TRIBUNAL'S DECISION

Subsection 30.14(1) of the CITT Act requires that, in conducting an inquiry, the Tribunal limit its considerations to the subject matter of the complaint.31 Furthermore, at the conclusion of the inquiry, the Tribunal must determine whether the complaint is valid on the basis of whether the procedures and other requirements prescribed in respect of the designated contract have been observed. Section 11 of the Regulations further provides that the Tribunal is required to determine whether the procurement was conducted in accordance with the trade agreements, in this instance, the AIT.

CMC alleged that PWGSC and DND, acting in breach of Articles 501 and 506(6) of the AIT, improperly evaluated its proposal at stage 3 of the evaluation process.

Article 501 of the AIT states that the purpose of Chapter Five of the AIT is to establish a framework that will ensure equal access to procurement to all Canadian suppliers. Against this backdrop, Article 506(6) provides, in part, that "[t]he tender documents shall clearly identify the requirements of the procurement, the criteria that will be used in the evaluation of bids and the method of weighting and evaluating the criteria."

The Tribunal must determine whether CMC's proposal was properly evaluated at stage 3 of the evaluation process. This will entail considering whether, pursuant to Article 506(6) of the AIT, PWGSC and DND used and applied properly, in the evaluation of CMC's proposal, the evaluation criteria and methodology set out in the RFP (which includes the EP).

The Tribunal finds that the estimated scores for consequence of failure used as default scores in the evaluation of CMC's proposal are tantamount to weights. As such, the default scores introduced a material change in the evaluation criteria and methodology from those communicated in the solicitation documents, which was not transparent to bidders. This is contrary to Article 506(6) of the AIT.

In reaching this conclusion, the Tribunal carefully considered PWGSC and Thales' submission that individual and consensus scores for consequence of failure were awarded during the evaluation according to the process described in the EP and were not set in advance.

In the Tribunal's opinion, PWGSC and Thales' submission is not sufficient to explain the perfect match, for the three proposals, between the default scores for each of the 709 rated essential and desirable requirements of the SOW and the Specification and the scores subsequently awarded by the evaluation team. Moreover, there also appears to be a perfect match between the preselected key risk categories and those subsequently selected by the evaluators.

In the Tribunal's opinion, it is improbable that this perfect match is a coincidence, particularly when considered in light of PWGSC and DND's explanation that the default scores were developed by subject matter experts as a practical guide to evaluators to remove the subjectivity involved in assessing consequence of failure and to create a more efficient human-machine interface. The fact that PWGSC and DND intended the default scores to remove subjectivity indicates to the Tribunal that they intended the default scores to influence the scoring. Although there is no evidence that the evaluators made a conscious decision to duplicate the default scores, they did in fact duplicate the default scores. In the Tribunal's view, it would be difficult for an evaluator to avoid being influenced by the default scores particularly because the default scores were clearly identified in the PES score sheets under "Evaluator", as were the risk categories under the tab "Guidance to Evaluators", and three of the persons who prepared the default scores were members of the evaluation team.

The evidence shows that not only did the six evaluators come up with the same score for consequence of failure as the default score, but they also made the same determination with respect to the preselected consequence of failure category, upon which these scores were based for each of the 709 rated requirements. Their "scoring" of these requirements, in evaluating the three proposals, was identical in all respects except for two instances. In the Tribunal's view, this improbable coincidence of scoring means that the default scores were, in effect, tantamount to weights.

The Tribunal also finds that the effect of estimated scores for consequence of failure reduced the total number of points available and, as a result, modified the calculation of the percentage score at stage 3 of the evaluation process. Furthermore, in the Tribunal's opinion, the effect of using the default scores was to alter, in a non-transparent manner, the evaluation methodology as it was known to CMC at the time of bidding.

With respect to the use of evidence of aircraft-installed experience in evaluating the capability of the system proposed by CMC, the Tribunal finds that this ground of complaint has no merit.

In the Tribunal's opinion, the RFP did not make it mandatory that the proposed CMS be flight-proven. However, this does not mean that evidence of flight performance could not be used in rating certain aspects of the response to the RFP, as was done in this instance. In the Tribunal's opinion, the real question is whether, under the RFP, evaluators could reasonably require evidence of flight performance as a prerequisite to attributing the highest rating for capability, particularly for those requirements that cannot be completely tested outside an operational aircraft.

The Tribunal is satisfied that the RFP made it abundantly clear that this procurement is for a CMS onboard operational aircraft, i.e. the CP-140 Aurora. As well, the RFP indicated DND's preference for off-the-shelf, non-developmental items. It is clear, in reading the SOW and Specification, that a significant number of the 709 rated requirements are sensitive to the operational environment of an aircraft. Moreover, the EP clearly stated, at section 6.5.2, that, in order to obtain the maximum score of 10 for capability for any rated requirement, a bidder had to show that the item was "Previously Proven/Demonstrated Capable". Under these conditions, the Tribunal is satisfied that the capability criterion and the manner in which it would be assessed were clearly stated in the RFP and that the evaluators did not breach the provisions of the RFP when they required evidence of flight performance before awarding a capability score of 10 to certain requirements sensitive to the operational conditions of an aircraft. Moreover, the Tribunal is not convinced that the evaluators used CMC's proposal for prototype aircraft flight testing to demonstrate its lack of capability. Rather, the Tribunal is of the view that the related sections in the GIR are simply intended to demonstrate that some requirements do require flight testing and are, thus, sensitive to DND's needs for an airworthy communications system.

With respect to whether or not CMC, had it known the de facto weighting of the default scores for consequence of failure, would have restructured its bid to provide the best value, the Tribunal is of the view that it is clear that CMC could not have known, at the time at which it submitted its bid, the weights attributed by the default scores. This is evident since the only weighting available to the bidders was the weighting assigned to the rated requirements in the evaluation tree and PWGSC admitted that these known weights did not correspond to the default scores for consequence of failure assigned to the same rated requirements by the subject matter experts. As noted above, it is the Tribunal's opinion that the estimated scores for consequence of failure were improperly used by DND and PWGSC, in a manner tantamount to weights, in conducting the evaluation of proposals and that this action introduced, in a non-transparent manner, a material change to the evaluation methodology set out in the EP. Whether or not CMC would have been able to successfully restructure its bid, had it known the weighting of the consequence values, is a matter for conjecture, which the Tribunal need not address.

Nevertheless, and for the reasons noted above, the Tribunal finds that CMC's proposal was improperly evaluated at stage 3 of the evaluation process.

In determining the appropriate remedy in the circumstances, the Tribunal considered the prejudice caused to CMC and other bidders by this improper evaluation. In preparing their bids, the bidders relied on the criteria and evaluation methodology set out in the RFP. Had they known how consequence of failure would be evaluated, they might have submitted different bids. The Tribunal has considered PWGSC's statement that CMC failed not because of the scores for consequence of failure that it received but because it was declared non-compliant on some of the 249 rated essential requirements. The Tribunal notes that a proposal could not fail merely by failing to meet one or several rated essential requirements; rather, failing to meet the 70 percent passing mark was the discriminating factor. Even if, as PWGSC claims, compliance was the predominant evaluation factor, modulated by the factors of capability, probability and consequence, it is impossible to know whether CMC's proposal succeeded until consequence of failure is actually scored rather than "weighted". Consequently, the Tribunal is of the view that the evaluation process should be corrected so that bids are evaluated in compliance with the criteria and evaluation methodology set out in the RFP. Given that the Tribunal is of the view that the use of estimated scores for consequence of failure based upon preselected risk categories is the only valid ground of complaint in this instance, it recommends that only the consequence of failure factor, and the risk categories upon which each score depends, be re-evaluated. This re-evaluation is to be done for all 709 rated requirements for the three proposals, in accordance with the criteria and methodology set out in the RFP. Based upon the results of this re-evaluation, a successful bidder will be identified in accordance with the relevant provisions of the RFP and the AIT.

DETERMINATION OF THE TRIBUNAL

In light of the foregoing, the Tribunal determines that the procurement was not conducted in accordance with the provisions of the AIT and that the complaint is valid in part.

Pursuant to subsections 30.15(2) and (3) of the CITT Act, the Tribunal recommends that the results of the evaluation of all rated requirements of the RFP, in relation to the consequence of failure factor for the three proposals received, be set aside. Furthermore, it recommends that PWGSC and DND re-evaluate the consequence of failure factor, including the key risk categories upon which it depends, for all rated requirements, essential or not, in both the technical and management proposals for the three proposals submitted. The re-evaluation will be conducted strictly according to the criteria and methodology set out in the RFP, including the EP. This will require the removal, from the electronic PES, of the consequence of failure categories selected by the subject matter experts, which are inserted in the evaluation screens under the tab "Guidance to Evaluators", and of the estimated scores for the consequence of failure factor. The estimated scores for the consequence of failure factor will, in no way, be used in the re-evaluation. The procurement process will proceed as provided in the above-mentioned solicitation documents and the AIT.

The re-evaluation will be conducted by a new evaluation team that will be composed of members other than those involved in the original evaluation and will exclude the fourth subject matter expert who was involved in determining the estimated scores for the consequence of failure factor prior to the evaluation of proposals.

Pursuant to subsection 30.16(1) of the CITT Act, the Tribunal awards CMC its reasonable costs incurred in preparing and proceeding with the complaint.

1 . Formerly BAE Systems Canada Inc.

2 . R.S.C. 1985 (4th Supp.), c. 47 [hereinafter CITT Act].

3 . DND operates a fleet of 18 CP-140 Aurora aircraft as a long-range maritime patrol platform for surface and undersea surveillance roles.

4 . 18 July 1994, C. Gaz. 1995.I.1323, online: Internal Trade Secretariat <http://www.intrasec.mb.ca/eng/it.htm> [hereinafter AIT].

5 . The expressions "default scores", "estimated scores" and "values" are used interchangeably hereinafter to refer to the estimated scores established by DND subject matter experts.

6 . S.O.R./93-602 [hereinafter Regulations].

7 . Formerly Thomson-CSF.

8 . S.O.R./91-499.

9 . In its letter dated March 1, 2002, DRS indicated that, in reply to the GIR in this case, it relied upon the submissions that it had made in the parallel proceedings in File No. PR-2001-051, as the issues in the two complaints are largely identical.

10 . The draft RFP was reviewed, before its release, for consistency and conformity with applicable rules by BMCI Consulting Inc. (BMCI).

11 . Canada's Electronic Tendering Service.

12 . The EP was designed by Electronic Warfare Associates.

13 . Six amendments to the RFP were issued, including amendment No. 001, which, inter alia, amended the EP.

14 . According to the GIR, the evaluation team consisted of several individuals, including a leader. As set out in section 4.5 of the EP, the ETL was responsible, in part, for resolving any extreme scores, determining the official consensus regarding evaluators' scores for each proposal evaluation element and identifying high-risk areas. According to the GIR, four evaluators came from the "Project Management Office for the CP-140 Aurora Incremental Modernization Project"; and two evaluators came from "operational CP-140 squadrons at 14 Wing Greenwood N.S." without previous exposure to the bidders, the products or the RFP. Also according to the GIR, airworthiness, quality assurance and DTSES/TEMPEST representatives were utilized as required. Each evaluator had an equal voice in the evaluation process. Individual evaluators started their evaluation of different bids and of different sections of the bids. Discussions among evaluators were to be limited during the individual scoring phase of the evaluation to ensure that all proposals were treated consistently and equally, without preferential treatment or bias, as required by section 3.10 of the EP. Each evaluator had a computer with a unique password that prohibited access, except by the leader. Once all the individual scores were awarded, the evaluators were brought together to decide on consensus scores. The evaluation team discussed individual scores for a requirement before agreeing on a consensus score, and this was done for every rated requirement in the RFP.

15 . According to the GIR, the PES was initially developed for an earlier DND solicitation. It was modified and used for bid evaluations on several other government projects, in accordance with each project's unique procurement strategy and evaluation plan. The equation used to calculate normalized scores and the published weights for each rated requirement are embedded in the PES software. The PES database is an efficient, single repository for all information relating to the evaluation activities. The PES reflects the technical and management criteria contained in the EP and the RFP. In addition to ensuring control and security of sensitive evaluation results, the PES "guides the evaluation team scoring using a range of compliance, capability and risk factors in a consistent manner, by using a systematic and disciplined approach". The PES supports both subjective and objective evaluations of proposals by using criteria and weights defined by the project management office. It eliminates human errors in manually calculating rolled-up scores to compare proposals.
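By way of illustration only, the roll-up calculation that the PES automates can be sketched as follows. The sketch is hypothetical: the actual PES software and its equations are not in the record, and the sketch assumes, per note 20, that an element score is a normalized score multiplied by the weighting factor from the evaluation tree and that a proposal's rolled-up score is the sum of its element scores.

    # Purely illustrative sketch; not the actual PES software, whose
    # equations are not in the record. All figures are hypothetical.
    # Assumes, per note 20, element score = normalized score x weighting
    # factor, and that a rolled-up score is the sum of the element scores.
    def rolled_up_score(normalized_scores, weights):
        if len(normalized_scores) != len(weights):
            raise ValueError("one weight per rated requirement")
        return sum(s * w for s, w in zip(normalized_scores, weights))

    # Hypothetical example with three rated requirements:
    # (8 x 0.02) + (10 x 0.05) + (6 x 0.01) = 0.72
    print(round(rolled_up_score([8.0, 10.0, 6.0], [0.02, 0.05, 0.01]), 2))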

16 . Amendment No. 001 to the RFP reads, in part:

Question 18

Section 6.6 of the evaluation plan (29395 v3 - CMS EVALUATION PLAN) notes separate risk consequence scores will be assessed for Operational, Support, Schedule, and Programmatic Risks. Section 6.8.1 does not explain how the single value for Consequence that appears in the Criterion Risk Score equation is calculated from the four separate Consequence scores. Could the mathematical equation used to calculate Consequence as a function of Operational, Support, Schedule, and Programmatic Risks, be provided?
Answer 18

The intent is to apply the most applicable and significant of the four Risk elements. Thus, only one of the four consequences will be used to score each requirement.

17 . PES Screen, performance specification 3.3.21.9; GIR (confidential) Exhibit 4.

18 . PWGSC's submissions, 18 February 2002, Document 1 at 22, 27-29.

19 . CMC would keep all the points that it scored for consequence of failure under the EP specified in the RFP, which it deems to be 100 percent of the points available for consequence of failure under its revised scenario.

20 . Normalized scores were calculated according to the formula set out in the RFP, which had been loaded onto the PES. Element scores are equal to normalized scores multiplied by the weighting factors from the evaluation tree.
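By way of illustration only (the figures are hypothetical and do not appear in the record), a rated requirement with a normalized score of 8 and a weighting factor of 0.02 would receive an element score of 8 x 0.02 = 0.16.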

21 . The inter-communication system is a major component of the CMS.

22 . This is acknowledged by CMC itself; see section 5.5, "Prototype Aircraft Flight Testing", of CMC's Integrated Test Management Plan.

23 . PWGSC's response, 20 March 2002, Question 6(c).

24 . In the proposal submitted by Thales on April 19, 2001, the modification of the co-pilot control display unit (CDU) was included, as requested by the RFP, as a ceiling price option. CMC, by virtue of having won the CP-140 Navigation and Flight Instruments Modernization Program contract, is the only company able to do the work with respect to the CDU, because of its proprietary rights and its unique position to certify the airworthiness of the installations. As a result, Thales had obtained a rough order of magnitude cost from CMC for the purpose of submitting its proposal. On October 2, 2001, after the completion of the evaluation of proposals, PWGSC indicated to Thales personnel that it intended to exercise the CDU option, and this intention was confirmed in a draft contract that was sent to Thales on or about December 11, 2001. On October 18, 2001, Thales met with a representative of CMC in order to discuss a more specific price for the modification work. To that end, Thales provided CMC with information from its proposal, including its architecture and a revised specification that identified all the equipment that it had selected as part of its solution. Consequently, CMC has considerable knowledge of the details of Thales' proposal.

25 . GIR, para. 15.

26 . CMC's comments on the GIR, para. 8.

27 . GIR, paras. 33, 34, 35 and 37.

28 . 709 requirements x 6 evaluators x 3 proposals = 12,762.

29 . See paras. 58-, inclusive, of CMC's comments on the GIR for a statistical analysis.

30 . Re Complaint Filed by DRS Technologies (2 May 2002), where DRS's submissions are reported at length in the statement of reasons.

31 . This determination deals with CMC's complaint only. However, DRS, which also filed a separate complaint on the same procurement, was granted intervener status in this matter, and the submissions that it made have been considered by the Tribunal.


