DELOITTE INC.
v.
DEPARTMENT OF FISHERIES AND OCEANS; AND DEPARTMENT OF PUBLIC WORKS AND GOVERNMENT SERVICES
File No. PR-2016-069

Determination and reasons issued
Tuesday, July 25, 2017

IN THE MATTER OF a complaint filed by Deloitte Inc. pursuant to subsection 30.11(1) of the Canadian International Trade Tribunal Act, R.S.C., 1985, c. 47 (4th Supp.);

AND FURTHER TO a decision to conduct an inquiry into the complaint pursuant to subsection 30.13(1) of the Canadian International Trade Tribunal Act.

BETWEEN

DELOITTE INC. Complainant

AND

THE DEPARTMENT OF FISHERIES AND OCEANS Government Institution

AND

THE DEPARTMENT OF PUBLIC WORKS AND GOVERNMENT SERVICES Government Institution

DETERMINATION

Pursuant to subsection 30.14(2) of the Canadian International Trade Tribunal Act, the Canadian International Trade Tribunal determines that the complaint is valid in part.

Pursuant to subsections 30.15(2) to 30.15(3) of the Canadian International Trade Tribunal Act, the Canadian International Trade Tribunal recommends that the Department of Fisheries and Oceans compensate Deloitte Inc. for its lost profits for the Phase 1 work and, to the extent that the Department of Fisheries and Oceans has already exercised, or intends to exercise, its options for them, the Phase 2 and Phase 3 work.

The Canadian International Trade Tribunal further recommends that the parties negotiate the amount of compensation to be paid and report the outcome of the negotiations to the Canadian International Trade Tribunal within 30 days of the issuance of the statement of reasons for this determination.

Should the parties be unable to agree on the amount of compensation, Deloitte Inc. shall file with the Canadian International Trade Tribunal, within 40 days of the issuance of the statement of reasons for this determination, a submission on the issue of compensation. The Department of Fisheries and Oceans will then have seven working days after the receipt of Deloitte Inc.’s submission to file a response. Deloitte Inc. will then have five working days after the receipt of the Department of Fisheries and Oceans’ reply submission to file any additional comments. Counsel are required to serve each other and file with the Canadian International Trade Tribunal simultaneously.

Pursuant to section 30.16 of the Canadian International Trade Tribunal Act, the Canadian International Trade Tribunal awards Deloitte Inc. its reasonable costs incurred in preparing and proceeding with the complaint, which costs are to be paid by the Department of Fisheries and Oceans. In accordance with the Procurement Costs Guideline, the Canadian International Trade Tribunal’s preliminary indication of the level of complexity of the complaint is Level 2 and its preliminary indication of the amount of the cost award is $2,750. If any party disagrees with the cost decision, it may make submissions to the Canadian International Trade Tribunal, as contemplated by article 4.2 of the Procurement Costs Guideline.

The Canadian International Trade Tribunal reserves jurisdiction to establish the final amount of the compensation and the cost award.

Peter Burn
Presiding Member

Tribunal Panel: Peter Burn, Presiding Member

Support Staff: Rebecca Marshall-Pritchard, Counsel
Dustin Kenall, Counsel

Complainant: Deloitte Inc.

Counsel for the Complainant: Vincent DeRose
Jennifer Radford

Government Institutions: Department of Public Works and Government Services
Department of Fisheries and Oceans

Counsel for the Government Institutions: Roy Chamoun
Susan Clarke
Ian McLeod
Kathryn Hamill

Please address all communications to:

The Registrar
Secretariat to the Canadian International Trade Tribunal
333 Laurier Avenue West
15th Floor
Ottawa, Ontario  K1A 0G7

Telephone: 613-993-3595
Fax: 613-990-2439
E-mail: citt-tcce@tribunal.gc.ca

STATEMENT OF REASONS

INTRODUCTION

  1. On March 29, 2017, Deloitte Inc. (Deloitte) filed a complaint with the Canadian International Trade Tribunal (the Tribunal), pursuant to subsection 30.11(1) of the Canadian International Trade Tribunal Act,[1] regarding a Request for Proposal (RFP) (Solicitation No. F5211-160590) issued on January 13, 2017, by the Department of Fisheries and Oceans (DFO) under the framework of the Task and Solutions Professional Services Supply Arrangement No. E60ZN-15TSSB, as issued by the Department of Public Works and Government Services Canada (PWGSC), for three phases of work associated with the Canadian Coast Guard’s (CCG) fleet procurement planning requirements.
  2. Deloitte alleges that the DFO incorrectly determined that its proposal did not clearly demonstrate experience responsive to five rated requirements. Deloitte alleges that, but for these errors, it would have had the highest ranked, compliant proposal and would have won the resulting contract and the two one-year extension options (if exercised by the DFO).
  3. As a remedy, Deloitte requests that it be compensated for its lost profits or, alternatively, the lost opportunity it would have realized on the Phase 1 work. Additionally, Deloitte requests that it be awarded the Phase 2 and Phase 3 contracts or, alternatively, that the DFO retender the solicitation for the Phase 2 and 3 contracts; and that it be awarded its costs in bringing this complaint. Deloitte has not requested its bid preparation costs.
  4. The Tribunal designated PWGSC as a respondent government institution in this matter along with the DFO because, while the DFO issued the RFP and conducted the technical evaluation, PWGSC is the contracting authority for the resulting contract awarded to QinetiQ Ltd. (QinetiQ), the winning bidder.

BACKGROUND

  1. The RFP is for the provision of consulting services, in three successive phases of work, to help the CCG develop its 2017 Fleet Renewal Plan (2017 FRP). Phase 1 is for the development of a Concept of Analysis for a Fleet Optimization Study to determine and assess possible procurement options to achieve an optimal mix, number and sequencing of vessels for the 2017 FRP. Phase 2 is a contract (at the DFO’s option) to conduct the actual Fleet Optimization Study. Phase 3 (also at the DFO’s option) is for the evaluation, planning and implementation of the 2017 FRP on an as-needed basis.[2]
  2. The RFP provides for the award of a contract to the compliant bid with the highest combined technical and financial scores, weighted 70 and 30 percent respectively. The RFP includes four mandatory criteria (MT1 through MT4) and 12 rated criteria (RT1 through RT12). RT1 through RT5 relate to the Phase 1 work; RT6 through RT11 to the Phase 2 work; and RT12 to the Phase 3 work.[3]
  3. The Bid Evaluation Score Sheet provided for scoring of 0, 10, 30, or 50 points for the rated criteria based on experience demonstrated, for a maximum possible score of 600 points (the scoring and weighting arithmetic is illustrated in the sketch following this list). Technical proposals were evaluated from February 6 to 9, 2017, first individually and then on a final consensus basis, by a panel of four experienced CCG officials.[4] Individual scores with handwritten notes (first in blue ink and then, after the consensus meeting, in red ink) were recorded on Technical Evaluation Score Sheets (Individual Sheets).[5] The final consensus score was recorded on Technical Evaluation Summary Sheets (Summary Sheets).[6] The only documents filed in this proceeding that record the reasoning of the members of the evaluation committee for their scores are the Individual Sheets.
  4. The RFP was issued on January 13, 2017. On January 23, 2017, the DFO issued Addendum 1 to the RFP to extend the closing date from January 30, 2017, to February 3, 2017. On January 25, 2017, it issued Addendum 2 amending the costing certification requirements for RT11 and enclosing the Bid Evaluation Score Sheet. Addendum 3, further amending the requirements for RT11, was issued on January 31, 2017.
  5. Two bidders submitted proposals: Deloitte and QinetiQ. Deloitte’s technical proposal included the résumés and short biographies of each of the proposed resources for the work, descriptions of the projects on which those resources had worked, and descriptions of the specific work conducted by those individuals on the referenced projects that Deloitte was relying upon in its response to the rated criteria.
  6. Deloitte passed all the mandatory criteria and received full points for RT1 through RT5 (the Phase 1 work) and RT12 (the Phase 3 work). However, Deloitte’s proposal did not receive full marks for the six rated criteria for the Phase 2 work (the Fleet Optimization Study). In its complaint, Deloitte argues that it should have received full marks on five of these criteria, specifically RT6, RT7, RT8, RT10 and RT11.
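
The arithmetic underlying this scoring structure can be illustrated with a short sketch. The sketch below is purely illustrative: the RFP’s exact Best Value formula is not reproduced in these reasons, so it assumes a common pro-rating approach (technical points scaled against the 600 available points, and the financial component scaled against the lowest compliant bid price), and the function and variable names are not drawn from the tender documents.

    # Illustrative sketch only; the RFP's precise formula is not reproduced
    # in these reasons. Assumes technical points are pro-rated against the
    # 600 available points and the financial score against the lowest price.
    VALID_POINTS = {0, 10, 30, 50}   # permitted scores per rated criterion
    MAX_TECHNICAL = 12 * 50          # 12 rated criteria x 50 points = 600

    def best_value_score(criterion_points, bid_price, lowest_price):
        """Combine the technical (70%) and financial (30%) components."""
        assert all(p in VALID_POINTS for p in criterion_points)
        technical = 70 * sum(criterion_points) / MAX_TECHNICAL
        financial = 30 * lowest_price / bid_price
        return technical + financial

    # A bidder with full technical marks whose price is 10% above the lowest:
    # 70 * 600/600 + 30 * 900000/1000000 = 97.0
    print(best_value_score([50] * 12, 1_000_000, 900_000))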

TRADE AGREEMENTS

  1. Section 1.2.1 of Part 1 of the RFP provides that it is governed by the Agreement on Internal Trade (AIT),[7] the North American Free Trade Agreement (NAFTA)[8] and the Agreement on Government Procurement (AGP).[9]
  2. Article 506(6) of the AIT provides that

    . . . [t]he tender documents shall clearly identify the requirements of the procurement, the criteria that will be used in the evaluation of bids and the methods of weighting and evaluating the criteria.

  3. Article 1013 of NAFTA provides that

    . . . [w]here an entity provides tender documentation to suppliers, the document shall contain all information necessary to permit suppliers to submit responsive tenders . . . . The documentation shall also include:

    . . .

    (h) the criteria for awarding the contract, including any factors other than price that are to be considered in the evaluation of tenders . . . .

  4. Finally, Article XII of the AGP provides that

    . . . [t]ender documentation provided to suppliers shall contain all information necessary to permit them to submit responsive tenders, including . . . 

    (h) the criteria for awarding the contract, including any factors other than price that are to be considered in the evaluation of tenders . . . .

  5. Deloitte argues that in not awarding it full marks on the five contested rated criteria, the DFO failed to apply the published evaluation criteria set out in the RFP and introduced undisclosed evaluation criteria, in breach of the disciplines of the trade agreements set forth above.

ANALYSIS

  1. The principles governing the Tribunal’s review of government institutions’ evaluations of proposals in procurements are well settled and simply stated. The bidder bears the burden of ensuring its bid clearly and unambiguously demonstrates compliance with the requirements of a solicitation.[10] The Tribunal will only interfere with an evaluation that is unreasonable and will substitute its judgment for that of the evaluators only when they have not applied themselves in evaluating a bidder’s proposal, have ignored vital information provided in a bid, have wrongly interpreted the scope of a requirement, have based their evaluation on undisclosed criteria or have otherwise not conducted the evaluation in a procedurally fair way.[11] In addition, a government institution’s determination will be considered reasonable if it is supported by a tenable explanation, regardless of whether the Tribunal itself finds that explanation compelling.[12]

RT6 (Simulation and Modelling)

  1. RT6 required the following:

    The Bidder demonstrates the resources proposed for conducting the Fleet Optimization Study have previous experience conducting simulation and modelling of a study of similar size, scope, and complexity.

  2. RT6 (as well as RT7 and RT8) had the following point allocation scheme:
    • 0 points when “[t]he Bidder does not clearly demonstrate experience relevant to the criterion”;
    • 10 points when “[t]he Bidder’s team lead for the Fleet Optimization Study clearly demonstrates that they have experience that meets the criterion”;
    • 30 points when “[t]he Bidder’s proposed team lead and the majority of the personnel for the Fleet Optimization Study clearly demonstrate that they have experience that meets the criterion”; and
    • 50 points when “[t]he Bidder’s proposed team lead and all personnel for the Fleet Optimization Study clearly demonstrate that they have experience that meets the criterion”.[13]

    [Emphasis added]

  3. For RT6, Deloitte identified a team lead and six additional team resources. Although three of the evaluators initially awarded Deloitte full points in their individual evaluations, at the consensus meeting the panel agreed on a final score of ██ points, on the basis that Deloitte’s proposal did not clearly demonstrate that team member ██ had experience conducting simulation and modelling.

Positions of Parties

  1. Deloitte argues that the DFO’s scoring ignores explicit representations in its proposal about ██’s experience in simulation modelling, including the following:[14]
    • her biography’s description of her speciality as including “costing and life cycle modelling across government and commercial sectors”;
    • the description of Deloitte’s work on the NZDF White Paper (for which ██ is listed as only the “Resource Involved”) as including “████ █████ ███ ███ ████ ██████ ███████ █████ █ █████ ████ █████ ██████ ████ ███████ ███████ █████ ████ ████ █████ █████ ████ █████ ████ ████ █ ████ ███ ████ ██████”;
    • the statement in the proposal that ██ “played a central role in the financial strategy and modelling team as part of the [NZDF White Paper]”; and
    • the statement in the proposal that ██ “conducted simulation modelling using the Capital Planning Tool, to develop capital plans for the ████ █████ ███ ███ ████ ██████ ██”.
  2. At the individual scoring stage, the evaluator (NG), who did not initially award Deloitte full points on RT6, wrote on his Individual Sheet that “████ █████ ███ ███ ████ ██████ ███████ █████ █ █████ ████ █████ ██████ ████ ███████ ███████ █████ █”.[15] Another evaluator (JO), while not deducting any points at this stage, wrote the following: “████ █████ ███ ███ ████ ██████ ███████ █████ █ █████ ████ █████ █”.[16]
  3. After the consensus scoring, the following comments were added by the evaluators to their Individual Sheets to explain the final score:[17]

    ████ █████ ███ ███ ████ ██████ ███████ █████ █ █████ ████ █████ ██████ ████ ███████ ███████ █████ ████ ████ █████ █████ ████ █████ ████ ████ █ ████ ███ ████ ██████ [NP];

    ████ █████ ███ ███ ████ ██████ █ [JO]; and

    ████ █████ ███ ███ ████ ██████ █ [KM].

  4. In the GIR, the DFO submits that the evaluators reasonably found that Deloitte’s proposal lacked sufficient supporting detail about the type and extent of simulation modelling ██ performed, and therefore did not demonstrate the experience necessary to qualify for the full 50 points.
  5. In its reply to the GIR, Deloitte argues that the Tribunal should find that the evaluation breached the trade agreements because the DFO has failed to explain how three of the four evaluators, who originally awarded full points, changed their minds at the consensus scoring stage. Further, Deloitte submits that the change in scoring indicates that the evaluators each had different interpretations of what was required to “clearly demonstrate” experience and that there was a latent ambiguity as to the required level of detail to be provided by bidders.

Analysis

  1. Deloitte’s score is based on the evaluation team’s finding that ██’s role was not clearly identified in the NZDF White Paper with regard to simulation modelling. To obtain full points on RT6, a bidder only needed to clearly demonstrate that all of its proposed resources have experience that meets the criterion, which is previous experience conducting simulation and modelling of a study of similar size, scope and complexity. The evaluation team did not find that the NZDF White Paper was not a study of similar size, scope and complexity. It simply found that ██’s role and the scope of her experience in that study were not clearly demonstrated.[18] The work explicitly attributed to ██ is that she was “seconded” to the NZDF Financial Strategy and Modelling Team, that she “played a central role” on that team, and that she “conducted simulation modelling using the Capital Planning Tool, to develop capital plans for the ██ ██”.
  2. The Tribunal finds that the evaluation team reasonably determined that the above information provided by Deloitte about ██ lacked the minimal specificity necessary to clearly demonstrate her experience in simulation modelling. It is well settled that a bidder must demonstrate how it meets the requirements of an RFP, beyond merely repeating the words of the requirements and stating in conclusory fashion that they meet them.[19] This instruction was included in the RFP as well, which warned bidders that “[s]imply repeating the statement contained in the bid solicitation is not sufficient.”[20] The onus rests on the bidder to clearly demonstrate that it meets tender requirements and not the other way around. This is so because one of the principal objectives of the procurement process is to minimize the scope for subjective (and discriminatory) interpretation of bids.[21] The trade agreements mandate that requirements and criteria be clearly stated in writing. It is the bidder’s responsibility to ensure that its proposal unambiguously meets those requirements and criteria. As a corollary, the government institution must evaluate proposals thoroughly and based only on the contents of the proposal.[22] There is, quite intentionally, little margin within this structured framework for importing assumptions by either bidders or evaluators.
  3. Thus, Deloitte’s representation that ██ “played a central role” in modelling on the NZDF White Paper fails to clearly demonstrate concretely what that role was and just how much direct personal experience she had in that regard. It was reasonable for the DFO to have such a concern, given that she had completed her undergraduate degree and joined Deloitte in 2011, and that the first and only modelling experience identified is in 2016 for the NZDF White Paper. Moreover, the only modelling experience identified with specificity is her experience using the Capital Planning Tool to develop capital plans on that project.
  4. Finally, as to Deloitte’s objection to the change in final scores at the consensus stage, it has not pointed to any part of the RFP that prohibited this, nor to anything calling consensus scoring into question in general. Further, it has provided no case or other authority supporting its argument that an adverse inference should be drawn or a latent ambiguity recognized simply on account of a discrepancy between individual scores and consensus scoring (which is to be expected in a diverse panel of four). The courts have upheld Tribunal decisions rejecting such complaints where the evidence shows that the individual scores were merely the “starting point” for discussion and debate and, as such, it is reasonable that the consensus scores “would not always reflect the averages or medians of individual scores.”[23] They have also found that “deviation from the median individual scores” is not, by itself, “a sufficient basis for demonstrating unfairness.”[24] Here, it is not contested that the individual scoring was merely a starting point for discussion and debate, to be finalized at the consensus scoring stage. Further, the final award of ██ points is consistent with the pre-established Bid Evaluation Score Sheet for when the proposal does not clearly demonstrate that every member of the team has the relevant experience. Accordingly, the change in scoring is not a valid ground of complaint.

RT7 (Multi-Criteria Decision Analysis)

  1. RT7 required the following:

    The Bidder demonstrates the resources proposed for conducting the Fleet Optimization Study have previous experience conducting multi-criteria decision analysis to assess the operational effectiveness of each option and providing recommendations for a study of similar size, scope, and complexity.

  2. For RT7, although three of the evaluators initially awarded Deloitte full points, at the consensus meeting the panel agreed on a final score of 30 points, on the basis that the proposal did not clearly demonstrate that the proposed team member, ██, had experience conducting multi-criteria decision analysis (MCDA).
  3. At the individual scoring stage, the evaluator (NG), who did not initially award full points on RT7, wrote on his Individual Sheet that “████ █████ ███ ███ ████ ██████ ███████ █████ █ █████ ████ █████ ██████ ████ ███████ ███”.[25] Another evaluator (JO), while not deducting any points at this stage, wrote that “████ █████ ███ ███ ████ ██████ ███████ █████ █ █████ ████ █████ ██████ ████ ███████ ███”.[26]
  4. After the consensus scoring, the following comments were added by evaluators to their Individual Sheets to explain the final score:[27]

    ████ █████ ███ ███ ████ ██████ ███████ █████ █ █████ ████ █████ ██████ ████ ███████ ███████ █████ ███ ███ ████ ██████ ███████ █████ █ █████ ████ █████ ██████ ████ ███████ ███

    ████ █████ ███ ███ ████ ██████ ███████ █████ █ █████ █

    ████ █████ ███ ███ ████ ██████ ███████ █████ █ █████ ████ █████ ██████ ████ ███████ ███

Positions of Parties

  1. In the GIR, the DFO submits that the concepts of MCDA and multi-criteria analysis (MCA) are related but distinct. Relying on definitions from a textbook and journal article (neither of which the RFP referenced), the DFO submits that MCA is “a method of research and decision making analysis that is particularly applicable to complex problems where a single-criterion approach falls short and it is necessary to include a full range of detailed analysis from relevant geographical, economic, social, environmental, and technical fields, among other factors in order to generate evidence in support of decision making”, while MCDA is “a methodology which supports decision-makers in the evaluation and ranking or selection of different alternatives, using a systematic analysis that allows overcoming the limitations of unstructured individual or group decision-making”.[28] The DFO submits that while the proposal stated that ██ had experience in MCA, it did not explicitly confirm experience in MCDA and therefore Deloitte failed to meet RT7 fully and was awarded a score of ██ instead of the full 50 points available.
  2. For RT7, for ██, Deloitte referenced the ████ █████ ███ ███ █, for which the proposal bulleted her experience as follows:[29]
    • ██ was engaged to conduct a series of options analysis [sic] on the best option for ██ ██ to pursue based on a multi-criteria analysis.
    • [AH] conducted the evaluation and selection approach, including definition of the process, establishment of criteria, and facilitation of workshops with senior stakeholders to complete the evaluation and select a recommended option.
    • ████ █████ ███ ███ ████ ██████ ███████ █████ █ █████ ████ █████ ██████ ████ ███████ ███████ █████ ███ ███ ████ ██████ ███████ █████ █ █████ ████ █████ ██████ ████ ███████ ███
    • ████ █████ ███ ███ ████ ██████ ███████ █████ █ █████ ████ █████ ██████ ████ ███████ ███████ █████ ███ ███ ████ ██████ █
  3. Importantly, the proposal also includes a section describing the ████ █████ ███ ███ █ ████ █████ ███ ███ █ itself. It explicitly identifies ██ (and only ██) under the row “Resource Involved” and bullets the following experience under the heading “Deloitte Support Relevant to Statement of Work”:[30]
    • Developed an options evaluation and selection approach using multi-criteria decision analysis;
    • Supported the client through the definition of process, establishment of criteria and facilitation of workshops with senior stakeholders to complete the evaluation and select a recommended option; and
    • ████ █████ ███ ███ █ ████ █████ ███ ███ █ ████ █████ ███ ███ █

    [Emphasis added]

  4. Deloitte argues that the industry does not distinguish between MCA and MCDA. Alternatively, to the extent that the DFO intended to require only MCDA experience, the distinction is so subtle that three of its own evaluators did not initially recognize it. As such, Deloitte claims, it amounts to a latent ambiguity for which it should not be penalized.

Analysis

  1. Here, the DFO did not deduct points from Deloitte’s proposal because it failed to provide sufficient detail to assess ██’s level of experience (as it did for RT6), but rather because the DFO distinguished between two types of experience (MCA and MCDA).
  2. The Tribunal finds that this distinction is unsupported in the text of the RFP. Moreover, even if such a distinction were valid, the substance of ██’s experience as described clearly demonstrates that she meets the MCDA experience requirement.
  3. First, the Tribunal finds that the DFO’s purported distinction between MCDA and MCA qualifies as a latent ambiguity. The Tribunal has held that

    [w]hen there is latent ambiguity, the potential supplier will not likely become aware of the ambiguity before learning of the results of the evaluation. When there is patent ambiguity, it is (or should be) apparent on the face of the RFP article or amendment concerned, and the potential supplier must seek clarification of what is being required or otherwise file an objection or a complaint in a timely manner.[31]

  4. Where a bidder has reasonably construed a latent ambiguity introduced by the government institution, the Tribunal has held that the bidder should not be penalized.[32] Here, regardless of whether, in fact, there is a difference acknowledged in the industry or academia between MCA and MCDA, the DFO has not identified it anywhere in the RFP itself: either by reference in the Bid Preparation Instructions, the Evaluation Procedures, the Technical Criteria or the three phases of deliverables in the Statement of Work, i.e. the Concept of Analysis, the Fleet Optimization Study or the as-needed consulting work.
  5. Second, even assuming that the difference between MCA and MCDA is (as the DFO represents) that the latter is focused more on decision-making than analysis, the information provided in the proposal supports a finding that ██’s experience in the ████ █████ ███ ███ █ included this MCDA work by creating recommendations to the client. AH did not merely analyze options along multiple criteria, but also, according to the proposal, developed a “selection approach”, held “workshops with senior stakeholders to complete the evaluation and select a recommended option”, and took “████ █████ ███ ███ █ ████ █████ ███ ███ █ ████ █████ ███ ███ █ ████ █████ ███ ███ █”. This is all consistent with the DFO’s definition of MCDA as “a methodology which supports decision-makers in the evaluation and ranking or selection of different alternatives, using a systematic analysis that allows overcoming the limitations of unstructured individual or group decision-making”.
  6. Third, the description of the project in the proposal does specifically use the words “multi-criteria decision analysis”; it is reasonable to attribute that work to ██ since she is the only person identified as the “Resource Involved” for that project, and because the proposal specifically stated that she “was engaged to conduct a series of options analysis [sic]” for the project, which the proposal had earlier described as “an options evaluation and selection approach using multi-criteria decision analysis.”
  7. Thus, this is not a case where the Tribunal is second-guessing the evaluation committee’s exercise of judgment or discretion, but rather where the evaluators have failed to properly consider the substance of the proposal by deeming it non-responsive based on mere semantics. Moreover, the DFO ignored vital information in the proposal that demonstrated compliance and consistency with the evaluators’ own narrow and unsupported interpretation of the requirement. Just as it is improper for a bidder to attempt to demonstrate compliance by merely repeating the quoted requirements word for word, it is improper for evaluators to find non-compliance based only on a failure to repeat the proper code words from the RFP rather than by looking into the substance of the proposal itself. This is especially pertinent in the scenario where the evaluators relied on a subtle one-word distinction between two technical terms that were not defined in any of the tender documents.
  8. For these reasons, the Tribunal finds the evaluation committee’s decision unreasonable and this ground of complaint to be valid.

RT8 (Cost Analysis and Cost Benefit Strategies)

  1. RT8 required the following:

    The Bidder demonstrates that the resources proposed for conducting the Fleet Optimization Study have previous experience with conducting cost analysis and cost benefit strategies for a study of similar size, scope, and complexity.

  2. For RT8, although three of the evaluators initially awarded Deloitte full points, at the consensus meeting the panel agreed on a final score of ██ points, on the basis that the proposal did not clearly demonstrate that proposed team member ██ had experience conducting cost benefit analysis.

Positions of Parties

  1. Deloitte argues that the DFO ignored relevant information in its proposal. In the description of “████ █████ ███ ███ █” (the project relied upon for ██) at page 34 of its proposal, Deloitte stated that its team “[d]eveloped high level Options Tools to provide cost/benefit analysis and operational effectiveness (including fleet optimization)”.[33] In the response to RT8 for ██ at pages 90-91 of its proposal, Deloitte represented that ██ was the “Lead Project Manager” for this project and, as such, “worked with strategic decision stakeholders . . . to conduct cost analysis and provide understanding of the affordability of the Department’s asset renewal strategy by . . . [c]reating cost-scenario analysis for the financial impacts of various workforce level adjustments”.[34] He also “engaged with senior stakeholders . . . to provide the outputs and status of the detailed cost analysis . . . ”.[35]
  2. At the individual scoring stage, the evaluator (KM), who did not initially award full points on RT8 to Deloitte, wrote the following on her Individual Sheet: “[██] – can’t find cost benefit strategies (90‑91)”.[36] After the consensus scoring, the other evaluators changed their preliminary scores, writing that ██ did not clearly demonstrate experience in “cost benefit strategies” or “cost benefit analysis”.[37]
  3. In the GIR, the DFO submits that none of the statements found on pages 90-91 of Deloitte’s proposal expressly or clearly referred to “cost benefit strategies”, only to “cost analysis” or “cost/benefit analysis” without reference to “strategies”. Further, ██ is not identified on page 34 of the proposal under the heading “Resources Bid in this Proposal for the DFO who were involved in this Project”, although he is described on page 90 as the “Lead Project Manager” for the project.

Analysis

  1. According to the DFO, Deloitte’s proposal should have specifically referenced “cost benefit strategies” and not only “cost analysis”.
  2. The Tribunal finds the evaluators’ decision to be unreasonable for the same reasons articulated regarding RT7: first, it creates a distinction not supported by the tender documents that is, at best, a latent ambiguity if not an undisclosed criterion; and, second, it ignores evidence of compliance in Deloitte’s proposal.
  3. The DFO does not state what, if any, distinction there is between cost benefit “strategies” and cost benefit “analysis”. Neither are the terms “cost analysis” and “cost benefit strategies” defined in the tender documents. Further, the DFO has not introduced any evidence from outside the RFP (as it did with RT7) of a technical definition supporting the finding of a distinction between the terms. The evaluators’ notes also do not disclose what, if any, distinction they believe existed between the terms “cost analysis”, “cost benefit analysis” and “cost benefit strategies”. The proliferation of undefined and thus potentially confusing technical terms in the RFP is exemplified by the fact that one member of the evaluation team justified the consensus determination for RT8 on the basis of the lack of “cost benefit analysis” in Deloitte’s proposal, even though RT8 contains no such term.[38] Thus, the Tribunal finds that the evaluation committee’s determination of non-responsiveness rests on a distinction that is, at best, a latent ambiguity if not an undisclosed criterion, for which Deloitte should not be penalized.
  4. Further, Deloitte’s proposal does detail ██’s experience in cost analysis and cost/benefit analysis, under the relevant project description applying to ██. (The evaluation committee may have ignored this, perhaps by reading Deloitte’s proposal in an unduly compartmentalized fashion.) Although ██ is not identified in the relevant project description on page 34, he is explicitly identified as the Lead Project Manager for this project in the section of the proposal on pages 90-91 directly responding to RT8. As such, it was unreasonable for the evaluators not to acknowledge his experience with the “cost/benefit analysis” tools Deloitte developed and describes in the project description on page 34 of its proposal. Accordingly, while Deloitte could and should have used the precise language of RT8 in its response, it suffices that, in substance, the description of the relevant experience accords with the requirement.
  5. For these reasons, the Tribunal finds this ground of complaint to be valid.

RT10 (Large Asset Acquisition and Asset Management Projects)

  1. RT10 required the following:

    The Bidder demonstrates previous experience with planning implementation for proposed strategies for large asset acquisition and asset management projects.

  2. For RT10, 0 points were to be awarded when “[t]he Bidder does not clearly demonstrate experience relevant to the criterion”, 10 points when “[t]he Bidder clearly demonstrates experience with one project of similar scope and complexity and provides evidence of supporting documentation demonstrating how the example(s) meet the criteria”, and 50 points when “[t]he Bidder clearly demonstrates experience with two or more projects of similar scope and complexity and provides evidence of supporting documentation demonstrating how the example(s) meet the criteria” [emphasis added]. There was no provision for 30 points for RT10.[39]
  3. Although three of the evaluators initially awarded Deloitte the full 50 points, at the consensus meeting the panel agreed on a final score of ██ points, on the basis that the proposal did not clearly demonstrate that Deloitte had previous experience with a single project that involved both large asset acquisition and asset management. The evaluators found it insufficient that Deloitte had proposed ██ ██ ██ of large asset acquisition projects and ██ ██ ██ of a large asset management project (as opposed to one or more projects combining both).

Positions of Parties

  1. Deloitte argues that this decision is unreasonable because it either introduces an undisclosed criterion into the evaluation or results from a latent ambiguity which should be read in Deloitte’s favour.
  2. At the individual scoring stage, only one evaluator deducted points, writing that the “criterion for the project has to be both large asset acquisition and asset management . . .” [emphasis added]. After the consensus scoring, the other evaluators revised their scores to zero, noting, in agreement, that the requirement was conjunctive, without any explanation as to why they had concluded so.[40]
  3. There is no record of what was discussed at the consensus scoring, but in the GIR, the DFO states, either as speculation or based on discussions not evidenced on the record, that the evaluators referred to the Bid Evaluation Score Sheet to determine whether the requirement was conjunctive or not. They observed that the Bid Evaluation Score Sheet allots 10 points for experience with “one project of similar scope and complexity” and 50 points for “two or more projects of similar scope and complexity”. They then read “one project of similar scope and complexity” as referring to the “large asset acquisition and asset management projects” [emphasis added] language from RT10. In essence, the Tribunal understands that the evaluators appear to have surmised that a project concerned solely with asset acquisition or asset management would not be one project of similar scope and complexity, presumably because the “large asset acquisition and asset management projects” language in RT10 uses “and” rather than “or”.

Analysis

  1. The DFO’s arguments are problematic. First, there is no evidence on the record as to why the three evaluators changed their minds at the consensus scoring. In RT6, RT7 and RT8, when individuals agreed to a consensus score lower than their own individual score, it was because the evaluators reconsidered whether a description of experience met the RFP’s burden-of-proof type requirement that the proposal “clearly demonstrate” the required experience—an issue of fact or application of fact. However, even though RT10 raises an issue of the interpretation of the RFP, the DFO gives no explanation of its deliberations.
  2. In any event, the reasoning given by the DFO in the GIR is questionable. When the Bid Evaluation Score Sheet refers to “similar scope and complexity”, it is just as natural to read it to mean one project “each” of asset acquisition and asset management as one project of “both” asset acquisition and asset management. Two reasons support this dual interpretation. First, as a matter of the RFP’s requested deliverables, the DFO has provided no explanation for why the experience of one person based on one project involving both asset acquisition and asset management elements would not be equal to the experience of two persons with experience in one project each of asset acquisition and asset management. This goes to the presumed intention of the parties in the procurement, a relevant factor in interpretation of tender documents.
  3. Second, as a matter of plain meaning interpretation, the use of the conjunctive “and” is not necessarily dispositive, as it is sometimes used in a disjunctive fashion. It is settled law that “and” “may indeed be conjunctive or disjunctive, depending on the context.”[41] This is because it is not always clear whether the writer intends the several version of “and” (A and B, jointly or severally) or the more limited joint version of “and” (A and B jointly, but not severally).[42] Sullivan submits that “and” “tends to be used jointly and severally” but that this may be rebutted by “linguistic considerations or by knowledge of the world”.[43] (The two readings are contrasted in the sketch following this list.)
  4. Deloitte submits that industry experience supports its interpretation of “and” as being used severally, not jointly, in this context. All of the rated criteria from RT6 through RT11 concern only Phase 2 work (the Fleet Optimization Study); they do not involve Phase 1 (Concept of Analysis) or Phase 3 (as‑needed work). The phrase “similar scope and complexity” used in the Bid Evaluation Score Sheet for RT10 is pulled directly from the requirements in RT6, RT7, RT8 and RT9, which all refer to “a study of similar size, scope, and complexity”. The study referenced is clearly the Fleet Optimization Study. The DFO has not submitted any evidence that the Fleet Optimization Study is an asset management study rather than primarily an asset acquisition study. In fact, the asset management portion of the work appears to be the Phase 3 “as needed” management work based on the language of RT12 (Phase 3), which reads as follows: “The Bidder demonstrates previous experience with supporting a large agency or government department as they implement an asset management renewal strategy”[44] [emphasis added]. The above reading of the RFP is also consistent with Deloitte’s argument in its reply to the GIR that industry practice recognizes asset management and asset acquisition as occurring at opposite ends of the asset life cycle: asset acquisitions are usually conducted before, and separately from, asset management projects.
  5. Further, had the DFO intended the narrower interpretation, it could have avoided the latent ambiguity by phrasing the requirement more clearly as “previous experience with planning implementation for proposed strategies for projects involving both (a) large asset acquisition and (b) asset management”. Instead, the DFO phrased the requirement as “previous experience with planning implementation for proposed strategies for large asset acquisition and asset management projects.” The Tribunal finds that by placing the word “projects” after rather than before the particular types requested and by using the ambiguous “and” instead of “both . . . and”, the DFO created a latent ambiguity. The ambiguity was latent both because the requirement naturally reads in the several rather than joint sense, and because the alternative, narrower reading is not supported or suggested (and, if anything, is rebutted) by the RFP’s description of work and categorization of the technical requirements. All of the above supports a several reading, not a joint one. Indeed, the fact that the DFO has admitted in the GIR that its evaluation members themselves did not have a pre-existing understanding of the requirement before reading the proposals and having to apply them to RT10 is compelling evidence that the ambiguity was latent.[45]
  6. For these reasons, the Tribunal finds that the evaluation committee unreasonably interpreted “and” in the joint rather than the several sense. Alternatively, even if the Tribunal’s interpretation is incorrect, at best, the requirement contains a latent ambiguity.
  7. For these reasons, the Tribunal finds this ground of complaint to be valid.
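
The interpretive divergence can be expressed in boolean terms. The sketch below is purely illustrative, and the project names and fields are hypothetical rather than drawn from the proposals: under the joint reading adopted by the evaluators, a single project must combine both elements, while under the several reading advanced by Deloitte, the elements may come from different projects.

    # Purely illustrative; project names and fields are hypothetical.
    projects = [
        {"name": "Project A", "acquisition": True,  "management": False},
        {"name": "Project B", "acquisition": False, "management": True},
    ]

    # Joint reading (the evaluators'): one project must combine both elements.
    joint = any(p["acquisition"] and p["management"] for p in projects)

    # Several reading (Deloitte's): the elements may come from separate projects.
    several = (any(p["acquisition"] for p in projects)
               and any(p["management"] for p in projects))

    print(joint, several)  # False True: the two readings diverge on this record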

RT11 (Costing Society Certification or Commensurate Experience and Technical Ability)

  1. RT11 required the following:[46]

    The Bidder demonstrates that the resources proposed for conducting the cost analysis component of the Fleet Optimization Study possess a certification from an internationally recognized costing certification society, or experience and technical ability commensurate with the requirements to attain the International Cost Estimating and Analysis Association’s Certified Cost Estimator/Analyst certification.

  2. For RT11, the point allocation was as follows:
    • 0 points when “[t]he Bidder does not clearly demonstrate experience relevant to the criterion”;
    • 10 points when “[t]he Bidder’s team lead for the costing component of the Fleet Optimization Study clearly demonstrates they have a junior/introductory level certification that meets the criterion”;
    • 30 points when “[t]he Bidder’s proposed team lead for the costing component of the Fleet Optimization Study clearly demonstrate they have a senior level certification or experience and technical ability commensurate with the requirements to attain the International Cost Estimating and Analysis Association’s Certified Cost Estimator/Analyst certification that meets the criterion, or the majority of personnel for the costing component of the Fleet Optimization Study clearly demonstrate they have a junior/introductory certification that meets the criterion”; and
    • 50 points when “[t]he Bidder’s proposed team lead and all personnel for the costing component of the Fleet Optimization Study clearly demonstrate they have a senior level certification or experience and technical ability commensurate with the requirements of the International Cost Estimating and Analysis Association’s Certified Cost Estimator/Analyst certification that meets the criterion.”[47]

    [Emphasis added]

  3. The language recognizing “experience and technical ability commensurate with” the International Cost Estimating and Analysis Association’s (ICEAA) Certified Cost Estimator/Analyst (CCEA) certification was added by amendment at the request of bidders.[48] Eligibility to take the CCEA certification exam is met by either of the following: (1) (a) a bachelor’s degree in any field from an accredited college; and (b) five years of cost experience; or (2) eight years of cost experience in lieu of a bachelor’s degree. (This eligibility rule is sketched following this list.)
  4. Deloitte’s proposal confirmed that all of its named resources for the Fleet Optimization Study met the educational and experience requirements to take the CCEA certification exam (though none of them had an actual certification).[49]
  5. Although three of the evaluators initially awarded Deloitte full points, at the consensus meeting the panel agreed on a final score of ██ points. The evaluators did not contest that Deloitte met the educational and experience requirements for eligibility to take the exam to obtain the CCEA certification. Rather, they concluded that Deloitte failed to demonstrate “technical ability” comparable to the 16 modules of the ICEAA’s “Testable Topics List”.[50] These modules comprise topics on which applicants for the certification may be tested. In the GIR, the DFO submits that, in order to demonstrate technical ability commensurate with the CCEA certification, bidders needed to provide information clearly demonstrating their experience in all of these testable topics.
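
For illustration only, the exam-eligibility rule described above reduces to a simple disjunction. The function name and parameters in the sketch below are hypothetical, not drawn from the RFP or the ICEAA’s materials.

    # Hypothetical sketch of the CCEA exam-eligibility rule described above:
    # (1) a bachelor's degree plus five years of cost experience, or
    # (2) eight years of cost experience in lieu of a degree.
    def eligible_for_ccea_exam(has_bachelors: bool, years_cost_experience: float) -> bool:
        if has_bachelors:
            return years_cost_experience >= 5
        return years_cost_experience >= 8

    print(eligible_for_ccea_exam(True, 5))   # True
    print(eligible_for_ccea_exam(False, 7))  # False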

Positions of Parties

  1. Deloitte argues that this requirement is either an undisclosed criterion or a latent ambiguity. Specifically, Deloitte argues that the ICEAA documents which the evaluators relied upon, titled “The Certification Program” and “Testable Topics List”, were not included in, or referenced by, the RFP. Deloitte observes that the Testable Topics List is 17 pages long and contains hundreds of sub-topics.
  2. Similar to the other RTs, the evaluation of RT11 involved three evaluators awarding Deloitte full marks when the evaluation was conducted individually, only to change their scores during the consensus evaluation. Deloitte argues that, because three of the evaluators had the same interpretation of RT11 as it did, while the fourth had the opposite interpretation, this is proof of a latent ambiguity.

Analysis

  1. The DFO’s interpretation is unreasonable because it is unfeasible and creates an absurd result. It is also based on requirements not stated in the RFP itself or incorporated by reference and, as such, constitutes an undisclosed criterion. There may be cases where an RFP can be reasonably interpreted as necessarily implying reference to, or incorporation of, an external document or requirement, but this is not such a case, as doing so here by reference to ICEAA’s Testable Topics List renders compliance with the requirement all but impossible. Further, the DFO’s own submissions in the GIR confirm that the evaluation team members did not have any pre-existing understanding before reviewing the proposals of how the equivalency language in RT11 (“experience and technical ability commensurate with the requirements to attain the International Cost Estimating and Analysis Association’s Certified Cost Estimator/Analyst certification”) would be applied. They determined it after the proposals had already been read and over a week after the language had been inserted into the RFP through the amendment process.[51]
  2. Indeed, the DFO does not explain how a proposal would demonstrate compliance with the hundreds of cost estimating topics included in the ICEAA’s Testable Topics List. This is a severe onus on bidders. As the qualification language was provided as an alternative to the requirement of holding a certification, the Tribunal finds that all parties would have intended it to be a real, feasible option included in good faith by the DFO. Further, even if the DFO is correct to assume that there was a requirement to show “technical ability” in addition to experience and education, its decision to require bidders to demonstrate competence in all of the matters in the Testable Topics List is without merit. The list is long and detailed, and it is not clear how a bidder would demonstrate compliance with the numerous technical topics, or that every ICEAA test includes all of these topics rather than a sample of them. In those circumstances, if it were a true requirement, the DFO should have incorporated it explicitly by reference into the RFP before the bid period closed. The evaluation team should not have incorporated criteria from an external document that was (i) not referenced in the tender documents, (ii) not capable of being reasonably complied with and (iii) not contemplated by the DFO or bidders at the time of the amendment.
  3. For these reasons, the Tribunal finds that this ground of complaint is valid.

REMEDY

  1. Having found that Deloitte’s complaint is valid in part, the Tribunal must determine the appropriate remedy, in accordance with subsections 30.15(2) to 30.15(3) of the CITT Act.
  2. Deloitte requests that it be compensated for its lost profits or, alternatively, the lost opportunity it would have realized on the Phase 1 work. Additionally, Deloitte requests that it be awarded the Phase 2 and Phase 3 contracts or, alternatively, that the DFO retender the solicitation for the Phase 2 and 3 contracts; and that it be awarded its costs in bringing this complaint. Deloitte has not requested its bid preparation costs.
  3. The DFO submits that no remedy should be granted in this case because both Deloitte and QinetiQ lost points with respect to the five rated criteria in issue; therefore, the evaluators treated the bidders equally and Deloitte has not proven that, but for the errors, it would have been ranked higher than QinetiQ.
  4. To recommend a remedy, the Tribunal must consider all the circumstances relevant to the procurement in question including the following: (1) the seriousness of the deficiencies found; (2) the degree to which the complainant and all other interested parties were prejudiced; (3) the degree to which the integrity and efficiency of the competitive procurement system was prejudiced; (4) whether the parties acted in good faith; and (5) the extent to which the contract was performed.

Seriousness of the Deficiencies Found in the Procurement Process

  1. Regarding the seriousness of the deficiencies found, the Tribunal finds that the DFO’s evaluation of Deloitte’s proposal in a manner that did not comply with the criteria set out in the RFP is a serious deficiency because the evaluation of proposals in accordance with the criteria stated in tender documentation is a key principle of the trade agreements.

Prejudice to Deloitte

  1. Regarding the degree to which the complainant and all other interested parties were prejudiced, the Tribunal determines that Deloitte has demonstrated that, but for the evaluation team’s errors, it would have received a higher total Best Value Score than QinetiQ.
  2. Deloitte submitted that it required ██ additional technical rated points, relative to QinetiQ, to be identified as the highest ranked bidder based on the Best Value Score, a combination of technical (70%) and price (30%) scores. This is correct on the basis of the formula in the RFP for calculating the Best Value Score and the relative scores of the two bidders, but only if QinetiQ’s absolute score remains unchanged. That is not the case here, however, where the Tribunal has determined that four rated criteria were interpreted too narrowly by the DFO. Consequently, both bidders gain additional points, and the ultimate ranking can only be determined by calculating the total technical score each bidder should have received and then inputting the revised technical scores and the (unrevised) financial scores into the RFP’s formula to determine the Best Value Score.[52] (The recalculation is illustrated in the sketch following this list.)
  3. Deloitte argues that the Tribunal should not consider any adjustment to QinetiQ’s technical score regarding RT6, RT7 and RT8 because all these relate to errors in the DFO’s evaluation unique to Deloitte’s proposal (as opposed to interpretive errors of uniform applicability regarding RT10 and RT11). It also notes that neither the DFO nor QinetiQ has argued that QinetiQ’s proposal with respect to RT6, RT7 or RT8 was improperly scored and thus there is no evidentiary basis to conclude it should have been scored higher.
  4. The Tribunal disagrees. RT6 is not in issue because the Tribunal found that ground of complaint to be invalid. RT7 involved the distinction between MCDA and MCA. RT8 involved the distinction between “cost analysis” and “cost benefit strategies”. Thus, RT7 and RT8 involve interpretive categories that can be applied consistently to QinetiQ’s proposal to the extent (as discussed further below) that the evaluation team may have deducted points from QinetiQ’s proposal for the same reasons based on these distinctions that it deducted points from Deloitte’s proposal.
  5. A word before the Tribunal begins this analysis. In the GIR, the DFO argues that QinetiQ would still have been the highest-ranked bidder regardless of the alleged errors in the evaluation because the evaluation team applied the same interpretation principles to both proposals and QinetiQ, as a result, also lost points. To test this proposition, the Tribunal ordered PWGSC to produce the record of the evaluation of QinetiQ to enable the Tribunal to conduct this analysis. It also offered the parties an opportunity to make submissions on whether Deloitte would have, but for the alleged errors in scoring, been the highest ranked bidder. The DFO filed no submissions other than a one-page letter reiterating its position in the GIR. Deloitte filed a three-page submission that discussed the change in relative scoring based on consensus scores, but did not address on the merits (i.e. with reference to the comments from the Individual Sheets, the text of QinetiQ’s proposal, or its supporting résumés and case studies) whether, for RT6, RT7, RT8, RT10 and RT11, QinetiQ’s proposal was responsive to the requirements as Deloitte argues they should have been interpreted. While the Tribunal did not order that submissions be made, the absence of submissions on the merits and, in particular, the DFO’s lack of any submissions at all, was inconsistent with the importance of this issue of causation, which goes to the heart of the type of remedy to which Deloitte is entitled.
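
The recalculation contemplated above can be sketched as follows. The figures are placeholders rather than the redacted scores on the record, and the formula mirrors the illustrative pro-rating sketch set out earlier in these reasons, not the RFP’s actual text.

    # Placeholder figures only; the formula mirrors the earlier illustrative
    # sketch, not the RFP's actual text.
    def best_value_score(technical_total, bid_price, lowest_price, max_technical=600):
        return 70 * technical_total / max_technical + 30 * lowest_price / bid_price

    lowest_price = 900_000
    revised = {
        "Bidder A": best_value_score(560, 1_000_000, lowest_price),  # ~92.33
        "Bidder B": best_value_score(540, 900_000, lowest_price),    # 93.00
    }
    # The ranking follows from the revised technical and unrevised financial
    # components taken together, not from the technical score alone.
    winner = max(revised, key=revised.get)  # "Bidder B"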

RT7 (Multi-Criteria Decision Analysis)

  1. RT7 and RT8 had the following point allocation scheme:
    • 0 points when “[t]he Bidder does not clearly demonstrate experience relevant to the criterion”;
    • 10 points when “[t]he Bidder’s team lead for the Fleet Optimization Study clearly demonstrates that they have experience that meets the criterion”;
    • 30 points when “[t]he Bidder’s proposed team lead and the majority of the personnel for the Fleet Optimization Study clearly demonstrate that they have experience that meets the criterion”; and
    • 50 points when “[t]he Bidder’s proposed team lead and all personnel for the Fleet Optimization Study clearly demonstrate that they have experience that meets the criterion”.[53]

    [Emphasis added]

  2. The evaluation team awarded Deloitte ██ points, because it identified one Deloitte team member as not having the requisite experience. The Tribunal has found that the team member did have the required experience. Therefore, Deloitte’s score should be revised up from ██ to ██ points.
  3. The evaluation team awarded QinetiQ ██ points, because it found that ██ ██ ██ ██ ██ ██ ██ ██ ██. Initial individual scores were respectively ██ ██ ██ █ and ██ points. The evaluation team member who individually scored QinetiQ’s response █ points initially wrote that ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██. To support the consensus ██ -point score, the member wrote that ██ ██ ██ ██ ██ ██ ██ ██ ████ ██ ██ ██ ██ ██ ██ ██ ██.[54]
  4. The evaluation team member who individually scored QinetiQ’s response ██ points initially wrote that ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ After the consensus scoring and QinetiQ’s response to a clarification question was received, the member wrote that ██ ██ ██ ██ ██ ██ ██ ██ ████ ██ ██ ██ ██ ██ ██ ██ ████ ██ ██ ██ ██ ██ ██ ██ ████ ██ ██ ██ ██ ██ ██ ██ ██.[55]
  5. One of the two evaluation team members who individually scored QinetiQ’s response ██ points wrote the following of two of the proposed QinetiQ team members’ previous experience: “██ ██ ██ ██” and “██ ██ ██ ██ ██ ██ ██ ██ ██”.[56] The other evaluation team member who individually scored ██ points indicated in both the individual and consensus scoring notes that the team leader had MCDA experience, but it is not clear why that evaluator’s mind changed (e.g. whether because of the distinction between MCDA and MCA or some other issue).
  6. Based on the above, the Tribunal cannot conclude that QinetiQ would have received a higher score but for the evaluation team’s distinction between MCDA and MCA. While some of the individual scoring notes mention MCDA, they also express concern with whether the projects relied on by QinetiQ were ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██. No one has challenged this aspect of QinetiQ’s scoring. Given the above and the absence of representations from the DFO or PWGSC providing any evidence on the disparities in the individual scores and why the consensus score was fixed at ██, the Tribunal finds that QinetiQ’s score likely would not have increased. Therefore, its score should not be revised.

RT8 (Cost Analysis and Cost Benefit Strategies)

  1. The evaluation team awarded Deloitte ██ points because it identified one Deloitte team member as not having the requisite experience. The Tribunal has found that the team member did have the required experience. Therefore, Deloitte’s score should be revised up from ██ to ██ ██ ██ ██.
  2. The evaluation team awarded QinetiQ ██ points because it found that ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██. Specifically, individual team members wrote that ██ ██ ██ ██ ██ ██ ██ ██ ████ ██ ██ ██ ██ ██ ██ ██ ████ ██ ██ ██ ██ ██ ██ ██ ████ ██ ██ ██ ██ ██ ██ ██ ████ ██ ██ ██ ██ ██ ██ ██ ████ ██ ██ ██ ██ ██ ██ ██ ████ ██ ██ ██ ██ ██ ██ ██ ██.[57] Nowhere in the Individual Score Sheets or the Summary Sheets is there anything indicating that points were deducted because of a failure to demonstrate cost benefit “strategies” or some other distinction involving cost analysis and cost benefit analysis. All the evidence suggests QinetiQ only lost points because ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██. For these reasons, the Tribunal concludes that there should be no revision to QinetiQ’s score for RT8.

RT10 (Large Asset Acquisition and Asset Management Projects)

  1. For RT10, 0 points were to be awarded when “[t]he Bidder does not clearly demonstrate experience relevant to the criterion”, 10 points when “[t]he Bidder clearly demonstrates experience with one project of similar scope and complexity and provides evidence of supporting documentation demonstrating how the example(s) meet the criteria”, and 50 points when “[t]he Bidder clearly demonstrates experience with two or more projects of similar scope and complexity and provides evidence of supporting documentation demonstrating how the example(s) meet the criteria” [emphasis added]. There was no provision for 30 points for RT10.[58]
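
    The RT10 tiers turn solely on the number of qualifying projects. As a purely illustrative sketch (hypothetical names; not part of the Tribunal's reasoning), reflecting the Tribunal's interpretation that a project qualifies if it involves either large asset acquisition or asset management:

```python
# Illustrative only; reflects the Tribunal's reading that a project
# qualifies with at least one (not both) of the two components.

def qualifies_rt10(has_asset_acquisition: bool, has_asset_management: bool) -> bool:
    return has_asset_acquisition or has_asset_management

def score_rt10(qualifying_projects: int) -> int:
    """RT10 has no 30-point tier: a bidder receives 0, 10 or 50 points."""
    if qualifying_projects >= 2:
        return 50
    if qualifying_projects == 1:
        return 10
    return 0
```
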
  2. The evaluation team awarded Deloitte ██ points, because it found that Deloitte did not identify any projects with both large asset acquisition and asset management components. The Tribunal has found that RT10 only required bidders to list projects with at least one (not both) of these components. Deloitte identified ██ asset acquisition projects and ██ asset management project. Therefore, Deloitte should have received ██ points.
  3. The evaluation team awarded QinetiQ ██ points, because it determined that ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██.[59] Three of the four members of the evaluation team individually gave QinetiQ ██ points, with only one member giving it ██ points. After the consensus meeting, the evaluation team agreed that the ██ ██ ██ ██ ██ ██ project qualified as involving both components, but that the ██ ██ ██ █ ██ project only involved asset maintenance.[60] The Tribunal has found that RT10 only required bidders to list projects with at least one (not both) of these components. Therefore, QinetiQ should have received ██ points.

RT11 (Costing Society Certification or Commensurate Experience and Technical Ability)

  1. For RT11, the point allocation was as follows:
    • 0 points when “[t]he Bidder does not clearly demonstrate experience relevant to the criterion”;
    • 10 points when “[t]he Bidder’s team lead for the costing component of the Fleet Optimization Study clearly demonstrates they have a junior/introductory level certification that meets the criterion”;
    • 30 points when “[t]he Bidder’s proposed team lead for the costing component of the Fleet Optimization Study clearly demonstrate they have a senior level certification or experience and technical ability commensurate with the requirements to attain the International Cost Estimating and Analysis Association’s Certified Cost Estimator/Analyst certification that meets the criterion, or the majority of personnel for the costing component of the Fleet Optimization Study clearly demonstrate they have a junior/introductory certification that meets the criterion”; and
    • 50 points when “[t]he Bidder’s proposed team lead and all personnel for the costing component of the Fleet Optimization Study clearly demonstrate they have a senior level certification or experience and technical ability commensurate with the requirements of the International Cost Estimating and Analysis Association’s Certified Cost Estimator/Analyst certification that meets the criterion.”[61]

    [Emphasis added]

  2. The evaluation team awarded Deloitte ██ points, because it found that neither its team leader nor any of its team members had the requisite certification or its equivalent. The Tribunal has found that RT11 recognizes equivalence based on eligibility to take the certification exam: i.e. (1) (a) a bachelor’s degree in any field from an accredited college; and (b) five years of cost experience; or (2) eight years of cost experience in lieu of a bachelor’s degree. It is not contested that Deloitte’s proposed team lead and members each met the eligibility requirement for a senior-level certification based on a bachelor’s degree and at least five years of costing experience.[62] Therefore, Deloitte should have received ██ points for RT11.
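
    The eligibility test the Tribunal reads into RT11 reduces to a simple disjunction. The sketch below is illustrative only (the names are hypothetical) and mirrors the two routes described above: a bachelor's degree plus five years of cost experience, or eight years of cost experience in lieu of a degree.

```python
# Illustrative sketch of the ICEAA exam-eligibility test as the Tribunal
# reads RT11; function and parameter names are hypothetical.

def iceaa_exam_eligible(has_bachelors_degree: bool,
                        years_cost_experience: float) -> bool:
    if has_bachelors_degree:
        return years_cost_experience >= 5   # route (1): degree plus 5 years
    return years_cost_experience >= 8       # route (2): 8 years in lieu of degree
```

    On this test, Deloitte's proposed team lead and members qualify under route (1), as noted above.
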
  3. The evaluation team awarded QinetiQ ██ points, because it found that ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██.[63] The team member’s résumé indicates that he has a ██ ██ ██ ██ ██. QinetiQ’s proposal represents that he has “██ ██ ██ ██ ██ ██ ██ ██ ██”, citing the ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██.[64] His résumé states that he obtained his ██ ██ ██ ██ ██ in 1984 and lists ██ different projects, but only ██ of these are referenced in the proposal as the basis of his costing experience.[65] Further, unlike Deloitte, QinetiQ did not provide a chart in its proposal identifying the time periods during which the team member worked on each project, for purposes of calculating the minimum five years of experience required for eligibility for the ICEAA certification. The Tribunal therefore finds it questionable whether ██ ██ ██ ██ ██ met the eligibility requirement. Nevertheless, as the evaluation team notes do not identify a failure to meet the eligibility requirement, the Tribunal finds that QinetiQ would have received ██ points for RT11 but for the evaluation team’s decision to require experience in each of the modules of the Testable Topics List.

Conclusion

  1. Based on the above, Deloitte’s score should be raised by ██ points, calculated as follows: RT7 (██ points), RT8 (██ points), RT10 (██ points) and RT11 (██ points). For its part, QinetiQ’s score should be raised by ██ points, calculated as follows: RT7 (██ points), RT8 (██ points), RT10 (██ points) and RT11 (██ points).
  2. Thus, the total technical points Deloitte should have been awarded is ██ rather than ██, and the total technical points QinetiQ should have been awarded is ██ instead of ██.[66] Using the formula in the RFP, Deloitte’s revised Best Value Score is ██ and QinetiQ’s is ██. Accordingly, the Tribunal finds that Deloitte has been prejudiced by the DFO’s evaluation errors because, but for those errors, it would have been the bidder with the highest Best Value Score.
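
    The RFP's actual Best Value formula is in the record (see footnote 52) and is not reproduced in these reasons. Purely to illustrate how revised technical scores propagate into a Best Value ranking, the sketch below assumes a generic weighted formula; the normalization and the 70/30 weighting are assumptions, not terms of the RFP.

```python
# Hypothetical illustration only: the actual formula is in exhibit 9 of
# the GIR and is not reproduced here. Weights and normalization are assumed.

def best_value_score(technical: float, max_technical: float,
                     price: float, lowest_price: float,
                     w_tech: float = 0.70, w_price: float = 0.30) -> float:
    """Combine a normalized technical score with a normalized price score."""
    technical_component = (technical / max_technical) * w_tech * 100
    price_component = (lowest_price / price) * w_price * 100
    return technical_component + price_component
```

    Whatever the precise formula, the point of the exercise above is that both bidders' technical scores must be revised before the formula is reapplied, which is what the Tribunal has done.
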

Prejudice to Procurement System

  1. Regarding the degree to which the integrity and efficiency of the competitive procurement system was prejudiced, the prejudice caused by the non-compliance is real but limited in this instance, as the bidders were treated equally in terms of the evaluation team’s interpretation of the technical criteria at issue. This is not a case where the DFO’s conduct in the procurement process was so prejudicial, unfair and lacking in transparency that a retendering is required. There was a consensus evaluation process in which the evaluators discussed and debated the technical requirements of the RFP and the responsiveness of the bidders’ proposals. The Tribunal has found that the evaluation team misinterpreted or misapplied four rated requirements, but the errors were to a large extent due to imprecision in the drafting of the requirements. There is no evidence that either QinetiQ or Deloitte was prevented from putting its “best foot forward” in terms of its proposal due to the manner in which the DFO conducted the procurement. It is important to note that here, in part because there were only two bidders, the Tribunal has been able to determine which proposal should have received the highest total score. Accordingly, the objectives of maintaining bidders’ confidence in the system, and thereby increasing opportunities for the government to obtain the most advantageous proposal for the services it wants to acquire, are not substantially prejudiced.

Good Faith

  1. Regarding whether the parties acted in good faith, there is no allegation or evidence on the record that the parties acted in bad faith or were biased. Consistent with the prejudice analysis above, the Tribunal finds that the evaluation team’s errors were errors of reasoning, not intentional or reckless conduct.

Contract Performance

  1. Finally, regarding the extent to which the contract was performed, the RFP’s Project Schedule in the Statement of Work contemplated that Phase 1 would be completed six weeks after contract award, by March 31, 2017; Phase 2 (April 1, 2017, through March 31, 2018) would have a targeted deadline of August 2017 for the final Fleet Optimization Study report and “be completed 6 months after option period award”; and Phase 3 (the second option period) would commence after Phase 2 and comprise as-needed support for a one-year period (April 1, 2018, to March 31, 2019).[67]

Conclusion

  1. The Tribunal concludes that the appropriate remedy in this case, based on the above considerations, is an award of lost profits for the Phase 1, Phase 2 and Phase 3 work (to the extent that the options for Phase 2 and Phase 3 have been or will be exercised by PWGSC with respect to QinetiQ’s contract). Because the Tribunal has been able to establish that, but for the evaluation team’s errors, Deloitte would have been the highest-ranked bidder, compensation in the form of discounted lost opportunity rather than lost profits is not appropriate. Further, ordering that the contract be rescinded and awarded to Deloitte is not appropriate, because QinetiQ has already completed the Phase 1 work, the Phase 2 work is complete or substantially underway, and the Phase 3 as-needed consulting work builds on the Phase 1 and Phase 2 work and is thus more appropriately performed by the party that performed the earlier work. Retendering is not appropriate as a remedy for the same reasons. Additionally, the Tribunal concludes that the errors made by the DFO in the evaluation of proposals were not such as to so prejudice the integrity or fairness of the procurement process that a retendering is warranted: neither of the two bidders was prevented from putting its best foot forward, and the Tribunal has been able to establish which of the two should have been the highest-ranked bidder.

COSTS

  1. Both parties requested costs in relation to the proceeding. Given Deloitte’s success on four of the five grounds of its complaint, pursuant to section 30.16 of the CITT Act, the Tribunal awards Deloitte its reasonable costs incurred in preparing and proceeding with the complaint, which costs are to be paid by the DFO. In accordance with the Procurement Costs Guideline (the Guideline), the Tribunal’s preliminary indication of the level of complexity of the complaint is Level 2 and its preliminary indication of the amount of the cost award is $2,750. If any party disagrees with the cost decision, it may make submissions to the Tribunal, as contemplated by article 4.2 of the Guideline. The Tribunal reserves jurisdiction to establish the final amount of the cost award.

DETERMINATION

  1. Pursuant to subsection 30.14(2) of the CITT Act, the Tribunal determines that the complaint is valid in part.
  2. Pursuant to subsections 30.15(2) to 30.15(3) of the CITT Act, the Tribunal recommends that the DFO compensate Deloitte for its lost profits for the Phase 1 work and, to the extent that the DFO has already exercised, or intends to exercise, its options for them, the Phase 2 and Phase 3 work.
  3. The Tribunal further recommends that the parties negotiate the amount of compensation to be paid and report the outcome of the negotiations to the Tribunal within 30 days of the issuance of the statement of reasons for this determination.
  4. Should the parties be unable to agree on the amount of compensation, Deloitte shall file with the Tribunal, within 40 days of the issuance of the statement of reasons for this determination, a submission on the issue of compensation. The DFO will then have seven working days after the receipt of Deloitte’s submission to file a response. Deloitte will then have five working days after the receipt of the DFO’s reply submission to file any additional comments. Counsel are required to serve each other and file with the Tribunal simultaneously.
  5. The Tribunal reserves jurisdiction to establish the final amount of the compensation.
 

[1].     R.S.C., 1985, c. 47 (4th Supp.) [CITT Act].

[2].     Exhibit PR-2016-069-13, exhibit 1 at 75-76, Vol. 1A. Throughout, the page numbers cited in the footnotes are those of the (consecutively numbered across all internal documents) PDF page numbers of each of the exhibits on the Tribunal record—not the printed page numbers of the internal documents included in each exhibit. Thus, PDF page 75 of exhibit 1 of exhibit PR-2016-069-13 corresponds to printed page 34 of exhibit 1 (the RFP) of the Government Institution Report (GIR).

[3].     Ibid. at 59-62.

[4].     Ibid. at 13-14, paras. 24-26.

[5].     Exhibit PR-2016-069-13A, exhibit 7, Vol. 2A (protected).

[6].     Ibid., exhibit 8.

[7].     18 July 1994, C. Gaz. 1995.I.1323, online: Internal Trade Secretariat <http://www.ait-aci.ca/agreement-on-internal-trade/> [AIT].

[8].     North American Free Trade Agreement between the Government of Canada, the Government of the United Mexican States and the Government of the United States of America, 17 December 1992, 1994 Can. T.S. No. 2, online: Global Affairs Canada <http://international.gc.ca/trade-commerce/trade-agreements-accords-comme... (entered into force 1 January 1994) [NAFTA].

[9].     Revised Agreement on Government Procurement, online: World Trade Organization <http://www.wto.org/english/docs_e/legal_e/rev-gpr-94_01_e.htm> (entered into force 6 April 2014) [AGP].

[10].   Samson & Associates v. Department of Public Works and Government Services (19 October 2012), PR-2012-012 (CITT) [Samson] at para. 28.

[11].   Northern Lights Aerobatic Team, Inc. v. Department of Public Works and Government Services (7 September 2005), PR-2005-004 (CITT) at para. 52.

[12].   Samson at paras. 26-27.

[13].   Exhibit PR-2016-069-13, exhibit 3, 14, Vol. 1A.

[14].   Exhibit PR-2016-069-13A, exhibit 6 at 64, 74 and 136, Vol. 2A (protected).

[15].   Ibid., exhibit 7 at 255.

[16].   Ibid. at 273.

[17].   Ibid. at 264, 273 and 282.

[18].   ██’s work on the NZDF White Paper is described on page 83 of Deloitte’s bid.

[19].   See, for example, Samson at para. 46 (distinguishing between repeating requirements and providing substantive description of experience).

[20].   Exhibit PR-2016-069-13, exhibit 1 at 53, Vol. 1A.

[21].   Samson at para. 28.

[22].   IBM Canada Ltd. (5 November 1999), PR-99-020 (CITT).

[23].   CGI Information Systems and Management Consultants Inc. v. Canada Post Corporation, 2015 FCA 272 (CanLII) at para. 83.

[24].   TPG Technology Consulting Ltd. v. Canada, 2014 FC 933 (CanLII) at para. 151.

[25].   Exhibit PR-2016-069-13A, exhibit 7 at 255, Vol. 2A (protected).

[26].   Ibid. at 273.

[27].   Ibid. at 264, 273 and 282.

[28].   Exhibit PR-2016-069-13 at 25, Vol. 1A.

[29].   Exhibit PR-2016-069-13A, exhibit 6 at 141-142, Vol. 2A (protected).

[30].   Ibid. at 118.

[31].   Primex Project Management Ltd. (22 August 2002), PR-2002-001 (CITT) at 10.

[32].   IBM Canada Ltd. (24 April 1998), PR-97-033 (CITT).

[33].   Exhibit PR-2016-069-13A, exhibit 6 at 87, Vol. 2A (protected).

[34].   Ibid., exhibit 7 at 143.

[35].   Ibid. at 144.

[36].   Ibid. at 283.

[37].   Ibid. at 256, 265 and 274.

[38].   Ibid. at 265.

[39].   Exhibit PR-2016-069-13, exhibit 3 at 102, Vol. 1A.

[40].   Exhibit PR-2016-069-13A, exhibit 7 at 257, 266, 275 and 284, Vol. 2A (protected).

[41].   Seck v. Canada (Procureur général), 2012 FCA 314 (CanLII) at para. 47.

[42].   R. Sullivan, Sullivan on the Construction of Statutes, 6th ed., s. 4.97 at p. 101.

[43].   Ibid., ss. 4.98-4.99 at pp. 101-102.

[44].   Exhibit PR-2016-069-13, exhibit 3 at 103, Vol. 1A.

[45].   Ibid. at paras. 78-81.

[46].   Ibid., exhibit 5 at 120.

[47].   Ibid., exhibit 3 at 102; Ibid., exhibit 4 at 108.

[48].   The ICEAA sets out the requirements for its CCEA certification at the following address: http://www.iceaaonline.com/certification-matters/the-certification-program/.

[49].   Exhibit PR-2016-069-01A, tab B at 214-218, Vol. 2 (protected).

[51].   Exhibit PR-2016-069-13 at 36, para. 93, Vol. 1A.

[52].   The evaluation team’s calculation of each bidder’s Best Value Score using their technical score, financial score and the RFP formula, is presented in exhibit 9 of the GIR.

[53].   Exhibit PR-2016-069-13, exhibits 3 and 14, Vol. 1A.

[54].   Exhibit PR-2016-069-20, attachment 2 at 68, Vol. 2A (protected).

[55].   Ibid. at 56.

[56].   Ibid. at 64.

[57].   Ibid. at 57, 61, 65 and 69.

[58].   Exhibit PR-2016-069-13, exhibit 3 at 102, Vol. 1A.

[59].   Exhibit PR-2016-069-20, attachment 3 at 78, Vol. 2A (protected).

[60].   Ibid., attachment 2, at 58 ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██, 62 ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██, 66 ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ and 70 ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██.

[61].   Exhibit PR-2016-069-13, exhibit 3 at 102, Vol. 1A; Ibid., exhibit 4 at 108.

[62].   Exhibit PR-2016-069-13A, exhibit 6 at 156-160, Vol. 2A (protected).

[63].   Exhibit PR-2016-069-20, attachment 3 at 79, Vol. 2A (protected).

[64].   Ibid., attachment 1 at 49.

[65].   Ibid. at 45.

[66].   Exhibit PR-2016-069-13A, exhibit 9, Vol. 2A (protected).

[67].   Exhibit PR-2016-069-13, exhibit 1 at 79-80, Vol. 1A.