
File No. PR-2020-085

ACT for Performance Inc.

v.

Department of Foreign Affairs, Trade and Development

Determination and reasons issued
Tuesday, June 15, 2021



IN THE MATTER OF a complaint filed by ACT for Performance Inc. pursuant to subsection 30.11(1) of the Canadian International Trade Tribunal Act, R.S.C., 1985, c. 47 (4th Supp.);

AND FURTHER TO a decision to conduct an inquiry into the complaint pursuant to subsection 30.13(1) of the Canadian International Trade Tribunal Act.

BETWEEN

ACT FOR PERFORMANCE INC.

Complainant

AND

THE DEPARTMENT OF FOREIGN AFFAIRS, TRADE AND DEVELOPMENT

Government Institution

DETERMINATION

Pursuant to subsection 30.14(2) of the Canadian International Trade Tribunal Act (CITT Act), the Canadian International Trade Tribunal determines that the complaint is not valid.

Pursuant to section 30.16 of the CITT Act, the Tribunal awards the Department of Foreign Affairs, Trade and Development its reasonable costs incurred in responding to the complaint, which costs are to be paid by ACT for Performance Inc. In accordance with the Procurement Costs Guideline (Guideline), the Tribunal’s preliminary indication of the level of complexity for this complaint is Level 2, and its preliminary indication of the amount of the cost award is $2,750. If any party disagrees with the preliminary level of complexity or indication of the amount of the cost award, it may make submissions to the Tribunal, as contemplated in Article 4.2 of the Guideline. The Tribunal reserves jurisdiction to establish the final amount of the cost award.

Frédéric Seppey
Presiding Member


 

Tribunal Panel:

Frédéric Seppey, Presiding Member

Tribunal Counsel:

Michael Carfagnini, Counsel

Complainant:

ACT for Performance Inc.

Counsel for the Complainant:

Manon Lavoie

Government Institution:

Department of Foreign Affairs, Trade and Development

Counsel for the Government Institution:

Mark Magro
Victor Au

Please address all communications to:

The Deputy Registrar
Telephone: 613-993-3595
E-mail: citt-tcce@tribunal.gc.ca

 


STATEMENT OF REASONS

OVERVIEW

[1] The present complaint concerns a Request for Proposals (RFP) by the Department of Foreign Affairs, Trade and Development (DFATD) for the provision of consulting and professional services (Solicitation No. 2020-7412621-2).

[2] The procurement sought consulting and professional services to evaluate seven international development assistance projects funded by DFATD, each implemented by a Canadian civil society organization in partnership with local organizations in developing countries.

SUMMARY OF THE COMPLAINT

[3] The complainant, ACT for Performance Inc. (ACT), argues that the proposal it submitted in response to the RFP was evaluated in an erroneous and arbitrary manner, resulting in an inappropriately low score for ACT’s proposal and the award of the resulting contract to another bidder. Specifically, ACT alleges that DFATD unreasonably evaluated its proposal, ignored vital information provided in its bid, wrongly interpreted the scope of the requirements and based its evaluation on undisclosed criteria.

[4] As a remedy, ACT requests the following:

· that the contract be terminated, that its bid be re-evaluated and that it be awarded the contract if successful upon re-evaluation;

· compensation for lost profits; or

· its costs incurred in preparing the bid and in preparing and proceeding with the complaint.

[5] In response, DFATD submitted that ACT failed to discharge its onus to demonstrate unambiguously how its proposal meets the rated requirements under the RFP and that the evaluators’ conclusions in this regard were reasonable.

PROCEDURAL BACKGROUND

[6] On May 7, 2020, DFATD published the tender notice and RFP on Buyandsell.gc.ca. [1]

[7] On June 5, 2020, DFATD issued Addendum 1 to extend the closing date and answer five questions. [2]

[8] On June 30, 2020, DFATD issued Addendum 2 to answer five questions. [3]

[9] On July 8, 2020, DFATD issued Addendum 3 to answer one question by confirming that the closing date could not be further extended. [4]

[10] On July 9, 2020, DFATD issued Addendum 4 to answer one question. [5]

[11] On July 15, 2020, the RFP closed as amended. The same day, ACT submitted its proposal in response to the RFP. [6]

[12] On December 18, 2020, ECORYS Nederland B.V. (ECORYS) was awarded the contract as the only bidder whose proposal passed the technical evaluation.

[13] On January 7, 2021, ACT received a regret letter from DFATD informing it that it was not the successful bidder under the solicitation and that the contract had been awarded to ECORYS. DFATD’s letter stated that, although ACT’s proposal was found to be responsive to the mandatory requirements of the solicitation, it did not achieve the minimum pass mark required under the terms of the RFP. [7]

[14] On January 11, 2021, ACT emailed DFATD to request information regarding the evaluation of its bid. The same day, DFATD sent ACT a debriefing report regarding the evaluation of its bid. [8]

[15] On January 13, 2021, ACT replied to DFATD, contesting certain elements of the debriefing report, and requested a revision of the evaluation. [9]

[16] On January 19, 2021, the notice of contract award was published on Buyandsell.gc.ca.

[17] On January 26, 2021, DFATD sent ACT a document providing further details regarding the evaluation of its bid. [10]

[18] On January 28, 2021, after reviewing the debriefing documents, ACT informed DFATD of its intention to file a complaint with the Tribunal. [11]

[19] On February 5, 2021, ACT filed the present complaint with the Tribunal. [12]

[20] On February 10, 2021, the Tribunal acknowledged receipt of the complaint. [13]

[21] On February 11, 2021, the Tribunal decided to conduct an inquiry into the complaint.

[22] On February 15, 2021, the Tribunal informed ACT that the complaint had been accepted for inquiry. [14]

[23] Also on February 15, 2021, the Tribunal informed DFATD that the complaint had been accepted for inquiry and requested information with regard to the contract awardee. In the letter accepting the complaint for inquiry, the Tribunal noted that the solicitation documents stated that no trade agreements applied to the procurement. The Tribunal therefore directed DFATD to address, in its Government Institution Report (GIR) to be submitted under subrule 103(1) of the Canadian International Trade Tribunal Rules, [15] the preliminary issue of the Tribunal’s jurisdiction to inquire into the complaint. [16]

[24] On February 18, 2021, ACT filed its public complaint with the Tribunal as well as additional public and confidential attachments to the complaint. [17]

[25] On February 19, 2021, the Department of Public Works and Government Services (PWGSC) forwarded to the Tribunal correspondence from DFATD authorizing counsel for PWGSC to represent DFATD in these complaint proceedings. [18] The same day, under separate cover, PWGSC, on behalf of DFATD, wrote to the Tribunal acknowledging receipt of the complaint on February 17, 2021, confirming that a contract had been awarded to ECORYS and providing copies of the tender documents. [19]

[26] Also on February 19, 2021, ACT filed additional public attachments to the complaint, specifically the public version of its proposal in response to the RFP. [20]

[27] On February 22, 2021, the Tribunal wrote to ECORYS advising that the complaint had been accepted for inquiry and that, should it wish to intervene, it must seek leave of the Tribunal. [21] ECORYS did not seek such leave.

[28] On February 23, 2021, DFATD emailed the Tribunal requesting clarification of the confidentiality designations of submissions filed by ACT as well as confirmation of the time limit for filing the GIR, in light of the additional submissions filed by ACT after it had acknowledged receipt of the complaint on February 17, 2021.

[29] On February 24, 2021, the Tribunal advised DFATD that, for the purpose of calculating the time limit for filing the GIR, it could consider the complaint as having been received as of February 23, 2021.

[30] On February 25, 2021, DFATD wrote to the Tribunal acknowledging receipt of the complaint on February 23, 2021, and advising the Tribunal of its recalculated time limit for filing the GIR. [22]

[31] On March 22, 2021, DFATD submitted the public and confidential versions of its GIR. [23]

[32] On March 23, 2021, the Tribunal wrote to the parties acknowledging receipt of the public and confidential versions of the GIR and advising them of the time limit for ACT to file comments on the GIR. [24]

[33] On March 30, 2021, ACT requested an extension of time to file its comments on the GIR. DFATD consented to the request on March 31, 2021. [25]

[34] On April 1, 2021, the Tribunal wrote to the parties granting ACT’s request for an extension of time to file comments on the GIR and informing the parties that, as a result of the extension, it would issue its findings and recommendations within 135 days of the filing of the complaint, in accordance with paragraph 12(c) of the Canadian International Trade Tribunal Procurement Inquiry Regulations (Regulations). [26]

[35] On April 15, 2021, ACT filed its public and protected comments on the GIR. [27]

[36] On April 20, 2021, DFATD requested to file a response to the comments on the GIR. [28] On April 21, 2021, the Tribunal granted DFATD’s request to respond to the comments on the GIR. [29]

[37] On April 29, 2021, DFATD filed its reply to the comments on the GIR. [30]

[38] Given that there was sufficient information on the record to determine the validity of the complaint, the Tribunal decided that an oral hearing was not required and ruled on the complaint based on the written record.

PRELIMINARY ISSUE: TRIBUNAL JURISDICTION

[39] Subsection 30.11(1) of the Canadian International Trade Tribunal Act (CITT Act) provides that “a potential supplier may file a complaint with the Tribunal concerning any aspect of the procurement process that relates to a designated contract and request the Tribunal to conduct an inquiry into the complaint.”

[40] Section 30.1 of the CITT Act defines the term “designated contract” as “a contract for the supply of goods or services that has been or is proposed to be awarded by a government institution and that is designated or of a class of contracts designated by the regulations.”

[41] Subsection 3(1) of the Regulations reads as follows:

For the purposes of the definition designated contract in section 30.1 of the Act, any contract or class of contract concerning a procurement of goods or services or any combination of goods or services, as described in Article 1001 of NAFTA, in Article II of the Agreement on Government Procurement, in Article Kbis-01 of Chapter Kbis of the CCFTA, in Article 1401 of Chapter Fourteen of the CPFTA, in Article 1401 of Chapter Fourteen of the CCOFTA, in Article 16.02 of Chapter Sixteen of the CPAFTA, in Article 17.2 of Chapter Seventeen of the CHFTA, in Article 14.3 of Chapter Fourteen of the CKFTA, in Article 19.2 of Chapter Nineteen of CETA, in Article 504 of Chapter Five of the CFTA, in Article 10.2 of Chapter Ten of CUFTA or in Article 15.2 of Chapter Fifteen of the TPP, that has been or is proposed to be awarded by a government institution, is a designated contract.

[42] Therefore, in order for the Tribunal to have jurisdiction to conduct an inquiry into a complaint by a potential supplier, the complaint must be in respect of a designated contract. This means, inter alia, that it must concern a procurement of goods or services, or any combination thereof, as described in the provisions of the trade agreements that are listed in subsection 3(1) of the Regulations. These provisions are the “scope and coverage” articles of the agreements. The Federal Court of Appeal and the Supreme Court of Canada have likened the trade agreements to “doors” into the jurisdiction of the Tribunal. [31]

[43] In the tender notice published May 7, 2020, DFATD indicated that this procurement is not covered by any trade agreements. As outlined above, in the letter advising DFATD that the complaint had been accepted for inquiry, the Tribunal directed DFATD to address the preliminary issue of the Tribunal’s jurisdiction.

[44] In the GIR, DFATD submitted that the tender notice indicated that no trade agreements apply to the procurement because it is part of DFATD’s international development assistance program called “Technological Platforms to Strengthen Public Sector Accountability and Citizen Engagement.”

[45] The trade agreements exclude from coverage the procurement of goods and services conducted for the specific purpose of providing international assistance, including development aid. [32] However, DFATD submitted that, upon careful review, this particular contract is not excluded from coverage under the trade agreements. The services being procured under the contract are project evaluation services. Based on the estimated value of the contract at the time the procurement was initiated, DFATD submitted that all of the procurement chapters of the trade agreements apply to this procurement, and it did not dispute the Tribunal’s jurisdiction to inquire into this procurement process. [33]

[46] Based on DFATD’s submissions, the Tribunal concludes that ACT’s complaint is in respect of a designated contract as required by subsection 30.11(1) of the CITT Act. The Tribunal therefore has jurisdiction to conduct an inquiry into the complaint.

ANALYSIS

[47] ACT alleges that its proposal was deficiently evaluated contrary to the terms set out in the RFP, that the evaluators demonstrated a lack of familiarity with the criteria and terms of the RFP, that the evaluators incorrectly assessed certain criteria and made personal evaluations contrary to the terms of the RFP, and that, as a result, its proposal did not receive the points to which it was entitled for the rated criteria outlined below.

[48] Specifically, ACT contests the consensus evaluation scores awarded to its proposal under the following rated technical criteria: R.2.1 B; R.2.1 C; R.2.2 A; R.2.2 B; R.2.2 C; R.2.3 A; and R.2.3 B. [34]

Standard of review

[49] In the GIR, DFATD sets out three principles which it submitted should guide the Tribunal’s analysis.

[50] First, DFATD submitted that the onus is on bidders to exercise due diligence in the preparation of their bids to ensure that they demonstrate compliance with the requirements set out in the solicitation. DFATD cites Samson & Associates, where the Tribunal stated as follows:

It is also well established that there is an onus on bidders to demonstrate how their proposals meet the mandatory and rated criteria published in the solicitation documents. Stated another way, the responsibility for ensuring that a proposal is compliant with all essential elements of a solicitation or meets the rated criteria ultimately resides with the bidder. [35]

[Footnote omitted]

[51] Second, DFATD submitted that the onus is on bidders to ensure that their proposal’s responses to requirements are unambiguous and can be readily understood by evaluators. DFATD cites Raymond Chabot, where the Tribunal stated “it is incumbent upon the bidder to exercise due diligence in the preparation of its proposal to ensure that it is unambiguous and properly understood” by the procuring entity. [36] DFATD further cites Madsen Power Systems, where the Tribunal stated that “the requirement to demonstrate compliance cannot be abridged or left to inference.” [37]

[52] Third, DFATD submitted that the Tribunal’s standard of review in respect of determinations by evaluators is that of reasonableness. DFATD again cites Samson & Associates, where the Tribunal stated as follows:

The Tribunal typically accords a large measure of deference to evaluators in their evaluation of proposals. Therefore, the Tribunal has repeatedly stated that it will interfere only with an evaluation that is unreasonable and will substitute its judgment for that of the evaluators only when the evaluators have not applied themselves in evaluating a bidder’s proposal, have ignored vital information provided in a bid, have wrongly interpreted the scope of a requirement, have based their evaluation on undisclosed criteria or have otherwise not conducted the evaluation in a procedurally fair way. In addition, the Tribunal has previously indicated that a government entity’s determination will be considered reasonable if it is supported by a tenable explanation, regardless of whether the Tribunal itself finds that explanation compelling. [38]

[Footnotes omitted]

[53] DFATD argues that ACT failed to discharge its onus to unambiguously demonstrate how its proposal meets the rated requirements under the RFP and that the evaluators’ conclusions in this regard were reasonable.

[54] DFATD also contests as inaccurate ACT’s characterization of the evaluation as removing or subtracting points which ACT “lost” or had “deducted”. [39] DFATD submitted rather that proposals started with no points and that points could then be awarded based on demonstration of fulfillment of the applicable criteria in accordance with the point breakdown described in each rated criterion, consistent with the RFP and the “Evaluation Team Basic Guidelines” (evaluation guidelines), which were signed by each evaluator. [40]

[55] In its comments on the GIR, ACT explicitly does not dispute the principle that bidders bear the onus of exercising due diligence in demonstrating that their proposals meet the requirements set out in the solicitation in a way that is unambiguous and can be readily understood by evaluators. [41] Nor does ACT dispute that the Tribunal owes deference to evaluators and will interfere only where an evaluation is unreasonable. In addition to Tribunal decisions on this point, ACT cites the recent decision of the Supreme Court of Canada in Vavilov, which holds that a reasonable decision is one (1) based on internally coherent reasoning and (2) justified in light of the legal and factual constraints that bear on the decision. [42]

[56] ACT argues that the evaluation of its proposal was unreasonable because the evaluators did not apply themselves, ignored vital information provided in the bid, and based their evaluation on undisclosed criteria, and that the evaluation was internally irrational, contrary to the first characteristic of a reasonable decision set out in Vavilov.

[57] In its reply to the comments on the GIR, DFATD expressly denies all allegations made by ACT except to the extent that they are expressly adopted or accepted in DFATD’s submissions. [43]

Analysis

[58] The parties agree that, in assessing whether procedures in tender documentation have been followed, the Tribunal shows deference to evaluators and interferes only if an evaluation is unreasonable. The Federal Court of Appeal has described the Tribunal’s role in this regard as “to decide if the evaluation is supported by a reasonable explanation, not to step into the shoes of the evaluators and reassess the unsuccessful proposal.” [44] The reasonableness standard also applies to review of the procuring entity’s interpretation of the procurement documents. [45]

[59] DFATD is also correct that bidders bear the onus of unambiguously demonstrating how their proposals fulfill the requirements of the solicitation, including any point-rated criteria; however, there is nuance in the authorities DFATD cites on this issue. The Tribunal’s statements in both Raymond Chabot and Madsen Power Systems, cited by DFATD above, were made specifically in the context of compliance with mandatory criteria, as opposed to point-rated criteria such as are at issue in the present complaint. [46] That said, the Tribunal has applied this reasoning in reviewing the evaluation of point-rated criteria as well. [47]

[60] The Tribunal also does not fully accept DFATD’s argument (made most explicitly with regard to rated criterion R.2.2 B) that the evaluators could not consider information labelled as responding to one requirement in assessing whether ACT’s proposal fulfilled a different requirement. [48]

[61] In Star Group, the Tribunal noted that the specific terms “training, risk assessment, risk mitigation and sub-contractor safety,” which did not appear in the request for abbreviated proposals, were applied in evaluating proposals against a requirement to demonstrate “Health and Safety policy, procedures and practices.” [49] The Tribunal found the evaluation against this requirement to be unreasonable on the basis that evaluators failed to consider how information contained elsewhere in the complainant’s bid satisfied at least some of these terms, such as risk mitigation. [50]

[62] Further, in Deloitte, the Tribunal found that evaluators adopted an overly narrow reading of point-rated personnel experience requirements based on a distinction between the terms “multi‑criteria decision analysis (MCDA)” (as set out in the RFP) and “multi-criteria analysis (MCA)” referred to in the evaluated proposal. The government argued, based on textbook and journal definitions, that these types of analyses are related but distinct. The Tribunal found that this distinction was unsupported by the text of the RFP, constituting a “latent ambiguity” of which the potential supplier is unlikely to become aware until learning of the results of the evaluation, and for which it therefore should not be penalized. [51]

[63] The Tribunal in Deloitte further found the evaluation unreasonable on the basis that, even if this distinction were justified, the proposed personnel resource met the MCDA requirement according to the government’s own definition of the term. [52] The Tribunal concluded as follows:

Thus, this is not a case where the Tribunal is second-guessing the evaluation committee’s exercise of judgment or discretion, but rather where the evaluators have failed to properly consider the substance of the proposal by deeming it non-responsive based on mere semantics. Moreover, the DFO ignored vital information in the proposal that demonstrated compliance and consistency with the evaluators’ own narrow and unsupported interpretation of the requirement. Just as it is improper for a bidder to attempt to demonstrate compliance by merely repeating the quoted requirements word for word, it is improper for evaluators to find non-compliance based only on a failure to repeat the proper code words from the RFP rather than by looking into the substance of the proposal itself. This is especially pertinent in the scenario where the evaluators relied on a subtle one-word distinction between two technical terms that were not defined in any of the tender documents. [53]

[64] Based on the above, the Tribunal will consider the complaint according to the reasonableness standard, as previously established in jurisprudence and agreed by the parties. However, in conducting its analysis, the Tribunal will consider not only the evaluators’ assessment as to whether the proposal demonstrated compliance with the terms of the RFP, but also whether their reading of those terms was itself reasonable in the sense of not being overly or unjustifiably narrow. The Tribunal will further assess whether the evaluators may have ignored information in ACT’s proposal which demonstrated fulfillment of a given criterion. That said, the Tribunal notes that all rated criteria at issue, except for rated criterion R.2.3 B, provide for awarding 0 points in respect of a “limited” or “incomplete” explanation. In the Tribunal’s view, this means that even if a proposal speaks in some respects to the requirements under a given criterion, the evaluators nevertheless enjoyed relatively broad discretion in assessing its fulfillment, so long as their decision complies with the elements of reasonableness outlined above.

Rated criterion R.2.1

[65] Rated criterion R.2.1, titled “Evaluation Approach and Methodology,” requires the bidder to “demonstrate its understanding of the services described in the ToR (Terms of Reference) by describing in detail its intended evaluation approach and methodology.” [54]

[66] Section 5 of the ToR, titled “Evaluation Methodology and Approach,” provides that “[t]he evaluation will utilize Theory-based and Case-based approaches along with contribution analysis.” [55]

[67] Evaluation under rated criterion R.2.1 is divided into five sub-criteria, labelled A through E, each worth up to 25 points. Although each is worth up to 25 points, the RFP indicates that only three possible scores could be granted for each: “not demonstrated”, earning 0 points; “well demonstrated”, earning 18 points; and “fully demonstrated”, earning 25 points. [56]

[68] ACT argues that the evaluators wrongly awarded its proposal 0 out of 25 possible points for each of rated criteria R.2.1 B and R.2.1 C.

Rated criterion R.2.1 B

[69] Rated criterion R.2.1 B assesses “the bidder’s understanding of evaluation approaches and methodology.” The debriefing documents submitted in the public attachments to ACT’s complaint indicate that its proposal received 0 out of a possible 25 points for rated criterion R.2.1 B. ACT submitted that this evaluation was incorrect because its proposal demonstrated all the information required under the criterion.

[70] Specifically, ACT contests the evaluators’ conclusions in the debriefing documents that it “proposed irrelevant approaches and provided incomplete/limited explanation of the proposed methodology” by proposing a “realist evaluation approach” while the ToR clearly state that a “contribution analysis” is required. The evaluators’ comments further conclude that ACT’s bid did not explain its understanding of contribution analysis or how this would be applied in the context of the evaluation mandate, and provided a “very incomplete” explanation of how ACT would answer evaluation questions 5 and 6 in the ToR. [57]

[71] ACT submitted that these comments reflect the evaluators’ poor understanding of the evaluation terminology set out in the RFP, as well as the technical processes referenced in both the RFP and ACT’s proposal. ACT submitted that its proposal clearly describes and demonstrates ACT’s understanding of a contribution analysis, as opposed to “the process of attribution”. [58] Further, ACT submitted that an entire section of its proposal explains how it would answer questions 5 and 6. [59]

[72] In the GIR, DFATD submitted that ACT’s proposal does refer to assessment of attribution, [60] and that the proposal’s discussion of contribution analysis merely provides a generic definition of the term. [61]

[73] Regarding whether the proposal explained how ACT would answer questions 5 and 6 in the ToR, DFATD submitted that ACT proposed to use a realist-based approach to determine “whether and how an intervention contributed to observed results,” whereas questions 5 and 6 focus on assumptions to be assessed through a review and reconstruction of the theory of change (TOC) for each project. [62]

[74] DFATD argues that the evaluators awarded ACT 0 points under this criterion because its proposal did not achieve the level of “Well demonstrated – Acceptable and adequate explanation” (which would have been worth 18 points), but rather provided an “incomplete or limited explanation or irrelevant approach and methodology”, elements described in the “Not Demonstrated” category (worth 0 points).

- Analysis

[75] As highlighted by DFATD, ACT’s proposal does provide a simple definition of contribution analysis; however, it also notes the utility of this type of analysis in reconstructing the TOC and states that it will be used to do so. [63] While ACT’s proposal does provide for considering attribution, [64] the description of contribution analysis linked to in section 5 of the RFP makes clear that attribution is a problem to be addressed. [65] That said, the discussion of contribution analysis in ACT’s proposal is limited to a few sentences and lacks depth.

[76] Based on the above, the Tribunal considers that this lack of detail on key concepts set out in the ToR could be reasonably assessed as not adequately demonstrating an understanding of the evaluation approaches and methodology. As such, the Tribunal finds reasonable the evaluators’ conclusion that ACT’s explanation in this regard was limited, consistent with the rationale for an award of 0 points under rated criterion R.2.1 B.

[77] ACT also submitted that an entire section of its proposal explains how it would answer questions 5 and 6. DFATD submitted that ACT proposed to use a realist-based approach to determine “whether and how an intervention contributed to observed results,” whereas questions 5 and 6 focus on assumptions to be assessed through a review and reconstruction of the TOC for each project.

[78] Although rated criterion R.2.1 B does not explicitly reference answering questions 5 and 6, it does require bidders to demonstrate both their understanding of the services described in the ToR and how they will reconstruct the evaluated projects’ TOC, which section 3 of the ToR makes clear is a (if not the) major focus of evaluating questions 5 and 6. The information ACT points to in its proposal regarding how it intends to answer questions 5 and 6 is limited to two sentences essentially acknowledging that it is expected to do so. [66] In the Tribunal’s view, it was therefore reasonably open to the evaluators to conclude that ACT’s proposal provided an “incomplete or limited explanation” of how it intended to answer these questions.

[79] The information ACT points to in its proposal as meeting rated criterion R.2.1 B does speak to the elements discussed in the evaluators’ comments; however, these references are limited to a few sentences. In the Tribunal’s view, ACT has not demonstrated that the evaluators either unreasonably construed the requirements or ignored information in ACT’s bid which demonstrated their fulfillment. Although the evaluators may have emphasized different elements of the evaluation approaches and methodology than did ACT’s proposal, the Tribunal finds that it was reasonably open to them to do so.

[80] Based on the above, the Tribunal finds reasonable the evaluation of ACT’s proposal under rated criterion R.2.1 B. As such, the Tribunal finds that this ground of complaint is not valid.

Rated criterion R.2.1 C

[81] The debriefing documents indicate that ACT received 0 out of a possible 25 points for rated criterion R.2.1 C. ACT submitted that this evaluation was incorrect because its proposal demonstrated all the information required under the criterion.

[82] Rated criterion R.2.1 C evaluates “the bidder’s understanding and its pragmatic application in reconstructing theories of change and in defining evidence-based assumptions to be tested by the evaluation.” [67]

[83] ACT contests the conclusion in the debriefing documents that its bid did not sufficiently explain its pragmatic application of reconstructing theories of change or defining evidence-based assumptions to be tested by the evaluation, and the evaluators’ observation that the proposal “remains at the level of principles” in this regard. [68]

[84] Regarding TOC, ACT submitted that section 6.2.1 of its proposal, titled “Evaluation approach, lines of evidence and sampling” sets out the methodology “on a theory-driven approach that implies a reconstruction and testing” of the TOC, or “intervention logic”, of the project as defined in the RFP. [69]

[85] ACT further submitted that section 6.2.4, titled “Best practices and lessons learned to inform the reconstruction of the Theory of Change” explains the capabilities ACT has acquired through previous evaluation projects regarding the application of TOC reconstruction. [70]

[86] Finally, ACT submitted that the RFP does not require proposals to define evidence-based assumptions to be tested by the evaluation, but rather clearly states (at section 6.4.1) that this was to be done during the inception phase of the project. [71] ACT argues that the evaluators therefore clearly exceeded the scope of the RFP in concluding that ACT’s proposal failed to meet this requirement.

[87] In the GIR, DFATD submitted that assessing how the proposal demonstrates the bidder’s understanding and pragmatic application in reconstructing TOC and in defining evidence-based assumptions is explicitly provided for in the RFP. DFATD argues that, while actually defining evidence-based assumptions was to be done during the inception phase, rated criterion R.2.1 C required bidders to demonstrate both understanding and pragmatic application in defining what evidence-based assumptions would be tested.

[88] DFATD submitted that ACT’s proposal did not demonstrate such understanding or pragmatic application and that all three individual evaluators’ notes reflect this to some degree. [72] The proposal was therefore evaluated as being “at the level of principles only”, which is consistent with the “theoretical only” language in the “not demonstrated” scoring category, worth 0 points under the RFP.

- Analysis

[89] DFATD is correct that the RFP explicitly provides for assessing how the proposal demonstrates the bidder’s understanding and pragmatic application in reconstructing the TOC and in defining evidence-based assumptions. [73] Although section 6 of ACT’s proposal refers, in several places, to exploring, testing and verifying assumptions underlying the TOC, it does not clearly state how ACT planned to define those assumptions, though it does characterize “clear conceptualization” of assumptions as a “best practice”. [74]

[90] In the Tribunal’s view, the highly technical nature of rated criterion R.2.1 C underscores the principle of deference to evaluators’ expertise, and of interfering with their conclusions only where these are found to be unreasonable. Here, the Tribunal has no reason to doubt the expertise and experience which informed the evaluators’ assessment, and finds no indication that they failed to apply themselves or otherwise conducted the evaluation in an unreasonable manner. To the extent that the Tribunal is in a position to assess the completeness of ACT’s proposal under this criterion, the Tribunal agrees that the references in ACT’s proposal to reconstructing the TOC are largely theoretical or otherwise limited in describing how this will be done.

[91] As submitted by DFATD, the RFP provides for the award of no points under rated criterion R.2.1 C if evaluators find that the “explanation is theoretical only.” Based on the above, the Tribunal finds the evaluators’ conclusions in this regard to be reasonable. As such, the Tribunal finds that this ground of complaint is not valid.

Rated criterion R.2.2

[92] Rated criterion R.2.2, titled “Organization of Bidder’s Team”, requires bidders to demonstrate how the organization of their evaluation team aligns with the approach and methodology they propose, through the following three items:

1. An organigram/organization chart demonstrating reporting relationships and explanation of how the proposed team structure will ensure compliance with the evaluation outlined in the ToR;

2. An explanation of the proposed composition of the evaluation team including information regarding each team member and their roles and responsibilities; and

3. A detailed work plan for fulfillment of the evaluation outlined in the ToR, including the level of effort for each team member and a staff schedule specifying each team member’s tasks and the time allocated for them. [75]

[93] Evaluation under rated criterion R.2.2 is divided into three sub-categories, labelled A through C, each worth up to 25 points. Although each is worth up to 25 points, the RFP indicates that only three possible scores could be granted for each: “not demonstrated”, earning 0 points; “well demonstrated”, earning 18 points; and “fully demonstrated”, earning 25 points.

[94] ACT argues that the evaluators wrongly awarded its proposal 0 out of 25 possible points for each of rated criteria R.2.2 A, R.2.2 B and R.2.2 C.

Rated criterion R.2.2 A

[95] Rated criterion R.2.2 A assesses the organizational structure of the proposed evaluation team and requires bidders to demonstrate that “lines of command, communication, coordination and accountability among team members are in line with the proposed approach and methodology.” ACT submitted that its score of 0 points under this criterion was incorrect because its proposal demonstrated all the required information.

[96] Specifically, ACT contests the evaluators’ conclusion that the organizational structure and lines of communication in its proposal were unclear and missing certain positions, namely the qualitative data expert. [76] ACT submitted that its organigram complies with the RFP by outlining the organizational structure and distribution of tasks among team members, and including a chart showing the working relationships between team members. [77]

[97] ACT submitted that the role of the qualitative data expert which the evaluators noted as missing is explained in its proposal, [78] but acknowledges that this position was mislabelled as a “quantitative expert” in the organigram due to a typographical error. [79] ACT submitted that this error should not result in the loss of all possible points under rated criterion R.2.2.

[98] In response, DFATD submitted that rated criterion R.2.2 requires bidders to provide the proposed composition of the entire team, including the core evaluation team, which pursuant to section 7.1 of the ToR must include technical expertise in both quantitative and qualitative data analysis. [80] DFATD submitted that, as a result, even if the organigram had correctly listed the qualitative data expert, the chart would then have been missing the quantitative expert.

[99] ACT also contests the evaluators’ conclusion that under its proposal, the core evaluation team would not lead case studies, citing parts of its proposal indicating that: the team leader oversees all deliverables and works closely with the qualitative data expert to develop case studies, [81] and personally leads the Indonesia case study and coordinates all data analysis; [82] the senior evaluator (and CEO of ACT) leads the case studies and is directly responsible for finalizing the seven desk‑based project review reports and the four case-study reports; and the local coordinators are senior experts. [83]

[100] In the GIR, DFATD concurred with this argument and withdrew its comments in the debriefing documents that the core evaluation team does not lead case studies. However, it maintained that the score of 0 points is appropriate on the basis of the evaluators’ other observations under this criterion. [84]

[101] ACT further contests the evaluators’ conclusions questioning the feasibility of the local coordinators preparing all the case studies, arguing that this is “more of a value judgment or seems to interpret what the firm may or may not be able to accomplish, exceeding the requirements of the RFP grid.” [85] ACT submitted that the regional coordinators’ CVs demonstrate that they are experienced senior evaluators and that the organigram shows they would work with local consultants and coordinate data collection in each country. [86]

[102] Finally, ACT contests the evaluators’ questioning whether the phone-based field surveys proposed in ACT’s bid would be feasible, and their questioning the reporting relationships among team members responsible for these surveys. ACT submitted that it does not understand what aspects of its bid were put into question in this regard, but that the mobile phone-based field surveys it proposed “would allow real-time data collection in the field in order to reach more beneficiaries” and that the organigram clearly outlined the reporting relationships for conducting field surveys. [87]

[103] In the GIR, DFATD submitted that the evaluators understood the inclusion of the “survey expert” in ACT’s proposal to constitute “additional non-specialized personnel”, as permitted in section 7.5 of the ToR, [88] as opposed to a member of the core evaluation team. However, they found the overall proposal to be unclear as to who would be responsible for what parts of the multiple surveys contemplated in the proposal. [89] DFATD submitted that the evaluators therefore reasonably found ACT’s proposal to be incomplete, with little explanation, and therefore that its organizational structure was “not demonstrated” (earning 0 points) under rated criterion R.2.2 A.

[104] In its comments on the GIR, ACT submitted that the considerable difference between the evaluators’ individual scores (with higher scores from evaluators B and K and a lower score from evaluator P), resulted in a low consensus score contrary to the majority opinion, and that this result was unreasonable. [90]

- Analysis

[105] DFATD is correct that rated criterion R.2.2 and section 7.1 of the ToR required bidders to provide the proposed composition of the entire team, including the core evaluation team, comprising both the quantitative expert and the qualitative data expert. DFATD also appears to be correct that the organigram in ACT’s proposal would have been missing the quantitative expert if the mislabelled position had been correctly identified as the “qualitative data expert”. ACT submitted that the team member labelled “Survey Expert” in the organigram is intended to represent the qualitative data expert; however, this is unclear from the chart describing the role of the “Qualitative data analysis expert”, which does not mention surveys. [91]

[106] ACT’s proposal therefore appears to use the terms “survey expert” and “qualitative data analysis expert” interchangeably, without ever explicitly linking the two. This contrasts with both the chart and the CV of the quantitative data expert included in ACT’s proposal, which explicitly link the proposed positions of “statistician” and “quantitative data analysis expert”. [92]

[107] Based on these observations, the Tribunal finds that it was reasonably open to the evaluators to conclude that ACT’s proposal was unclear as to the composition of its proposed team and who would be responsible for what parts of the multiple surveys contemplated therein, such that it provided an incomplete or limited explanation of the lines of command, communication, coordination and accountability among team members.

[108] Based on the above, the Tribunal finds that DFATD’s evaluation of ACT’s bid under this criterion was reasonable. As such, the Tribunal finds that this ground of complaint is not valid.

Rated criterion R.2.2 B

[109] Rated criterion R.2.2 B assesses the allocation of resources in the proposed evaluation team, and requires bidders to demonstrate that the tasks and responsibilities allocated between and among resources of the proposed team are in line with the proposed approach and methodology. [93] The debriefing documents indicate that ACT received 0 out of a possible 25 points for rated criterion R.2.2 B. ACT submitted that its score of 0 points under this criterion was incorrect because its proposal demonstrated all the required information.

[110] Specifically, ACT contests the evaluators’ conclusion that no team members were tasked to analyse the evaluated projects’ TOC or perform the participatory validation/update of their intervention logics, which the evaluators concluded demonstrated that ACT did not understand the methodological requirements of the RFP. [94] ACT submitted that its proposal provides that two senior experts, the senior evaluator and the gender expert, would “review the intervention logic of the 7 projects” and that “this, of course, implies working on the projects’ theories of change and the participatory/validation update of their intervention logics.” [95]

[111] ACT further contests the evaluators’ conclusions that its proposal did not assign tasks “to define assumptions and thereafter their verification”, which would be at the “core” of both realist and contribution analysis. [96] ACT submitted that its proposed approach explicitly contemplated evaluators testing “the assumed causal chain of results, checking each link and verifying assumptions, to prove (or to reconstruct) the theory,” as well as verifying “the assumptions set out in the desk-based project reviews while conducting the contribution analysis.” [97]

[112] In the GIR, DFATD submitted that it would have been inappropriate for evaluators to assume that the statement that the team leader and gender expert would “review the intervention logic of the 7 projects” implied fulfillment of the multiple tasks identified in the RFP for reviewing the projects’ theories of change, specifically those at sections 5.1, 5.2, 6.3(5), 6.3(7), and 6.3(10) of the ToR. [98]

[113] DFATD points out that the reference to reviewing the intervention logic appears in the section of ACT’s bid which is explicitly labelled as responding to rated criterion R.2.1, and does not appear in the section labelled as responding to R.2.2. [99] DFATD submitted that nothing in the section of ACT’s proposal labelled as responding to R.2.2 clearly identifies team members responsible for: updating/reconstructing the intervention logics of each project selected for a country case study; reviewing evaluation questions 5 and 6 to identify assumptions to be assessed; or designing field-based case studies, including assumptions to be verified, all of which are required under section 6.3 of the ToR.

[114] In its comments on the GIR, ACT reiterated that its proposal indicated numerous times that it would be working on the projects’ theories of change, and points specifically to pages 24 and 39 of its proposal in this regard. ACT further submitted that, again, despite higher individual scores from evaluators B and K, the consensus score was low, reflecting evaluator P’s individual score. [100]

[115] In its reply to the comments on the GIR, DFATD reiterated its arguments from the GIR that the technical proposal provided limited explanations and omitted key elements required in the ToR. [101]

- Analysis

[116] As noted above, DFATD submitted that nothing in the section of ACT’s proposal labelled as responding to R.2.2 clearly identifies team members responsible for: updating/reconstructing the intervention logics of each project selected for a country case study; reviewing evaluation questions 5 and 6 to identify assumptions to be assessed; or designing field-based case studies, including assumptions to be verified, all of which are required under section 6.3 of the ToR. The Tribunal notes that “review the intervention logic of the 7 projects” is listed as a responsibility/task for both the senior evaluator and the gender expert, and that item 1.5 of the task allocation refers to assessing intervention logic. [102] These references appear in the portion of ACT’s proposal labelled as responding to rated criterion R.2.2 B, in addition to the mention of reviewing the intervention logic which DFATD refers to in the section responding to rated criterion R.2.1. However, even if this information appeared only in the latter section, following the analysis in Star Group and Deloitte discussed above, the evaluators would not be precluded from considering it in their assessment under rated criterion R.2.2.

[117] Nevertheless, and similar to the analysis for other rated criteria, these references in ACT’s proposal are in the Tribunal’s view somewhat limited. Regarding ACT’s argument that the evaluators’ questioning of the feasibility of the local coordinators preparing all the case studies represents a “value judgment” about “what the firm may or may not be able to accomplish,” it is unclear how that questioning is inconsistent with the RFP, given that the evaluators raised it precisely on the basis of the requirements of the proposed approach and methodology. Further, in the Tribunal’s view, this was not a “value judgment” of what ACT “may or may not be able to accomplish,” but rather an expert assessment of what ACT’s proposal demonstrated it would be able to accomplish, a judgment entirely within the evaluators’ purview.

[118] While it is true that two members of the core team have “review intervention logic of the 7 projects” in their tasks, the Tribunal does not consider it unreasonable for the evaluators to have concluded that ACT’s proposal provided an incomplete or limited explanation in this regard and to award it 0 points under this criterion on that basis.

[119] As such, the Tribunal finds that this ground of complaint is not valid.

Rated criterion R.2.2 C

[120] Rated criterion R.2.2 C evaluates “resource utilization and planning” for the proposed evaluation team, and requires bidders to demonstrate that these are consistent with the approach and methodology proposed under rated criterion R.2.1 as well as with the timelines outlined in the ToR. [103] ACT submitted that its score of 0 points under this criterion was unreasonable because its proposal demonstrated compliance with the ToR.

[121] Specifically, ACT contests the evaluators’ conclusions that:

· its proposed resource utilization and planning was not consistent with its proposed approach and methodology;

· its proposal contemplates 13 to 14 months to complete the evaluation while the RFP requires this to be done in 12 months; and

· the time and level of effort required during the evaluation project’s “inception phase” under its proposal was inconsistent with the requirements of a theory-based evaluation using contribution analysis, and therefore with the ToR and the required approach. [104]

[122] Regarding planning, ACT submitted that this is always subject to discussions with the client and cannot be provided for in advance.

[123] Regarding the timeline for the overall evaluation, ACT submitted that its proposal demonstrated its intention to submit the evaluation report within 12 months after the beginning of the project, though it acknowledges providing for an additional month to conclude the evaluation brief and disseminate the results. [105] In the GIR, DFATD submitted that this clearly exceeds the period of 12 months set out in the RFP, both at section 12 of the ToR as well as in the “Summary Description” on page 3 which states “The services are expected to start in July 2020 for a period of 12 months.” [106]

[124] Regarding the contemplated timeline and level of effort for the inception phase, ACT submitted that its proposal demonstrates an inception phase duration exactly as requested in the RFP. Regarding proposed level of effort, ACT submitted that the RFP does not specify the level of effort required for the inception phase and that its proposal to complete the inception phase in 58 days (out of a total of 227 days, or 25 percent of the project’s total level of effort) is appropriate. [107]

[125] In the GIR, DFATD submitted that, when the time allotted for each deliverable is added together, the time schedule at section 12 of the ToR indicates a total review period of 49 weeks, though this appears to assume no time for comments and approval of preceding deliverables, which trigger the start of the period allotted for deliverables 2, 3, 4, 6, 8, 10 and 11. [108] DFATD argues that the evaluators reasonably interpreted the timeline in ACT’s proposal, which sets out an inception phase spanning October and November 2020, as intending an eight-week inception period, in contrast to the six weeks set out in the ToR and noted by the evaluators. [109]

[126] Regarding level of effort, DFATD argues that, as under rated criterion R.2.1, the evaluators reasonably found the proposal to provide limited or no explanation of how ACT intended to reconstruct the TOC and define assumptions, as necessary in the inception phase under a contribution analysis approach. [110] DFATD further argues that the evaluators reasonably questioned whether ACT could feasibly complete these meticulous and time-intensive tasks within the time allotted in the proposal. [111]

[127] ACT further notes that the timeline set out at section 12 of the RFP is titled “Indicative Timeline Schedule and Deliverables” and submits that “it seems abusive to remove all points for something which is only indicative.” [112]

- Analysis

[128] In the Tribunal’s view, it is not clear how ACT’s proposed timeline for the inception phase was inconsistent with the requirements of the RFP. Although ACT’s proposal contemplates an inception phase taking place over the course of October and November, it is not obvious that this necessarily implies an inception phase lasting eight weeks, as the first part of the following phase of the evaluation is also contemplated to take place in November. That said, ACT bore the onus of demonstrating that its proposal fulfilled the requirements of the RFP. That its proposal is not obviously inconsistent with the requirement does not entitle ACT to the award of all or even some points, especially under an RFP requirement such as rated criterion R.2.2 C, which explicitly contemplates the award of 0 points for an explanation that is merely limited or incomplete.

[129] Furthermore, DFATD is correct that the RFP sets out a 12-month timeline for the overall evaluation, both at section 12 of the ToR as well as in the “Summary Description” on page 3, which states “The services are expected to start in July 2020 for a period of 12 months.” [113]

[130] With regard to ACT’s argument that the timeline at section 12 of the RFP is merely “indicative”, the Tribunal is again guided by the principle of deference to evaluators’ interpretation of the solicitation documents. The Tribunal does not accept ACT’s argument that its proposal complied with the stated requirements of the RFP because those requirements were actually subject to negotiation between the parties, absent some evidence of an intention to so negotiate reflected in the solicitation documents. The mere use of the word “indicative” is not sufficient to displace the role of evaluators in assessing compliance with the requirements set out in the RFP, especially where an evaluation period of 12 months was explicitly contemplated elsewhere in the RFP.

[131] Based on the above, the Tribunal finds that the evaluators reasonably concluded that ACT’s proposal provided an incomplete or limited explanation of how its work plan and level of effort were in line with its proposed approach and methodology, as well as with the evaluation timelines outlined in the ToR, and awarded it 0 points under this criterion.

[132] As such, the Tribunal finds that this ground of complaint is not valid.

Rated criterion R.2.3

[133] Rated criterion R.2.3 provides for the evaluation of the bidder’s “Evaluation Quality Assurance System” (EQAS). It requires proposals to describe the EQAS in terms of:

· the components that will be covered;

· the points in the evaluation process when these components will be covered;

· the proposed steps to ensure the listed components are covered at each point;

· the roles and responsibilities of quality assurance (QA) personnel; and

· evidence that the EQAS has been used in previous evaluations. [114]

[134] Evaluation under rated criterion R.2.3 is divided into two sub-categories, labelled A and B, worth a combined total of up to 25 points.

[135] ACT argues that the evaluators wrongly awarded its proposal 1 out of 25 possible points in total for rated criteria R.2.3 A and R.2.3 B because the evaluators ignored information contained in its proposal, incorrectly assessed certain criteria and made personal evaluations. [115]

Rated criterion R.2.3 A

[136] Rated criterion R.2.3 A evaluates whether the proposed EQAS fully ensures quality in conducting the services described in the ToR, and is worth up to 20 points. Although worth up to 20 points, only three possible scores could be granted for this criterion: “not demonstrated”, earning 0 points; “well demonstrated”, earning 15 points; and “fully demonstrated”, earning 20 points. The RFP described “not demonstrated” as meaning:

No details provided or incomplete description of elements or quality assurance/control is limited to the Evaluation Team Leader’s oversight or on normally expected work on the validity and reliability of data/information sources. [116]

[137] ACT contests the evaluators’ comments concluding that the requirements of rated criterion R.2.3 A were “not demonstrated”, which were as follows:

The bidder did not demonstrate that its EQAS fully ensures quality in conducting the Services described in the Terms of Reference.

The quality control mechanisms outlined by the bidder depend on normally expected work/Team leader oversight, with an external expert reviewing final documents only at the very end. This is not enough for effective quality control of services. The QA expert should be involved throughout the evaluation and at all stages. [117]

[138] ACT submitted that its proposal provided for three senior experts (the team leader, senior expert and peer reviewer) to be involved at all stages for all deliverables, and an external peer review which ACT submitted is “the most elaborate an evaluation firm usually provides.” [118]

[139] In the GIR, DFATD submitted that ACT’s proposal planned for QA oversight by the team leader and an external expert review of the final documents only at the end of the evaluation process. DFATD submitted that rated criterion R.2.3 A requires QA mechanisms to be applied throughout the evaluation process, [119] and that the evaluators determined that the only QA mechanism applied throughout the evaluation process was oversight by the team leader, since the external expert would review only final deliverables. DFATD argues that the evaluators therefore reasonably assessed that QA mechanisms were essentially limited to the evaluation team leader’s oversight, consistent with the language informing a score of “not demonstrated” (worth 0 points) in the RFP.

[140] In its comments on the GIR, ACT submitted that the EQAS portion of its proposal clearly indicates that three experts would be involved in the QA process throughout the evaluation and reiterated that peer review would be conducted on all products. ACT further submitted that the evaluators’ assessment that its proposed QA resources were insufficient was based on undisclosed criteria and therefore unreasonable, because the RFP did not request a quantitative figure for resources but simply indicated the need for specific resources. [120]

[141] In its reply to the comments on the GIR, DFATD submitted that, of the three experts ACT refers to, the only expert directly linked to QA is the peer review expert who was to review the quality of deliverables in the reporting phase, not throughout the evaluation process. DFATD agrees that rated criterion R.2.3 did not require a “quantitative figure of resources”, but submitted that the quantitative sufficiency of resources was not a factor in the evaluators’ assessment of that criterion. Rather, the evaluators determined that ACT’s proposal did not demonstrate that its EQAS fully ensured quality in conducting the services described in the ToR. [121]

- Analysis

[142] The Tribunal does not agree with DFATD that ACT’s proposal only contemplated the peer review expert reviewing the quality of deliverables in the reporting phase, as the task allocation chart appears to contemplate this team member being engaged during the inception phase (at item 1.10) as well as during data collection and analysis (at items 2.2, 2.3 and 2.8). [122] However, in the Tribunal’s view this does not indicate that the evaluation was unreasonable, as the evaluators themselves did not justify their assessment on that basis.

[143] Regarding the points at which QA mechanisms would be applied throughout the evaluation process, the evaluators’ conclusions appear to turn largely on the peer reviewer only reviewing deliverables prior to submitting them to DFATD, as opposed to an iterative process where the peer reviewer would provide input throughout the development of each deliverable. ACT appears to have interpreted the criterion as being fulfilled by the former level of QA review, while DFATD’s interpretation was that the latter more extensive level of QA would demonstrate the fulfillment of the requirement. The relevant question is therefore whether DFATD’s interpretation in this regard, and therefore its evaluation under the criterion on the basis of that interpretation, was reasonable.

[144] The parties did not submit evidence as to what would constitute “normally expected work” with regard to QA of data and information sources, beyond ACT’s statement that an external peer reviewer is the “most elaborate an evaluation firm usually provides.” The Tribunal notes that two of the three individual evaluators considered ACT’s proposal deficient in this area, either with regard to the level of detail provided, the fact that the peer reviewer would review only final deliverables, or both. [123]

[145] Regarding ACT’s argument that the evaluators applied undisclosed criteria, the Tribunal fails to see how this is the case. Evaluators are naturally called upon to use their judgment to gauge how well a bid demonstrates preparedness (on a point-rated criterion), without necessarily needing a numerical reference point to do so. The Tribunal’s view in this regard is informed by the high standard imposed for this criterion under the RFP, i.e. that “normally expected work” would earn a score of “not demonstrated”.

[146] That the parties reached different interpretations of the standard imposed under this criterion does not in the Tribunal’s view reflect an ambiguity in the terms of the RFP, because the parties’ positions are not mutually exclusive. Unlike in Deloitte, where the evaluators applied a distinction between the different types of analysis denoted by the abbreviations MCDA and MCA, DFATD did not ascribe to QA, EQAS or other abbreviations used in the requirement a meaning different from the one ACT understood them to have. Rather, DFATD assessed ACT’s proposed EQAS as insufficient in the extent to which it fulfilled the requirement, specifically finding that it did not completely or even adequately describe all elements of the requirement.

[147] This is not a case of evaluators failing to consider the substance of a proposal which simply failed to reproduce the semantic language used in the RFP, as the Tribunal found in Deloitte. DFATD’s evaluators considered precisely the substance of ACT’s proposal, which provided for a QA review of each deliverable prior to submitting it to DFATD, and judged that this did not fulfill the highest possible, or even an adequate, standard of applying QA mechanisms throughout the development of each deliverable.

[148] Essentially, ACT is arguing that the evaluators should have applied a lower standard than they did in assessing its EQAS. However, it has not provided an evidentiary basis for the Tribunal to find unreasonable DFATD’s assessment that an adequate description of the proposed EQAS would involve QA mechanisms being applied throughout the development of each deliverable. As the Tribunal stated in Samson & Associates, a government entity’s determination will be considered reasonable if it is supported by a tenable explanation, regardless of whether the Tribunal itself finds that explanation compelling. [124] Based on the above, the Tribunal finds DFATD’s explanation of its evaluation under rated criterion R.2.3 A to be tenable, and therefore finds that the evaluation itself was reasonable.

[149] As such, the Tribunal finds that this ground of complaint is not valid.

Rated criterion R.2.3 B

[150] Rated criterion R.2.3 B evaluates the demonstration that the EQAS is in place and has been used in previous evaluations. Although the criterion is worth up to 5 points, the RFP indicates that only three possible scores could be granted: 0 points if the EQAS is not in place and has not been used in previous evaluations; 1 point if the EQAS is in place but has not been used in previous evaluations; and 5 points if the EQAS is in place and evidence is provided that it has been used in previous evaluations. [125]

[151] ACT contests the evaluators’ award of 1 point under this criterion, which in the debriefing documents is justified on the basis that “[t]he bidder has a quality assurance policy but no evidence that it has been used in previous evaluations.” [126] ACT submitted that its proposal demonstrated that it has developed its own QA policy since 2015, and provided seventeen (the confidential record indicates eighteen) examples of previous evaluations it has conducted since that time. [127]

[152] In the GIR, DFATD submitted that neither ACT’s proposal nor its complaint provides evidence that its EQAS was used in previous evaluations. DFATD reiterated that points are not lost or deducted through the evaluation, but rather awarded for positive demonstration of meeting a requirement, and that the list of previous evaluations does not indicate that the EQAS was used. DFATD submitted that the evaluators therefore reasonably assessed ACT’s proposal as demonstrating that it had an EQAS policy in place, but not that it had been used in previous evaluations. [128]

- Analysis

[153] The Tribunal finds DFATD’s evaluation under this criterion reasonable on the basis that ACT’s proposal demonstrated fulfillment of only one of the two requirements. While it demonstrated that ACT had an EQAS in place, albeit one that the evaluators considered inadequately described under rated criterion R.2.3 A, the Tribunal does not see where ACT’s proposal demonstrates the use of this system in previous evaluations.

[154] Based on the above, the Tribunal finds that this ground of complaint is not valid.

Consistency of evaluation

[155] In the GIR, DFATD describes how the technical evaluation of ACT’s bid was conducted. First, each of the three evaluators individually evaluated the five proposals; the evaluators then met to discuss their evaluations and reach a consensus determination on the score to be awarded under each point-rated criterion for each of the proposals. Following the consensus meetings, the evaluators determined that only three proposals met the mandatory technical criteria, and only one of those bidders, the winning bidder, received the passing technical evaluation score of 410 points or more.

[156] DFATD submitted that ACT’s bid met the mandatory technical evaluation criteria, but did not meet the minimum technical score of 410 out of a possible 585 points. ACT’s proposal received a consensus technical score of 375 points: 313 points for rated criterion R1 (personnel) and 62 points for rated criterion R2 (methodology). [129]
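For clarity, the arithmetic underlying these figures can be restated as follows (a minimal check using only numbers in the record; treating the 410-point pass mark as 70 percent of the 585 available points, rounded up, is an inference):

$$313 + 62 = 375 < 410 = \lceil 0.70 \times 585 \rceil = \lceil 409.5 \rceil$$

ACT’s consensus score therefore fell 35 points short of the pass mark.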

[157] In its comments on the GIR, ACT submitted that the evaluation was flawed from the outset because one of the evaluators (evaluator P) used an evaluation grid [130] different from that provided in the RFP and used by the other two evaluators (evaluators B [131] and K [132] ). ACT submitted that evaluator P evaluated the personnel component of ACT’s proposal (rated criterion R1) out of a possible 380 points instead of the 360 points set out in the RFP and used by the other evaluators, [133] thus increasing the denominator used to calculate the percentage score from the intended 585 to 605. [134] ACT submitted that this reduced the percentage score its proposal received for rated criterion R1, putting it at an arbitrary disadvantage.

[158] ACT also submitted that evaluator P appears to have scored rated criterion R.1.2A out of a different total number of points than evaluators B and K, and that evaluator P’s score on this criterion included no explanation in the section provided for “comments and substantiating information.” [135] ACT makes the same argument regarding evaluator P’s individual scoring grid for rated criterion R.2.1 B. However, as pointed out by DFATD, this portion of the grid appears to be consistent with both the other individual evaluation grids and the RFP. [136]

[159] In its reply to the comments on the GIR, DFATD submitted that the 380-point total in evaluator P’s individual scoring grid, as opposed to 360 points, as well as the total points listed for rated criterion R.2.1 A, were typographical errors. DFATD submitted that these errors did not prejudice or disadvantage ACT’s proposal because:

  • the typographical errors were in respect of the number of total available points, and because points were awarded rather than deducted, this could not have affected the number of points awarded; and

  • the typographical errors occurred in an individual evaluator’s grid, and decisions were made following reconciliation of the individual evaluations through consensus. [137]

[160] ACT further submitted that evaluators B and K awarded ACT passing individual scores, but that the consensus score more closely reflects evaluator P’s lower individual score (which ACT reiterated was based on a different evaluation grid). [138] ACT highlights the scoring for rated criterion R.2.1 B, under which ACT received a higher individual score from all three evaluators [139] than its consensus score (the debriefing documents submitted as public attachments to ACT’s complaint indicate a consensus score of 0 points). [140]

[161] ACT submitted that these facts contradict DFATD’s position that the evaluators, especially evaluator P, conducted the evaluation in a procedurally fair manner and that the decision was reasonable.

[162] In its reply to the comments on the GIR, DFATD submitted that, in the context of a consensus-based evaluation, individual scores are not actual points awarded but instead form part of evaluators’ discussion when deciding how many points to award for specific criteria.

[163] DFATD cites CGI, where the Tribunal found that individual score sheets were in fact the starting point for heated discussions and extensive debates, and that it was therefore reasonable that the scores that resulted would not always reflect the averages or medians of individual scores. [141] DFATD cites a similar conclusion by the Tribunal in Deloitte, where it found that discrepancies between individual scores and consensus scores were not a sufficient basis to draw an adverse inference of unfairness or unreasonableness. [142] DFATD also cites Solutions Moerae Inc. O/A MSi v. Industry Canada, where the Tribunal reiterated that it is reasonable (and even expected) for evaluators to reach different individual scores which are then reconciled by collectively developing consensus scores. [143]

[164] DFATD refers to the evaluation guidelines, noted above, which were provided to and signed by the evaluators and which describe the consensus process as follows:

Following completion of the individual evaluations, the evaluators will notify the SGC Contracting Officer.

The evaluation team will proceed with the reconciliation of the evaluation criteria for each proposal. The evaluation team must agree on a final rating for each rated criterion through consensus. Consensus is to be reached through discussion of the weaknesses and strengths of each bid and a supportable rating allocated by the evaluation team, as applicable. [144]

[165] DFATD submitted that the individual evaluations do not stand separate from the consensus findings and scoring, nor are they signed or changed after completion of the consensus grid, which is signed. DFATD further submitted that ACT’s consensus score was higher than the average of individual scores for rated criteria R.1.2 A, R.2.1 A and R.2.1 E. Finally, DFATD submitted that the Tribunal should assess the reasonableness of the evaluation by considering how the consensus evaluation applied the requirements in the RFP to the information contained in ACT’s proposal.

Analysis

[166] ACT is correct that evaluator P’s scoring grid evaluated rated criterion R.1.2A out of a different total number of points than did the grids of evaluators B and K. DFATD submitted that this was a typographical error, which seems plausible given that rated criterion R.1.2A involved scoring two different assignments (previous projects), each out of 20 points. Evaluator K’s individual scoring grid also indicates a correction to this same initial typographical error. [145]

[167] ACT is also correct that evaluator P’s scoring grid rated the personnel component of ACT’s proposal (rated criterion R1) out of a possible 380 points instead of the 360 points set out in the RFP, and that this increased the maximum total score (and therefore the denominator for determining a passing 70 percent score) from 585 to 605 points. However, DFATD is correct that, under the evaluation, points were to be awarded rather than deducted. Although the increased total maximum points (from 585 to 605) resulting from this error did increase the required mark for a passing 70 percent score on evaluator P’s scoring grid (from 410 to 423.5), the total score awarded in evaluator P’s individual scoring grid indicates that this would not have affected whether ACT’s proposal received a passing grade, even if it had received the maximum available points under rated criterion R.1.2A. [146]
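To make the effect of the grid error concrete, the two pass thresholds can be restated as follows (all figures are drawn from the record; the rounding of 409.5 up to the stated 410-point pass mark is an inference):

$$0.70 \times 585 = 409.5 \rightarrow 410 \quad \text{(pass mark under the RFP grid)}$$

$$0.70 \times 605 = 423.5 \quad \text{(pass mark implied by evaluator P's grid)}$$

$$605 - 585 = 20 = 380 - 360 \quad \text{(the discrepancy traces entirely to the R1 total)}$$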

[168] The Tribunal also agrees with DFATD that the individual scoring grids were merely a starting point for discussions leading to the final score under the consensus evaluation, which was to be determinative. DFATD is also correct that ACT’s consensus score was higher than the average of individual scores for rated criteria R.1.2 A, R.2.1 A and R.2.1 E. [147] This fact no more indicates to the Tribunal that the evaluation was unreasonable in ACT’s favour than the consensus scores that were lower than the average individual scores indicate that the evaluation was unreasonable in a way prejudicial to ACT’s proposal.

[169] ACT’s frustration that the consensus scoring converged toward the lower individual score is understandable. However, the Tribunal fails to see what in ACT’s submissions might suggest that the consensus score, which determined the actual outcome of the evaluation, was reached contrary to the terms of the solicitation. Evaluator P’s individual scoring grid lacks an explanation with regard to rated criterion R.1.2 A and includes what the Tribunal accepts was a typographical error regarding the maximum possible point score. However, as submitted by DFATD, the Tribunal has consistently found differences between individual and consensus evaluation scores to be reasonable, and indeed expected, based on the reasoning in the decisions outlined above.

[170] Further, the Tribunal notes that the very detailed evaluation guidelines signed by the evaluators make clear that the consensus evaluation is determinative. The question before the Tribunal is whether the consensus evaluation was reasonable based on the reasons provided in support thereof and the terms of the solicitation. In this regard, the Tribunal finds the evaluators’ decisions to be supported by a tenable explanation that is both internally coherent and consistent with the terms of the solicitation.

Conclusion

[171] For the foregoing reasons, the Tribunal is not persuaded that the evaluators failed to apply themselves in evaluating ACT’s proposal, ignored vital information therein, wrongly interpreted the scope of the requirements under the RFP, based their evaluation on undisclosed criteria or otherwise failed to conduct the evaluation in a procedurally fair way.

[172] Based on the above, the Tribunal finds that the evaluation of ACT’s proposal, when considered under each of the evaluation criteria discussed above and in terms of its overall consistency and conclusions, was reasonable.

[173] For the reasons set out above, the Tribunal finds that this complaint is not valid.

COSTS

[174] The Tribunal has broad discretion to award costs under section 30.16 of the CITT Act. The Tribunal follows the “judicial model” under which, generally, the winning party is entitled to its costs. As such, the Tribunal will award costs to DFATD.

[175] In determining the amount of the cost award for this complaint, the Tribunal considered its Procurement Costs Guideline (Guideline), which contemplates classification of the level of complexity of cases on the basis of three criteria: the complexity of the procurement, the complexity of the complaint and the complexity of the complaint proceedings.

[176] In light of the criteria set out in the Guideline, the Tribunal finds the following regarding this complaint:

  • The procurement of services involved a defined service project or study (level 2);

  • The issue was an evaluation based on a significant evaluation grid, involving many elements of allegedly ambiguous or overly restrictive specifications (level 3); and

  • The process required the use of the 135-day time frame (level 3), but no public hearing was held (level 1 or 2).

[177] Accordingly, the Tribunal’s preliminary indication of the level of complexity for this complaint is Level 2, which has an associated flat-rate cost award amount of $2,750.

DECISION

[178] Pursuant to subsection 30.14(2) of the CITT Act, the Tribunal determines that the complaint is not valid.

[179] Pursuant to section 30.16 of the CITT Act, the Tribunal awards DFATD its reasonable costs incurred in preparing and proceeding with this complaint, which costs are to be paid by ACT.

[180] In accordance with the Guideline, the Tribunal’s preliminary indication of the level of complexity for this complaint is Level 2, and its preliminary indication of the amount of the cost award is $2,750.

[181] If any party disagrees with the preliminary level of complexity or indication of the amount of the cost award, it may make submissions to the Tribunal, as contemplated in Article 4.2 of the Guideline. The Tribunal reserves jurisdiction to establish the final amount of the cost award.

Frédéric Seppey

Frédéric Seppey
Presiding Member


ANNEX 1: RELEVANT RFP PROVISIONS

Terms of Reference clause 3

Clause 3 of the ToR provides as follows:

3. Evaluation Questions

The Consultant will address the following questions:

1. To what extent has each of the seven projects achieved the expected immediate (see Logic Models, Annex 10) and mandatory intermediate (as specified in the call for proposals, see Annex 9) outcomes?

2. What results have been obtained by each project?

(Note: This question does not pertain to the extent to which the seven projects achieved their expected results as per the logic model, but what results were obtained. As such, results may correspond with expected results as per the logic model but not necessarily.)

3. Were results achieved relevant to the needs and priorities of the beneficiaries, especially women and marginalized groups?

4. Were the innovative ICT tools and approaches designed and implemented in a way that they will continue to be used beyond the life of the project?

Note: For the following two questions (five and six), the Consultant must identify a limited number of key factors (assumptions) to be assessed through a review and reconstruction of the theory of change for each project based on programming documents and discussions with key stakeholders during the inception phase. Contributing and/or hindering factors have to be supported with robust evidence. Finally, among the factors, particular attention must be paid to how the ICT tools and approaches may have hindered or enhanced the participation and inclusion of women and marginalized groups.

5. What key factors hindered the achievement of expected results?

6. What key factors contributed to the achievement of the obtained results (as measured by question two)?

Rated criterion R.2.2

The chapeau of rated criterion R.2.2 of the RFP’s evaluation criteria provides as follows:

Organization of Bidder’s Team (maximum 75 points)

A maximum of five (5) pages will be considered for this requirement

Points will be awarded for each of the following elements according to their alignment with the proposed approach and methodology:

The Bidder should provide:

  1. An organigram/organization chart illustrating the reporting relationships, together with a description of how such organization of the team structure will ensure the fulfilment of the Evaluation outlined in the ToR

  2. The proposed composition of the entire Bidder’s Team, including the Core Evaluation Team, Quality Assurance Personnel, Local Coordinator-Specialists and Additional Specialized and Non Specialized Personnel. The following information should be provided for each member of the Consultant’s Team:

  • The name of the proposed resource;

  • Positions (role/function);

  • Responsibilities and work tasks (including supervisory) which would be assigned to each individual, including location of field work for the Local Coordinator-Specialists.

  3. A detailed work plan (such as a Gantt chart) for fulfilment of the Evaluation outlined in the ToR. The Bidder should include:

  • the level of effort of each member of the entire Bidder’s Team;

  • a staffing schedule that specifies the tasks performed by each team member and the time allocated to each of them.

Points will be awarded on the following elements.

Rated criterion R.2.3

The chapeau of rated criterion R.2.3 of the RFP’s evaluation criteria provides as follows:

Bidder’s Quality Assurance System (maximum 25 points)

A maximum of two (2) pages will be considered for this requirement

The Bidder is expected to have an Evaluation Quality Assurance System (EQAS). That is, the Bidder is expected to dedicate specific resources to quality assurance efforts and have quality assurance mechanisms which will be applied throughout the evaluation process. The Bidder should provide:

  1. a description of the Bidder’s EQAS including the following elements:

  • The components of the evaluation mandate that will be covered;

  • The points in the evaluation process when the above components will be covered;

  • The proposed process steps/mechanisms to ensure the above components will be covered at each point in the evaluation process listed above [i.e. how the quality of the evaluation management and evaluation deliverables will be ensured throughout the evaluation process]; and

  • The roles and responsibilities of quality assurance personnel.

  2. evidence that the described EQAS is in place and has been used for previous evaluations;

Points will be awarded on the following elements.

Evaluation team basic guidelines

Clause 6 of the evaluation team basic guidelines governing the technical evaluation provides as follows:

6. TECHNICAL EVALUATION

The documents for the evaluation process include the following:

  • The RFP and all addendums, including all questions and answers published on Buy and Sell during the bid solicitation;

  • Technical Proposals received from the Bidders;

  • Evaluation Grids;

  • Evaluation Guidelines.

6.1. INDIVIDUAL EVALUATION OF PROPOSALS

Each evaluation team member must read, evaluate and rate objectively each proposal in accordance with items 6.1.1 and 6.1.2 below. Evaluators are reminded that they are only to use information provided in the bid, no outside knowledge is to be considered as part of the evaluation.

Strengths, weaknesses (as applicable), missing information and cross-references to proposals must be documented in the individual evaluation grids. Remarks should be professional, relevant, factual and uncompromising; so as to avoid embarrassment should this information be requested by the Bidder through Access to Information (ATIP), auditors, or by Canadian International Trade Tribunal (CITT) investigators.

The originals of the individual evaluations must not be destroyed and are to be retained by the SGC Contracting Officer following completion of the evaluation process. These documents may be requested by the CITT in the event of a complaint and must be readily available.

6.1.1. Evaluation of the Mandatory Criteria

For each technical proposal to be evaluated, each evaluator will independently evaluate the Technical Mandatory Requirements and complete an individual evaluation grid.

Should an individual evaluator identify noncompliance with a Technical Mandatory Requirement, the individual evaluator must notify the SGC Contracting Officer, stop evaluating and wait for further directives.

6.1.2. Evaluation of the Point Rated Criteria

For each proposal meeting the mandatory requirements, each evaluator will independently evaluate the point rated evaluation criteria and complete the individual evaluation grid for the Rated Requirements.

6.2. RECONCILIATION OF INDIVIDUAL EVALUATIONS

Following completion of the individual evaluations, the evaluators will notify the SGC Contracting Officer.

The evaluation team will proceed with the reconciliation of the evaluation criteria for each proposal. The evaluation team must agree on a final rating for each rated criterion through consensus. Consensus is to be reached through discussion of the weaknesses and strengths of each bid and a supportable rating allocated by the evaluation team, as applicable.



[1] Exhibit PR-2020-085-10 at 56-188.

[2] Ibid. at 190-191.

[3] Ibid. at 193-194.

[4] Ibid. at 196.

[5] Ibid. at 198.

[6] Exhibit PR-2020-085-01A (protected) at 175-298.

[7] Exhibit PR-2020-085-10 at 329-330; Exhibit PR-2020-085-01 at 9. Although dated December 8, 2020, the regret letter was sent to ACT on January 7, 2021.

[8] Exhibit PR-2020-085-10 at 329; Exhibit PR-2020-085-01 at 11-12.

[9] Exhibit PR-2020-085-10 at 334.

[10] Ibid. at 337, 340-342.

[11] Exhibit PR-2020-085-01 at 18.

[12] Exhibit PR-2020-085-01; Exhibit PR-2020-085-01A (protected).

[13] Exhibit PR-2020-085-02.

[14] Exhibit PR-2020-085-03.

[15] SOR/91-499.

[16] Exhibit PR-2020-085-04.

[17] Exhibit PR-2020-085-01B; Exhibit PR-2020-085-01C (protected).

[18] Exhibit PR-2020-085-06.

[19] Exhibit PR-2020-085-07; Exhibit PR-2020-085-08.

[20] Exhibit PR-2020-085-01D.

[21] Exhibit PR-2020-085-09.

[22] Exhibit PR-2020-085-07A.

[23] Exhibit PR-2020-085-10; Exhibit PR-2020-085-10A (protected).

[24] Exhibit PR-2020-085-11.

[25] Exhibit PR-2020-085-12; Exhibit PR-2020-085-13.

[26] Exhibit PR-2020-085-14; SOR/93-602 [Regulations].

[27] Exhibit PR-2020-085-15; Exhibit PR-2020-085-15A (protected).

[28] Exhibit PR-2020-085-16.

[29] Exhibit PR-2020-085-19.

[30] Exhibit PR-2020-085-20.

[31] Northrop Grumman Overseas Services Corp. v. Canada (Attorney General), 2009 SCC 50 (CanLII) at para. 17; Canada (Attorney General) v. Northrop Grumman Overseas Services Corp., 2008 FCA 187 (CanLII) at paras. 50, 85.

[32] See, for example, Article 504(11)(vii) of the Canadian Free Trade Agreement, online: Internal Trade Secretariat <https://www.cfta-alec.ca/wp-content/uploads/2017/06/CFTA-Consolidated-Text-Final-Print-Text-English.pdf> (entered into force 1 July 2017).

[33] Exhibit PR-2020-085-10 at para. 32.

[34] Section 5 of the RFP sets out the evaluation criteria for evaluating proposals, including both mandatory and point‑rated technical criteria. The rated technical criteria comprise a section evaluating proposed personnel (R1), worth a maximum of 360 points, and proposed methodology (R2) worth a maximum of 225 points. The total maximum points for the rated technical criteria (R1 plus R2) was therefore 585 points, with a minimum required “pass” mark of 410 points (70 percent). Rated criterion R.2.1 assessed bidders’ proposed evaluation approach and methodology, rated criterion R.2.2 assessed the organization of the proposed evaluation team, and rated criterion R.2.3 assessed the proposed quality assurance system. See Exhibit PR-2020-085-10 at 138-148.

[35] Samson & Associates v. Department of Public Works and Government Services (13 April 2015), PR-2014-050 (CITT) at paras. 35-36 [Samson & Associates], citing: Northern Lights Aerobatic Team, Inc. v. Department of Public Works and Government Services (7 September 2005), PR-2005-004 (CITT) [Northern Lights] at para. 52; Accipiter Radar Technologies Inc. v. Department of Fisheries and Oceans (17 February 2011), PR-2010-078 (CITT) at para. 52.

[36] Raymond Chabot Grant Thornton Consulting Inc. and PricewaterhouseCoopers LLP (25 October 2013), PR‑2013-005 and PR-2013-008 (CITT) [Raymond Chabot] at para. 37.

[37] Madsen Power Systems v. Department of Public Works and Government Services (4 May 2016), PR-2015-047 (CITT) [Madsen Power Systems] at para. 41.

[38] At para. 35, citing: Samson & Associates v. Department of Public Works and Government Services (19 October 2012), PR-2012-012 (CITT) at paras. 26-28; Northern Lights at para. 52.

[39] Exhibit PR-2020-085-10 at para. 39; Exhibit PR-2020-085-01B at paras. 43, 59, 85, 87.

[40] Exhibit PR-2020-085-10 at paras. 39-40 and at 138-148, 348-360. See Annex 1 of these reasons for the entirety of section 6 of the evaluation guidelines.

[41] Exhibit PR-2020-085-15 at paras. 2-3.

[42] Canada (Minister of Citizenship and Immigration) v. Vavilov, 2019 SCC 65 [Vavilov].

[43] Exhibit PR-2020-085-20 at 1.

[44] Heiltsuk Horizon Maritime Services Ltd. v. Atlantic Towing Ltd., 2021 FCA 26 at para. 70, citing: Saskatchewan Polytechnic Institute, 2015 FCA 16 at para. 7.

[45] AJL Consulting (12 February 2020), PR‑2019-045 (CITT) at para. 9. See also CAE Inc. v. Department of Public Works and Government Services (26 August 2014), PR-2014-007 (CITT) at para. 45; Team Sunray and CAE Inc. v. Department of Public Works and Government Services (25 October 2012), PR-2012-013 (CITT) at para. 41; Falconry Concepts v. Department of Public Works and Government Services (10 January 2011), PR-2010-046 (CITT) at para. 59; C3 Polymeric Limited v. National Gallery of Canada (21 February 2013), PR-2012-020 (CITT) at para. 39; Pennecon Hydraulic Systems v. Department of Public Works and Government Services (4 September 2019), PR-2019-007 (CITT) at para. 56.

[46] Raymond Chabot at paras. 2, 25, 37; Madsen Power Systems. See also Valcom Consulting Group Inc. v. Department of National Defence (14 June 2017), PR-2016-056 (CITT) at para. 53, for the restatement of the principle from Madsen Power Systems specifying the context of mandatory criteria.

[47] SoftSim Technologies Inc. v. National Research Council Canada (11 October 2018), PR-2018-015 at paras. 33-41.

[48] Exhibit PR-2020-085-10 at para. 90.

[49] Star Group International Trading Corporation (7 April 2014), PR-2013-032 (CITT) [Star Group] at para. 54.

[50] Ibid. at paras. 55-56.

[51] Deloitte Inc. (25 July 2017), PR-2016-069 (CITT) [Deloitte] at paras. 38-40, citing: Primex Project Management Ltd. (22 August 2002), PR-2002-001 (CITT) at 10; IBM Canada Ltd. (24 April 1998), PR-97-033 (CITT).

[52] Deloitte at paras. 33-54.

[53] Ibid. at para. 43.

[54] Exhibit PR-2020-085-10 at 145.

[55] Ibid. at 103.

[56] Ibid. at 146.

[57] Exhibit PR-2020-085-01B at para. 28; Exhibit PR-2020-085-01 at 11, 14. See Annex 1 of these reasons for the evaluation questions set out at clause 3 of the ToR.

[58] Exhibit PR-2020-085-01B at para. 31.

[59] Ibid. at para. 32; Exhibit PR-2020-085-01A (protected) at 204-206.

[60] Exhibit PR-2020-085-01A (protected) at 206.

[61] Ibid. at 204.

[62] Exhibit PR-2020-085-10 at paras. 54-55; Exhibit PR-2020-085-01A (protected) at 204.

[63] Exhibit PR-2020-085-01A (protected) at 204, 206.

[64] Ibid. at 206.

[65] Online: <https://www.betterevaluation.org/en/plan/approach/contribution_analysis>.

[66] Exhibit PR-2020-085-01A (protected) at 206.

[67] Exhibit PR-2020-085-10 at 146.

[68] Exhibit PR-2020-085-01 at 12, 14.

[69] Exhibit PR-2020-085-01B at para. 40; Exhibit PR-2020-085-01A (protected) at 203-206.

[70] Exhibit PR-2020-085-01B at para. 41; Exhibit PR-2020-085-01A (protected) at 207.

[71] Exhibit PR-2020-085-01B at paras. 42-44; Exhibit PR-2020-085-10 at 107.

[72] Exhibit PR-2020-085-10A (protected) at 376, 399, 414.

[73] Exhibit PR-2020-085-10 at 146.

[74] Exhibit PR-2020-085-01A (protected) at 204, 206-207.

[75] Exhibit PR-2020-085-10 at 147. See Annex 1 of these reasons for the entire chapeau of rated criterion R.2.2 setting out these requirements.

[76] Exhibit PR-2020-085-01 at 12, 14.

[77] Exhibit PR-2020-085-01B at paras. 46-48; Exhibit PR-2020-085-01A (protected) at 209-213.

[78] Exhibit PR-2020-085-01B at para. 49; Exhibit PR-2020-085-01A (protected) at 211, 215.

[79] Exhibit PR-2020-085-01A (protected) at 209.

[80] Exhibit PR-2020-085-10 at 110, 147.

[81] Exhibit PR-2020-085-01A (protected) at 215.

[82] Ibid. at 209.

[83] Exhibit PR-2020-085-01B at para. 53.

[84] Exhibit PR-2020-085-10 at paras. 73-74.

[85] Exhibit PR-2020-085-01B at para. 52; Exhibit PR-2020-085-01 at 12, 14.

[86] Exhibit PR-2020-085-01B at paras. 55-56; Exhibit PR-2020-085-01A (protected) at 209.

[87] Exhibit PR-2020-085-01B at paras. 60-65; Exhibit PR-2020-085-01A (protected) at 205, 209-210.

[88] Exhibit PR-2020-085-01A (protected) at 209-211; Exhibit PR-2020-085-10 at 111.

[89] Exhibit PR-2020-085-10 at paras. 75-82; Exhibit PR-2020-085-01A (protected) at 209-210, 215.

[90] Exhibit PR-2020-085-15 at paras. 39-41.

[91] Exhibit PR-2020-085-01A (protected) at 211.

[92] Ibid. at 233, 238.

[93] Exhibit PR-2020-085-10 at 147.

[94] Exhibit PR-2020-085-01 at 12.

[95] Exhibit PR-2020-085-01B at para. 69; Exhibit PR-2020-085-01A (protected) at 210, 212.

[96] Exhibit PR-2020-085-01B at para. 72; Exhibit PR-2020-085-01 at 15.

[97] Exhibit PR-2020-085-01B at paras. 73-75; Exhibit PR-2020-085-01A (protected) at 204, 206.

[98] Exhibit PR-2020-085-10 at 104, 106.

[99] Exhibit PR-2020-085-01A (protected) at 207, 209-213.

[100] Exhibit PR-2020-085-15 at para. 43.

[101] Exhibit PR-2020-085-20 at 4.

[102] Exhibit PR-2020-085-01A (protected) at 210, 212.

[103] Exhibit PR-2020-085-10 at 147.

[104] Exhibit PR-2020-085-01B at paras. 77-78; Exhibit PR-2020-085-01 at 12, 15.

[105] Exhibit PR-2020-085-01B at paras. 83-84; Exhibit PR-2020-085-01A (protected) at 213.

[106] Exhibit PR-2020-085-10 at 58.

[107] Exhibit PR-2020-085-01B at paras. 79-80; Exhibit PR-2020-085-01A (protected) at 212-213.

[108] Exhibit PR-2020-085-10 at para. 99 and at 58, 112.

[109] Exhibit PR-2020-085-01A (protected) at 213; Exhibit PR-2020-085-01 at 15.

[110] Exhibit PR-2020-085-01 at 15.

[111] Exhibit PR-2020-085-10 at paras. 101-107; Exhibit PR-2020-085-01A (protected) at 212.

[112] Exhibit PR-2020-085-01B at para. 87; Exhibit PR-2020-085-10 at 112.

[113] Exhibit PR-2020-085-10 at 58.

[114] Ibid. at 148. See Annex 1 of these reasons for the entire chapeau of rated criterion R.2.3 setting out these requirements.

[115] Exhibit PR-2020-085-01B at paras. 88-95.

[116] Exhibit PR-2020-085-10 at 148.

[117] Exhibit PR-2020-085-01B at para. 89; Exhibit PR-2020-085-01 at 15.

[118] Exhibit PR-2020-085-01B at paras. 91-95; Exhibit PR-2020-085-01A (protected) at 214-215.

[119] Exhibit PR-2020-085-10 at 111, 148.

[120] Exhibit PR-2020-085-15 at paras. 47-49.

[121] Exhibit PR-2020-085-20 at 4-5.

[122] Exhibit PR-2020-085-01A (protected) at 212-213, although the Tribunal notes that the acronym used for the peer review expert in the task allocation chart does not appear to be defined anywhere in ACT’s proposal.

[123] Exhibit PR-2020-085-01A (protected) at 381, 416.

[124] At para. 35.

[125] Exhibit PR-2020-085-10 at 148.

[126] Exhibit PR-2020-085-01B at para. 96; Exhibit PR-2020-085-01 at 16.

[127] Exhibit PR-2020-085-01B at paras. 97-98; Exhibit PR-2020-085-01A (protected) at 189-190, 214.

[128] Exhibit PR-2020-085-10 at paras. 119-124.

[129] Exhibit PR-2020-085-01 at 11.

[130] Exhibit PR-2020-085-10A (protected) at 407-417.

[131] Ibid. at 364-382.

[132] Ibid. at 386-403.

[133] Exhibit PR-2020-085-10 at 145, 374, 396, 412.

[134] Ibid. at 148, 382, 403, 416.

[135] Exhibit PR-2020-085-10A (protected) at 366, 389, 409.

[136] Ibid. at 142, 367, 390, 409.

[137] Exhibit PR-2020-085-20 at 2.

[138] Exhibit PR-2020-085-15A (protected) at paras. 29-32, 35-36.

[139] Exhibit PR-2020-085-10A (protected) at 376, 398, 414.

[140] Exhibit PR-2020-085-01 at 14; Exhibit PR-2020-085-10A (protected) at 211.

[141] CGI Information Systems and Management Consultants Inc. v. Canada Post Corporation and Innovapost Inc. (24 October 2014), PR-2014-015 and PR-2014-020 (CITT) [CGI] at para. 144, application for judicial review dismissed 2015 FCA 272.

[142] Deloitte at para. 28.

[143] (15 September 2016), PR-2016-004 (CITT) at para. 60.

[144] Exhibit PR-2020-085-10 at 349.

[145] Exhibit PR-2020-085-10A (protected) at 389.

[146] Ibid. at 416.

[147] Ibid. at 208, 211, 366, 375, 378, 409, 413, 414.
