HR: Performance Reviews

    We run quarterly performance reviews, which twice a year include a full 360° assessment with direct reports and peers weighing in.

    While it is a time commitment, it is time well spent. Quarterly reviews multiply performance data points and ensure we keep our eyes on the ball at all times. We act quickly on opportunities to improve and/or scale best practices. 360Learners also receive a steady stream of insights from all colleagues to help them grow.

    We want to keep the workload down during busy January and PTO-heavy July, so we run:

    • “standard” reviews in January (Q4) and July (Q2)

    • full 360° reviews in April (for Q1) and October (Q3)

    We do not run reviews for employees who joined in-quarter. For those who joined less than 2 quarters before the review begins, reviews are opt-in: provided the coach and employee agree, either can ping their HR BP while the review cycle is open. After that, reviews become mandatory.

    We do not run reviews for employees who have been on sick or maternity leave for more than 1 month in the quarter being reviewed. This guarantees that a partial, transitional review cannot negatively affect expecting or recovering 360Learners.

    Employees on garden leave or serving notice are not included.
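    The eligibility rules above can be sketched as a small decision function. This is a hypothetical illustration of the policy, not an actual 360Learning tool; the parameter names and the quarter arithmetic are assumptions.

    ```python
    # Hypothetical sketch of the review-eligibility rules; not an official tool.
    def review_status(quarters_since_join: int,
                      months_on_leave_in_quarter: int,
                      on_garden_leave_or_notice: bool) -> str:
        """Return the review status for one employee in one review cycle:
        'none', 'opt-in' (runs only if coach and employee agree), or
        'mandatory'."""
        if on_garden_leave_or_notice:
            return "none"          # garden leave / serving notice: excluded
        if months_on_leave_in_quarter > 1:
            return "none"          # sick or maternity leave > 1 month in quarter
        if quarters_since_join < 1:
            return "none"          # joined in-quarter: no review
        if quarters_since_join < 2:
            return "opt-in"        # < 2 quarters of tenure: opt-in via HR BP
        return "mandatory"
    ```

    For example, someone hired one quarter before the cycle opens gets an opt-in review, while a fully ramped employee on extended leave is skipped.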


    Timeline of a full 360° review

    Notes: for standard reviews, simply ignore the steps related to peers and upward reviews; "Q" stands for the end of the quarter


    Your "to-do" list for a full 360° review

    Notes: as above, for standard reviews, just ignore the steps related to peers and upward reviews


    The review format

    • Each review writer uses a specific template depending on their role in the review process.

      • Self-reviews are for 360Learners to reflect, self-assess, and provide insights to their coach. You’ll write about your main growth area and rate your overall performance for the quarter versus 360Learning standards. You’ll also input your OKR achievement %. While writing your review, you’ll let your coach know how engaged you feel, the state of your workload, and how they can help.

      • Peer reviews are for 360Learners who depend on the reviewee’s work or collaborate closely with them to offer broader insights. They are not necessarily peers in the strictest sense of the term. As a peer, you’ll provide actionable feedback that can be used for growth.

      • Upward reviews are for direct reports - if any - to assess how their coach is doing on key coaching and Convexity markers and what their “coaching NPS” is. As a reviewer, you’ll answer a “start, stop, continue” set of questions. We expect coaches to invite candid feedback, receive it humbly, and take low participation very seriously.

      • Coach reviews are for coaches to assess their direct reports’ performance and give them consolidated feedback and career guidance. As a coach, after reading all other review writers’ work (if any), you’ll give your definitive view of the reviewee’s main growth areas and rate their overall performance for the past quarter versus 360Learning standards. You don’t have to agree with peers or direct reports, but you should be able to discuss any gaps. Likewise, for calibration, we request that you run your direct reports’ performance ratings by your own coach before submitting them. Finally, you’ll reflect on any flight risks on the team and mitigation actions.

    • Each review is shared with the reviewee at the end of the review cycle, with the name of the reviewer attached. The only exceptions are upward reviews, which are anonymized, and the coach’s reflection on flight risks, which isn’t shared. Both of these exceptions exist to maximize candor.

    • Whatever your role, we expect a healthy mix of benevolence, sincerity, and high standards from all reviewers to grow as an organization.

    • Reviews are written in English to enable global careers and collaboration.


    Performance ratings, how they're calibrated, and what they're used for

    • The role of performance ratings in performance reviews is often a topic of discussion, so it deserves a mention. At 360Learning, performance ratings answer the question: “Based on your experience working with the reviewee, how well did they perform versus the company standards for the role and/or level?”

    • While we’re not fixated on giving everyone a grade, we need performance ratings to ensure that 360Learners get the “brass tacks” on performance in an unambiguous way - which can otherwise be lost in the noise - and to operate our transparent reward processes effectively.

    • We reject forced ranking as a destructive practice and therefore only provide reviewers with recommendations on the distribution of performance ratings. However, given the importance of performance, we apply several calibration routines to smooth out bias:

      • Requiring coaches to discuss their direct reports’ ratings with their own coach prior to submitting them, so the skip-level coach can ensure fairness across their organization

      • Sharing all performance ratings in the open performance database (see https://360learning.atlassian.net/wiki/spaces/360LEARNIN/pages/91849171 for more)

      • Systematic calibration of performance reviews at the Department level, at the outcome stage

        • What it means: We systematically realign every Department’s breakdown of performance ratings to the Company’s average or Company guideline (whichever is most conservative). We do so for every process that utilizes performance ratings, namely equity grants and compensation reviews. We rely on the same methodology as the one used for July 2023 equity grants.

        • Rationale: We adopt a less heavy-handed approach to calibration that makes each department and leadership team accountable for the fairness of their performance assessments. However, we cannot let unavoidable - even if minimal - bias directly impact how 360Learners are rewarded. We therefore guarantee that the pay-for-performance approach is fair across departments.

    • Refresher equity grants and yearly compensation reviews are based on the average of the last 4 available ratings. We compute averages by:

      • assigning points to each rating, from -2 for "strongly below expectations" to 4 for "strongly exceeds expectations"

      • computing the average score using the last 4 available ratings

      • rating the highest 10% averages as "strongly exceeds expectations"

      • rating the next 25% as "exceeds expectations"

      • rating anyone below 0.5 as "below expectations"

      • rating everybody else as "meets expectations"

      • the ranking is done at the Department level, but we reserve the right to handle it at a lower level - Team - if we notice bias in ratings and the Exec in charge approves it

      • More details about this computed calibration, along with an example, can be found in slides 4 to 8 of this deck: https://docs.google.com/presentation/d/1zR8G5lFoB8SzoxTRW_QTCZjI6V16QcgLB3TzS0OQ2yw/edit#slide=id.g256feb36b37_0_871
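    The averaging and bucketing steps above can be sketched as follows. This is a minimal illustration, not the actual methodology: only the endpoint point values (-2 and 4) are specified, so the intermediate values, and how ties at the 10%/25% cutoffs are broken, are assumptions.

    ```python
    # Hypothetical sketch of the rating-averaging and calibration logic.
    # Intermediate point values are assumptions; only -2 and 4 are specified.
    RATING_POINTS = {
        "strongly below expectations": -2,   # specified
        "below expectations": 0,             # assumption
        "meets expectations": 1,             # assumption
        "exceeds expectations": 2,           # assumption
        "strongly exceeds expectations": 4,  # specified
    }

    def average_score(ratings: list[str]) -> float:
        """Average of the last 4 available ratings (fewer if not yet available)."""
        recent = ratings[-4:]
        return sum(RATING_POINTS[r] for r in recent) / len(recent)

    def calibrate(averages: dict[str, float]) -> dict[str, str]:
        """Map {employee: average score} to calibrated ratings within one
        Department: top 10% of averages -> strongly exceeds, next 25% ->
        exceeds, averages below 0.5 -> below, everyone else -> meets."""
        ranked = sorted(averages, key=averages.get, reverse=True)
        n = len(ranked)
        n_top = round(n * 0.10)   # cutoff rounding is an assumption
        n_next = round(n * 0.25)
        out = {}
        for i, emp in enumerate(ranked):
            if i < n_top:
                out[emp] = "strongly exceeds expectations"
            elif i < n_top + n_next:
                out[emp] = "exceeds expectations"
            elif averages[emp] < 0.5:
                out[emp] = "below expectations"
            else:
                out[emp] = "meets expectations"
        return out
    ```

    With the assumed point values, two quarters at "meets" and two at "exceeds" average to 1.5, which then lands in a bucket according to where it ranks within the Department.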

    More details about our pay-for-performance policies can be found at https://360learning.atlassian.net/wiki/spaces/360LEARNIN/pages/59244817.