Planning the 2026-2027 Teacher Observation Cycle: A Summer Checklist

By Observation Copilot Team

Summer is the window principals use to plan the next school year's observation cycle before September demands make structural changes impossible. Effective planning for the 2026-2027 cycle means auditing last year's feedback data, setting an observation calendar, reviewing state policy changes, calibrating co-evaluators, and choosing tools that hold up across 180 days of classroom visits. This checklist walks through what to decide before the first bell rings.

Why Plan in Summer and Not in September?

Most principals treat observation scheduling as a September problem. By mid-August, the cycle is set by default - the same formal observation counts, the same evaluators, the same rubric interpretation - and the changes that could have improved it stay on the to-do list all year.

Summer is the only window with enough margin to make structural changes stick. Calibration sessions, pre-conference templates, policy audits, and tool selection all take a few hours that are impossible to find once teachers are back in the building. Principals who use June and July to plan walk into September with a calendar, not a scramble.

What Should Principals Audit Before the New Cycle Begins?

Before building the 2026-2027 calendar, spend an hour with last year's data. Three questions are worth answering:

  1. Which teachers were under-observed? A single formal observation captures roughly 0.1% of a teacher's annual instructional time - one 45-minute visit against roughly 1,000 hours of classroom instruction. Look at informal walkthrough frequency per teacher and identify anyone who received three or fewer touchpoints all year.
  2. Where did summative ratings cluster? If every teacher landed at "Proficient" or "Effective," either the faculty is genuinely uniform or the rubric is being applied with rating inflation. Note the distribution and flag it for calibration work.
  3. Which domains produced the thinnest evidence? Skim last year's write-ups and note which framework domains kept getting single-sentence summaries. Those are the domains where principals struggle to collect evidence, and the areas to prioritize in evaluator training before September.

This audit takes about an hour when the observation data is in one place. When it is spread across emails, Google Docs, and paper notebooks, it takes an afternoon.
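
When the data is in one place, the first two audit questions reduce to simple counting. Here is a minimal sketch in Python; the teacher names, record layout, and the idea of a flat export of last year's log are all invented for illustration, not a feature of any particular tool.

```python
from collections import Counter

# Hypothetical flat export of last year's observation log:
# one record per classroom visit, plus each teacher's summative rating.
visits = [
    {"teacher": "Alvarez", "type": "walkthrough"},
    {"teacher": "Alvarez", "type": "formal"},
    {"teacher": "Brooks",  "type": "walkthrough"},
    {"teacher": "Chen",    "type": "walkthrough"},
    {"teacher": "Chen",    "type": "walkthrough"},
]
ratings = {"Alvarez": "Effective", "Brooks": "Effective", "Chen": "Effective"}

# Question 1: who received three or fewer touchpoints all year?
touchpoints = Counter(v["teacher"] for v in visits)
under_observed = sorted(t for t in ratings if touchpoints[t] <= 3)

# Question 2: where did summative ratings cluster?
distribution = Counter(ratings.values())

print(under_observed)  # teachers to prioritize next year
print(distribution)    # flag for calibration if one level dominates
```

In this toy log every teacher falls at or below the three-touchpoint threshold, and every rating lands at one level - exactly the two patterns the audit is meant to surface.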

How Do You Build an Observation Calendar That Actually Sticks?

The best observation calendars are built backward from summative deadlines, not forward from the first week of school. Start with the date your district requires summative evaluations to be submitted. Work backward from there:

  1. Block the summative conference window first. Give yourself two weeks before the district deadline. For a May 15 deadline, book summative conferences the first week of May.
  2. Reserve calibration days with co-evaluators. If assistant principals also conduct observations, plan at least two half-days of norming per year - one in September and one in January. Without calibration, a teacher's rating becomes partly a function of which administrator walked in.
  3. Schedule formal observations in a 60/40 split. 60% in fall semester, 40% in spring. Front-loading gives teachers time to act on feedback and ensures a documented first data point before the winter holiday.
  4. Build informal walkthrough blocks into your weekly calendar. Kim Marshall, author of Rethinking Teacher Supervision and Evaluation (Jossey-Bass, 2013), recommends approximately ten brief mini-observations per teacher per year. Across a 25-teacher faculty and a 36-week school year, that is roughly 6-7 walkthroughs per week.
  5. Leave deliberate buffer weeks. The first week after a school break and the two weeks surrounding standardized testing should be protected from formal observations. Classrooms during those windows do not represent typical instruction.
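
The arithmetic behind steps 3 and 4 can be sketched in a few lines. The faculty size, formal-observation count, and 36-week year below are illustrative assumptions, not requirements:

```python
# Assumed inputs - adjust to your building.
faculty = 25                    # teachers in the building
formals_per_teacher = 2         # district-required formal observations
walkthroughs_per_teacher = 10   # Kim Marshall's mini-observation target
school_weeks = 36

# Step 3: the 60/40 fall/spring split of formal observations.
total_formals = faculty * formals_per_teacher
fall_formals = round(total_formals * 0.60)
spring_formals = total_formals - fall_formals

# Step 4: weekly walkthrough cadence needed to hit the target.
weekly_walkthroughs = faculty * walkthroughs_per_teacher / school_weeks

print(fall_formals, spring_formals)   # 30 20
print(round(weekly_walkthroughs, 1))  # 6.9 - the "roughly 6-7 per week" above
```

Seeing the cadence as a number is the point: roughly seven walkthroughs per week is a standing calendar block, not something that happens between other commitments.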

This calendar is a working document, not a commitment device. Expect to adjust by October. The point is starting with a structure so drift from the plan is visible and correctable.

How Should Observation Cycles Differ by Teacher Experience?

Not every teacher benefits from the same cycle. Differentiating reduces principal workload and concentrates feedback on the teachers who need it most.

  1. First- and second-year teachers typically benefit from 2-3 formal observations plus frequent informal walkthroughs - weekly or biweekly. The goal is volume and tight feedback loops during the years when instructional habits form.
  2. Tenured, high-performing teachers may be eligible for reduced cycles. Several states, including Michigan under Public Act 224 of 2023, allow teachers rated effective or highly effective for three consecutive years to be evaluated on a biennial or triennial schedule.
  3. Teachers on growth plans need documented, frequent observation - often monthly or more - with specific evidence aligned to the plan's goals. Informal walkthroughs do not substitute for the formal documentation these cycles require.

Differentiation only works when it is written down before September. Decisions made ad hoc during the year skew toward whoever requests the most attention, not whoever needs the most support.

How Do Principals Calibrate Evaluators Before the First Observation?

Inter-rater reliability is the quiet determinant of whether observation ratings mean anything across a building. Two evaluators watching the same lesson should arrive at similar ratings. In practice, they often do not.

A summer calibration session that fits in 90 minutes:

  1. Watch a recorded lesson together. Domain-specific video from the Danielson Group or internal recordings from last year both work.
  2. Score independently. Each evaluator rates the lesson using the full rubric without discussion.
  3. Compare ratings. Where did evaluators diverge by more than one performance level? Those are the calibration gaps worth talking through.
  4. Discuss evidence, not scores. The productive conversation is "what did you see that led to this rating" - not "you were too generous." Grounding the disagreement in observable evidence keeps the norming honest.
  5. Document the shared interpretation. Write down how the team will treat edge cases next time. This becomes the reference point for mid-year norming.

Learning Forward, the professional standards organization for educator development, treats calibration as a core component of effective professional learning communities. Many states also now require evaluators to complete formal rater reliability training every three years. If your last training was in 2023, 2026-2027 is the year to recertify.

What Policy Changes Should Principals Review for 2026-2027?

State evaluation policy shifted meaningfully between 2024 and 2026. Summer is the time to confirm your cycle reflects current rules.

  1. Student growth weighting is decreasing. Michigan reduced the required weight of student growth data from 40% to 20% of a teacher's evaluation, effective July 1, 2024. Illinois moved student growth data toward optional status. Several states are returning evaluation procedures to local collective bargaining.
  2. Rating systems are simplifying. The trend is toward 3-level rating scales - Effective, Developing, Needing Support - replacing older four-level systems. Confirm your framework's scale matches current state guidance.
  3. AI use by teachers is now in scope. Many districts published AI use guidelines during the 2025-2026 school year. Review your district's policy before September so you know how to document AI-assisted lesson planning or student-facing materials during observations.
  4. Observation frequency is becoming more flexible. High-performing teachers may be eligible for reduced cycles. Teachers on growth plans require increased frequency. Make sure your calendar reflects both ends of this spectrum.

For district leaders managing these shifts across multiple schools, district partnerships help standardize the observation process and ensure every principal is applying current policy consistently. The bridge from September planning to May summative ratings is also where documentation quality matters most - see our end-of-year summative review guide for what to keep in mind as the cycle winds down.

Choosing Tools That Hold Up Across 180 Days

The tools you pick in August determine how much friction you carry all year. Principals often overestimate the importance of the note-taking tool and underestimate the importance of the feedback tool. Raw notes are easy. Turning 30 minutes of notes into a structured, framework-aligned feedback draft is what eats the hours.

Observation Copilot has helped me to streamline and speed up the teacher feedback process. In the past, it's taken me up to two weeks to get the final report written, identify areas of strength and weakness, and then finally sit down and be able to have the meeting with the teacher to go over it.

- Jason Cunningham, Principal, Stockdale Independent School District, Stockdale, TX

Observation Copilot generates a structured feedback draft aligned to the framework you select - Danielson FFT, T-TESS, Marzano, Kim Marshall, or any of the state and district rubrics the tool supports. Principals typically reduce post-observation write-up time from two to three hours per teacher to under 30 minutes. That time returns to instructional leadership, or to the next walkthrough. See how those time savings compound over a full year of observations.

Choosing the tool in summer matters because the first week of school is not the week to learn a new interface. Sign up, run a practice note through it, and decide in July whether it fits your workflow.

Frequently Asked Questions

When should principals start planning the next observation cycle?

June or July is ideal. Planning in summer gives principals time to audit last year's data, book calibration sessions, review state policy updates, and test new tools before September. Waiting until August typically results in the same cycle as last year by default, regardless of what needed to change.

How many formal observations should principals plan per teacher?

Most evaluation frameworks require one or two formal observations per year, but Kim Marshall's work recommends approximately ten brief mini-observations per teacher per year to supplement them. Differentiate by experience: first- and second-year teachers benefit from more frequent visits than tenured teachers with consistent effective ratings.

What is the minimum calibration principals should do with co-evaluators?

At least two half-day calibration sessions per year - one in September and one in January. Watching a recorded lesson together and comparing independent ratings surfaces drift between evaluators before it affects teacher outcomes. Many states also require formal rater reliability training every three years.

How do 2026-2027 policy changes affect observation cycles?

Key shifts include reduced weighting of student growth data in several states, simplified 3-level rating scales, flexible observation frequency for high-performing teachers, and new district-level guidelines on AI use by teachers. Confirm your state and district rules before building the new cycle.

Can AI tools help with observation cycle planning?

Yes. AI tools like Observation Copilot handle the time-intensive part of each observation - turning raw notes into structured, framework-aligned feedback - which makes frequent walkthroughs realistic across a full year. It is free for individual principals at app.observationcopilot.com.

Walk into September with a cycle that holds up all year.