Patterns
The opposite of “tell us about yourself.”
Most apps ask you to fill out a profile and then build the experience around what you said. Axis builds a second profile alongside it — observed entirely from your actual behavior. When the two disagree, the observed one is usually right.
Why this exists
Two profiles. The one you wrote, and the one your behavior wrote.
Profile-on-signup is an inherited convention. It made sense before software could see what its users actually did. It makes much less sense now. People describe themselves as they want to be; the planner needs to know how they actually are.
Axis maintains both. Your declared profile (archetype, chronotype, life priorities, work hours) is what you told us at signup. Your observed profile is what the system has watched you do since. The cards below are the second one, surfaced in plain language.
When intent and behavior disagree, behavior is the data we trust.
What gets observed
Five cards, each fed by a specific slice of your data. Every observation shows what it’s based on so you can sanity-check it. Every observation can be pushed back on if the system is wrong.
01
Behavioral chronotype
When you actually focus best — by the data, not the survey.
Twelve two-hour buckets across the day, scored from when your blocks start, how you rate them, and how energized you felt. The buckets where you consistently rate sessions highest get highlighted. If your declared chronotype and your behavioral chronotype disagree, the dashboard shows both — and the engine listens to the second one.
Inputs: block start times × session outcome × energy state, last 28 days.
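A minimal sketch of how this kind of bucket scoring could work. The field names (`start_hour`, `rating`, `energized`) and the scoring formula are illustrative assumptions, not the real Axis schema or weights:

```python
from collections import defaultdict

def behavioral_chronotype(blocks):
    """Score twelve 2-hour buckets from session starts, ratings, and energy.

    `blocks` is a list of dicts with hypothetical fields: `start_hour`
    (0-23), `rating` (1-5), and `energized` (bool). The real schema and
    weighting are not public; this is a sketch of the shape of the idea.
    """
    totals = defaultdict(float)
    counts = defaultdict(int)
    for b in blocks:
        bucket = b["start_hour"] // 2  # 0..11, one per two-hour slot
        # Toy score: session rating plus a small bonus for feeling energized.
        score = b["rating"] + (1 if b["energized"] else 0)
        totals[bucket] += score
        counts[bucket] += 1
    averages = {k: totals[k] / counts[k] for k in totals}
    best = max(averages, key=averages.get)  # the bucket to highlight
    return averages, best
```

The highlighted bucket is just the highest average; a production version would also want a consistency requirement (enough sessions per bucket) before trusting it.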
02
Mode preferences
What you actually do inside the modes you said you’d work in.
When you scheduled “Deep Build” last quarter, what activity did the time actually go to — coding, design, technical writing, presentation building? Top three horizontal bars per work mode, sized by minutes. The split is often surprising. People declare one thing and consistently do another, and the planner should know which is real.
Inputs: block.modeL1 × block.activity × duration, over the current lookback window.
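The split described above is a group-and-rank over minutes. A sketch under assumed field names (`mode`, `activity`, `minutes`; the page itself refers to `block.modeL1` and `block.activity`):

```python
from collections import defaultdict

def mode_activity_split(blocks, top_n=3):
    """Top activities per work mode, sized by total minutes.

    Field names are illustrative stand-ins for the real block schema.
    Returns {mode: [(activity, minutes), ...]} with at most `top_n`
    activities per mode, largest first.
    """
    minutes = defaultdict(lambda: defaultdict(int))
    for b in blocks:
        minutes[b["mode"]][b["activity"]] += b["minutes"]
    return {
        mode: sorted(acts.items(), key=lambda kv: kv[1], reverse=True)[:top_n]
        for mode, acts in minutes.items()
    }
```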
03
Skip patterns
The kinds of blocks that quietly keep getting cancelled.
Drawn from the lifecycle log, not from feedback you remember to give. If a class of block keeps getting cancelled — Friday afternoon strategy, Tuesday workouts, anything labeled “admin” — the pattern surfaces here as plain text. No score, no shaming. Just: this keeps happening. The decision to act on it is yours.
Inputs: BlockLifecycle CANCELED events grouped by mode/activity/day of week.
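Grouping cancellations by class could look like the sketch below. The event fields (`status`, `mode`, `activity`, `weekday`) and the recurrence threshold are assumptions; only the `CANCELED` event name comes from the page:

```python
from collections import Counter

def skip_patterns(events, min_count=3):
    """Surface classes of blocks that keep getting cancelled.

    `events` is a list of lifecycle dicts; the field names used here
    are illustrative stand-ins for the real BlockLifecycle schema.
    Returns [( (mode, activity, weekday), count ), ...], most frequent first.
    """
    cancelled = Counter(
        (e["mode"], e["activity"], e["weekday"])
        for e in events
        if e["status"] == "CANCELED"
    )
    # Only patterns that recur enough to be worth surfacing; one-off
    # cancellations are noise, not a pattern.
    return [(key, n) for key, n in cancelled.most_common() if n >= min_count]
```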
04
Energy by day of week
Which days you actually have energy on. Often not the ones you’d guess.
Monday-through-Sunday bars showing the share of sessions you rated as energized. The best day gets highlighted. Useful when designing the week — the engine respects this when placing high-stakes blocks if you let it.
Inputs: block.energyState across day of week, current rolling window.
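The per-day share is a simple ratio. A sketch, assuming `weekday` (0=Mon … 6=Sun) and a boolean `energized` in place of the real `block.energyState` field:

```python
from collections import defaultdict

def energy_by_weekday(blocks):
    """Share of sessions rated energized, per day of week.

    Field names are assumptions for illustration. Returns the per-day
    shares plus the best day to highlight.
    """
    energized = defaultdict(int)
    total = defaultdict(int)
    for b in blocks:
        total[b["weekday"]] += 1
        if b["energized"]:
            energized[b["weekday"]] += 1
    shares = {d: energized[d] / total[d] for d in total}
    best_day = max(shares, key=shares.get)
    return shares, best_day
```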
05
Priority alignment
How much of your time actually went to the goals you said were the priority.
A gauge against an 80% target: of all hours that landed against a goal, what share went to goals you marked CRITICAL or PRIMARY. It’s a single number that catches the most common gap between intent and behavior — that the goals you said mattered are not the goals your week ran on.
Inputs: block.goalId × goal.priority × duration.
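The gauge reduces to one division: priority hours over all goal-linked hours, compared against the 80% target. A sketch with assumed field names (`goal_id`, `hours`; `goals` as a map from goal id to priority string):

```python
def priority_alignment(blocks, goals, target=0.80):
    """Share of goal-linked hours spent on CRITICAL/PRIMARY goals.

    `goals` maps goal id -> priority string. Field names are
    illustrative; only the priority labels and the 80% target
    come from the page.
    """
    priority_hours = 0.0
    total_hours = 0.0
    for b in blocks:
        goal_id = b.get("goal_id")
        if goal_id is None:
            continue  # only hours that landed against a goal count
        total_hours += b["hours"]
        if goals.get(goal_id) in ("CRITICAL", "PRIMARY"):
            priority_hours += b["hours"]
    share = priority_hours / total_hours if total_hours else 0.0
    return share, share >= target
```

Note that unassigned hours are excluded from the denominator entirely, which matches the page's "of all hours that landed against a goal" framing.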
When the system is wrong
Every observation has a “this isn’t right.”
A profile that observes you needs an honest way to be wrong. Every card carries a small affordance — a “this isn’t right” link that opens a textbox where you can explain why. The observation gets struck through in place with your note attached. Undo is one click.
Your pushback is keyed to the specific observation, not the whole category. When the underlying pattern shifts and the observation changes, it re-surfaces — so silencing a specific call doesn’t silence a whole section forever.
The system gets to make claims. You get to overrule them.
How the synthesis works
Deterministic. Auditable. Free.
The observations are computed from your raw data — blocks, lifecycle events, goals — by code, not by an AI call. That matters for three reasons.
- Reproducible. Same data, same window — same observations every time.
- No tokens spent. Reading the dashboard never costs you AI credits.
- No hallucination surface. The system can’t invent a claim that isn’t in the math.
Sparse-data sections show a soft “need a few more rated sessions” note instead of inventing observations from too little data. The header strip on every card shows the lookback window and the underlying block count, so you can see exactly what the synthesis is based on.
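The sparse-data guard described above is a threshold check before any computation runs. The minimum-session count here is an invented placeholder; the page does not state the real number:

```python
MIN_RATED_SESSIONS = 5  # assumed threshold; the real cutoff isn't stated

def observations_or_note(rated_sessions, compute):
    """Return computed observations, or a soft note when data is sparse.

    `compute` is any of the per-card synthesis functions; it only runs
    once there are enough rated sessions to say something meaningful.
    """
    if len(rated_sessions) < MIN_RATED_SESSIONS:
        return {"note": "need a few more rated sessions"}
    return compute(rated_sessions)
```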
Where it shows up
The observed profile feeds the rest of the planner.
The behavioral chronotype overlays the trends heatmap so misalignment becomes visible. The skip-pattern engine surfaces the same data on the trends dashboard under Divergence Detection. The priority-alignment gauge gets cross-referenced in the weekly review.
Over time, the engine starts giving the observed profile more weight than the declared one when the two conflict. That’s how the system gets sharper the longer you use it.
Find out what your weeks have been telling you.
Patterns is included on Axis AI and Axis AI Pro.