I hear "this should be possible" three or four times a week. From reps. From CX. From my boss. From me, talking to myself, while squinting at a Salesforce report.
Most operators learn to ignore it. The ones who learn to triage it run circles around the ones who don't.
Triaging means putting every "this should be possible" ask into one of three buckets:
- Tractable in a week. Ship something this Friday.
- Tractable in a month. Scope it, get a sponsor, build a real thing.
- Intractable. Say no, with a reason. Find a smaller wedge that solves part of it.
The framework's value is mostly in bucket three. Most teams never give a clean "no" to a wishful ask; it just lingers on the roadmap, and the result is a year of half-built initiatives that ship nothing. Saying "no, here's why, here's what we can do instead" — fast — is the most underrated operating skill I've watched leaders develop.
Here's how I sort.
Bucket 1: tractable in a week
A week-tractable ask passes four tests. If all four are true, you should be ashamed if you don't ship by Friday.
- The data already exists in a system I have read access to.
- The user needs to see it, not act on it from inside their primary system.
- A static or hourly-refreshed view is good enough.
- It can live outside the primary system (a separate URL, a Slack post, a static page).
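The four tests are really just a boolean gate. A toy sketch — the field names on the `ask` dict are invented for illustration:

```python
# The four week-tractable tests as a boolean gate.
# The keys on `ask` are made-up field names, not a real schema.
def week_tractable(ask):
    return (
        ask["data_exists_with_read_access"]         # test 1: data is readable today
        and not ask["needs_write_back"]             # test 2: see, don't act
        and not ask["needs_real_time"]              # test 3: hourly refresh is fine
        and not ask["must_live_in_primary_system"]  # test 4: separate URL is fine
    )

stale_accounts_ask = {
    "data_exists_with_read_access": True,
    "needs_write_back": False,
    "needs_real_time": False,
    "must_live_in_primary_system": False,
}
print(week_tractable(stale_accounts_ask))  # all four pass: ship by Friday
```

If any key flips, the ask falls through to bucket 2 or 3.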
Examples that pass all four:
- "Can we see which accounts haven't been touched in 60 days?" — Salesforce report or 30-line script. Static page. Ship.
- "How much revenue did the codes-promo campaign drive?" — SQL, Python, JSON, Cloudflare. Ship.
- "Which customers have unused activation codes in their org?" — Magento API + a Worker. Ship. (I literally shipped this one.)
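The "untouched in 60 days" report really is about 30 lines. A sketch of the filtering core, assuming you've already pulled account records from the CRM API — the field names (`name`, `last_activity`) are placeholders, not real Salesforce fields:

```python
# Core of the "accounts untouched in 60 days" report.
# Assumes `accounts` is a list of dicts already fetched from the CRM API;
# field names are illustrative placeholders.
from datetime import datetime, timedelta

def stale_accounts(accounts, days=60, today=None):
    """Return accounts whose last activity is older than `days`, or missing."""
    today = today or datetime.utcnow()
    cutoff = today - timedelta(days=days)
    stale = []
    for acct in accounts:
        last = acct.get("last_activity")  # ISO date string, or None if never touched
        if last is None or datetime.fromisoformat(last) < cutoff:
            stale.append(acct)
    return stale

if __name__ == "__main__":
    sample = [
        {"name": "Acme", "last_activity": "2025-05-20"},
        {"name": "Globex", "last_activity": None},
    ]
    for acct in stale_accounts(sample, today=datetime(2025, 6, 1)):
        print(acct["name"])
```

Dump the result to a static page, put it behind SSO, done.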
Examples that fail one of the tests, and therefore aren't week-tractable:
- "We need account managers to update lead scores from the dashboard." — fails test 2 (write-back into Salesforce). That's a month, not a week.
- "We need a tool that suggests next-best-actions in real time during sales calls." — fails test 3 (real-time). Month or more.
- "Show me which reps are giving discounts above policy, but make it look exactly like Salesforce so they don't have to leave their workflow." — fails test 4 (must live in the primary system). Either a real Salesforce build or no build at all.
Most "this should be possible" asks pass all four tests. Most teams treat them as if they don't.
What to ship in week-tractable cases
Ship the smallest thing that answers the question. Resist the urge to make it pretty. Resist the urge to make it reusable. Resist the urge to make it queryable by someone who isn't in the meeting.
My preferred stack for this category:
- Python script that hits the source-system API.
- Output to JSON or static HTML.
- Cloudflare Pages or a Worker for hosting.
- Cloudflare Access for auth (Google SSO, two clicks to set up).
Total cost: zero infrastructure dollars. Total build time: hours, not days. Total ongoing maintenance: refresh on a cron, ignore otherwise.
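The whole pipeline fits in one file. A minimal sketch — the endpoint, env var names, and field names are placeholders, not a real API:

```python
# Minimal version of the stack: pull rows from a source-system API,
# render a bare static HTML page, let a cron redeploy it.
# SOURCE_API_URL / SOURCE_API_TOKEN are hypothetical env vars.
import json
import os
import urllib.request

def fetch_rows(url, token):
    """Fetch a JSON array of row dicts from the source-system API."""
    req = urllib.request.Request(url, headers={"Authorization": f"Bearer {token}"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def render_html(title, rows):
    """Render a list of dicts as a bare-bones HTML table. No CSS on purpose."""
    if not rows:
        return f"<h1>{title}</h1><p>No rows.</p>"
    headers = list(rows[0])
    head = "".join(f"<th>{h}</th>" for h in headers)
    body = "".join(
        "<tr>" + "".join(f"<td>{r.get(h, '')}</td>" for h in headers) + "</tr>"
        for r in rows
    )
    return f"<h1>{title}</h1><table><tr>{head}</tr>{body}</table>"

if __name__ == "__main__":
    rows = fetch_rows(os.environ["SOURCE_API_URL"], os.environ["SOURCE_API_TOKEN"])
    with open("index.html", "w") as f:
        f.write(render_html("Weekly report", rows))
    # then e.g. `wrangler pages deploy .` on a cron, Cloudflare Access in front
```

That's the entire maintenance surface: one script, one cron, one hosted file.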
The thing nobody tells you about this stack: it scales further than you'd believe. Cloudflare's free tier serves more traffic than your operating team will ever generate. JSON files don't go down. Python scripts that ran fine last quarter still run fine this quarter unless an upstream API changed. The reliability concern most operators have about this category of work is mostly imaginary.
Bucket 2: tractable in a month
A month-tractable ask is one where at least one of the four tests above fails — but not in a way that requires inventing the system.
The most common failure modes:
- The user needs to act, not just see — write-back into a system of record (usually Salesforce or your LMS).
- Multiple stakeholders need different views of the same underlying data.
- Real-time matters (sub-minute latency).
- It needs to live inside the primary system, mimicking native UX.
- It involves a vendor integration that exists but isn't trivial.
These are real builds. They require a sponsor with budget. They require a project plan, even if a small one. They benefit from a kickoff meeting, intermediate check-ins, and a launch.
Examples I've shipped in roughly a month:
- The Stripe → Salesforce B2B pricebook integration with auto-link Flow. Required a real Salesforce Flow, a vendor package, and a careful schema decision. Couldn't have been done in a week.
- The custom chatbot pilot. Required a vendor (LiveChat), a backend integration (Magento + Moodle), an LLM vendor decision, and a real eval loop. Three weeks to first useful version.
- The Magenest order-sync repair. Required reverse-engineering a vendor integration, building a backfill recipe, and validating against historical data. Two weeks of focused work.
What to ship in month-tractable cases
Different from week-tractable in three ways:
- Documentation matters. If the integration breaks at 2am, someone needs to be able to read what you wrote and fix it.
- Tests matter. Not unit tests for everything, but at minimum: a known-good fixture and a check that it produces the expected output.
- Sponsor alignment matters. Get someone with budget to say "yes, this is worth a month" before you start. Otherwise you'll get to week three and discover the project doesn't have political air cover.
The biggest mistake operators make in this bucket: trying to ship like it's week-tractable. The cost of cutting corners on a month-tractable build is that you end up with a thing that mostly works, breaks at the worst possible moment, and erodes the trust that lets you ship the next thing. Slow down. Document. Test. Get a sponsor.
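The known-good-fixture test can be this small. A sketch — `normalize_order` is a hypothetical stand-in for whatever transform your integration actually performs:

```python
# Minimum viable test for a month-tractable build: one known-good
# fixture in, one expected output out. `normalize_order` is a
# hypothetical transform, not a real integration.
def normalize_order(raw):
    """Map a raw source-system order into the target schema."""
    return {
        "order_id": str(raw["id"]),
        "total_cents": int(round(float(raw["grand_total"]) * 100)),
        "email": raw["customer_email"].strip().lower(),
    }

def test_known_good_fixture():
    fixture = {"id": 1042, "grand_total": "19.99", "customer_email": " Ops@Example.com "}
    expected = {"order_id": "1042", "total_cents": 1999, "email": "ops@example.com"}
    assert normalize_order(fixture) == expected

if __name__ == "__main__":
    test_known_good_fixture()
    print("fixture check passed")
```

One fixture won't catch everything, but it catches the 2am failure mode that matters most: the transform silently producing different output than it did last quarter.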
Bucket 3: intractable
Intractable doesn't mean impossible. It means impossible-within-the-current-constraints. The current constraints are usually budget, vendor access, or data that simply doesn't exist yet.
Hallmarks of intractable asks:
- It requires changing the underlying object model of a system of record (Salesforce schema rewrite, LMS data model overhaul).
- It requires ML on data that doesn't exist yet — and won't exist for years.
- It requires a vendor integration with a vendor that has chosen not to open the relevant API.
- It assumes a behavior change in users that won't happen because the incentive structure doesn't reward it.
- The cost of doing it correctly exceeds the value of the answer by an order of magnitude.
Examples I've said no to:
- "Can we have one unified customer view across Salesforce, Magento, the LMS, and the support tool — fully real-time, mutable from any surface?" — Yes, technically. Cost: $400k and a year. Value: roughly $40k. No.
- "Build a model that predicts which prospects will churn before they buy." — The data doesn't exist. We don't have engagement signal on prospects. No.
- "Auto-generate the perfect outreach email for every lead." — The technology exists. The behavior change required (reps trusting the auto-generated draft enough to send it) doesn't, because the incentive is to differentiate. No.
The wedge: how to say no productively
Saying "no" without a wedge is a leadership failure. Saying "no, but here's a smaller version that solves 60% of it" is the highest-leverage move an operator can make.
For the unified-customer-view ask: the wedge was a static dashboard that showed the most-asked-about fields from each system, refreshed nightly, read-only. It solved 60% of the value at 5% of the cost.
For the prospect-churn model: the wedge was a much simpler heuristic — a "high-risk prospect" tag based on three observable behaviors that any rep could check. Not a model. A rule. Solved 30% of the value at 0.1% of the cost.
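The rule looked something like this — the three signals below are invented stand-ins for the real observable behaviors, not the actual criteria:

```python
# A rule, not a model. The three signals are hypothetical stand-ins
# for the observable behaviors the real "high-risk prospect" tag used.
def high_risk_prospect(p):
    signals = [
        p["days_since_last_login"] > 30,   # gone quiet
        p["opened_last_three_emails"] == 0,  # ignoring outreach
        p["seats_activated"] == 0,         # never started using the product
    ]
    return sum(signals) >= 2  # flag when at least two of three fire

print(high_risk_prospect(
    {"days_since_last_login": 45, "opened_last_three_emails": 0, "seats_activated": 3}
))  # two signals fire: flagged
```

Any rep can check three booleans; nobody can check a model's reasoning. That legibility is most of why the wedge got used.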
For the auto-generated email: the wedge was a Slack bot that drafted a first paragraph that the rep could optionally use. The rep stayed in control. Reps used it about 40% of the time. The remaining 60%, they were doing exactly what they would have done anyway.
The wedge is almost always the right answer. The wedge is almost always smaller than the original ask. The wedge is almost always the thing that gets shipped.
How to use this in practice
I keep a running list of "this should be possible" asks. Three columns: ask, bucket, note. The note explains why it landed in that bucket.
Once a quarter, I re-read the list. Three things happen:
- Some bucket-1 items I never shipped get an "I'll do this Friday" tag.
- Some bucket-2 items have changed shape — vendor APIs opened, sponsors materialized, the right data started flowing — and they move to bucket 1.
- Some bucket-3 items have moved to bucket 2 because the technology landscape changed. This last one is happening more frequently in the AI era than I'd have predicted three years ago.
The rebucketing is the underrated part of the framework. Things that were intractable in 2022 are tractable-in-a-month now. Things that were tractable-in-a-month in 2024 are tractable-in-a-week now. Operators who don't re-evaluate their buckets are still saying no to things they could ship in three days.
One more thing
The framework assumes the operator can write the SQL, build the Worker, deploy the dashboard. That used to require a developer. With Claude in the loop, it doesn't anymore. Which means the constraint on shipping bucket-1 work is no longer "can someone build this?" — it's "is someone willing to spend a Saturday on it?"
In most teams, the answer to that question is no. Which is the actual reason most "this should be possible" asks die. Not because they're impossible. Because nobody on the team has the toolkit and the willingness in the same person.
That's the role I play, fractional. It's also a role you can build in-house if you have someone willing.
Got a list of "this should be possible" asks that haven't moved in a year?
That's literally my engagement model. I sort the list, ship the bucket-1 items in the first month, and write up the bucket-3 ones with their wedges. No long roadmap, no Figma deck.
Book a 15-min intro call, or email me.