I turn noisy operating context into decisions a team can defend when the week gets real.
I co-founded ResiDesk because multifamily teams already hear the risk signals. The problem is that those signals arrive late and scattered, as calls, tickets, surveys, reviews, and inbox fragments.
Before that, I built outcome-based lending at Climb Credit and advisor-facing analytics at BlackRock. Cornell physics left me with a stubborn bias: measure the system before claiming you improved it.
The pattern has held across every job: the useful information is already there. It just reaches the accountable person too late, with too much context missing.
Building the context layer for resident operations.
Most of my attention is on ResiDesk: resident signal, operator workflows, AI-assisted response, and the internal systems that let a small team carry more context than its headcount should allow.
Looking for
Operators with messy resident context.
Teams whose calls, tickets, reviews, surveys, and inboxes already contain the answer, just not in a form anyone can use before the next escalation.
Writing about
Systems after the model.
AI products that still work after evaluation, handoffs, edge cases, and the boring moment when someone has to decide what happens next.
Not useful
Generic chatbot conversations.
If there is no workflow owner, no source of truth, and no decision that changes afterward, I am probably not the person for it.
Pulled from the way I talk about ResiDesk, BlackRock, AI agents, hiring, and customer context: make hidden information usable before claiming the system got smarter.
01 / Context
Start with the real job.
I solve problems far better when the business context is visible. Products behave the same way: show the workflow, the stakes, and the owner before choosing the tool.
02 / Adoption
Demos lie by omission.
A flashy prototype is not proof. The useful test is whether someone still reaches for it mid-work, with no audience, when the old spreadsheet is one click away.
03 / Human loop
Use people for judgment, not cleanup.
The human loop should add empathy, prioritization, and follow-through. If people only correct machine output, the system is wasting its best asset.
04 / Good faith
Assume every side wants the system to work.
Residents, property teams, owners, and internal teams usually have different constraints, not bad intent. Good product work makes the shared ground visible.
05 / Talent
Hire high-autonomy builders.
The best people can absorb more context than seems manageable, find the few facts that matter, and create value without waiting for a narrow mandate.
06 / Bandwidth
Make every first meeting a second meeting.
Capture institutional knowledge so live conversations can focus on judgment and ideas, not fact transfer. That is where cycle time shrinks.
Signal map
[MODULE 05]
From resident conversation to operating action
[STEP 01 / CAPTURE]
Hear the resident while the issue is still cheap to fix.
The raw material is already there: texts, tickets, calls, reviews, surveys, and the small comments that usually disappear when the immediate issue closes.
I grew up around research, so the default future looked academic. I studied applied physics at Cornell because it sat between the things I liked most: fundamental science, real systems, and problems where small details changed the outcome.
Software came in sideways. I was in an electron microscopy lab and wrote code to make a magnetic-noise setup faster. It saved hours of manual work immediately, which made the lesson hard to ignore: the right tool changes the shape of the job.
The industries changed. The job did not. At BlackRock it meant making institutional tools usable for advisors. At Climb Credit it meant underwriting against outcomes. At ResiDesk it means turning resident conversations into operating signal before small problems get expensive.
Physics
Software became the lever.
I wrote a tool in an electron microscopy lab to speed up magnetic-noise setup. It saved enough time, fast enough, that software stopped feeling abstract.
BlackRock
Real stakes sharpen the interface.
I came back six months later, re-interviewed, and moved to New York. It was where I learned how much interface quality matters when a user is making decisions with real money behind them.
Climb Credit
Outcomes changed the product.
Instead of asking who looked safest on paper, we asked what happened to earnings after the program. That reframed underwriting, product, and data, and took annual loan volume from $1 million to $300 million.
ResiDesk
The signal is already in the conversation.
Residents tell operators what matters every day. The work is translating those conversations into priority, timing, and ownership before the signal turns into churn.
The work has mostly been the same job in different settings: find the useful signal inside a messy process, then build the shortest responsible path from context to action.
Renewal signal
7%
Reported renewal and rent lift after moving resident feedback earlier in the operating cadence.
Climb Credit
$1M → $300M
Annual loan volume growth after treating student outcomes as product data, not marketing decoration.
Advisor workflow
$40M ARR
Advisor-facing analytics product taken from zero to $40 million ARR in year one.
We help multifamily operators understand what residents are telling them across renewals, rent, maintenance, and staffing. The product earns its keep when it changes the next action: owner, urgency, context, and follow-through.
We underwrote against a different question: not who looked safest on paper, but what happened to a graduate's earnings. That forced outcomes into the product's infrastructure.
The job was turning institutional infrastructure into something advisors could use with clients. Same information underneath, but packaged for the moment where a person had to explain, compare, and decide.
I write when the consensus feels too smooth. Most pieces circle back to AI, product adoption, and the gap between a model that can answer and a system a team can trust at work.
If a tool does not help someone reach a better decision sooner, with less context loss, it is decoration.
Understand the job first.
Without knowing what someone is actually trying to do, you are just moving interface chrome around.
Build the harness, not just the model.
The model is one part. Context, tools, guardrails, evaluation, and the handoff into someone's day decide whether it matters.
Demos lie by omission.
What matters is whether people still reach for it mid-work, mid-mess, with no audience and no demo to grade.
FAQ
[MODULE 11]
Questions that come up a lot
What AI systems do I actually build?
I build AI that fits into how people already work. At ResiDesk, that means turning resident conversations into operational context, and building copilots and agent systems that help teams decide earlier and follow through with less context loss.
What have I built before ResiDesk?
I worked at Climb Credit and BlackRock. At Climb, I helped grow annual loan volume from $1 million to $300 million. At BlackRock, I built an advisor-facing analytics product that reached $40 million ARR in its first year.
What do I usually write and speak about?
How AI products hold up in practice: agent workflows, LLM evaluation, product loops, and what separates a strong demo from something that works on a Tuesday afternoon. Recent talks start with resident conversations and end with what better operating systems should do next.
What is my view on AI?
I care less about whether something looks impressive and more about whether it helps someone make a better call. That usually means getting context right, measuring what matters, and keeping a human in the loop before automating the wrong thing.
I have invested in more than 100 startups and mentored through Techstars. I tend to back founders who are close to the problem, close to the customer, and honest about what they do not know yet.
That honesty matters more now because generic advice is free. What still counts is specific context, good judgment, and helping someone reach the next real decision faster.