I build software for places where the customer is talking, but the person who can fix the problem is too far away to hear it.
I co-founded ResiDesk because renting is one of the biggest checks most people write, and the experience is still weirdly bad. Residents are already telling buildings what is broken. The hard part is getting that truth back to the people who can do something with it.
Before that I worked on outcome-based lending at Climb Credit and advisor tools at BlackRock. I came out of applied physics with the same bias I still have: give me the context, the measurement, and the actual job.
I do not think of this as AI for real estate. I think of it as business 101: talk to your customer, understand what is actually happening, and make sure someone owns the next step.
Now
Building ResiDesk so a building can talk to residents every day, understand what is actually happening, and get the right issue to the right person without making someone read every text by hand.
Before
Product and engineering at Climb Credit and BlackRock. Different industries, same lesson: the work gets better when the person making the decision has the right context.
Start here
Start with Work if you want the arc, Proof if you want receipts, and Talks if you want to hear the argument in my own words.
What I care about
Tools that make a real job easier without pretending the people doing the job should disappear.
Making rental properties better at hearing residents.
Most of my attention is on ResiDesk. We text with residents, understand the property context behind the message, help the team answer well, and show owners what people actually want from the building.
Looking for
Owners and operators who want the ground truth.
The best conversations start with teams who already know residents are telling them something important. They just cannot read every text, review, ticket, and survey by hand.
Writing about
What happens after the answer.
AI is interesting when it changes the next task: who handles it, what context they have, how fast they can move, and whether the resident has to explain the same problem again.
Not useful
AI that only answers.
If nobody owns the next step, nobody trusts the answer, and nothing changes afterward, I do not care how good the demo looked.
The way I think about it is pretty simple: talk to the customer, make the work visible, use AI to take the boring load off the team, and never confuse a good demo with something people trust.
01 / Customer
Talk to the customer.
That is still business 101. In housing, the hard part is hearing enough residents without forcing a person to read everything by hand, then getting the pattern back to the people who can change the building.
02 / Context
Show the actual job.
I do not think well in abstractions for their own sake. Give me the actual work, the stakes, the strange edge cases, and the person who has to live with the outcome.
03 / Adoption
Demos are not adoption.
I learned this early at BlackRock. A prototype can win the room and still lose to the spreadsheet the next morning. The demo is not the job.
04 / AI
Shorten the commute.
I do not need AI to do the whole job. I need it to move a task from stuck to nearly finished, while the person still owns the judgment.
05 / Trust
Keep people where trust matters.
Residents trust the product because there is a human team behind it. AI should make that team faster, better informed, and less buried.
06 / Team
Hire people who can carry context.
The best builders can take in a messy situation, find the few facts that matter, and move without waiting for a narrow script.
Story map
[MODULE 05]
From resident text to better building
[STEP 01 / LISTEN]
Start with what the resident actually said.
A useful system starts with ordinary stuff: the broken washer, the pet policy question, the Wi-Fi complaint, the package-room mess, the reason someone may not renew.
I grew up around research, so the default path looked academic. I studied applied physics at Cornell because I liked real systems, messy measurement, and problems where small details changed the answer.
Software came in sideways. I was in an electron microscopy lab and wrote code to make a magnetic-noise setup faster. It saved hours of manual work almost immediately. That was the moment software stopped feeling abstract.
The industries changed. The job did not. At BlackRock it meant making institutional tools usable for advisors. At Climb Credit it meant underwriting against outcomes. At ResiDesk it means helping housing companies hear residents clearly enough to make better decisions.
Physics
Software became the lever.
I wrote a tool in an electron microscopy lab to speed up a magnetic-noise setup. It saved enough time, fast enough, that I started taking software seriously.
BlackRock
Real stakes sharpen the interface.
I came back six months later, re-interviewed, and moved to New York. It taught me how much interface quality matters when someone is making a decision with real money behind it.
Climb Credit
Outcomes changed the product.
Instead of asking who looked safest on paper, we asked what happened to earnings after the program. That reframed underwriting, product, and data, and took annual loan volume from $1 million to $300 million.
ResiDesk
Housing should know its customer.
Residents tell buildings what is working and what is not every day. The work is making that legible to the owner, useful to the operator, and better for the person living there.
The job has mostly been the same in different settings: find what the customer is actually saying inside a messy process, then build the shortest responsible path from context to action.
Resident experience
7%
Reported renewal and rent lift when resident feedback moved earlier into the work.
Climb Credit
$1M → $300M
Annual loan volume growth after treating student outcomes as product data, not a marketing story.
Advisor workflow
$40M ARR
Advisor-facing analytics product taken from zero to $40 million ARR in the first year.
We help rental-property owners and operators understand what residents are asking for across renewals, rent, maintenance, and staffing. The product earns its keep when it changes what happens next: who owns it, how urgent it is, what context matters, and whether it actually gets done.
We underwrote against a different question: not who looked safest on paper, but what happened to a graduate's earnings. That forced outcomes into the infrastructure.
The job was turning institutional infrastructure into something advisors could use with clients. Same information underneath, but built for the moment when a person had to explain, compare, and decide.
I write when the consensus feels too smooth. Most pieces come back to the same question: does this help someone do the real job, or does it just make the demo cleaner?
If a tool does not help someone finish a real task sooner, with less context loss, it is decoration.
Understand the job first.
If you do not know what someone is actually trying to do, you are just rearranging the screen.
Build the harness, not just the model.
The model is one part. Context, tools, guardrails, evaluation, and the handoff into someone's day decide whether it matters.
Demos lie by omission.
What matters is whether people still reach for it mid-work, mid-mess, with no audience and no demo to grade.
FAQ
[MODULE 11]
Questions that come up a lot
What AI systems do I actually build?
I build AI that fits into how people already work. At ResiDesk, that means helping property teams answer residents, understand what is really happening in the building, and get the right work to the right person.
What have I built before ResiDesk?
I worked at Climb Credit and BlackRock. At Climb, I helped grow annual loan volume from $1 million to $300 million. At BlackRock, I built an advisor-facing analytics product that reached $40 million ARR in its first year.
What do I usually write and speak about?
I usually come back to the same things: agents, evaluation, product loops, and what separates a strong demo from something that still works on a Tuesday afternoon. Resident conversations are usually the easiest place to explain it.
What is my view on AI?
I care less about whether something looks impressive and more about whether it helps someone make a better call. That usually means getting context right, measuring what matters, and keeping a human close enough to stop the system from automating the wrong thing.
I have invested in more than 100 startups and mentored through Techstars. I tend to back founders who are close to the problem, close to the customer, and honest about what they do not know yet.
That honesty matters more now because generic advice is free. What still counts is specific context, good judgment, and helping someone get to the next real decision faster.