I build software for the gap between what customers keep saying and what operators actually get to fix.
I co-founded ResiDesk because renting is one of the biggest checks most people write, and the experience is still weirdly bad. Residents already tell buildings what is broken. The work is getting those signals to the person who can change the building.
Before that I worked on outcome-based lending at Climb Credit and advisor tools at BlackRock. The pattern did not really change: put the real context in front of the person making the call.
I do not think of this as AI for real estate. It is business 101 with better machinery: talk to the customer, understand what is actually happening, and make sure someone owns the next step.
Building ResiDesk so a building can talk to residents every day, understand what is actually happening, and get the right issue to the right person without making someone read every text by hand.
Before
Product and engineering at Climb Credit and BlackRock. Different industries, same lesson: the work gets better when the person making the decision has the right context.
Start here
Start with Work for the arc, Proof for receipts, and Talks if you want the less-polished version in my own words.
What I care about
Tools that make the real job easier without pretending the people doing the job should disappear.
Making rental properties better at hearing residents.
Most of my attention is on ResiDesk. We help teams text with residents, understand the property context behind each message, answer well, and show owners what keeps coming up before it becomes expensive.
Looking for
Owners and operators who want the ground truth.
The best conversations start with teams who already know residents are telling them something important. The problem is not caring. It is reading every text, review, ticket, and survey by hand.
Writing about
What happens after the answer.
AI is interesting when it changes the next task: who handles it, what they know, how fast they can move, and whether the resident has to explain the same problem again.
Not useful
AI that only answers.
If nobody owns the next step, nobody trusts the answer, and nothing changes afterward, the demo did not buy much.
This is the operating loop I keep coming back to: talk to the customer, make the work visible, take the boring load off the team, and do not confuse a clean demo with something people trust.
01 / Customer
Talk to the customer.
That is still business 101. In housing, the hard part is hearing enough residents without making a person read everything by hand, then getting the pattern back to the people who can change the building.
02 / Context
Show the actual job.
I do not think well in abstractions for their own sake. Give me the actual work, the stakes, the weird edge cases, and the person who has to live with the outcome.
03 / Adoption
Demos are not adoption.
I learned this early at BlackRock. A prototype can win the room and still lose to the spreadsheet the next morning. The room is not the test.
04 / AI
Shorten the commute.
I do not need AI to do the whole job. I need it to move a task from stuck to almost done, while the person still owns the judgment.
05 / Trust
Keep people where trust matters.
Residents trust the product because there is a human team behind it. AI should make that team faster, better informed, and harder to bury.
06 / Team
Hire people who can carry context.
The best builders can take in a messy situation, find the few facts that matter, and move without waiting for a perfect script.
Story map
[MODULE 05]
From text message to building decision
[STEP 01 / LISTEN]
Start with what the resident actually said.
A useful system starts with ordinary stuff: the broken washer, the pet-policy question, the Wi-Fi complaint, the package-room mess, the reason someone may not renew.
I grew up around research, so the default path looked academic. I studied applied physics at Cornell because I liked real systems, messy measurement, and problems where small details changed the answer.
Software came in sideways. I was in an electron microscopy lab and wrote code to make a magnetic-noise setup faster. It saved hours of manual work almost immediately. That was the moment software stopped feeling like a separate thing.
The industries changed. The job did not. At BlackRock it meant making institutional tools usable for advisors. At Climb Credit it meant underwriting against outcomes. At ResiDesk it means helping housing companies hear residents clearly enough to act.
Physics
Software became the lever.
I wrote a tool in an electron microscopy lab to speed up a magnetic-noise setup. It saved enough time, fast enough, that software started to look like leverage instead of coursework.
BlackRock
Real stakes sharpen the interface.
I came back six months later, re-interviewed, and moved to New York. It taught me that interface quality matters far more when real money sits behind the decision.
Climb Credit
Outcomes changed the product.
Instead of asking who looked safest on paper, we asked what happened to earnings after the program. That pushed outcomes into underwriting, product, and data, and took annual loan volume from $1 million to $300 million.
ResiDesk
Housing should know its customer.
Residents tell buildings what is working and what is not every day. The work is making that legible to the owner, useful to the operator, and less annoying for the person living there.
The job has mostly been the same in different settings: find what the customer is actually saying inside a messy process, then build the shortest responsible path to a decision.
Resident experience
7%
Reported lift in renewals and rent when resident feedback reached decisions earlier.
Climb Credit
$1M → $300M
Annual loan volume growth after treating student outcomes as product data, not a brochure line.
Advisor workflow
$40M ARR
Advisor-facing analytics product taken from zero to $40 million ARR in its first year.
We help rental-property owners and operators understand what residents are asking for across renewals, rent, maintenance, and staffing. The product earns its keep when it changes what happens next: who owns it, how urgent it is, what context matters, and whether the work actually gets done.
We underwrote against a different question: not who looked safest on paper, but what happened to a graduate's earnings. That forced outcomes into the infrastructure.
The job was turning institutional infrastructure into something advisors could use with clients. Same information underneath, but built for the moment when a person had to explain, compare, and decide.
I write when the consensus feels too smooth. Most pieces come back to the same question: does this help someone do the real job, or did we just make the demo easier to sell?
If a tool does not help someone finish a real task sooner, with less context loss, it is probably decoration.
Understand the job first.
If you do not know what someone is actually trying to do, you are probably just rearranging the screen.
Build the harness, not just the model.
The model is one part. Context, tools, guardrails, evaluation, and the handoff into someone's day decide whether it matters.
Demos lie by omission.
What matters is whether people still reach for it mid-work, mid-mess, with no audience and no demo to grade.
FAQ
[MODULE 11]
Questions that come up a lot
What AI systems do I actually build?
I build AI that fits into how people already work. At ResiDesk, that means helping property teams answer residents, understand what is really happening in the building, and get the right work to the person who can move it.
What have I built before ResiDesk?
I worked at Climb Credit and BlackRock. At Climb, I helped grow annual loan volume from $1 million to $300 million. At BlackRock, I built a retail analytics product that reached $40 million ARR in its first year.
What do I usually write and speak about?
I usually come back to the same things: agents, evaluation, product loops, and what separates a strong demo from something that still works on a Tuesday afternoon. Housing makes the point concrete because the customer is already talking.
What is my view on AI?
I care less about whether something looks impressive and more about whether it helps someone make a better call. That usually means getting context right, measuring what matters, and keeping a human close enough to stop the system from automating the wrong thing.
I have invested in more than 100 startups and mentored through Techstars. I tend to back founders who are close to the problem, close to the customer, and honest about what they do not know yet.
That honesty matters more now because generic advice is free. What still counts is specific context, good judgment, and helping someone get to the next real decision faster.
Runs in the browser · No server required · Falls back cleanly
01 / Ask this site
Checking browser AI
Ask a question. Get an answer from this page.
The answer is grounded in the site copy, talks, writing, and proof links. If your browser exposes local AI, the tool can try that too. Otherwise it uses a small local retrieval engine.
Try asking about ResiDesk, housing, demos, BlackRock, Climb, writing, or AI.
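The fallback logic above can be sketched in a few lines. This is a minimal illustration, not the production code: `window.ai` stands in for whatever local-AI handle a browser might expose (an assumption, not a real stable API), and the retriever is a toy keyword-overlap scorer over a few hypothetical lines of site copy.

```javascript
// Hypothetical site passages the answer is grounded in.
const SITE_COPY = [
  "ResiDesk helps property teams text with residents and route issues.",
  "At Climb Credit, annual loan volume grew from $1 million to $300 million.",
  "Demos are not adoption; the room is not the test.",
];

// Tiny local retrieval engine: score each passage by shared words with the question.
function retrieve(question, passages = SITE_COPY) {
  const qWords = new Set(question.toLowerCase().match(/\w+/g) ?? []);
  let best = { passage: null, score: 0 };
  for (const passage of passages) {
    const pWords = passage.toLowerCase().match(/\w+/g) ?? [];
    const score = pWords.filter((w) => qWords.has(w)).length;
    if (score > best.score) best = { passage, score };
  }
  return best.passage;
}

// Prefer browser-local AI when present; otherwise fall back cleanly to retrieval.
async function ask(question) {
  const localAI = globalThis.window?.ai; // hypothetical browser AI handle
  if (localAI?.prompt) {
    try {
      return await localAI.prompt(question); // try the local model first
    } catch {
      // any failure falls through to the retrieval path below
    }
  }
  return retrieve(question);
}
```

The point of the shape is the last function: the local model is an opportunistic upgrade, and the page still answers with no server and no model at all.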
02 / Conversation map
Start with a real question.
03 / Reading guide
Build a path through the site.
04 / Transcript lens
Pull the useful parts out of a talk.
05 / Useful AI test
Paste an AI idea. See if it earns its keep.
06 / Resident signal simulator
Generate a small building signal map.
07 / Throughline highlighter
Show the pattern inside the page.
Highlights language about customer truth, context, measurement, follow-through, trust, and the anti-demo point.
08 / Design critique
Grade the site like a product.
I borrowed one useful habit from Open Design: score the work before calling it done. This runs that loop on the page itself: direction, hierarchy, detail, function, and whether anything here is actually worth remembering.
09 / Tweak brief
Pick the next improvement.
Choose the pressure you care about and get a concrete pass to make next.