I build software for the gap between what customers are already saying and what a team can actually fix.
I co-founded ResiDesk because renting is one of the biggest checks most people write, and the experience is still weirdly bad. Residents already tell buildings what is broken. The work is getting those messages to the person who can change the building.
Before that I worked on outcome-based lending at Climb Credit and advisor tools at BlackRock. The setting changed, but the problem was familiar: get the useful information to the person making the decision.
I do not think of this as AI for real estate. It is business 101 with better machinery: talk to the customer, understand what is actually happening, and make sure someone owns the next step.
Building ResiDesk so a building can talk to residents every day, understand what is actually happening, and get the right issue to the right person without making someone read every text by hand.
Before
Product and engineering at Climb Credit and BlackRock. Different industries, same lesson: if the person making the decision cannot see the real context, the tool is not doing enough.
Start here
Start with Work if you want the career path, Public if you want outside links, and Talks if you want to hear the less-polished version in my own words.
What I care about
Tools that make the real job easier without pretending the people doing the job should disappear.
Current focus
Housing software
Resident conversations
[UPDATED 2026-05-07]
Helping buildings hear what residents are already saying.
Most of my attention is on ResiDesk. We help teams text with residents, understand the building context behind each message, answer well, and show owners the pattern before it turns into a bigger problem.
Looking for
Owners and operators who know the inbox is telling them something.
The best conversations start with teams who already care. The problem is not motivation. It is reading every text, review, ticket, and survey by hand, then turning it into work someone can own.
Writing about
What happens after the answer.
AI is interesting when it changes the next task: who handles it, what they know, how fast they can move, and whether the resident has to explain the same problem again.
Not useful
AI that only answers.
If nobody owns the next step, nobody trusts the answer, and nothing changes afterward, the demo did not buy much.
When I am deciding whether a tool is worth building, I come back to the same loop: talk to the customer, make the work visible, take the boring load off the team, and do not confuse a clean demo with something people use the next morning.
01 / Customer
Talk to the customer.
That is still business 101. In housing, the hard part is hearing enough residents without making a person read everything by hand, then getting the pattern back to the people who can change the building.
02 / Context
Show the actual job.
I do not think well in abstractions for their own sake. Give me the actual work, the stakes, the weird edge cases, and the person who has to live with the outcome.
03 / Adoption
Demos are not adoption.
I learned this early at BlackRock. A prototype can win the room and still lose to the spreadsheet the next morning. The room is not the test.
04 / AI
Move the task forward.
I do not need AI to do the whole job. I need it to move a task from stuck to almost done, while the person still owns the judgment.
05 / Trust
Do not automate the judgment away.
Residents trust the product because there is a human team behind it. AI should make that team faster, better informed, and less buried.
06 / Team
Hire people who can carry context.
The best builders can take in a messy situation, find the few facts that matter, and move without waiting for a perfect script.
Resident messages
[MODULE 05]
How a resident message becomes a building decision
[STEP 01 / LISTEN]
Start with what the resident actually said.
A useful system starts with ordinary stuff: the broken washer, the pet-policy question, the Wi-Fi complaint, the package-room mess, the reason someone may not renew.
I grew up around research, so the default path looked academic. I studied applied physics at Cornell because I liked real systems, messy measurement, and problems where small details changed the answer.
Software came in sideways. I was in an electron microscopy lab and wrote code to make a magnetic-noise setup faster. It saved hours of manual work almost immediately. That was the moment software stopped feeling like a separate thing.
The industries changed. The job did not. At BlackRock it meant making institutional tools usable for advisors. At Climb Credit it meant underwriting against outcomes. At ResiDesk it means helping housing companies hear residents clearly enough to act.
Physics
Software became the lever.
I wrote a tool in an electron microscopy lab to speed up magnetic-noise setup. It saved enough time, fast enough, that software started to look like leverage instead of coursework.
BlackRock
Real stakes sharpen the interface.
I came back six months later, re-interviewed, and moved to New York. It taught me that interface quality matters a lot more when someone is using the tool with real money behind the decision.
Climb Credit
Outcomes changed the product.
Instead of asking who looked safest on paper, we asked what happened to earnings after the program. That pushed outcomes into underwriting, product, and data, and took annual loan volume from $1 million to $300 million.
ResiDesk
Housing should know its customer.
Residents tell buildings what is working and what is not every day. The work is making that legible to the owner, useful to the operator, and less annoying for the person living there.
The job has mostly been the same in different settings: find what the customer is actually saying inside a messy process, then build the shortest responsible path to a decision.
Resident experience
7%
Reported renewal and rent lift when resident feedback reached decisions earlier.
Climb Credit
$1M → $300M
Annual loan volume growth after treating student outcomes as product data, not a brochure line.
Advisor tools
$40M ARR
Advisor-facing analytics product taken from zero to $40 million ARR in its first year.
We help rental-property owners and operators understand what residents are asking for across renewals, rent, maintenance, and staffing. The product earns its keep when it changes what happens next: who owns it, what context matters, and whether the work actually gets done.
We underwrote against a different question: not who looked safest on paper, but what happened to a graduate's earnings. That forced outcomes into the infrastructure.
The job was turning institutional infrastructure into something advisors could use with clients. Same information underneath, but built for the moment when a person had to explain, compare, and decide.
I write when the consensus feels too smooth. Most pieces come back to the same question: does this help someone finish the real job, or did we just make the demo easier to sell?
If a tool does not help someone finish a real task sooner, with less context loss, I have a hard time caring about it.
Understand the job first.
If you do not know what someone is actually trying to do, you are probably just rearranging the screen.
Build the harness, not just the model.
The model is one part. Context, tools, guardrails, evaluation, and the handoff into someone's day decide whether it matters.
Demos lie by omission.
What matters is whether people still reach for it mid-work, mid-mess, with no audience and no demo to grade.
FAQ
[MODULE 11]
Questions people actually ask
What AI systems do I actually build?
I build AI around real work. At ResiDesk, that means helping property teams answer residents, understand what is happening in the building, and get the right issue to the person who can move it.
What have I built before ResiDesk?
I worked at Climb Credit and BlackRock. At Climb, I helped grow annual loan volume from $1 million to $300 million. At BlackRock, I built a retail analytics product that reached $40 million ARR in its first year.
What do I usually write and speak about?
I usually come back to the same things: agents, evaluation, product loops, and what separates a strong demo from something people still use on a Tuesday afternoon. Housing makes the point concrete because the customer is already talking.
What is my view on AI?
I care less about whether something looks impressive and more about whether it helps someone make a better call. That usually means getting the context right, testing what good looks like, and keeping a human close enough to stop the system from automating the wrong thing.
I have invested in more than 100 startups and mentored through Techstars. I tend to back founders who are close to the problem, close to the customer, and honest about what they do not know yet.
Generic advice is free now. The useful version is specific: here is the customer, here is the constraint, here is the ask, here is the next decision.
Runs in the browser
No server required
Falls back cleanly
01 / Ask this site
Ask a question. Get an answer from this page.
The answer uses the copy, talks, writing, and links on this page. If your browser exposes local AI, the tool can try that too. Otherwise it uses a small local retrieval engine.
Try asking about ResiDesk, housing, demos, BlackRock, Climb, writing, or AI.
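The detect-then-fall-back behavior described above can be sketched in a few lines. This is a minimal illustration, not the site's actual code: the `env.ai.prompt` shape stands in for whatever local-AI API a browser might expose (names vary and are hypothetical here), and the fallback is a deliberately tiny word-overlap retriever over the page's sections.

```javascript
// Sketch of "use browser AI if exposed, otherwise local retrieval".
// `env` is whatever global the page runs in; `env.ai.prompt` is a
// hypothetical stand-in for a browser-provided local model API.
function chooseAnswerEngine(env) {
  if (env && env.ai && typeof env.ai.prompt === "function") {
    // Duck-typed detection: only trust the API if the call we need exists.
    return { name: "browser-ai", ask: (q) => env.ai.prompt(q) };
  }
  // Fallback: score each page section by how many question words it contains,
  // and return the best-matching section as the answer.
  return {
    name: "local-retrieval",
    ask: (question, sections) => {
      const words = question.toLowerCase().split(/\W+/).filter(Boolean);
      let best = sections[0];
      let bestScore = -1;
      for (const s of sections) {
        const lower = s.toLowerCase();
        const score = words.filter((w) => lower.includes(w)).length;
        if (score > bestScore) {
          bestScore = score;
          best = s;
        }
      }
      return best;
    },
  };
}
```

The point of the pattern is that the retrieval path is the default, so the page keeps working in any browser; the local-AI path is an upgrade the code only takes when detection succeeds.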
02 / Conversation map
Start with a real question.
03 / Reading guide
Build a path through the site.
04 / Transcript lens
Pull the useful parts out of a talk.
05 / Useful AI test
Paste an AI idea. See if it earns its keep.
06 / Resident messages
Generate a small building readout.
07 / Pattern highlighter
Show the pattern inside the page.
Highlights the words I keep coming back to: customer, context, measurement, follow-through, trust, and demo.
08 / Design check
Check whether the page is doing its job.
I like making the design explain itself before I call it done. This checks the page on direction, hierarchy, detail, function, and whether it actually helps a reader.
09 / Next pass
Pick the next thing to fix.
Pick the part that feels weakest and get a concrete pass to make next.