Search sections, tools, talks, links, and local actions.
Co-founder at ResiDesk
[SURFACE 01]
Turn customer truth into actual work
Arjun Kannan
I build software for the space between what customers keep telling you and what your team can actually fix.
I co-founded ResiDesk because rent is one of the biggest checks most people write, and living in a building still feels weirdly opaque. Residents already say what is broken. The hard part is getting that truth to the person who can fix the building.
Before that I worked on outcome-based lending at Climb Credit and advisor tools at BlackRock. Different rooms, same basic job: get the facts to the person who has to make the call.
I do not think of this as AI for real estate. It is business 101 with better machinery: talk to the customer, understand the situation, and make sure the next step has an owner.
Building ResiDesk so property teams can talk to residents every day, see what is actually happening, and route the right issue without asking one person to read every text by hand.
Before
Product and engineering at Climb Credit and BlackRock. Different industries, same lesson: if the person making the decision cannot see the context, the tool is not doing enough.
Start here
Start with Work for the path, Public for sources, and Talks if you want the less-polished version in my own words.
What I care about
Tools that make the job easier without pretending the people doing the job should disappear.
Current focus · Housing software · Resident conversations
[UPDATED 2026-05-07]
Helping property teams hear residents before the pattern gets expensive.
Most of my attention is on ResiDesk. We help teams text with residents, understand the building context behind each message, answer well, and show owners the pattern while there is still time to do something about it.
Looking for
Owners and operators who know the inbox is trying to tell them something.
The best conversations start with teams who already care. The problem is volume. Nobody can read every text, review, ticket, and survey by hand and still turn the pattern into owned work.
Writing about
What happens after the answer shows up.
AI gets interesting when it changes the next task: who handles it, what they know, how fast they can move, and whether the resident has to explain the same problem again.
Not useful
AI that only answers.
If nobody owns the next step, nobody trusts the answer, and nothing changes afterward, the demo did not do much.
When I am deciding whether a tool is worth building, I come back to the same loop: talk to the customer, make the work visible, take the boring load off the team, and do not mistake a clean demo for something people use the next morning.
01 / Customer
Talk to the customer.
That is still business 101. In housing, the hard part is hearing enough residents without making one person read everything by hand, then getting the pattern back to the people who can change the building.
02 / Context
Show me the actual job.
I do not think well in abstractions for their own sake. Give me the work, the stakes, the weird edge cases, and the person who has to live with the outcome.
03 / Adoption
Demos are not adoption.
I learned this early at BlackRock. A prototype can win the room and still lose to the spreadsheet the next morning. Applause is not the test.
04 / AI
Move the task forward.
I do not need AI to do the whole job. I need it to move a task from stuck to almost done, while the person still owns the call.
05 / Trust
Keep judgment where trust matters.
Residents trust the product because there is a human team behind it. AI should make that team faster, better informed, and less buried.
06 / Team
Hire people who can carry context.
The best builders can walk into a messy situation, find the few facts that matter, and move without waiting for a perfect script.
Resident messages
[MODULE 05]
How a resident text turns into building work
[STEP 01 / LISTEN]
Start with what the resident actually said.
A useful system starts with ordinary stuff: the broken washer, the pet-policy question, the Wi-Fi complaint, the package-room mess, the first sign someone may not renew.
I grew up around research, so the default path looked academic. I studied applied physics at Cornell because I liked real systems, messy measurement, and problems where small details could change the answer.
Software came in sideways. I was in an electron microscopy lab and wrote code to make a magnetic-noise setup faster. It saved hours almost immediately. That was the moment software stopped feeling separate from the real work.
The industries changed. The job did not. At BlackRock, it meant making institutional tools usable for advisors. At Climb Credit, it meant underwriting against outcomes. At ResiDesk, it means helping housing companies hear residents clearly enough to act.
Physics
Software became the lever.
I wrote a tool in an electron microscopy lab to speed up a magnetic-noise setup. It saved enough time, fast enough, that software started to look like leverage instead of coursework.
BlackRock
Real stakes sharpen the interface.
I came back to BlackRock six months later, re-interviewed, and moved to New York. It taught me that interface quality matters a lot more when real money sits behind the decision.
Climb Credit
Outcomes changed the product.
Instead of asking who looked safest on paper, we asked what happened to earnings after the program. That pushed outcomes into underwriting, product, and data, and annual loan volume went from $1 million to $300 million.
ResiDesk
Housing should know its customer.
Residents tell buildings what is working and what is not every day. The work is making that clear to the owner, useful to the operator, and less annoying for the person living there.
The job has mostly been the same in different settings: find what the customer is actually saying inside a messy process, then build the shortest responsible path to a real decision.
Resident experience
7%
Reported renewal and rent lift when resident feedback got into decisions earlier.
Climb Credit
$1M → $300M
Annual loan volume growth after treating student outcomes as product data, not marketing copy.
Advisor tools
$40M ARR
Advisor-facing analytics product taken from zero to $40 million ARR in its first year.
We help rental-property owners and operators understand what residents are asking for across renewals, rent, maintenance, and staffing. The product earns its keep when it changes what happens next: who owns it, what context matters, and whether the work gets done.
We underwrote against a different question: not who looked safest on paper, but what happened to a graduate's earnings. That forced outcomes into the product, data, and underwriting.
The job was turning institutional infrastructure into something advisors could use with clients. Same information underneath, built for the moment when someone had to explain, compare, and decide.
I write when the consensus feels too smooth. Most pieces come back to the same question: does this help someone finish the real job, or did we just make the demo easier to sell?
If a tool does not help someone finish a real task sooner, with less context loss, I have a hard time caring about it.
Understand the job first.
If you do not know what someone is actually trying to do, you are probably just rearranging the screen.
Build the harness, not just the model.
The model is one part. Context, tools, guardrails, evaluation, and the handoff into someone's day decide whether it matters.
Demos lie by omission.
What matters is whether people still reach for it mid-work, mid-mess, with no audience and no demo to grade.
FAQ
[MODULE 11]
Questions people actually ask
What AI systems do I actually build?
I build AI around real work. At ResiDesk, that means helping property teams answer residents, understand what is happening in the building, and get the right issue to the person who can move it.
What have I built before ResiDesk?
I worked at Climb Credit and BlackRock. At Climb, I helped grow annual loan volume from $1 million to $300 million. At BlackRock, I built a retail analytics product that reached $40 million ARR in its first year.
What do I usually write and speak about?
I usually come back to the same things: agents, evaluation, product loops, and what separates a strong demo from something people still use on a Tuesday afternoon. Housing makes the point concrete because the customer is already talking.
What is my view on AI?
I care less about whether something looks impressive and more about whether it helps someone make a better call. That usually means getting the context right, testing what good looks like, and keeping a human close enough to stop the system from automating the wrong thing.
I have invested in more than 100 startups and mentored through Techstars. I tend to back founders who are close to the problem, close to the customer, and honest about what they do not know yet.
Generic advice is free now. The useful version is specific: here is the customer, here is the constraint, here is the ask, here is the next decision.
Runs in the browser · No server required · Falls back cleanly
01 / Browser AI visual lab
Local visuals ready
Turn the site into working pictures.
Pick a view. The graphic is deterministic and runs anywhere. If the browser has a local model, it can add a sharper read on what the picture means.
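The "deterministic graphic, optional model" split above is a progressive-enhancement check. A minimal sketch, assuming a browser-provided `LanguageModel` global as a stand-in for whatever built-in model API the browser actually ships:

```javascript
// Sketch of the progressive-enhancement check: the graphic never
// depends on a model, but if the browser exposes one we can ask it
// for a short caption. "LanguageModel" is an assumed stand-in for
// the browser's built-in model API, not a guaranteed name.
async function describeView(canvasSummary) {
  const Model = globalThis.LanguageModel; // assumed browser-provided global
  if (!Model || typeof Model.create !== "function") {
    return null; // no local model: the deterministic graphic stands alone
  }
  const session = await Model.create();
  return session.prompt(`In one sentence, explain this chart: ${canvasSummary}`);
}
```

Because the function returns `null` rather than throwing when no model exists, the page can render the same graphic everywhere and only add the caption where it can.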
02 / Ask this site
Checking browser AI
Ask a question. Get an answer from this page.
The answer uses the copy, talks, writing, and links on this page. If your browser exposes local AI, the tool can try that too. Otherwise it uses a small local retrieval engine.
Try asking about ResiDesk, housing, demos, BlackRock, Climb, writing, or AI.
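The "small local retrieval engine" fallback can be sketched as keyword overlap between the question and each section of page copy. The section ids and text below are illustrative, not the real site data:

```javascript
// Minimal sketch of the fallback path: no server, no model, just
// keyword overlap between a question and each section of page copy.
// SECTIONS is illustrative placeholder data, not the real site index.
const SECTIONS = [
  { id: "residesk", text: "ResiDesk helps property teams text with residents and route issues" },
  { id: "blackrock", text: "Advisor analytics product at BlackRock reached forty million ARR" },
  { id: "writing", text: "Writing about agents, evaluation, demos, and product loops" },
];

// Tokenize into lowercase words, dropping punctuation.
function tokens(s) {
  return s.toLowerCase().match(/[a-z]+/g) || [];
}

// Score a section by how many distinct question words it contains.
function score(question, section) {
  const q = new Set(tokens(question));
  const t = new Set(tokens(section.text));
  let hits = 0;
  for (const w of q) if (t.has(w)) hits += 1;
  return hits;
}

// Return the best-matching section, or null if nothing overlaps.
function answerFromPage(question, sections) {
  let best = null;
  let bestScore = 0;
  for (const s of sections) {
    const sc = score(question, s);
    if (sc > bestScore) { best = s; bestScore = sc; }
  }
  return best;
}
```

When the browser does expose a local model, the same retrieved section can be handed to it as context instead of being returned verbatim.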
03 / Conversation map
Start with a real question.
04 / Reading guide
Build a path through the site.
05 / Transcript lens
Pull the useful parts out of a talk.
06 / Useful AI test
Paste an AI idea. See if it earns its keep.
07 / Resident messages
Generate a small building readout.
08 / Pattern highlighter
Show the pattern inside the page.
Highlights the words I keep coming back to: customer, context, measurement, follow-through, trust, and demo.
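The highlighter above can be sketched as one whole-word regex built from that theme list; wrapping matches in `<mark>` is an assumption about the markup, not the site's actual implementation:

```javascript
// Sketch of the pattern highlighter: wrap each recurring theme word
// from the page copy in a <mark> tag. The word list comes from the
// section above; the <mark> markup choice is an assumption.
const THEMES = ["customer", "context", "measurement", "follow-through", "trust", "demo"];

function highlight(text) {
  // One case-insensitive, whole-word pattern over all theme words.
  const escaped = THEMES.map((w) => w.replace(/-/g, "\\-"));
  const re = new RegExp(`\\b(${escaped.join("|")})\\b`, "gi");
  return text.replace(re, "<mark>$1</mark>");
}
```

The `\b` boundaries keep "demo" from lighting up inside "demos", so the highlight tracks the exact words rather than every near-match.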
09 / Design check
Check whether the page is doing its job.
I like making the design explain itself before I call it done. This checks the page on direction, hierarchy, detail, function, and whether it actually helps a reader.
10 / Next pass
Pick the next thing to fix.
Pick the part that feels weakest and get a concrete pass to make next.