Blog
Build in Public
March 28, 2026 · 5 min read

The AI learns your voice: voice personalization and a TTS voice selector

Since October, TAMSIV's AI has understood what you say. It created tasks, memos, and calendar events. It responded politely. But it didn't know you.

You could say "remind me to buy milk" and it would. But if you said "remind me about the thing for the kids", it had no idea you had kids, how many, or what "the thing" meant in your daily life.

This week, two days before public launch, I added voice-based AI personalization.

Configure your AI by voice

A new button appeared on the dictaphone screen: "Configure my AI". You tap, you talk, and you explain who you are.

"I'm a parent of 3, I manage a cleaning crew of 4 people, and I forget everything I'm told after 30 seconds."

The AI listens, summarizes your context, and stores it. From that point on, every response is tailored. If you mention "the little one", it knows you mean your son. If you say "Tuesday's job site", it knows you manage field interventions.
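The flow above — summarize what the user said about themselves, store it, and prepend it to every later prompt — can be sketched roughly like this. All names here (`UserContext`, `saveContext`, `buildPrompt`) are illustrative assumptions, not TAMSIV's actual API:

```typescript
// Illustrative sketch: store an AI-generated summary of the user's spoken
// self-description, then inject it into every subsequent prompt.

interface UserContext {
  summary: string;   // what the AI distilled from the user's explanation
  updatedAt: Date;
}

const contextStore = new Map<string, UserContext>();

function saveContext(userId: string, summary: string): void {
  contextStore.set(userId, { summary, updatedAt: new Date() });
}

function buildPrompt(userId: string, transcript: string): string {
  const ctx = contextStore.get(userId);
  const preamble = ctx ? `User context: ${ctx.summary}\n` : "";
  return `${preamble}User said: "${transcript}"\nRespond with a tailored action.`;
}
```

With context saved, "remind me about the thing for the kids" arrives at the model alongside "parent of 3, manages a cleaning crew of 4", which is what makes the tailored interpretation possible.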

That's the difference between a generic assistant and one that actually knows you.

6 voices, a preview, your choice

Voice is personal. Hearing the same robotic voice 50 times a day gets old. I added a TTS voice selector with 6 different options — male, female, varied tones.

You tap a voice, hear a real-time preview ("Hello, I'm your TAMSIV assistant..."), and choose. It's saved to your profile and used for all voice responses going forward.
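The selector itself is a small piece of state: a fixed list of voices, a preview call into the TTS engine, and the chosen id persisted to the profile. This is a hedged sketch with made-up ids and labels, not TAMSIV's code:

```typescript
// Illustrative sketch of a 6-voice TTS selector with preview and persistence.

interface Voice {
  id: string;
  label: string;
  gender: "male" | "female";
}

const VOICES: Voice[] = [
  { id: "v1", label: "Clear",   gender: "female" },
  { id: "v2", label: "Warm",    gender: "female" },
  { id: "v3", label: "Calm",    gender: "male" },
  { id: "v4", label: "Bright",  gender: "female" },
  { id: "v5", label: "Deep",    gender: "male" },
  { id: "v6", label: "Neutral", gender: "male" },
];

const PREVIEW_TEXT = "Hello, I'm your TAMSIV assistant...";

// playTts stands in for the platform TTS engine.
function previewVoice(voiceId: string, playTts: (text: string, voiceId: string) => void): void {
  playTts(PREVIEW_TEXT, voiceId);
}

const profile: { voiceId?: string } = {};

function selectVoice(voiceId: string): void {
  if (!VOICES.some(v => v.id === voiceId)) throw new Error(`unknown voice: ${voiceId}`);
  profile.voiceId = voiceId; // saved; used for all voice responses going forward
}
```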

The bug that cost 3 hours

The AI setup screen and the dictaphone share the same WebSocket channel. When you tested your voice in the setup, the dictaphone in the background would also try to process the audio callbacks. Result: the two screens were stepping on each other.

The fix: isolate callbacks with an "active owner" system. When the setup screen is open, it takes exclusive control of the WebSocket. The dictaphone waits its turn. Simple in theory, 3 hours of debugging in practice — because the conflict only manifested on certain Android models where the garbage collector was more aggressive.
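The "active owner" idea can be sketched as a tiny dispatcher: every screen registers a handler, but incoming audio is routed only to whichever screen currently holds the claim. The class and method names below are illustrative assumptions, not the actual implementation:

```typescript
// Illustrative sketch of "active owner" callback isolation on a shared channel.

type AudioHandler = (chunk: string) => void;

class SharedAudioChannel {
  private handlers = new Map<string, AudioHandler>();
  private activeOwner: string | null = null;

  register(screenId: string, handler: AudioHandler): void {
    this.handlers.set(screenId, handler);
  }

  // The screen taking focus claims exclusive control of the channel.
  claim(screenId: string): void {
    this.activeOwner = screenId;
  }

  release(screenId: string): void {
    if (this.activeOwner === screenId) this.activeOwner = null;
  }

  // The WebSocket's onmessage forwards audio only to the active owner,
  // so the background dictaphone no longer reacts to setup-screen audio.
  dispatch(chunk: string): void {
    if (!this.activeOwner) return;
    this.handlers.get(this.activeOwner)?.(chunk);
  }
}
```

The setup screen calls `claim` when it opens and `release` when it closes; the dictaphone keeps its handler registered but simply receives nothing while it isn't the owner.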

Personalization in pricing plans

AI personalization is now a highlighted feature in the Pro and Team plans. Free users get basic context, paid plans unlock full context and voice selection.
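The gating described above boils down to a small plan check. Plan names match the post; the function names are illustrative assumptions:

```typescript
// Illustrative sketch of plan-based feature gating for personalization.

type Plan = "free" | "pro" | "team";

// Free users get basic context only.
function canUseFullContext(plan: Plan): boolean {
  return plan === "pro" || plan === "team";
}

// Voice selection is also reserved for paid plans.
function canSelectVoice(plan: Plan): boolean {
  return plan === "pro" || plan === "team";
}
```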

730+ commits, 2 days to go

This was the last feature before Monday's public launch. The app now knows who it's talking to. What's left: final polish, last round of testing on beta testers' devices, and the big leap.

Monday, TAMSIV goes public on the Play Store. 6 months of solo development. And now, an AI that knows you.