From Zero to Present on Perplexity, Gemini, Bing, and Brave: 24 Hours to Make an Android App Visible to 4 AIs Simultaneously
Tuesday evening, we look at the week's numbers. Visits from ChatGPT exist. Not many, three this month, but they exist. And then we type site:tamsiv.com into Bing. Zero results. We check Common Crawl. Zero captures. The picture becomes clear: ChatGPT finds TAMSIV through a fragile channel (direct OAI-SearchBot hits, a few dev.to backlinks), not via Bing. If this channel closes tomorrow, nothing is left.
And most importantly, ChatGPT is only one out of ten LLMs. Perplexity, Claude, Gemini, Copilot, Grok, You.com, Kagi, and all the open-source models in the background: each has its own search backend, its sources, its citation criteria. Optimizing for just one means getting stuck in a bottleneck.
The mapping we hadn't done
Before coding anything, we draw the map. It's quick once laid out:
- ChatGPT search and Microsoft Copilot rely on Bing as a backend. Bing has indexed zero of our pages, so we are invisible on both.
- Claude (Anthropic) and its WebSearch tool use Brave Search plus Anthropic's own crawler. Here we rank first for "voice task manager Android". Good catch, we didn't know that.
- Gemini and Google AI Overview rely on Google. Indexing OK, ten Google referrers per week on average.
- Perplexity cross-references Bing, DuckDuckGo, and its own crawler. Status unknown, to be tested live.
- Grok is connected to X (formerly Twitter) and a custom crawler. No active TAMSIV presence on X, an acknowledged blind spot.
- DeepSeek, Mistral, Llama, and all open-source models draw their initial training corpus from Common Crawl. Zero captures today, so we are absent from all of these models' training data.
Out of ten major LLMs, we only have a solid presence on two (Google and Brave). This is insufficient.
The AEO stack deployed in one session
The idea: deploy in one day everything that makes tamsiv.com readable by AI crawlers, simultaneously, without relying on a single backend.
For the manifest itself: /llms.txt at the root, following the llmstxt.org standard. An identity paragraph, thirty links to important pages, a contact sheet (founder, platforms, languages, pricing). And /ai.txt as a complement, explicitly authorizing the use of content for AI training and indexing (Spawning standard, clear opt-in).
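To make the shape of the manifest concrete, here is a minimal sketch of what a root /llms.txt could look like, following the llmstxt.org structure (an H1, a blockquote summary, then link sections). The link targets and descriptions below are illustrative, not TAMSIV's actual manifest:

```markdown
# TAMSIV

> Voice task manager for Android: speak your tasks, NLP extracts them,
> real-time collaboration, available in six languages. Free, Pro, and
> Team plans.

## Product
- [Landing page](https://tamsiv.com/): what the app does, pricing
- [Play Store listing](https://play.google.com/store/apps/details?id=...): install link

## Blog
- [Best voice task manager Android](https://tamsiv.com/blog/...): comparison article

## Contact
- Founder, platforms, languages, and pricing facts for citation
```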
For structured data: we enrich the JSON-LD of the landing page. SoftwareApplication with ten explicit features and three offers (Free, Pro, Team). Organization with a structured founder and extended social links, including Bluesky. WebSite with a search action. And a FAQPage of eighteen questions automatically pulled from the site's i18n translations. On the blog side: enriched BlogPosting (inferred keywords, articleSection, wordCount, about, inLanguage), a FAQPage auto-extracted from the HTML when an article contains an FAQ section, and dateModified computed from the file's actual modification date, which is more accurate than the publication date alone.
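A minimal sketch of how the SoftwareApplication JSON-LD above can be built in the layout. The feature names and prices here are illustrative placeholders, not TAMSIV's real values:

```javascript
// Build a schema.org SoftwareApplication object with features and offers.
// Feature names and prices are illustrative, not TAMSIV's actual values.
function buildSoftwareApplicationLd({ name, features, offers }) {
  return {
    "@context": "https://schema.org",
    "@type": "SoftwareApplication",
    name,
    operatingSystem: "Android",
    applicationCategory: "ProductivityApplication",
    featureList: features,
    offers: offers.map(({ tier, price }) => ({
      "@type": "Offer",
      name: tier,
      price,
      priceCurrency: "EUR",
    })),
  };
}

const ld = buildSoftwareApplicationLd({
  name: "TAMSIV",
  features: ["Voice task capture", "Multi-task extraction"],
  offers: [
    { tier: "Free", price: "0" },
    { tier: "Pro", price: "4.99" },
  ],
});

// In the page layout this would be rendered as:
// <script type="application/ld+json">{JSON.stringify(ld)}</script>
console.log(JSON.stringify(ld));
```

The point of generating the object in code rather than hand-writing JSON is that features and offers can be pulled from the same i18n files the site already uses, so the structured data never drifts from the visible content.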
For push indexing: an /api/indexnow endpoint that simultaneously notifies Bing, Yandex, and IndexNow.org whenever a URL changes, plus an IndexNow key published at the root for property verification. No need to wait for crawlers to pass; we notify them ourselves.
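The notification itself is a small POST per endpoint, following the IndexNow protocol (host, key, keyLocation, urlList). A minimal sketch, assuming the key file is served at the root of tamsiv.com; the key value below is a placeholder:

```javascript
// The three public IndexNow submission endpoints used in this setup.
const INDEXNOW_ENDPOINTS = [
  "https://api.indexnow.org/indexnow",
  "https://www.bing.com/indexnow",
  "https://yandex.com/indexnow",
];

// Build the JSON body defined by the IndexNow protocol.
function buildIndexNowPayload(urls, key, host = "tamsiv.com") {
  return {
    host,
    key,
    keyLocation: `https://${host}/${key}.txt`,
    urlList: urls,
  };
}

// One POST per endpoint; a 200 or 202 response means the submission
// was accepted for processing.
async function notifyIndexNow(urls, key) {
  const body = JSON.stringify(buildIndexNowPayload(urls, key));
  return Promise.all(
    INDEXNOW_ENDPOINTS.map((endpoint) =>
      fetch(endpoint, {
        method: "POST",
        headers: { "Content-Type": "application/json; charset=utf-8" },
        body,
      })
    )
  );
}
```

Because the same key file at the root proves ownership to every participating engine, one endpoint in the app can fan out to all of them with no per-engine registration.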
For article citability: a "Short Answer" block of fifty to sixty-five words added to the beginning of the four articles already cited by ChatGPT, in all six languages. Twenty-four files modified. The format is calibrated so that LLMs extract the passage and cite it as is, without having to scroll five hundred words.
For comparisons: a first page /vs/todoist with a complete JSON-LD stack (two SoftwareApplication objects, an ItemList, a FAQPage, a BreadcrumbList). LLMs love honest comparisons, and SourceForge builds its own comparison pages from ours.
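Of that stack, the BreadcrumbList is the piece that situates the comparison page in the site hierarchy for crawlers. A minimal sketch; the /vs/todoist path comes from the article, while the intermediate "Comparisons" page name and URL are illustrative assumptions:

```json
{
  "@context": "https://schema.org",
  "@type": "BreadcrumbList",
  "itemListElement": [
    { "@type": "ListItem", "position": 1, "name": "Home", "item": "https://tamsiv.com/" },
    { "@type": "ListItem", "position": 2, "name": "Comparisons", "item": "https://tamsiv.com/vs" },
    { "@type": "ListItem", "position": 3, "name": "TAMSIV vs Todoist", "item": "https://tamsiv.com/vs/todoist" }
  ]
}
```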
Bing submission and IndexNow batch
The fastest lever for Bing: Webmaster Tools. Google OAuth login to the TAMSIV admin account, then import sites from Google Search Console. Two sites imported instantly, sitemaps included. No verification meta tag to paste, no waiting.
Then the IndexNow batch. We retrieve the 498 URLs from the sitemap, split them into chunks of 50, and push each chunk in parallel to the three public endpoints (api.indexnow.org, bing.com, yandex.com), with a one-and-a-half-second delay between waves. Ten chunks times three endpoints: thirty requests sent, thirty OK. Total: a little under a minute to tell Bing and Yandex to index the whole sitemap.
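The batch logic above can be sketched in a few lines. A minimal version, assuming a sendChunk callback that does the actual POSTs (one per endpoint); with 498 URLs this produces ten waves of at most 50:

```javascript
// Split an array into fixed-size chunks (the last one may be smaller).
function chunk(items, size) {
  const chunks = [];
  for (let i = 0; i < items.length; i += size) {
    chunks.push(items.slice(i, i + size));
  }
  return chunks;
}

const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

// Push each wave of URLs, pausing 1.5 s between waves to stay polite.
async function pushBatch(urls, sendChunk) {
  const waves = chunk(urls, 50);
  for (const wave of waves) {
    await sendChunk(wave); // e.g. one POST per IndexNow endpoint
    await sleep(1500);
  }
  return waves.length;
}
```

The per-wave delay is a self-imposed courtesy, not a protocol requirement; IndexNow itself accepts up to 10,000 URLs in a single submission.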
In parallel, an inclusion request sent to Common Crawl via their contact form. Automatic reply "Thank you, your submission has been received". We'll see in the next monthly iteration.
The next day's test, on four LLMs
The direct test comes the next morning, on each LLM. That's where it counts.
Perplexity returns a complete and precise answer to "What is TAMSIV voice task manager Android". The sources cited: four tamsiv.com articles, the Play Store listing, a Slashdot listing we didn't know existed, two dev.to articles, the YouTube demo video, the EN landing page, and a LinkedIn post from a third party (a gyaansetu-ai account that had reposted our info). A rich answer, correctly attributed to TAMSIV as an entity.
Gemini also answers, shorter, with three sources: Play Store, SourceForge (a comparison page we didn't know about either), and a tamsiv.com article. The answer correctly cites the key features (NLP, multi-task extraction, real-time collaboration, gamification, six languages, hierarchical folders). A funny little detail: Gemini attributes the app's design to "Julien Avezou" (another indie hacker whose dev.to articles we had commented on). We'll let it run, the founder's name is not the point of this story.
Claude via WebSearch places the tamsiv.com blog in first position for "best voice task manager Android 2026", with the description and pricing correctly reproduced.
Bing is a special case. The query site:tamsiv.com still returns zero results, because the engine takes several days to integrate pages into its operator index. But in terms of real traffic, two human Bing visitors appear in Supabase that day: one from England landing on the article about Supabase cost reduction, one from France landing on the history of AI conversations. A source that was at zero 48 hours earlier now delivers two visitors, without us having written a single new piece of content. That's the IndexNow batch at work.
The real surprise: Slashdot and SourceForge already had listings
We discover in the Perplexity sources a link to slashdot.org/software/p/TAMSIV/. We click, and there's a complete listing. Same on SourceForge. Precise descriptions, probably retrieved from our Play Store. Correct pricing, correct languages, correct categories.
No one manually submitted these listings. Aggregators automatically scrape Play Stores and create entries for apps that gain visibility. This is luck, or rather the reward for having filled out our Play Store listing well in six languages.
On the SourceForge side, a few factual errors imported from automatic scraping: the listing announces TAMSIV on Windows, Mac, Linux, iOS, whereas we only run on Android and the web. It announces 24/7 phone support, whereas we are reachable by email and Discord. It announces webinars and in-person training, whereas we just have online documentation. These errors feed downstream LLM hallucinations (Gemini picks up "iOS, Web, desktop platforms" which probably comes from there).
Claiming the listing is the next step: a SourceForge vendor account already created for TAMSIV, a claim form to fill out, moderation in a few days, and we will correct the five errors. But this work aims to correct what the listing says about the product, not to push a personal biography.
What we learned
One LLM is not all LLMs. Optimizing for ChatGPT alone means ignoring 70% of the market. An AEO strategy must simultaneously target Bing (ChatGPT/Copilot), Brave (Claude), Google (Gemini), and Common Crawl (open-source models). The technical actions are the same, but their effect is measured on different channels.
Software directories are primary LLM sources. Perplexity cites Slashdot, Gemini cites SourceForge, Claude cites AlternativeTo, Copilot cites Capterra. Multiplying presence on these sites multiplies citations. This is the forgotten layer of classic SEO, and it directly feeds LLMs.
Enriched JSON-LD is read, not just by Google. We saw Gemini pick up the features+offers structure from SoftwareApplication, and the FAQPage from the blog. For fifty lines of JavaScript in the layout, the effect is measurable.
IndexNow saves days of cycle time. Without push, Bing takes one to three weeks to discover a new site. With an IndexNow batch, it's a few hours. And it's free, no API key, no crazy rate limits.
LLM hallucinations on non-critical details (founder's name, supported platforms) almost all come from third-party listings. Correcting them via claim forms is more effective than over-publishing on your own site. The product itself must be described factually and consistently wherever you have control.
Next steps
On the list: claim Slashdot and SourceForge to correct product errors, active submission to the ten other missing directories (G2, Capterra, GetApp, ProductHunt, ToolFinder, FutureTools, There's an AI for that, AItoolsdirectory, Crozdesk, SoftwareSuggest). Four new comparison pages (Any.do, Google Keep, Notion, TickTick) using the same template as /vs/todoist. And mandatory monthly monitoring of LLM citations, with screenshots for each test, to measure trajectory and not just rely on impressions.
Test again in thirty days, in sixty, in ninety. If the trajectory is good, we'll know. If it stagnates, we'll have the numbers to pivot.
FAQ
What is an llms.txt file and is it really useful?
llms.txt is an emerging standard from late 2025 (llmstxt.org) that proposes a structured manifest at the root of a site, directly readable by language models. The idea is to give them, in a single read, the site's identity, its important pages organized by category, and factual elements useful for citation. It's the equivalent of a robots.txt for LLMs. The measurable effect is still difficult to isolate because the standard is young, but it costs nothing to deploy and clearly signals that the site wants to be read by AIs.
Why not only target ChatGPT since it's the most used?
Because ChatGPT search is a Microsoft Bing product on the backend. If Bing doesn't index us, ChatGPT search doesn't see us either. Conversely, optimizing for Bing makes us visible on ChatGPT and Copilot at the same time. And there are at least ten major LLMs, each with their own backend, so targeting just one means ignoring 70 to 80% of entry points. Multi-LLM optimization costs the same effort as mono-LLM optimization.
Does IndexNow really work or is it Microsoft marketing?
It works, measurably. We pushed 498 URLs in chunks of 50 to Bing, Yandex, and IndexNow.org: ten chunks per endpoint, thirty requests accepted out of thirty. Twenty-four hours later, two human Bing visitors arrived on specific blog articles, from a source that didn't exist the day before. The protocol is open (no Microsoft lock-in), supported by Yandex, Naver, and Seznam, and indirectly by every engine that uses Bing as a backend. Worth implementing in every new site.
How long before Bing really indexes all pages?
For a new site without Webmaster Tools, Bing takes several weeks. With Webmaster Tools (import from Google Search Console), sitemap submission is immediate, but individual pages take a few days to appear under the site: operator. With IndexNow on top, priority pages are crawled within a few hours. Pages still take one to two weeks to appear under site:tamsiv.com, but real traffic starts earlier.
Were the Slashdot and SourceForge listings created by you or automatically?
Automatically, by scraping. Slashdot Media (which owns both sites) scrapes app stores and creates listings for apps that exceed a visibility threshold. The description probably comes from our Play Store EN listing, which explains why it's correct. The errors come from automatic filling of auxiliary fields (platforms, support, training, API) that have no clear source and that the scraper populates with default values. Claiming the listing lets you take control and correct it.
Is this project repeatable for another application?
Yes, the source code of the stack (llms.txt, ai.txt, IndexNow API, extended JSON-LD, short answer block component) is entirely transferable. The only manual work is writing the content (llms.txt manifest, short answer blocks for articles), which takes one day for a site with sixty articles. Claiming third-party directories (Slashdot, SourceForge) requires creating vendor accounts, twenty minutes per site, and waiting for moderation. The whole thing can be done in one week for an already published application, with a measurable impact from the day after the push.