Before You Start: Understand the Architecture
Lovable builds React + Vite single-page applications (SPAs) with client-side rendering (CSR). This means search engines and social platforms receive a mostly empty HTML shell when they first request any URL — your actual page content only loads after JavaScript executes.
What this means for SEO:
- Google indexes CSR sites in two stages — first the HTML shell, then JavaScript-rendered content (delayed by days or weeks)
- Social platforms (LinkedIn, Twitter/X, Facebook) never execute JavaScript — they only see your static HTML
- AI crawlers (ChatGPT, Perplexity, Claude) mostly don't execute JavaScript either
- Every page on your site returns the same index.html by default, making pages look identical to crawlers before JS runs
Understanding this is the foundation. Every best practice in this guide exists because of this architecture.
For a first-person account of hitting these limitations on a real Lovable project, read The Lovable SEO Reality Check.
Part 1: Foundation Setup
1.1 SEOHead Component
Every page needs unique meta tags. The standard approach in Lovable is a reusable SEOHead component using react-helmet-async.
Prompt:
```text
Create a reusable SEOHead component using react-helmet-async with the following props:
- title (string) — auto-appends "| YourSiteName" if not already included
- description (string)
- canonical (string) — accepts relative paths and converts to absolute https://yourdomain.com
- ogType ('website' | 'article') — defaults to 'website'
- ogImage (string) — accepts relative paths, converts to absolute URL, defaults to branded social card at /og-image.jpg
- robots (string, optional) — e.g. 'noindex,nofollow'
- article (optional object with publishedTime, author, tags)

Include these tags:
- <title>
- meta name="description"
- link rel="canonical"
- og:type, og:url, og:title, og:description, og:image, og:site_name, og:locale
- twitter:card (summary_large_image), twitter:url, twitter:title, twitter:description, twitter:image
- article:published_time, article:author, article:tag (when ogType is 'article')

All image and canonical URLs must be absolute — convert relative paths by prepending https://yourdomain.com
```

Verify — paste in browser console:

```js
console.log('Title:', document.title);
console.log('Canonical:', document.querySelector('link[rel="canonical"]')?.href);
console.log('OG Image:', document.querySelector('meta[property="og:image"]')?.content);
console.log('Description:', document.querySelector('meta[name="description"]')?.content);
```

1.2 index.html Static Meta Tags
Your index.html is the only file crawlers see before JavaScript runs. It must contain static, meaningful meta tags — not just the homepage defaults.
Prompt:
```text
Update index.html with the following:
1. Set the <title> to the homepage title: "YourSiteName | Your Main Value Proposition"
2. Set meta name="description" to a clear homepage description (140-160 characters)
3. Add <link rel="canonical" href="https://yourdomain.com/" /> as a static fallback
4. Update og:image and twitter:image to use absolute URLs: https://yourdomain.com/path-to-image.jpg (not relative /path-to-image.jpg)
5. Set og:url to https://yourdomain.com/
6. Add <meta property="og:site_name" content="YourSiteName" />
7. Add <meta property="og:locale" content="en_US" />
8. Add <meta name="robots" content="index, follow" />
```
Critical check — OG image must be absolute:
```html
<!-- Wrong — social crawlers won't resolve this -->
<meta property="og:image" content="/images/social-card.jpg" />

<!-- Correct -->
<meta property="og:image" content="https://yourdomain.com/images/social-card.jpg" />
```
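If you want to enforce this in code rather than by hand, a small helper can normalise every OG image and canonical path before it reaches the meta tags. This is a sketch under assumed names — `SITE_ORIGIN` and `absoluteUrl` are illustrative, not part of Lovable's generated output:

```typescript
// Hypothetical helper for normalising OG image and canonical URLs.
// SITE_ORIGIN is an assumed constant — replace with your real domain.
const SITE_ORIGIN = "https://yourdomain.com";

function absoluteUrl(path: string): string {
  // Leave URLs that are already absolute untouched.
  if (/^https?:\/\//.test(path)) return path;
  // Prepend the origin, inserting a slash if the path lacks one.
  return `${SITE_ORIGIN}${path.startsWith("/") ? "" : "/"}${path}`;
}
```

Calling `absoluteUrl("/images/social-card.jpg")` yields the correct form shown above, so a relative path can never leak into a social tag.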
1.3 JSON-LD Structured Data
Add schema markup for your organisation in index.html (static, always visible to crawlers) and use a StructuredData component for page-specific schemas.
Prompt — static schema in index.html:
```text
Add JSON-LD structured data to index.html inside a <script type="application/ld+json"> tag with an @graph array containing:

1. Organization schema:
- @type: Organization
- name, url, logo (absolute URL), description
- address with streetAddress, addressLocality, postalCode, addressCountry
- contactPoint with telephone, email, contactType
- sameAs array with LinkedIn and other social profile URLs

2. LocalBusiness schema:
- @type: LocalBusiness
- name, url, telephone, email
- priceRange (e.g. "CHF 2,000 - 25,000")
- address block
- areaServed

Do not duplicate these schema types in React components — this static version handles sitewide identity.
```
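For illustration, a trimmed version of the resulting @graph might look like this — all values are placeholders, and fields such as address and contactPoint are abbreviated:

```json
{
  "@context": "https://schema.org",
  "@graph": [
    {
      "@type": "Organization",
      "name": "YourSiteName",
      "url": "https://yourdomain.com",
      "logo": "https://yourdomain.com/logo.png",
      "sameAs": ["https://www.linkedin.com/company/yoursitename"]
    },
    {
      "@type": "LocalBusiness",
      "name": "YourSiteName",
      "url": "https://yourdomain.com",
      "telephone": "+41-00-000-0000",
      "priceRange": "CHF 2,000 - 25,000"
    }
  ]
}
```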
Prompt — StructuredData component for page-specific schemas:
```text
Create a StructuredData React component using react-helmet-async that accepts a schema prop (or array of schemas) and renders them as JSON-LD. Support these schema types:
- Article (for blog posts): headline, description, image, datePublished, dateModified, author, publisher
- Service (for service pages): name, description, provider, serviceType, areaServed, offers
- FAQPage (for pages with FAQs): array of question/answer pairs
- BreadcrumbList: array of name/url items

Use this component only for page-specific schemas. Never use it for Organization or LocalBusiness — those live in index.html.
```
Verify structured data:
```js
console.log('Schema count:', document.querySelectorAll('script[type="application/ld+json"]').length);
```

Then validate at: https://search.google.com/test/rich-results
1.4 Sitemap
Prompt:
```text
Create a sitemap.xml in the /public folder listing all public routes with:
- <loc> using full absolute URLs (https://yourdomain.com/page)
- <lastmod> in YYYY-MM-DD format
- <changefreq>: daily for blog index, weekly for main pages, monthly for static pages, yearly for legal pages
- <priority>: 1.0 for homepage, 0.9 for key service pages, 0.8 for about/contact/blog index, 0.7 for individual service and resource pages, 0.6 for blog posts, 0.4 for legal pages

Exclude all authenticated routes: /dashboard, /admin, /login, /settings, /profile, and any /*/edit paths.
```
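A trimmed sitemap.xml following those rules might look like the sketch below — URLs, dates, and the blog-post changefreq are placeholders:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://yourdomain.com/</loc>
    <lastmod>2026-01-15</lastmod>
    <changefreq>weekly</changefreq>
    <priority>1.0</priority>
  </url>
  <url>
    <loc>https://yourdomain.com/blog/example-post</loc>
    <lastmod>2026-01-10</lastmod>
    <changefreq>monthly</changefreq>
    <priority>0.6</priority>
  </url>
</urlset>
```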
After publishing, submit at: Google Search Console → Sitemaps → Enter sitemap URL → Submit
Update sitemap when adding pages:
```text
Update sitemap.xml to add the following new routes: [list routes]
Set lastmod to today's date for all modified pages.
```
1.5 robots.txt
Prompt:
```text
Create a robots.txt in /public with the following rules:
1. Default (all crawlers): Allow /, disallow /dashboard, /admin, /login, /settings, /my-profile, /auth/callback, and any authenticated or private routes
2. Googlebot: explicit Allow for all public pages
3. Bingbot: same as Googlebot
4. Social crawlers (allow for preview generation):
   User-agent: Twitterbot — Allow: /
   User-agent: facebookexternalhit — Allow: /
5. AI crawlers (allow for citations and GEO visibility):
   User-agent: GPTBot — Allow: /
   User-agent: PerplexityBot — Allow: /
   User-agent: Claude-Web — Allow: /
   User-agent: Google-Extended — Allow: /
6. Sitemap line at the bottom: Sitemap: https://yourdomain.com/sitemap.xml
```
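An abbreviated version of the resulting file might look like this (the disallow list is shortened; extend it with your actual private routes):

```text
User-agent: *
Allow: /
Disallow: /dashboard
Disallow: /admin
Disallow: /login
Disallow: /settings

User-agent: GPTBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: Claude-Web
Allow: /

Sitemap: https://yourdomain.com/sitemap.xml
```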
Part 2: On-Page SEO Per Route
2.1 Apply SEOHead to Every Page
Prompt:
```text
Add the SEOHead component to every page component in /src/pages/. For each page provide:
- A unique title under 60 characters including the primary keyword
- A unique meta description between 140-160 characters with a soft call to action
- The canonical URL for that specific route (e.g. /about, /services/intercom)
- An ogImage specific to that page if available, otherwise leave as default

Pages to update: [list your page files]
```
Check all pages have unique titles:
```js
// Run this on each page and compare
console.log('Title length:', document.title.length, '| Title:', document.title);
```

2.2 Heading Structure
Prompt:
```text
Review the heading structure on each page component. Ensure:
- Exactly one H1 per page that includes the primary keyword and states the page purpose
- H2 for major sections
- H3 for subsections
- No skipped heading levels (H1 → H3 without H2)
- Headings are not used purely for visual styling

Pages to audit: [list pages]
```
Verify:
```js
console.log('H1 count:', document.querySelectorAll('h1').length); // Should be 1
console.log('H1 text:', document.querySelector('h1')?.textContent);
```

2.3 Image Optimisation
Prompt:
```text
Audit all images across the site and:
1. Add descriptive alt text to every <img> tag — include relevant keywords naturally, describe what is shown
2. Add width and height attributes to prevent layout shift
3. Add loading="lazy" to all images below the fold
4. Add loading="eager" and fetchpriority="high" to the hero/above-fold image only
5. Convert any PNG images used as photos to WebP format where possible
6. Flag any images over 200KB for compression

Bad alt text: alt="image", alt="photo", alt=""
Good alt text: alt="Christopher Boerger, founder of dot2.solutions, Swiss AI consultant"
```
Check for missing alt text:
```js
console.log('Images missing alt:', document.querySelectorAll('img:not([alt])').length); // Should be 0
```

2.4 Internal Linking
Prompt:
```text
Review internal linking across the site:
1. Ensure the navigation and footer use real <a href> tags, not onClick handlers
2. Add 3-5 contextual internal links on each main service page pointing to related pages
3. Use descriptive anchor text — not "click here" or "read more"
4. Ensure every important page is reachable within 3 clicks from the homepage
5. Add links to the 3 most important pages in the footer so they appear site-wide

Good anchor text: "Intercom Fin AI implementation services"
Bad anchor text: "click here", "read more", "learn more"
```
Part 3: Blog Post SEO
Blog posts need additional SEO configuration beyond standard pages.
3.1 Article Schema
Prompt:
```text
For each blog post page, add the StructuredData component with Article schema including:
- headline: the post title (under 110 characters)
- description: the meta description
- image: absolute URL to the post's featured image (1200x630px)
- datePublished: ISO 8601 format (e.g. 2026-01-15T08:00:00+00:00)
- dateModified: ISO 8601 format of last update
- author: { name: "Author Name", url: "https://yourdomain.com/about" }
- publisher: { name: "YourSiteName", logo: "https://yourdomain.com/logo.png" }
- keywords: array of 5-8 relevant keywords

Also add BreadcrumbList schema:
- Home → https://yourdomain.com/
- Blog → https://yourdomain.com/blog
- [Post Title] → https://yourdomain.com/blog/[slug]
```
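As a reference point, the Article schema the component emits might look like the following — every value is a placeholder, and the keywords array is omitted for brevity:

```json
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Post Title Under 110 Characters",
  "description": "The post's meta description.",
  "image": "https://yourdomain.com/images/post-card.jpg",
  "datePublished": "2026-01-15T08:00:00+00:00",
  "dateModified": "2026-01-20T08:00:00+00:00",
  "author": {
    "@type": "Person",
    "name": "Author Name",
    "url": "https://yourdomain.com/about"
  },
  "publisher": {
    "@type": "Organization",
    "name": "YourSiteName",
    "logo": {
      "@type": "ImageObject",
      "url": "https://yourdomain.com/logo.png"
    }
  }
}
```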
3.2 Blog Post SEOHead
Prompt:
```text
For each blog post, update the SEOHead component call with:
- ogType="article"
- article.publishedTime in ISO format
- article.author as the author name
- article.tags as an array of topic tags
- A unique ogImage: the featured image for this post at 1200x630px
- canonical pointing to the exact post URL with no trailing slash
```
Part 4: GEO — AI Crawler Optimisation
GEO (Generative Engine Optimization) helps AI systems like ChatGPT, Perplexity, and Claude discover and cite your content.
4.1 llms.txt
Prompt:
```text
Create two files in the /public folder:

1. /public/llms.txt — a concise summary following the llms.txt standard:
- # CompanyName header
- > One-line description
- Sections for: Services (with URLs), Pricing, Resources, Blog, Company, Contact
- Each entry: - [Page Name](URL): brief description

2. /public/llms-full.txt — comprehensive version including:
- Full company description and founder bio
- Detailed service descriptions with pricing
- All case studies with summaries
- Full blog post list with descriptions
- Complete FAQ section
- All contact and legal information

Add both files to sitemap.xml with changefreq="monthly" and priority="0.5"
Reference them at the bottom of robots.txt as a comment.
```
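A minimal llms.txt following that structure might look like this sketch (section names and entries are placeholders):

```text
# YourSiteName

> One-line description of what the company does and for whom.

## Services
- [Service Name](https://yourdomain.com/services/example): brief factual description

## Blog
- [Post Title](https://yourdomain.com/blog/example-post): one-line summary

## Contact
- [Contact](https://yourdomain.com/contact): email and phone details
```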
4.2 LLM Summary Page
Prompt:
```text
Create a static page at /llms (also generate as /public/llm.html for direct crawler access) containing:
- Company summary in clear, factual language
- What we do (one paragraph)
- Services list with brief descriptions and prices
- Key differentiators (bullet list)
- FAQ section with 8-10 common questions and direct answers
- Contact information

Use clear H2/H3 headings, avoid marketing fluff. Add Organization and FAQPage JSON-LD schema. Add the page to sitemap.xml.
Content should be structured for AI citation — short, factual, quotable answers.
Example format: "dot2.solutions is a Swiss AI consultancy founded by Christopher Boerger, specialising in Intercom Fin AI deployment for SMEs."
```
Part 5: Build-Time Meta Injection
This directly addresses the "Duplicate without user-selected canonical" Search Console issue — Googlebot seeing the same empty shell for every route and not finding page-specific canonical tags before JavaScript runs.
The implementation uses three files working together:
Prompt — Step 1: Create the Vite plugin
```text
Create scripts/prerender-meta.ts — a custom Vite plugin with:
- A RouteMeta interface: path, title, description, canonical, ogImage, ogType, robots
- An injectMeta() function that takes the base index.html and replaces:
  - <title> tag
  - meta name="description"
  - link rel="canonical"
  - og:title, og:description, og:url, og:image, og:type
  - twitter:title, twitter:description, twitter:image, twitter:url
  All URLs must be converted to absolute (prepend https://yourdomain.com for relative paths)
- A prerenderMetaPlugin() export that:
  - Runs at build time (apply: "build", enforce: "post")
  - In closeBundle(), reads dist/index.html
  - For each route, creates dist/[route-path]/index.html with injected meta tags
  - Skips routes where the file already exists
  - Logs how many files were generated
```
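A minimal sketch of the injection step, assuming a simple regex-replacement approach over the built HTML. The interface is trimmed to the three fields that matter for the canonical fix, and `SITE` and `toAbsolute` are illustrative names, not Lovable's generated code:

```typescript
// Hypothetical sketch of injectMeta() — illustrative, not the real plugin.
const SITE = "https://yourdomain.com"; // assumed site origin

interface RouteMeta {
  path: string;
  title: string;
  description: string;
  canonical: string;
}

// Relative paths become absolute so crawlers never see "/about"-style canonicals.
function toAbsolute(url: string): string {
  return url.startsWith("http") ? url : `${SITE}${url}`;
}

// Swap the homepage defaults in dist/index.html for route-specific values.
function injectMeta(html: string, route: RouteMeta): string {
  return html
    .replace(/<title>[^<]*<\/title>/, `<title>${route.title}</title>`)
    .replace(
      /(<meta name="description" content=")[^"]*(")/,
      `$1${route.description}$2`
    )
    .replace(
      /(<link rel="canonical" href=")[^"]*(")/,
      `$1${toAbsolute(route.canonical)}$2`
    );
}
```

In the full plugin the same replacement pattern extends to the og: and twitter: tags, and closeBundle() writes each result to dist/[route-path]/index.html.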
Prompt — Step 2: Create the auto-sync route generator
```text
Create scripts/generate-prerender-routes.ts — a script that:
1. Reads all blog post data files from src/data/blog/posts/
2. Extracts id (used as slug), title, and excerpt from each file
3. Generates scripts/prerender-routes.ts automatically with:
   - Static routes array for all non-blog pages (hardcoded with title/description/canonical)
   - Blog routes array generated from the extracted post data
   - Exports a combined prerenderRoutes array
4. Marks the output file as AUTO-GENERATED with instructions to run the script

Add these npm scripts to package.json:
- "generate:routes": "npx tsx scripts/generate-prerender-routes.ts"
- "prebuild": "npm run generate:routes"

This ensures prerender-routes.ts stays in sync with blog posts automatically on every build.
```
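The generated file might take a shape like the following sketch — every route entry and field value is a placeholder, and the real script derives the blog entries from src/data/blog/posts/ rather than hardcoding them:

```typescript
// Hypothetical shape of the AUTO-GENERATED scripts/prerender-routes.ts.
export interface RouteMeta {
  path: string;
  title: string;
  description: string;
  canonical: string;
}

// Non-blog pages: maintained by hand in generate-prerender-routes.ts.
const staticRoutes: RouteMeta[] = [
  {
    path: "/about",
    title: "About | YourSiteName",
    description: "Who we are and what we do.",
    canonical: "/about",
  },
];

// Blog pages: regenerated from post data files on every build.
const blogRoutes: RouteMeta[] = [
  {
    path: "/blog/example-post",
    title: "Example Post | YourSiteName",
    description: "Excerpt of the example post.",
    canonical: "/blog/example-post",
  },
];

export const prerenderRoutes: RouteMeta[] = [...staticRoutes, ...blogRoutes];
```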
Prompt — Step 3: Wire into Vite config
Update vite.config.ts to import prerenderMetaPlugin from ./scripts/prerender-meta and prerenderRoutes from ./scripts/prerender-routes, and add prerenderMetaPlugin(prerenderRoutes) to the plugins array.
What this produces at build time:
- dist/intercom-expert/index.html — with correct canonical and meta for that page
- dist/blog/my-post-slug/index.html — with article-specific title, description, canonical
- Every other configured route gets its own HTML file
What it does NOT do: inject body content. The <div id="root"> remains empty — React hydrates that client-side as usual. This is specifically a canonical tag fix, not full prerendering.
Maintenance: when you add a new blog post, prerender-routes.ts regenerates automatically on the next build via the prebuild script. For new non-blog pages, add them to the static routes array in generate-prerender-routes.ts.
Part 6: Performance
Core Web Vitals
Prompt:
```text
Optimise the homepage and key landing pages for Core Web Vitals:
1. Add width and height to all images to prevent CLS (Cumulative Layout Shift)
2. Add loading="lazy" to all below-fold images
3. Add fetchpriority="high" to the hero image
4. Defer non-critical third-party scripts (analytics, chat widgets) until after page load
5. Preconnect to external font and asset domains: <link rel="preconnect" href="https://fonts.googleapis.com">
6. Use font-display: swap on all custom fonts to prevent render blocking

Target scores: Performance 90+, SEO 100 in Lighthouse.
```
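Points 5 and 6 translate into index.html roughly as follows — domains and font names are placeholders:

```html
<!-- Illustrative snippet: preconnect hints plus font-display: swap. -->
<link rel="preconnect" href="https://fonts.googleapis.com">
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
<style>
  @font-face {
    font-family: "YourFont";
    src: url("/fonts/your-font.woff2") format("woff2");
    font-display: swap; /* show fallback text immediately, swap when loaded */
  }
</style>
```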
Run Lovable's built-in Speed tool after each change to track improvement.
Part 7: Verification Checklist
Use this checklist before publishing and after major content changes.
Technical Foundation
□ SEOHead component installed with react-helmet-async
□ SEOHead applied to every page with unique title, description, canonical
□ All OG image paths are absolute URLs (https://...)
□ Default ogImage is branded social card, not a random asset
□ og:site_name and og:locale present in SEOHead
□ Static canonical in index.html for homepage
□ JSON-LD Organization + LocalBusiness in index.html (not duplicated in components)
□ Page-specific schemas (Article, Service, FAQ) in StructuredData component
□ sitemap.xml covers all public routes, excludes authenticated routes
□ robots.txt allows Googlebot, Bingbot, social crawlers, AI crawlers
□ Build-time meta injection configured (prerender-meta.ts plugin + prerender-routes.ts)
□ Auto-sync script (generate-prerender-routes.ts) wired as prebuild npm script
□ New non-blog pages added to static routes in generate-prerender-routes.ts
Content
□ Every page has a unique title under 60 characters
□ Every page has a unique description between 140-160 characters
□ One H1 per page, includes primary keyword
□ All images have descriptive alt text
□ Internal links use real <a href> tags with descriptive anchor text
□ Important pages reachable within 3 clicks from homepage
GEO
□ llms.txt published at /llms.txt
□ llms-full.txt published at /llms-full.txt
□ LLM summary page exists at /llms
□ AI crawler directives in robots.txt (GPTBot, PerplexityBot, Claude-Web)
□ FAQPage schema on pages with FAQ sections
Monitoring
□ Custom domain connected and set as primary in Lovable
□ Domain verified in Google Search Console
□ Sitemap submitted in Google Search Console
□ URL Inspection run on 5 most important pages — confirm Googlebot sees content
□ Social preview tested: LinkedIn Post Inspector, Facebook Sharing Debugger
□ Structured data validated: Google Rich Results Test
Part 8: Ongoing Maintenance Prompts
Add a new page
```text
I've added a new page at /[route]. Please:
1. Add SEOHead with unique title, description, and canonical
2. Add to sitemap.xml with today's date as lastmod and appropriate priority
3. Add Article or Service schema if applicable
4. Add 2-3 internal links from related existing pages pointing to this new page
5. Update llms.txt and llms-full.txt to include this page
```
Add a new blog post
```text
I've added a blog post at /blog/[slug]. Please:
1. Add SEOHead with ogType="article", article metadata, and a unique ogImage
2. Add Article schema and BreadcrumbList schema via StructuredData component
3. Add to sitemap.xml with the publish date as lastmod
4. Update the blog index page if it has a hardcoded list
5. Add 1-2 internal links from related existing posts to this new post
6. Update llms.txt and llms-full.txt blog sections
```
Note: If you have the prerender plugin set up with the auto-sync script, prerender-routes.ts will regenerate automatically on the next build — no manual step needed for the new blog post route.
Monthly SEO audit
```text
Run a monthly SEO audit on dot2.solutions:
1. Check sitemap.xml — are all current routes listed with accurate lastmod dates?
2. Check for any pages missing SEOHead or with duplicate titles/descriptions
3. Check robots.txt — are any important pages accidentally disallowed?
4. Review H1 tags on all main pages — are they unique and keyword-relevant?
5. Check all internal <a href> links resolve without 404 errors
6. Flag any images missing alt text
```
Known Limitations
These issues stem from Lovable's current CSR architecture; some can be fixed outright, others only mitigated:
| Issue | Impact | Workaround |
| --- | --- | --- |
| Empty body before JS runs | Google indexes content with delay | Build-time meta injection (partial fix) |
| Social previews need absolute static OG tags | Solved with correct index.html setup | ✅ Fixable |
| AI crawlers miss JS-rendered content | Reduced GEO visibility | llms.txt + static LLM page |
| Duplicate canonical warning in Search Console | Pages may not be indexed | ✅ Build-time meta injection — addressed |
| Blog content not immediately indexable | SEO value delayed | Move the blog to Ghost or another SSR platform |
For content-first businesses where organic search is the primary acquisition channel, consider migrating the blog to Ghost or another platform that handles SSR natively, while keeping your app in Lovable (making your Intercom help center indexable can also help).
Want the full story behind these limitations? Read the companion article: The Lovable SEO Reality Check — lessons learned building dot2.solutions.
