If you’ve made the jump to headless with Next.js, Vercel, and Experience Edge, you’re likely using Static Site Generation (SSG) at build time, and you’re likely going to be using Incremental Static Regeneration (ISR). SSG builds some or all of your pages before the deployment completes, giving users immediate access to fully rendered content on the first request. But once the content changes, you need to regenerate that rendered content. That’s where ISR comes in. Once a user requests a page, it has a revalidation period. You can control this in the [[...path]].tsx file in the pages folder of your solution:

// This function gets called at build time on server-side.
// It may be called again, on a serverless function, if
// revalidation (or fallback) is enabled and a new request comes in.
export const getStaticProps: GetStaticProps = async (context) => {
  const props = await sitecorePagePropsFactory.create(context);

  // Check if we have a redirect (e.g. custom error page)
  if (props.redirect) {
    return {
      redirect: props.redirect,
    };
  }

  return {
    props,
    // Next.js will attempt to re-generate the page:
    // - When a request comes in
    // - At most once every 5 seconds
    revalidate: 5, // In seconds  
    notFound: props.notFound, // Returns custom 404 page with a status code of 404 when true
  };
};

Your revalidation period is the revalidate value near the bottom of getStaticProps. Notice the default setting of five seconds. You should never, never, never leave it there. It will essentially go back to Edge for new content every 5 seconds for each page, and you’re going to quickly hit your 80-requests-per-second uncached query throttle on Experience Edge, which will lead to some heartburn on your part. The unofficial recommendation I’ve heard is that this value should be no less than 30 minutes (1800 seconds). (This is entirely dependent on your own implementation, however. I personally wouldn’t go lower than that.)
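A small sketch of what that looks like in practice. The named constant is my own convention (not from the Sitecore starter); the point is simply that the window should be expressed as "30 minutes," not a magic 1800:

```typescript
// Hypothetical: lift the revalidation window into a named constant so the
// intent ("30 minutes, not 5 seconds") is obvious at the call site.
const REVALIDATE_SECONDS = 30 * 60; // 1800 — the unofficial floor mentioned above

// Spread this into the object returned from getStaticProps:
// return { props, ...revalidateConfig, notFound: props.notFound };
export const revalidateConfig = {
  revalidate: REVALIDATE_SECONDS,
};
```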

So with that in mind, you can draw the reasonable conclusion: Authors may experience up to a 30-minute delay between publishing content and seeing it live on the production site. That’s the reality in a headless world, though. The “Render Tax” has to be paid. In the XP world, it was paid by your visitors. In headless, it’s absorbed by the platform and passed on through lagged publishing cycles.

Sometimes though, just sometimes, an author might have a need to publish something immediately. Say the privacy page accidentally got updated to say “Sorry no privacy” (it can happen). You’d want to have that published quickly and updated on the site.

We can solve this with On-Demand ISR, which is a fancy way of saying “Purge a specific cache” or “Nuke the Route” as I like to call it.

[Image: Ripley knows best]

Anywho, this is actually a two-part process: the first part is creating an API route that performs the revalidation; the second is invoking it.

The first part is actually pretty easy. We’ll create the following (routepurge.ts) in the /pages/api folder:

import type { NextApiRequest, NextApiResponse } from 'next';

const routePurgeApi = async (
  req: NextApiRequest,
  res: NextApiResponse
): Promise<void> => {
  // Check for the API Key. It matters
  if (
    req.headers.apikey !== process.env.PURGE_API_KEY ||
    !process.env.PURGE_API_KEY
  ) {
    return res.status(401).json({ message: 'Missing or Invalid API Key' });
  }

  // Node lowercases incoming header names, so read them in lowercase
  const path = req.headers.path as string;

  if (!path) {
    return res.status(400).json({ message: 'Bad request: path' });
  }

  const userName = req.headers.username;

  try {
    await res.revalidate(path);

    console.warn(`Purged Route [${userName}] [${path}]`);

    return res.status(202).json({ revalidated: true });
  } catch (err) {
    console.error(`Purged Route Failed [${userName}] [${path}]`);
    console.error(err);

    return res.status(500).send('Error revalidating');
  }
};

export default routePurgeApi;

A few things to note:

  • We need an API Key, so some yahoo doesn’t come in and randomly purge routes.
  • We need a path (duh) of what to purge.
  • For diagnostic reasons, we want to track the user doing the purging (to look for bad behavior… tons of purging likely means that author needs some education).
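Before wiring up Sitecore, you can exercise the API with a plain HTTP call. Here’s a hedged caller-side sketch: the /api/routepurge endpoint and the three header names match the API route above, but the helper, its parameters, and the example values are my own for illustration:

```typescript
// Hypothetical helper that assembles the purge request for the route above.
type PurgeRequest = {
  url: string;
  headers: Record<string, string>;
};

const buildPurgeRequest = (
  baseUrl: string,
  path: string,
  userName: string,
  apiKey: string
): PurgeRequest => ({
  // The endpoint name matches the API route created above
  url: `${baseUrl}/api/routepurge`,
  // HTTP header names are case-insensitive, and Node lowercases them on
  // receipt, so lowercase is used here to match what the handler reads
  headers: { apikey: apiKey, path, username: userName },
});

// Usage (not executed here):
// const { url, headers } = buildPurgeRequest('https://www.example.com', '/en/privacy', 'author1', key);
// const res = await fetch(url, { headers });
// A 202 status means the route was revalidated.
```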

The second part is to create a Sitecore PowerShell Extensions (SPE) command. In our solution, we added a rule so it can only be executed on items that inherit from our Page template, since only pages have routes (sorry, DataSources… you’re not important). Here’s what that command looks like, in general:

$apiKey = [Sitecore.Configuration.Settings]::GetSetting("Vercel.APIKey")
$siteName = "MySite"
# Get our Context
$currItem = Get-Item .

$apiUrl = "https://" + [Sitecore.Sites.SiteManager]::GetSite($siteName).Properties["targetHostName"] + "/api/routepurge"

$path = "/"+ $SitecoreContextItem.Language.Name.ToLower() +"/_site_"+$siteName+$currItem.FullPath.ToLower().Replace(' ', '-')
$path = $path.replace("<PATH_TO_SITE>/home", "")

$userName = (Get-User -Current).Name

if($path -eq "")
{
    $path = "/"
}

try
{
    $res = Invoke-WebRequest $apiUrl -Headers @{"apikey"=$apiKey; "path"=$path; "userName"=$userName} -UseBasicParsing
    
    if($res.StatusCode -eq 202)
    {
        Show-Alert -Title "Successfully Purged Route for $path" 
    }
    else
    {
        Write-Host "API: $apiUrl"
        Write-Host "Path: $path"
        Show-Alert -Title "FAILED to Purge Route for $path ($($res.StatusCode))"
    }
}
catch {
    Write-Host "API: $apiUrl"
    Write-Host "Path: $path"

    if($res)
    {
        Write-Host $res.StatusCode
    }

    Show-Alert -Title "FAILED to Purge Route for $path (0)"
}

For the power this brings, it’s a pretty simple script:

  1. Grab your API Key.
  2. Grab your context item.
  3. Grab the site (we’re using the multisite plugin here, so how you resolve your site is going to be up to you).
  4. Build the relative path of the item, including the language. If you’re using multisite and your site is called “SITENAME”, this looks like “/en/_site_SITENAME/…”. If you’re not using multisite, just “/en/…” should be fine.
  5. Hit our API, passing in the headers.
  6. Check the status and profit.
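The path construction from the SPE script can be sketched in TypeScript for clarity. Hedged heavily: the _site_ prefix convention comes from the multisite setup above, while the helper, its parameters, and the idea of passing in the home-path segment to strip (the <PATH_TO_SITE>/home placeholder in the script) are illustrative assumptions:

```typescript
// Mirrors the SPE script: "/" + language + "/_site_" + site + item path,
// lowercased with spaces dashed, then the home-path prefix stripped.
const buildPurgePath = (
  language: string,
  siteName: string,
  itemFullPath: string,
  homePathPrefix: string // the segment the script strips via <PATH_TO_SITE>/home
): string => {
  let path =
    '/' + language.toLowerCase() +
    '/_site_' + siteName +
    itemFullPath.toLowerCase().replace(/ /g, '-');
  // String replace removes the first occurrence, which is all we need
  // for a leading prefix
  path = path.replace(homePathPrefix, '');
  // The home item itself collapses to the root route
  return path === '' ? '/' : path;
};
```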

All of this, though, is contingent upon the content being published to Edge! That’s where Vercel finds the JSON!

All in all, this is going to mean a change to your authors’ publishing habits. It helps bring things a little closer to their previous expectations, though!