Next.js · OpenAI · Architecture · Local-First

I Built an AI That Roasts Your Resume Like a Code Linter (Local-First)

Why most AI Resume Builders are useless black boxes, and how to use OpenAI's JSON mode to build an objective, auditable ATS Gap Analysis engine.

The career tooling market is fundamentally broken.

If you've searched for "Free Resume Builder" recently, you know the drill: Spend 45 minutes typing your work history into a SaaS app, click "Export," and get immediately hit with a $29.99/month paywall. Worse, if you run their "AI ATS Scan," you're handed an arbitrary score (e.g., 72/100) with generic, unhelpful coaching advice like, "Try using more action verbs."

It’s a black box. As engineers, we hate black boxes. When our code fails the CI pipeline, the linter tells us exactly which file, which line, and why it failed.

Why shouldn't resume reviews work exactly the same way?

I decided they should. I built Refine.tools, an open-architecture, local-first tool suite designed to treat your career documents like a codebase. Here is how I architected the ATS Gap Analysis Engine using strict JSON structuring to create an auditable, objective resume linter.

The Architecture: Why Next.js and IndexedDB?

My primary constraint was privacy. A resume is a highly concentrated bundle of PII (Personally Identifiable Information). Creating another database to hoard job seekers' phone numbers and addresses was strictly against the ethos of the project.

The stack:

  • Framework: Next.js (App Router) deployed on Vercel.
  • Persistence: IndexedDB via dexie and dexie-react-hooks.
  • Compute: Serverless API routes securely calling OpenAI.

By relying on IndexedDB, the entire application works locally and largely offline. When you paste your resume, it is saved to IndexedDB in your browser, not to a server. When you close the tab, it stays on your laptop.
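To make the local-first claim concrete, here is a sketch of what the persistence layer's shape looks like. In the real app the table lives in IndexedDB via Dexie; this stand-in uses an in-memory Map so the record shape and save/load flow are clear without a browser. All names (`ResumeRecord`, `LocalStore`) are illustrative, not from the Refine.tools source.

```typescript
// Shape of a locally persisted resume. With Dexie, `save` below would be
// `db.resumes.put(record)`; the semantics are the same: an upsert keyed
// by id, where nothing ever leaves the browser.
interface ResumeRecord {
  id: string;
  title: string;
  body: string;      // raw resume text as pasted by the user
  updatedAt: number; // epoch ms, handy for a "last edited" label
}

class LocalStore {
  private resumes = new Map<string, ResumeRecord>();

  save(record: Omit<ResumeRecord, "updatedAt">): ResumeRecord {
    const full = { ...record, updatedAt: Date.now() };
    this.resumes.set(full.id, full);
    return full;
  }

  load(id: string): ResumeRecord | undefined {
    return this.resumes.get(id);
  }
}

const store = new LocalStore();
store.save({ id: "draft-1", title: "SWE Resume", body: "..." });
console.log(store.load("draft-1")?.title); // → SWE Resume
```

The point of the abstraction is that the "Analyze" button reads from this store at click time; there is no server-side copy to read from.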

The Problem: Taming the LLM

Standard prompt engineering for resume advice usually yields fluffy, subjective coaching. Prompt: "Review this resume against this job description and give me feedback." Output: "Your resume is quite strong! However, to stand out, you might want to highlight your leadership skills more prominently."

This is completely useless for someone trying to beat an Applicant Tracking System. We don't need a coach; we need a compiler.

The Solution: JSON Mode and Structured Outputs

To force the LLM to behave like a deterministic linter, I leveraged OpenAI's response_format: { type: "json_object" } alongside a highly aggressive SYSTEM_PROMPT. JSON mode guarantees the response is syntactically valid JSON; it does not guarantee the model follows your schema, which is why the prompt spells the shape out explicitly (and why JSON mode requires the word "JSON" to appear somewhere in your messages).

Instead of asking for "feedback," the system prompt commands the AI to act as a ruthless ATS parser and output a specific JSON schema:

const SYSTEM_PROMPT = `
You are a strict, objective ATS (Applicant Tracking System) parser.
You will receive a User Resume and a Target Job Description.
Do NOT provide coaching. Do NOT provide encouragement.
Perform a strict gap analysis.

You MUST respond in the following JSON format:
{
  "score": <number 0-100 indicating keyword match density>,
  "missingKeywords": ["keyword1", "keyword2", "keyword3"],
  "formattingFlags": [
    {
      "issue": "Missing month in employment date",
      "severity": "high"
    }
  ],
  "matchedSkills": ["skill1", "skill2"]
}
`;
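On the client, that contract can be mirrored as a TypeScript type with a small runtime guard, so a malformed model response fails loudly instead of rendering garbage. The field names mirror the schema in the prompt; the guard itself (`parseGapReport`) is an illustrative sketch, not code from the project.

```typescript
// TypeScript mirror of the JSON schema the system prompt demands.
type Severity = "high" | "medium" | "low";

interface GapReport {
  score: number;
  missingKeywords: string[];
  formattingFlags: { issue: string; severity: Severity }[];
  matchedSkills: string[];
}

// JSON mode guarantees valid JSON, not schema adherence, so we verify
// the shape before trusting it in the UI.
function parseGapReport(raw: string): GapReport {
  const data = JSON.parse(raw);
  if (
    typeof data.score !== "number" ||
    !Array.isArray(data.missingKeywords) ||
    !Array.isArray(data.formattingFlags) ||
    !Array.isArray(data.matchedSkills)
  ) {
    throw new Error("LLM response does not match the GapReport schema");
  }
  return data as GapReport;
}

const report = parseGapReport(
  '{"score":72,"missingKeywords":["Docker"],"formattingFlags":[],"matchedSkills":["React"]}'
);
console.log(report.score, report.missingKeywords[0]); // 72 Docker
```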

Why This Works

  1. Objectivity over Subjectivity: By forcing the model to emit missingKeywords as an array, it has to perform a direct diff between the skills required in the JD and the skills present in the resume.
  2. Auditability: When the Next.js API route returns this JSON to the client, I map over the missingKeywords array and render them as red pill-badges on the UI. The user sees exactly why their score is 72/100. It's not a mystery. It's because they are missing the keyword "Docker."
  3. Actionability: A formatting flag of {"issue": "Missing month in employment date"} is an explicit bug report that the user can fix in 10 seconds.
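Point 2 above, mapping missingKeywords to red pill badges, reduces to a simple transform on the client. Here is a framework-agnostic sketch of that mapping (the real UI is React; `Badge` and `toBadges` are hypothetical names):

```typescript
// View-model for one badge. "missing" renders red, "matched" renders
// green; the styling names are whatever your CSS calls them. The point
// is that every badge traces back to a concrete diff the model emitted.
interface Badge {
  label: string;
  tone: "missing" | "matched";
}

function toBadges(missing: string[], matched: string[]): Badge[] {
  return [
    ...missing.map((k) => ({ label: k, tone: "missing" as const })),
    ...matched.map((k) => ({ label: k, tone: "matched" as const })),
  ];
}

const badges = toBadges(["Docker", "Kubernetes"], ["React"]);
console.log(badges.length); // 3
```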

Ephemeral Processing

When the user clicks "Analyze," the client sends the locally stored resume and JD to the Next.js API route.

// app/api/resume-score/route.ts
import { NextRequest, NextResponse } from "next/server";
import OpenAI from "openai";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

export async function POST(request: NextRequest) {
    // ... rate limiting and auth validation ...

    const { resumeText, jobDescription } = await request.json();

    const completion = await openai.chat.completions.create({
        model: "gpt-4o-mini",
        response_format: { type: "json_object" },
        messages: [
            { role: "system", content: SYSTEM_PROMPT },
            { role: "user", content: `Resume:\n${resumeText}\n\nJob Description:\n${jobDescription}` },
        ],
        temperature: 0.1, // Low temperature keeps output close to deterministic
    });

    const result = JSON.parse(completion.choices[0]?.message?.content || "{}");
    return NextResponse.json(result);
}

The payload is processed in memory inside a Vercel serverless function, sent to OpenAI, returned to the client, and immediately discarded on the server. Zero data persistence. No tracking logs.
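The client side of that round trip is a single POST. In this sketch the transport is injected (`fetchFn`) so the flow can be exercised without a network; in the browser you would pass the global fetch. Only the `/api/resume-score` path matches the route file above; everything else is illustrative.

```typescript
// Minimal transport interface so the flow is testable without a browser.
type FetchLike = (
  url: string,
  init: { method: string; headers: Record<string, string>; body: string }
) => Promise<{ json(): Promise<unknown> }>;

// Sends the locally stored resume and JD to the serverless route and
// returns the parsed gap-analysis JSON.
async function analyze(resumeText: string, jobDescription: string, fetchFn: FetchLike) {
  const res = await fetchFn("/api/resume-score", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ resumeText, jobDescription }),
  });
  return res.json();
}

// Fake transport standing in for the serverless route during local testing.
const fakeFetch: FetchLike = async (_url, init) => {
  const { resumeText } = JSON.parse(init.body);
  return {
    json: async () => ({ score: resumeText.includes("Docker") ? 90 : 40, missingKeywords: [] }),
  };
};

analyze("React, Docker, Terraform", "Senior Platform Engineer (Docker, K8s)", fakeFetch)
  .then((r) => console.log(r)); // → { score: 90, missingKeywords: [] }
```

Injecting the transport is also what keeps the privacy story auditable: the only network call the client ever makes is this one POST.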

The Takeaway

We need fewer SaaS platforms hoarding user data to provide vague dashboards, and more specialized, local-first utility tools that solve concrete problems.

If you are a developer currently navigating the broken 2026 job market, you don't need a career coach. You need a linter.

Drop your resume into Refine.tools and see what the compiler says. It's completely free, and I don't want your email address.