HARO Link Building Autopilot: How We Automate Expert Pitches Without Losing Their Voice

Content Engineering

Last Updated: Apr 30, 2026

Written by Ameet Mehta

TL;DR

  • Journalist-query placements build the earned-media footprint LLMs cite. That footprint drives AI visibility across ChatGPT, Perplexity, and Google AI Overviews.
  • Manual HARO takes about 25 minutes per pitch, and you need to respond to 100+ queries a month before it compounds. Most people never get there.
  • We built HARO Autopilot. It sources queries, scores for fit, and drafts in your expert's real voice. You get 10+ placements a month with a few hours of review a week.

Why HARO Placements Matter and How They Feed AI Visibility

LLMs trust third-party earned media. Repeated mentions of your brand and your expert across publications that semantically cover your category build the entity association models cite when a buyer asks for vendors, comparisons, or how-tos. HARO is a fast, scalable way to generate that earned media at volume: every placement is a journalist-chosen contextual mention in a category-relevant outlet, which is a strong signal LLMs know how to read. Fifteen a month compound into a pattern models recognize and cite.

The research backs it.

  • Muck Rack: 82% of AI citations reference earned media (1M+ citations analyzed).
  • Stacker: distributed content hit a 34% citation rate vs. 8% for owned, a 325% lift.
  • Ahrefs: brand mentions correlate 0.664 with AI visibility; backlinks only 0.218 (75K brands).

Manual HARO response takes about 25 minutes per pitch, and journalist queries only start adding real value once you have at least 30 placements stacked up. Most teams dive in because it looks easy, make it past a handful of responses, and stall out as the time cost piles up.

The other option is hiring an agency, HARO-specific or digital PR, starting around $3k/month. The problem is not just cost. It is managing another vendor. You are briefing an account manager on your positioning, reviewing pitches written in a generic agency voice, chasing status updates, and getting reports that track placements instead of authority. When a good placement does land, the quote reads like every other brand in the agency's roster. You pay real money for output that does not actually build your expert's reputation, and you add another line item to your ops load.

We built a third option: an automation that sources relevant queries, drafts in each expert's actual voice, and runs every draft past a human review that takes about 3 minutes. A couple of hours of expert review a week produces 10 to 20 quality backlinks per month. The rest of this article is how it works and what it produced in four months.

HARO Automation Expectations

Here is what running the automation actually produces, and what it replaces.

Metric | Without Automation | With Automation
Responses submitted per month | 3 to 5 (sporadic) | ~150
Published placements per month | 0 to 2 | 15
Acceptance rate | 5 to 7% | ~10%
Time per response | ~25 minutes | ~5 minutes

Publication Expectations

Most placements will land in category-specific trade publications that your expert's actual buyers read. If you are a marketer, expect outlets like Marketer Magazine, Marketing Scoop, MarTech Series, and Search Engine Journal. If you are in SaaS or tech, expect TechBullion, Tech Times, and vertical SaaS blogs. If you are in healthtech, expect HIT Consultant, MedCity News, and clinical practice publications. The pattern holds across categories: niche trade outlets, not Forbes.

LLMs build authority by association. When your expert and company appear repeatedly across publications that semantically cover a category, the model links that entity to the category in its training and retrieval data. That association is what gets you surfaced in product comparisons, vendor shortlists, how-to answers, and "best of" queries inside ChatGPT, Perplexity, and Google AI Overviews. Fifteen mentions across category-relevant trade pubs build that entity pattern.

The HARO Automation: Details

The automation has six stages. The first builds a reusable asset per expert; every query then flows through the remaining five before it can be submitted. If a query fails at any stage, it gets logged and dropped.

  1. Build the Voice Profile. A reusable source-of-truth document for each expert. Lives in Airtable. Captures phrasing, past quotes, credentials, topic stances, and a "would never say" list.
  2. Pull Fresh Queries. Open queries flow in from Featured and MentionMatch on a schedule. Dedupes against prior submissions. Drops anything with under 72 hours left on the clock.
  3. Score for Fit. Every surviving query gets scored on topic fit (40%), publication tier (30%), and proof availability (30%). Below 60 is logged and dropped. Above 60 moves to drafting.
  4. Draft in the Expert's Voice. Two Claude passes. First writes the draft using the voice profile, query context, and proof points. Second polishes: cuts filler, verifies credentials, tightens to a direct answer.
  5. Human QA. Draft writes to a Google Sheet. Slack pings the reviewer. Human edits, flips a status cell, and that flip triggers submission.
  6. Submit and Track. Approved draft submits (webhook or manual paste, depending on platform). Result writes back to the same sheet for reporting.

Here is what happens inside each of the six stages. Customization notes are flagged per stage for teams using a different stack.

1. Build the Voice Profile

The voice profile is what keeps volume from turning into garbage. This is the section that answers the implicit objection: "if you are drafting 150 responses a month with AI, the quality must be trash." It is not, because Claude drafts as the expert, not like the expert.

Each of our three SMEs has a single Airtable record. That record captures five things.

Voice Profile Field | What Goes Here
POV statements | Four to eight strong positions the expert holds on their core topics. "Most companies treat content as SEO when they should be treating it as knowledge graph construction." Not "content marketing is important."
Past quotes | 15 to 30 real quotes pulled from podcasts, interviews, LinkedIn, and Slack. Verbatim. Claude uses these to calibrate phrasing and cadence, not to copy.
Credentials and proof points | Specific numbers, client wins, years of experience, technical claims that can be verified. This is what Claude pulls from when a query asks "what is the evidence?"
Topic stances | Where the expert stands on active debates in their field. Specific enough that two different SMEs would produce different answers to the same query.
Would never say | Phrases, frames, and opinions the expert explicitly rejects. This keeps Claude from drafting something that sounds reasonable but would make the expert wince.
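If you want to see the shape, here is a minimal sketch of the record as a TypeScript type with a toy example. The field names are illustrative, not our actual Airtable schema; map them to however you structure your own base.

```typescript
// Illustrative shape for one SME's voice profile record.
// Field names are hypothetical; map them to your own Airtable columns.
interface VoiceProfile {
  expertName: string;
  povStatements: string[];              // 4 to 8 strong positions, one sentence each
  pastQuotes: string[];                 // 15 to 30 verbatim quotes for cadence calibration
  proofPoints: {
    claim: string;                      // a specific, verifiable number or client win
    topics: string[];                   // tags the scoring stage matches against
  }[];
  topicStances: Record<string, string>; // debate -> where the expert lands
  wouldNeverSay: string[];              // phrases and frames to hard-block in drafts
}

// A toy record (invented expert, invented claims) to show the granularity.
const example: VoiceProfile = {
  expertName: "Jane Doe",
  povStatements: [
    "Most companies treat content as SEO when they should be treating it as knowledge graph construction.",
  ],
  pastQuotes: ["Links follow expertise, not the other way around."],
  proofPoints: [
    { claim: "Tripled earned-media mentions in two quarters for a 50-seat SaaS client", topics: ["digital-pr", "geo"] },
  ],
  topicStances: {
    "ai-content-detection": "Detection tools over-flag expert writing; answer with proof, not padding.",
  },
  wouldNeverSay: ["game the algorithm", "content is king"],
};
```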

One 30-minute intake session per SME is enough to build the first version. After that, you append. New quote from a podcast? Add it. New stance on an emerging topic? Add it. You do not rewrite the profile, you grow it. Over six months we have roughly tripled each expert's profile without a single additional intake call.

This is the asset an agency charges you for and never actually transfers. When you own the profile as a structured record, the marginal cost of drafting one more response is a Claude API call. That is the unlock. Every other part of the automation is plumbing on top of this one asset.

The voice profile is also the quality ceiling. Garbage profile means garbage drafts, no matter how clever the scoring layer is. If you take one thing from this piece, take the profile structure. It works even if you never build the automation.

2. Pull Fresh Queries

The Featured API returns about 1,400 open queries per month across our subscribed categories. MentionMatch adds another 1,000. The automation pulls on a schedule, dedupes against prior submissions, and drops anything with under 72 hours left on the clock (not enough time for a reliable SME check). Whatever survives those cuts, out of roughly 2,400 incoming queries a month, lands in the scoring stage.
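For teams wiring this step outside n8n, here is a rough TypeScript sketch of the dedupe and freshness logic. The query shape is an assumption about what a normalized ingest step would produce, not any platform's actual API payload.

```typescript
// A minimal sketch of the freshness + dedupe filter. Assumes each platform's
// ingest step has already normalized queries into this shape.
interface IncomingQuery {
  id: string;                          // platform-native query ID
  source: "featured" | "mentionmatch";
  topic: string;
  outlet: string;
  deadline: Date;
}

const MIN_HOURS_LEFT = 72; // below this there is no time for a reliable SME check

function filterFresh(
  queries: IncomingQuery[],
  alreadySubmitted: Set<string>, // IDs of queries we have responded to before
  now: Date = new Date(),
): IncomingQuery[] {
  return queries.filter((q) => {
    if (alreadySubmitted.has(q.id)) return false;           // dedupe against prior submissions
    const hoursLeft = (q.deadline.getTime() - now.getTime()) / 36e5;
    return hoursLeft >= MIN_HOURS_LEFT;                     // drop near-deadline queries
  });
}
```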

Customize: swap the ingest nodes if you use Qwoted or another query source. The rest of the pipeline stays the same.

3. Score for Fit

Scoring is where most of the volume dies. Each surviving query gets three scores.

  • Topic fit (0 to 40). Does this query align with one of our SMEs' stated areas? A fintech query with no fintech expert scores zero. A growth marketing query with a growth marketing SME starts at 30 and adjusts by specificity.
  • Publication tier (0 to 30). DA weight plus category relevance, pulled from our reference sheet. TechBullion at DA 76 scores 28. A brand-new blog scores 4.
  • Proof availability (0 to 30). Do we have a real case study, quote, or data point that directly answers this? Voice profile proof points are tagged by topic. A tag match raises the score; anything we would have to invent lowers it.

Below 60, the query is logged to a rejection sheet with the reason. Above 60, it moves to drafting. In a given month we draft on about 150 of the roughly 2,400 queries that come in, a 94% rejection rate before Claude writes anything. This is the single most important design choice in the automation: never ask AI to write about something the expert could not substantively answer.

A real example from last month. A Marketer Magazine query (DA 55) about AI content detection false positives. Pushkar has three case studies on this from client work. Topic fit 38, publication tier 22, proof 28. Total 88. Drafted.

Same day, same publication: "How do you build a morning routine that supports creativity?" Topic fit 8, publication tier 22, proof 0. Total 30. Dropped.
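The combining logic itself is deliberately simple. Here is a sketch of the scoring gate in TypeScript, with the two queries above run through it. The sub-scores come from upstream steps (topic matching, the DA reference sheet, proof-point tag lookups); this is just the arithmetic and the threshold.

```typescript
// Combine the three sub-scores and apply the 60-point draft threshold.
interface FitScore {
  topicFit: number; // 0 to 40
  pubTier: number;  // 0 to 30
  proof: number;    // 0 to 30
}

const DRAFT_THRESHOLD = 60;

function scoreQuery(s: FitScore): { total: number; action: "draft" | "drop" } {
  const total = s.topicFit + s.pubTier + s.proof;
  return { total, action: total >= DRAFT_THRESHOLD ? "draft" : "drop" };
}

// The two examples above:
scoreQuery({ topicFit: 38, pubTier: 22, proof: 28 }); // { total: 88, action: "draft" }
scoreQuery({ topicFit: 8,  pubTier: 22, proof: 0  }); // { total: 30, action: "drop" }
```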

Customize: tune the scoring weights and the 60-point threshold to reflect your own pub tier preferences and SME coverage.

4. Draft in the Expert's Voice

Two Claude passes. The first pulls the voice profile, the full query, the expert's tagged proof points, and the publication context to produce a draft. The second polishes with three rules: cut filler, verify that every credential cited actually exists in the profile, and tighten the response to a direct answer. A short "never do" list lives in the prompt: never invent credentials, never fabricate quotes, never pad with qualifiers the expert would not use.
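Here is a rough sketch of the two-pass call using the Anthropic TypeScript SDK. The prompts are abbreviated stand-ins for our production prompts, and the model name and token limit are placeholders to swap for your own.

```typescript
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

// Pull plain text out of a response, ignoring any non-text blocks.
const text = (msg: Anthropic.Message): string =>
  msg.content.map((b) => (b.type === "text" ? b.text : "")).join("");

async function draftResponse(profile: string, query: string, proof: string): Promise<string> {
  // Pass 1: draft as the expert, grounded in the profile and tagged proof.
  const draft = await client.messages.create({
    model: "claude-sonnet-4-5", // placeholder model name
    max_tokens: 1024,
    messages: [{
      role: "user",
      content:
        `Voice profile:\n${profile}\n\nProof points:\n${proof}\n\nQuery:\n${query}\n\n` +
        `Draft a pitch as this expert. Use only the credentials and proof points ` +
        `listed above. Never invent credentials, never fabricate quotes.`,
    }],
  });

  // Pass 2: polish. Cut filler, strip any credential not in the profile,
  // open with a direct answer.
  const polished = await client.messages.create({
    model: "claude-sonnet-4-5",
    max_tokens: 1024,
    messages: [{
      role: "user",
      content:
        `Profile:\n${profile}\n\nDraft:\n${text(draft)}\n\nPolish this draft: cut ` +
        `filler, remove any credential not present in the profile, and open with a ` +
        `direct answer to the query. Return only the revised pitch.`,
    }],
  });
  return text(polished);
}
```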

End-to-end time from query release to draft in the Google Sheet is about 35 minutes. Most of that is the schedule trigger interval. Actual compute is under 40 seconds.

Customize: tune the Claude prompt to your voice profile format. Most teams rewire this node to match how they structure POV statements and proof points.

5. Human QA

Every draft writes to a Google Sheet with a status cell (pending, approved, rejected). An n8n Slack node pings the reviewer the moment a new draft lands. The SME (or a trusted editor) opens the sheet, reads the draft, edits if needed, and flips status to approved. That flip triggers submission.
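The gate logic is deliberately trivial. Here is a sketch, assuming drafts come out of the sheet as rows with a status cell; the only thing that releases a draft is a human-flipped status.

```typescript
// A sketch of the approval gate. Row shape is illustrative; the submit step
// downstream is whatever your platform supports (webhook or manual paste queue).
type Status = "pending" | "approved" | "rejected";

interface DraftRow {
  queryId: string;
  sme: string;
  draft: string;  // possibly edited by the reviewer before approval
  status: Status;
}

function pickSubmittable(rows: DraftRow[], alreadySent: Set<string>): DraftRow[] {
  // Only a human-flipped "approved" releases a draft, and each query submits
  // at most once even if the poller runs again before the sheet updates.
  return rows.filter((r) => r.status === "approved" && !alreadySent.has(r.queryId));
}
```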

Average time in the sheet: about 3 minutes per draft. About 8% get rejected outright, usually because the SME has a fresh stance not yet in the profile. Those rejections feed back into the voice profile, which keeps it sharpening over time.

Why the gate stays: Featured and HARO run AI-likelihood detection, journalists blacklist generic pitches, and one flag at volume can lock you out of a whole publication pool. Three minutes of QA is the cheapest insurance in the system.

6. Submit and Track

Approved drafts route to submission based on platform. MentionMatch fires through a Google Sheets Trigger node that posts the edited text. Featured gates its auto-submit API behind a paid tier, so most teams run a daily manual paste from the sheet. At 150 responses a month, keeping the last click human is cheap insurance. Every submission writes back to the same sheet with status, placement URL (once live), and publication tier. That becomes the reporting view: responses submitted, placements landed, acceptance rate by SME, and pub tier distribution. Same sheet, no separate dashboard.
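Because everything writes back to one sheet, the reporting view is a handful of small aggregations. Here is a sketch of one, acceptance rate by SME, assuming a write-back row shaped roughly like this (the fields are illustrative):

```typescript
// One write-back row per submission; placementUrl is set once the piece goes live.
interface SubmissionRow {
  sme: string;
  outlet: string;
  pubTier: string;        // tier bucket from the DA reference sheet
  placementUrl?: string;
}

function acceptanceRateBySme(rows: SubmissionRow[]): Record<string, number> {
  const stats: Record<string, { sent: number; placed: number }> = {};
  for (const r of rows) {
    const s = (stats[r.sme] ??= { sent: 0, placed: 0 });
    s.sent += 1;
    if (r.placementUrl) s.placed += 1;
  }
  // sent is always >= 1 for any SME that appears, so the division is safe.
  return Object.fromEntries(
    Object.entries(stats).map(([sme, s]) => [sme, s.placed / s.sent]),
  );
}
```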

Customize: swap the submission method to match your platform. If you pay for Featured's auto-submit tier, wire the webhook in place of the manual paste step.

Download the Workflows

If you would rather have us build it for your team, that is also an option. We run this as a managed service for clients who want the output without the setup. Talk to us.

Reviewed By

Pushkar Sinha

Frequently Asked Questions

How does HARO help with ChatGPT citations?

ChatGPT, Perplexity, and Google AI Overviews cite third-party earned media far more than owned content. When your expert gets quoted across category-relevant trade publications, those mentions build an entity association the model links to your category. The model then surfaces that expert and company in answers to vendor, comparison, and how-to queries. HARO is the cheapest way to generate those mentions at volume. Fifteen placements a month across relevant trade pubs compound into a pattern LLMs recognize and cite.

Does HARO still work in 2026?

HARO itself (now rebranded Connectively) still exists, but the landscape has fragmented. Featured, Qwoted, and MentionMatch handle most of the active query volume for B2B topics. The automation principles in this article apply across all of them because they share the same fundamental structure: a query arrives, a deadline is attached, and an expert response gets considered. Only the ingest nodes change.

How do you keep AI-drafted responses from sounding generic?

The SME voice profile does most of the work. Claude drafts from the expert's actual quotes, stances, and credentials rather than a generic "write like an expert" prompt. Then a 3-minute human QA gate catches anything that still sounds off. The result is a response that sounds like the specific expert, not like AI trying to sound like a generic expert.

What acceptance rate should I expect for automated journalist query responses?

We run at roughly 10%, which sits at the top end of the 5% to 15% industry range for manual HARO responses. We hold that rate at volume because the scoring layer kills around 94% of queries before drafting. Everything that actually gets submitted has real proof behind it and a verified topic fit. You can reach a similar rate by being brutal at the scoring stage, even if you never fully automate the drafting.

How long before journalist query placements show up in AI answers?

It depends on the publication and the LLM training cycle. Muck Rack's research found that about 50% of AI citations come from content published within the last 11 months. So the compounding is real but it is not fast. Think of it as planting the seeds for AI visibility a year out, not a tactic that pays off this quarter.

Can I automate HARO responses without a developer?

Yes. The n8n workflows we ship are visual and no-code. The nodes most teams rewire are the query source (which platform you pull from), the Claude prompt (to fit your voice profile format), the scoring thresholds, and the submission method. If you can follow a screen-share tutorial, you can configure them.

Is this digital PR or SEO?

Both, really. The pipeline is a digital PR execution tool. The outcome, earned media that compounds into AI citations, is a GEO (generative engine optimization) play. We treat it as GEO rather than standalone PR because the compounding value lives in the AI visibility signals, not in the individual placement. That framing also changes how you measure it. Tracking placements per month is fine. Tracking whether your expert starts surfacing in LLM answers for category queries is the real scoreboard.