April 30, 2026

Three Monetization Models for Adversarial Economics

How to turn the S = (U + R) / D scam-asymmetry model into three implementable products: an inbound firewall, a marketplace API, and a transparent recovery directory.

Turning scam prevention into products that raise attacker cost

The core product insight is simple:

Do not detect scams by reading vibes.
Break scam economics by raising verification cost exactly when candidate risk rises.

The underlying model is:

S = (U + R) / D

Where:

  • U is the value of the resource requested from the user;
  • R is the risk attached to giving that resource;
  • D is the verified cost the counterparty has spent proving legitimacy.

Most scam prevention tells the user to become better at reading suspicious text. That is fragile. The better move is to change the transaction.

When a request escalates to more of the candidate's time, trust, documents, account access, local code execution, or wallet exposure, the counterparty must first spend more on verified proof.

In short:

Raise D before U + R gets expensive.

This post turns that idea into three product models an agent can start implementing immediately.

Shared System Primitive

All three products use the same primitive:

requested resource -> risk classification -> required proof -> safe next action

The product does not need to prove criminal intent. It only needs to decide whether the next candidate action is safe enough.

Resource Classes

| Resource requested from user | Risk level | Examples |
| --- | --- | --- |
| Attention | 1 | Chat reply, basic screening |
| Time | 2 | Call, interview, async questionnaire |
| Identity | 4 | CV with personal contacts, LinkedIn, location |
| Documents | 8 | Passport, tax ID, bank details, KYC |
| Work | 5 | Unpaid project, take-home task, project review |
| Account access | 10 | GitHub, Google, Notion, Slack auth |
| Device execution | 10 | Repo execution, APK, extension, desktop app |
| Wallet or funds | 10 | Wallet connection, seed phrase, test transfer |

Proof Classes

| Proof provided by counterparty | Proof value | Notes |
| --- | --- | --- |
| Messenger handle | 0 | Cheap theater |
| Polished PDF or website | 1 | Easy to fake |
| Recruiter LinkedIn with real history | 2 | Useful but insufficient alone |
| Corporate-domain email challenge | 4 | Strong baseline proof |
| Official role URL | 4 | Stronger when independently discoverable |
| Calendar invite from company domain | 3 | Useful process proof |
| Verified legal entity or SOW | 5 | Needed before paid work |
| Paid test or refundable deposit | 5 | Raises counterparty cost |
| Platform-verified employer identity | 6 | Strong B2B proof |

Default Score

S = (resource_risk + requested_commitment) - proof_value

Use a subtractive score in implementation: it encodes the same invariant as the ratio form and is easier to explain in UI and logs.

| Score | State | Default action |
| --- | --- | --- |
| 0-3 | Safe enough | Continue with light warning |
| 4-8 | Needs verification | Ask for proof before next step |
| 9-15 | High-risk | Block expensive user action |
| 16+ | Hostile by operational assumption | Block and route to safety flow |

The exact weights can change. The invariant should not.

As candidate-side risk rises, counterparty-side proof must rise too.
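The tables above can be collapsed into one deterministic function. A minimal sketch, assuming the illustrative weights from the tables (the exact values are tunable; the thresholds follow the score table):

```python
# Subtractive score: risk of what is requested minus value of proof supplied.
# Weights mirror the Resource Classes and Proof Classes tables above.
RESOURCE_RISK = {
    "attention": 1, "time": 2, "identity": 4, "work": 5,
    "documents": 8, "account_access": 10, "device_execution": 10,
    "wallet_or_funds": 10,
}

PROOF_VALUE = {
    "messenger_handle": 0, "polished_pdf_or_website": 1,
    "recruiter_linkedin": 2, "corporate_domain_email": 4,
    "official_role_url": 4, "calendar_invite": 3,
    "verified_legal_entity": 5, "paid_test_or_deposit": 5,
    "platform_verified_employer": 6,
}

def score(requested_resources, provided_proofs, requested_commitment=0):
    risk = sum(RESOURCE_RISK[r] for r in requested_resources)
    proof = sum(PROOF_VALUE[p] for p in provided_proofs)
    return risk + requested_commitment - proof

def state(s):
    # Thresholds from the default-score table.
    if s <= 3:
        return "safe_enough"
    if s <= 8:
        return "needs_verification"
    if s <= 15:
        return "high_risk"
    return "hostile_by_operational_assumption"
```

A run-repo request with no proof scores 10 and lands in the high-risk band; the same chat-level request backed by a corporate-domain email drops below zero.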

Model 1: Proof-of-Stake Inbound Routing

What It Is

A personal inbound gateway for candidates.

Think of it as a Calendly-shaped hiring firewall. A recruiter cannot immediately request the candidate's attention, CV, GitHub, call, or project review. They first pass through a verification gateway.

The product is not:

pay me to talk to me

The cleaner framing is:

prove who you are before renting my attention

Why It Works

Scammers rely on near-zero-cost outreach.

Their unit economics look like this:

cost to fake ~= 0
messages sent ~= many
candidate cost to process ~= high

The gateway raises D, the cost of legitimacy proof, before the candidate spends U + R.

For a real recruiter, this is usually cheap:

submit official role URL
verify corporate email
state compensation range
describe process

For a scammer, doing that reliably at scale is expensive.

MVP Surface

Build the first version as a personal link:

/u/:handle

Example:

hirewall.example/max

The recruiter sees a short form:

| Field | Required | Purpose |
| --- | --- | --- |
| Work email | Yes | Corporate-domain verification |
| Company name | Yes | Display and matching |
| Official role URL | Yes | Independent verification |
| Recruiter LinkedIn | Optional | Extra proof |
| Compensation range | Yes | Filters vague bait |
| Requested next step | Yes | Defines candidate-side risk |
| Message | Yes | Context |

After submission, the system sends a magic link to the work email. The candidate only sees verified or pending leads in their dashboard.
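The static pre-check behind that magic link can be sketched as follows. The free-mail list, the matching rule, and the function names are assumptions for illustration, not a prescribed implementation:

```python
# Pre-check before sending the magic link: reject free-mail senders and
# compare the recruiter's email domain to the official role URL's domain.
from urllib.parse import urlparse

FREE_MAIL = {"gmail.com", "outlook.com", "yahoo.com", "proton.me"}

def domain_of_email(email):
    return email.rsplit("@", 1)[-1].lower()

def domain_of_url(url):
    host = urlparse(url).netloc.lower()
    return host[4:] if host.startswith("www.") else host

def corporate_domain_check(recruiter_email, role_url):
    """Proof stays 'pending' until the link is clicked; this is the static gate."""
    email_domain = domain_of_email(recruiter_email)
    if email_domain in FREE_MAIL:
        return "rejected_free_mail"
    if email_domain == domain_of_url(role_url):
        return "domain_match"
    return "domain_mismatch_needs_review"
```

A mismatch is not an automatic rejection: agencies legitimately recruit from their own domain, so that branch routes to review rather than to a block.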

Data Objects

User
- id
- handle
- public_name
- email
- plan

InboundRequest
- id
- user_id
- company_name
- recruiter_email
- recruiter_domain
- role_url
- recruiter_linkedin
- compensation_range
- requested_next_step
- message
- proof_status
- risk_score
- decision
- created_at

VerificationEvent
- id
- inbound_request_id
- type
- status
- evidence
- created_at

Core Flow

recruiter opens personal link
-> submits role and requested next step
-> system classifies requested resource risk
-> system sends corporate-domain verification link
-> proof value is computed
-> candidate sees structured request with score
-> candidate accepts, rejects, or sends verification challenge

Monetization

Start candidate-side.

| Plan | Price | Value |
| --- | --- | --- |
| Free | $0 | 1 public gateway, limited requests |
| Search | $9-19/month | Unlimited requests, scoring, templates |
| Pro | $49/month | Multiple profiles, Web3/freelance risk packs, evidence log |

Add micro-stake later only if needed. A burned-stake system introduces disputes, chargebacks, and abuse risk. Domain proof and structured scope are cleaner for MVP.

Acceptance Criteria

  • A candidate can create a public inbound gateway link.
  • A recruiter can submit a structured hiring request.
  • A recruiter must verify through a corporate-domain email before the request is marked verified.
  • The request receives a risk score based on requested next step and provided proof.
  • The candidate sees a safe summary and a suggested reply.
  • The candidate is warned before any high-risk request such as code execution, document sharing, wallet action, or account login.

Agent Build Prompt

Build an MVP called Hiring Firewall.

Implement a candidate-side inbound gateway where recruiters submit structured hiring requests before asking for attention, CVs, calls, project reviews, GitHub access, documents, wallet actions, or code execution.

Core pages:
- candidate signup/login
- public gateway page at /u/:handle
- recruiter request form
- email-domain verification flow
- candidate dashboard with request cards
- request detail page with risk score and safe reply template

Core model:
Score = requested_resource_risk + requested_commitment - proof_value.

Block high-risk next steps until proof is sufficient.
Do not implement burned stakes in MVP. Use corporate-domain verification, official role URL, compensation range, and structured scope as proof.

Model 2: Automated Asymmetry-Balancer API

What It Is

A B2B trust-and-safety API for hiring platforms, freelance marketplaces, Web3 bounty boards, developer communities, and remote-work platforms.

The API watches for moments where a counterparty asks the user to spend a high-risk resource. When that happens, it returns the required proof level and the recommended UI interruption.

The product primitive is:

if requested candidate resource becomes expensive
then employer-side verification requirement increases

Why It Works

Platforms already have the hardest part: traffic and interaction data.

They can see:

  • external links;
  • file attachments;
  • project review requests;
  • requests to run repos;
  • requests for wallet actions;
  • requests for KYC;
  • suspicious urgency;
  • identity mismatch between profile, email, domain, and company.

The platform problem is that the user often reaches the danger point before verification catches up.

The API closes that gap.

API Contract

Endpoint:

POST /v1/evaluate-interaction

Input:

{
  "platform_user_id": "candidate_123",
  "counterparty_id": "employer_456",
  "counterparty_type": "recruiter",
  "interaction_context": "message",
  "requested_action": "run_repo",
  "message_excerpt": "Please clone this repo and run npm install before the interview.",
  "provided_proofs": ["profile_verified", "company_page"],
  "available_metadata": {
    "company_domain_verified": false,
    "official_role_url_present": false,
    "payment_method_verified": false
  }
}

Output:

{
  "risk_score": 17,
  "state": "hostile_by_operational_assumption",
  "required_proofs": [
    "company_domain_email",
    "official_role_url",
    "verified_contract_or_paid_test"
  ],
  "recommended_ui": {
    "pattern": "scam_alert_pie",
    "danger": "High-risk technical step.",
    "micro_lesson": "A repo is not proof. It can be the attack surface.",
    "cta": "Run verification check"
  },
  "block_user_action": true,
  "safe_reply": "Before I run code or open project files, please verify the company domain, send the official role URL, and provide the contracted test-task process."
}
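The decision core behind this endpoint can stay deterministic. A sketch, where the action weights and the fixed proof ladder are illustrative assumptions rather than production values:

```python
# Decision core for /v1/evaluate-interaction: score the requested action,
# credit provided proofs, and return missing proofs plus a blocking decision.
ACTION_RISK = {
    "open_chat": 1, "send_cv": 4, "join_call": 2,
    "project_review": 5, "run_repo": 10, "install_app": 10,
    "connect_wallet": 10, "send_documents": 8,
}

PROOF_VALUE = {
    "profile_verified": 1, "company_page": 1,
    "company_domain_email": 4, "official_role_url": 4,
    "verified_contract_or_paid_test": 5,
}

def evaluate_interaction(requested_action, provided_proofs, urgency=0):
    risk = ACTION_RISK.get(requested_action, 5)  # unknown actions: mid risk
    proof = sum(PROOF_VALUE.get(p, 0) for p in provided_proofs)
    score = risk + urgency - proof
    # Whatever proof is missing becomes the requirement list for the platform.
    required = [p for p in ("company_domain_email", "official_role_url",
                            "verified_contract_or_paid_test")
                if p not in provided_proofs]
    return {
        "risk_score": score,
        "block_user_action": risk >= 8 and score > 3,
        "required_proofs": required if score > 3 else [],
    }
```

Note the blocking rule: a dangerous action class (repo execution, wallet connection) blocks unless proof pulls the score back into the safe band, while cheap actions never block at all.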

Risk Matrix

| Requested action | Required proof |
| --- | --- |
| Open chat | Light warning |
| Send CV | Corporate-domain email or official role URL |
| Join call | Verified recruiter identity |
| Do project review | Domain email + official role + paid/contracted step |
| Run repo | High-risk lock + verified company + sandbox guidance |
| Install app | Block until high-trust verification |
| Connect wallet | Block by default |
| Send documents or KYC | Legal entity + signed process + human review |

Product Surfaces

The API should support three surfaces:

| Surface | User | Output |
| --- | --- | --- |
| Server API | Platform backend | Risk state and required proof |
| UI widget | Product frontend | Scam Alert Pie component |
| Admin console | Trust team | Logs, patterns, thresholds, overrides |

Monetization

Charge platforms because compromised candidates are platform churn, support cost, reputational risk, and potential legal exposure.

| Pricing model | Good for |
| --- | --- |
| Per protected interaction | Early integrations |
| Per active candidate | Hiring marketplaces |
| Platform trust module subscription | Mature B2B SaaS |
| Enterprise trust-and-safety package | Web3, freelance, or developer-heavy platforms |

Acceptance Criteria

  • A platform can send a requested action and proof set to the API.
  • The API returns a risk score, state, required proofs, UI recommendation, and block_user_action.
  • The API includes a safe reply template.
  • High-risk actions such as repo execution, app installation, wallet connection, and KYC produce blocking states unless proof is strong enough.
  • The UI widget can render a stable warning, rotating micro-lesson, and fixed CTA.
  • The admin console can inspect evaluations and tune risk weights without code deploys.

Agent Build Prompt

Build an Asymmetry Balancer API MVP for hiring and freelance platforms.

Implement:
- POST /v1/evaluate-interaction
- risk scoring based on requested_action and provided_proofs
- required_proofs generation
- recommended Scam Alert Pie UI payload
- safe_reply generation from templates
- admin-configurable weights
- basic event logging

Create a small demo page where a platform operator selects a requested action, proof set, and message excerpt, then sees the returned risk state, blocking decision, warning copy, and safe reply.

Use deterministic scoring first. Do not depend on LLM classification for MVP. Leave a later extension point for LLM-assisted message analysis.

Model 3: Score 16+ Incident Response Lead Generation

What It Is

A free scam-risk prompt and checklist that routes users to recovery paths when they are already in danger.

This is the fastest monetization path, but it is ethically sensitive.

The user may arrive after they already:

  • ran unknown code;
  • installed an app;
  • sent passport or tax documents;
  • shared bank details;
  • connected a wallet;
  • logged in through GitHub or Google;
  • sent KYC materials;
  • performed unpaid project work.

At that moment, the product can route them to the right recovery checklist, service, or specialist.

Why It Works

The prompt is the top of funnel.

High-risk classifications reveal a very specific need:

| Danger point | Recovery path |
| --- | --- |
| Ran unknown repo | Device cleanup, malware scan, credential rotation |
| Installed APK or extension | Mobile/browser security cleanup |
| Sent passport or KYC | Identity protection checklist |
| Shared bank details | Bank fraud checklist |
| Connected wallet | Wallet incident response |
| Logged in with GitHub/Google | OAuth audit and token revocation |
| Sent unpaid work | Evidence log and boundary reply |

This is valuable because the user is not a generic lead. They are a pre-qualified person with a concrete incident type and high intent.
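The routing itself is a plain lookup from detected danger points to recovery paths. A sketch, with keys and titles assumed from the table above:

```python
# Map detected danger points to recovery paths; unknown points are skipped
# rather than guessed, so the user never gets an irrelevant checklist.
RECOVERY_PATHS = {
    "ran_unknown_repo": "Device cleanup, malware scan, credential rotation",
    "installed_apk_or_extension": "Mobile/browser security cleanup",
    "sent_passport_or_kyc": "Identity protection checklist",
    "shared_bank_details": "Bank fraud checklist",
    "connected_wallet": "Wallet incident response",
    "oauth_login": "OAuth audit and token revocation",
    "sent_unpaid_work": "Evidence log and boundary reply",
}

def route(danger_points):
    """Return a recovery path per detected danger point, free steps first."""
    return [RECOVERY_PATHS[d] for d in danger_points if d in RECOVERY_PATHS]
```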

Ethical Constraint

Do not monetize panic.

The product must not scare the user into buying emergency services.

Use this order:

free first aid first
plain-language severity second
paid options third
affiliate disclosure always

The trust rule:

If the protector becomes an extractor, the model is dead.

MVP Surface

Build a public page:

/check

The user pastes a conversation or answers a structured checklist.

The product returns:

  • Green / Yellow / Red verdict;
  • score;
  • exact danger points;
  • do-not-do list for 24 hours;
  • safe next message;
  • recovery checklist if damage may have occurred;
  • optional service directory with transparent disclosures.

Data Objects

CheckSession
- id
- created_at
- input_type
- risk_score
- verdict
- danger_points
- recovery_paths
- safe_reply
- user_email_optional

RecoveryPath
- id
- trigger
- title
- free_steps
- paid_options
- affiliate_disclosure
- severity

PartnerService
- id
- category
- name
- url
- jurisdiction
- disclosure
- verification_status

Recovery Path Example

Trigger:

ran_unknown_repo

Free first aid:

1. Disconnect from sensitive accounts.
2. Stop running the project.
3. Do not enter credentials into any related tool.
4. Rotate passwords for accounts used on the device.
5. Review GitHub, Google, and package-manager tokens.
6. Run a malware scan or use a clean device for sensitive accounts.
7. Save evidence before deleting messages.

Paid options:

Optional: professional device cleanup or incident response.
Disclosure: some listed services may pay referral fees.

Monetization

Use a transparent recovery directory.

| Revenue path | Constraint |
| --- | --- |
| Affiliate referrals | Must be disclosed |
| Sponsored recovery listings | Must be labeled |
| Premium incident checklist | Free version must remain useful |
| B2B referral partnerships | No panic copy |
| Paid human review | Clear scope and limits |

Acceptance Criteria

  • A user can run a free scam check without creating an account.
  • The system returns a verdict, score, danger points, safe reply, and do-not-do list.
  • A score of 16+ routes the user to recovery paths based on detected danger points.
  • Every recovery path starts with free first-aid steps.
  • Paid referrals are visibly disclosed.
  • The copy avoids panic escalation and does not imply certainty when evidence is incomplete.

Agent Build Prompt

Build a free Hiring Scam Check MVP.

Implement:
- public /check page
- structured input form and paste box
- deterministic scoring using S = (U + R) / D style weights
- verdict: Green / Yellow / Red
- danger point extraction from selected checklist items
- safe reply template generation
- 24-hour do-not-do list
- recovery path routing for high-risk incidents
- transparent recovery directory with affiliate disclosure fields

Do not use fear-based copy.
Every paid option must be preceded by useful free first-aid steps.
Do not require account creation for the first check.

The Monetization Ladder

The three products can exist independently, but they fit naturally into one ladder.

Free:
article + prompt + scorecard

Low-ticket:
candidate safety kit

Mid-ticket:
personal inbound firewall

B2B:
Asymmetry Balancer API

High-intent:
transparent recovery directory

The cleanest starting path is usually:

free check -> candidate firewall -> B2B API

Lead generation can monetize early, but it must be handled carefully. If trust is damaged there, the whole adversarial-economics brand becomes incoherent.

Implementation Order

If one agent were implementing this from scratch, the best sequence would be:

  1. Build the deterministic scoring library.
  2. Build the free Hiring Scam Check UI.
  3. Add recovery paths for 16+ incidents.
  4. Convert the same scoring library into the candidate inbound firewall.
  5. Extract the scoring library behind an API.
  6. Add B2B admin controls and event logs.
  7. Add marketplace UI widget support.

The reason is simple: all three products share the same scoring primitive. Build that once.

Shared Scoring Module Spec

Input:
- requested_resources[]
- provided_proofs[]
- urgency_signals[]
- counterparty_context

Output:
- risk_score
- state
- danger_points[]
- required_proofs[]
- block_user_action
- safe_reply_template
- recovery_paths[]

State mapping:

0-3: safe_enough
4-8: needs_verification
9-15: high_risk
16+: hostile_by_operational_assumption

The implementation should be deterministic first. LLM analysis can improve extraction later, but the core product should not depend on model mood.
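A minimal module honoring that contract might look like the sketch below. Weight tables are passed in so each product can tune them without a code change; the parameter names are assumptions based on the spec, and the thresholds follow its state mapping:

```python
# Shared scoring module: one deterministic entry point for all three products.
# Upper score bound per state, in ascending order; above 15 is hostile.
STATES = [(3, "safe_enough"), (8, "needs_verification"), (15, "high_risk")]

def evaluate(requested_resources, provided_proofs, urgency_signals,
             resource_risk, proof_value):
    score = (sum(resource_risk.get(r, 5) for r in requested_resources)
             + len(urgency_signals)  # each urgency signal nudges risk up
             - sum(proof_value.get(p, 0) for p in provided_proofs))
    state = next((name for limit, name in STATES if score <= limit),
                 "hostile_by_operational_assumption")
    return {
        "risk_score": score,
        "state": state,
        "block_user_action": state in ("high_risk",
                                       "hostile_by_operational_assumption"),
        # Danger points: any requested resource with a high base risk.
        "danger_points": [r for r in requested_resources
                          if resource_risk.get(r, 5) >= 8],
    }
```

Keeping the weight tables external is what makes step 6 of the implementation order possible: a trust team can retune thresholds from an admin console without a deploy.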

Product Thesis

The market is not "scam detection."

The market is restoring symmetry in dangerous hiring flows.

That is a stronger category because it covers scammers, exploitative employers, chaotic recruiters, fake Web3 roles, freelance project bait, and suspicious business relationships that are not cleanly classifiable but are still harmful.

The final product line can be summarized as:

Protect user resources by forcing counterparty proof to rise with requested user risk.

Or shorter:

Raise D before U + R gets expensive.