<?xml version="1.0" encoding="UTF-8"?>
<rfc xmlns:xi="http://www.w3.org/2001/XInclude"
     version="3"
     xml:lang="en"
     category="info"
     consensus="false"
     submissionType="IETF"
     ipr="trust200902"
     docName="draft-dhir-http-agent-profile-00">

  <front>
    <title abbrev="HTTP Agent Profile">
      HTTP Agent Profile (HAP): Authenticated and Monetized Agent Traffic on the Web
    </title>

    <seriesInfo name="Internet-Draft" value="draft-dhir-http-agent-profile-00"/>

    <author fullname="Sanat Dhir" initials="S." surname="Dhir">
      <organization>Independent</organization>
      <address>
        <email>sdhir26@gsb.columbia.edu</email>
      </address>
    </author>

    <date year="2025" month="November" day="24"/>

    <abstract>
      <t>
        Autonomous agents such as LLM-powered crawlers, browser-integrated assistants,
        and task-oriented bots are rapidly becoming first-class HTTP clients on the
        Web. Today’s infrastructure largely assumes a human behind a browser
        and monetizes content through advertising and coarse subscriptions. Automated
        agents consume content at scale without rendering pages or viewing ads,
        exacerbating bot-mitigation arms races and economic misalignment between
        content providers and AI systems.
      </t>
      <t>
        This document describes an HTTP Agent Profile (HAP) that enables: (1)
        cryptographic authentication of agent traffic using HTTP Message Signatures;
        (2) clear separation between human and agent traffic using privacy-preserving
        human tokens; and (3) protocol-level value exchange for agents via HTTP
        status code 402 ("Payment Required") and pluggable micropayment
        mechanisms. The profile reuses existing HTTP features and is designed for
        incremental deployment via reverse proxies, CDNs, and agent libraries.
      </t>
    </abstract>
  </front>

  <middle>

    <!-- 1. Introduction -->
    <section anchor="intro" numbered="true">
      <name>Introduction</name>
      <t>
        Web traffic is undergoing a shift from primarily human-driven browsing to
        increasing volumes of autonomous agent activity. Modern agents include search
        crawlers, LLM-based assistants that browse on behalf of users, and specialized
        bots that fetch and process large amounts of content. These agents often access
        resources without rendering pages or viewing ads, and without participating in
        the economic arrangements that publishers rely on for human visitors.
      </t>
      <t>
        Existing mechanisms for distinguishing human traffic from automated traffic
        rely on fragile signals such as User-Agent strings, IP address ranges,
        and CAPTCHAs. Sophisticated agents can mimic human behavior, use residential
        proxies, and even outsource CAPTCHA solving to humans, making traditional bot
        detection increasingly ineffective. At the same time, publishers and API
        providers respond with more aggressive blocking and rate limiting, which
        frequently catches legitimate agent services in the crossfire. The result is
        an adversarial "dark forest" dynamic in which neither side has
        clear, protocol-level tools to cooperate.
      </t>
      <t>
        In parallel, regulators are starting to require clearer transparency about
        AI systems: for example, rules that users must be informed when interacting
        with an AI system rather than a human, and that bots must not misrepresent
        themselves for certain purposes. These pressures suggest that HTTP should grow
        more explicit mechanisms for agent identification and handling.
      </t>
      <t>
        This document proposes an HTTP Agent Profile (HAP) as an HTTP-based approach
        to: (1) authenticate agent traffic using HTTP Message Signatures
        (<xref target="RFC9421"/>); (2) keep a
        separate "human lane" identified by privacy-preserving human
        tokens; and (3) enable protocol-level payments for agent access using HTTP
        402. Rather than defining a new application protocol, HAP stays within HTTP
        semantics (<xref target="RFC9110"/>) and is intended to be deployed via existing HTTP
        infrastructure, especially reverse proxies, CDNs, and agent SDKs.
      </t>
    </section>

    <!-- 2. Conventions and Terminology -->
    <section anchor="conventions" numbered="true">
      <name>Conventions and Terminology</name>
      <t>
        The key words "MUST", "MUST NOT", "REQUIRED",
        "SHALL", "SHALL NOT", "SHOULD", "SHOULD
        NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY",
        and "OPTIONAL" in this document are to be interpreted as described
        in BCP 14 <xref target="RFC2119"/> <xref target="RFC8174"/> when, and only when, they appear in all capitals.
      </t>
      <t>
        The following terms are used throughout this document:
      </t>
      <ul>
        <li>
          <em>Agent</em>: An autonomous HTTP client that issues requests without a
          human directly driving each interaction (for example, a crawler, an
          LLM-based assistant, or an automated script).
        </li>
        <li>
          <em>Human client</em>: A human user interacting with a browser or similar
          user agent.
        </li>
        <li>
          <em>HTTP Agent Profile (HAP)</em>: A set of HTTP conventions for
          authenticating agents, separating human versus agent traffic, and
          expressing value exchange requirements for agents using HTTP 402.
        </li>
        <li>
          <em>Signed agent request</em>: An HTTP request that carries an HTTP
          Message Signature bound to an agent identifier.
        </li>
        <li>
          <em>Payment challenge</em>: An HTTP 402 response that encodes a
          machine-readable requirement for payment or other economic work before
          access is granted.
        </li>
      </ul>
    </section>

    <!-- 3. Requirements and Problem Statement -->
    <section anchor="requirements" numbered="true">
      <name>Requirements and Problem Statement</name>
      <t>
        Existing approaches to managing automated traffic have several limitations:
        User-Agent strings are self-declared and trivial to spoof; IP-based blocking
        is undermined by residential proxy networks and shared hosting; and
        CAPTCHA-based human verification is increasingly defeated by automation or
        outsourced labor, while degrading user experience for real humans. Many sites
        treat any suspicious traffic as hostile, which often includes legitimate
        agents that do not identify themselves.
      </t>
      <t>
        We identify the following requirements for a pragmatic solution:
      </t>
      <ol>
        <li>
          <em>R1 – Agent Identification:</em> Servers should be able to
          cryptographically verify the origin of a request (which agent/software
          operator is responsible), rather than relying on self-asserted headers.
        </li>
        <li>
          <em>R2 – Human vs. Agent Separation:</em> The mechanism should enable
          servers to treat human and agent traffic differently, without imposing
          undue friction (like CAPTCHAs) on legitimate human users.
        </li>
        <li>
          <em>R3 – Value Exchange:</em> Servers should be able to require payment or
          other forms of compensation from agents at the HTTP protocol level (for
          example, using HTTP 402 challenges) in a machine-readable, automatable
          way.
        </li>
        <li>
          <em>R4 – Incremental Deployability:</em> The solution must work within
          existing HTTP/HTTPS infrastructure (standard ports, TLS, proxies, CDNs)
          and allow gradual adoption with graceful fallback for non-participating
          clients.
        </li>
        <li>
          <em>R5 – Privacy and Openness:</em> The design should accommodate both
          persistent agent identities (for building reputation) and ephemeral
          ones (for privacy), and it should not mandate that all content be behind
          a paywall for agents – free access options should remain possible.
        </li>
        <li>
          <em>R6 – Extensibility:</em> Higher-level frameworks (like robots.txt/llms.txt
          rules, AI metadata, or reputation systems) should be able to build on
          top of the mechanism, using it as a foundation for more sophisticated
          policy or trust layers.
        </li>
      </ol>
      <t>
        In summary, the Web currently lacks a standardized method for agents to
        authenticate themselves or negotiate economic terms, and this proposal aims
        to fill that gap while meeting the above requirements.
      </t>
    </section>

    <!-- 4. Design Space: New Protocol vs HTTP Profile -->
    <section anchor="design-space" numbered="true">
      <name>Design Space: New Protocol vs HTTP Profile</name>
      <t>
        One possible approach to agent traffic is to define a new application-layer
        protocol specifically for agents (an "Agent-HTTP"). This could, for
        example, integrate attestation and payments into the connection handshake
        and optimize message patterns for agent communication.
      </t>
      <t>
        However, a clean-slate protocol would face significant deployment barriers:
        introducing new ALPN identifiers can trigger incompatible behavior in
        network middleboxes (firewalls, etc.), and it would require establishing new
        trust and certificate infrastructures for agent identities or attestations.
        It would likely see slow adoption, as all parties would need to upgrade to
        speak the new protocol.
      </t>
      <t>
        By contrast, HAP takes a more incremental path by defining a profile within
        HTTP:
      </t>
      <ul>
        <li>It reuses HTTP/1.1, HTTP/2, and HTTP/3 transports and semantics unchanged.</li>
        <li>It adds profile-specific headers and behaviors (for authentication and
        payment) rather than altering core protocol syntax or introducing new
        methods.</li>
        <li>It can be implemented in middleware (reverse proxies, gateways) or
        libraries, meaning sites and agents can adopt it without a full-stack
        overhaul. An optional ALPN token or HTTP/2 setting can be used as a hint
        for efficiency but is not required for correctness.</li>
      </ul>
      <t>
        We therefore focus on HAP – an HTTP profile for authenticated, monetized
        agent traffic – as the more practical approach. A future, purpose-built
        agent protocol (informally, "HTTPA") could be considered if experience
        with HAP reveals the need for deeper changes, but initially HAP stays within
        the bounds of HTTP to maximize deployability.
      </t>
    </section>

    <!-- 5. HTTP Agent Profile Overview -->
    <section anchor="overview" numbered="true">
      <name>HTTP Agent Profile Overview</name>
      <t>
        At a high level, the HTTP Agent Profile defines two conceptual lanes for
        web traffic:
      </t>
      <ol>
        <li>
          <em>Human traffic lane:</em> Requests from human-driven clients may
          carry a valid human token (for example, a Privacy Pass token <xref target="RFC9578"/>) when
          available. Such requests are treated as human-originated, meaning
          the server does not subject them to HAP's agent-signature or
          payment requirements. Human users thus continue to browse without
          additional
          protocol friction (aside from the invisible token exchange).
        </li>
        <li>
          <em>Agent traffic lane:</em> Requests from autonomous agents carry an
          HTTP Message Signature that binds the request to an agent identity
          (see <xref target="agent-auth"/>). Servers recognizing these signatures can apply agent-specific
          policies: for example, issuing an HTTP 402 challenge to request payment,
          enforcing stricter rate limits, or denying access based on the agent’s
          reputation or lack of compliance.
        </li>
      </ol>
      <t>
        Requests with neither a signature nor a human token fall into a legacy or
        unknown category. Servers can handle these as they do today – possibly with
        CAPTCHAs, blocking, or lower service quality – or they may choose to
        gradually require HAP signals for certain resources once adoption is
        sufficient. The intention is that over time, most legitimate traffic will
        present one of the signals (human or agent), simplifying traffic classification.
      </t>
    </section>

    <!-- 6. Agent Authentication Using HTTP Message Signatures -->
    <section anchor="agent-auth" numbered="true">
      <name>Agent Authentication Using HTTP Message Signatures</name>
      <t>
        HAP-compliant agents use HTTP Message Signatures (<xref target="RFC9421"/>) to authenticate their
        requests. Each HTTP request from an agent is augmented with a digital signature
        covering selected components of the request (such as headers and the request
        target).
      </t>
      <t>
        <strong>Key establishment:</strong> An agent operator generates one or more cryptographic
        key pairs (for example, an Ed25519 key). The agent’s public key(s) are published in a
        key directory – for example at a known URL under the agent’s control. A
        suggested convention is to use the HTTP Message Signatures key directory
        format at a well-known URI (for example,
        <tt>https://&lt;agent-domain&gt;/.well-known/agent-keys</tt>),
        which can list the agent’s current public keys and metadata (key IDs,
        expiration, and so on).
      </t>
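      <t>
        This document does not specify the directory format. As a purely
        illustrative sketch (all field names here are hypothetical), a JSON
        document served at the well-known URI might look like:
      </t>
      <sourcecode type="json"><![CDATA[
```json
{
  "agent": "https://agent.example.com",
  "keys": [
    {
      "keyid": "agent-key-2025-01",
      "alg": "ed25519",
      "public_key": "<base64url-encoded raw public key>",
      "expires": "2025-07-01T00:00:00Z"
    }
  ]
}
```
]]></sourcecode>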
      <t>
        <strong>Request structure:</strong> The agent includes the following headers in each request:
      </t>
      <t>
        <strong>Signature-Input</strong>: Parameters indicating which parts of the HTTP
        request are covered by the signature, along with metadata such as a
        creation time, an expiration time, and a key identifier (<tt>keyid</tt>).
        For example, it might declare that the signature covers the
        <tt>@authority</tt> (host), <tt>@method</tt>, and <tt>@path</tt>
        components and a custom <tt>Signature-Agent</tt> header (see below),
        with a creation timestamp and a <tt>keyid</tt> of "agent-key-2025-01".
      </t>
      <t>
        <strong>Signature</strong>: The actual digital signature over the specified components,
        encoded in Base64. This is typically labeled (for example, "Signature: sig1=...")
        matching the label used in Signature-Input.
      </t>
      <t>
        <strong>Signature-Agent</strong>: A header identifying the agent. This could be a URI
        (for example, "https://agent.example.com") or a name corresponding to the
        key directory. The contents of Signature-Agent are covered by the signature
        (by including "signature-agent" in the Signature-Input), ensuring an
        attacker cannot alter which agent is claiming to send the request.
      </t>
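      <t>
        As a non-normative illustration, a signed agent request might carry
        headers such as the following (all values are examples, the signature
        bytes are elided, and long lines are wrapped with "\" for display
        only):
      </t>
      <sourcecode type="http-message"><![CDATA[
```http
GET /articles/42 HTTP/1.1
Host: publisher.example
Signature-Agent: https://agent.example.com
Signature-Input: sig1=("@method" "@authority" "@path" \
  "signature-agent");created=1735689600;\
  keyid="agent-key-2025-01"
Signature: sig1=:<Base64-encoded signature bytes>:
```
]]></sourcecode>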
      <t>
        <strong>Verification on the server:</strong> When a HAP-enabled server (or intermediary)
        receives a request with those headers, it will:
      </t>
      <ol>
        <li>
          Parse the Signature-Input to understand which components were signed and
          obtain the key identifier (keyid) and any metadata (for example, timestamp).
        </li>
        <li>
          Locate the agent’s public key. This could involve fetching the key from the
          URL specified by Signature-Agent (if it is an HTTPS URI) or looking up a
          cached key if seen recently. The keyid helps select the correct key from the
          agent’s key set.
        </li>
        <li>
          Verify the signature using the public key and the reconstructed signing
          string (per <xref target="RFC9421"/>). If verification fails, the server knows the
          request was not actually from the claimed agent (or was tampered with), and
          it can be rejected (for example, treated as an invalid request or as a likely
          malicious bot).
        </li>
        <li>
          If verification succeeds, the server now has an authenticated agent identity
          associated with the request. It can log this or use it in further policy
          decisions. For example, it might map the Signature-Agent URI to an internal
          identifier or check it against an allow/deny list.
        </li>
      </ol>
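      <t>
        The signing and verification steps above can be sketched in a few
        dozen lines. The following non-normative example builds the
        <xref target="RFC9421"/> signature base for a fixed set of covered
        components; an HMAC stands in for a real asymmetric algorithm such as
        Ed25519 so that the sketch is self-contained, whereas an actual HAP
        deployment would verify against the agent's published public key:
      </t>
      <sourcecode type="python"><![CDATA[
```python
import base64
import hashlib
import hmac
import re

COVERED = '("@method" "@authority" "@path" "signature-agent")'

def signature_base(method, authority, path, agent, params):
    # RFC 9421 signature base: one line per covered component,
    # followed by the @signature-params line.
    return "\n".join([
        f'"@method": {method}',
        f'"@authority": {authority}',
        f'"@path": {path}',
        f'"signature-agent": {agent}',
        f'"@signature-params": {params}',
    ])

def sign(method, authority, path, agent, key, keyid, created):
    # Assemble the signature parameters and the three HAP headers.
    params = f'{COVERED};created={created};keyid="{keyid}"'
    base = signature_base(method, authority, path, agent, params)
    tag = hmac.new(key, base.encode(), hashlib.sha256).digest()
    return {
        "Signature-Agent": agent,
        "Signature-Input": f"sig1={params}",
        "Signature": "sig1=:" + base64.b64encode(tag).decode() + ":",
    }

def verify(method, authority, path, headers, key):
    # Step 1: parse Signature-Input to recover the parameters.
    m = re.fullmatch(r"sig1=(\(.*\);.*)", headers["Signature-Input"])
    if m is None:
        return False
    # Steps 2-3: rebuild the signature base exactly as the signer
    # did, then compare tags in constant time.
    base = signature_base(method, authority, path,
                          headers["Signature-Agent"], m.group(1))
    sig = base64.b64decode(headers["Signature"].split(":")[1])
    expected = hmac.new(key, base.encode(), hashlib.sha256).digest()
    return hmac.compare_digest(sig, expected)

key = b"demo-shared-key"
hdrs = sign("GET", "publisher.example", "/articles/42",
            "https://agent.example.com", key, "agent-key-2025-01",
            1735689600)
assert verify("GET", "publisher.example", "/articles/42", hdrs, key)
# A request tampered with in transit must fail verification.
assert not verify("GET", "publisher.example", "/admin", hdrs, key)
```
]]></sourcecode>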
      <t>
        <strong>Key rotation and metadata:</strong> The agent’s key directory can indicate when keys
        expire or are revoked. Agents SHOULD rotate keys periodically (for instance,
        using a new keypair every 3–6 months) to limit the impact of key compromise. Servers
        SHOULD NOT cache keys beyond their advertised validity and SHOULD fetch updates
        as needed. If a previously seen key suddenly fails verification, the server MAY
        fetch a fresh copy of the directory in case of key rotation.
      </t>
      <t>
        <strong>Impersonation resistance:</strong> Because the signature is tied to the agent’s domain
        (or identifier), another party cannot masquerade as that agent without access to
        its private key. Even if a malicious bot claims the same Signature-Agent value
        as a well-known agent, its Signature header will not validate against the real
        agent’s public key, and thus it will be recognized as a fake. This is a major
        improvement over the status quo where simply claiming
        <tt>User-Agent: Googlebot</tt> might fool some defenses.
      </t>
    </section>

    <!-- 7. Human vs. Agent Separation -->
    <section anchor="human-agent-separation" numbered="true">
      <name>Human vs. Agent Separation</name>
      <t>
        HAP aims to avoid imposing protocol-level payment or signature requirements
        on human users whenever possible. For human-driven traffic, modern
        privacy-preserving human tokens such as Privacy Pass tokens <xref target="RFC9578"/>
        can provide a strong signal that a request originates from a human user on a legitimate
        device, without exposing a stable identifier.
      </t>
      <t>
        In a HAP deployment, a server or intermediary can use the following
        classification strategy:
      </t>
      <ul>
        <li>
          If a request presents a valid human token, it is treated as human
          traffic. Normal user experience applies (ads, subscriptions, or site
          policies).
        </li>
        <li>
          If a request presents a valid agent signature, it is treated as agent
          traffic and subject to agent-specific policies (including possible
          payment requirements).
        </li>
        <li>
          If a request presents neither signal, it is treated as legacy or
          unknown traffic and handled according to existing bot mitigation
          and security policies.
        </li>
      </ul>
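      <t>
        The classification strategy above reduces to a small routing
        function. In this non-normative sketch, the two predicate arguments
        are hypothetical hooks standing in for a real human-token validator
        and an HTTP Message Signature verifier:
      </t>
      <sourcecode type="python"><![CDATA[
```python
def classify(headers, is_valid_human_token, is_valid_agent_signature):
    """Return the HAP lane for a request: 'human', 'agent',
    'hybrid', or 'legacy'.

    The two predicates are hypothetical hooks standing in for a
    real human-token validator and an HTTP Message Signature
    verifier.
    """
    has_human = is_valid_human_token(headers)
    has_agent = ("Signature-Input" in headers
                 and is_valid_agent_signature(headers))
    if has_human and has_agent:
        return "hybrid"  # user-side agent presenting both signals
    if has_human:
        return "human"
    if has_agent:
        return "agent"
    return "legacy"

# A request with only a valid agent signature lands in the agent lane.
signed = {"Signature-Input": "sig1=(...)", "Signature": "sig1=:...:"}
assert classify(signed, lambda h: False, lambda h: True) == "agent"
```
]]></sourcecode>
      <t>
        Treating requests that present both signals as a distinct hybrid
        lane lets servers apply the mixed policies described below.
      </t>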
      <t>
        User-side agents (such as browser-integrated assistants) MAY choose to
        present both a human token and an agent signature, to indicate that an
        autonomous component is acting on behalf of a specific user’s
        browsing session. In such cases, servers can apply hybrid policies, for
        example permitting some forms of automated access at human-like rates
        without payment, while requiring payment for high-volume or bulk access.
      </t>
    </section>

    <!-- 8. HTTP 402 and Payment Profiles -->
    <section anchor="payments" numbered="true">
      <name>HTTP 402 and Payment Profiles</name>
      <t>
        HTTP status code 402 ("Payment Required") was reserved in early
        HTTP specifications but left without a standardized meaning for decades.
        HAP adopts 402 as a machine-readable signal that a request from an
        authenticated agent is potentially acceptable, but access is contingent on
        some form of payment or economic work.
      </t>
      <t>
        When a server determines that an agent request requires payment, it responds
        with 402 instead of 200. The response includes sufficient information for
        the agent to understand how to fulfill the requirement. This information
        may appear in header fields (for example, a WWW-Authenticate challenge) and
        in the response body. Once the agent has fulfilled the requirement, it
        retries the request with proof of payment and, if validation succeeds, the
        server returns the requested resource.
      </t>
      <t>
        HAP itself is payment-agnostic. It defines the use of 402 as a challenge
        mechanism and allows different payment systems to be bound to 402 via
        <em>payment profiles</em>. A payment profile specifies:
      </t>
      <ul>
        <li>
          How the server describes offers (price, currency, scope, and duration)
          in 402 responses.
        </li>
        <li>
          How the agent obtains a concrete payment request (for example, a
          Lightning invoice or other payment token).
        </li>
        <li>
          How the agent proves successful payment in subsequent requests.
        </li>
      </ul>
      <t>
        One concrete payment profile is L402 (<xref target="L402"/>), which combines Macaroon tokens and
        the Lightning Network. Under L402, a server that wishes to charge an agent
        for access generates a Macaroon that encodes the scope and conditions of
        access, and a Lightning invoice for a specified amount. These are returned
        in a 402 response.
      </t>
      <t>
        The agent then pays the Lightning invoice using its Lightning wallet or a
        custodial payment API. Upon successful payment, the agent obtains the
        invoice preimage. The agent retries the original request, including both
        the Macaroon (for example, in an Authorization header) and the preimage.
        The server verifies the Macaroon signature and caveats, and that the hash
        of the preimage matches the invoice hash encoded in the Macaroon. If both
        checks succeed, the server considers payment confirmed and grants access.
      </t>
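      <t>
        The server-side check at the end of an L402 exchange can be sketched
        as follows. This non-normative example reduces the Macaroon to an
        HMAC chain over its caveats (a real deployment would use a full
        Macaroon library) and applies the preimage check described above:
        the SHA-256 hash of the presented preimage must equal the payment
        hash the invoice committed to:
      </t>
      <sourcecode type="python"><![CDATA[
```python
import hashlib
import hmac
import os

ROOT_KEY = os.urandom(32)  # server-side macaroon root key

def mint_macaroon(payment_hash_hex, caveats):
    # Minimal macaroon-style token: an identifier plus caveats,
    # each folded into an HMAC chain keyed by the root key.
    sig = hmac.new(ROOT_KEY, payment_hash_hex.encode(),
                   hashlib.sha256).digest()
    for caveat in caveats:
        sig = hmac.new(sig, caveat.encode(), hashlib.sha256).digest()
    return {"payment_hash": payment_hash_hex, "caveats": caveats,
            "sig": sig}

def verify_l402(macaroon, preimage):
    # Check 1: recompute the macaroon signature chain.
    sig = hmac.new(ROOT_KEY, macaroon["payment_hash"].encode(),
                   hashlib.sha256).digest()
    for caveat in macaroon["caveats"]:
        sig = hmac.new(sig, caveat.encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(sig, macaroon["sig"]):
        return False
    # Check 2: SHA-256(preimage) must match the payment hash.
    return hashlib.sha256(preimage).hexdigest() == macaroon["payment_hash"]

# Invoice creation: the payment hash commits to a secret preimage
# that the payer learns only by paying the invoice.
preimage = os.urandom(32)
payment_hash = hashlib.sha256(preimage).hexdigest()
mac = mint_macaroon(payment_hash, ["scope=/articles",
                                   "expires=1735689600"])

assert verify_l402(mac, preimage)
assert not verify_l402(mac, os.urandom(32))  # wrong preimage
```
]]></sourcecode>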
      <t>
        Other payment profiles could support different rails, such as on-chain
        cryptocurrency, fiat payment APIs, or proof-of-work-based puzzles. HAP does
        not mandate any particular payment rail; it simply provides the 402 envelope
        in which such profiles can operate for agent traffic.
      </t>
    </section>

    <!-- 9. Deployment and Operational Considerations -->
    <section anchor="deployment" numbered="true">
      <name>Deployment and Operational Considerations</name>
      <t>
        One of HAP’s design goals is incremental deployability. Sites do not
        need to convert all APIs or endpoints to HAP at once, and existing clients
        are unaffected unless they choose to implement the profile.
      </t>
      <t>
        In many deployments, reverse proxies, CDNs, and API gateways will be the
        primary enforcement points. These intermediaries can:
      </t>
      <ul>
        <li>
          Verify HTTP Message Signatures on incoming requests and classify them
          as agent or non-agent.
        </li>
        <li>
          Validate human tokens where available and route such requests down
          a human lane.
        </li>
        <li>
          Apply local policy for known agent identities (allow, rate limit,
          block, or require payment).
        </li>
        <li>
          Issue HTTP 402 challenges and interact with payment backends
          (Lightning nodes, payment gateways, or other services).
        </li>
      </ul>
      <t>
        Origin servers can then be configured to see only requests that have either
        already satisfied HAP requirements or have been classified as non-HAP
        traffic. For example, an API gateway might only forward requests that
        present a valid agent signature and, where required, a proof of payment.
      </t>
      <t>
        Servers and intermediaries can experiment with HAP policies in a
        "report-only" mode before enforcing them. For example, a proxy could log
        cases where an unauthenticated high-volume agent would have been challenged
        with 402, without actually issuing the challenge yet. This allows operators
        to tune thresholds and pricing before turning on enforcement.
      </t>
      <t>
        As HAP use becomes more common, optional transport-level hints can improve
        efficiency. For example, a client might include an ALPN identifier in its
        TLS handshake to indicate HAP support, or an HTTP/2 SETTINGS parameter
        could advertise HAP capabilities. These optimizations are not required for
        correctness; all HAP functionality can be negotiated via normal HTTP
        messages.
      </t>
    </section>

    <!-- 10. Security Considerations -->
    <section anchor="security" numbered="true">
      <name>Security Considerations</name>
      <t>
        HAP introduces new security-relevant mechanisms at the HTTP layer. This
        section summarizes the main security considerations.
      </t>
      <t>
        <em>Authentication and impersonation.</em> HAP relies on HTTP Message
        Signatures bound to an agent identifier for agent authentication. If an
        attacker obtains an agent’s private key, it can impersonate that
        agent until keys are rotated and caches expire. Agent operators SHOULD
        protect private keys carefully, use hardware-backed storage where possible,
        and rotate keys regularly. Servers SHOULD respect key expiration metadata
        and avoid pinning keys for longer than necessary.
      </t>
      <t>
        <em>Replay and downgrade.</em> Signatures SHOULD cover freshness indicators
        such as a timestamp and a nonce, and SHOULD be used over TLS. Servers
        SHOULD reject signatures that are too old or reuse the same nonce. Agents
        and servers SHOULD prefer HTTPS and avoid sending signed requests over
        cleartext HTTP.
      </t>
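      <t>
        A minimal freshness check, assuming the signature parameters carry
        the created and nonce values recommended above, might look like the
        following non-normative sketch (the in-memory nonce store stands in
        for a shared, size-bounded cache):
      </t>
      <sourcecode type="python"><![CDATA[
```python
import time

MAX_AGE = 300      # seconds a signature stays acceptable
CLOCK_SKEW = 5     # tolerated forward clock skew, in seconds
seen_nonces = {}   # nonce -> expiry; a deployment would use a
                   # shared, size-bounded store instead

def is_fresh(created, nonce, now=None):
    now = time.time() if now is None else now
    # Reject signatures that are too old or created in the future.
    if not (now - MAX_AGE <= created <= now + CLOCK_SKEW):
        return False
    # Reject reuse of a nonce within its acceptance window.
    if nonce in seen_nonces and seen_nonces[nonce] > now:
        return False
    seen_nonces[nonce] = now + MAX_AGE
    # Drop expired entries so the store cannot grow without bound.
    for stale in [n for n, exp in seen_nonces.items() if exp <= now]:
        del seen_nonces[stale]
    return True

assert is_fresh(1000, "n1", now=1100)      # first use: accepted
assert not is_fresh(1000, "n1", now=1100)  # replayed nonce: rejected
assert not is_fresh(1000, "n2", now=2000)  # stale signature: rejected
```
]]></sourcecode>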
      <t>
        <em>Key discovery and SSRF.</em> Key discovery via Signature-Agent and
        .well-known URLs introduces the risk of server-side request forgery (SSRF)
        or denial-of-service if an attacker can cause a server to fetch from
        arbitrary origins. Servers SHOULD restrict outbound key discovery to
        public IP ranges, enforce timeouts and connection limits, and cache keys
        according to HTTP caching rules to avoid repeated lookups.
      </t>
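      <t>
        The outbound-fetch restriction can be sketched with the Python
        standard library's ipaddress module; this non-normative example
        rejects non-global destinations before a key directory is fetched
        (redirect handling and DNS-rebinding defenses are elided):
      </t>
      <sourcecode type="python"><![CDATA[
```python
import ipaddress
import socket
from urllib.parse import urlsplit

def safe_key_directory_url(url):
    # Only fetch key directories over HTTPS.
    parts = urlsplit(url)
    if parts.scheme != "https" or not parts.hostname:
        return False
    # Resolve the host and require every resulting address to be
    # globally routable, which excludes loopback, RFC 1918,
    # link-local, and similar internal ranges.
    try:
        infos = socket.getaddrinfo(parts.hostname, 443,
                                   proto=socket.IPPROTO_TCP)
    except socket.gaierror:
        return False
    addrs = {info[4][0] for info in infos}
    return all(ipaddress.ip_address(a).is_global for a in addrs)

assert not safe_key_directory_url("http://agent.example.com/keys")
assert not safe_key_directory_url(
    "https://127.0.0.1/.well-known/agent-keys")
```
]]></sourcecode>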
      <t>
        <em>Denial-of-service.</em> Signature verification and payment processing
        consume CPU and network resources. Implementations SHOULD be designed to
        bound the amount of work per request, for example by limiting the number
        of keys tried for verification and by offloading payment processing to
        dedicated services. Operators SHOULD monitor for abuse patterns, such as
        floods of invalidly signed requests or repeated unpaid 402 challenges.
      </t>
      <t>
        <em>Abuse by authenticated agents.</em> An authenticated and paying agent
        can still behave maliciously or violate a site’s acceptable-use
        policy. HAP does not attempt to prevent misuse of content once access is
        granted. Instead, HAP aims to make agent traffic more accountable, so that
        misbehaving agent identities can be revoked or blocked and so that
        higher-level governance mechanisms and contracts can be applied.
      </t>
    </section>

    <!-- 11. Privacy Considerations -->
    <section anchor="privacy" numbered="true">
      <name>Privacy Considerations</name>
      <t>
        HAP affects privacy primarily through the introduction of agent identifiers
        and the handling of human tokens and payment metadata.
      </t>
      <t>
        <em>Agent identity and tracking.</em> Persistent agent identifiers and
        long-lived keys can be used by servers to correlate agent behavior across
        sites. This is desirable for enterprise crawlers seeking to build
        reputation, but may be undesirable for user-centric agents that act on
        behalf of individuals. Operators of user-centric agents SHOULD use
        short-lived keys or per-origin keys to reduce the risk of cross-site
        tracking. Servers SHOULD avoid treating agent identities as direct proxies
        for user identities unless there is a separate, explicit user authentication
        relationship.
      </t>
      <t>
        <em>Human tokens.</em> Privacy-preserving human tokens are designed so that
        issuers cannot link issuance and redemption events. HAP deployments should
        preserve this property by not logging or correlating human tokens beyond
        what is necessary for validation. Aggregated statistics (such as the rate
        of human versus agent traffic) can usually be collected without recording
        raw token values.
      </t>
      <t>
        <em>Payment metadata.</em> Payment systems may reveal information about who
        pays for what. In many agent scenarios, payments are made by agent operators
        rather than end users, but care should be taken not to leak unnecessary
        payment metadata into application logs. Where possible, payment flows should
        be handled by dedicated payment services that expose only the minimal proof
        necessary for access control (for example, a validated token or flag).
      </t>
    </section>

    <!-- 12. IANA Considerations -->
    <section anchor="iana" numbered="true">
      <name>IANA Considerations</name>
      <t>
        This document does not currently define any new registries or request any
        actions from IANA. If future revisions define new HTTP header fields or
        authentication schemes specific to HAP, this section will be updated with
        appropriate registration requests.
      </t>
    </section>

    <!-- 13. Acknowledgements -->
    <section anchor="acks" numbered="true">
      <name>Acknowledgements</name>
      <t>
        The author thanks the broader HTTP, Web security, and AI standards
        communities for ongoing discussion about bot management, agent protocols,
        and monetization mechanisms, and acknowledges existing work on HTTP Message
        Signatures, Privacy Pass, and Lightning-based micropayments that helped
        shape this proposal.
      </t>
    </section>

  </middle>

  <back>
    <references>
      <name>Normative References</name>
      <xi:include href="https://bib.ietf.org/public/rfc/bibxml/reference.RFC.2119.xml"/>
      <xi:include href="https://bib.ietf.org/public/rfc/bibxml/reference.RFC.8174.xml"/>
      <xi:include href="https://bib.ietf.org/public/rfc/bibxml/reference.RFC.9110.xml"/>
      <xi:include href="https://bib.ietf.org/public/rfc/bibxml/reference.RFC.9421.xml"/>
    </references>

    <references>
      <name>Informative References</name>
      <xi:include href="https://bib.ietf.org/public/rfc/bibxml/reference.RFC.9578.xml"/>
      <reference anchor="L402" target="https://docs.l402.org/">
        <front>
          <title>L402: Lightning HTTP 402 Protocol</title>
          <author fullname="Lightning Labs"/>
          <date year="2023"/>
        </front>
      </reference>
    </references>
  </back>
</rfc>
