This document is the implementation specification (single source of truth) for go-lcpd/ in this repository.
It is written in WYSIWID format (Concepts + Synchronizations).
Sources of truth:
go-lcpd exchanges LCP messages (TLV streams) with peers over lnd BOLT #1 custom messages as the transport.
It exposes a gRPC API (lcpd.v1.LCPDService) for local clients to drive the requester-side LCP flow (quote → pay → stream(result) → complete).
The same process can also run as a provider-side handler (when configured) to process inbound lcp_quote_request, receive the input stream, return an invoice, and then stream the result and send lcp_result.
It does not implement its own P2P (TCP) transport.
Protobuf is managed via buf. The goal is to keep the following integration tests reproducible within this project:
gRPC: smoke check GetLocalInfo / ListLCPPeers (no external processes)
Lightning (custom messages, regtest): two nodes exchange lcp_manifest and ListLCPPeers can enumerate LCP-capable peers (opt-in)
Do not modify BOLTs: Do not propose or require changes to Lightning BOLT specs (L2). Rationale: preserve interoperability.
Do not implement a custom P2P (TCP) transport: go-lcpd MUST NOT implement peer transport over TCP/UDP. Delegate peer messaging to lnd peer connections. Rationale: avoid re-implementation and reduce operational risk.
LCP message types MUST match protocol/protocol.md: Type numbers for lcp_manifest, etc. MUST follow the odd assignments in protocol/protocol.md. Rationale: coexistence with non-LCP peers.
BOLT #1 unknown message rule: Ignore unknown odd messages. Disconnect on unknown even messages via lnrpc.Lightning.DisconnectPeer. Rationale: match BOLT #1 parity behavior.
Enforce protocol_version=2: LCP v0.2 uses protocol_version=2 (encoding: major*100 + minor). Implementations MUST send and accept only version 2. Rationale: interoperability.
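The version encoding above can be made concrete with a one-line helper. This is an illustrative sketch (the function name is not taken from the codebase): v0.2 encodes as 0*100 + 2 = 2.

```go
package main

import "fmt"

// protocolVersion encodes an LCP version as major*100 + minor,
// so v0.2 encodes to 2 and a hypothetical v1.0 would encode to 100.
func protocolVersion(major, minor uint16) uint16 {
	return major*100 + minor
}

func main() {
	fmt.Println(protocolVersion(0, 2)) // 2
}
```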
Manifest-gated job messages: go-lcpd MUST NOT send any job-scope messages until it has (1) sent lcp_manifest and (2) received the peer’s lcp_manifest. Rationale: required by protocol/protocol.md and needed to learn peer limits (max_payload_bytes, etc.).
Preserve TLV stream canonical form: Encode TLVs as bigsize(type) → bigsize(length) → value. Sort by type. Do not duplicate types. Rationale: byte-exact terms_hash and forward compatibility.
Do not disconnect on unknown TLVs: Unknown TLVs MUST NOT cause disconnection. Continue decoding. Rationale: forward compatibility.
openai_chat_completions_v1_params_tlvs is strict: Providers MUST reject unknown param types with lcp_error(code=unsupported_params). Rationale: fixed interpretation of typed params.
Always include job-scope envelope TLVs: All LCP messages except lcp_manifest MUST include job_id, msg_id, and expiry per protocol/protocol.md. Rationale: replay safety and idempotency.
Deterministic chunk msg_id: For lcp_stream_chunk, msg_id MUST be SHA256(stream_id || u32be(seq)) per protocol/protocol.md. Rationale: bounded replay state for large streams.
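The derivation is small enough to sketch directly; `chunkMsgID` is an illustrative name, not the codebase's actual function. Because the msg_id is a pure function of (stream_id, seq), a receiver can verify it without storing a per-chunk replay entry.

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// chunkMsgID derives the deterministic msg_id for lcp_stream_chunk:
// SHA256(stream_id || u32be(seq)), per protocol/protocol.md.
func chunkMsgID(streamID []byte, seq uint32) [32]byte {
	var seqBE [4]byte
	binary.BigEndian.PutUint32(seqBE[:], seq)
	return sha256.Sum256(append(append([]byte{}, streamID...), seqBE[:]...))
}

func main() {
	id := chunkMsgID(make([]byte, 32), 0)
	fmt.Printf("%x\n", id[:4])
}
```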
Ignore expired messages (Provider inbound): Providers MUST ignore inbound job-scope messages where expiry < now. Rationale: replay resistance.
Clamp envelope expiry window (Provider inbound): Providers MUST use effective_expiry = min(expiry, now + MAX_ENVELOPE_EXPIRY_WINDOW_SECONDS) per protocol/protocol.md. Rationale: prevent state-holding DoS via far-future expiry.
Ignore duplicate messages (Provider inbound): Providers MUST de-duplicate inbound job-scope messages by (peer_pubkey, job_id, msg_id) until effective_expiry. Rationale: idempotency.
For lcp_stream_chunk, go-lcpd MUST treat seq < expected_seq as a duplicate and MUST ignore it (without storing per-chunk replay entries).
LCP v0.2 terms_hash: Compute terms_hash byte-exactly as defined in protocol/protocol.md (including canonicalization of params and input stream metadata). Rationale: invoice binding depends on byte-exactness.
Provider execution rule: Providers MUST NOT execute or deliver results for job_id before invoice settlement. Rationale: protocol/protocol.md execution rule.
Requester invoice checks: Before paying, verify description_hash == terms_hash, payee_pubkey == peer_id, invoice_amount_msat == price_msat, and invoice_expiry_unix <= quote_expiry (with ALLOWED_CLOCK_SKEW_SECONDS). Rationale: invoice swapping defense and amount/expiry integrity.
Do not pay past quote expiry: Reject Terms where quote_expiry <= now. Rationale: avoid unnecessary payments.
Default constants: Use these defaults unless configured:
MAX_ENVELOPE_EXPIRY_WINDOW_SECONDS = 600
ALLOWED_CLOCK_SKEW_SECONDS = 5
DEFAULT_MAX_STORE_ENTRIES = 1024 (for ReplayStore, ProviderJobStore, and RequesterJobStore)
LCP_INVOICE_EXPIRY_SLACK_SECONDS (optional; defaults to LCP_ALLOWED_CLOCK_SKEW_SECONDS)
Payload/stream limit checks: Do not send any LCP message whose payload exceeds remote_manifest.max_payload_bytes. In particular, input/result streaming MUST chunk into lcp_stream_chunk messages that fit under the peer limit. gRPC returns RESOURCE_EXHAUSTED. Provider returns lcp_error(code=payload_too_large). Rationale: DoS resistance and compatibility.
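The chunking requirement reduces to splitting the decoded bytes into bounded slices. A minimal sketch (`splitIntoChunks` is an illustrative name); the caller is assumed to subtract the per-message envelope overhead from remote_manifest.max_payload_bytes before passing the budget in.

```go
package main

import "fmt"

// splitIntoChunks splits decoded stream bytes into payloads of at
// most maxChunk bytes each, so every encoded lcp_stream_chunk fits
// under the peer's declared receive limit.
func splitIntoChunks(data []byte, maxChunk int) [][]byte {
	if maxChunk <= 0 {
		return nil
	}
	var chunks [][]byte
	for len(data) > 0 {
		n := maxChunk
		if len(data) < n {
			n = len(data)
		}
		chunks = append(chunks, data[:n])
		data = data[n:]
	}
	return chunks
}

func main() {
	chunks := splitIntoChunks(make([]byte, 10), 4)
	fmt.Println(len(chunks)) // 3
}
```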
Enforce declared stream limits: go-lcpd MUST enforce max_stream_bytes and max_job_bytes from the local manifest for inbound streams. Rationale: protocol-defined DoS bounds.
Prefer regtest for Lightning integration: Development/integration tests MUST use regtest. They MUST NOT depend on mainnet/testnet. Rationale: safety and reproducibility.
Do not implicitly require external processes: go test ./... MUST NOT require bitcoind or lnd. Lightning integration tests MUST be opt-in. Rationale: developer experience and safety.
Integration tests MUST NOT depend on external networks: CI/dev MUST NOT require external APIs. Keep smoke tests opt-in. Rationale: reproducibility.
Do not log secrets: Logs MUST NOT contain API keys, access tokens, macaroons, BOLT11 invoices (payment_request), raw prompts, or raw model outputs at any log level. Rationale: logs leak.
Primary interface is gRPC: go-lcpd MUST be operable via gRPC. Do not make CLI the primary interface. Rationale: stabilize an API shape for integrations.
Do not provide cmd/: go-lcpd MUST NOT provide CLI entry points under go-lcpd/cmd/*. Rationale: keep operations centered on gRPC.
Manage protobuf with buf: API protobufs MUST be generated via buf. Rationale: change detection, compatibility, reproducible generation.
DI and lifecycle: Go components MUST be wired with go.uber.org/fx. Rationale: explicit composition and testability.
Do not assume Nix: Core workflows SHOULD work with a standard Go toolchain and externally installed binaries. flake.nix MAY exist as an optional pinned devShell. Rationale: onboarding.
Treat lcp_manifest as connection-scoped (do not include job envelope TLVs).
Job-scope messages (everything except lcp_manifest) MUST include JobEnvelope (protocol_version, job_id, msg_id, expiry).
lcp_stream_chunk.msg_id MUST be deterministic (SHA256(stream_id || u32be(seq))). encode_stream_chunk MUST compute it; decode_stream_chunk MUST verify it.
openai_chat_completions_v1_params_tlvs MUST preserve unknown TLVs in unknown and MUST NOT fail decoding solely due to unknown TLVs (forward compatibility).
Purpose: Hold Provider-side job state (input stream / quote / payment / execute / result streaming) to support idempotency and cancellation.
Domain Model:
Since job_id can collide across peers, jobs MUST be keyed by peer_pubkey as well.
The store MUST have bounds (e.g., max entries) and MUST NOT retain pending state past quote_expiry (evict/GC). Rationale: prevent state-holding DoS via unbounded job creation.
For jobs in awaiting_input, implementations SHOULD evict/GC state once expiry (from the job envelope) is past the retention window (effective_expiry). Rationale: prevent state-holding DoS via incomplete streams.
Default bounds MUST be DEFAULT_MAX_STORE_ENTRIES = 1024.
If the request specifies an output-token cap (max_completion_tokens / max_tokens / max_output_tokens), the Provider MUST reject values that exceed policy.max_output_tokens.
If the request does not specify an output-token cap, the Provider MUST use policy.max_output_tokens for quote-time estimation.
Provider mode MUST be explicitly enabled (lcpd-grpcd requires enabled: true in the YAML at LCPD_PROVIDER_CONFIG_PATH).
If Provider is disabled (or the compute backend is disabled), inbound lcp_quote_request MUST be rejected as lcp_error(code=unsupported_task) (do not create invoices or execute).
The Provider MUST decode job-scope messages and validate protocol_version and the task (unsupported cases return lcp_error).
openai_chat_completions_v1_params_tlvs are strict: if unknown TLV types are present, return lcp_error(code=unsupported_params).
openai.chat_completions.v1 inputs MUST be valid JSON and MUST satisfy:
request_json.model is required.
request_json.messages is required and must be non-empty.
request_json.stream must be omitted or false.
params.model is required and must match request_json.model.
If models is non-empty, requests whose params.model is not in the set MUST be rejected as lcp_error(code=unsupported_task).
If models is empty, any params.model MAY be accepted, but the manifest MUST NOT advertise supported_tasks.
The Provider MUST de-duplicate inbound job-scope messages via ReplayStore, and drop messages where expiry < now.
For lcp_stream_chunk, de-duplication is performed by stream state (seq < expected_seq is ignored as a duplicate).
For multiple lcp_quote_request messages with the same job_id, the Provider SHOULD return the same lcp_quote_response within TTL once it has been issued (idempotency).
If an existing job’s terms_hash does not match the recomputed value, the Provider SHOULD return lcp_error(code=payment_invalid).
Input stream gating (v0.2):
After accepting lcp_quote_request, the Provider MUST require exactly one validated input stream (stream_kind=input) before issuing a quote.
The Provider MUST NOT send lcp_quote_response until it has validated the input stream and derived (input_hash, input_len, input_content_type, input_content_encoding) as required by protocol/protocol.md.
Stream receive rules (v0.2):
For input streams, lcp_stream_begin MUST include total_len and sha256.
content_encoding="identity" MUST be supported; unknown encodings MUST be rejected with lcp_error(code=unsupported_encoding).
seq MUST start at 0 and increase by exactly 1; seq < expected_seq is ignored as a duplicate; seq > expected_seq MUST return lcp_error(code=chunk_out_of_order).
The Provider MUST enforce local limits (max_payload_bytes, max_stream_bytes, max_job_bytes) for inbound streams.
Stream end validation failure MUST return lcp_error(code=checksum_mismatch).
price_msat is computed after input stream validation via max-output-token planning + LLMUsageEstimator.estimate + LLMPricing.quote_price (fixed at quote-time; do not recompute on settlement).
The Provider MUST compute terms_hash and MUST set the BOLT11 invoice description_hash to exactly equal terms_hash (invoice swapping defense).
The Provider MUST NOT start execution before invoice settlement.
When executing, the Provider MUST pass params.model to the compute backend as task.model.
Result delivery (v0.2):
For task_kind="openai.chat_completions.v1", output bytes MUST be treated as raw OpenAI-compatible response body bytes (JSON).
The Provider SHOULD set result_content_type = "application/json; charset=utf-8" and MUST use result_content_encoding = "identity".
The Provider MUST send exactly one result stream (stream_kind=result) after invoice settlement and before lcp_result(status=ok).
The Provider MUST include in lcp_result(status=ok) the metadata matching the validated result stream (result_stream_id, result_hash, result_len, result_content_type, result_content_encoding).
Outbound sizing:
When sending any message, the Provider MUST respect the peer’s declared receive limits in remote_manifest (max_payload_bytes, max_stream_bytes, max_job_bytes).
Result streaming MUST be chunked so each message payload fits within remote_manifest.max_payload_bytes.
Failure handling:
If compute fails after payment, the Provider SHOULD send lcp_result(status=failed, message=...).
On receiving lcp_cancel, the Provider SHOULD stop work best-effort and SHOULD send lcp_result(status=cancelled).
Inbound payloads are decoded via LCPWire.decode_quote_response / decode_stream_begin / decode_stream_chunk / decode_stream_end / decode_result / decode_error.
The waiter MUST reconstruct exactly one validated result stream (stream_kind=result) per job before completing a successful wait_result.
On lcp_result(status=ok), the waiter MUST verify the lcp_result metadata matches the validated result stream (stream_id, sha256, total_len, content_type, content_encoding).
For the same (peer_id, job_id), allow at most one concurrent waiter for quote/result; additional waiters MUST fail.
lcp_error is delivered to both the quote and result waiters.
The requester-side waiter does not implement expiry/replay filtering.
send_custom_message(grpc_conn, peer_pubkey, msg_type, payload_bytes) -> ok | error
disconnect_peer(grpc_conn, peer_pubkey) -> ok | error
Operational Principle:
If LCPD_LND_MANIFEST_RESEND_INTERVAL is unset or <= 0, lcp_manifest MUST be sent at most once per connection (to avoid infinite loops).
If LCPD_LND_MANIFEST_RESEND_INTERVAL is set to a positive duration, go-lcpd SHOULD periodically re-send lcp_manifest to connected peers on that interval.
If lcp_manifest is received and we have not sent ours yet, reply once (SHOULD). If we have already sent ours, do not reply.
Inbound messages are classified via LCPMessageRouter; manifests are decoded internally and applied to PeerDirectory.
Summary: On startup, connect to lnd, load an initial peer snapshot, and start subscription loops.
When: on go-lcpd startup.
Then:
Connect to lnd via LNDPeerMessaging.dial (if not configured, disable integration and do nothing).
Fetch currently connected peers via LNDPeerMessaging.list_peers and apply them to PeerDirectory (connected + custom_msg_enabled).
For each peer, send lcp_manifest once if it has not been sent yet and call PeerDirectory.mark_manifest_sent.
If LCPD_LND_MANIFEST_RESEND_INTERVAL is set to a positive duration, start a periodic loop that re-sends lcp_manifest to connected peers on that interval.
Start the subscribe_peer_events and subscribe_custom_messages loops.
Summary: Classify inbound custom messages; apply manifests to PeerDirectory; dispatch the rest to handlers.
When: LNDPeerMessaging receives a CustomMessage.
Then:
Classify via LCPMessageRouter.route.
For dispatch_manifest, decode via LCPWire.decode_manifest, call PeerDirectory.mark_lcp_ready (recording remote_manifest), and if we have not sent ours yet, reply with lcp_manifest once and call PeerDirectory.mark_manifest_sent.
For dispatch_quote_request / dispatch_cancel, dispatch to the Provider handler (replay/expiry handling lives on the Provider handler side).
For dispatch_quote_response / dispatch_result / dispatch_error, dispatch to the requester-side waiter (bound to the waiting gRPC call).
For dispatch_stream_begin / dispatch_stream_chunk / dispatch_stream_end, dispatch to both the Provider handler and the requester-side waiter. Each side MUST ignore streams that do not match its expected state (input vs result).
Summary: Return local node_id and manifest via GetLocalInfo.
When: GRPCService.GetLocalInfo is called.
Then:
Call LightningRPC.get_info and use identity_pubkey as node_id.
Build manifest from local config (protocol_version=2, max_payload_bytes=DEFAULT_MAX_PAYLOAD_BYTES, max_stream_bytes=DEFAULT_MAX_STREAM_BYTES, max_job_bytes=DEFAULT_MAX_JOB_BYTES). Include supported_tasks only when Provider is enabled and models is non-empty.
Summary: Enumerate connected LCP-capable peers (those that have responded with lcp_manifest) via ListLCPPeers.
When: GRPCService.ListLCPPeers is called.
Then:
Call PeerDirectory.list_lcp_peers().
Pack each peer’s peer_id / address / remote_manifest into the gRPC response and return it.
Summary: Send lcp_quote_request via RequestQuote and return lcp_quote_response as Terms.
When: GRPCService.RequestQuote is called.
Where:
Gather and validate inputs (peer_id, task).
Precondition: the target peer is lcp_ready in PeerDirectory (otherwise return FAILED_PRECONDITION).
Then:
Call LCPTasks.validate_task.
Generate job_id, a msg_id for lcp_quote_request, and expiry (expiry = now + 300 seconds).
Build a QuoteRequest by calling LCPTasks.to_wire_quote_request_task to get task_kind/params_bytes.
Encode lcp_quote_request via LCPWire.encode_quote_request. If it would exceed the peer’s remote_manifest.max_payload_bytes, return RESOURCE_EXHAUSTED (do not send).
Send lcp_quote_request via SendCustomMessage(type=42083).
Build the input stream via LCPTasks.to_wire_input_stream to get decoded_bytes/content_type/content_encoding.
Verify the input fits the peer’s declared stream limits (len(decoded_bytes) <= remote_manifest.max_stream_bytes and <= remote_manifest.max_job_bytes), otherwise return RESOURCE_EXHAUSTED (do not send).
Send the input stream:
Generate stream_id and a msg_id for lcp_stream_begin, and send lcp_stream_begin (declaring total_len, sha256, content_type, content_encoding).
Send lcp_stream_chunk × N with seq=0..N-1, choosing chunk sizes so each encoded message payload fits within remote_manifest.max_payload_bytes.
Send lcp_stream_end(total_len, sha256) (with a fresh random msg_id).
Wait until deadline for lcp_quote_response or lcp_error with the same job_id.
On lcp_quote_response, recompute and verify terms_hash (including input stream metadata and canonicalized params_hash), call RequesterJobStore.put_quote, and return RequestQuoteResponse{terms}.
Summary: Pay the invoice in AcceptAndExecute, wait for the result stream + lcp_result, and return Result.
When: GRPCService.AcceptAndExecute is called.
Where:
Gather and validate inputs (peer_id, job_id, pay_invoice).
Precondition: RequesterJobStore has a quote for job_id (otherwise NOT_FOUND).
Then:
Verify pay_invoice=true (false is INVALID_ARGUMENT).
Verify quote_expiry has not passed (if it has, FAILED_PRECONDITION).
Call LightningRPC.decode_payment_request(terms.payment_request) and verify invoice binding per the Requester invoice checks (description_hash == terms_hash, payee_pubkey == peer_id, invoice_amount_msat == price_msat, invoice_expiry_unix <= quote_expiry); failure is FAILED_PRECONDITION.
Call LightningRPC.pay_invoice, then wait until deadline for either:
a validated result stream + matching lcp_result(status=ok), or
lcp_error.
On success, return AcceptAndExecuteResponse{result} where result.result is the reconstructed result bytes and result.content_type matches the result stream metadata.
Summary: As a Provider, receive lcp_quote_request, receive the input stream, issue an invoice-bound quote, wait for settlement, stream the result, and send lcp_result.
When: the Provider handler receives lcp_quote_request.
Then:
If Provider is disabled or the compute backend is disabled, send lcp_error(code=unsupported_task) (do not create invoices or execute).
Validate protocol_version; if unsupported, send lcp_error(code=unsupported_version).
Validate task_kind and params; if unsupported, send lcp_error(code=unsupported_task|unsupported_params).
If expiry < now, drop; otherwise compute effective_expiry = min(expiry, now + MAX_ENVELOPE_EXPIRY_WINDOW_SECONDS) and de-duplicate via ReplayStore until effective_expiry (drop duplicates).
Compute params_hash via ProtocolCompatibility.compute_params_hash and persist a job record in ProviderJobStore as awaiting_input.
Receive and validate exactly one input stream (stream_kind=input) for this job_id via the Provider stream handlers:
On lcp_stream_begin: validate required fields (total_len, sha256), enforce content_encoding, enforce local max_stream_bytes/max_job_bytes, and initialize stream state.
On lcp_stream_chunk: enforce seq ordering (ignore duplicates; reject out-of-order), append bytes, and enforce local limits.
On lcp_stream_end: validate length/hash; on failure send lcp_error(code=checksum_mismatch) and fail the job.
Set quote_expiry = now + quote_ttl_seconds, compute price_msat via LLMExecutionPolicy.apply / LLMUsageEstimator.estimate / LLMPricing.quote_price, then compute terms_hash (including input metadata and params_hash).
Satisfy idempotency for the same job_id:
If ProviderJobStore already has a valid quote, re-send the same lcp_quote_response.
If it exists but is expired, send lcp_error(code=quote_expired).
If the existing job’s terms_hash mismatches, send lcp_error(code=payment_invalid).
Call create_invoice(description_hash=terms_hash, price_msat, expiry_seconds) and obtain payment_request.
Summary: On regtest, show via an integration test (opt-in) that two nodes can exchange lcp_manifest over lnd custom messages.
When: the integration test starts with LCP_ITEST_REGTEST=1.
Then:
Start bitcoind and two lnd nodes and initialize wallets.
Connect the two nodes via ConnectPeer and ensure they are online (no channel required).
Start go-lcpd on both sides and begin lnd_inbound_custom_message_dispatch.
Wait (short timeout) until ListLCPPeers returns one peer.
Verify peer_id equals the remote pubkey and required fields like remote_manifest.protocol_version==2 and stream limit fields match expectations.
Summary: On regtest, show via an integration test (opt-in) that LCP v0.2 quote → payment → result streaming completes over custom messages.
When: the integration test starts with LCP_ITEST_REGTEST=1.
Then:
Start bitcoind and two lnd nodes (Alice=Requester, Bob=Provider) and initialize wallets.
Open a channel from Alice → Bob (required to settle invoices).
Start go-lcpd on Bob and enable Provider flows lnd_inbound_custom_message_dispatch and lnd_lcp_provider_quote_pay_result (compute backend is a deterministic stub).
On Alice (the test Requester), send lcp_quote_request and the input stream (lcp_stream_begin/chunk/end) via SendCustomMessage.
Receive and decode lcp_quote_response via SubscribeCustomMessages, verify description_hash == terms_hash and destination(payee) == bob_pubkey via DecodePayReq, then pay.
Receive and decode the result stream (lcp_stream_begin/chunk/end) and lcp_result, verify stream validation + metadata consistency, and verify the reconstructed output matches expectations (use cmp.Diff).