
stacks-network / stacks-core / run 23575718640

26 Mar 2026 03:15AM UTC coverage: 85.676% (-0.04%) from 85.712%

Pull Request #6985: fix: If the PoX anchor block is a shadow block, then none of its grandchildren tenures' block-commits will be valid
Merge 7e667cd04 into 08342a156

127 of 159 new or added lines in 12 files covered (79.87%)
132 existing lines in 29 files now uncovered
186631 of 217833 relevant lines covered (85.68%)
17237077.3 hits per line

Source file: /stacks-node/src/neon_node.rs (82.22% covered)
// Copyright (C) 2013-2020 Blockstack PBC, a public benefit corporation
// Copyright (C) 2020-2024 Stacks Open Internet Foundation
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program.  If not, see <http://www.gnu.org/licenses/>.

/// Main body of code for the Stacks node and miner.
///
/// System schematic.
/// Legend:
///    |------|    Thread
///    /------\    Shared memory
///    @------@    Database
///    .------.    Code module
///
///
///                           |------------------|
///                           |  RunLoop thread  |   [1,7]
///                           |   .----------.   |--------------------------------------.
///                           |   .StacksNode.   |                                      |
///                           |---.----------.---|                                      |
///                    [1,12]     |     |    |     [1]                                  |
///              .----------------*     |    *---------------.                          |
///              |                  [3] |                    |                          |
///              V                      |                    V                          V
///      |----------------|             |    [9,10]   |---------------| [11] |--------------------------|
/// .--- | Relayer thread | <-----------|-----------> |   P2P Thread  | <--- | ChainsCoordinator thread | <--.
/// |    |----------------|             V             |---------------|      |--------------------------|    |
/// |            |     |          /-------------\    [2,3]    |    |              |          |               |
/// |        [1] |     *--------> /   Globals   \ <-----------*----|--------------*          | [4]           |
/// |            |     [2,3,7]    /-------------\                  |                         |               |
/// |            V                                                 V [5]                     V               |
/// |    |----------------|                                 @--------------@        @------------------@     |
/// |    |  Miner thread  | <------------------------------ @  Mempool DB  @        @  Chainstate DBs  @     |
/// |    |----------------|             [6]                 @--------------@        @------------------@     |
/// |                                                                                        ^               |
/// |                                               [8]                                      |               |
/// *----------------------------------------------------------------------------------------*               |
/// |                                               [7]                                                      |
/// *--------------------------------------------------------------------------------------------------------*
///
/// [1]  Spawns
/// [2]  Synchronizes unconfirmed state
/// [3]  Enables/disables miner
/// [4]  Processes block data
/// [5]  Stores unconfirmed transactions
/// [6]  Reads unconfirmed transactions
/// [7]  Signals block arrival
/// [8]  Stores blocks and microblocks
/// [9]  Pushes retrieved blocks and microblocks
/// [10] Broadcasts new blocks, microblocks, and transactions
/// [11] Notifies about new transaction attachment events
/// [12] Signals VRF key registration
///
/// When the node is running, there are 4-5 active threads at once. They are:
///
/// * **RunLoop Thread**:
///     This is the main thread, whose code body lives in `src/run_loop/neon.rs`.
///     This thread is responsible for:
///       * Bootup
///       * Running the burnchain indexer
///       * Notifying the ChainsCoordinator thread when there are new burnchain blocks to process
///
/// * **Relayer Thread**:
///     This is the thread that stores and relays blocks and microblocks. Both
///     it and the ChainsCoordinator thread are very I/O-heavy threads, and care has been taken to
///     ensure that neither one attempts to acquire a write-lock on the underlying databases while
///     the other holds it. Specifically, this thread directs the ChainsCoordinator thread when to
///     process new Stacks blocks, and it directs the miner thread (if running) to stop when either
///     it or the ChainsCoordinator thread needs to acquire the write-lock.
///     This thread is responsible for:
///       * Receiving new blocks and microblocks from the P2P thread via a shared channel
///       * (Synchronously) requesting the ChainsCoordinator thread to process newly-stored Stacks
///         blocks and microblocks
///       * Building up the node's unconfirmed microblock stream state, and sharing it with the P2P
///         thread so it can answer queries about the unconfirmed microblock chain
///       * Pushing newly-discovered blocks and microblocks to the P2P thread for broadcast
///       * Registering the VRF public key for the miner
///       * Spawning the block and microblock miner threads, and stopping them if their continued
///         execution would inhibit block or microblock storage or processing
///       * Submitting the burnchain operation to commit to a freshly-mined block
///
/// * **Miner Thread**:
///     This is the thread that actually produces new blocks and microblocks. It
///     is spawned only by the Relayer thread to carry out mining activity when the underlying
///     chainstate is not needed by either the Relayer or ChainsCoordinator threads.
///     This thread is responsible for:
///       * Walking the mempool DB to build a new block or microblock
///       * Returning the block or microblock to the Relayer thread
///
/// * **P2P Thread**:
///     This is the thread that communicates with the rest of the P2P network, and
///     handles RPC requests. It is meant to do as little storage-write I/O as possible to avoid lock
///     contention with the Miner, Relayer, and ChainsCoordinator threads. In particular, it forwards
///     data it receives from the P2P network to the Relayer thread for I/O-bound processing. At the
///     time of this writing, it still requires holding a write-lock to handle some RPC requests, but
///     future work will remove this so that this thread's execution will not interfere with the
///     others. This is the only thread that does socket I/O.
///     This thread runs the PeerNetwork state machines, which include the following:
///       * Learning the node's public IP address
///       * Discovering neighbor nodes
///       * Forwarding newly-discovered blocks, microblocks, and transactions from the Relayer thread
///         to other neighbors
///       * Synchronizing block and microblock inventory state with other neighbors
///       * Downloading blocks and microblocks, and passing them to the Relayer for storage and
///         processing
///       * Downloading transaction attachments as their hashes are discovered during block processing
///         (notifications for new attachments come from a shared channel in the ChainsCoordinator thread)
///       * Synchronizing the local mempool database with other neighbors
///       * Handling HTTP requests
///
/// * **ChainsCoordinator Thread**:
///     This thread processes sortitions and Stacks blocks and
///     microblocks, and handles PoX reorgs should they occur (this mainly happens in boot-up). It,
///     like the Relayer thread, is a very I/O-heavy thread, and it will hold a write-lock on the
///     chainstate DBs while it works. Its actions are controlled by a CoordinatorComms structure in
///     the Globals shared state, which the Relayer thread and RunLoop thread both drive (the former
///     drives Stacks block processing, the latter sortitions).
///     This thread is responsible for:
///       * Responding to requests from other threads to process sortitions
///       * Responding to requests from other threads to process Stacks blocks and microblocks
///       * Processing PoX chain reorgs, should they ever happen
///       * Detecting attachment creation events, and informing the P2P thread of them so it can go
///         and download them
///
/// In addition to the mempool and chainstate databases, these threads share access to a Globals
/// singleton that contains soft state shared between threads. The Globals struct collects the
/// singleton inter-thread communication media in one convenient place, and each thread holds a
/// handle to its shared state. Global state includes:
///       * The global flag as to whether or not the miner thread can be running
///       * The global shutdown flag that, when set, causes all threads to terminate
///       * Sender channel endpoints that can be shared between threads
///       * Metrics about the node's behavior (e.g. number of blocks processed, etc.)
///
/// This file may be refactored in the future into a full-fledged module.
use std::cmp;
use std::cmp::Ordering as CmpOrdering;
use std::collections::{BTreeMap, HashMap, HashSet, VecDeque};
use std::io::{ErrorKind, Read, Write};
use std::net::SocketAddr;
use std::sync::mpsc::{Receiver, TrySendError};
use std::thread::JoinHandle;
use std::time::{Duration, Instant};
use std::{fs, mem, thread};

use clarity::boot_util::boot_code_id;
use clarity::vm::costs::ExecutionCost;
use clarity::vm::types::{PrincipalData, QualifiedContractIdentifier};
use libsigner::v0::messages::{
    MessageSlotID, MinerSlotID, MockBlock, MockProposal, MockSignature, PeerInfo, SignerMessage,
};
use libsigner::{SignerSession, StackerDBSession};
use stacks::burnchains::bitcoin::address::{BitcoinAddress, LegacyBitcoinAddressType};
use stacks::burnchains::db::BurnchainHeaderReader;
use stacks::burnchains::{Burnchain, BurnchainSigner, PoxConstants, Txid};
use stacks::chainstate::burn::db::sortdb::{SortitionDB, SortitionHandleConn};
use stacks::chainstate::burn::operations::leader_block_commit::{
    RewardSetInfo, BURN_BLOCK_MINED_AT_MODULUS,
};
use stacks::chainstate::burn::operations::{
    BlockstackOperationType, LeaderBlockCommitOp, LeaderKeyRegisterOp,
};
use stacks::chainstate::burn::{BlockSnapshot, ConsensusHash};
use stacks::chainstate::coordinator::{get_next_recipients, OnChainRewardSetProvider};
use stacks::chainstate::nakamoto::NakamotoChainState;
use stacks::chainstate::stacks::address::PoxAddress;
use stacks::chainstate::stacks::boot::MINERS_NAME;
use stacks::chainstate::stacks::db::blocks::StagingBlock;
use stacks::chainstate::stacks::db::{StacksChainState, StacksHeaderInfo, MINER_REWARD_MATURITY};
use stacks::chainstate::stacks::miner::{
    signal_mining_blocked, signal_mining_ready, AssembledAnchorBlock, BlockBuilderSettings,
    StacksMicroblockBuilder,
};
use stacks::chainstate::stacks::{
    CoinbasePayload, Error as ChainstateError, StacksBlock, StacksBlockBuilder, StacksBlockHeader,
    StacksMicroblock, StacksPublicKey, StacksTransaction, StacksTransactionSigner,
    TransactionAnchorMode, TransactionPayload, TransactionVersion,
};
use stacks::config::chain_data::MinerStats;
use stacks::config::NodeConfig;
use stacks::core::mempool::MemPoolDB;
use stacks::core::{EpochList, FIRST_BURNCHAIN_CONSENSUS_HASH, STACKS_EPOCH_3_0_MARKER};
use stacks::cost_estimates::metrics::{CostMetric, UnitMetric};
use stacks::cost_estimates::{CostEstimator, FeeEstimator, UnitEstimator};
use stacks::monitoring::{increment_stx_blocks_mined_counter, update_active_miners_count_gauge};
use stacks::net::atlas::{AtlasConfig, AtlasDB};
use stacks::net::db::{LocalPeer, PeerDB};
use stacks::net::dns::{DNSClient, DNSResolver};
use stacks::net::p2p::PeerNetwork;
use stacks::net::relay::Relayer;
use stacks::net::stackerdb::{StackerDBConfig, StackerDBSync, StackerDBs, MINER_SLOT_COUNT};
use stacks::net::{
    Error as NetError, NetworkResult, PeerNetworkComms, RPCHandlerArgs, ServiceFlags,
};
use stacks::util_lib::strings::{UrlString, VecDisplay};
use stacks::{monitoring, version_string};
use stacks_common::codec::StacksMessageCodec;
use stacks_common::types::chainstate::{
    BlockHeaderHash, BurnchainHeaderHash, SortitionId, StacksAddress, StacksBlockId,
    StacksPrivateKey, VRFSeed,
};
use stacks_common::types::net::PeerAddress;
use stacks_common::types::{PublicKey, StacksEpochId};
use stacks_common::util::hash::{to_hex, Hash160, Sha256Sum};
use stacks_common::util::secp256k1::Secp256k1PrivateKey;
use stacks_common::util::vrf::{VRFProof, VRFPublicKey};
use stacks_common::util::{get_epoch_time_ms, get_epoch_time_secs};

use super::{BurnchainController, Config, EventDispatcher, Keychain};
use crate::burnchains::bitcoin_regtest_controller::{
    burnchain_params_from_config, BitcoinRegtestController, OngoingBlockCommit,
};
use crate::burnchains::{make_bitcoin_indexer, Error as BurnchainControllerError};
use crate::globals::{NeonGlobals as Globals, RelayerDirective};
use crate::nakamoto_node::miner_db::MinerDB;
use crate::nakamoto_node::signer_coordinator::SignerCoordinator;
use crate::run_loop::neon::RunLoop;
use crate::run_loop::RegisteredKey;
use crate::ChainTip;

pub const RELAYER_MAX_BUFFER: usize = 100;
const VRF_MOCK_MINER_KEY: u64 = 1;

pub const BLOCK_PROCESSOR_STACK_SIZE: usize = 32 * 1024 * 1024; // 32 MB

type MinedBlocks = HashMap<BlockHeaderHash, (AssembledAnchorBlock, Secp256k1PrivateKey)>;

/// Result of running the miner thread.  It could produce a Stacks block or a microblock.
#[allow(clippy::large_enum_variant)]
pub(crate) enum MinerThreadResult {
    Block(
        AssembledAnchorBlock,
        Secp256k1PrivateKey,
        Option<OngoingBlockCommit>,
    ),
    Microblock(
        Result<Option<(StacksMicroblock, ExecutionCost)>, NetError>,
        MinerTip,
    ),
}

/// Miner chain tip, on top of which to build microblocks
#[derive(Debug, Clone, PartialEq)]
pub struct MinerTip {
    /// tip's consensus hash
    consensus_hash: ConsensusHash,
    /// tip's Stacks block header hash
    block_hash: BlockHeaderHash,
    /// Microblock private key to use to sign microblocks
    microblock_privkey: Secp256k1PrivateKey,
    /// Stacks height
    stacks_height: u64,
    /// burnchain height
    burn_height: u64,
}

impl MinerTip {
    pub fn new(
        ch: ConsensusHash,
        bh: BlockHeaderHash,
        pk: Secp256k1PrivateKey,
        stacks_height: u64,
        burn_height: u64,
    ) -> MinerTip {
        MinerTip {
            consensus_hash: ch,
            block_hash: bh,
            microblock_privkey: pk,
            stacks_height,
            burn_height,
        }
    }
}

/// Node implementation for both miners and followers.
/// This struct is used to set up the node proper and launch the p2p thread and relayer thread.
/// It is further used by the main thread to communicate with these two threads.
pub struct StacksNode {
    /// Atlas network configuration
    pub atlas_config: AtlasConfig,
    /// Global inter-thread communication handle
    pub globals: Globals,
    /// True if we're a miner
    is_miner: bool,
    /// handle to the p2p thread
    pub p2p_thread_handle: JoinHandle<Option<PeerNetwork>>,
    /// handle to the relayer thread
    pub relayer_thread_handle: JoinHandle<()>,
}

/// Fault injection logic to artificially increase the length of a tenure.
/// Only used in testing
#[cfg(test)]
pub(crate) fn fault_injection_long_tenure() {
    // simulated slow block
    let Ok(tenure_str) = std::env::var("STX_TEST_SLOW_TENURE") else {
        return;
    };
    let Ok(tenure_time) = tenure_str.parse::<u64>() else {
        error!("Parse error for STX_TEST_SLOW_TENURE");
        panic!();
    };
    info!("Fault injection: sleeping for {tenure_time} milliseconds to simulate a long tenure");
    stacks_common::util::sleep_ms(tenure_time);
}

#[cfg(not(test))]
pub(crate) fn fault_injection_long_tenure() {}

/// Fault injection to skip mining at this bitcoin block height.
/// Only used in testing
#[cfg(test)]
pub(crate) fn fault_injection_skip_mining(rpc_bind: &str, target_burn_height: u64) -> bool {
    let Ok(disable_heights) = std::env::var("STACKS_DISABLE_MINER") else {
        return false;
    };
    let disable_schedule: serde_json::Value = serde_json::from_str(&disable_heights).unwrap();
    let disable_schedule = disable_schedule.as_array().unwrap();
    for disabled in disable_schedule {
        let target_miner_rpc_bind = disabled.get("rpc_bind").unwrap().as_str().unwrap();
        if target_miner_rpc_bind != rpc_bind {
            continue;
        }
        let target_block_heights = disabled.get("blocks").unwrap().as_array().unwrap();
        for target_block_value in target_block_heights {
            let target_block = u64::try_from(target_block_value.as_i64().unwrap()).unwrap();
            if target_block == target_burn_height {
                return true;
            }
        }
    }
    false
}

#[cfg(not(test))]
pub(crate) fn fault_injection_skip_mining(_rpc_bind: &str, _target_burn_height: u64) -> bool {
    false
}
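// As the parsing logic above implies, STACKS_DISABLE_MINER holds a JSON array of
// per-miner schedules: each entry names a miner by its "rpc_bind" address and
// lists the burnchain heights at which it should skip mining. A hypothetical
// test invocation (the address and heights below are made up):

```shell
# Skip mining at burn heights 230 and 231 for the miner bound to 127.0.0.1:20443.
# Read only by #[cfg(test)] builds; the non-test variant always returns false.
export STACKS_DISABLE_MINER='[{"rpc_bind": "127.0.0.1:20443", "blocks": [230, 231]}]'
```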

/// Open the chainstate, and inject faults from the config file
pub(crate) fn open_chainstate_with_faults(
    config: &Config,
) -> Result<StacksChainState, ChainstateError> {
    let stacks_chainstate_path = config.get_chainstate_path_str();
    let (mut chainstate, _) = StacksChainState::open(
        config.is_mainnet(),
        config.burnchain.chain_id,
        &stacks_chainstate_path,
        Some(config.node.get_marf_opts()),
    )?;

    chainstate.fault_injection.hide_blocks = config.node.fault_injection_hide_blocks;
    Ok(chainstate)
}

/// Types of errors that can arise during mining
enum Error {
    /// Can't find the header record for the chain tip
    HeaderNotFoundForChainTip,
    /// Can't find the stacks block's offset in the burnchain block
    WinningVtxNotFoundForChainTip,
    /// Can't find the block sortition snapshot for the chain tip
    SnapshotNotFoundForChainTip,
    /// The burnchain tip changed while this operation was in progress
    BurnchainTipChanged,
    /// The coordinator channel closed
    CoordinatorClosed,
}

/// Metadata required for beginning a new tenure
struct ParentStacksBlockInfo {
    /// Header metadata for the Stacks block we're going to build on top of
    stacks_parent_header: StacksHeaderInfo,
    /// the consensus hash of the sortition that selected the Stacks block parent
    parent_consensus_hash: ConsensusHash,
    /// the burn block height of the sortition that selected the Stacks block parent
    parent_block_burn_height: u64,
    /// the total amount burned in the sortition that selected the Stacks block parent
    parent_block_total_burn: u64,
    /// offset in the burnchain block where the parent's block-commit was
    parent_winning_vtxindex: u16,
    /// nonce to use for this new block's coinbase transaction
    coinbase_nonce: u64,
}

#[derive(Clone, Default)]
pub enum LeaderKeyRegistrationState {
    /// Not started yet
    #[default]
    Inactive,
    /// Waiting for burnchain confirmation
    /// `u64` is the target block height in which we intend this key to land
    /// `txid` is the burnchain transaction ID
    Pending(u64, Txid),
    /// Ready to go!
    Active(RegisteredKey),
}

impl LeaderKeyRegistrationState {
    pub fn get_active(&self) -> Option<RegisteredKey> {
        if let Self::Active(registered_key) = self {
            Some(registered_key.clone())
        } else {
            None
        }
    }
}

/// Relayer thread
/// * accepts network results and stores blocks and microblocks
/// * forwards new blocks, microblocks, and transactions to the p2p thread
/// * processes burnchain state
/// * if mining, runs the miner and broadcasts blocks (via a subordinate MinerThread)
pub struct RelayerThread {
    /// Node config
    config: Config,
    /// Handle to the sortition DB (optional so we can take/replace it)
    sortdb: Option<SortitionDB>,
    /// Handle to the chainstate DB (optional so we can take/replace it)
    chainstate: Option<StacksChainState>,
    /// Handle to the mempool DB (optional so we can take/replace it)
    mempool: Option<MemPoolDB>,
    /// Handle to global state and inter-thread communication channels
    globals: Globals,
    /// Authoritative copy of the keychain state
    keychain: Keychain,
    /// Burnchain configuration
    burnchain: Burnchain,
    /// height of last VRF key registration request
    last_vrf_key_burn_height: u64,
    /// Set of blocks that we have mined, but are still potentially-broadcastable
    last_mined_blocks: MinedBlocks,
    /// client to the burnchain (used only for sending block-commits)
    bitcoin_controller: BitcoinRegtestController,
    /// client to the event dispatcher
    event_dispatcher: EventDispatcher,
    /// copy of the local peer state
    local_peer: LocalPeer,
    /// last time we tried to mine a block (in millis)
    last_tenure_issue_time: u128,
    /// last observed burnchain block height from the p2p thread (obtained from network results)
    last_network_block_height: u64,
    /// time at which we observed a change in the network block height (epoch time in millis)
    last_network_block_height_ts: u128,
    /// last observed number of downloader state-machine passes from the p2p thread (obtained from
    /// network results)
    last_network_download_passes: u64,
    /// last observed number of inventory state-machine passes from the p2p thread (obtained from
    /// network results)
    last_network_inv_passes: u64,
    /// minimum number of downloader state-machine passes that must take place before mining (this
    /// is used to ensure that the p2p thread attempts to download new Stacks block data before
    /// this thread tries to mine a block)
    min_network_download_passes: u64,
    /// minimum number of inventory state-machine passes that must take place before mining (this
    /// is used to ensure that the p2p thread attempts to download new Stacks block data before
    /// this thread tries to mine a block)
    min_network_inv_passes: u64,
    /// consensus hash of the last sortition we saw, even if we weren't the winner
    last_tenure_consensus_hash: Option<ConsensusHash>,
    /// tip of last tenure we won (used for mining microblocks)
    miner_tip: Option<MinerTip>,
    /// last time we mined a microblock, in millis
    last_microblock_tenure_time: u128,
    /// when should we run the next microblock tenure, in millis
    microblock_deadline: u128,
    /// cost of the last-produced microblock stream
    microblock_stream_cost: ExecutionCost,

    /// Inner relayer instance for forwarding broadcasted data back to the p2p thread for dispatch
    /// to neighbors
    relayer: Relayer,

    /// handle to the subordinate miner thread
    miner_thread: Option<JoinHandle<Option<MinerThreadResult>>>,
    /// if true, then the last time the miner thread was launched, it was used to mine a Stacks
    /// block (used to alternate between mining microblocks and Stacks blocks that confirm them)
    mined_stacks_block: bool,
    /// if true, the last time the miner thread was launched, it did not mine.
    last_attempt_failed: bool,
}

pub(crate) struct BlockMinerThread {
    /// node config struct
    config: Config,
    /// handle to global state
    globals: Globals,
    /// copy of the node's keychain
    keychain: Keychain,
    /// burnchain configuration
    burnchain: Burnchain,
    /// Set of blocks that we have mined, but are still potentially-broadcastable
    /// (copied from RelayerThread since we need the info to determine the strategy for mining the
    /// next block during this tenure).
    last_mined_blocks: MinedBlocks,
    /// Copy of the node's last ongoing block commit from the last time this thread was run
    ongoing_commit: Option<OngoingBlockCommit>,
    /// Copy of the node's registered VRF key
    registered_key: RegisteredKey,
    /// Burnchain block snapshot at the time this thread was initialized
    burn_block: BlockSnapshot,
    /// Handle to the node's event dispatcher
    event_dispatcher: EventDispatcher,
    /// Failed to submit last attempted block
    failed_to_submit_last_attempt: bool,
}

/// State representing the microblock miner.
struct MicroblockMinerThread {
    /// handle to global state
    globals: Globals,
    /// handle to chainstate DB (optional so we can take/replace it)
    chainstate: Option<StacksChainState>,
    /// handle to sortition DB (optional so we can take/replace it)
    sortdb: Option<SortitionDB>,
    /// handle to mempool DB (optional so we can take/replace it)
    mempool: Option<MemPoolDB>,
    /// Handle to the node's event dispatcher
    event_dispatcher: EventDispatcher,
    /// Parent Stacks block's sortition's consensus hash
    parent_consensus_hash: ConsensusHash,
    /// Parent Stacks block's hash
    parent_block_hash: BlockHeaderHash,
    /// Microblock signing key
    miner_key: Secp256k1PrivateKey,
    /// How often to make microblocks, in milliseconds
    frequency: u64,
    /// Epoch timestamp, in milliseconds, when the last microblock was produced
    last_mined: u128,
    /// How many microblocks produced so far
    quantity: u64,
    /// Block budget consumed so far by this tenure (initialized to the cost of the Stacks block
    /// itself; microblocks fill up the remaining budget)
    cost_so_far: ExecutionCost,
    /// Block builder settings for the microblock miner.
    settings: BlockBuilderSettings,
}

impl MicroblockMinerThread {
    /// Instantiate the miner thread state from the relayer thread.
    /// May fail if:
    /// * we didn't win the last sortition
    /// * we couldn't open or read the DBs for some reason
    /// * we couldn't find the anchored block (i.e. it's not processed yet)
    pub fn from_relayer_thread(relayer_thread: &RelayerThread) -> Option<MicroblockMinerThread> {
        let globals = relayer_thread.globals.clone();
        let config = relayer_thread.config.clone();
        let burnchain = relayer_thread.burnchain.clone();
        let miner_tip = match relayer_thread.miner_tip.clone() {
            Some(tip) => tip,
            None => {
                debug!("Relayer: cannot instantiate microblock miner: did not win Stacks tip sortition");
                return None;
            }
        };

        let stacks_chainstate_path = config.get_chainstate_path_str();
        let burn_db_path = config.get_burn_db_file_path();
        let cost_estimator = config
            .make_cost_estimator()
            .unwrap_or_else(|| Box::new(UnitEstimator));
        let metric = config
            .make_cost_metric()
            .unwrap_or_else(|| Box::new(UnitMetric));

        // NOTE: read-write access is needed in order to be able to query the recipient set.
        // This is an artifact of the way the MARF is built (see #1449)
        let sortdb = SortitionDB::open(
            &burn_db_path,
            true,
            burnchain.pox_constants,
            Some(config.node.get_marf_opts()),
        )
        .map_err(|e| {
            error!("Relayer: Could not open sortdb '{burn_db_path}' ({e:?}); skipping tenure");
            e
        })
        .ok()?;

        let mut chainstate = open_chainstate_with_faults(&config)
            .map_err(|e| {
                error!(
                    "Relayer: Could not open chainstate '{stacks_chainstate_path}' ({e:?}); skipping microblock tenure"
                );
                e
            })
            .ok()?;

        let mempool = MemPoolDB::open(
            config.is_mainnet(),
            config.burnchain.chain_id,
            &stacks_chainstate_path,
            cost_estimator,
            metric,
        )
        .expect("Database failure opening mempool");

        let MinerTip {
            consensus_hash: ch,
            block_hash: bhh,
            microblock_privkey: miner_key,
            ..
        } = miner_tip;

        debug!("Relayer: Instantiate microblock mining state off of {ch}/{bhh}");

        // we won a block! proceed to build a microblock tail if we've stored it
        match StacksChainState::get_anchored_block_header_info(chainstate.db(), &ch, &bhh) {
            Ok(Some(_)) => {
                let parent_index_hash = StacksBlockHeader::make_index_block_hash(&ch, &bhh);
                let cost_so_far = if relayer_thread.microblock_stream_cost == ExecutionCost::ZERO {
                    // unknown cost, or this is idempotent.
                    StacksChainState::get_stacks_block_anchored_cost(
                        chainstate.db(),
                        &parent_index_hash,
                    )
                    .expect("FATAL: failed to get anchored block cost")
                    .expect("FATAL: no anchored block cost stored for processed anchored block")
                } else {
                    relayer_thread.microblock_stream_cost.clone()
                };

                let frequency = config.node.microblock_frequency;
                let settings =
                    config.make_block_builder_settings(0, true, globals.get_miner_status());

                // port over unconfirmed state to this thread
                chainstate.unconfirmed_state = if let Some(unconfirmed_state) =
                    relayer_thread.chainstate_ref().unconfirmed_state.as_ref()
                {
                    Some(unconfirmed_state.make_readonly_owned().ok()?)
                } else {
                    None
                };

                Some(MicroblockMinerThread {
                    globals,
                    chainstate: Some(chainstate),
                    sortdb: Some(sortdb),
                    mempool: Some(mempool),
                    event_dispatcher: relayer_thread.event_dispatcher.clone(),
9,865✔
652
                    parent_consensus_hash: ch,
9,865✔
653
                    parent_block_hash: bhh,
9,865✔
654
                    miner_key,
9,865✔
655
                    frequency,
9,865✔
656
                    last_mined: 0,
9,865✔
657
                    quantity: 0,
9,865✔
658
                    cost_so_far,
9,865✔
659
                    settings,
9,865✔
660
                })
9,865✔
661
            }
662
            Ok(None) => {
663
                warn!("Relayer: No such anchored block: {ch}/{bhh}.  Cannot mine microblocks");
×
664
                None
×
665
            }
666
            Err(e) => {
×
667
                warn!("Relayer: Failed to get anchored block cost for {ch}/{bhh}: {e:?}");
×
668
                None
×
669
            }
670
        }
671
    }
9,865✔
672

673
    /// Do something with the inner chainstate DBs (borrowed mutably).
    /// Used to fool the borrow-checker.
    /// NOT COMPOSABLE - WILL PANIC IF CALLED FROM WITHIN ITSELF.
    fn with_chainstate<F, R>(&mut self, func: F) -> R
    where
        F: FnOnce(&mut Self, &mut SortitionDB, &mut StacksChainState, &mut MemPoolDB) -> R,
    {
        let mut sortdb = self.sortdb.take().expect("FATAL: already took sortdb");
        let mut chainstate = self
            .chainstate
            .take()
            .expect("FATAL: already took chainstate");
        let mut mempool = self.mempool.take().expect("FATAL: already took mempool");

        let res = func(self, &mut sortdb, &mut chainstate, &mut mempool);

        self.sortdb = Some(sortdb);
        self.chainstate = Some(chainstate);
        self.mempool = Some(mempool);

        res
    }

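The `Option::take`/put-back dance in `with_chainstate` is a common way to mutably borrow a struct together with a field it owns. A minimal, self-contained sketch of the same pattern, with a hypothetical `Holder` type and a `Vec<u64>` standing in for `MicroblockMinerThread` and its DB handles:

```rust
// Sketch only: `Holder`, `db`, and `with_db` are illustrative stand-ins,
// not part of the node's API.
struct Holder {
    db: Option<Vec<u64>>,
}

impl Holder {
    fn with_db<F, R>(&mut self, func: F) -> R
    where
        F: FnOnce(&mut Self, &mut Vec<u64>) -> R,
    {
        // Move the field out so `self` and the field can be mutably borrowed at once.
        let mut db = self.db.take().expect("FATAL: already took db");
        let res = func(self, &mut db);
        // Put it back so the next call succeeds; calling with_db from inside
        // `func` would hit the expect() above, hence "not composable".
        self.db = Some(db);
        res
    }
}

fn main() {
    let mut h = Holder { db: Some(vec![1, 2]) };
    let len = h.with_db(|_this, db| {
        db.push(3);
        db.len()
    });
    assert_eq!(len, 3);
    assert!(h.db.is_some());
}
```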
    /// Unconditionally mine one microblock.
    /// Can fail if the miner thread gets cancelled (most likely cause), or if there's some kind of
    /// DB error.
    fn inner_mine_one_microblock(
        &mut self,
        sortdb: &SortitionDB,
        chainstate: &mut StacksChainState,
        mempool: &mut MemPoolDB,
    ) -> Result<StacksMicroblock, ChainstateError> {
        debug!(
            "Try to mine one microblock off of {}/{} (total: {})",
            &self.parent_consensus_hash,
            &self.parent_block_hash,
            chainstate
                .unconfirmed_state
                .as_ref()
                .map(|us| us.num_microblocks())
                .unwrap_or(0)
        );

        let block_snapshot =
            SortitionDB::get_block_snapshot_consensus(sortdb.conn(), &self.parent_consensus_hash)
                .map_err(|e| {
                    error!("Failed to find block snapshot for mined block: {e}");
                    e
                })?
                .ok_or_else(|| {
                    error!("Failed to find block snapshot for mined block");
                    ChainstateError::NoSuchBlockError
                })?;
        let burn_height = block_snapshot.block_height;

        let epoch_id = SortitionDB::get_stacks_epoch(sortdb.conn(), burn_height)
            .map_err(|e| {
                error!("Failed to get epoch for microblock: {e}");
                e
            })?
            .expect("FATAL: no epoch defined")
            .epoch_id;

        let mint_result = {
            let ic = sortdb.index_handle_at_block(
                chainstate,
                &block_snapshot.get_canonical_stacks_block_id(),
            )?;
            let mut microblock_miner = match StacksMicroblockBuilder::resume_unconfirmed(
                chainstate,
                &ic,
                &self.cost_so_far,
                self.settings.clone(),
            ) {
                Ok(x) => x,
                Err(e) => {
                    let msg = format!(
                        "Failed to create a microblock miner at chaintip {}/{}: {e:?}",
                        &self.parent_consensus_hash, &self.parent_block_hash
                    );
                    error!("{msg}");
                    return Err(e);
                }
            };

            let t1 = get_epoch_time_ms();

            let mblock = microblock_miner.mine_next_microblock(
                mempool,
                &self.miner_key,
                &self.event_dispatcher,
            )?;
            let new_cost_so_far = microblock_miner.get_cost_so_far().expect("BUG: cannot read cost so far from miner -- indicates that the underlying Clarity Tx is somehow in use still.");
            let t2 = get_epoch_time_ms();

            info!(
                "Mined microblock {} ({}) with {} transactions in {}ms",
                mblock.block_hash(),
                mblock.header.sequence,
                mblock.txs.len(),
                t2.saturating_sub(t1)
            );

            Ok((mblock, new_cost_so_far))
        };

        let (mined_microblock, new_cost) = match mint_result {
            Ok(x) => x,
            Err(e) => {
                warn!("Failed to mine microblock: {e}");
                return Err(e);
            }
        };

        // failsafe
        if !Relayer::static_check_problematic_relayed_microblock(
            chainstate.mainnet,
            epoch_id,
            &mined_microblock,
        ) {
            // nope!
            warn!(
                "Our mined microblock {} was problematic. Will NOT process.",
                &mined_microblock.block_hash()
            );

            #[cfg(test)]
            {
                use std::path::Path;
                if let Ok(path) = std::env::var("STACKS_BAD_BLOCKS_DIR") {
                    // record this microblock somewhere
                    if fs::metadata(&path).is_err() {
                        fs::create_dir_all(&path)
                            .unwrap_or_else(|_| panic!("FATAL: could not create '{path}'"));
                    }

                    let path = Path::new(&path);
                    let path = path.join(Path::new(&format!("{}", &mined_microblock.block_hash())));
                    let mut file = fs::File::create(&path)
                        .unwrap_or_else(|_| panic!("FATAL: could not create '{path:?}'"));

                    let mblock_bits = mined_microblock.serialize_to_vec();
                    let mblock_bits_hex = to_hex(&mblock_bits);

                    let mblock_json = format!(
                        r#"{{"microblock":"{mblock_bits_hex}","parent_consensus":"{}","parent_block":"{}"}}"#,
                        &self.parent_consensus_hash, &self.parent_block_hash
                    );
                    file.write_all(mblock_json.as_bytes()).unwrap_or_else(|_| {
                        panic!("FATAL: failed to write microblock bits to '{path:?}'")
                    });
                    info!(
                        "Fault injection: bad microblock {} saved to {}",
                        &mined_microblock.block_hash(),
                        &path.to_str().unwrap()
                    );
                }
            }
            return Err(ChainstateError::NoTransactionsToMine);
        }

        // cancelled?
        let is_miner_blocked = self
            .globals
            .get_miner_status()
            .lock()
            .expect("FATAL: mutex poisoned")
            .is_blocked();
        if is_miner_blocked {
            return Err(ChainstateError::MinerAborted);
        }

        // preprocess the microblock locally
        chainstate.preprocess_streamed_microblock(
            &self.parent_consensus_hash,
            &self.parent_block_hash,
            &mined_microblock,
        )?;

        // update unconfirmed state cost
        self.cost_so_far = new_cost;
        self.quantity += 1;
        Ok(mined_microblock)
    }

    /// Can this microblock miner mine off of this given tip?
    pub fn can_mine_on_tip(
        &self,
        consensus_hash: &ConsensusHash,
        block_hash: &BlockHeaderHash,
    ) -> bool {
        self.parent_consensus_hash == *consensus_hash && self.parent_block_hash == *block_hash
    }

    /// Body of try_mine_microblock()
    fn inner_try_mine_microblock(
        &mut self,
        miner_tip: MinerTip,
        sortdb: &SortitionDB,
        chainstate: &mut StacksChainState,
        mem_pool: &mut MemPoolDB,
    ) -> Result<Option<(StacksMicroblock, ExecutionCost)>, NetError> {
        if !self.can_mine_on_tip(&self.parent_consensus_hash, &self.parent_block_hash) {
            // not configured to mine on this tip
            return Ok(None);
        }
        if !self.can_mine_on_tip(&miner_tip.consensus_hash, &miner_tip.block_hash) {
            // this tip isn't what this miner is meant to mine on
            return Ok(None);
        }

        if self.last_mined + (self.frequency as u128) >= get_epoch_time_ms() {
            // too soon to mine
            return Ok(None);
        }

        let mut next_microblock_and_runtime = None;

        // opportunistically try and mine, but only if there are no attachable blocks in
        // recent history (i.e. in the last 10 minutes)
        let num_attachable = StacksChainState::count_attachable_staging_blocks(
            chainstate.db(),
            1,
            get_epoch_time_secs() - 600,
        )?;
        if num_attachable == 0 {
            match self.inner_mine_one_microblock(sortdb, chainstate, mem_pool) {
                Ok(microblock) => {
                    // will need to relay this
                    next_microblock_and_runtime = Some((microblock, self.cost_so_far.clone()));
                }
                Err(ChainstateError::NoTransactionsToMine) => {
                    info!("Will keep polling mempool for transactions to include in a microblock");
                }
                Err(e) => {
                    warn!("Failed to mine one microblock: {e:?}");
                }
            }
        } else {
            debug!("Will not mine microblocks yet -- have {num_attachable} attachable blocks that arrived in the last 10 minutes");
        }

        self.last_mined = get_epoch_time_ms();

        Ok(next_microblock_and_runtime)
    }

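The rate-limit check in `inner_try_mine_microblock` skips mining while fewer than `microblock_frequency` milliseconds have elapsed since the last attempt, and the `>=` means the boundary instant itself is still "too soon". A tiny sketch of that condition, with plain integers standing in for `get_epoch_time_ms()`:

```rust
// Illustrative only: mirrors `self.last_mined + (self.frequency as u128) >= now`.
fn too_soon_to_mine(last_mined_ms: u128, frequency_ms: u128, now_ms: u128) -> bool {
    last_mined_ms + frequency_ms >= now_ms
}

fn main() {
    // Last attempt at t = 1_000ms, with a 200ms frequency:
    assert!(too_soon_to_mine(1_000, 200, 1_100)); // only 100ms elapsed: skip
    assert!(too_soon_to_mine(1_000, 200, 1_200)); // exactly at the boundary: still skip
    assert!(!too_soon_to_mine(1_000, 200, 1_201)); // past the boundary: may mine
}
```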
    /// Try to mine one microblock, given the current chain tip and access to the chain state DBs.
    /// If we succeed, return the microblock and log the tx events to the given event dispatcher.
    /// May return None if any of the following are true:
    /// * `miner_tip` does not match this miner's miner tip
    /// * it's been too soon (less than microblock_frequency milliseconds) since we tried this call
    /// * there are simply no transactions to mine
    /// * there are still stacks blocks to be processed in the staging db
    /// * the miner thread got cancelled
    pub fn try_mine_microblock(
        &mut self,
        cur_tip: MinerTip,
    ) -> Result<Option<(StacksMicroblock, ExecutionCost)>, NetError> {
        debug!("microblock miner thread ID is {:?}", thread::current().id());
        self.with_chainstate(|mblock_miner, sortdb, chainstate, mempool| {
            mblock_miner.inner_try_mine_microblock(cur_tip, sortdb, chainstate, mempool)
        })
    }
}

/// Candidate chain tip
#[derive(Debug, Clone, PartialEq)]
pub struct TipCandidate {
    pub stacks_height: u64,
    pub consensus_hash: ConsensusHash,
    pub anchored_block_hash: BlockHeaderHash,
    pub parent_consensus_hash: ConsensusHash,
    pub parent_anchored_block_hash: BlockHeaderHash,
    /// the block's sortition's burnchain height
    pub burn_height: u64,
    /// the number of Stacks blocks *at the same height* as this one, but from earlier sortitions
    /// than `burn_height`
    pub num_earlier_siblings: u64,
}

impl TipCandidate {
    pub fn id(&self) -> StacksBlockId {
        StacksBlockId::new(&self.consensus_hash, &self.anchored_block_hash)
    }

    pub fn parent_id(&self) -> StacksBlockId {
        StacksBlockId::new(
            &self.parent_consensus_hash,
            &self.parent_anchored_block_hash,
        )
    }

    pub fn new(tip: StagingBlock, burn_height: u64) -> Self {
        Self {
            stacks_height: tip.height,
            consensus_hash: tip.consensus_hash,
            anchored_block_hash: tip.anchored_block_hash,
            parent_consensus_hash: tip.parent_consensus_hash,
            parent_anchored_block_hash: tip.parent_anchored_block_hash,
            burn_height,
            num_earlier_siblings: 0,
        }
    }
}

impl BlockMinerThread {
    /// Instantiate the miner thread from its parent RelayerThread
    pub fn from_relayer_thread(
        rt: &RelayerThread,
        registered_key: RegisteredKey,
        burn_block: BlockSnapshot,
    ) -> BlockMinerThread {
        BlockMinerThread {
            config: rt.config.clone(),
            globals: rt.globals.clone(),
            keychain: rt.keychain.clone(),
            burnchain: rt.burnchain.clone(),
            last_mined_blocks: rt.last_mined_blocks.clone(),
            ongoing_commit: rt.bitcoin_controller.get_ongoing_commit(),
            registered_key,
            burn_block,
            event_dispatcher: rt.event_dispatcher.clone(),
            failed_to_submit_last_attempt: rt.last_attempt_failed,
        }
    }

    /// Get the coinbase recipient address, if set in the config and if allowed in this epoch
    fn get_coinbase_recipient(&self, epoch_id: StacksEpochId) -> Option<PrincipalData> {
        let miner_config = self.config.get_miner_config();
        if epoch_id < StacksEpochId::Epoch21 && miner_config.block_reward_recipient.is_some() {
            warn!("Coinbase pay-to-contract is not supported in the current epoch");
            None
        } else {
            miner_config.block_reward_recipient
        }
    }

    /// Create a coinbase transaction.
    fn inner_generate_coinbase_tx(
        &mut self,
        nonce: u64,
        epoch_id: StacksEpochId,
    ) -> StacksTransaction {
        let is_mainnet = self.config.is_mainnet();
        let chain_id = self.config.burnchain.chain_id;
        let mut tx_auth = self.keychain.get_transaction_auth().unwrap();
        tx_auth.set_origin_nonce(nonce);

        let version = if is_mainnet {
            TransactionVersion::Mainnet
        } else {
            TransactionVersion::Testnet
        };

        let recipient_opt = self.get_coinbase_recipient(epoch_id);
        let mut tx = StacksTransaction::new(
            version,
            tx_auth,
            TransactionPayload::Coinbase(CoinbasePayload([0u8; 32]), recipient_opt, None),
        );
        tx.chain_id = chain_id;
        tx.anchor_mode = TransactionAnchorMode::OnChainOnly;
        let mut tx_signer = StacksTransactionSigner::new(&tx);
        self.keychain.sign_as_origin(&mut tx_signer);

        tx_signer.get_tx().unwrap()
    }

    /// Create a poison microblock transaction.
    fn inner_generate_poison_microblock_tx(
        &mut self,
        nonce: u64,
        poison_payload: TransactionPayload,
    ) -> StacksTransaction {
        let is_mainnet = self.config.is_mainnet();
        let chain_id = self.config.burnchain.chain_id;
        let mut tx_auth = self.keychain.get_transaction_auth().unwrap();
        tx_auth.set_origin_nonce(nonce);

        let version = if is_mainnet {
            TransactionVersion::Mainnet
        } else {
            TransactionVersion::Testnet
        };
        let mut tx = StacksTransaction::new(version, tx_auth, poison_payload);
        tx.chain_id = chain_id;
        tx.anchor_mode = TransactionAnchorMode::OnChainOnly;
        let mut tx_signer = StacksTransactionSigner::new(&tx);
        self.keychain.sign_as_origin(&mut tx_signer);

        tx_signer.get_tx().unwrap()
    }

    /// Constructs and returns a LeaderBlockCommitOp out of the provided params.
    #[allow(clippy::too_many_arguments)]
    fn inner_generate_block_commit_op(
        &self,
        block_header_hash: BlockHeaderHash,
        burn_fee: u64,
        key: &RegisteredKey,
        parent_burnchain_height: u32,
        parent_winning_vtx: u16,
        vrf_seed: VRFSeed,
        commit_outs: Vec<PoxAddress>,
        sunset_burn: u64,
        current_burn_height: u64,
    ) -> BlockstackOperationType {
        let (parent_block_ptr, parent_vtxindex) = (parent_burnchain_height, parent_winning_vtx);
        let burn_parent_modulus = (current_burn_height % BURN_BLOCK_MINED_AT_MODULUS) as u8;
        let sender = self.keychain.get_burnchain_signer();
        BlockstackOperationType::LeaderBlockCommit(LeaderBlockCommitOp {
            treatment: vec![],
            sunset_burn,
            block_header_hash,
            burn_fee,
            input: (Txid([0; 32]), 0),
            apparent_sender: sender,
            key_block_ptr: key.block_height as u32,
            key_vtxindex: key.op_vtxindex as u16,
            memo: vec![STACKS_EPOCH_3_0_MARKER],
            new_seed: vrf_seed,
            parent_block_ptr,
            parent_vtxindex,
            vtxindex: 0,
            txid: Txid([0u8; 32]),
            block_height: 0,
            burn_header_hash: BurnchainHeaderHash::zero(),
            burn_parent_modulus,
            commit_outs,
            // unused
            descends_from_anchor_block: true,
        })
    }

    /// Get references to the inner assembled anchor block data we've produced for a given burnchain block height
    fn find_inflight_mined_blocks(
        burn_height: u64,
        last_mined_blocks: &MinedBlocks,
    ) -> Vec<&AssembledAnchorBlock> {
        let mut ret = vec![];
        for (_, (assembled_block, _)) in last_mined_blocks.iter() {
            if assembled_block.burn_block_height >= burn_height {
                ret.push(assembled_block);
            }
        }
        ret
    }

    /// Is a given Stacks staging block on the canonical burnchain fork?
    pub(crate) fn is_on_canonical_burnchain_fork(
        candidate_ch: &ConsensusHash,
        candidate_bh: &BlockHeaderHash,
        sortdb_tip_handle: &SortitionHandleConn,
    ) -> bool {
        let candidate_burn_ht = match SortitionDB::get_block_snapshot_consensus(
            sortdb_tip_handle.conn(),
            candidate_ch,
        ) {
            Ok(Some(x)) => x.block_height,
            Ok(None) => {
                warn!("Tried to evaluate potential chain tip with an unknown consensus hash";
                      "consensus_hash" => %candidate_ch,
                      "stacks_block_hash" => %candidate_bh);
                return false;
            }
            Err(e) => {
                warn!("Error while trying to evaluate potential chain tip with an unknown consensus hash";
                      "consensus_hash" => %candidate_ch,
                      "stacks_block_hash" => %candidate_bh,
                      "err" => ?e);
                return false;
            }
        };
        let tip_ch = match sortdb_tip_handle.get_consensus_at(candidate_burn_ht) {
            Ok(Some(x)) => x,
            Ok(None) => {
                warn!("Tried to evaluate potential chain tip with a consensus hash ahead of canonical tip";
                      "consensus_hash" => %candidate_ch,
                      "stacks_block_hash" => %candidate_bh);
                return false;
            }
            Err(e) => {
                warn!("Error while trying to evaluate potential chain tip with an unknown consensus hash";
                      "consensus_hash" => %candidate_ch,
                      "stacks_block_hash" => %candidate_bh,
                      "err" => ?e);
                return false;
            }
        };
        &tip_ch == candidate_ch
    }

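The canonical-fork check above boils down to: look up the canonical consensus hash recorded at the candidate's own burn height, and require it to equal the candidate's consensus hash; a missing entry (height ahead of the canonical tip, or unknown hash) means "not canonical". A hedged sketch of that decision, with a `HashMap` as a hypothetical stand-in for the sortition index:

```rust
// Sketch only: `canonical_at_height` and string consensus hashes are
// illustrative stand-ins for the SortitionDB lookups in the real code.
use std::collections::HashMap;

fn is_on_canonical_fork(
    candidate_ch: &str,
    candidate_burn_ht: u64,
    canonical_at_height: &HashMap<u64, String>,
) -> bool {
    match canonical_at_height.get(&candidate_burn_ht) {
        // canonical hash at that height must match the candidate's own
        Some(tip_ch) => tip_ch.as_str() == candidate_ch,
        // no canonical entry at this height: cannot be on the canonical fork
        None => false,
    }
}

fn main() {
    let mut canon = HashMap::new();
    canon.insert(100u64, "ch-a".to_string());
    assert!(is_on_canonical_fork("ch-a", 100, &canon));
    assert!(!is_on_canonical_fork("ch-b", 100, &canon)); // different fork
    assert!(!is_on_canonical_fork("ch-a", 101, &canon)); // ahead of canonical tip
}
```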
    /// Load all candidate tips upon which to build.  This is all Stacks blocks whose heights are
    /// less than or equal to `at_stacks_height` (or the canonical chain tip height, if not given),
    /// but greater than or equal to this end height minus `max_depth`.
    /// Returns the list of all Stacks blocks up to max_depth blocks beneath it.
    /// The blocks will be sorted first by stacks height, and then by burnchain height
    pub(crate) fn load_candidate_tips(
        burn_db: &mut SortitionDB,
        chain_state: &mut StacksChainState,
        max_depth: u64,
        at_stacks_height: Option<u64>,
    ) -> Vec<TipCandidate> {
        let stacks_tips = if let Some(start_height) = at_stacks_height {
            chain_state
                .get_stacks_chain_tips_at_height(start_height)
                .expect("FATAL: could not query chain tips at start height")
        } else {
            chain_state
                .get_stacks_chain_tips(burn_db)
                .expect("FATAL: could not query chain tips")
        };

        if stacks_tips.is_empty() {
            return vec![];
        }

        let sortdb_tip_handle = burn_db.index_handle_at_tip();

        let stacks_tips: Vec<_> = stacks_tips
            .into_iter()
            .filter(|candidate| {
                Self::is_on_canonical_burnchain_fork(
                    &candidate.consensus_hash,
                    &candidate.anchored_block_hash,
                    &sortdb_tip_handle,
                )
            })
            .collect();

        if stacks_tips.is_empty() {
            return vec![];
        }

        let mut considered = HashSet::new();
        let mut candidates = vec![];
        let end_height = stacks_tips[0].height;

        // process these tips
        for tip in stacks_tips.into_iter() {
            let index_block_hash =
                StacksBlockId::new(&tip.consensus_hash, &tip.anchored_block_hash);
            let burn_height = burn_db
                .get_consensus_hash_height(&tip.consensus_hash)
                .expect("FATAL: could not query burnchain block height")
                .expect("FATAL: no burnchain block height for Stacks tip");
            let candidate = TipCandidate::new(tip, burn_height);
            candidates.push(candidate);
            considered.insert(index_block_hash);
        }

        // process earlier tips, back to max_depth
        for cur_height in end_height.saturating_sub(max_depth)..end_height {
            let stacks_tips = chain_state
                .get_stacks_chain_tips_at_height(cur_height)
                .expect("FATAL: could not query chain tips at height")
                .into_iter()
                .filter(|candidate| {
                    Self::is_on_canonical_burnchain_fork(
                        &candidate.consensus_hash,
                        &candidate.anchored_block_hash,
                        &sortdb_tip_handle,
                    )
                });

            for tip in stacks_tips {
                let index_block_hash =
                    StacksBlockId::new(&tip.consensus_hash, &tip.anchored_block_hash);

                if considered.insert(index_block_hash) {
                    let burn_height = burn_db
                        .get_consensus_hash_height(&tip.consensus_hash)
                        .expect("FATAL: could not query burnchain block height")
                        .expect("FATAL: no burnchain block height for Stacks tip");
                    let candidate = TipCandidate::new(tip, burn_height);
                    candidates.push(candidate);
                }
            }
        }
        Self::sort_and_populate_candidates(candidates)
    }

    /// Put all tip candidates in order by stacks height, breaking ties with burnchain height.
1257
    /// Also, count up the number of earliersiblings each tip has -- i.e. the number of stacks
1258
    /// blocks that have the same height, but a later burnchain sortition.
1259
    pub(crate) fn sort_and_populate_candidates(
193,182✔
1260
        mut candidates: Vec<TipCandidate>,
193,182✔
1261
    ) -> Vec<TipCandidate> {
193,182✔
1262
        if candidates.is_empty() {
193,182✔
1263
            return candidates;
1✔
1264
        }
193,181✔
1265
        candidates.sort_by(|tip1, tip2| {
944,033✔
1266
            // stacks block height, then burnchain block height
1267
            let ord = tip1.stacks_height.cmp(&tip2.stacks_height);
944,021✔
1268
            if ord == CmpOrdering::Equal {
944,021✔
1269
                return tip1.burn_height.cmp(&tip2.burn_height);
471✔
1270
            }
943,550✔
1271
            ord
943,550✔
1272
        });
944,021✔
1273

1274
        // calculate the number of earlier siblings for each block.
1275
        // this is the number of stacks blocks at the same height, but later burnchain heights.
1276
        let mut idx = 0;
193,181✔
1277
        let mut cur_stacks_height = candidates[idx].stacks_height;
193,181✔
1278
        let mut num_siblings = 0;
193,181✔
1279
        loop {
1280
            idx += 1;
760,636✔
1281
            if idx >= candidates.len() {
760,636✔
1282
                break;
193,181✔
1283
            }
567,455✔
1284
            if cur_stacks_height == candidates[idx].stacks_height {
567,455✔
1285
                // same stacks height, so this block has one more earlier sibling than the last
471✔
1286
                num_siblings += 1;
471✔
1287
                candidates[idx].num_earlier_siblings = num_siblings;
471✔
1288
            } else {
566,984✔
1289
                // new stacks height, so no earlier siblings
566,984✔
1290
                num_siblings = 0;
566,984✔
1291
                cur_stacks_height = candidates[idx].stacks_height;
566,984✔
1292
                candidates[idx].num_earlier_siblings = 0;
566,984✔
1293
            }
566,984✔
1294
        }
1295

1296
        candidates
193,181✔
1297
    }
193,182✔
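The ordering and sibling-counting logic above can be sketched standalone. This is a minimal illustration, not the node's API: `Tip` is a hypothetical stand-in for `TipCandidate` carrying only the fields the algorithm uses, and `Ordering::then` replaces the explicit equality check with identical behavior.

```rust
// Minimal stand-in for `TipCandidate` (illustration only).
#[derive(Debug, Clone)]
struct Tip {
    stacks_height: u64,
    burn_height: u64,
    num_earlier_siblings: u64,
}

// Sort by Stacks height, tie-broken by burnchain height, then count each
// tip's same-height siblings with earlier sortitions -- mirroring
// `sort_and_populate_candidates`.
fn sort_and_populate(mut tips: Vec<Tip>) -> Vec<Tip> {
    if tips.is_empty() {
        return tips;
    }
    tips.sort_by(|a, b| {
        a.stacks_height
            .cmp(&b.stacks_height)
            .then(a.burn_height.cmp(&b.burn_height))
    });
    let mut num_siblings = 0;
    for i in 1..tips.len() {
        if tips[i].stacks_height == tips[i - 1].stacks_height {
            // same Stacks height: one more earlier sibling than the last tip
            num_siblings += 1;
        } else {
            // new Stacks height: no earlier siblings
            num_siblings = 0;
        }
        tips[i].num_earlier_siblings = num_siblings;
    }
    tips
}

fn main() {
    let sorted = sort_and_populate(vec![
        Tip { stacks_height: 11, burn_height: 102, num_earlier_siblings: 0 },
        Tip { stacks_height: 10, burn_height: 100, num_earlier_siblings: 0 },
        Tip { stacks_height: 11, burn_height: 101, num_earlier_siblings: 0 },
    ]);
    // the later-sortition'ed tip at height 11 has one earlier sibling
    assert_eq!(sorted[2].burn_height, 102);
    assert_eq!(sorted[2].num_earlier_siblings, 1);
    println!("{sorted:?}");
}
```

Because the sort is ascending in burnchain height within each Stacks height, the i-th tip at a given height ends up with i earlier-sortition'ed siblings, which is what the scoring pass consumes.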

    /// Select the best tip to mine the next block on. Potential tips are all
    /// leaf nodes where the Stacks block height is <= the max height -
    /// max_reorg_depth. Each potential tip is then scored based on the number
    /// of orphans that its chain has caused -- that is, the number of orphans
    /// that the tip _and all of its ancestors_ (up to `max_depth`) created.
    /// The tip with the lowest score is composed of blocks that collectively made the fewest
    /// orphans, and is thus the "nicest" chain with the least orphaning.  This is the tip that is
    /// selected.
    pub fn pick_best_tip(
        globals: &Globals,
        config: &Config,
        burn_db: &mut SortitionDB,
        chain_state: &mut StacksChainState,
        at_stacks_height: Option<u64>,
    ) -> Option<TipCandidate> {
        debug!("Picking best Stacks tip");
        let miner_config = config.get_miner_config();
        let max_depth = miner_config.max_reorg_depth;

        // There could be more than one possible chain tip. Go find them.
        let stacks_tips =
            Self::load_candidate_tips(burn_db, chain_state, max_depth, at_stacks_height);

        let mut previous_best_tips = HashMap::new();
        let sortdb_tip_handle = burn_db.index_handle_at_tip();
        for tip in stacks_tips.iter() {
            let Some(prev_best_tip) = globals.get_best_tip(tip.stacks_height) else {
                continue;
            };
            if !Self::is_on_canonical_burnchain_fork(
                &prev_best_tip.consensus_hash,
                &prev_best_tip.anchored_block_hash,
                &sortdb_tip_handle,
            ) {
                continue;
            }
            previous_best_tips.insert(tip.stacks_height, prev_best_tip);
        }

        let best_tip_opt = Self::inner_pick_best_tip(stacks_tips, previous_best_tips);
        if let Some(best_tip) = best_tip_opt.as_ref() {
            globals.add_best_tip(best_tip.stacks_height, best_tip.clone(), max_depth);
        } else {
            // no best-tip found; revert to old tie-breaker logic
            debug!("No best-tips found; using old tie-breaking logic");
            return chain_state
                .get_stacks_chain_tip(burn_db)
                .expect("FATAL: could not load chain tip")
                .map(|staging_block| {
                    let burn_height = burn_db
                        .get_consensus_hash_height(&staging_block.consensus_hash)
                        .expect("FATAL: could not query burnchain block height")
                        .expect("FATAL: no burnchain block height for Stacks tip");
                    TipCandidate::new(staging_block, burn_height)
                });
        }
        best_tip_opt
    }

    /// Given a list of sorted candidate tips, pick the best one.  See `Self::pick_best_tip()`.
    /// Takes the list of stacks tips that are eligible to be built on, and a map of
    /// previously-chosen best tips (so if we chose a tip in the past, we keep confirming it, even
    /// if subsequent stacks blocks show up).  The previous best tips should be from recent Stacks
    /// heights; it's important that older best-tips are forgotten in order to ensure that miners
    /// will eventually (e.g. after `max_reorg_depth` Stacks blocks pass) stop trying to confirm a
    /// now-orphaned previously-chosen best-tip.  If there are multiple best-tips that conflict in
    /// `previous_best_tips`, then only the highest one which the leaf could confirm will be
    /// considered (since the node updates its understanding of the best-tip on each RunTenure).
    pub(crate) fn inner_pick_best_tip(
        stacks_tips: Vec<TipCandidate>,
        previous_best_tips: HashMap<u64, TipCandidate>,
    ) -> Option<TipCandidate> {
        // identify leaf tips -- i.e. blocks with no children
        let parent_consensus_hashes: HashSet<_> = stacks_tips
            .iter()
            .map(|x| x.parent_consensus_hash.clone())
            .collect();

        let mut leaf_tips: Vec<_> = stacks_tips
            .iter()
            .filter(|x| !parent_consensus_hashes.contains(&x.consensus_hash))
            .collect();

        if leaf_tips.is_empty() {
            return None;
        }

        // Make scoring deterministic in the case of a tie.
        // Prefer leaves that were mined earlier on the burnchain,
        // but which pass through previously-determined best tips.
        leaf_tips.sort_by(|tip1, tip2| {
            // stacks block height, then burnchain block height
            let ord = tip1.stacks_height.cmp(&tip2.stacks_height);
            if ord == CmpOrdering::Equal {
                return tip1.burn_height.cmp(&tip2.burn_height);
            }
            ord
        });

        let mut scores = BTreeMap::new();
        for (i, leaf_tip) in leaf_tips.iter().enumerate() {
            let leaf_id = leaf_tip.id();
            // Score each leaf tip as the number of preceding Stacks blocks that are _not_ an
            // ancestor.  Because stacks_tips are in order by stacks height, a linear scan of this
            // list will allow us to match all ancestors in the last max_depth Stacks blocks.
            // `ancestor_ptr` tracks the next expected ancestor.
            let mut ancestor_ptr = leaf_tip.parent_id();
            let mut score: u64 = 0;
            let mut score_summaries = vec![];

            // find the highest stacks_tip we must confirm
            let mut must_confirm = None;
            for tip in stacks_tips.iter().rev() {
                if let Some(prev_best_tip) = previous_best_tips.get(&tip.stacks_height) {
                    if leaf_id != prev_best_tip.id() {
                        // the `ancestor_ptr` must pass through this prior best-tip
                        must_confirm = Some(prev_best_tip.clone());
                        break;
                    }
                }
            }

            for tip in stacks_tips.iter().rev() {
                if let Some(required_ancestor) = must_confirm.as_ref() {
                    if tip.stacks_height < required_ancestor.stacks_height
                        && leaf_tip.stacks_height >= required_ancestor.stacks_height
                    {
                        // This leaf does not confirm a previous-best-tip, so assign it the
                        // worst-possible score.
                        info!("Tip #{i} {}/{} at {}:{} conflicts with a previous best-tip {}/{} at {}:{}",
                              &leaf_tip.consensus_hash,
                              &leaf_tip.anchored_block_hash,
                              leaf_tip.burn_height,
                              leaf_tip.stacks_height,
                              &required_ancestor.consensus_hash,
                              &required_ancestor.anchored_block_hash,
                              required_ancestor.burn_height,
                              required_ancestor.stacks_height
                        );
                        score = u64::MAX;
                        score_summaries.push(format!("{} (best-tip reorged)", u64::MAX));
                        break;
                    }
                }
                if tip.id() == leaf_id {
                    // we can't orphan ourselves
                    continue;
                }
                if leaf_tip.stacks_height < tip.stacks_height {
                    // this tip is further along than leaf_tip, so canonicalizing leaf_tip would
                    // orphan `tip.stacks_height - leaf_tip.stacks_height` blocks.
                    score = score.saturating_add(tip.stacks_height - leaf_tip.stacks_height);
                    score_summaries.push(format!(
                        "{} (stx height diff)",
                        tip.stacks_height - leaf_tip.stacks_height
                    ));
                } else if leaf_tip.stacks_height == tip.stacks_height
                    && leaf_tip.burn_height > tip.burn_height
                {
                    // this tip has the same stacks height as the leaf, but its sortition happened
                    // earlier. This means that the leaf is trying to orphan this block and all
                    // blocks sortition'ed up to this leaf.  The miner should have instead tried to
                    // confirm this existing tip, instead of mining a sibling.
                    score = score.saturating_add(tip.num_earlier_siblings + 1);
                    score_summaries.push(format!("{} (uncles)", tip.num_earlier_siblings + 1));
                }
                if tip.id() == ancestor_ptr {
                    // did we confirm a previous best-tip? If so, then clear this
                    if let Some(required_ancestor) = must_confirm.take() {
                        if required_ancestor.id() != tip.id() {
                            // did not confirm, so restore it
                            must_confirm = Some(required_ancestor);
                        }
                    }

                    // this stacks tip is the next ancestor.  However, that ancestor may have
                    // earlier-sortition'ed siblings that confirming this tip would orphan, so count those.
                    ancestor_ptr = tip.parent_id();
                    score = score.saturating_add(tip.num_earlier_siblings);
                    score_summaries.push(format!("{} (earlier sibs)", tip.num_earlier_siblings));
                } else {
                    // this stacks tip is not an ancestor, and would be orphaned if leaf_tip is
                    // canonical.
                    score = score.saturating_add(1);
                    score_summaries.push(format!("{} (non-ancestor)", 1));
                }
            }

            debug!(
                "Tip #{i} {}/{} at {}:{} has score {score} ({})",
                &leaf_tip.consensus_hash,
                &leaf_tip.anchored_block_hash,
                leaf_tip.burn_height,
                leaf_tip.stacks_height,
                score_summaries.join(" + ")
            );
            if score < u64::MAX {
                scores.insert(i, score);
            }
        }

        if scores.is_empty() {
            // revert to prior tie-breaking scheme
            return None;
        }

        // The lowest score is the "nicest" tip (least amount of orphaning)
        let best_tip_idx = scores
            .iter()
            .min_by_key(|(_, score)| *score)
            .expect("FATAL: candidates should not be empty here")
            .0;

        let best_tip = leaf_tips
            .get(*best_tip_idx)
            .expect("FATAL: candidates should not be empty");

        debug!(
            "Best tip is #{best_tip_idx} {}/{}",
            &best_tip.consensus_hash, &best_tip.anchored_block_hash
        );
        Some((*best_tip).clone())
    }
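Two pieces of `inner_pick_best_tip` lend themselves to a standalone sketch: leaf identification (a block is a leaf if its hash appears as nobody's parent) and lowest-score selection. The pairs-of-strings representation below is a hypothetical simplification; the real tips carry consensus hashes and anchored block hashes.

```rust
use std::collections::{BTreeMap, HashSet};

// Returns the ids of blocks with no children. `tips` are (id, parent_id) pairs,
// mirroring the parent-hash-set filter in `inner_pick_best_tip`.
fn leaf_ids<'a>(tips: &'a [(&'a str, &'a str)]) -> Vec<&'a str> {
    let parents: HashSet<&str> = tips.iter().map(|(_, p)| *p).collect();
    tips.iter()
        .map(|(id, _)| *id)
        .filter(|id| !parents.contains(id))
        .collect()
}

// Picks the leaf index with the minimal orphan score, as the real code does
// over its `scores` map.
fn best_index(scores: &BTreeMap<usize, u64>) -> Option<usize> {
    scores.iter().min_by_key(|(_, s)| **s).map(|(i, _)| *i)
}

fn main() {
    // a -> b -> c, plus a competing child d of b: the leaves are c and d
    let tips = [("b", "a"), ("c", "b"), ("d", "b")];
    assert_eq!(leaf_ids(&tips), vec!["c", "d"]);

    let scores = BTreeMap::from([(0usize, 3u64), (1, 1), (2, 2)]);
    assert_eq!(best_index(&scores), Some(1));
}
```

Note that `Iterator::min_by_key` returns the first minimal element, and a `BTreeMap` iterates in ascending key order; since the keys are indices into the sorted leaf list, a tie resolves to the leaf with the earlier sortition, matching the "make scoring deterministic" comment above.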

    // TODO: add tests from mutation testing results #4870
    #[cfg_attr(test, mutants::skip)]
    /// Load up the parent block info for mining.
    /// If there's no parent because this is the first block, then return the genesis block's info.
    /// If we can't find the parent in the DB but we expect one, return None.
    fn load_block_parent_info(
        &self,
        burn_db: &mut SortitionDB,
        chain_state: &mut StacksChainState,
    ) -> (Option<ParentStacksBlockInfo>, bool) {
        if let Some(stacks_tip) = chain_state
            .get_stacks_chain_tip(burn_db)
            .expect("FATAL: could not query chain tip")
        {
            let best_stacks_tip =
                Self::pick_best_tip(&self.globals, &self.config, burn_db, chain_state, None)
                    .expect("FATAL: no best chain tip");
            let miner_address = self
                .keychain
                .origin_address(self.config.is_mainnet())
                .unwrap();
            let parent_info = match ParentStacksBlockInfo::lookup(
                chain_state,
                burn_db,
                &self.burn_block,
                miner_address,
                &best_stacks_tip.consensus_hash,
                &best_stacks_tip.anchored_block_hash,
            ) {
                Ok(parent_info) => Some(parent_info),
                Err(Error::BurnchainTipChanged) => {
                    self.globals.counters.bump_missed_tenures();
                    None
                }
                Err(..) => None,
            };
            if parent_info.is_none() {
                warn!(
                    "No parent for best-tip {}/{}",
                    &best_stacks_tip.consensus_hash, &best_stacks_tip.anchored_block_hash
                );
            }
            let canonical = best_stacks_tip.consensus_hash == stacks_tip.consensus_hash
                && best_stacks_tip.anchored_block_hash == stacks_tip.anchored_block_hash;
            (parent_info, canonical)
        } else {
            debug!("No Stacks chain tip known, will return a genesis block");
            let burnchain_params = burnchain_params_from_config(&self.config.burnchain);

            let chain_tip = ChainTip::genesis(
                &burnchain_params.first_block_hash,
                burnchain_params.first_block_height,
                burnchain_params.first_block_timestamp.into(),
            );

            (
                Some(ParentStacksBlockInfo {
                    stacks_parent_header: chain_tip.metadata,
                    parent_consensus_hash: FIRST_BURNCHAIN_CONSENSUS_HASH,
                    parent_block_burn_height: 0,
                    parent_block_total_burn: 0,
                    parent_winning_vtxindex: 0,
                    coinbase_nonce: 0,
                }),
                true,
            )
        }
    }

    /// Determine which attempt this will be when mining a block, and whether or not an attempt
    /// should even be made.
    /// Returns Some(attempt, max-txs) if we should attempt to mine (and what attempt it will be)
    /// Returns None if we should not mine.
    fn get_mine_attempt(
        &self,
        chain_state: &StacksChainState,
        parent_block_info: &ParentStacksBlockInfo,
        force: bool,
    ) -> Option<(u64, u64)> {
        let parent_consensus_hash = &parent_block_info.parent_consensus_hash;
        let stacks_parent_header = &parent_block_info.stacks_parent_header;
        let parent_block_burn_height = parent_block_info.parent_block_burn_height;

        let last_mined_blocks =
            Self::find_inflight_mined_blocks(self.burn_block.block_height, &self.last_mined_blocks);

        // has the tip changed from our previously-mined block for this epoch?
        let should_unconditionally_mine = last_mined_blocks.is_empty()
            || (last_mined_blocks.len() == 1 && !self.failed_to_submit_last_attempt);
        let (attempt, max_txs) = if should_unconditionally_mine {
            // always mine if we've not mined a block for this epoch yet, or
            // if we've mined just one attempt, unconditionally try again (so we
            // can use `subsequent_miner_time_ms` in this attempt)
            if last_mined_blocks.len() == 1 {
                info!("Have only attempted one block; unconditionally trying again");
            }
            let attempt = last_mined_blocks.len() as u64 + 1;
            let mut max_txs = 0;
            for last_mined_block in last_mined_blocks.iter() {
                max_txs = cmp::max(max_txs, last_mined_block.anchored_block.txs.len());
            }
            (attempt, max_txs)
        } else {
            let mut best_attempt = 0;
            let mut max_txs = 0;
            debug!(
                "Consider {} in-flight Stacks tip(s)",
                &last_mined_blocks.len()
            );
            for prev_block in last_mined_blocks.iter() {
                debug!(
                    "Consider in-flight block {} on Stacks tip {}/{} in {} with {} txs",
                    &prev_block.anchored_block.block_hash(),
                    &prev_block.parent_consensus_hash,
                    &prev_block.anchored_block.header.parent_block,
                    &prev_block.burn_hash,
                    &prev_block.anchored_block.txs.len()
                );
                max_txs = cmp::max(max_txs, prev_block.anchored_block.txs.len());

                if prev_block.parent_consensus_hash == *parent_consensus_hash
                    && prev_block.burn_hash == self.burn_block.burn_header_hash
                    && prev_block.anchored_block.header.parent_block
                        == stacks_parent_header.anchored_header.block_hash()
                {
                    // the anchored chain tip hasn't changed since we attempted to build a block.
                    // But have we discovered any new microblocks worthy of being mined?
                    if let Ok(Some(stream)) =
                        StacksChainState::load_descendant_staging_microblock_stream(
                            chain_state.db(),
                            &StacksBlockHeader::make_index_block_hash(
                                &prev_block.parent_consensus_hash,
                                &stacks_parent_header.anchored_header.block_hash(),
                            ),
                            0,
                            u16::MAX,
                        )
                    {
                        if (prev_block.anchored_block.header.parent_microblock
                            == BlockHeaderHash([0u8; 32])
                            && stream.is_empty())
                            || (prev_block.anchored_block.header.parent_microblock
                                != BlockHeaderHash([0u8; 32])
                                && stream.len()
                                    <= (prev_block.anchored_block.header.parent_microblock_sequence
                                        as usize)
                                        + 1)
                        {
                            if !force {
                                // the chain tip hasn't changed since we attempted to build a block.  Use what we
                                // already have.
                                debug!("Relayer: Stacks tip is unchanged since we last tried to mine a block off of {}/{} at height {} with {} txs, in {} at burn height {parent_block_burn_height}, and no new microblocks ({} <= {} + 1)",
                                       &prev_block.parent_consensus_hash, &prev_block.anchored_block.header.parent_block, prev_block.anchored_block.header.total_work.work,
                                       prev_block.anchored_block.txs.len(), prev_block.burn_hash, stream.len(), prev_block.anchored_block.header.parent_microblock_sequence);

                                return None;
                            }
                        } else {
                            // there are new microblocks!
                            // TODO: only consider rebuilding our anchored block if we (a) have
                            // time, and (b) the new microblocks are worth more than the new BTC
                            // fee minus the old BTC fee
                            debug!("Relayer: Stacks tip is unchanged since we last tried to mine a block off of {}/{} at height {} with {} txs, in {} at burn height {parent_block_burn_height}, but there are new microblocks ({} > {} + 1)",
                                   &prev_block.parent_consensus_hash, &prev_block.anchored_block.header.parent_block, prev_block.anchored_block.header.total_work.work,
                                   prev_block.anchored_block.txs.len(), prev_block.burn_hash, stream.len(), prev_block.anchored_block.header.parent_microblock_sequence);

                            best_attempt = cmp::max(best_attempt, prev_block.attempt);
                        }
                    } else if !force {
                        // no microblock stream to confirm, and the stacks tip hasn't changed
                        debug!("Relayer: Stacks tip is unchanged since we last tried to mine a block off of {}/{} at height {} with {} txs, in {} at burn height {parent_block_burn_height}, and no microblocks present",
                                &prev_block.parent_consensus_hash, &prev_block.anchored_block.header.parent_block, prev_block.anchored_block.header.total_work.work,
                                prev_block.anchored_block.txs.len(), prev_block.burn_hash);

                        return None;
                    }
                } else if self.burn_block.burn_header_hash == prev_block.burn_hash {
                    // only try and re-mine if there was no sortition since the last chain tip
                    info!("Relayer: Stacks tip has changed to {parent_consensus_hash}/{} since we last tried to mine a block in {} at burn height {parent_block_burn_height}; attempt was {} (for Stacks tip {}/{})",
                            stacks_parent_header.anchored_header.block_hash(), prev_block.burn_hash, prev_block.attempt, &prev_block.parent_consensus_hash, &prev_block.anchored_block.header.parent_block);
                    best_attempt = cmp::max(best_attempt, prev_block.attempt);
                    // Since the chain tip has changed, we should try to mine a new block, even
                    // if it has fewer transactions than the previous block we mined, since that
                    // previous block would now be a reorg.
                    max_txs = 0;
                } else {
                    info!("Relayer: Burn tip has changed to {} ({}) since we last tried to mine a block in {}",
                            &self.burn_block.burn_header_hash, self.burn_block.block_height, &prev_block.burn_hash);
                }
            }
            (best_attempt + 1, max_txs)
        };
        Some((attempt, u64::try_from(max_txs).expect("too many txs")))
    }
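The two branches above reduce to a small rule: if there are no in-flight blocks, or exactly one that was submitted successfully, mine unconditionally with attempt = in-flight count + 1; otherwise bump the best previous attempt. A toy sketch under those assumptions, with `InFlight` as a hypothetical stand-in for the real in-flight block records (it omits the tip-change and microblock checks that can cause the real method to return `None`):

```rust
// Hypothetical stand-in for an in-flight mined-block record.
struct InFlight {
    attempt: u64,
    num_txs: usize,
}

// Simplified attempt-counting rule from `get_mine_attempt`.
fn next_attempt(last_mined: &[InFlight], failed_to_submit_last: bool) -> (u64, usize) {
    let max_txs = last_mined.iter().map(|b| b.num_txs).max().unwrap_or(0);
    let unconditional =
        last_mined.is_empty() || (last_mined.len() == 1 && !failed_to_submit_last);
    if unconditional {
        // attempt number is one more than the number of in-flight blocks
        (last_mined.len() as u64 + 1, max_txs)
    } else {
        // re-mine on top of the best previous attempt
        let best = last_mined.iter().map(|b| b.attempt).max().unwrap_or(0);
        (best + 1, max_txs)
    }
}

fn main() {
    // nothing in flight: first attempt, no prior tx count to beat
    assert_eq!(next_attempt(&[], false), (1, 0));
    // one successfully-submitted block: unconditionally try again
    let one = [InFlight { attempt: 1, num_txs: 5 }];
    assert_eq!(next_attempt(&one, false), (2, 5));
}
```

The tracked `max_txs` is the bar a rebuilt block must clear to be worth replacing an earlier attempt; the real code resets it to 0 when the Stacks tip changes, since the earlier attempt would then be a reorg.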

    /// Generate the VRF proof for the block we're going to build.
    /// Returns Some(proof) if we could make the proof
    /// Returns None if we could not make the proof
    fn make_vrf_proof(&mut self) -> Option<VRFProof> {
        // if we're a mock miner, then make sure that the keychain has a keypair for the mocked VRF
        // key
        let vrf_proof = if self.config.get_node_config(false).mock_mining {
            self.keychain.generate_proof(
                VRF_MOCK_MINER_KEY,
                self.burn_block.sortition_hash.as_bytes(),
            )
        } else {
            self.keychain.generate_proof(
                self.registered_key.target_block_height,
                self.burn_block.sortition_hash.as_bytes(),
            )
        };

        let Some(vrf_proof) = vrf_proof else {
            error!(
                "Unable to generate VRF proof, will be unable to mine";
                "burn_block_sortition_hash" => %self.burn_block.sortition_hash,
                "burn_block_block_height" => %self.burn_block.block_height,
                "burn_block_hash" => %self.burn_block.burn_header_hash,
                "vrf_pubkey" => &self.registered_key.vrf_public_key.to_hex()
            );
            return None;
        };

        debug!(
            "Generated VRF Proof: {} over {} ({},{}) with key {}",
            vrf_proof.to_hex(),
            &self.burn_block.sortition_hash,
            &self.burn_block.block_height,
            &self.burn_block.burn_header_hash,
            &self.registered_key.vrf_public_key.to_hex()
        );
        Some(vrf_proof)
    }

    /// Get the microblock private key we'll be using for this tenure, should we win.
    /// Return the private key.
    ///
    /// In testing, we ignore the parent stacks block hash because we don't have an easy way to
    /// reproduce it in integration tests.
    #[cfg(not(test))]
    fn make_microblock_private_key(
        &mut self,
        parent_stacks_hash: &StacksBlockId,
    ) -> Secp256k1PrivateKey {
        // Generates a new secret key for signing the trail of microblocks
        // of the upcoming tenure.
        self.keychain
            .make_microblock_secret_key(self.burn_block.block_height, &parent_stacks_hash.0)
    }

    /// Get the microblock private key we'll be using for this tenure, should we win.
    /// Return the private key on success
    #[cfg(test)]
    fn make_microblock_private_key(
        &mut self,
        _parent_stacks_hash: &StacksBlockId,
    ) -> Secp256k1PrivateKey {
        // Generates a new secret key for signing the trail of microblocks
        // of the upcoming tenure.
        warn!("test version of make_microblock_secret_key");
        self.keychain.make_microblock_secret_key(
            self.burn_block.block_height,
            &self.burn_block.block_height.to_be_bytes(),
        )
    }

    /// Load the parent microblock stream and vet it for the absence of forks.
    /// If there is a fork, then mine and relay a poison microblock transaction.
    /// Update stacks_parent_header's microblock tail to point to the end of the stream we load.
    /// Return the microblocks we'll confirm, if there are any.
    fn load_and_vet_parent_microblocks(
        &mut self,
        chain_state: &mut StacksChainState,
        sortdb: &SortitionDB,
        mem_pool: &mut MemPoolDB,
        parent_block_info: &mut ParentStacksBlockInfo,
    ) -> Option<Vec<StacksMicroblock>> {
        let parent_consensus_hash = &parent_block_info.parent_consensus_hash;
        let stacks_parent_header = &mut parent_block_info.stacks_parent_header;

        let microblock_info_opt =
            match StacksChainState::load_descendant_staging_microblock_stream_with_poison(
                chain_state.db(),
                &StacksBlockHeader::make_index_block_hash(
                    parent_consensus_hash,
                    &stacks_parent_header.anchored_header.block_hash(),
                ),
                0,
                u16::MAX,
            ) {
                Ok(x) => {
                    let num_mblocks = x.as_ref().map(|(mblocks, ..)| mblocks.len()).unwrap_or(0);
                    debug!(
                        "Loaded {num_mblocks} microblocks descending from {parent_consensus_hash}/{} (data: {})",
                        &stacks_parent_header.anchored_header.block_hash(),
                        x.is_some()
                    );
                    x
                }
                Err(e) => {
                    warn!(
                        "Failed to load descendant microblock stream from {parent_consensus_hash}/{}: {e:?}",
                        &stacks_parent_header.anchored_header.block_hash()
                    );
                    None
                }
            };

        if let Some((ref microblocks, ref poison_opt)) = &microblock_info_opt {
            if let Some(tail) = microblocks.last() {
                debug!(
                    "Confirm microblock stream tailed at {} (seq {})",
                    &tail.block_hash(),
                    tail.header.sequence
                );
            }

            // try and confirm as many microblocks as we can (but note that the stream itself may
            // be too long; we'll try again if that happens).
            stacks_parent_header.microblock_tail = microblocks.last().map(|blk| blk.header.clone());

            if let Some(poison_payload) = poison_opt {
                debug!("Detected poisoned microblock fork: {poison_payload:?}");

                // submit it multiple times with different nonces, so it'll have a good chance of
                // eventually getting picked up (even if the miner sends other transactions from
                // the same address)
                for i in 0..10 {
                    let poison_microblock_tx = self.inner_generate_poison_microblock_tx(
                        parent_block_info.coinbase_nonce + 1 + i,
                        poison_payload.clone(),
                    );

                    // submit the poison payload, privately, so we'll mine it when building the
                    // anchored block.
                    if let Err(e) = mem_pool.miner_submit(
×
1860
                        chain_state,
×
1861
                        sortdb,
×
1862
                        parent_consensus_hash,
×
1863
                        &stacks_parent_header.anchored_header.block_hash(),
×
1864
                        &poison_microblock_tx,
×
1865
                        Some(&self.event_dispatcher),
×
1866
                        1_000_000_000.0, // prioritize this for inclusion
×
1867
                    ) {
×
1868
                        warn!("Detected but failed to mine poison-microblock transaction: {e:?}");
×
1869
                    } else {
1870
                        debug!("Submit poison-microblock transaction {poison_microblock_tx:?}");
×
1871
                    }
1872
                }
1873
            }
17✔
1874
        }
24,717✔
1875

1876
        microblock_info_opt.map(|(stream, _)| stream)
24,734✔
1877
    }
24,734✔
1878

    /// Get the list of possible burn addresses this miner is using
    pub fn get_miner_addrs(config: &Config, keychain: &Keychain) -> Vec<String> {
        let mut op_signer = keychain.generate_op_signer();
        let mut btc_addrs = vec![
            // legacy
            BitcoinAddress::from_bytes_legacy(
                config.burnchain.get_bitcoin_network().1,
                LegacyBitcoinAddressType::PublicKeyHash,
                &Hash160::from_data(&op_signer.get_public_key().to_bytes()).0,
            )
            .expect("FATAL: failed to construct legacy bitcoin address"),
        ];
        if config.miner.segwit {
            btc_addrs.push(
                // segwit p2wpkh
                BitcoinAddress::from_bytes_segwit_p2wpkh(
                    config.burnchain.get_bitcoin_network().1,
                    &Hash160::from_data(&op_signer.get_public_key().to_bytes_compressed()).0,
                )
                .expect("FATAL: failed to construct segwit p2wpkh address"),
            );
        }
        btc_addrs
            .into_iter()
            .map(|addr| format!("{addr}"))
            .collect()
    }

    /// Obtain the target burn fee cap, when considering how well this miner is performing.
1908
    #[allow(clippy::too_many_arguments)]
1909
    pub fn get_mining_spend_amount<F, G>(
21,244✔
1910
        config: &Config,
21,244✔
1911
        keychain: &Keychain,
21,244✔
1912
        burnchain: &Burnchain,
21,244✔
1913
        sortdb: &SortitionDB,
21,244✔
1914
        recipients: &[PoxAddress],
21,244✔
1915
        start_mine_height: u64,
21,244✔
1916
        at_burn_block: Option<u64>,
21,244✔
1917
        mut get_prior_winning_prob: F,
21,244✔
1918
        mut set_prior_winning_prob: G,
21,244✔
1919
    ) -> u64
21,244✔
1920
    where
21,244✔
1921
        F: FnMut(u64) -> f64,
21,244✔
1922
        G: FnMut(u64, f64),
21,244✔
1923
    {
1924
        let config_file_burn_fee_cap = config.get_burnchain_config().burn_fee_cap;
21,244✔
1925
        let miner_config = config.get_miner_config();
21,244✔
1926

1927
        if miner_config.target_win_probability < 0.00001 {
21,244✔
1928
            // this field is effectively zero
1929
            return config_file_burn_fee_cap;
21,244✔
1930
        }
×
1931
        let Some(miner_stats) = config.get_miner_stats() else {
×
1932
            return config_file_burn_fee_cap;
×
1933
        };
1934

1935
        let Ok(tip) = SortitionDB::get_canonical_burn_chain_tip(sortdb.conn()).map_err(|e| {
×
1936
            warn!("Failed to load canonical burn chain tip: {e:?}");
×
1937
            e
×
1938
        }) else {
×
1939
            return config_file_burn_fee_cap;
×
1940
        };
1941
        let tip = if let Some(at_burn_block) = at_burn_block.as_ref() {
×
1942
            let ih = sortdb.index_handle(&tip.sortition_id);
×
1943
            let Ok(Some(ancestor_tip)) = ih.get_block_snapshot_by_height(*at_burn_block) else {
×
1944
                warn!("Failed to load ancestor tip at burn height {at_burn_block}");
×
1945
                return config_file_burn_fee_cap;
×
1946
            };
1947
            ancestor_tip
×
1948
        } else {
1949
            tip
×
1950
        };
1951

1952
        let Ok(active_miners_and_commits) = MinerStats::get_active_miners(sortdb, at_burn_block)
×
1953
            .map_err(|e| {
×
1954
                warn!("Failed to get active miners: {e:?}");
×
1955
                e
×
1956
            })
×
1957
        else {
1958
            return config_file_burn_fee_cap;
×
1959
        };
1960
        if active_miners_and_commits.is_empty() {
×
1961
            warn!("No active miners detected; using config file burn_fee_cap");
×
1962
            return config_file_burn_fee_cap;
×
1963
        }
×
1964

1965
        let active_miners: Vec<_> = active_miners_and_commits
×
1966
            .iter()
×
1967
            .map(|(miner, _cmt)| miner.as_str())
×
1968
            .collect();
×
1969

1970
        info!("Active miners: {active_miners:?}");
×
1971

1972
        let Ok(unconfirmed_block_commits) = miner_stats
×
1973
            .get_unconfirmed_commits(tip.block_height + 1, &active_miners)
×
1974
            .map_err(|e| {
×
1975
                warn!("Failed to find unconfirmed block-commits: {e}");
×
1976
                e
×
1977
            })
×
1978
        else {
1979
            return config_file_burn_fee_cap;
×
1980
        };
1981

1982
        let unconfirmed_miners_and_amounts: Vec<(String, u64)> = unconfirmed_block_commits
×
1983
            .iter()
×
1984
            .map(|cmt| (cmt.apparent_sender.to_string(), cmt.burn_fee))
×
1985
            .collect();
×
1986

1987
        info!("Found unconfirmed block-commits: {unconfirmed_miners_and_amounts:?}");
×
1988

1989
        let (spend_dist, _total_spend) = MinerStats::get_spend_distribution(
×
1990
            &active_miners_and_commits,
×
1991
            &unconfirmed_block_commits,
×
1992
            recipients,
×
1993
        );
×
1994
        let win_probs = if miner_config.fast_rampup {
×
1995
            // look at spends 6+ blocks in the future
1996
            MinerStats::get_future_win_distribution(
×
1997
                &active_miners_and_commits,
×
1998
                &unconfirmed_block_commits,
×
1999
                recipients,
×
2000
            )
2001
        } else {
2002
            // look at the current spends
2003
            let Ok(unconfirmed_burn_dist) = miner_stats
×
2004
                .get_unconfirmed_burn_distribution(
×
2005
                    burnchain,
×
2006
                    sortdb,
×
2007
                    &active_miners_and_commits,
×
2008
                    unconfirmed_block_commits,
×
2009
                    recipients,
×
2010
                    at_burn_block,
×
2011
                )
2012
                .map_err(|e| {
×
2013
                    warn!("Failed to get unconfirmed burn distribution: {e:?}");
×
2014
                    e
×
2015
                })
×
2016
            else {
2017
                return config_file_burn_fee_cap;
×
2018
            };
2019

2020
            MinerStats::burn_dist_to_prob_dist(&unconfirmed_burn_dist)
×
2021
        };
2022

2023
        info!("Unconfirmed spend distribution: {spend_dist:?}");
×
2024
        info!(
×
2025
            "Unconfirmed win probabilities (fast_rampup={}): {win_probs:?}",
2026
            miner_config.fast_rampup
2027
        );
2028

2029
        let miner_addrs = Self::get_miner_addrs(config, keychain);
×
2030
        let win_prob = miner_addrs
×
2031
            .iter()
×
2032
            .find_map(|x| win_probs.get(x))
×
2033
            .copied()
×
2034
            .unwrap_or(0.0);
×
2035

2036
        info!(
×
2037
            "This miner's win probability at {} is {win_prob}",
2038
            tip.block_height
2039
        );
2040
        set_prior_winning_prob(tip.block_height, win_prob);
×
2041

2042
        if win_prob < config.miner.target_win_probability {
×
2043
            // no mining strategy is viable, so just quit.
2044
            // Unless we're spinning up, that is.
2045
            if start_mine_height + 6 < tip.block_height
×
2046
                && config.miner.underperform_stop_threshold.is_some()
×
2047
            {
2048
                let underperform_stop_threshold =
×
2049
                    config.miner.underperform_stop_threshold.unwrap_or(0);
×
2050
                info!(
×
2051
                    "Miner is spun up, but is not meeting target win probability as of {}",
2052
                    tip.block_height
2053
                );
2054
                // we've spun up and we're underperforming. How long do we tolerate this?
2055
                let mut underperformed_count = 0;
×
2056
                for depth in 0..underperform_stop_threshold {
×
2057
                    let prior_burn_height = tip.block_height.saturating_sub(depth);
×
2058
                    let prior_win_prob = get_prior_winning_prob(prior_burn_height);
×
2059
                    if prior_win_prob < config.miner.target_win_probability {
×
2060
                        info!(
×
2061
                            "Miner underperformed in block {prior_burn_height} ({underperformed_count}/{underperform_stop_threshold})"
2062
                        );
2063
                        underperformed_count += 1;
×
2064
                    }
×
2065
                }
2066
                if underperformed_count == underperform_stop_threshold {
×
2067
                    warn!(
×
2068
                        "Miner underperformed since burn height {}; spinning down",
2069
                        start_mine_height + 6 + underperform_stop_threshold
×
2070
                    );
2071
                    return 0;
×
2072
                }
×
2073
            }
×
2074
        }
×
2075

2076
        config_file_burn_fee_cap
×
2077
    }
21,244✔
2078

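The spin-down decision above can be factored as a pure predicate over recent win probabilities. This is an illustrative, self-contained sketch, not part of `neon_node.rs`: `should_spin_down` and its parameters are stand-ins for `tip.block_height`, `start_mine_height`, `miner.target_win_probability`, and `miner.underperform_stop_threshold`.

```rust
// Illustrative sketch (hypothetical helper, not in this file): the miner spins
// down only if it is past spin-up AND every one of the last `stop_threshold`
// burn blocks fell short of the target win probability.
fn should_spin_down(
    tip_height: u64,
    start_mine_height: u64,
    target: f64,
    stop_threshold: u64,
    prior_win_prob: impl Fn(u64) -> f64,
) -> bool {
    // Still spinning up: tolerate underperformance during ramp-up.
    if start_mine_height + 6 >= tip_height {
        return false;
    }
    // Count how many of the last `stop_threshold` burn blocks missed the target.
    let missed = (0..stop_threshold)
        .filter(|depth| prior_win_prob(tip_height.saturating_sub(*depth)) < target)
        .count() as u64;
    // Spin down only if every sampled block underperformed.
    missed == stop_threshold
}

fn main() {
    // Every recent block underperformed (0.0 < 0.25) and we're past spin-up.
    assert!(should_spin_down(100, 10, 0.25, 3, |_| 0.0));
    // One good block breaks the streak, so we keep mining.
    assert!(!should_spin_down(100, 10, 0.25, 3, |h| if h == 99 { 0.9 } else { 0.0 }));
    // Still spinning up: no spin-down even if every block underperformed.
    assert!(!should_spin_down(15, 10, 0.25, 3, |_| 0.0));
}
```

A single good block resets the streak, mirroring the `underperformed_count == underperform_stop_threshold` equality check above.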
    /// Produce the block-commit for this anchored block, if we can.
    /// Returns the op on success
    /// Returns None if we fail somehow.
    #[allow(clippy::too_many_arguments)]
    pub fn make_block_commit(
        &self,
        burn_db: &mut SortitionDB,
        chain_state: &mut StacksChainState,
        block_hash: BlockHeaderHash,
        parent_block_burn_height: u64,
        parent_winning_vtxindex: u16,
        vrf_proof: &VRFProof,
        target_epoch_id: StacksEpochId,
    ) -> Option<BlockstackOperationType> {
        // let's figure out the recipient set!
        let recipients = match get_next_recipients(
            &self.burn_block,
            chain_state,
            burn_db,
            &self.burnchain,
            &OnChainRewardSetProvider::new(),
        ) {
            Ok(x) => x,
            Err(e) => {
                error!("Relayer: Failure fetching recipient set: {e:?}");
                return None;
            }
        };

        let commit_outs = if !self
            .burnchain
            .pox_constants
            .is_after_pox_sunset_end(self.burn_block.block_height, target_epoch_id)
            && !self
                .burnchain
                .is_in_prepare_phase(self.burn_block.block_height + 1)
        {
            RewardSetInfo::into_commit_outs(recipients, self.config.is_mainnet())
        } else {
            vec![PoxAddress::standard_burn_address(self.config.is_mainnet())]
        };

        let burn_fee_cap = Self::get_mining_spend_amount(
            &self.config,
            &self.keychain,
            &self.burnchain,
            burn_db,
            &commit_outs,
            self.globals.get_start_mining_height(),
            None,
            |block_height| {
                self.globals
                    .get_estimated_win_prob(block_height)
                    .unwrap_or(0.0)
            },
            |block_height, win_prob| self.globals.add_estimated_win_prob(block_height, win_prob),
        );
        if burn_fee_cap == 0 {
            warn!("Calculated burn_fee_cap is 0; will not mine");
            return None;
        }
        let sunset_burn = self.burnchain.expected_sunset_burn(
            self.burn_block.block_height + 1,
            burn_fee_cap,
            target_epoch_id,
        );
        let rest_commit = burn_fee_cap - sunset_burn;

        // let's commit, but target the current burnchain tip with our modulus
        let op = self.inner_generate_block_commit_op(
            block_hash,
            rest_commit,
            &self.registered_key,
            parent_block_burn_height
                .try_into()
                .expect("Could not convert parent block height into u32"),
            parent_winning_vtxindex,
            VRFSeed::from_proof(vrf_proof),
            commit_outs,
            sunset_burn,
            self.burn_block.block_height,
        );
        Some(op)
    }

    /// Are there enough unprocessed blocks that we shouldn't mine?
    fn unprocessed_blocks_prevent_mining(
        burnchain: &Burnchain,
        sortdb: &SortitionDB,
        chainstate: &StacksChainState,
        unprocessed_block_deadline: u64,
    ) -> bool {
        let sort_tip = SortitionDB::get_canonical_burn_chain_tip(sortdb.conn())
            .expect("FATAL: could not query canonical sortition DB tip");

        if let Some(stacks_tip) =
            NakamotoChainState::get_canonical_block_header(chainstate.db(), sortdb)
                .expect("FATAL: could not query canonical Stacks chain tip")
        {
            // if a block hasn't been processed within some deadline seconds of receipt, don't block
            //  mining
            let process_deadline = get_epoch_time_secs() - unprocessed_block_deadline;
            let has_unprocessed = StacksChainState::has_higher_unprocessed_blocks(
                chainstate.db(),
                stacks_tip.anchored_header.height(),
                process_deadline,
            )
            .expect("FATAL: failed to query staging blocks");
            if has_unprocessed {
                let highest_unprocessed_opt = StacksChainState::get_highest_unprocessed_block(
                    chainstate.db(),
                    process_deadline,
                )
                .expect("FATAL: failed to query staging blocks");

                if let Some(highest_unprocessed) = highest_unprocessed_opt {
                    let highest_unprocessed_block_sn_opt =
                        SortitionDB::get_block_snapshot_consensus(
                            sortdb.conn(),
                            &highest_unprocessed.consensus_hash,
                        )
                        .expect("FATAL: could not query sortition DB");

                    // NOTE: this could be None if it's not part of the canonical PoX fork any
                    // longer
                    if let Some(highest_unprocessed_block_sn) = highest_unprocessed_block_sn_opt {
                        if stacks_tip.anchored_header.height()
                            + u64::from(burnchain.pox_constants.prepare_length)
                            > highest_unprocessed.height
                            && highest_unprocessed_block_sn.block_height
                                + u64::from(burnchain.pox_constants.prepare_length)
                                > sort_tip.block_height
                        {
                            // we're close enough to the chain tip that it's a bad idea for us to mine
                            // -- we'll likely create an orphan
                            return true;
                        }
                    }
                }
            }
        }
        // we can mine
        false
    }

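The orphan-risk check above reduces to two window comparisons against the PoX prepare length. A minimal sketch, assuming the heights have already been loaded from chainstate and the sortition DB; `within_orphan_risk_window` is a hypothetical helper, not a function in this file.

```rust
// Illustrative sketch (hypothetical helper, not in this file): mining is risky
// when the highest unprocessed block is within one prepare phase of our Stacks
// tip AND its sortition is within one prepare phase of the burnchain tip.
fn within_orphan_risk_window(
    stacks_tip_height: u64,
    unprocessed_block_height: u64,
    unprocessed_sn_height: u64,
    sort_tip_height: u64,
    prepare_length: u64,
) -> bool {
    // The unprocessed block is close to our Stacks tip...
    stacks_tip_height + prepare_length > unprocessed_block_height
        // ...and its sortition is close to the burnchain tip.
        && unprocessed_sn_height + prepare_length > sort_tip_height
}

fn main() {
    // Close on both axes: don't mine (we'd likely create an orphan).
    assert!(within_orphan_risk_window(100, 102, 500, 501, 10));
    // The unprocessed block is far ahead of our Stacks tip: mining is fine.
    assert!(!within_orphan_risk_window(100, 200, 500, 501, 10));
}
```

Both conditions must hold, matching the `&&` in the source: an unprocessed block far from either tip does not block mining.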
    /// Only used in mock signing to generate a peer info view
    fn generate_peer_info(&self) -> PeerInfo {
        // Create a peer info view of the current state
        let server_version = version_string("stacks-node", option_env!("STACKS_NODE_VERSION"));
        let stacks_tip_height = self.burn_block.canonical_stacks_tip_height;
        let stacks_tip = self.burn_block.canonical_stacks_tip_hash.clone();
        let stacks_tip_consensus_hash = self.burn_block.canonical_stacks_tip_consensus_hash.clone();
        let pox_consensus = self.burn_block.consensus_hash.clone();
        let burn_block_height = self.burn_block.block_height;

        PeerInfo {
            burn_block_height,
            stacks_tip_consensus_hash,
            stacks_tip,
            stacks_tip_height,
            pox_consensus,
            server_version,
            network_id: self.config.get_burnchain_config().chain_id,
        }
    }

    /// Only used in mock signing to retrieve the mock signatures for the given mock proposal
    fn wait_for_mock_signatures(
        &self,
        mock_proposal: &MockProposal,
        stackerdbs: &StackerDBs,
        timeout: Duration,
    ) -> Result<Vec<MockSignature>, ChainstateError> {
        let reward_cycle = self
            .burnchain
            .block_height_to_reward_cycle(self.burn_block.block_height)
            .expect("BUG: block commit exists before first block height");
        let signers_contract_id = MessageSlotID::BlockResponse
            .stacker_db_contract(self.config.is_mainnet(), reward_cycle);
        let slot_ids: Vec<_> = stackerdbs
            .get_signers(&signers_contract_id)
            .expect("FATAL: could not get signers from stacker DB")
            .into_iter()
            .enumerate()
            .map(|(slot_id, _)| {
                u32::try_from(slot_id).expect("FATAL: too many signers to fit into u32 range")
            })
            .collect();
        let mock_poll_start = Instant::now();
        let mut mock_signatures = vec![];
        // Because we don't really care whether all signers reach quorum (this is just for testing
        // purposes), we don't need to wait for ALL signers to sign the mock proposal, and we should
        // not slow down mining too much. Just wait a minimal amount of time for the mock signatures
        // to come in.
        while mock_signatures.len() < slot_ids.len() && mock_poll_start.elapsed() < timeout {
            let chunks = stackerdbs.get_latest_chunks(&signers_contract_id, &slot_ids)?;
            for chunk in chunks.into_iter().flatten() {
                if let Ok(SignerMessage::MockSignature(mock_signature)) =
                    SignerMessage::consensus_deserialize(&mut chunk.as_slice())
                {
                    if mock_signature.mock_proposal == *mock_proposal
                        && !mock_signatures.contains(&mock_signature)
                    {
                        mock_signatures.push(mock_signature);
                    }
                }
            }
        }
        Ok(mock_signatures)
    }

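The polling loop in `wait_for_mock_signatures` follows a common collect-until-quorum-or-timeout shape: fetch the latest chunks, deduplicate matches, and stop once every slot has answered or the deadline passes. A generic sketch of that shape (illustrative only; `collect_until` and `fetch` are stand-ins for the stackerdb chunk query):

```rust
use std::time::{Duration, Instant};

// Illustrative sketch (hypothetical helper, not in this file): poll `fetch`
// until `want` unique items arrive or `timeout` elapses, whichever is first.
fn collect_until<T: PartialEq>(
    want: usize,
    timeout: Duration,
    mut fetch: impl FnMut() -> Vec<T>,
) -> Vec<T> {
    let start = Instant::now();
    let mut got: Vec<T> = vec![];
    while got.len() < want && start.elapsed() < timeout {
        for item in fetch() {
            // Deduplicate, mirroring the `!mock_signatures.contains(...)` check.
            if !got.contains(&item) {
                got.push(item);
            }
        }
    }
    got
}

fn main() {
    // A fetcher that keeps returning the same values: we stop as soon as
    // `want` unique items have been collected.
    let out = collect_until(2, Duration::from_secs(5), || vec![1, 2, 1]);
    assert_eq!(out, vec![1, 2]);
}
```

As in the source, hitting the timeout is not an error here: the caller simply proceeds with whatever subset of signatures arrived in time.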
    /// Only used in mock signing to determine if the peer info view was already signed across
    fn mock_block_exists(&self, peer_info: &PeerInfo) -> bool {
        let miner_contract_id = boot_code_id(MINERS_NAME, self.config.is_mainnet());
        let mut miners_stackerdb = StackerDBSession::new(
            &self.config.node.rpc_bind,
            miner_contract_id,
            self.config.miner.stackerdb_timeout,
        );
        let miner_slot_ids: Vec<_> = (0..MINER_SLOT_COUNT * 2).collect();
        if let Ok(messages) = miners_stackerdb.get_latest_chunks(&miner_slot_ids) {
            for message in messages.into_iter().flatten() {
                if message.is_empty() {
                    continue;
                }
                let Ok(SignerMessage::MockBlock(mock_block)) =
                    SignerMessage::consensus_deserialize(&mut message.as_slice())
                else {
                    continue;
                };
                if mock_block.mock_proposal.peer_info == *peer_info {
                    return true;
                }
            }
        }
        false
    }

    /// Read any mock signatures from stackerdb and respond to them
    pub fn send_mock_miner_messages(&mut self) -> Result<(), String> {
        let burn_db_path = self.config.get_burn_db_file_path();
        let burn_db = SortitionDB::open(
            &burn_db_path,
            false,
            self.burnchain.pox_constants.clone(),
            Some(self.config.node.get_marf_opts()),
        )
        .expect("FATAL: could not open sortition DB");
        let epoch_id = SortitionDB::get_stacks_epoch(burn_db.conn(), self.burn_block.block_height)
            .map_err(|e| e.to_string())?
            .expect("FATAL: no epoch defined")
            .epoch_id;
        if epoch_id != StacksEpochId::Epoch25 {
            debug!("Mock miner messaging is disabled for non-epoch 2.5 blocks.";
                "epoch_id" => epoch_id.to_string()
            );
            return Ok(());
        }

        let miner_config = self.config.get_miner_config();
        if !miner_config.pre_nakamoto_mock_signing {
            debug!("Pre-Nakamoto mock signing is disabled");
            return Ok(());
        }

        let mining_key = miner_config
            .mining_key
            .expect("Cannot mock sign without mining key");

        // Create a peer info view of the current state
        let peer_info = self.generate_peer_info();
        if self.mock_block_exists(&peer_info) {
            debug!(
                "Already sent mock miner block proposal for current peer info view. Not sending another mock proposal."
            );
            return Ok(());
        }

        // find out which slot we're in. If we are not the latest sortition winner, we should not be sending any more messages anyway
        let ih = burn_db.index_handle(&self.burn_block.sortition_id);
        let last_winner_snapshot = ih
            .get_last_snapshot_with_sortition(self.burn_block.block_height)
            .map_err(|e| e.to_string())?;

        if last_winner_snapshot.miner_pk_hash
            != Some(Hash160::from_node_public_key(
                &StacksPublicKey::from_private(&mining_key),
            ))
        {
            return Ok(());
        }
        let election_sortition = last_winner_snapshot.consensus_hash;
        let mock_proposal = MockProposal::new(peer_info, &mining_key);

        info!("Sending mock proposal to stackerdb: {mock_proposal:?}");

        let stackerdbs = StackerDBs::connect(&self.config.get_stacker_db_file_path(), false)
            .map_err(|e| e.to_string())?;
        let miner_contract_id = boot_code_id(MINERS_NAME, self.config.is_mainnet());
        let mut miners_stackerdb = StackerDBSession::new(
            &self.config.node.rpc_bind,
            miner_contract_id,
            self.config.miner.stackerdb_timeout,
        );
        let miner_db = MinerDB::open_with_config(&self.config).map_err(|e| e.to_string())?;

        SignerCoordinator::send_miners_message(
            &mining_key,
            &burn_db,
            &self.burn_block,
            &stackerdbs,
            SignerMessage::MockProposal(mock_proposal.clone()),
            MinerSlotID::BlockProposal, // There is no specific slot for mock miner messages, so we use BlockProposal for MockProposal as well.
            self.config.is_mainnet(),
            &mut miners_stackerdb,
            &election_sortition,
            &miner_db,
        )
        .map_err(|e| {
            warn!("Failed to write mock proposal to stackerdb.");
            e.to_string()
        })?;

        // Retrieve any MockSignatures from stackerdb
        info!("Waiting for mock signatures...");
        let mock_signatures = self
            .wait_for_mock_signatures(&mock_proposal, &stackerdbs, Duration::from_secs(10))
            .map_err(|e| e.to_string())?;

        let mock_block = MockBlock {
            mock_proposal,
            mock_signatures,
        };

        info!("Sending mock block to stackerdb: {mock_block:?}");
        SignerCoordinator::send_miners_message(
            &mining_key,
            &burn_db,
            &self.burn_block,
            &stackerdbs,
            SignerMessage::MockBlock(mock_block),
            MinerSlotID::BlockPushed, // There is no specific slot for mock miner messages. Let's use BlockPushed for MockBlock since MockProposal uses BlockProposal.
            self.config.is_mainnet(),
            &mut miners_stackerdb,
            &election_sortition,
            &miner_db,
        )
        .map_err(|e| {
            warn!("Failed to write mock block to stackerdb.");
            e.to_string()
        })?;
        Ok(())
    }

    // TODO: add tests from mutation testing results #4871
    #[cfg_attr(test, mutants::skip)]
    /// Try to mine a Stacks block by assembling one from mempool transactions and sending a
    /// burnchain block-commit transaction.  If we succeed, then return the assembled block data as
    /// well as the microblock private key to use to produce microblocks.
    /// Return None if we couldn't build a block for whatever reason.
    pub fn run_tenure(&mut self) -> Option<MinerThreadResult> {
        debug!("block miner thread ID is {:?}", thread::current().id());
        fault_injection_long_tenure();

        let burn_db_path = self.config.get_burn_db_file_path();
        let stacks_chainstate_path = self.config.get_chainstate_path_str();

        let cost_estimator = self
            .config
            .make_cost_estimator()
            .unwrap_or_else(|| Box::new(UnitEstimator));
        let metric = self
            .config
            .make_cost_metric()
            .unwrap_or_else(|| Box::new(UnitMetric));

        let mut bitcoin_controller = BitcoinRegtestController::new_ongoing_dummy(
            self.config.clone(),
            self.ongoing_commit.clone(),
        );

        let miner_config = self.config.get_miner_config();
        let last_miner_config_opt = self.globals.get_last_miner_config();
        let force_remine = if let Some(last_miner_config) = last_miner_config_opt {
            last_miner_config != miner_config
        } else {
            false
        };
        if force_remine {
            info!("Miner config changed; forcing a re-mine attempt");
        }

        self.globals.set_last_miner_config(miner_config);

        // NOTE: read-write access is needed in order to be able to query the recipient set.
        // This is an artifact of the way the MARF is built (see #1449)
        let mut burn_db = SortitionDB::open(
            &burn_db_path,
            true,
            self.burnchain.pox_constants.clone(),
            Some(self.config.node.get_marf_opts()),
        )
        .expect("FATAL: could not open sortition DB");

        let mut chain_state =
            open_chainstate_with_faults(&self.config).expect("FATAL: could not open chainstate DB");

        let mut mem_pool = MemPoolDB::open(
            self.config.is_mainnet(),
            self.config.burnchain.chain_id,
            &stacks_chainstate_path,
            cost_estimator,
            metric,
        )
        .expect("Database failure opening mempool");

        let tenure_begin = get_epoch_time_ms();

        let target_epoch_id =
            SortitionDB::get_stacks_epoch(burn_db.conn(), self.burn_block.block_height + 1)
                .ok()?
                .expect("FATAL: no epoch defined")
                .epoch_id;

        let (Some(mut parent_block_info), _) =
            self.load_block_parent_info(&mut burn_db, &mut chain_state)
        else {
            return None;
        };
        let (attempt, max_txs) =
            self.get_mine_attempt(&chain_state, &parent_block_info, force_remine)?;
        let vrf_proof = self.make_vrf_proof()?;

        // Generates a new secret key for signing the trail of microblocks
        // of the upcoming tenure.
        let microblock_private_key = self.make_microblock_private_key(
24,737✔
2514
            &parent_block_info.stacks_parent_header.index_block_hash(),
24,737✔
2515
        );
2516
        let mblock_pubkey_hash = {
24,737✔
2517
            let mut pubkh = Hash160::from_node_public_key(&StacksPublicKey::from_private(
24,737✔
2518
                &microblock_private_key,
24,737✔
2519
            ));
24,737✔
2520
            if cfg!(test) {
24,737✔
2521
                if let Ok(mblock_pubkey_hash_str) = std::env::var("STACKS_MICROBLOCK_PUBKEY_HASH") {
24,737✔
2522
                    if let Ok(bad_pubkh) = Hash160::from_hex(&mblock_pubkey_hash_str) {
3✔
2523
                        debug!("Fault injection: set microblock public key hash to {bad_pubkh}");
3✔
2524
                        pubkh = bad_pubkh
3✔
2525
                    }
×
2526
                }
24,734✔
2527
            }
×
2528
            pubkh
24,737✔
2529
        };

        // create our coinbase
        let coinbase_tx =
            self.inner_generate_coinbase_tx(parent_block_info.coinbase_nonce, target_epoch_id);

        // find the longest microblock tail we can build off of and vet microblocks for forks
        self.load_and_vet_parent_microblocks(
            &mut chain_state,
            &burn_db,
            &mut mem_pool,
            &mut parent_block_info,
        );

        let burn_tip = SortitionDB::get_canonical_burn_chain_tip(burn_db.conn())
            .expect("FATAL: failed to read current burnchain tip");
        let microblocks_disabled =
            SortitionDB::are_microblocks_disabled(burn_db.conn(), burn_tip.block_height)
                .expect("FATAL: failed to query epoch's microblock status");

        // build the block itself
        let mut builder_settings = self.config.make_block_builder_settings(
            attempt,
            false,
            self.globals.get_miner_status(),
        );
        if microblocks_disabled {
            builder_settings.confirm_microblocks = false;
            if cfg!(test)
                && std::env::var("STACKS_TEST_CONFIRM_MICROBLOCKS_POST_25").as_deref() == Ok("1")
            {
                builder_settings.confirm_microblocks = true;
            }
        }
        let (anchored_block, _, _) = match StacksBlockBuilder::build_anchored_block(
            &chain_state,
            &burn_db.index_handle(&burn_tip.sortition_id),
            &mut mem_pool,
            &parent_block_info.stacks_parent_header,
            parent_block_info.parent_block_total_burn,
            &vrf_proof,
            &mblock_pubkey_hash,
            &coinbase_tx,
            builder_settings,
            Some(&self.event_dispatcher),
            &self.burnchain,
        ) {
            Ok(block) => block,
            Err(ChainstateError::InvalidStacksMicroblock(msg, mblock_header_hash)) => {
                // part of the parent microblock stream is invalid, so try again
                info!(
                    "Parent microblock stream is invalid; trying again without microblocks";
                    "microblock_offender" => %mblock_header_hash,
                    "error" => &msg
                );

                let mut builder_settings = self.config.make_block_builder_settings(
                    attempt,
                    false,
                    self.globals.get_miner_status(),
                );
                builder_settings.confirm_microblocks = false;

                // try again
                match StacksBlockBuilder::build_anchored_block(
                    &chain_state,
                    &burn_db.index_handle(&burn_tip.sortition_id),
                    &mut mem_pool,
                    &parent_block_info.stacks_parent_header,
                    parent_block_info.parent_block_total_burn,
                    &vrf_proof,
                    &mblock_pubkey_hash,
                    &coinbase_tx,
                    builder_settings,
                    Some(&self.event_dispatcher),
                    &self.burnchain,
                ) {
                    Ok(block) => block,
                    Err(e) => {
                        error!("Relayer: Failure mining anchor block even after removing offending microblock {mblock_header_hash}: {e}");
                        return None;
                    }
                }
            }
            Err(e) => {
                error!("Relayer: Failure mining anchored block: {e}");
                return None;
            }
        };

        let miner_config = self.config.get_miner_config();

        if attempt > 1
            && miner_config.min_tx_count > 0
            && u64::try_from(anchored_block.txs.len()).expect("too many txs")
                < miner_config.min_tx_count
        {
            info!("Relayer: Succeeded assembling subsequent block with {} txs, but expected at least {}", anchored_block.txs.len(), miner_config.min_tx_count);
            return None;
        }

        if miner_config.only_increase_tx_count
            && max_txs > u64::try_from(anchored_block.txs.len()).expect("too many txs")
        {
            info!("Relayer: Succeeded assembling subsequent block with {} txs, but had previously produced a block with {max_txs} txs", anchored_block.txs.len());
            return None;
        }

        info!(
            "Relayer: Succeeded assembling {} block #{}: {}, with {} txs, attempt {attempt}",
            if parent_block_info.parent_block_total_burn == 0 {
                "Genesis"
            } else {
                "Stacks"
            },
            anchored_block.header.total_work.work,
            anchored_block.block_hash(),
            anchored_block.txs.len()
        );

        // let's commit
        #[cfg(test)]
        if self.globals.counters.skip_commit_op.get() {
            debug!("Relayer: fault injection: skip block commit");
            return None;
        }
        let op = self.make_block_commit(
            &mut burn_db,
            &mut chain_state,
            anchored_block.block_hash(),
            parent_block_info.parent_block_burn_height,
            parent_block_info.parent_winning_vtxindex,
            &vrf_proof,
            target_epoch_id,
        )?;
        let burn_fee = if let BlockstackOperationType::LeaderBlockCommit(ref op) = &op {
            op.burn_fee
        } else {
            0
        };

        // last chance -- confirm that the stacks tip is unchanged (since it could have taken long
        // enough to build this block that another block could have arrived), and confirm that all
        // Stacks blocks with heights higher than the canonical tip are processed.
        let cur_burn_chain_tip = SortitionDB::get_canonical_burn_chain_tip(burn_db.conn())
            .expect("FATAL: failed to query sortition DB for canonical burn chain tip");

        if let Some(stacks_tip) = Self::pick_best_tip(
            &self.globals,
            &self.config,
            &mut burn_db,
            &mut chain_state,
            None,
        ) {
            let is_miner_blocked = self
                .globals
                .get_miner_status()
                .lock()
                .expect("FATAL: mutex poisoned")
                .is_blocked();

            let has_unprocessed = Self::unprocessed_blocks_prevent_mining(
                &self.burnchain,
                &burn_db,
                &chain_state,
                miner_config.unprocessed_block_deadline_secs,
            );

            if stacks_tip.anchored_block_hash != anchored_block.header.parent_block
                || parent_block_info.parent_consensus_hash != stacks_tip.consensus_hash
                || cur_burn_chain_tip.burn_header_hash != self.burn_block.burn_header_hash
                || is_miner_blocked
                || has_unprocessed
            {
                info!(
                    "Relayer: Cancel block-commit; chain tip(s) have changed or cancelled";
                    "block_hash" => %anchored_block.block_hash(),
                    "tx_count" => anchored_block.txs.len(),
                    "target_height" => %anchored_block.header.total_work.work,
                    "parent_consensus_hash" => %parent_block_info.parent_consensus_hash,
                    "parent_block_hash" => %anchored_block.header.parent_block,
                    "parent_microblock_hash" => %anchored_block.header.parent_microblock,
                    "parent_microblock_seq" => anchored_block.header.parent_microblock_sequence,
                    "old_tip_burn_block_hash" => %self.burn_block.burn_header_hash,
                    "old_tip_burn_block_height" => self.burn_block.block_height,
                    "old_tip_burn_block_sortition_id" => %self.burn_block.sortition_id,
                    "attempt" => attempt,
                    "new_stacks_tip_block_hash" => %stacks_tip.anchored_block_hash,
                    "new_stacks_tip_consensus_hash" => %stacks_tip.consensus_hash,
                    "new_tip_burn_block_height" => cur_burn_chain_tip.block_height,
                    "new_tip_burn_block_sortition_id" => %cur_burn_chain_tip.sortition_id,
                    "miner_blocked" => %is_miner_blocked,
                    "has_unprocessed" => %has_unprocessed
                );
                self.globals.counters.bump_missed_tenures();
                return None;
            }
        }

        let mut op_signer = self.keychain.generate_op_signer();
        info!(
            "Relayer: Submit block-commit";
            "burn_fee" => burn_fee,
            "block_hash" => %anchored_block.block_hash(),
            "tx_count" => anchored_block.txs.len(),
            "target_height" => anchored_block.header.total_work.work,
            "parent_consensus_hash" => %parent_block_info.parent_consensus_hash,
            "parent_block_hash" => %anchored_block.header.parent_block,
            "parent_microblock_hash" => %anchored_block.header.parent_microblock,
            "parent_microblock_seq" => anchored_block.header.parent_microblock_sequence,
            "tip_burn_block_hash" => %self.burn_block.burn_header_hash,
            "tip_burn_block_height" => self.burn_block.block_height,
            "tip_burn_block_sortition_id" => %self.burn_block.sortition_id,
            "cur_burn_block_hash" => %cur_burn_chain_tip.burn_header_hash,
            "cur_burn_block_height" => %cur_burn_chain_tip.block_height,
            "cur_burn_block_sortition_id" => %cur_burn_chain_tip.sortition_id,
            "attempt" => attempt
        );

        let NodeConfig {
            mock_mining,
            mock_mining_output_dir,
            ..
        } = self.config.get_node_config(false);

        let res = bitcoin_controller.submit_operation(target_epoch_id, op, &mut op_signer);
        match res {
            Ok(_) => {
                self.failed_to_submit_last_attempt = false;
                self.globals
                    .counters
                    .bump_neon_submitted_commits(self.burn_block.block_height);
            }
            Err(_) if mock_mining => {
                debug!("Relayer: Mock-mining enabled; not sending Bitcoin transaction");
                self.failed_to_submit_last_attempt = true;
            }
            Err(BurnchainControllerError::IdenticalOperation) => {
                info!("Relayer: Block-commit already submitted");
                self.failed_to_submit_last_attempt = true;
                return None;
            }
            Err(e) => {
                warn!("Relayer: Failed to submit Bitcoin transaction: {e:?}");
                self.failed_to_submit_last_attempt = true;
                return None;
            }
        };

        let assembled_block = AssembledAnchorBlock {
            parent_consensus_hash: parent_block_info.parent_consensus_hash.clone(),
            consensus_hash: cur_burn_chain_tip.consensus_hash.clone(),
            burn_hash: cur_burn_chain_tip.burn_header_hash.clone(),
            burn_block_height: cur_burn_chain_tip.block_height,
            orig_burn_hash: self.burn_block.burn_header_hash.clone(),
            anchored_block,
            attempt,
            tenure_begin,
        };

        if mock_mining {
            let stacks_block_height = assembled_block.anchored_block.header.total_work.work;
            info!("Mock mined Stacks block {stacks_block_height}");
            if let Some(dir) = mock_mining_output_dir {
                info!("Writing mock mined Stacks block {stacks_block_height} to file");
                fs::create_dir_all(&dir).unwrap_or_else(|e| match e.kind() {
                    ErrorKind::AlreadyExists => { /* This is fine */ }
                    _ => error!("Failed to create directory '{dir:?}': {e}"),
                });
                let filename = format!("{stacks_block_height}.json");
                let filepath = dir.join(filename);
                assembled_block
                    .serialize_to_file(&filepath)
                    .unwrap_or_else(|e| match e.kind() {
                        ErrorKind::AlreadyExists => {
                            error!("Failed to overwrite file '{filepath:?}'")
                        }
                        _ => error!("Failed to write to file '{filepath:?}': {e}"),
                    });
            }
        }

        Some(MinerThreadResult::Block(
            assembled_block,
            microblock_private_key,
            bitcoin_controller.get_ongoing_commit(),
        ))
    }
}

impl RelayerThread {
    /// Instantiate off of a StacksNode, a runloop, and a relayer.
    pub fn new(runloop: &RunLoop, local_peer: LocalPeer, relayer: Relayer) -> RelayerThread {
        let config = runloop.config().clone();
        let globals = runloop.get_globals();
        let burn_db_path = config.get_burn_db_file_path();
        let stacks_chainstate_path = config.get_chainstate_path_str();
        let is_mainnet = config.is_mainnet();
        let chain_id = config.burnchain.chain_id;

        let sortdb = SortitionDB::open(
            &burn_db_path,
            true,
            runloop.get_burnchain().pox_constants,
            Some(config.node.get_marf_opts()),
        )
        .expect("FATAL: failed to open burnchain DB");

        let chainstate =
            open_chainstate_with_faults(&config).expect("FATAL: failed to open chainstate DB");

        let cost_estimator = config
            .make_cost_estimator()
            .unwrap_or_else(|| Box::new(UnitEstimator));
        let metric = config
            .make_cost_metric()
            .unwrap_or_else(|| Box::new(UnitMetric));

        let mempool = MemPoolDB::open(
            is_mainnet,
            chain_id,
            &stacks_chainstate_path,
            cost_estimator,
            metric,
        )
        .expect("Database failure opening mempool");

        let keychain = Keychain::default(config.node.seed.clone());
        let bitcoin_controller = BitcoinRegtestController::new_dummy(config.clone());

        RelayerThread {
            config: config.clone(),
            sortdb: Some(sortdb),
            chainstate: Some(chainstate),
            mempool: Some(mempool),
            globals,
            keychain,
            burnchain: runloop.get_burnchain(),
            last_vrf_key_burn_height: 0,
            last_mined_blocks: MinedBlocks::new(),
            bitcoin_controller,
            event_dispatcher: runloop.get_event_dispatcher(),
            local_peer,

            last_tenure_issue_time: 0,
            last_network_block_height: 0,
            last_network_block_height_ts: 0,
            last_network_download_passes: 0,
            min_network_download_passes: 0,
            last_network_inv_passes: 0,
            min_network_inv_passes: 0,

            last_tenure_consensus_hash: None,
            miner_tip: None,
            last_microblock_tenure_time: 0,
            microblock_deadline: 0,
            microblock_stream_cost: ExecutionCost::ZERO,

            relayer,

            miner_thread: None,
            mined_stacks_block: false,
            last_attempt_failed: false,
        }
    }

2896
    /// Get an immutable ref to the sortdb
    pub fn sortdb_ref(&self) -> &SortitionDB {
        self.sortdb
            .as_ref()
            .expect("FATAL: tried to access sortdb while taken")
    }

    /// Get an immutable ref to the chainstate
    pub fn chainstate_ref(&self) -> &StacksChainState {
        self.chainstate
            .as_ref()
            .expect("FATAL: tried to access chainstate while it was taken")
    }

    /// Fool the borrow checker into letting us do something with the chainstate databases.
    /// DOES NOT COMPOSE -- do NOT call this, or self.sortdb_ref(), or self.chainstate_ref(), within
    /// `func`.  You will get a runtime panic.
    pub fn with_chainstate<F, R>(&mut self, func: F) -> R
    where
        F: FnOnce(&mut RelayerThread, &mut SortitionDB, &mut StacksChainState, &mut MemPoolDB) -> R,
    {
        let mut sortdb = self
            .sortdb
            .take()
            .expect("FATAL: tried to take sortdb while taken");
        let mut chainstate = self
            .chainstate
            .take()
            .expect("FATAL: tried to take chainstate while taken");
        let mut mempool = self
            .mempool
            .take()
            .expect("FATAL: tried to take mempool while taken");
        let res = func(self, &mut sortdb, &mut chainstate, &mut mempool);
        self.sortdb = Some(sortdb);
        self.chainstate = Some(chainstate);
        self.mempool = Some(mempool);
        res
    }

    /// have we waited for the right conditions under which to start mining a block off of our
    /// chain tip?
    pub fn has_waited_for_latest_blocks(&self) -> bool {
        // a network download pass took place
        self.min_network_download_passes <= self.last_network_download_passes
        // we waited long enough for a download pass, but timed out waiting
        || self.last_network_block_height_ts + (self.config.node.wait_time_for_blocks as u128) < get_epoch_time_ms()
        // we're not supposed to wait at all
        || !self.config.miner.wait_for_block_download
    }

    /// Return debug string for waiting for latest blocks
    pub fn debug_waited_for_latest_blocks(&self) -> String {
        format!(
            "({} <= {} && {} <= {}) || {} + {} < {} || {}",
            self.min_network_download_passes,
            self.last_network_download_passes,
            self.min_network_inv_passes,
            self.last_network_inv_passes,
            self.last_network_block_height_ts,
            self.config.node.wait_time_for_blocks,
            get_epoch_time_ms(),
            self.config.miner.wait_for_block_download
        )
    }

    /// Handle a NetworkResult from the p2p/http state machine.  Usually this is the act of
    /// * preprocessing and storing new blocks and microblocks
    /// * relaying blocks, microblocks, and transactions
    /// * updating unconfirmed state views
    pub fn process_network_result(&mut self, mut net_result: NetworkResult) {
        debug!(
            "Relayer: Handle network result (from {})",
            net_result.burn_height
        );

        if self.last_network_block_height != net_result.burn_height {
            // burnchain advanced; disable mining until we also do a download pass.
            self.last_network_block_height = net_result.burn_height;
            self.min_network_download_passes = net_result.num_download_passes + 1;
            self.min_network_inv_passes = net_result.num_inv_sync_passes + 1;
            self.last_network_block_height_ts = get_epoch_time_ms();
            debug!(
                "Relayer: block mining until the next download pass {}",
                self.min_network_download_passes
            );
            signal_mining_blocked(self.globals.get_miner_status());
        }

        let net_receipts = self.with_chainstate(|relayer_thread, sortdb, chainstate, mempool| {
            relayer_thread
                .relayer
                .process_network_result(
                    &relayer_thread.local_peer,
                    &mut net_result,
                    &relayer_thread.burnchain,
                    sortdb,
                    chainstate,
                    mempool,
                    relayer_thread.globals.sync_comms.get_ibd(),
                    Some(&relayer_thread.globals.coord_comms),
                    Some(&relayer_thread.event_dispatcher),
                )
                .expect("BUG: failure processing network results")
        });

        if net_receipts.num_new_blocks > 0 || net_receipts.num_new_confirmed_microblocks > 0 {
            // if we received any new block data that could invalidate our view of the chain tip,
            // then stop mining until we process it
            debug!("Relayer: block mining to process newly-arrived blocks or microblocks");
            signal_mining_blocked(self.globals.get_miner_status());
        }

        let mempool_txs_added = net_receipts.mempool_txs_added.len();
        if mempool_txs_added > 0 {
            self.event_dispatcher
                .process_new_mempool_txs(net_receipts.mempool_txs_added);
        }

        let num_unconfirmed_microblock_tx_receipts =
            net_receipts.processed_unconfirmed_state.receipts.len();
        if num_unconfirmed_microblock_tx_receipts > 0 {
            if let Some(unconfirmed_state) = self.chainstate_ref().unconfirmed_state.as_ref() {
                self.event_dispatcher.process_new_microblocks(
                    &unconfirmed_state.confirmed_chain_tip,
                    &net_receipts.processed_unconfirmed_state,
                );
            } else {
                warn!("Relayer: oops, unconfirmed state is uninitialized but there are microblock events");
            }
        }

        // Dispatch retrieved attachments, if any.
        if net_result.has_attachments() {
            self.event_dispatcher
                .process_new_attachments(&net_result.attachments);
        }

        // synchronize unconfirmed tx index to p2p thread
        self.with_chainstate(|relayer_thread, _sortdb, chainstate, _mempool| {
            relayer_thread.globals.send_unconfirmed_txs(chainstate);
        });

        // resume mining if we blocked it, and if we've done the requisite download
        // passes
        self.last_network_download_passes = net_result.num_download_passes;
        self.last_network_inv_passes = net_result.num_inv_sync_passes;
        if self.has_waited_for_latest_blocks() {
            debug!("Relayer: did a download pass, so unblocking mining");
            signal_mining_ready(self.globals.get_miner_status());
        }
    }

    /// Process the block and microblocks from a sortition that we won.
    /// At this point, we're modifying the chainstate, and merging the artifacts from the previous tenure.
    /// Blocks until the given stacks block is processed.
    /// Returns true if we accepted this block as new.
    /// Returns false if we already processed this block.
    fn accept_winning_tenure(
        &mut self,
        anchored_block: &StacksBlock,
        consensus_hash: &ConsensusHash,
        parent_consensus_hash: &ConsensusHash,
    ) -> Result<bool, ChainstateError> {
        if StacksChainState::has_stored_block(
            self.chainstate_ref().db(),
            &self.chainstate_ref().blocks_path,
            consensus_hash,
            &anchored_block.block_hash(),
        )? {
            // already processed my tenure
            return Ok(false);
        }
        let burn_height =
            SortitionDB::get_block_snapshot_consensus(self.sortdb_ref().conn(), consensus_hash)
                .map_err(|e| {
                    error!("Failed to find block snapshot for mined block: {e}");
                    e
                })?
                .ok_or_else(|| {
                    error!("Failed to find block snapshot for mined block");
                    ChainstateError::NoSuchBlockError
                })?
                .block_height;

        let epoch_id = SortitionDB::get_stacks_epoch(self.sortdb_ref().conn(), burn_height)?
            .expect("FATAL: no epoch defined")
            .epoch_id;

        // failsafe
        if !Relayer::static_check_problematic_relayed_block(
            self.chainstate_ref().mainnet,
            epoch_id,
            anchored_block,
        ) {
            // nope!
            warn!(
                "Our mined block {} was problematic. Will NOT process.",
                &anchored_block.block_hash()
            );
            #[cfg(any(test, feature = "testing"))]
            {
                use std::path::Path;
                if let Ok(path) = std::env::var("STACKS_BAD_BLOCKS_DIR") {
                    // record this block somewhere
                    if fs::metadata(&path).is_err() {
                        fs::create_dir_all(&path)
                            .unwrap_or_else(|_| panic!("FATAL: could not create '{path}'"));
                    }

                    let path = Path::new(&path);
                    let path = path.join(Path::new(&format!("{}", &anchored_block.block_hash())));
                    let mut file = fs::File::create(&path)
                        .unwrap_or_else(|_| panic!("FATAL: could not create '{path:?}'"));

                    let block_bits = anchored_block.serialize_to_vec();
                    let block_bits_hex = to_hex(&block_bits);
                    let block_json =
                        format!(r#"{{"block":"{block_bits_hex}","consensus":"{consensus_hash}"}}"#);
                    file.write_all(block_json.as_bytes()).unwrap_or_else(|_| {
                        panic!("FATAL: failed to write block bits to '{path:?}'")
                    });
                    info!(
                        "Fault injection: bad block {} saved to {}",
                        &anchored_block.block_hash(),
                        &path.to_str().unwrap()
                    );
                }
            }
            return Err(ChainstateError::NoTransactionsToMine);
        }

        // Preprocess the anchored block
        self.with_chainstate(|_relayer_thread, sort_db, chainstate, _mempool| {
            let ic = sort_db.index_conn();
3131
            chainstate.preprocess_anchored_block(
6,191✔
3132
                &ic,
6,191✔
3133
                consensus_hash,
6,191✔
3134
                anchored_block,
6,191✔
3135
                parent_consensus_hash,
6,191✔
3136
                0,
3137
            )
3138
        })?;
6,191✔
3139

3140
        Ok(true)
6,166✔
3141
    }
6,191✔

    /// Process a new block we mined.
    /// Return true if we processed it.
    /// Return false if we timed out waiting for it.
    /// Return Err(..) if we couldn't reach the chains coordinator thread.
    fn process_new_block(&self) -> Result<bool, Error> {
        // process the block
        let stacks_blocks_processed = self.globals.coord_comms.get_stacks_blocks_processed();
        if !self.globals.coord_comms.announce_new_stacks_block() {
            return Err(Error::CoordinatorClosed);
        }
        if !self
            .globals
            .coord_comms
            .wait_for_stacks_blocks_processed(stacks_blocks_processed, u64::MAX)
        {
            // basically unreachable
            warn!("ChainsCoordinator timed out while waiting for new stacks block to be processed");
            return Ok(false);
        }
        debug!("Relayer: Stacks block has been processed");

        Ok(true)
    }

    /// Given the two miner tips, return the newer tip.
    fn pick_higher_tip(cur: Option<MinerTip>, new: Option<MinerTip>) -> Option<MinerTip> {
        match (cur, new) {
            (Some(cur), None) => Some(cur),
            (None, Some(new)) => Some(new),
            (None, None) => None,
            (Some(cur), Some(new)) => {
                if cur.stacks_height < new.stacks_height {
                    Some(new)
                } else if cur.stacks_height > new.stacks_height {
                    Some(cur)
                } else if cur.burn_height < new.burn_height {
                    Some(new)
                } else if cur.burn_height > new.burn_height {
                    Some(cur)
                } else {
                    assert_eq!(cur, new);
                    Some(cur)
                }
            }
        }
    }
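    // The tie-breaking order in `pick_higher_tip` (greater Stacks height wins,
    // then greater burn height) is equivalent to a lexicographic comparison of
    // the two height fields. A minimal, self-contained sketch follows; `Tip` is
    // a hypothetical stand-in for `MinerTip` carrying only the compared fields.
    //
    // ```rust
    // // Sketch of the pick_higher_tip ordering, under the assumption that only
    // // stacks_height and burn_height participate in the comparison.
    // #[derive(Clone, Debug, PartialEq, Eq)]
    // struct Tip {
    //     stacks_height: u64,
    //     burn_height: u64,
    // }
    //
    // fn pick_higher_tip(cur: Option<Tip>, new: Option<Tip>) -> Option<Tip> {
    //     match (cur, new) {
    //         (Some(cur), None) => Some(cur),
    //         (None, Some(new)) => Some(new),
    //         (None, None) => None,
    //         (Some(cur), Some(new)) => {
    //             // compare (stacks_height, burn_height) lexicographically;
    //             // on a full tie, keep the current tip
    //             if (new.stacks_height, new.burn_height) > (cur.stacks_height, cur.burn_height) {
    //                 Some(new)
    //             } else {
    //                 Some(cur)
    //             }
    //         }
    //     }
    // }
    //
    // fn main() {
    //     let a = Tip { stacks_height: 10, burn_height: 100 };
    //     let b = Tip { stacks_height: 10, burn_height: 101 };
    //     // equal Stacks height: the greater burn height wins
    //     assert_eq!(pick_higher_tip(Some(a.clone()), Some(b.clone())), Some(b));
    //     assert_eq!(pick_higher_tip(Some(a.clone()), None), Some(a));
    // }
    // ```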

    /// Given the pointer to a recently-discovered tenure, see if we won the sortition and if so,
    /// store it, preprocess it, and forward it to our neighbors.  All the while, keep track of the
    /// latest Stacks mining tip we have produced so far.
    ///
    /// Returns (true, Some(tip)) if the coordinator is still running and we have a miner tip to
    /// build on (i.e. we won this last sortition).
    ///
    /// Returns (true, None) if the coordinator is still running, and we do NOT have a miner tip to
    /// build on (i.e. we did not win this last sortition)
    ///
    /// Returns (false, _) if the coordinator could not be reached, meaning this thread should die.
    pub fn process_one_tenure(
        &mut self,
        consensus_hash: ConsensusHash,
        block_header_hash: BlockHeaderHash,
        burn_hash: BurnchainHeaderHash,
    ) -> (bool, Option<MinerTip>) {
        let mut miner_tip = None;
        let sn =
            SortitionDB::get_block_snapshot_consensus(self.sortdb_ref().conn(), &consensus_hash)
                .expect("FATAL: failed to query sortition DB")
                .expect("FATAL: unknown consensus hash");

        debug!(
            "Relayer: Process tenure {consensus_hash}/{block_header_hash} in {burn_hash} burn height {}",
            sn.block_height
        );

        if let Some((last_mined_block_data, microblock_privkey)) =
            self.last_mined_blocks.remove(&block_header_hash)
        {
            // we won!
            let AssembledAnchorBlock {
                parent_consensus_hash,
                anchored_block: mined_block,
                burn_hash: mined_burn_hash,
                attempt: _,
                ..
            } = last_mined_block_data;

            let reward_block_height = mined_block.header.total_work.work + MINER_REWARD_MATURITY;
            info!(
                "Relayer: Won sortition! Mining reward will be received in {MINER_REWARD_MATURITY} blocks (block #{reward_block_height})"
            );
            debug!("Relayer: Won sortition!";
                  "stacks_header" => %block_header_hash,
                  "burn_hash" => %mined_burn_hash,
            );

            increment_stx_blocks_mined_counter();
            let has_new_data = match self.accept_winning_tenure(
                &mined_block,
                &consensus_hash,
                &parent_consensus_hash,
            ) {
                Ok(accepted) => accepted,
                Err(ChainstateError::ChannelClosed(_)) => {
                    warn!("Coordinator stopped, stopping relayer thread...");
                    return (false, None);
                }
                Err(e) => {
                    warn!("Error processing my tenure, bad block produced: {e}");
                    warn!(
                        "Bad block";
                        "stacks_header" => %block_header_hash,
                        "data" => %to_hex(&mined_block.serialize_to_vec()),
                    );
                    return (true, None);
                }
            };

            // advertise _and_ push blocks for now
            let blocks_available = Relayer::load_blocks_available_data(
                self.sortdb_ref(),
                vec![consensus_hash.clone()],
            )
            .expect("Failed to obtain block information for a block we mined.");

            let block_data = {
                let mut bd = HashMap::new();
                bd.insert(consensus_hash.clone(), mined_block.clone());
                bd
            };

            if let Err(e) = self.relayer.advertize_blocks(blocks_available, block_data) {
                warn!("Failed to advertise new block: {e}");
            }

            let snapshot = SortitionDB::get_block_snapshot_consensus(
                self.sortdb_ref().conn(),
                &consensus_hash,
            )
            .expect("Failed to obtain snapshot for block")
            .expect("Failed to obtain snapshot for block");

            if !snapshot.pox_valid {
                warn!(
                    "Snapshot for {consensus_hash} is no longer valid; discarding {}...",
                    &mined_block.block_hash()
                );
                miner_tip = Self::pick_higher_tip(miner_tip, None);
            } else {
                let ch = snapshot.consensus_hash.clone();
                let bh = mined_block.block_hash();
                let height = mined_block.header.total_work.work;

                let mut broadcast = true;
                if self.chainstate_ref().fault_injection.hide_blocks
                    && Relayer::fault_injection_is_block_hidden(
                        &mined_block.header,
                        snapshot.block_height,
                    )
                {
                    broadcast = false;
                }
                if broadcast {
                    if let Err(e) = self
                        .relayer
                        .broadcast_block(snapshot.consensus_hash, mined_block)
                    {
                        warn!("Failed to push new block: {e}");
                    }
                }

                // proceed to mine microblocks
                miner_tip = Some(MinerTip::new(
                    ch,
                    bh,
                    microblock_privkey,
                    height,
                    snapshot.block_height,
                ));
            }

            if has_new_data {
                // process the block, now that we've advertised it
                if let Err(Error::CoordinatorClosed) = self.process_new_block() {
                    // coordinator stopped
                    return (false, None);
                }
            }
        } else {
            debug!(
                "Relayer: Did not win sortition in {burn_hash}, winning block was {consensus_hash}/{block_header_hash}"
            );
            miner_tip = None;
        }

        (true, miner_tip)
    }

    // TODO: add tests from mutation testing results #4872
    #[cfg_attr(test, mutants::skip)]
    /// Process all new tenures that we're aware of.
    /// Clear out stale tenure artifacts as well.
    /// Update the miner tip if we won the highest tenure (or clear it if we didn't).
    /// If we won any sortitions, send the block and microblock data to the p2p thread.
    /// Return true if we can still continue to run; false if not.
    pub fn process_new_tenures(
        &mut self,
        consensus_hash: ConsensusHash,
        burn_hash: BurnchainHeaderHash,
        block_header_hash: BlockHeaderHash,
    ) -> bool {
        let mut miner_tip = None;
        let mut num_sortitions = 0;

        // process all sortitions between the last-processed consensus hash and this
        // one.  ProcessTenure(..) messages can get lost.
        let burn_tip = SortitionDB::get_canonical_burn_chain_tip(self.sortdb_ref().conn())
            .expect("FATAL: failed to read current burnchain tip");
        let mut microblocks_disabled =
            SortitionDB::are_microblocks_disabled(self.sortdb_ref().conn(), burn_tip.block_height)
                .expect("FATAL: failed to query epoch's microblock status");

        let tenures = if let Some(last_ch) = self.last_tenure_consensus_hash.as_ref() {
            let mut tenures = vec![];
            let last_sn =
                SortitionDB::get_block_snapshot_consensus(self.sortdb_ref().conn(), last_ch)
                    .expect("FATAL: failed to query sortition DB")
                    .expect("FATAL: unknown prior consensus hash");

            debug!(
                "Relayer: query tenures between burn block heights {} and {}",
                last_sn.block_height + 1,
                burn_tip.block_height + 1
            );
            for block_to_process in (last_sn.block_height + 1)..(burn_tip.block_height + 1) {
                num_sortitions += 1;
                let sn = {
                    let ic = self.sortdb_ref().index_conn();
                    SortitionDB::get_ancestor_snapshot(
                        &ic,
                        block_to_process,
                        &burn_tip.sortition_id,
                    )
                    .expect("FATAL: failed to read ancestor snapshot from sortition DB")
                    .expect("Failed to find block in fork processed by burnchain indexer")
                };
                if !sn.sortition {
                    debug!(
                        "Relayer: Skipping tenure {}/{} at burn hash/height {},{} -- no sortition",
                        &sn.consensus_hash,
                        &sn.winning_stacks_block_hash,
                        &sn.burn_header_hash,
                        sn.block_height
                    );
                    continue;
                }
                debug!(
                    "Relayer: Will process tenure {}/{} at burn hash/height {},{}",
                    &sn.consensus_hash,
                    &sn.winning_stacks_block_hash,
                    &sn.burn_header_hash,
                    sn.block_height
                );
                tenures.push((
                    sn.consensus_hash,
                    sn.burn_header_hash,
                    sn.winning_stacks_block_hash,
                ));
            }
            tenures
        } else {
            // first-ever tenure processed
            vec![(consensus_hash, burn_hash, block_header_hash)]
        };

        debug!("Relayer: will process {} tenures", &tenures.len());
        let num_tenures = tenures.len();
        if num_tenures > 0 {
            // temporarily halt mining
            debug!(
                "Relayer: block mining to process {} tenures",
                &tenures.len()
            );
            signal_mining_blocked(self.globals.get_miner_status());
        }

        for (consensus_hash, burn_hash, block_header_hash) in tenures.into_iter() {
            self.miner_thread_try_join();
            let (continue_thread, new_miner_tip) =
                self.process_one_tenure(consensus_hash.clone(), block_header_hash, burn_hash);
            if !continue_thread {
                // coordinator thread hang-up
                return false;
            }
            miner_tip = Self::pick_higher_tip(miner_tip, new_miner_tip);

            // clear all blocks up to this consensus hash
            let this_burn_tip = SortitionDB::get_block_snapshot_consensus(
                self.sortdb_ref().conn(),
                &consensus_hash,
            )
            .expect("FATAL: failed to query sortition DB")
            .expect("FATAL: no snapshot for consensus hash");

            let old_last_mined_blocks = mem::take(&mut self.last_mined_blocks);
            self.last_mined_blocks =
                Self::clear_stale_mined_blocks(this_burn_tip.block_height, old_last_mined_blocks);

            // update last-tenure pointer
            self.last_tenure_consensus_hash = Some(consensus_hash);
        }

        if let Some(mtip) = miner_tip.take() {
            // sanity check -- is this also the canonical tip?
            let (stacks_tip_consensus_hash, stacks_tip_block_hash) =
                self.with_chainstate(|_relayer_thread, sortdb, _chainstate, _| {
                    SortitionDB::get_canonical_stacks_chain_tip_hash(sortdb.conn()).expect(
                        "FATAL: failed to query sortition DB for canonical stacks chain tip hashes",
                    )
                });

            if mtip.consensus_hash != stacks_tip_consensus_hash
                || mtip.block_hash != stacks_tip_block_hash
            {
                debug!(
                    "Relayer: miner tip {}/{} is NOT canonical ({stacks_tip_consensus_hash}/{stacks_tip_block_hash})",
                    &mtip.consensus_hash,
                    &mtip.block_hash,
                );
                miner_tip = None;
            } else {
                debug!(
                    "Relayer: Microblock miner tip is now {}/{} ({})",
                    mtip.consensus_hash,
                    mtip.block_hash,
                    StacksBlockHeader::make_index_block_hash(
                        &mtip.consensus_hash,
                        &mtip.block_hash
                    )
                );

                self.with_chainstate(|relayer_thread, sortdb, chainstate, _mempool| {
                    Relayer::refresh_unconfirmed(chainstate, sortdb);
                    relayer_thread.globals.send_unconfirmed_txs(chainstate);
                });

                miner_tip = Some(mtip);
            }
        }

        // update state for microblock mining
        self.setup_microblock_mining_state(miner_tip);

        if cfg!(test)
            && std::env::var("STACKS_TEST_FORCE_MICROBLOCKS_POST_25").as_deref() == Ok("1")
        {
            debug!("Allowing miner to mine microblocks because STACKS_TEST_FORCE_MICROBLOCKS_POST_25 = 1");
            microblocks_disabled = false;
        }

        // resume mining if we blocked it
        if num_tenures > 0 || num_sortitions > 0 {
            if self.miner_tip.is_some() {
                // we won the highest tenure
                if self.config.node.mine_microblocks && !microblocks_disabled {
                    // mine a microblock first
                    self.mined_stacks_block = true;
                } else {
                    // mine a Stacks block first -- we won't build microblocks
                    self.mined_stacks_block = false;
                }
            } else {
                // mine a Stacks block first -- we didn't win
                self.mined_stacks_block = false;
            }
            signal_mining_ready(self.globals.get_miner_status());
        }
        true
    }

    /// Update the miner tip with a new tip.  If it's changed, then clear out the microblock stream
    /// cost since we won't be mining it anymore.
    fn setup_microblock_mining_state(&mut self, new_miner_tip: Option<MinerTip>) {
        // update state
        let my_miner_tip = std::mem::take(&mut self.miner_tip);
        let best_tip = Self::pick_higher_tip(my_miner_tip.clone(), new_miner_tip.clone());
        if best_tip == new_miner_tip && best_tip != my_miner_tip {
            // tip has changed
            debug!("Relayer: Best miner tip went from {my_miner_tip:?} to {new_miner_tip:?}");
            self.microblock_stream_cost = ExecutionCost::ZERO;
        }
        self.miner_tip = best_tip;
    }

    /// Try to resume microblock mining if we don't need to build an anchored block
    fn try_resume_microblock_mining(&mut self) {
        if self.miner_tip.is_some() {
            // we won the highest tenure
            if self.config.node.mine_microblocks {
                // mine a microblock first
                self.mined_stacks_block = true;
            } else {
                // mine a Stacks block first -- we won't build microblocks
                self.mined_stacks_block = false;
            }
        } else {
            // mine a Stacks block first -- we didn't win
            self.mined_stacks_block = false;
        }
    }

    /// Constructs and returns a LeaderKeyRegisterOp out of the provided params
    fn inner_generate_leader_key_register_op(
        vrf_public_key: VRFPublicKey,
        consensus_hash: ConsensusHash,
        miner_pk: Option<&StacksPublicKey>,
    ) -> BlockstackOperationType {
        let memo = if let Some(pk) = miner_pk {
            Hash160::from_node_public_key(pk).as_bytes().to_vec()
        } else {
            vec![]
        };
        BlockstackOperationType::LeaderKeyRegister(LeaderKeyRegisterOp {
            public_key: vrf_public_key,
            memo,
            consensus_hash,
            vtxindex: 0,
            txid: Txid([0u8; 32]),
            block_height: 0,
            burn_header_hash: BurnchainHeaderHash::zero(),
        })
    }

    /// Create and broadcast a VRF public key registration transaction.
    /// If submission succeeds, record the pending leader key registration;
    /// does nothing if a registration for this burn height is already in-flight.
    pub fn rotate_vrf_and_register(&mut self, burn_block: &BlockSnapshot) {
        if burn_block.block_height == self.last_vrf_key_burn_height {
            // already in-flight
            return;
        }
        let cur_epoch =
            SortitionDB::get_stacks_epoch(self.sortdb_ref().conn(), burn_block.block_height)
                .expect("FATAL: failed to query sortition DB")
                .expect("FATAL: no epoch defined")
                .epoch_id;
        let (vrf_pk, _) = self.keychain.make_vrf_keypair(burn_block.block_height);

        debug!(
            "Submit leader-key-register for {} {}",
            &vrf_pk.to_hex(),
            burn_block.block_height
        );

        let burnchain_tip_consensus_hash = burn_block.consensus_hash.clone();
        // if the miner has set a mining key in preparation for epoch-3.0, register it as part of their VRF key registration.
        // once implemented in the nakamoto_node, this will allow miners to transition from 2.5 to 3.0 without submitting a new
        // VRF key registration.
        let miner_pk = self
            .config
            .miner
            .mining_key
            .as_ref()
            .map(StacksPublicKey::from_private);
        let op = Self::inner_generate_leader_key_register_op(
            vrf_pk,
            burnchain_tip_consensus_hash,
            miner_pk.as_ref(),
        );

        let mut one_off_signer = self.keychain.generate_op_signer();
        if let Ok(txid) =
            self.bitcoin_controller
                .submit_operation(cur_epoch, op, &mut one_off_signer)
        {
            // advance key registration state
            self.last_vrf_key_burn_height = burn_block.block_height;
            self.globals
                .set_pending_leader_key_registration(burn_block.block_height, txid);
        }
    }

    /// Remove any block state we've mined for the given burnchain height.
    /// Return the filtered `last_mined_blocks`.
    fn clear_stale_mined_blocks(burn_height: u64, last_mined_blocks: MinedBlocks) -> MinedBlocks {
        let mut ret = HashMap::new();
        for (stacks_bhh, (assembled_block, microblock_privkey)) in last_mined_blocks.into_iter() {
            if assembled_block.burn_block_height < burn_height {
                debug!(
                    "Stale mined block: {stacks_bhh} (as of {},{})",
                    &assembled_block.burn_hash, assembled_block.burn_block_height
                );
                continue;
            }
            debug!(
                "Mined block in-flight: {stacks_bhh} (as of {},{})",
                &assembled_block.burn_hash, assembled_block.burn_block_height
            );
            ret.insert(stacks_bhh, (assembled_block, microblock_privkey));
        }
        ret
    }
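    // The loop in `clear_stale_mined_blocks` amounts to a filter: drop every
    // entry mined at a burn height below the cutoff, keep the rest. A minimal
    // sketch with hypothetical simplified types (`u64` keys standing in for
    // block hashes, the value carrying only the burn height):
    //
    // ```rust
    // use std::collections::HashMap;
    //
    // // Keep only entries whose burn height is at or above the cutoff,
    // // mirroring the clear_stale_mined_blocks filter.
    // fn clear_stale(burn_height: u64, blocks: HashMap<u64, u64>) -> HashMap<u64, u64> {
    //     blocks
    //         .into_iter()
    //         .filter(|(_bhh, block_burn_height)| *block_burn_height >= burn_height)
    //         .collect()
    // }
    //
    // fn main() {
    //     let mut blocks = HashMap::new();
    //     blocks.insert(1, 100); // stale: mined below the cutoff
    //     blocks.insert(2, 105); // in-flight: at the cutoff
    //     let kept = clear_stale(105, blocks);
    //     assert_eq!(kept.len(), 1);
    //     assert!(kept.contains_key(&2));
    // }
    // ```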

    /// Create the block miner thread state.
    /// Only proceeds if all of the following are true:
    ///   * The miner is not blocked
    ///   * `last_burn_block` corresponds to the canonical sortition DB's chain tip
    ///   * The time of issuance is sufficiently recent
    ///   * There are no unprocessed stacks blocks in the staging DB
    ///   * The relayer has already tried a download scan that included this sortition (which, if a
    ///     block was found, would have placed it into the staging DB and marked it as
    ///     unprocessed)
    ///   * A miner thread is not running already
    fn create_block_miner(
        &mut self,
        registered_key: RegisteredKey,
        last_burn_block: BlockSnapshot,
        issue_timestamp_ms: u128,
    ) -> Option<BlockMinerThread> {
        if self
            .globals
            .get_miner_status()
            .lock()
            .expect("FATAL: mutex poisoned")
            .is_blocked()
        {
            debug!(
                "Relayer: miner is blocked as of {}; cannot mine Stacks block at this time",
                &last_burn_block.burn_header_hash
            );
            return None;
        }

        if fault_injection_skip_mining(&self.config.node.rpc_bind, last_burn_block.block_height) {
            debug!(
                "Relayer: fault injection skip mining at block height {}",
                last_burn_block.block_height
            );
            return None;
        }

        // start a new tenure
        if let Some(cur_sortition) = self.globals.get_last_sortition() {
            if last_burn_block.sortition_id != cur_sortition.sortition_id {
                debug!(
                    "Relayer: Drop stale RunTenure for {}: current sortition is for {}",
                    &last_burn_block.burn_header_hash, &cur_sortition.burn_header_hash
                );
                self.globals.counters.bump_missed_tenures();
                return None;
            }
        }

        let burn_header_hash = last_burn_block.burn_header_hash.clone();
        let burn_chain_sn = SortitionDB::get_canonical_burn_chain_tip(self.sortdb_ref().conn())
            .expect("FATAL: failed to query sortition DB for canonical burn chain tip");

        let burn_chain_tip = burn_chain_sn.burn_header_hash;

        if burn_chain_tip != burn_header_hash {
            debug!(
                "Relayer: Drop stale RunTenure for {burn_header_hash}: current sortition is for {burn_chain_tip}"
            );
            self.globals.counters.bump_missed_tenures();
            return None;
        }

        let miner_config = self.config.get_miner_config();

        let has_unprocessed = BlockMinerThread::unprocessed_blocks_prevent_mining(
            &self.burnchain,
            self.sortdb_ref(),
            self.chainstate_ref(),
            miner_config.unprocessed_block_deadline_secs,
        );
        if has_unprocessed {
            debug!(
                "Relayer: Drop RunTenure for {burn_header_hash} because there are fewer than {} pending blocks",
                self.burnchain.pox_constants.prepare_length - 1
            );
            return None;
        }

        if burn_chain_sn.block_height != self.last_network_block_height
            || !self.has_waited_for_latest_blocks()
        {
            debug!("Relayer: network has not had a chance to process in-flight blocks ({} != {} || !({}))",
                    burn_chain_sn.block_height, self.last_network_block_height, self.debug_waited_for_latest_blocks());
            return None;
        }

        let tenure_cooldown = if self.config.node.mine_microblocks {
            self.config.node.wait_time_for_microblocks as u128
        } else {
            0
        };

        // no burnchain change, so only re-run block tenure every so often in order
        // to give microblocks a chance to collect
        if issue_timestamp_ms < self.last_tenure_issue_time + tenure_cooldown {
            debug!("Relayer: will NOT run tenure since issuance at {} is too fresh (wait until {} + {} = {})",
                    issue_timestamp_ms / 1000, self.last_tenure_issue_time / 1000, tenure_cooldown / 1000, (self.last_tenure_issue_time + tenure_cooldown) / 1000);
            return None;
        }

        // if we're still mining on this burn block, then do nothing
        if self.miner_thread.is_some() {
            debug!("Relayer: will NOT run tenure since miner thread is already running for burn tip {burn_chain_tip}");
            return None;
        }

        debug!(
            "Relayer: Spawn tenure thread";
            "height" => last_burn_block.block_height,
            "burn_header_hash" => %burn_header_hash,
        );

        let miner_thread_state =
            BlockMinerThread::from_relayer_thread(self, registered_key, last_burn_block);
        Some(miner_thread_state)
    }
3763

3764
    /// Try to start up a block miner thread with this given VRF key and current burnchain tip.
3765
    /// Returns true if the thread was started; false if it was not (for any reason)
3766
    #[allow(clippy::incompatible_msrv)]
3767
    pub fn block_miner_thread_try_start(
344,585✔
3768
        &mut self,
344,585✔
3769
        registered_key: RegisteredKey,
344,585✔
3770
        last_burn_block: BlockSnapshot,
344,585✔
3771
        issue_timestamp_ms: u128,
344,585✔
3772
    ) -> bool {
344,585✔
3773
        if !self.miner_thread_try_join() {
344,585✔
3774
            return false;
58,821✔
3775
        }
285,764✔
3776

3777
        if !self.config.get_node_config(false).mock_mining {
285,764✔
3778
            // mock miner can't mine microblocks yet, so don't stop it from trying multiple
3779
            // anchored blocks
3780
            if self.mined_stacks_block && self.config.node.mine_microblocks {
284,917✔
3781
                debug!("Relayer: mined a Stacks block already; waiting for microblock miner");
15,460✔
3782
                return false;
15,460✔
3783
            }
269,457✔
3784
        }
847✔
3785

3786
        let Some(mut miner_thread_state) =
172,623✔
3787
            self.create_block_miner(registered_key, last_burn_block, issue_timestamp_ms)
270,304✔
3788
        else {
3789
            return false;
97,681✔
3790
        };
3791

3792
        if let Ok(miner_handle) = thread::Builder::new()
172,623✔
3793
            .name(format!("miner-block-{}", self.local_peer.data_url))
172,623✔
3794
            .stack_size(BLOCK_PROCESSOR_STACK_SIZE)
172,623✔
3795
            .spawn(move || {
172,623✔
3796
                if let Err(e) = miner_thread_state.send_mock_miner_messages() {
172,623✔
3797
                    warn!("Failed to send mock miner messages: {e}");
1✔
3798
                }
172,622✔
3799
                miner_thread_state.run_tenure()
172,623✔
3800
            })
172,623✔
3801
            .inspect_err(|e| error!("Relayer: Failed to start tenure thread: {e:?}"))
172,623✔
3802
        {
172,623✔
3803
            self.miner_thread = Some(miner_handle);
172,623✔
3804
        }
172,623✔
3805

3806
        true
172,623✔
3807
    }
344,585✔
3808

    // TODO: add tests from mutation testing results #4872
    #[cfg_attr(test, mutants::skip)]
    /// See if we should run a microblock tenure now.
    /// Return true if so; false if not
    fn can_run_microblock_tenure(&mut self) -> bool {
        if !self.config.node.mine_microblocks {
            // not enabled
            test_debug!("Relayer: not configured to mine microblocks");
            return false;
        }

        let burn_tip = SortitionDB::get_canonical_burn_chain_tip(self.sortdb_ref().conn())
            .expect("FATAL: failed to read current burnchain tip");
        let microblocks_disabled =
            SortitionDB::are_microblocks_disabled(self.sortdb_ref().conn(), burn_tip.block_height)
                .expect("FATAL: failed to query epoch's microblock status");

        if microblocks_disabled {
            if cfg!(test)
                && std::env::var("STACKS_TEST_FORCE_MICROBLOCKS_POST_25").as_deref() == Ok("1")
            {
                debug!("Allowing miner to mine microblocks because STACKS_TEST_FORCE_MICROBLOCKS_POST_25 = 1");
            } else {
                return false;
            }
        }

        if !self.miner_thread_try_join() {
            // already running (for an anchored block or microblock)
            test_debug!("Relayer: miner thread already running so cannot mine microblock");
            return false;
        }
        if self.microblock_deadline > get_epoch_time_ms() {
            debug!(
                "Relayer: Too soon to start a microblock tenure ({} > {})",
                self.microblock_deadline,
                get_epoch_time_ms()
            );
            return false;
        }
        if self.miner_tip.is_none() {
            debug!("Relayer: did not win last block, so cannot mine microblocks");
            return false;
        }
        if !self.mined_stacks_block {
            // have not tried to mine a stacks block yet that confirms previously-mined unconfirmed
            // state (or have not tried to mine a new Stacks block yet for this active tenure)
            debug!("Relayer: Did not mine a block yet, so will not mine a microblock");
            return false;
        }
        if self.globals.get_last_sortition().is_none() {
            debug!("Relayer: no first sortition yet");
            return false;
        }

        // go ahead
        true
    }

    /// Start up a microblock miner thread if possible:
    ///   * No miner thread must be running already
    ///   * The miner must not be blocked
    ///   * We must have won the sortition on the Stacks chain tip
    ///
    /// Returns `true` if the thread was started; `false` if not.
    #[allow(clippy::incompatible_msrv)]
    pub fn microblock_miner_thread_try_start(&mut self) -> bool {
        let miner_tip = match self.miner_tip.as_ref() {
            Some(tip) => tip.clone(),
            None => {
                debug!("Relayer: did not win last block, so cannot mine microblocks");
                return false;
            }
        };

        let burnchain_tip = match self.globals.get_last_sortition() {
            Some(sn) => sn,
            None => {
                debug!("Relayer: no first sortition yet");
                return false;
            }
        };

        debug!(
            "Relayer: mined Stacks block {}/{} so can mine microblocks",
            &miner_tip.consensus_hash, &miner_tip.block_hash
        );

        if !self.miner_thread_try_join() {
            // already running (for an anchored block or microblock)
            debug!("Relayer: miner thread already running so cannot mine microblock");
            return false;
        }
        if self
            .globals
            .get_miner_status()
            .lock()
            .expect("FATAL: mutex poisoned")
            .is_blocked()
        {
            debug!(
                "Relayer: miner is blocked as of {}; cannot mine microblock at this time",
                &burnchain_tip.burn_header_hash
            );
            self.globals.counters.set_microblocks_processed(0);
            return false;
        }

        let parent_consensus_hash = &miner_tip.consensus_hash;
        let parent_block_hash = &miner_tip.block_hash;

        debug!("Relayer: Run microblock tenure for {parent_consensus_hash}/{parent_block_hash}");

        let Some(mut microblock_thread_state) = MicroblockMinerThread::from_relayer_thread(self)
        else {
            return false;
        };

        if let Ok(miner_handle) = thread::Builder::new()
            .name(format!("miner-microblock-{}", self.local_peer.data_url))
            .stack_size(BLOCK_PROCESSOR_STACK_SIZE)
            .spawn(move || {
                Some(MinerThreadResult::Microblock(
                    microblock_thread_state.try_mine_microblock(miner_tip.clone()),
                    miner_tip,
                ))
            })
            .inspect_err(|e| error!("Relayer: Failed to start tenure thread: {e:?}"))
        {
            // thread started!
            self.miner_thread = Some(miner_handle);
            self.microblock_deadline =
                get_epoch_time_ms() + (self.config.node.microblock_frequency as u128);
        }

        true
    }

    /// Inner body of Self::miner_thread_try_join
    fn inner_miner_thread_try_join(
        &mut self,
        thread_handle: JoinHandle<Option<MinerThreadResult>>,
    ) -> Option<JoinHandle<Option<MinerThreadResult>>> {
        // tenure run already in progress; try and join
        if !thread_handle.is_finished() {
            debug!("Relayer: RunTenure thread not finished / is in-progress");
            return Some(thread_handle);
        }
        let last_mined_block_opt = thread_handle
            .join()
            .expect("FATAL: failed to join miner thread");
        self.last_attempt_failed = false;
        if let Some(miner_result) = last_mined_block_opt {
            match miner_result {
                MinerThreadResult::Block(
                    last_mined_block,
                    microblock_privkey,
                    ongoing_commit_opt,
                ) => {
                    // finished mining a block
                    if BlockMinerThread::find_inflight_mined_blocks(
                        last_mined_block.burn_block_height,
                        &self.last_mined_blocks,
                    )
                    .is_empty()
                    {
                        // first time we've mined a block in this burnchain block
                        debug!(
                            "Bump block processed for burnchain block {}",
                            &last_mined_block.burn_block_height
                        );
                        self.globals.counters.bump_blocks_processed();
                    }

                    debug!(
                        "Relayer: RunTenure thread joined; got Stacks block {}",
                        &last_mined_block.anchored_block.block_hash()
                    );

                    let bhh = last_mined_block.burn_hash.clone();
                    let orig_bhh = last_mined_block.orig_burn_hash.clone();
                    let tenure_begin = last_mined_block.tenure_begin;

                    self.last_mined_blocks.insert(
                        last_mined_block.anchored_block.block_hash(),
                        (last_mined_block, microblock_privkey),
                    );

                    self.last_tenure_issue_time = get_epoch_time_ms();
                    self.bitcoin_controller
                        .set_ongoing_commit(ongoing_commit_opt);

                    debug!(
                        "Relayer: RunTenure finished at {} (in {}ms) targeting {bhh} (originally {orig_bhh})",
                        self.last_tenure_issue_time,
                        self.last_tenure_issue_time.saturating_sub(tenure_begin)
                    );

                    // this stacks block confirms all in-flight microblocks we know about,
                    // including the ones we produced.
                    self.mined_stacks_block = true;
                }
                MinerThreadResult::Microblock(microblock_result, miner_tip) => {
                    // finished mining a microblock
                    match microblock_result {
                        Ok(Some((next_microblock, new_cost))) => {
                            // apply it
                            let microblock_hash = next_microblock.block_hash();

                            let (processed_unconfirmed_state, num_mblocks) = self.with_chainstate(
                                |_relayer_thread, sortdb, chainstate, _mempool| {
                                    let processed_unconfirmed_state =
                                        Relayer::refresh_unconfirmed(chainstate, sortdb);
                                    let num_mblocks = chainstate
                                        .unconfirmed_state
                                        .as_ref()
                                        .map(|unconfirmed| unconfirmed.num_microblocks())
                                        .unwrap_or(0);

                                    (processed_unconfirmed_state, num_mblocks)
                                },
                            );

                            info!(
                                "Mined one microblock: {microblock_hash} seq {} txs {} (total processed: {num_mblocks})",
                                next_microblock.header.sequence,
                                next_microblock.txs.len()
                            );
                            self.globals.counters.set_microblocks_processed(num_mblocks);

                            let parent_index_block_hash = StacksBlockHeader::make_index_block_hash(
                                &miner_tip.consensus_hash,
                                &miner_tip.block_hash,
                            );
                            self.event_dispatcher.process_new_microblocks(
                                &parent_index_block_hash,
                                &processed_unconfirmed_state,
                            );

                            // send it off
                            if let Err(e) = self.relayer.broadcast_microblock(
                                &miner_tip.consensus_hash,
                                &miner_tip.block_hash,
                                next_microblock,
                            ) {
                                error!(
                                    "Failure trying to broadcast microblock {microblock_hash}: {e}"
                                );
                            }

                            self.last_microblock_tenure_time = get_epoch_time_ms();
                            self.microblock_stream_cost = new_cost;

                            // synchronise state
                            self.with_chainstate(
                                |relayer_thread, _sortdb, chainstate, _mempool| {
                                    relayer_thread.globals.send_unconfirmed_txs(chainstate);
                                },
                            );

                            // have not yet mined a stacks block that confirms this microblock, so
                            // do that on the next run
                            self.mined_stacks_block = false;
                        }
                        Ok(None) => {
                            debug!("Relayer: did not mine microblock in this tenure");

                            // switch back to block mining
                            self.mined_stacks_block = false;
                        }
                        Err(e) => {
                            warn!("Relayer: Failed to mine next microblock: {e:?}");

                            // switch back to block mining
                            self.mined_stacks_block = false;
                        }
                    }
                }
            }
        } else {
            self.last_attempt_failed = true;
            // if we tried and failed to make an anchored block (e.g. because there's nothing to
            // do), then resume microblock mining
            if !self.mined_stacks_block {
                self.try_resume_microblock_mining();
            }
        }
        None
    }

    /// Try to join with the miner thread. If successful, join the thread and return `true`.
    /// Otherwise, if the thread is still running, return `false`.
    ///
    /// Updates internal state gleaned from the miner, such as:
    ///   * New Stacks block data
    ///   * New keychain state
    ///   * New metrics
    ///   * New unconfirmed state
    ///
    /// Returns `true` if joined; `false` if not.
    pub fn miner_thread_try_join(&mut self) -> bool {
        if let Some(thread_handle) = self.miner_thread.take() {
            let new_thread_handle = self.inner_miner_thread_try_join(thread_handle);
            self.miner_thread = new_thread_handle;
        }
        self.miner_thread.is_none()
    }

    /// Try loading up a saved VRF key
    pub(crate) fn load_saved_vrf_key(path: &str) -> Option<RegisteredKey> {
        let mut f = match fs::File::open(path) {
            Ok(f) => f,
            Err(e) => {
                warn!("Could not open {path}: {e:?}");
                return None;
            }
        };
        let mut registered_key_bytes = vec![];
        if let Err(e) = f.read_to_end(&mut registered_key_bytes) {
            warn!("Failed to read registered key bytes from {path}: {e:?}");
            return None;
        }

        let Ok(registered_key) = serde_json::from_slice(&registered_key_bytes) else {
            warn!("Did not load registered key from {path}: could not decode JSON");
            return None;
        };

        info!("Loaded registered key from {path}");
        Some(registered_key)
    }

    /// Top-level dispatcher
    pub fn handle_directive(&mut self, directive: RelayerDirective) -> bool {
        debug!("Relayer: received next directive");
        let continue_running = match directive {
            RelayerDirective::HandleNetResult(net_result) => {
                debug!("Relayer: directive Handle network result");
                self.process_network_result(net_result);
                debug!("Relayer: directive Handled network result");
                true
            }
            RelayerDirective::RegisterKey(last_burn_block) => {
                let mut saved_key_opt = None;
                if let Some(path) = self.config.miner.activated_vrf_key_path.as_ref() {
                    saved_key_opt = Self::load_saved_vrf_key(path);
                }
                if let Some(saved_key) = saved_key_opt {
                    self.globals.resume_leader_key(saved_key);
                } else {
                    self.rotate_vrf_and_register(&last_burn_block);
                    debug!("Relayer: directive Registered VRF key");
                }
                self.globals.counters.bump_blocks_processed();
                true
            }
            RelayerDirective::ProcessTenure(consensus_hash, burn_hash, block_header_hash) => {
                debug!("Relayer: directive Process tenures");
                let res = self.process_new_tenures(consensus_hash, burn_hash, block_header_hash);
                debug!("Relayer: directive Processed tenures");
                res
            }
            RelayerDirective::RunTenure(registered_key, last_burn_block, issue_timestamp_ms) => {
                debug!("Relayer: directive Run tenure");
                let Ok(Some(next_block_epoch)) = SortitionDB::get_stacks_epoch(
                    self.sortdb_ref().conn(),
                    last_burn_block.block_height.saturating_add(1),
                ) else {
                    warn!("Failed to load Stacks Epoch for next burn block, skipping RunTenure directive");
                    return true;
                };
                if next_block_epoch.epoch_id.uses_nakamoto_blocks() {
                    info!("Next burn block is in Nakamoto epoch, skipping RunTenure directive for 2.x node");
                    return true;
                }
                self.block_miner_thread_try_start(
                    registered_key,
                    last_burn_block,
                    issue_timestamp_ms,
                );
                debug!("Relayer: directive Ran tenure");
                true
            }
            RelayerDirective::NakamotoTenureStartProcessed(_, _) => {
                warn!("Relayer: Nakamoto tenure start notification received while still operating 2.x neon node");
                true
            }
            RelayerDirective::Exit => false,
        };
        if !continue_running {
            return false;
        }

        // see if we need to run a microblock tenure
        if self.can_run_microblock_tenure() {
            self.microblock_miner_thread_try_start();
        }
        continue_running
    }
}

impl ParentStacksBlockInfo {
    /// Determine where in the set of forks to attempt to mine the next anchored block.
    /// `mine_tip_ch` and `mine_tip_bh` identify the parent block on top of which to mine.
    /// `check_burn_block` identifies what we believe to be the burn chain's sortition history tip.
    /// This is used to mitigate (but not eliminate) a TOCTTOU issue with mining: the caller's
    /// conception of the sortition history tip may have become stale by the time they call this
    /// method, in which case, mining should *not* happen (since the block will be invalid).
    pub fn lookup(
        chain_state: &mut StacksChainState,
        burn_db: &mut SortitionDB,
        check_burn_block: &BlockSnapshot,
        miner_address: StacksAddress,
        mine_tip_ch: &ConsensusHash,
        mine_tip_bh: &BlockHeaderHash,
    ) -> Result<ParentStacksBlockInfo, Error> {
        let stacks_tip_header = StacksChainState::get_anchored_block_header_info(
            chain_state.db(),
            mine_tip_ch,
            mine_tip_bh,
        )
        .unwrap()
        .ok_or_else(|| {
            error!(
                "Could not mine new tenure, since could not find header for known chain tip.";
                "tip_consensus_hash" => %mine_tip_ch,
                "tip_stacks_block_hash" => %mine_tip_bh
            );
            Error::HeaderNotFoundForChainTip
        })?;

        // the burn header hash and vtxindex of the Stacks block we're mining off of:
        let parent_snapshot =
            SortitionDB::get_block_snapshot_consensus(burn_db.conn(), mine_tip_ch)
                .expect("Failed to look up block's parent snapshot")
                .expect("Failed to look up block's parent snapshot");

        let parent_sortition_id = &parent_snapshot.sortition_id;

        let (parent_block_height, parent_winning_vtxindex, parent_block_total_burn) = if mine_tip_ch
            == &FIRST_BURNCHAIN_CONSENSUS_HASH
        {
            (0, 0, 0)
        } else {
            let parent_winning_vtxindex =
                SortitionDB::get_block_winning_vtxindex(burn_db.conn(), parent_sortition_id)
                    .expect("SortitionDB failure.")
                    .ok_or_else(|| {
                        error!(
                            "Failed to find winning vtx index for the parent sortition";
                            "parent_sortition_id" => %parent_sortition_id
                        );
                        Error::WinningVtxNotFoundForChainTip
                    })?;

            let parent_block = SortitionDB::get_block_snapshot(burn_db.conn(), parent_sortition_id)
                .expect("SortitionDB failure.")
                .ok_or_else(|| {
                    error!(
                        "Failed to find block snapshot for the parent sortition";
                        "parent_sortition_id" => %parent_sortition_id
                    );
                    Error::SnapshotNotFoundForChainTip
                })?;

            (
                parent_block.block_height,
                parent_winning_vtxindex,
                parent_block.total_burn,
            )
        };

        // don't mine off of an old burnchain block
        let burn_chain_tip = SortitionDB::get_canonical_burn_chain_tip(burn_db.conn())
            .expect("FATAL: failed to query sortition DB for canonical burn chain tip");

        if burn_chain_tip.consensus_hash != check_burn_block.consensus_hash {
            info!(
                "New canonical burn chain tip detected. Will not try to mine.";
                "new_consensus_hash" => %burn_chain_tip.consensus_hash,
                "old_consensus_hash" => %check_burn_block.consensus_hash,
                "new_burn_height" => burn_chain_tip.block_height,
                "old_burn_height" => check_burn_block.block_height
            );
            return Err(Error::BurnchainTipChanged);
        }

        debug!("Mining tenure's last consensus hash: {} (height {} hash {}), stacks tip consensus hash: {mine_tip_ch} (height {} hash {})",
               &check_burn_block.consensus_hash, check_burn_block.block_height, &check_burn_block.burn_header_hash,
               parent_snapshot.block_height, &parent_snapshot.burn_header_hash);

        let coinbase_nonce = {
            let principal = miner_address.into();
            let account = chain_state
                .with_read_only_clarity_tx(
                    &burn_db.index_handle(&burn_chain_tip.sortition_id),
                    &StacksBlockHeader::make_index_block_hash(mine_tip_ch, mine_tip_bh),
                    |conn| StacksChainState::get_account(conn, &principal),
                )
                .unwrap_or_else(|| {
                    panic!(
                        "BUG: stacks tip block {mine_tip_ch}/{mine_tip_bh} no longer exists after we queried it"
                    )
                });
            account.nonce
        };

        Ok(ParentStacksBlockInfo {
            stacks_parent_header: stacks_tip_header,
            parent_consensus_hash: mine_tip_ch.clone(),
            parent_block_burn_height: parent_block_height,
            parent_block_total_burn,
            parent_winning_vtxindex,
            coinbase_nonce,
        })
    }
}

/// Thread that runs the network state machine, handling both p2p and http requests.
pub struct PeerThread {
    /// Node config
    config: Config,
    /// instance of the peer network. Made optional in order to trick the borrow checker.
    net: Option<PeerNetwork>,
    /// handle to global inter-thread comms
    globals: Globals,
    /// how long to wait for network messages on each poll, in millis
    poll_timeout: u64,
    /// handle to the sortition DB (optional so we can take/replace it)
    sortdb: Option<SortitionDB>,
    /// handle to the chainstate DB (optional so we can take/replace it)
    chainstate: Option<StacksChainState>,
    /// handle to the mempool DB (optional so we can take/replace it)
    mempool: Option<MemPoolDB>,
    /// buffer of relayer commands with block data that couldn't be sent to the relayer just yet
    /// (i.e. due to backpressure).  We track this separately, instead of just using a bigger
    /// channel, because we need to know when backpressure occurs in order to throttle the p2p
    /// thread's downloader.
    results_with_data: VecDeque<RelayerDirective>,
    /// total number of p2p state-machine passes so far. Used to signal when to download the next
    /// reward cycle of blocks
    num_p2p_state_machine_passes: u64,
    /// total number of inventory state-machine passes so far. Used to signal when to download the
    /// next reward cycle of blocks.
    num_inv_sync_passes: u64,
    /// total number of download state-machine passes so far. Used to signal when to download the
    /// next reward cycle of blocks.
    num_download_passes: u64,
    /// last burnchain block seen in the PeerNetwork's chain view since the last run
    last_burn_block_height: u64,
}

impl PeerThread {
    /// set up the mempool DB connection
    pub fn connect_mempool_db(config: &Config) -> MemPoolDB {
        // create estimators, metric instances for RPC handler
        let cost_estimator = config
            .make_cost_estimator()
            .unwrap_or_else(|| Box::new(UnitEstimator));
        let metric = config
            .make_cost_metric()
            .unwrap_or_else(|| Box::new(UnitMetric));

        MemPoolDB::open(
            config.is_mainnet(),
            config.burnchain.chain_id,
            &config.get_chainstate_path_str(),
            cost_estimator,
            metric,
        )
        .expect("Database failure opening mempool")
    }

    /// Instantiate the p2p thread.
    /// Binds the addresses in the config (which may panic if the port is blocked).
    /// This is so the node will crash "early" before any new threads start if there's going to be
    /// a bind error anyway.
    pub fn new(runloop: &RunLoop, net: PeerNetwork) -> PeerThread {
        Self::new_all(
            runloop.get_globals(),
            runloop.config(),
            runloop.get_burnchain().pox_constants,
            net,
        )
    }

4395
    pub fn new_all(
272✔
4396
        globals: Globals,
272✔
4397
        config: &Config,
272✔
4398
        pox_constants: PoxConstants,
272✔
4399
        mut net: PeerNetwork,
272✔
4400
    ) -> Self {
272✔
4401
        let config = config.clone();
272✔
4402
        let mempool = Self::connect_mempool_db(&config);
272✔
4403
        let burn_db_path = config.get_burn_db_file_path();
272✔
4404

4405
        let sortdb = SortitionDB::open(
272✔
4406
            &burn_db_path,
272✔
4407
            false,
4408
            pox_constants,
272✔
4409
            Some(config.node.get_marf_opts()),
272✔
4410
        )
4411
        .expect("FATAL: could not open sortition DB");
272✔
4412

4413
        let chainstate =
272✔
4414
            open_chainstate_with_faults(&config).expect("FATAL: could not open chainstate DB");
272✔
4415

4416
        let p2p_sock: SocketAddr = config
272✔
4417
            .node
272✔
4418
            .p2p_bind
272✔
4419
            .parse()
272✔
4420
            .unwrap_or_else(|_| panic!("Failed to parse socket: {}", &config.node.p2p_bind));
272✔
4421
        let rpc_sock = config
272✔
4422
            .node
272✔
4423
            .rpc_bind
272✔
4424
            .parse()
272✔
4425
            .unwrap_or_else(|_| panic!("Failed to parse socket: {}", &config.node.rpc_bind));
272✔
4426

4427
        net.bind(&p2p_sock, &rpc_sock)
272✔
4428
            .expect("BUG: PeerNetwork could not bind or is already bound");
272✔
4429

4430
        let poll_timeout = config.get_poll_time();
272✔
4431

4432
        PeerThread {
272✔
4433
            config,
272✔
4434
            net: Some(net),
272✔
4435
            globals,
272✔
4436
            poll_timeout,
272✔
4437
            sortdb: Some(sortdb),
272✔
4438
            chainstate: Some(chainstate),
272✔
4439
            mempool: Some(mempool),
272✔
4440
            results_with_data: VecDeque::new(),
272✔
4441
            num_p2p_state_machine_passes: 0,
272✔
4442
            num_inv_sync_passes: 0,
272✔
4443
            num_download_passes: 0,
272✔
4444
            last_burn_block_height: 0,
272✔
4445
        }
272✔
4446
    }
272✔
4447
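The fail-fast bind-address handling in `new_all` can be sketched in isolation: parse the configured string eagerly and panic with a clear message if it is malformed, so the node dies before any worker threads start. This is a minimal stand-alone sketch; `parse_bind` is a hypothetical helper, not part of the node.

```rust
use std::net::SocketAddr;

// Hypothetical helper mirroring the `unwrap_or_else(|_| panic!(...))` pattern
// used for `p2p_bind` and `rpc_bind` above.
fn parse_bind(addr: &str) -> SocketAddr {
    addr.parse()
        .unwrap_or_else(|_| panic!("Failed to parse socket: {addr}"))
}

fn main() {
    let sock = parse_bind("127.0.0.1:20444");
    // a well-formed address parses; a malformed one would panic immediately
    assert_eq!(sock.port(), 20444);
    assert!(sock.ip().is_loopback());
}
```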

    /// Do something with mutable references to the mempool, sortdb, and chainstate.
    /// Fools the borrow checker.
    /// NOT COMPOSABLE
    fn with_chainstate<F, R>(&mut self, func: F) -> R
    where
        F: FnOnce(&mut PeerThread, &mut SortitionDB, &mut StacksChainState, &mut MemPoolDB) -> R,
    {
        let mut sortdb = self.sortdb.take().expect("BUG: sortdb already taken");
        let mut chainstate = self
            .chainstate
            .take()
            .expect("BUG: chainstate already taken");
        let mut mempool = self.mempool.take().expect("BUG: mempool already taken");

        let res = func(self, &mut sortdb, &mut chainstate, &mut mempool);

        self.sortdb = Some(sortdb);
        self.chainstate = Some(chainstate);
        self.mempool = Some(mempool);

        res
    }
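The take/replace trick used by `with_chainstate` (and `with_network` below) can be shown in a self-contained sketch: temporarily move an owned value out of its `Option` field so a closure can mutably borrow both `self` and the value at once, then always put it back. The `Holder` type and its fields are hypothetical stand-ins for `PeerThread` and its databases.

```rust
struct Holder {
    counter: u64,
    db: Option<Vec<u64>>, // stand-in for the Option-wrapped DB handles
}

impl Holder {
    fn with_db<F, R>(&mut self, func: F) -> R
    where
        F: FnOnce(&mut Holder, &mut Vec<u64>) -> R,
    {
        // Move the value out so `self` and `db` are disjoint mutable borrows...
        let mut db = self.db.take().expect("BUG: db already taken");
        let res = func(self, &mut db);
        // ...and always restore it before returning.
        self.db = Some(db);
        res
    }
}

fn main() {
    let mut h = Holder { counter: 0, db: Some(vec![1, 2, 3]) };
    let sum: u64 = h.with_db(|holder, db| {
        holder.counter += 1; // mutate self while db is also mutably borrowed
        db.push(4);
        db.iter().sum()
    });
    assert_eq!(sum, 10);
    assert!(h.db.is_some()); // handle was put back
}
```

The cost of this pattern is that it is not composable: calling `with_db` again inside the closure would hit the `expect`, which is exactly what the `NOT COMPOSABLE` warnings above guard against.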

    /// Get an immutable ref to the inner network.
    /// DO NOT USE WITHIN with_network()
    fn get_network(&self) -> &PeerNetwork {
        self.net.as_ref().expect("BUG: did not replace net")
    }

    /// Do something with mutable references to the network.
    /// Fools the borrow checker.
    /// NOT COMPOSABLE. DO NOT CALL THIS OR get_network() IN func
    fn with_network<F, R>(&mut self, func: F) -> R
    where
        F: FnOnce(&mut PeerThread, &mut PeerNetwork) -> R,
    {
        let mut net = self.net.take().expect("BUG: net already taken");

        let res = func(self, &mut net);

        self.net = Some(net);
        res
    }

    /// Run one pass of the p2p/http state machine.
    /// Return true if we should continue running passes; false if not.
    #[allow(clippy::borrowed_box)]
    pub fn run_one_pass<B: BurnchainHeaderReader>(
        &mut self,
        indexer: &B,
        dns_client_opt: Option<&mut DNSClient>,
        event_dispatcher: &EventDispatcher,
        cost_estimator: &Box<dyn CostEstimator>,
        cost_metric: &Box<dyn CostMetric>,
        fee_estimator: Option<&Box<dyn FeeEstimator>>,
    ) -> bool {
        // initial block download?
        let ibd = self.globals.sync_comms.get_ibd();
        let download_backpressure = !self.results_with_data.is_empty();
        let poll_ms = if !download_backpressure && self.get_network().has_more_downloads() {
            // keep getting those blocks -- drive the downloader state-machine
            debug!(
                "P2P: backpressure: {download_backpressure}, more downloads: {}",
                self.get_network().has_more_downloads()
            );
            1
        } else {
            self.poll_timeout
        };

        // move over unconfirmed state obtained from the relayer
        self.with_chainstate(|p2p_thread, sortdb, chainstate, _mempool| {
            let _ = Relayer::setup_unconfirmed_state_readonly(chainstate, sortdb);
            p2p_thread.globals.recv_unconfirmed_txs(chainstate);
        });

        let txindex = self.config.node.txindex;

        // do one pass
        let p2p_res = self.with_chainstate(|p2p_thread, sortdb, chainstate, mempool| {
            // NOTE: handler_args must be created such that it outlives the inner net.run() call and
            // doesn't ref anything within p2p_thread.
            let handler_args = RPCHandlerArgs {
                exit_at_block_height: p2p_thread.config.burnchain.process_exit_at_block_height,
                genesis_chainstate_hash: Sha256Sum::from_hex(stx_genesis::GENESIS_CHAINSTATE_HASH)
                    .unwrap(),
                event_observer: Some(event_dispatcher),
                cost_estimator: Some(cost_estimator.as_ref()),
                cost_metric: Some(cost_metric.as_ref()),
                fee_estimator: fee_estimator.map(|boxed_estimator| boxed_estimator.as_ref()),
                ..RPCHandlerArgs::default()
            };
            p2p_thread.with_network(|_, net| {
                net.run(
                    indexer,
                    sortdb,
                    chainstate,
                    mempool,
                    dns_client_opt,
                    download_backpressure,
                    ibd,
                    poll_ms,
                    &handler_args,
                    txindex,
                )
            })
        });

        match p2p_res {
            Ok(network_result) => {
                let mut have_update = false;
                if self.num_p2p_state_machine_passes < network_result.num_state_machine_passes {
                    // p2p state-machine did a full pass. Notify anyone listening.
                    self.globals.sync_comms.notify_p2p_state_pass();
                    self.num_p2p_state_machine_passes = network_result.num_state_machine_passes;
                }

                if self.num_inv_sync_passes < network_result.num_inv_sync_passes {
                    // inv-sync state-machine did a full pass. Notify anyone listening.
                    self.globals.sync_comms.notify_inv_sync_pass();
                    self.num_inv_sync_passes = network_result.num_inv_sync_passes;

                    // the relayer cares about the number of inventory passes, so pass this along
                    have_update = true;
                }

                if self.num_download_passes < network_result.num_download_passes {
                    // download state-machine did a full pass. Notify anyone listening.
                    self.globals.sync_comms.notify_download_pass();
                    self.num_download_passes = network_result.num_download_passes;

                    // the relayer cares about the number of download passes, so pass this along
                    have_update = true;
                }

                if network_result.has_data_to_store()
                    || self.last_burn_block_height != network_result.burn_height
                    || have_update
                {
                    // pass along if we have blocks, microblocks, or transactions, or a status
                    // update on the network's view of the burnchain
                    self.last_burn_block_height = network_result.burn_height;
                    self.results_with_data
                        .push_back(RelayerDirective::HandleNetResult(network_result));
                }
            }
            Err(e) => {
                // this is only reachable if the network is not instantiated correctly --
                // i.e. you didn't connect it
                panic!("P2P: Failed to process network dispatch: {e:?}");
            }
        };

        while let Some(next_result) = self.results_with_data.pop_front() {
            // have blocks, microblocks, and/or transactions (don't care about anything else),
            // or a directive to mine microblocks
            if let Err(e) = self.globals.relay_send.try_send(next_result) {
                debug!(
                    "P2P: {:?}: download backpressure detected (buffered {})",
                    &self.get_network().local_peer,
                    self.results_with_data.len()
                );
                match e {
                    TrySendError::Full(directive) => {
                        if let RelayerDirective::RunTenure(..) = directive {
                            // can drop this
                        } else {
                            // don't lose this data -- just try it again
                            self.results_with_data.push_front(directive);
                        }
                        break;
                    }
                    TrySendError::Disconnected(_) => {
                        info!("P2P: Relayer hung up the p2p channel");
                        self.globals.signal_stop();
                        return false;
                    }
                }
            } else {
                debug!("P2P: Dispatched result to Relayer!");
            }
        }

        true
    }
}
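The backpressure handling at the end of `run_one_pass` — drain the buffer front-to-back, and if the channel is full, push the item back on the front and stop so it is retried on the next pass — can be sketched with a bounded `sync_channel`. The `u64` payload and the `drain` helper are stand-ins for `RelayerDirective` and the inline loop above.

```rust
use std::collections::VecDeque;
use std::sync::mpsc::{sync_channel, SyncSender, TrySendError};

// Returns true if the buffer was fully drained, false if backpressure (or a
// hangup) stopped the drain early. Undelivered items stay buffered in order.
fn drain(buf: &mut VecDeque<u64>, tx: &SyncSender<u64>) -> bool {
    while let Some(next) = buf.pop_front() {
        match tx.try_send(next) {
            Ok(()) => {}
            Err(TrySendError::Full(directive)) => {
                // channel is full -- don't lose the data, retry it next pass
                buf.push_front(directive);
                return false;
            }
            Err(TrySendError::Disconnected(_)) => return false,
        }
    }
    true
}

fn main() {
    let (tx, rx) = sync_channel::<u64>(1); // capacity 1 forces backpressure
    let mut buf: VecDeque<u64> = (0..3).collect();
    assert!(!drain(&mut buf, &tx)); // only one item fits
    assert_eq!(rx.recv().unwrap(), 0);
    assert_eq!(buf.len(), 2); // 1 and 2 are still buffered, in order
}
```

Buffering in the thread (rather than enlarging the channel) is what lets the p2p loop *observe* backpressure and throttle its downloader, as the `results_with_data` doc comment explains.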

impl StacksNode {
    /// Create a StacksPrivateKey from a given seed buffer.
    /// If the seed does not parse as a valid key, it is repeatedly re-hashed until it does.
    pub fn make_node_private_key_from_seed(seed: &[u8]) -> StacksPrivateKey {
        let mut re_hashed_seed = seed.to_vec();
        loop {
            match Secp256k1PrivateKey::from_slice(&re_hashed_seed[..]) {
                Ok(sk) => break sk,
                Err(_) => {
                    // not a valid secp256k1 private key; hash the seed and retry
                    re_hashed_seed = Sha256Sum::from_data(&re_hashed_seed[..])
                        .as_bytes()
                        .to_vec()
                }
            }
        }
    }
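The derive-by-rehashing loop above is deterministic: an invalid seed is hashed and re-tried until a valid key falls out, so the same seed always yields the same key. A self-contained sketch of the pattern, with SHA-256 and secp256k1 validation replaced by stdlib stand-ins (`DefaultHasher` and an arbitrary "first byte must be even" rule) purely for illustration:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::Hasher;

// Stand-in for Sha256Sum::from_data(...).as_bytes().to_vec()
fn rehash(seed: &[u8]) -> Vec<u8> {
    let mut h = DefaultHasher::new();
    h.write(seed);
    h.finish().to_le_bytes().to_vec()
}

// Stand-in for Secp256k1PrivateKey::from_slice succeeding (hypothetical rule)
fn is_valid_key(seed: &[u8]) -> bool {
    seed.first().map_or(false, |b| b % 2 == 0)
}

fn key_from_seed(seed: &[u8]) -> Vec<u8> {
    let mut candidate = seed.to_vec();
    loop {
        if is_valid_key(&candidate) {
            return candidate;
        }
        // deterministic retry: re-hash the failed candidate
        candidate = rehash(&candidate);
    }
}

fn main() {
    let key = key_from_seed(&[1, 3, 5]);
    assert!(is_valid_key(&key));
    // same seed always yields the same key
    assert_eq!(key, key_from_seed(&[1, 3, 5]));
}
```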

    /// Set up the mempool DB by making sure it exists.
    /// Panics on failure.
    fn setup_mempool_db(config: &Config) -> MemPoolDB {
        // force early mempool instantiation
        let cost_estimator = config
            .make_cost_estimator()
            .unwrap_or_else(|| Box::new(UnitEstimator));
        let metric = config
            .make_cost_metric()
            .unwrap_or_else(|| Box::new(UnitMetric));

        MemPoolDB::open(
            config.is_mainnet(),
            config.burnchain.chain_id,
            &config.get_chainstate_path_str(),
            cost_estimator,
            metric,
        )
        .expect("BUG: failed to instantiate mempool")
    }

    /// Set up the Peer DB and update any soft state from the config file. This includes:
    ///   * Blacklisted/whitelisted nodes
    ///   * Node keys
    ///   * Bootstrap nodes
    ///
    /// Returns the instantiated `PeerDB`.
    ///
    /// Panics on failure.
    fn setup_peer_db(
        config: &Config,
        burnchain: &Burnchain,
        stackerdb_contract_ids: &[QualifiedContractIdentifier],
    ) -> PeerDB {
        let data_url = UrlString::try_from(config.node.data_url.to_string()).unwrap();
        let initial_neighbors = config.node.bootstrap_node.clone();
        if !initial_neighbors.is_empty() {
            info!(
                "Will bootstrap from peers {}",
                VecDisplay(&initial_neighbors)
            );
        } else {
            warn!("Without a peer to bootstrap from, the node will start mining a new chain");
        }

        let p2p_sock: SocketAddr = config
            .node
            .p2p_bind
            .parse()
            .unwrap_or_else(|_| panic!("Failed to parse socket: {}", &config.node.p2p_bind));
        let p2p_addr: SocketAddr = config
            .node
            .p2p_address
            .parse()
            .unwrap_or_else(|_| panic!("Failed to parse socket: {}", &config.node.p2p_address));
        let node_privkey = Secp256k1PrivateKey::from_seed(&config.node.local_peer_seed);

        let mut peerdb = PeerDB::connect(
            &config.get_peer_db_file_path(),
            true,
            config.burnchain.chain_id,
            burnchain.network_id,
            Some(node_privkey),
            config.connection_options.private_key_lifetime,
            PeerAddress::from_socketaddr(&p2p_addr),
            p2p_sock.port(),
            data_url,
            &[],
            Some(&initial_neighbors),
            stackerdb_contract_ids,
        )
        .map_err(|e| {
            eprintln!("Failed to open {}: {e:?}", &config.get_peer_db_file_path());
            panic!();
        })
        .unwrap();

        // allow all bootstrap nodes
        {
            let tx = peerdb.tx_begin().unwrap();
            for initial_neighbor in initial_neighbors.iter() {
                // update peer in case public key changed
                PeerDB::update_peer(&tx, initial_neighbor).unwrap();
                PeerDB::set_allow_peer(
                    &tx,
                    initial_neighbor.addr.network_id,
                    &initial_neighbor.addr.addrbytes,
                    initial_neighbor.addr.port,
                    -1,
                )
                .unwrap();
            }
            tx.commit().unwrap();
        }

        if !config.node.deny_nodes.is_empty() {
            warn!("Will ignore nodes {:?}", &config.node.deny_nodes);
        }

        // deny all config-denied peers for roughly a year (24 * 365 hours, in seconds)
        {
            let tx = peerdb.tx_begin().unwrap();
            for denied in config.node.deny_nodes.iter() {
                PeerDB::set_deny_peer(
                    &tx,
                    denied.addr.network_id,
                    &denied.addr.addrbytes,
                    denied.addr.port,
                    get_epoch_time_secs() + 24 * 365 * 3600,
                )
                .unwrap();
            }
            tx.commit().unwrap();
        }

        // update services to indicate we can support mempool sync and stackerdb
        {
            let tx = peerdb.tx_begin().unwrap();
            PeerDB::set_local_services(
                &tx,
                (ServiceFlags::RPC as u16)
                    | (ServiceFlags::RELAY as u16)
                    | (ServiceFlags::STACKERDB as u16),
            )
            .unwrap();
            tx.commit().unwrap();
        }

        peerdb
    }

    /// Set up the PeerNetwork, but do not bind it.
    pub(crate) fn setup_peer_network(
        config: &Config,
        atlas_config: &AtlasConfig,
        burnchain: Burnchain,
    ) -> PeerNetwork {
        let sortdb = SortitionDB::open(
            &config.get_burn_db_file_path(),
            true,
            burnchain.pox_constants.clone(),
            Some(config.node.get_marf_opts()),
        )
        .expect("Error while instantiating sortition db");

        let epochs_vec = SortitionDB::get_stacks_epochs(sortdb.conn())
            .expect("Error while loading stacks epochs");
        let epochs = EpochList::new(&epochs_vec);

        let view = {
            let sortition_tip = SortitionDB::get_canonical_burn_chain_tip(sortdb.conn())
                .expect("Failed to get sortition tip");
            SortitionDB::get_burnchain_view(&sortdb.index_conn(), &burnchain, &sortition_tip)
                .unwrap()
        };

        let atlasdb =
            AtlasDB::connect(atlas_config.clone(), &config.get_atlas_db_file_path(), true).unwrap();

        let mut chainstate =
            open_chainstate_with_faults(config).expect("FATAL: could not open chainstate DB");

        let mut stackerdb_machines = HashMap::new();
        let mut stackerdbs = StackerDBs::connect(&config.get_stacker_db_file_path(), true).unwrap();

        let mut stackerdb_configs = HashMap::new();
        for contract in config.node.stacker_dbs.iter() {
            stackerdb_configs.insert(contract.clone(), StackerDBConfig::noop());
        }
        let stackerdb_configs = stackerdbs
            .create_or_reconfigure_stackerdbs(
                &mut chainstate,
                &sortdb,
                stackerdb_configs,
                &config.connection_options,
            )
            .unwrap();

        let stackerdb_contract_ids: Vec<QualifiedContractIdentifier> =
            stackerdb_configs.keys().cloned().collect();
        for (contract_id, stackerdb_config) in stackerdb_configs {
            let stackerdbs = StackerDBs::connect(&config.get_stacker_db_file_path(), true).unwrap();
            let stacker_db_sync = StackerDBSync::new(
                contract_id.clone(),
                &stackerdb_config,
                PeerNetworkComms::new(),
                stackerdbs,
            );
            stackerdb_machines.insert(contract_id, (stackerdb_config, stacker_db_sync));
        }
        let peerdb = Self::setup_peer_db(config, &burnchain, &stackerdb_contract_ids);
        let burnchain_db = burnchain
            .open_burnchain_db(false)
            .expect("Failed to open burnchain DB");

        let local_peer = match PeerDB::get_local_peer(peerdb.conn()) {
            Ok(local_peer) => local_peer,
            _ => panic!("Unable to retrieve local peer"),
        };

        PeerNetwork::new(
            peerdb,
            atlasdb,
            stackerdbs,
            burnchain_db,
            local_peer,
            config.burnchain.peer_version,
            burnchain,
            view,
            config.connection_options.clone(),
            stackerdb_machines,
            epochs,
        )
    }

    /// Main loop of the relayer.
    /// Runs in a separate thread.
    /// Continuously receives and handles directives from the p2p thread until told to stop.
    pub fn relayer_main(mut relayer_thread: RelayerThread, relay_recv: Receiver<RelayerDirective>) {
        while let Ok(directive) = relay_recv.recv() {
            if !relayer_thread.globals.keep_running() {
                break;
            }

            if !relayer_thread.handle_directive(directive) {
                break;
            }
        }

        // kill miner if it's running
        signal_mining_blocked(relayer_thread.globals.get_miner_status());

        // set termination flag so other threads die
        relayer_thread.globals.signal_stop();

        debug!("Relayer exit!");
    }
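The shape of `relayer_main` — drain a channel of directives until the sender hangs up, a global stop flag flips, or handling a directive says to stop, then signal the rest of the system — can be sketched with stdlib primitives. Strings and an `AtomicBool` stand in for `RelayerDirective` and `Globals`.

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::mpsc::{channel, Receiver};
use std::sync::Arc;
use std::thread;

// Stand-in main loop: count handled directives, stop on "exit" or hangup,
// then flip the shared flag (mirroring globals.signal_stop()).
fn relayer_main(rx: Receiver<String>, keep_running: Arc<AtomicBool>) -> usize {
    let mut handled = 0;
    while let Ok(directive) = rx.recv() {
        if !keep_running.load(Ordering::SeqCst) {
            break;
        }
        if directive == "exit" {
            break; // stand-in for handle_directive() returning false
        }
        handled += 1;
    }
    keep_running.store(false, Ordering::SeqCst);
    handled
}

fn main() {
    let (tx, rx) = channel();
    let keep_running = Arc::new(AtomicBool::new(true));
    let flag = keep_running.clone();
    let handle = thread::spawn(move || relayer_main(rx, flag));
    for d in ["net-result", "net-result", "exit"] {
        tx.send(d.to_string()).unwrap();
    }
    assert_eq!(handle.join().unwrap(), 2); // two directives handled before exit
    assert!(!keep_running.load(Ordering::SeqCst)); // stop was signaled
}
```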

4893
    /// Main loop of the p2p thread.
4894
    /// Runs in a separate thread.
4895
    /// Continuously receives, until told otherwise.
4896
    pub fn p2p_main(
272✔
4897
        mut p2p_thread: PeerThread,
272✔
4898
        event_dispatcher: EventDispatcher,
272✔
4899
    ) -> Option<PeerNetwork> {
272✔
4900
        let should_keep_running = p2p_thread.globals.should_keep_running.clone();
272✔
4901
        let (mut dns_resolver, mut dns_client) = DNSResolver::new(10);
272✔
4902

4903
        // spawn a daemon thread that runs the DNS resolver.
4904
        // It will die when the rest of the system dies.
4905
        {
4906
            let _jh = thread::Builder::new()
272✔
4907
                .name("dns-resolver".to_string())
272✔
4908
                .spawn(move || {
272✔
4909
                    debug!("DNS resolver thread ID is {:?}", thread::current().id());
272✔
4910
                    dns_resolver.thread_main();
272✔
4911
                })
272✔
4912
                .unwrap();
272✔
4913
        }
4914

4915
        // NOTE: these must be instantiated in the thread context, since it can't be safely sent
4916
        // between threads
4917
        let fee_estimator_opt = p2p_thread.config.make_fee_estimator();
272✔
4918
        let cost_estimator = p2p_thread
272✔
4919
            .config
272✔
4920
            .make_cost_estimator()
272✔
4921
            .unwrap_or_else(|| Box::new(UnitEstimator));
272✔
4922
        let cost_metric = p2p_thread
272✔
4923
            .config
272✔
4924
            .make_cost_metric()
272✔
4925
            .unwrap_or_else(|| Box::new(UnitMetric));
272✔
4926

4927
        let indexer = make_bitcoin_indexer(&p2p_thread.config, Some(should_keep_running));
272✔
4928

4929
        // receive until we can't reach the receiver thread
4930
        loop {
4931
            if !p2p_thread.globals.keep_running() {
872,760✔
4932
                break;
146✔
4933
            }
872,614✔
4934
            if !p2p_thread.run_one_pass(
872,614✔
4935
                &indexer,
872,614✔
4936
                Some(&mut dns_client),
872,614✔
4937
                &event_dispatcher,
872,614✔
4938
                &cost_estimator,
872,614✔
4939
                &cost_metric,
872,614✔
4940
                fee_estimator_opt.as_ref(),
872,614✔
4941
            ) {
872,614✔
4942
                break;
126✔
4943
            }
872,488✔
4944
        }
4945

4946
        // kill miner
4947
        signal_mining_blocked(p2p_thread.globals.get_miner_status());
272✔
4948

4949
        // set termination flag so other threads die
4950
        p2p_thread.globals.signal_stop();
272✔
4951

4952
        // thread exited, so signal to the relayer thread to die.
4953
        while let Err(TrySendError::Full(_)) = p2p_thread
235✔
4954
            .globals
235✔
4955
            .relay_send
235✔
4956
            .try_send(RelayerDirective::Exit)
235✔
4957
        {
4958
            warn!("Failed to direct relayer thread to exit, sleeping and trying again");
6✔
4959
            thread::sleep(Duration::from_secs(5));
6✔
4960
        }
4961
        info!("P2P thread exit!");
272✔
4962
        p2p_thread.net
272✔
4963
    }
272✔
4964

4965
    /// This function sets the global var `GLOBAL_BURNCHAIN_SIGNER`.
4966
    ///
4967
    /// This variable is used for prometheus monitoring (which only
4968
    /// runs when the feature flag `monitoring_prom` is activated).
4969
    /// The address is set using the single-signature BTC address
4970
    /// associated with `keychain`'s public key. This address always
4971
    /// assumes Epoch-2.1 rules for the miner address: if the
4972
    /// node is configured for segwit, then the miner address generated
4973
    /// is a segwit address, otherwise it is a p2pkh.
4974
    ///
4975
    fn set_monitoring_miner_address(keychain: &Keychain, relayer_thread: &RelayerThread) {
272✔
4976
        let public_key = keychain.get_pub_key();
272✔
4977
        let miner_addr = relayer_thread
272✔
4978
            .bitcoin_controller
272✔
4979
            .get_miner_address(StacksEpochId::Epoch21, &public_key);
272✔
4980
        let miner_addr_str = miner_addr.to_string();
272✔
4981
        let _ = monitoring::set_burnchain_signer(BurnchainSigner(miner_addr_str)).map_err(|e| {
272✔
4982
            warn!("Failed to set global burnchain signer: {e:?}");
49✔
4983
            e
49✔
4984
        });
49✔
4985
    }
272✔
4986

4987
    pub fn spawn(
272✔
4988
        runloop: &RunLoop,
272✔
4989
        globals: Globals,
272✔
4990
        // relay receiver endpoint for the p2p thread, so the relayer can feed it data to push
272✔
4991
        relay_recv: Receiver<RelayerDirective>,
272✔
4992
    ) -> StacksNode {
272✔
4993
        let config = runloop.config().clone();
272✔
4994
        let is_miner = runloop.is_miner();
272✔
4995
        let burnchain = runloop.get_burnchain();
        let atlas_config = config.atlas.clone();
        let keychain = Keychain::default(config.node.seed.clone());

        let _ = Self::setup_mempool_db(&config);

        let mut p2p_net = Self::setup_peer_network(&config, &atlas_config, burnchain);

        let stackerdbs = StackerDBs::connect(&config.get_stacker_db_file_path(), true)
            .expect("FATAL: failed to connect to stacker DB");

        let relayer = Relayer::from_p2p(&mut p2p_net, stackerdbs);

        let local_peer = p2p_net.local_peer.clone();

        let NodeConfig {
            mock_mining, miner, ..
        } = config.get_node_config(false);

        // setup initial key registration
        let leader_key_registration_state = if mock_mining {
            // mock mining, pretend to have a registered key
            let (vrf_public_key, _) = keychain.make_vrf_keypair(VRF_MOCK_MINER_KEY);
            LeaderKeyRegistrationState::Active(RegisteredKey {
                target_block_height: VRF_MOCK_MINER_KEY,
                block_height: 1,
                op_vtxindex: 1,
                vrf_public_key,
                memo: vec![],
            })
        } else {
            // Warn the user that they need to set up a miner key
            if miner && config.miner.mining_key.is_none() {
                warn!("`[miner.mining_key]` not set in config file. This will be required to mine in Epoch 3.0!")
            }
            LeaderKeyRegistrationState::Inactive
        };
        globals.set_initial_leader_key_registration_state(leader_key_registration_state);

        let relayer_thread = RelayerThread::new(runloop, local_peer.clone(), relayer);

        StacksNode::set_monitoring_miner_address(&keychain, &relayer_thread);

        let relayer_thread_handle = thread::Builder::new()
            .name(format!("relayer-{}", &local_peer.data_url))
            .stack_size(BLOCK_PROCESSOR_STACK_SIZE)
            .spawn(move || {
                debug!("relayer thread ID is {:?}", thread::current().id());
                Self::relayer_main(relayer_thread, relay_recv);
            })
            .expect("FATAL: failed to start relayer thread");

        let p2p_event_dispatcher = runloop.get_event_dispatcher();
        let p2p_thread = PeerThread::new(runloop, p2p_net);
        let p2p_thread_handle = thread::Builder::new()
            .stack_size(BLOCK_PROCESSOR_STACK_SIZE)
            .name(format!(
                "p2p-({},{})",
                &config.node.p2p_bind, &config.node.rpc_bind
            ))
            .spawn(move || {
                debug!("p2p thread ID is {:?}", thread::current().id());
                Self::p2p_main(p2p_thread, p2p_event_dispatcher)
            })
            .expect("FATAL: failed to start p2p thread");

        info!("Start HTTP server on: {}", &config.node.rpc_bind);
        info!("Start P2P server on: {}", &config.node.p2p_bind);

        StacksNode {
            atlas_config,
            globals,
            is_miner,
            p2p_thread_handle,
            relayer_thread_handle,
        }
    }

    /// Manage the VRF public key registration state machine.
    /// Tell the relayer thread to fire off a tenure and a block commit op,
    /// if it is time to do so.
    /// `ibd` indicates whether or not we're in the initial block download.  Used to control when
    /// to try and register VRF keys.
    /// Called from the main thread.
    /// Return true if we succeeded in carrying out the next task of the operation.
    pub fn relayer_issue_tenure(&mut self, ibd: bool) -> bool {
        if !self.is_miner {
            // node is a follower, don't try to issue a tenure
            return true;
        }

        if let Some(burnchain_tip) = self.globals.get_last_sortition() {
            if !ibd {
                // try and register a VRF key before issuing a tenure
                let leader_key_registration_state =
                    self.globals.get_leader_key_registration_state();
                match leader_key_registration_state {
                    LeaderKeyRegistrationState::Active(ref key) => {
                        debug!(
                            "Tenure: Using key {:?} off of {}",
                            &key.vrf_public_key, &burnchain_tip.burn_header_hash
                        );

                        self.globals
                            .relay_send
                            .send(RelayerDirective::RunTenure(
                                key.clone(),
                                burnchain_tip,
                                get_epoch_time_ms(),
                            ))
                            .is_ok()
                    }
                    LeaderKeyRegistrationState::Inactive => {
                        warn!(
                            "Tenure: skipped tenure because no active VRF key. Trying to register one."
                        );
                        self.globals
                            .relay_send
                            .send(RelayerDirective::RegisterKey(burnchain_tip))
                            .is_ok()
                    }
                    LeaderKeyRegistrationState::Pending(..) => true,
                }
            } else {
                // still sync'ing so just try again later
                true
            }
        } else {
            warn!("Tenure: Do not know the last burn block. As a miner, this is bad.");
            true
        }
    }

    /// Notify the relayer of a sortition, telling it to process the block
    ///  and advertise it if it was mined by the node.
    /// returns _false_ if the relayer hung up the channel.
    /// Called from the main thread.
    pub fn relayer_sortition_notify(&self) -> bool {
        if !self.is_miner {
            // node is a follower, don't try to process my own tenure.
            return true;
        }

        if let Some(snapshot) = self.globals.get_last_sortition() {
            debug!(
                "Tenure: Notify sortition!";
                "consensus_hash" => %snapshot.consensus_hash,
                "burn_block_hash" => %snapshot.burn_header_hash,
                "winning_stacks_block_hash" => %snapshot.winning_stacks_block_hash,
                "burn_block_height" => &snapshot.block_height,
                "sortition_id" => %snapshot.sortition_id
            );
            if snapshot.sortition {
                return self
                    .globals
                    .relay_send
                    .send(RelayerDirective::ProcessTenure(
                        snapshot.consensus_hash,
                        snapshot.parent_burn_header_hash,
                        snapshot.winning_stacks_block_hash,
                    ))
                    .is_ok();
            }
        } else {
            debug!("Tenure: Notify sortition! No last burn block");
        }
        true
    }

    /// Process a state coming from the burnchain, by extracting the validated KeyRegisterOp
    /// and inspecting if a sortition was won.
    /// `ibd`: boolean indicating whether or not we are in the initial block download
    /// Called from the main thread.
    pub fn process_burnchain_state(
        &mut self,
        config: &Config,
        sortdb: &SortitionDB,
        sort_id: &SortitionId,
        ibd: bool,
    ) -> Option<BlockSnapshot> {
        let mut last_sortitioned_block = None;

        let ic = sortdb.index_conn();

        let block_snapshot = SortitionDB::get_block_snapshot(&ic, sort_id)
            .expect("Failed to obtain block snapshot for processed burn block.")
            .expect("Failed to obtain block snapshot for processed burn block.");
        let block_height = block_snapshot.block_height;

        let block_commits =
            SortitionDB::get_block_commits_by_block(&ic, &block_snapshot.sortition_id)
                .expect("Unexpected SortitionDB error fetching block commits");

        let num_block_commits = block_commits.len();

        update_active_miners_count_gauge(block_commits.len() as i64);

        for op in block_commits.into_iter() {
            if op.txid == block_snapshot.winning_block_txid {
                info!(
                    "Received burnchain block #{block_height} including block_commit_op (winning) - {} ({})",
                    op.apparent_sender, &op.block_header_hash
                );
                last_sortitioned_block = Some((block_snapshot.clone(), op.vtxindex));
            } else if self.is_miner {
                info!(
                    "Received burnchain block #{block_height} including block_commit_op - {} ({})",
                    op.apparent_sender, &op.block_header_hash
                );
            }
        }

        let key_registers =
            SortitionDB::get_leader_keys_by_block(&ic, &block_snapshot.sortition_id)
                .expect("Unexpected SortitionDB error fetching key registers");

        self.globals.set_last_sortition(block_snapshot);
        let ret = last_sortitioned_block.map(|x| x.0);

        let num_key_registers = key_registers.len();
        debug!(
            "Processed burnchain state at height {block_height}: {num_key_registers} leader keys, {num_block_commits} block-commits (ibd = {ibd})"
        );

        // save the registered VRF key
        let activated_key_opt = self
            .globals
            .try_activate_leader_key_registration(block_height, key_registers);

        let Some(activated_key) = activated_key_opt else {
            return ret;
        };

        let Some(path) = config.miner.activated_vrf_key_path.as_ref() else {
            return ret;
        };

        info!("Activated VRF key; saving to {path}");

        let Ok(key_json) = serde_json::to_string(&activated_key) else {
            warn!("Failed to serialize VRF key");
            return ret;
        };

        let mut f = match fs::File::create(path) {
            Ok(f) => f,
            Err(e) => {
                warn!("Failed to create {path}: {e:?}");
                return ret;
            }
        };

        if let Err(e) = f.write_all(key_json.as_bytes()) {
            warn!("Failed to write activated VRF key to {path}: {e:?}");
            return ret;
        }

        info!("Saved activated VRF key to {path}");
        ret
    }

    /// Join all inner threads
    pub fn join(self) -> Option<PeerNetwork> {
        self.relayer_thread_handle.join().unwrap();
        self.p2p_thread_handle.join().unwrap()
    }
}