
nats-io / nats-streaming-server
Coverage: 92% (main: 92%)

LAST BUILD BRANCH: add-eol-note
DEFAULT BRANCH: main
Repo added: 27 Apr 2017 11:06PM UTC
Files: 24
README badges

If you need a raster PNG badge, change '.svg' to '.png' in the link.

Formats: Markdown, Textile, RDoc, HTML, RST
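For example, the Markdown variant for this repo would presumably follow Coveralls' standard badge URL scheme (the exact snippet is not shown on this page, so treat the URL pattern as an assumption):

    [![Coverage Status](https://coveralls.io/repos/github/nats-io/nats-streaming-server/badge.svg?branch=main)](https://coveralls.io/github/nats-io/nats-streaming-server?branch=main)

Swapping badge.svg for badge.png would give the raster version mentioned above.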

LAST BUILD ON BRANCH: add_flag_to_proceed_on_restore_failure

16 Jan 2020 10:15PM UTC coverage: 91.998% (+0.006%) from 91.992%
Build 2170

Pull #1012
Via: travis-ci
Committer: web-flow
[FIXED] Cluster: add flag to allow node to proceed on restore failure

When a node restores from a snapshot, the snapshot information may
show that a channel should have at least, say, messages 1 to 100. If
the streaming store does not have these sequences, the node is
supposed to reconcile with the leader. However, if all nodes are
restarted and all have this issue, the cluster cannot be restarted
at all. The new `cluster_proceed_on_restore_failure` option allows
such node(s) to start.

Relates to #1010

Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
Pull Request #1012: [FIXED] Cluster: add flag to allow node to proceed on restore failure
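A minimal sketch of how the new option might be used when restarting such a cluster, assuming it is exposed as a command-line flag of the same name alongside the server's existing clustering flags:

    # Assumed flag name, taken from the commit message above: let a
    # clustered node start even if the snapshot restore finds message
    # sequences missing from its streaming store.
    nats-streaming-server -store file -dir datastore \
      -clustered -cluster_node_id a -cluster_peers b,c \
      -cluster_proceed_on_restore_failure

Since skipping reconciliation can leave a node without messages the snapshot says it should have, this reads as an opt-in escape hatch for the restart deadlock described above rather than a default.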

20 of 20 new or added lines in 2 files covered. (100.0%)

12118 of 13172 relevant lines covered (92.0%)
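That figure is simply covered lines divided by relevant lines: 12118 / 13172 ≈ 0.91998, i.e. the 91.998% reported in the build header, rounded here to 92.0%.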

55926.71 hits per line

Source Files on add_flag_to_proceed_on_restore_failure: Changed 21 · Source Changed 3 · Coverage Changed 21

Recent builds

Build · Branch · Commit · Type · Ran · Committer · Via · Coverage
2170 · add_flag_to_proceed_on_restore_failure · [FIXED] Cluster: add flag to allow node to proceed on restore failure · Pull #1012 · 16 Jan 2020 10:30PM UTC · web-flow · travis-ci · 92.0
2169 · add_flag_to_proceed_on_restore_failure · [FIXED] Cluster: add flag to allow node to proceed on restore failure · push · 16 Jan 2020 10:22PM UTC · kozlovic · travis-ci · 91.99
See All Builds (1649)
  • Repo on GitHub