
Cycles management

Canisters on ICP pay for compute and storage using cycles. Cycles are paid by the canister, not the caller: developers fund their own canisters, and users interact for free. See Cycles for a full explanation of the billing model.

This guide covers everything you need to manage cycles in production: acquiring them, monitoring balances, setting thresholds, and deploying to mainnet.

Local vs mainnet cycles

Local development uses fabricated cycles: canisters on a local network start with a large balance and never actually run out. Code that works locally can fail on mainnet if the canister is underfunded. Always test with realistic cycle amounts before deploying.

Acquiring cycles

To run canisters on mainnet you need ICP tokens, which you convert to cycles via the cycles minting canister (CMC).

Step 1: Create a mainnet identity

Terminal window
icp identity new mainnet-deployer
icp identity default mainnet-deployer
icp identity principal
# Output: xxxxx-xxxxx-xxxxx-xxxxx-xxx

Save your seed phrase: it is shown only once. Without it, you permanently lose access to the identity and any funds it controls.

Step 2: Get ICP tokens

Purchase ICP on an exchange. When withdrawing, use your principal as the destination address (or icp identity account-id if the exchange requires an account identifier).

Verify arrival:

Terminal window
icp token balance -n ic

Step 3: Convert ICP to cycles

Terminal window
# Convert 5 ICP to cycles
icp cycles mint --icp 5 -n ic
# Or request a specific cycle amount (ICP is calculated automatically)
icp cycles mint --cycles 5T -n ic
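The conversion behind `icp cycles mint` can be sketched as a rough model: the CMC mints cycles at a rate of 1 trillion cycles per XDR worth of ICP. The `xdr_per_icp` rate below is an illustrative assumption, not a live exchange rate:

```python
def icp_to_cycles(icp: float, xdr_per_icp: float) -> int:
    """Estimate cycles minted from ICP: 1 XDR worth of ICP mints 1T cycles."""
    CYCLES_PER_XDR = 1_000_000_000_000  # 1T cycles per XDR
    return int(icp * xdr_per_icp * CYCLES_PER_XDR)

# At an illustrative rate of 1 XDR per ICP, 5 ICP mints ~5T cycles,
# matching the example output below.
print(icp_to_cycles(5, 1.0))  # 5_000_000_000_000
```

The actual amount minted depends on the ICP/XDR rate at the time of conversion.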

Verify your cycles balance:

Terminal window
icp cycles balance -n ic
# Output: ~5T cycles

Budget guidance: Plan for 1–2T cycles per canister as a starting balance. A simple backend canister with moderate traffic costs roughly 0.1–0.5T cycles per month, though this varies with storage and call volume. See the cycles costs reference for per-operation pricing.
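The budget guidance above translates into a simple runway estimate. The burn rate below is taken from the 0.1–0.5T/month range quoted above; treat it as an illustration, not a measured figure:

```python
TRILLION = 1_000_000_000_000

def runway_months(balance_cycles: int, monthly_burn_cycles: int) -> float:
    """How many months a canister can run at a steady monthly burn rate."""
    return balance_cycles / monthly_burn_cycles

# A canister funded with 2T cycles burning 0.25T/month lasts about 8 months.
print(runway_months(2 * TRILLION, int(0.25 * TRILLION)))  # 8.0
```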

Checking canister cycle balances

Only controllers can view a canister’s cycle balance via icp canister status.

Via icp-cli

Terminal window
# Check a canister in your project
icp canister status backend -e ic
# Check any canister by ID
icp canister status ryjl3-tyaaa-aaaaa-aaaba-cai -n ic

Example output:

Status: Running
Controllers: xxxxx-xxxxx-xxxxx-xxxxx-xxx
Memory allocation: 0
Compute allocation: 0
Freezing threshold: 2_592_000
Balance: 9_811_813_913_485 Cycles

The Balance line shows the current cycle balance. The Freezing threshold shows how many seconds' worth of idle runtime the canister must be able to pay for before it freezes (see Freezing threshold below).

Programmatically

Canisters can check their own balance at runtime:

import Cycles "mo:core/Cycles";

persistent actor {
  public query func getBalance() : async Nat {
    Cycles.balance()
  };
}

Topping up canisters

Anyone can top up any canister: you do not need to be its controller.

Terminal window
# Top up by canister name (in your project environment)
icp canister top-up backend --amount 1T -e ic
# Top up by canister ID (no project context required)
icp canister top-up --amount 1T ryjl3-tyaaa-aaaaa-aaaba-cai -n ic

Amounts use human-readable suffixes: T = trillion, b = billion, m = million, k = thousand.
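The suffix convention can be mirrored in a small parser. This is a hypothetical helper for scripting around the CLI; the CLI's own parsing rules (e.g. case sensitivity, decimals) may differ:

```python
SUFFIXES = {"T": 10**12, "b": 10**9, "m": 10**6, "k": 10**3}

def parse_amount(s: str) -> int:
    """Parse a human-readable cycle amount like '1T' or '500b' into cycles."""
    if s and s[-1] in SUFFIXES:
        return int(float(s[:-1]) * SUFFIXES[s[-1]])
    return int(s)

print(parse_amount("1T"))    # 1_000_000_000_000
print(parse_amount("500b"))  # 500_000_000_000
```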

To convert ICP and top up in sequence:

Terminal window
icp cycles mint --icp 1.0 -n ic
icp canister top-up backend --amount 1T -e ic

Accepting cycles in your canister

Canisters can also accept cycles sent with an inter-canister call. This pattern is used for “tip jar” flows and payment routing:

import Cycles "mo:core/Cycles";
import Runtime "mo:core/Runtime";

persistent actor {
  public func deposit() : async Nat {
    let available = Cycles.available();
    if (available == 0) {
      Runtime.trap("No cycles sent with this call")
    };
    Cycles.accept<system>(available)
  };
}

Freezing threshold

The freezing threshold is a canister setting that defines how long (in seconds) a canister can survive on its current balance while idle. When the canister’s balance would fall below the estimated cost of running for that many seconds, the canister is frozen: it stops processing update calls but still serves query calls.

The default is 2,592,000 seconds (30 days). Increase it for production canisters or those with large stable memory:

Terminal window
# Set freezing threshold to 90 days (7,776,000 seconds)
icp canister settings update backend --freezing-threshold 7776000 -e ic
# Or use icp.yaml to apply it per-environment

In icp.yaml:

environments:
  - name: production
    network: ic
    canisters: [backend]
    settings:
      backend:
        freezing_threshold: 90d

See Canister settings for all available settings and their syntax.
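The day-based shorthand (30d, 90d) maps directly to the second values used on the CLI:

```python
def days_to_seconds(days: int) -> int:
    """Convert a day count to the seconds value the freezing threshold uses."""
    return days * 24 * 60 * 60

print(days_to_seconds(30))  # 2_592_000 (the default threshold)
print(days_to_seconds(90))  # 7_776_000 (a common production value)
```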

When a canister is frozen:

  • Update calls return an error immediately
  • Query calls still succeed (read-only)
  • The canister is not deleted yet: top it up to unfreeze

When a frozen canister runs out of cycles entirely:

  • The canister is deleted along with all its state
  • This is irreversible
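The freeze rule can be sketched as a balance check: a canister freezes once its balance can no longer cover the idle cost of running for the threshold duration. The idle burn rate below is an illustrative placeholder; real rates depend on memory usage and the subnet:

```python
def is_frozen(balance: int, idle_cycles_per_second: int, freezing_threshold_s: int) -> bool:
    """Frozen when the balance cannot cover `freezing_threshold_s` of idle burn."""
    return balance < idle_cycles_per_second * freezing_threshold_s

# At 100k cycles/s idle burn and the 30-day (2_592_000 s) default threshold,
# a canister needs at least 259.2b cycles to stay unfrozen.
print(is_frozen(300_000_000_000, 100_000, 2_592_000))  # False
print(is_frozen(200_000_000_000, 100_000, 2_592_000))  # True
```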

Creating and funding canisters programmatically

You can also set the freezing threshold when creating canisters from canister code, and top up other canisters programmatically:

import Principal "mo:core/Principal";

persistent actor Self {
  type CreateCanisterSettings = {
    controllers : ?[Principal];
    compute_allocation : ?Nat;
    memory_allocation : ?Nat;
    freezing_threshold : ?Nat;
  };

  type CanisterId = { canister_id : Principal };

  let ic = actor ("aaaaa-aa") : actor {
    create_canister : shared { settings : ?CreateCanisterSettings } -> async CanisterId;
    deposit_cycles : shared { canister_id : Principal } -> async ();
  };

  // Create a new canister with 1T cycles and a 30-day freezing threshold
  public func createWithThreshold() : async Principal {
    let result = await (with cycles = 1_000_000_000_000) ic.create_canister({
      settings = ?{
        controllers = ?[Principal.fromActor(Self)];
        compute_allocation = null;
        memory_allocation = null;
        freezing_threshold = ?2_592_000; // 30 days
      };
    });
    result.canister_id
  };

  // Top up another canister programmatically
  public func topUp(canisterId : Principal, amount : Nat) : async () {
    await (with cycles = amount) ic.deposit_cycles({ canister_id = canisterId });
  };
}

Calling canisters that require cycles

Some canister methods expect cycles to be attached to the call itself. The cycles ledger cannot forward calls with cycles attached, so you need a different approach depending on whether you are calling from canister code or from the CLI.

From canister code

Attach cycles to an inter-canister call using Cycles.add (Motoko) or msg_cycles_add (Rust). The called canister receives the cycles as part of the message context and accepts them with Cycles.accept:

import Cycles "mo:core/Cycles";

persistent actor {
  let target = actor ("rrkah-fqaaa-aaaaa-aaaaq-cai") : actor {
    someMethod : () -> async ();
  };

  public func callWithCycles() : async () {
    Cycles.add<system>(500_000_000);
    await target.someMethod();
  };
}

The calling canister uses cycles from its own balance, not from the cycles ledger. Top up the calling canister first using icp canister top-up or icp cycles mint.

From the CLI or an agent

The CLI cannot attach cycles directly to a canister call. Two approaches address this:

Top up the target canister first (preferred when you control it): transfer cycles to the target canister using icp canister top-up, then call the method normally. The canister uses its own balance when the method runs.

Terminal window
# Transfer 1T cycles to the target canister
icp canister top-up rrkah-fqaaa-aaaaa-aaaaq-cai --amount 1T -n ic
# Then call the method as normal
icp canister call rrkah-fqaaa-aaaaa-aaaaq-cai someMethod '()' -n ic

Proxy canister (required when you need to attach cycles to a call and don’t control the target): deploy a proxy canister that can forward calls with cycles attached.

Terminal window
# Deploy the proxy canister using the provided template
icp new proxy --template proxy
cd proxy
icp deploy -e ic
# Get the proxy canister ID
export PROXY_ID=$(icp canister status -e ic --id-only proxy)
# Call any canister through the proxy with cycles attached
icp canister call --proxy "$PROXY_ID" rrkah-fqaaa-aaaaa-aaaaq-cai someMethod '()' -n ic

The proxy canister template is available at icp-cli-templates/proxy. It deploys the proxy-canister, which is automatically provisioned on local networks but must be deployed manually on mainnet.

Multi-environment deployment

For production, use separate environments for staging and production to avoid accidentally affecting live canisters. Configure environments in icp.yaml:

environments:
  - name: staging
    network: ic
    canisters: [frontend, backend]
    settings:
      backend:
        freezing_threshold: 30d
        environment_variables:
          LOG_LEVEL: "debug"
  - name: production
    network: ic
    canisters: [frontend, backend]
    settings:
      backend:
        freezing_threshold: 90d
        environment_variables:
          LOG_LEVEL: "error"

Deploy to each environment independently:

Terminal window
# Deploy to staging first
icp deploy -e staging
# Verify, then deploy to production
icp deploy -e production

Each environment maintains separate canister IDs. Mainnet IDs are stored in .icp/data/mappings/<environment>.ids.json and should be committed to version control. See Managing environments for full configuration options.

Production deployment checklist

Before deploying to mainnet, verify each of the following:

  • Fund canisters: Top up all canisters with at least 2–5T cycles each before deploying
  • Set a freezing threshold: Use 90 days (7,776,000 seconds) or more for production
  • Add a backup controller: Without a backup, losing your identity means losing the canister permanently:
    Terminal window
    icp canister settings update backend --add-controller BACKUP_PRINCIPAL -e ic
  • Verify cycle balance after deploy: Check immediately after icp deploy -e ic:
    Terminal window
    icp canister status backend -e ic
  • Enable reproducible builds: See Reproducible builds to ensure your WASM is verifiable
  • Review canister settings: See Canister settings for memory allocation, compute allocation, and access controls
  • Review security: See Canister upgrades security for safe upgrade patterns

Monitoring cycle balances

There is no built-in alerting for low balances: monitoring is your responsibility. Options:

Manual monitoring: Check regularly via icp-cli:

Terminal window
# Check all canisters in an environment at once
icp canister status -e ic
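For ad-hoc scripting, the status output can be parsed for the balance line. This is a minimal sketch, assuming the output format shown earlier in this guide; pipe the real command output into it, and treat the 2T alert threshold as an illustrative default:

```python
import re

def parse_balance(status_output: str) -> int:
    """Extract the cycle balance from `icp canister status` output."""
    m = re.search(r"Balance:\s*([\d_]+)\s*Cycles", status_output)
    if m is None:
        raise ValueError("no Balance line found")
    return int(m.group(1).replace("_", ""))

def needs_top_up(balance: int, threshold: int = 2 * 10**12) -> bool:
    """Alert when the balance falls below the chosen threshold (default 2T)."""
    return balance < threshold

sample = "Status: Running\nBalance: 9_811_813_913_485 Cycles"
balance = parse_balance(sample)
print(balance, needs_top_up(balance))  # 9811813913485 False
```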

Automated monitoring services: Third-party services can monitor balances and alert or auto-top-up:

  • CycleOps: Onchain monitoring with automated top-ups and email notifications
  • Canistergeek: Cycles, memory, and log monitoring in one place

Automated top-up libraries:

  • Rust: canfund: DFINITY-maintained library for automated canister funding
  • Motoko: cycles-manager: Permissioned multi-canister cycles management

Common mistakes

Sending cycles to the wrong canister: Cycles transferred to the wrong principal cannot be recovered. Double-check canister IDs before topping up.

Using the wrong flag (-n vs -e): Use -e ic for canister operations by name; use -n ic for token/cycles operations and canister IDs:

Terminal window
# Correct
icp canister top-up backend --amount 1T -e ic
icp cycles balance -n ic
# Incorrect (fails: canister name requires -e)
icp canister top-up backend --amount 1T -n ic

Forgetting to add a backup controller: Your identity is the only controller by default. If you lose access to it, the canister cannot be managed, upgraded, or deleted.

Confusing local and mainnet cycles: Local deployments use fabricated cycles and never freeze. Test with realistic amounts on a staging environment before going to production.

Using ExperimentalCycles in Motoko: In mo:core, the module is Cycles, not ExperimentalCycles. import ExperimentalCycles "mo:base/ExperimentalCycles" will fail with mo:core. Use import Cycles "mo:core/Cycles".

Next steps