Hardware failures happen when you least expect them. Last week, I experienced this firsthand when my primary Proxmox server, after running smoothly for a year or two, started crashing intermittently with a green screen of death. While I still had access to the system, it was time to migrate my virtual machines to my backup Proxmox server.
The two servers are not part of a cluster, which meant I couldn't use the standard cluster migration features. Instead, I found a newer, less documented feature that allows VM migration between standalone hosts: the qm remote-migrate command.
The Challenge
When dealing with standalone Proxmox servers, migrating VMs isn’t as straightforward as clicking a button in the web interface. You need to:
- Establish secure communication between the servers
- Ensure storage and network compatibility
- Execute the migration with proper authentication
Prerequisites
Before starting the migration process, ensure you have:
- Administrative access to both Proxmox servers
- Network connectivity between the servers
- Compatible storage types (or a plan to convert them)
- Sufficient resources on the target server
Step 1: Create an API Token
The first step is establishing secure authentication between your servers. On your target Proxmox server (where you want to migrate the VMs), create an API token:
- Navigate to Datacenter > Permissions > API Tokens
- Click Add to create a new token
- Set the User to root@pam
- Give it a meaningful Token ID (e.g., migrate)
- Important: Uncheck “Privilege Separation” to ensure the token has full permissions
- Copy the generated token secret; you won’t see it again
The token format will look like: PVEAPIToken=root@pam!migrate=your-secret-token-here
I did not investigate which specific roles are required and simply gave the token full access. In a more critical environment, this deserves further investigation.
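If you prefer the command line, the same token can be created on the target server with pveum. This is a minimal sketch; the token ID migrate matches the example above:
# Create an API token for root@pam without privilege separation (full permissions)
pveum user token add root@pam migrate --privsep 0
# The secret is printed once in the command output – store it safely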
Step 2: Get the SSL Certificate Fingerprint
For secure communication, you need the SSL certificate fingerprint of your target server. On the target server, run:
pvenode cert info
Look for the fingerprint of the pve-ssl.pem file in the output.
┌─────────────────┬──────────────────────────────────────────────────────────────────────────────────────────────────┐
│ filename │ pve-ssl.pem │
├─────────────────┼──────────────────────────────────────────────────────────────────────────────────────────────────┤
│ fingerprint │ BB:DD:EE:AA:44:AA:86:ZZ:00:ZZ:FE:YY:E3:YY:3D:YY:40:D6:XX:41:7B:1C:3E:XX:71:85:EF:B8:E6:F9:93:A1 │
├─────────────────┼──────────────────────────────────────────────────────────────────────────────────────────────────┤
│ subject │ /OU=PVE Cluster Node/O=Proxmox Virtual Environment/CN=scully.example.com │
├─────────────────┼──────────────────────────────────────────────────────────────────────────────────────────────────┤
│ issuer │ /CN=Proxmox Virtual Environment/OU=dc285e80-d921-48ed-a970-a8788b09ce3a/O=PVE Cluster Manager CA │
├─────────────────┼──────────────────────────────────────────────────────────────────────────────────────────────────┤
│ notbefore │ 2023-03-31 22:44:36 │
├─────────────────┼──────────────────────────────────────────────────────────────────────────────────────────────────┤
│ notafter │ 2025-03-30 22:44:36 │
├─────────────────┼──────────────────────────────────────────────────────────────────────────────────────────────────┤
│ public-key-type │ rsaEncryption │
├─────────────────┼──────────────────────────────────────────────────────────────────────────────────────────────────┤
│ public-key-bits │ 2048 │
├─────────────────┼──────────────────────────────────────────────────────────────────────────────────────────────────┤
│ san │ - 127.0.0.1 │
│ │ - 0000:0000:0000:0000:0000:0000:0000:0001 │
│ │ - localhost │
│ │ - 172.16.250.8 │
│ │ - scully │
│ │ - scully.example.com │
└─────────────────┴──────────────────────────────────────────────────────────────────────────────────────────────────┘
Alternatively, you can manually extract it using the following command:
openssl x509 -in /etc/pve/nodes/[hostname]/pve-ssl.pem -noout -sha256 -fingerprint
The fingerprint will be a colon-separated string like: BB:DD:EE:AA:44:AA:86:ZZ:00:ZZ:FE:YY:E3:YY:3D:YY:40:D6:XX:41:7B:1C:3E:XX:71:85:EF:B8:E6:F9:93:A1
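If you only want the value itself, a small shell snippet can strip the Fingerprint= prefix. This assumes the default certificate path and that the local hostname matches the node name:
# Extract only the colon-separated fingerprint value into a variable
FP=$(openssl x509 -in /etc/pve/nodes/$(hostname)/pve-ssl.pem \
    -noout -sha256 -fingerprint | cut -d= -f2)
echo "$FP"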
Step 3: Verify Storage and Network Compatibility
Before migrating, ensure your storage and network configurations are compatible:
Storage: Check that both servers have compatible storage types. If you’re using:
- Same storage type (e.g., both use local storage or both use ZFS): Migration is straightforward
- Different storage types: You may need to convert the disk format first
Network: Verify that both servers have the same bridge names (e.g., vmbr0) or note the differences for the migration command.
To check your current VM’s configuration:
qm config <vmid>
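On the target server, you can quickly list the available storage pools and bridges to pick valid values for the migration command (a quick sanity check using standard Proxmox and iproute2 tools):
# List storage pools and their free space on the target server
pvesm status
# List the network bridges defined on the target server
ip -br link show type bridge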
Step 4: Execute the Migration
With everything prepared, you can now run the migration command from your source server:
qm remote-migrate <source-vmid> [<target-vmid>] <target-endpoint> \
--target-bridge <bridge-name> \
--target-storage <storage-name> \
[--online 1]
Here’s a real example:
qm remote-migrate 104 604 \
'apitoken=PVEAPIToken=root@pam!migrate=xxxxxxxx-yyyy-zzzz-aaaa-wwwwwwwwwwww,host=backup-proxmox.example.com,fingerprint=BB:DD:EE:AA:44:AA:86:ZZ:00:ZZ:FE:YY:E3:YY:3D:YY:40:D6:XX:41:7B:1C:3E:XX:71:85:EF:B8:E6:F9:93:A1' \
--target-bridge vmbr0 \
--target-storage local \
--online 1
Let me break down this command:
- 104: Source VM ID
- 604: Target VM ID (optional, will use the source ID if omitted)
- apitoken=...: Your API token from step 1
- host=...: Hostname or IP of your target server
- fingerprint=...: SSL certificate fingerprint from step 2
- --target-bridge vmbr0: Network bridge on the target server
- --target-storage local: Storage pool on the target server
- --online 1: Enables live migration (optional)
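Since I had several VMs to move, a small loop saves some typing. This is only a sketch; the VM IDs and the offset are placeholders based on the example above, and the token secret and fingerprint are read from environment variables:
# Migrate a handful of VMs, keeping their IDs but offsetting them by 500 on the target
for vmid in 104 105 106; do
    qm remote-migrate "$vmid" "$((vmid + 500))" \
        "apitoken=PVEAPIToken=root@pam!migrate=${TOKEN_SECRET},host=backup-proxmox.example.com,fingerprint=${TARGET_FINGERPRINT}" \
        --target-bridge vmbr0 \
        --target-storage local \
        --online 1
done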
Step 5: Verify the migrated server
Once the VM is migrated, verify that everything works as expected. The VM also remains on the source Proxmox server, locked in the migrate state. If the migration ran into problems, you can unlock the source VM to retry the migration or start it up again. Unlock it with the following command:
qm unlock <vmid>
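A few quick checks, using the VM IDs 104 and 604 from the example above (the status and config commands only read state):
# On the target server: confirm the VM exists and check its state
qm status 604
qm config 604
# On the source server, only if the migration failed and left the VM locked:
qm unlock 104
qm start 104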
Important Notes
Live Migration: The --online 1 flag enables live migration, keeping the VM running during the process. This works best for VMs with relatively low disk I/O.
VM IDs: If you don’t specify a target VM ID, the migration will use the same ID as the source VM. Make sure there’s no conflict on the target server.
Storage Types: For different storage types, you might need to migrate to a temporary local storage first, then move to your desired storage type.
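To avoid the VM ID conflict mentioned above, you can check which IDs are already taken on the target server (qm list and pvesh are standard Proxmox CLI tools):
# List existing VMs on the target server and check for the ID you plan to use
qm list
# Ask Proxmox for the next free VM ID as a suggestion
pvesh get /cluster/nextid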
Troubleshooting Common Issues
Authentication Errors: Double-check your API token has sufficient privileges and that “Privilege Separation” is disabled.
Certificate Errors: Ensure the fingerprint matches exactly, including all colons and proper capitalization.
Network Issues: Verify that the target server is reachable and that the specified bridge exists.
Storage Errors: Confirm that the target storage has sufficient space and is accessible.
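For authentication and certificate errors, it can help to test the token and connection directly against the target API before retrying the migration. A sketch using curl; the host name and token are the placeholders from the example, and -k skips certificate verification for this quick check only:
# Query the API version on the target server using the migration token
curl -sk \
    -H "Authorization: PVEAPIToken=root@pam!migrate=${TOKEN_SECRET}" \
    https://backup-proxmox.example.com:8006/api2/json/version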
The Result
After running this process for several VMs, I successfully migrated my entire infrastructure to the backup server.
While Proxmox clustering offers more seamless migration capabilities, the remote migration feature provides a solid solution for standalone servers. It’s particularly useful for disaster recovery scenarios, hardware upgrades, or when you need to move VMs between geographically separated servers.
Now it’s time to debug the primary server and find the hardware issue.
Useful Resources
For more detailed information about the qm remote-migrate command and its options, check out:
- Proxmox QM Manual
- Proxmox commit
- How to create an API token for Proxmox
- Migration Implementation Details
Remember, always test your migration process in a non-production environment first, and ensure you have proper backups before performing any major infrastructure changes.