Use Constraints With Swarm
In this post, I will explain how to use constraints to limit the set of nodes where a task can be scheduled.
What are constraints?
By default with Swarm, when you deploy a new service, your container will be scheduled somewhere on your cluster. You can’t choose the node where it will be scheduled (nodes in drain mode are excluded).
That’s the purpose of a Swarm cluster: all your nodes can accept containers.
But sometimes you need to restrict scheduling to a subset of nodes. For example, maybe your nodes don’t all have the same hardware and some are more powerful.
That’s where constraints come in! They let you specify on which nodes your service can be scheduled. Constraints are based on labels.
How to use constraints?
Let me show you my small Swarm cluster:
% docker node ls
ID                            HOSTNAME   STATUS   AVAILABILITY   MANAGER STATUS   ENGINE VERSION
tez1zpw5oe5x8rrim4augz1h7 *   docker00   Ready    Active         Reachable        18.06.1-ce
w1rxedpb1mwkwbg97tb45x2dd     docker01   Ready    Active         Reachable        18.06.1-ce
jxgj5lhwlq7mep5e4jqx64frm     docker02   Ready    Active         Leader           18.06.1-ce
s6zwvwsit6t6pdj7epul8lctk     docker03   Ready    Active                          18.06.1-ce
kw1q3i59pxh47uxjun5m3ahhd     docker04   Ready    Active                          18.06.1-ce
This cluster has 3 managers and 2 workers. By default, a new service will be scheduled on one of these 5 nodes.
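For example, an unconstrained service can end up on any of them (nginx below is just a placeholder image, assuming one that runs on your nodes):

% docker service create --name TEST --replicas 3 nginx

You can then check where the 3 tasks landed with docker service ps TEST.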
-
Docker’s default constraints
By default, nodes already have labels.
Name            Matches         Example (with docker01)
node.id         Node ID         w1rxedpb1mwkwbg97tb45x2dd
node.hostname   Node hostname   docker01
node.role       Node role       manager

You can use these labels to restrict where your service can be scheduled:
% docker service create --name TEST --constraint 'node.role == manager' ...
% docker service create --name TEST --constraint 'node.id == w1rxedpb1mwkwbg97tb45x2dd' ...
% docker service create --name TEST --constraint 'node.hostname != docker01' ...
If you specify multiple constraints, Docker will only use nodes that satisfy every expression (it’s an AND match).
% docker service create --name TEST --constraint 'node.role == manager' --constraint 'node.hostname != docker01' ...
With this example, the new service will be scheduled on docker00 or docker02 (both are managers).
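Note that constraints are not frozen at creation time: you can add or remove them on a running service. A small sketch with docker service update, reusing the TEST service from above:

% docker service update --constraint-add 'node.hostname != docker01' TEST
% docker service update --constraint-rm 'node.hostname != docker01' TEST

Swarm then rolls the tasks so they end up on nodes matching the new set of constraints.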
-
Add your own labels
With the default labels, you can fine-tune scheduling, but if you want to be more specific, add your own labels. Recently, in my cluster, I upgraded docker00 and docker01 to the latest Raspberry Pi 3B+ (the others are Raspberry Pi 3B). So I have 2 nodes that are more powerful (CPU and network) than the others.
It could be useful to schedule containers that need more CPU or network bandwidth on these nodes.
For this, we need to:
- Add a custom label to your nodes (only managers can add labels):
% docker node update --label-add powerfull=true docker00
% docker node update --label-add powerfull=true docker01
This adds the label powerfull=true to both nodes.
You can see labels with this command:
% docker node inspect docker00
...
"Spec": {
    "Labels": {
        "powerfull": "true"
    },
    "Role": "manager",
    "Availability": "active"
},
...
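If you only want the labels and not the whole JSON, a Go template does the trick; the map[...] line below is the output I would expect from Go’s default map formatting:

% docker node inspect -f '{{ .Spec.Labels }}' docker00
map[powerfull:true]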
- Start the service with the new constraint:
% docker service create --name TEST --constraint 'node.labels.powerfull == true' ...
Please note that the syntax for your own labels is: node.labels.YOUR_LABEL_NAME
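You can then verify that the tasks were only scheduled on docker00 and docker01:

% docker service ps TEST

The NODE column of docker service ps shows where each task runs.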
-
Delete your own labels
Just in case you need it:
% docker node update --label-rm powerfull docker00
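Be careful: if a service still has the node.labels.powerfull constraint and no node carries the label anymore, its new tasks will have no node to go to and should stay in pending state. In that case, also remove the constraint from the service (TEST being the example service from above):

% docker service update --constraint-rm 'node.labels.powerfull == true' TEST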
Enjoy 😉