What I’m looking to achieve is that, by default, instances are spread out based on their name with the digits stripped (so web1, web2 and web3 are treated as the same group, web), or alternatively grouped by a user.placement_affinity config key. That way everyone can just create instances and I can sleep well knowing clustered resources are spread out.
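For illustration, the grouping rule I have in mind would look roughly like this minimal Python sketch (the placement_group helper and the user.placement_affinity key are just my own naming convention, not anything LXD defines):

```python
import re

def placement_group(instance_name: str, config: dict) -> str:
    """Derive the anti-affinity group for an instance.

    An explicit user.placement_affinity config key wins; otherwise the
    instance name with digits stripped is used, so web1, web2 and web3
    all fall into the "web" group.
    """
    explicit = config.get("user.placement_affinity")
    if explicit:
        return explicit
    return re.sub(r"\d+", "", instance_name) or instance_name
```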
The idea was to also add user.zone or user.rack labels on my cluster members so that servers in separate locations inside the datacenter are initially preferred over servers in the same rack (and, by extension, the same network switch/power source).
I wasn’t even sure whether all of this information would be available from within the instances.placement.scriptlet functionality. Running a Python script with the above logic every 30 minutes is more of a workaround. I would have preferred to get this done by LXD itself, so that it can take it into account when evacuating a cluster member as well. (I wouldn’t want my SQL master and slave on the same cluster member.)
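For now, that workaround would look something like the sketch below. It only shells out to the lxc CLI (`lxc list --format json`); the JSON fields it reads (name, location, config) are what the CLI currently prints, but treat that as an assumption rather than a stable interface, and the zone/rack preference from the user.zone / user.rack labels is left out for brevity:

```python
#!/usr/bin/env python3
"""Periodic anti-affinity check (run e.g. every 30 minutes from cron).

Flags instances from the same placement group that ended up on the
same cluster member, so they can be rebalanced manually with
`lxc move <instance> --target <member>`."""
import json
import re
import subprocess
from collections import defaultdict


def list_instances() -> list[dict]:
    # `lxc list --format json` returns one object per instance,
    # including its name, its config and the cluster member ("location").
    out = subprocess.run(
        ["lxc", "list", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    return json.loads(out)


def placement_group(inst: dict) -> str:
    # Same grouping rule as sketched above: explicit key wins,
    # otherwise the instance name with digits stripped.
    explicit = inst.get("config", {}).get("user.placement_affinity")
    if explicit:
        return explicit
    return re.sub(r"\d+", "", inst["name"]) or inst["name"]


def main() -> None:
    groups = defaultdict(lambda: defaultdict(list))
    for inst in list_instances():
        groups[placement_group(inst)][inst.get("location", "")].append(inst["name"])

    for group, by_member in groups.items():
        for member, names in by_member.items():
            if len(names) > 1:
                print(f"group {group!r}: {names} share cluster member {member!r}")


if __name__ == "__main__":
    main()
```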
Will I have to do this with external scripts/logic?
Thanks for sharing what you are aiming to achieve with instance placement. We are currently working on a new placement-groups feature that will enable configurable instance placement policies in relation to cluster members (essentially affinity and anti-affinity, though the naming is still subject to change). Placement groups will be considered during cluster evacuation.
We are aiming to have this out in the next feature release.
Spreading instances across cluster members will be possible with placement-groups. Affinity for a specific group of cluster members will not be possible initially, but this is something we are considering for a future iteration of the feature.