Automating VMware vSphere 9 with Ansible: A Practical Blog Series

This series guides experienced VMware admins through using Ansible to automate vSphere 9 tasks. We start by deploying an Ubuntu control node in vSphere 9, then configure VS Code with Git and SSH access, install Ansible and set up our inventory, and finally explore Ansible’s VMware modules with real examples.

Part 1: Setting Up an Ubuntu VM for Ansible

Start by deploying a lightweight Ubuntu Server VM to act as your Ansible control node.

For example, use the latest LTS (e.g. 22.04 or 24.04) Ubuntu Server image. In the vSphere UI, create a new VM (Linux → Ubuntu 64-bit) with minimal resources (1‑2 vCPU, 2GB RAM, ~30GB disk). On the Networking step, attach the VM to a management portgroup or segment with Internet access. Use the VMXNET3 adapter for best performance.

  • OS Installation: Mount the Ubuntu Server ISO or use an OVF, then install Ubuntu Server. During setup, configure DHCP or Static IP as needed. Ensure the VM can reach the internet (test with ping or apt-get update).
  • Install open-vm-tools: For best performance and compatibility, install open-vm-tools:
sudo apt-get update && sudo apt-get install -y open-vm-tools && sudo reboot
  • Enable SSH: During installation choose to install OpenSSH. If missed, install it manually:
sudo apt-get update && sudo apt-get install -y openssh-server && sudo systemctl enable --now ssh

Confirm you can SSH to the VM (e.g. from your workstation: ssh <USERNAME>@<VM_IP>).

  • User and Keys: Create a non-root user (e.g. ansible) with sudo privileges and add your SSH key. On your workstation, generate a key (ssh-keygen -t ed25519) and copy it:
ssh-copy-id <USERNAME>@<VM_IP>

or for PowerShell:

Get-Content $env:USERPROFILE\.ssh\id_ed25519.pub | ssh <USERNAME>@<VM_IP> "cat >> .ssh/authorized_keys"

This enables password-less SSH login. (Disabling root SSH login is good practice once your ansible user is set up.)
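For example, to disable root logins (and optionally password logins entirely), you can set the following in /etc/ssh/sshd_config and restart the service with sudo systemctl restart ssh:

PermitRootLogin no
PasswordAuthentication no

Keep PasswordAuthentication set to yes until you have confirmed that key-based login works, or you may lock yourself out.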

  • Networking Tips: If using DHCP, ensure the VM’s DNS and gateway are correct. Verify DNS by checking /etc/resolv.conf or running resolvectl status. You can also add static hostnames to /etc/hosts for local name resolution.

With these steps, you have a reachable Ubuntu VM on vSphere ready to run Ansible. Verify connectivity by SSH-ing and updating packages:

sudo apt-get update && sudo apt-get upgrade -y

Next, we’ll set up our IDE and Git.

Part 2: Configuring VS Code with GitHub and Remote SSH Access

We’ll use Visual Studio Code (VS Code) on your desktop for Ansible playbook development, integrating Git/GitHub and connecting to the Ubuntu VM via SSH. VS Code’s Remote – SSH extension creates a client–server setup: your local VS Code communicates with a small VS Code Server running on the Ubuntu VM. This means you can edit files on the VM as if they were local.

Install VS Code and Extensions: On your workstation, install VS Code (Windows/Mac/Linux as applicable). Install the Remote – SSH extension. Also install Git if not present, and the GitHub Pull Requests and Issues extension. To integrate GitHub, create/sign in to your GitHub account.

  • Connect in VS Code: First, test connectivity by running ssh <USERNAME>@<VM_IP> in a terminal. Then open VS Code, press F1, and run Remote-SSH: Connect to Host…, entering <USERNAME>@<VM_IP> (or your SSH config alias). You’ll see a progress notification while VS Code installs the server on the VM. Once connected, the window title shows “SSH: <VM_FQDN>”. Open a folder on the remote VM (File > Open Folder…) to start editing. You can now use all VS Code features (IntelliSense, terminal, extensions) on the remote files.
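A host entry in ~/.ssh/config saves retyping the address and is picked up by Remote – SSH automatically (the ansible-vm alias below is just an example name):

Host ansible-vm
    HostName <VM_IP>
    User <USERNAME>
    IdentityFile ~/.ssh/id_ed25519

You can then connect with ssh ansible-vm in a terminal, or pick “ansible-vm” from the Remote-SSH host list.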
  • Working with Git/GitHub: In the VS Code window (connected to the VM), open the Source Control view. You can clone a GitHub repo by running Git: Clone (Ctrl+Shift+P), or use the “Clone Repository” button. The GitHub extension will prompt you to sign in if needed. You can create a new repo on GitHub (via website) and then push your local Ansible projects there. VS Code also lets you create branches, commit changes, and even review pull requests without leaving the editor.

With VS Code connected, you can easily edit Ansible playbooks on the Ubuntu VM over SSH, while keeping your code in GitHub. Next, let’s install Ansible and configure our inventory on the Ubuntu control node.

Part 3: Getting Started with Ansible

On the Ubuntu control VM, install Ansible and set up your environment. We’ll also create an inventory so Ansible knows what hosts to manage.

  • Install Ansible: The official Ansible docs recommend the Ansible PPA on Ubuntu to get the latest stable version. To install it, run:
sudo apt-get install -y software-properties-common && sudo add-apt-repository --yes --update ppa:ansible/ansible && sudo apt-get install -y ansible

This adds the Ansible PPA and installs Ansible. Verify with ansible --version. (Ubuntu’s default repo often has an older Ansible, so the PPA ensures you get a recent release.)

  • Install VMware SDK (pyVmomi): To use the VMware-specific modules later, install the VMware vSphere Python SDK:
sudo apt-get install -y python3-pip && pip3 install pyvmomi

(On Ubuntu 23.04 and later, pip refuses to install into the system Python; install pyvmomi in a virtual environment or with pipx instead.)

Ansible’s VMware modules depend on pyVmomi (the VMware vSphere SDK).

  • Set up Inventory: Ansible inventory lists the nodes you will manage. By default, use /etc/ansible/hosts or create a project-level hosts file. Inventory can be INI or YAML. For example, an INI /etc/ansible/hosts might look like:
[vmhosts] 
192.168.10.10 ansible_user=root 
192.168.10.11 ansible_user=root

[vcenters] 
vcenter.example.com ansible_user=administrator@vsphere.local

Here we’ve defined two groups: [vmhosts] and [vcenters] with hostnames or IPs. Each host can have variables like ansible_user. This follows the basic inventory format. Adjust names/groups as needed. If you only use VMware API modules, you might not run tasks on those IPs, but listing vCenter under [vcenters] can be useful for credential management.
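The same inventory in YAML form (e.g. a hosts.yml file) would look like this:

all:
  children:
    vmhosts:
      hosts:
        192.168.10.10:
          ansible_user: root
        192.168.10.11:
          ansible_user: root
    vcenters:
      hosts:
        vcenter.example.com:
          ansible_user: administrator@vsphere.local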

  • Generate SSH Keys: If managing ESX hosts via SSH (in addition to vSphere API), generate a key with no passphrase:
ssh-keygen -t ed25519

Copy the public key to each target:

ssh-copy-id <USERNAME>@<ESX_OR_VCENTER_IP>

This enables Ansible to connect without passwords. You can also configure Ansible to use a key by setting ansible_ssh_private_key_file=~/.ssh/id_ed25519 in the inventory.

  • Create ansible.cfg (optional): You can tweak Ansible’s behavior via an ansible.cfg in your project. For example:
[defaults]
inventory = ./hosts
remote_user = root
host_key_checking = False

This sets the default inventory file and disables strict SSH host key checking for ease. (On a secure network, you may leave host key checking on.)

  • Test Connectivity: Now try pinging your hosts:
ansible all -m ping -i hosts

This should return pong for each reachable host. If your inventory only has vCenter entries and you plan to use VMware modules, you can test vCenter connectivity by writing a simple play that uses vmware_guest_info (see Part 4) or use ansible-inventory --list to check the dynamic inventory.

Your control node is now ready: Ansible is installed, inventory is defined, and SSH keys are set. In the next post, we’ll dive into using Ansible’s VMware modules to automate vSphere tasks.

Part 4: Using Ansible with VMware vSphere Modules

Ansible’s community.vmware and vmware.vmware collections provide modules to manage vSphere infrastructure (VMs, datastores, networks, etc.). These modules talk to vCenter/ESX via the Python SDK. (Note: vSphere API writes require a paid license; these modules won’t work against a free ESX license.) First, ensure the VMware collections are installed:

ansible-galaxy collection install community.vmware
ansible-galaxy collection install vmware.vmware

This gives you modules like community.vmware.vmware_guest (for VMs) and vmware_guest_info (for facts). The vmware_guest module can create VMs from templates, power on/off VMs, modify hardware (CPU, memory, disks), rename or delete VMs, and more. You will typically run these tasks on the control node (using delegate_to: localhost) since the control node reaches out to vCenter’s API.

Below are practical playbook examples. In each, replace hostname, username, and password with your vCenter’s FQDN/IP and credentials. For security, store these in group_vars or Vault-encrypted files in production.
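As a sketch of that approach (the file path and variable names below are just a convention, not required by Ansible), you could keep the credentials in a group_vars file and encrypt it with Ansible Vault:

# group_vars/all/vcenter.yml — encrypt with: ansible-vault encrypt group_vars/all/vcenter.yml
vcenter_hostname: vcenter.example.com
vcenter_username: administrator@vsphere.local
vcenter_password: "MySecretPassword"

Playbooks then reference {{ vcenter_hostname }} and friends as in the examples below, and you run them with --ask-vault-pass (or a vault password file).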

  • Create a VM from a Template: The following play creates a new VM named testvm_2 from an existing template template_el7. It specifies folder, cluster, CPU/memory, disk and network settings. After creation it powers on the VM. (We set delegate_to: localhost since we run on the control node.)
- name: Deploy VM from template on vCenter 
  hosts: localhost 
  gather_facts: no 
      
  tasks: 
  
  - name: Create a virtual machine from a template
    vmware_guest:
      hostname: "{{ vcenter_hostname }}"
      username: "{{ vcenter_username }}"
      password: "{{ vcenter_password }}"
      validate_certs: false
      datacenter: Datacenter1
      folder: /testvms
      name: testvm_2
      state: poweredon
      template: template_el7
      disk:
      - size_gb: 40
        type: thin
        datastore: g73_datastore
      hardware:
        memory_mb: 512
        num_cpus: 6
        num_cpu_cores_per_socket: 3
        scsi: paravirtual
        memory_reservation_lock: True
        mem_limit: 8096
        mem_reservation: 4096
        cpu_limit: 8096
        cpu_reservation: 4096
        max_connections: 5
        hotadd_cpu: True
        hotremove_cpu: True
        hotadd_memory: False
        version: 21 # Hardware version of virtual machine
        boot_firmware: "efi"
      networks:
      - name: VM Network
        mac: aa:bb:dd:aa:00:14
      wait_for_ip_address: yes
    delegate_to: localhost
    register: deploy

This uses vmware_guest to clone template_el7 and customize resources. The module’s synopsis confirms it can create VMs from templates and manage power state. Adjust parameters (datacenter, cluster, etc.) to match your environment.
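Because the play registers the module result in deploy, you can inspect the new VM right away; vmware_guest returns an instance dictionary of VM facts (the ipv4 key shown here assumes wait_for_ip_address: yes so an address is available):

  - name: Show the new VM's IP address
    debug:
      msg: "testvm_2 got IP {{ deploy.instance.ipv4 }}"
    delegate_to: localhost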

  • Power On/Off an Existing VM: To change power state, call vmware_guest with state: poweredoff or poweredon. For example, to power off a VM by name:
- name: Power off a VM by name
  hosts: localhost 
  gather_facts: no 
      
  tasks: 
  
  - name: Shutdown VM 'testvm'
    vmware_guest:
      hostname: "{{ vcenter_hostname }}"
      username: "{{ vcenter_username }}"
      password: "{{ vcenter_password }}"
      validate_certs: false
      datacenter: Datacenter1
      folder: "/testvms"  
      name: testvm
      state: poweredoff
    delegate_to: localhost

Likewise, use state: poweredon to turn it back on. Note that state: poweredoff performs a hard power-off; for a graceful guest OS shutdown use state: shutdownguest (optionally with state_change_timeout). See the module docs for more options (e.g. force: yes).

  • Modify VM Hardware: You can resize a VM’s CPU/memory or add/remove devices by re-running vmware_guest with state: present. For instance, to update testvm to 4 CPUs:
- name: Update CPU and memory of a VM
  hosts: localhost 
  gather_facts: no 
      
  tasks: 
  
  - name: Resize VM hardware
    vmware_guest:
      hostname: "{{ vcenter_hostname }}"
      username: "{{ vcenter_username }}"
      password: "{{ vcenter_password }}"
      validate_certs: false
      datacenter: Datacenter1 
      name: testvm
      state: present 
      hardware: 
        num_cpus: 4 
        memory_mb: 4096
    delegate_to: localhost

(Note: The VM must be powered off to change most settings unless hot-add is enabled.)

  • Clone, Customize, and Delete VMs: You can clone VMs (even with guest OS customization). For example, to clone a Linux template and run a script on first boot, see the VMware examples in the Ansible docs. To delete a VM, set state: absent.

For example, to remove a VM named oldvm from inventory:

- name: Remove a VM by name
  hosts: localhost 
  gather_facts: no 
      
  tasks: 
  
  - name: Delete VM 'oldvm' completely 
    vmware_guest:
      hostname: "{{ vcenter_hostname }}"
      username: "{{ vcenter_username }}"
      password: "{{ vcenter_password }}"
      validate_certs: false
      datacenter: Datacenter1 
      name: "oldvm" 
      state: absent
    delegate_to: localhost

This removes the VM and all its files. (To only remove from inventory but keep files, use delete_from_inventory: true.) You can also delete by UUID using uuid: "{{ vm_uuid }}" if needed.

  • Gather Facts from vSphere: To query vCenter about a VM’s details, use vmware_guest_info. This can return CPU, memory, disk info, power state, etc. For example:
- name: Get VM information
  hosts: localhost 
  gather_facts: no 
      
  tasks: 
  
  - name: Gather information about 'testvm' 
    community.vmware.vmware_guest_info:
      hostname: "{{ vcenter_hostname }}"
      username: "{{ vcenter_username }}"
      password: "{{ vcenter_password }}"
      validate_certs: false
      datacenter: Datacenter1 
      name: "testvm" 
    delegate_to: localhost
    register: vm_info

After running this, the vm_info variable contains a dictionary of details. You can customize the output schema (e.g. schema: vsphere) or request specific properties, but the default summary is often sufficient.
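For example, a follow-up task can print selected facts from the registered variable (key names such as hw_power_status come from the module’s instance return value):

  - name: Report power state of 'testvm'
    debug:
      msg: "testvm is {{ vm_info.instance.hw_power_status }}"
    delegate_to: localhost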

Summary: In this post we used the community.vmware.vmware_guest module to manage VMs: creating from templates, changing power state, updating hardware, and deleting VMs, as well as vmware_guest_info to gather data. These modules (and others in the collections) cover a wide range of vSphere tasks. For more examples and advanced options, refer to the official VMware Ansible documentation and the VMware GitHub repo, which include detailed module guides and requirements.

By following this series, you have a working Ansible control node on vSphere, a convenient development environment in VS Code, and practical playbook examples for automating vSphere 9. Explore more modules (e.g. vmware_datacenter, vmware_cluster, vmware_host, etc.) to automate your entire VMware stack.

Happy automating!
