Your First Snapshot Set

A snapshot set is a collection of snapshots, bound together with a name, and created at a common point in time [1]. At a bare minimum, to create one, you just need a name and a list of sources (mount points or block devices) that you would like to include in the set. More advanced options exist to give additional control over the creation process and the resulting snapshot set: we will cover these in detail in future blog posts.

Choosing Sources: First Things to Know

Before you can create your first snapshot set you'll need to ensure that you have everything necessary in place:

  • You need snapshot-capable storage: LVM2 regular or thin-provisioned volumes, or Stratis file systems.

  • You need free space for your snapshots to use (either in an LVM2 volume group with regular volumes, or in the thin pool containing the volumes with LVM2 Thin provisioning or Stratis storage).

  • You'll need to know the mount point or block device path for the volumes you wish to include in your snapshot set.

  • You need snapm.

For your first experiment, it doesn't matter what you snapshot as long as it meets these criteria. Later, when you are using the tool in anger, you can craft specific commands for different use cases or automate the process via schedules. To get the most from this exercise, it is helpful to be able to make some changes to whatever you snapshot so you'll have genuine differences to examine.

If you haven't already installed the snapm command on your system, read Getting snapm and follow the instructions there, then return here to continue.

Sources for Snapshot Manager can be either mount points or block devices. A mount point source corresponds to a mounted file system. You can specify it either by mount point path (for example, /home) or by the underlying block device path (for example, /dev/vg0/home). A block device source is a block device that is not mounted and that need not contain a mountable file system image (for example, databases, virtual machine images, and other non-file-system content).
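
If you only know a file system by its mount point, you can look up the underlying block device before deciding how to specify the source. A minimal sketch that reads /proc/self/mounts directly (the findmnt(8) command performs the same lookup, for example findmnt -no SOURCE /):

```shell
# Look up the block device backing a mount point by scanning
# /proc/self/mounts. If the path is mounted more than once, the
# most recent (last) entry wins.
mountpoint=/
dev=$(awk -v mp="$mountpoint" '$2 == mp { print $1 }' /proc/self/mounts | tail -n 1)
echo "$dev"
```

Either form is acceptable to snapm for mounted file systems; use whichever is clearer in your scripts.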

Snapshot Manager uses the mechanism of provider plugins to support different types of storage. A provider plugin corresponds to a particular storage technology (LVM2 CoW, LVM2 Thin, Stratis, and so on). The snapm command will determine the correct plugin to use for each source (assuming one exists and is able to snapshot the requested volume) — you do not need to specify which plugins to use on the command line.

Some familiarity with your storage stack's management tools will help when it comes to planning, monitoring, and finding your way around (whether that's vgs(8), lvs(8), et al. for LVM2, or the stratis(8) command line tool for Stratis storage). Knowledge of common Linux administrative tools like the mount(8) program, findmnt(8), lsblk(8), and blkid(8) will also be an advantage — refer to the respective manual pages for more details, or check out one of the many great Linux systems administration tutorials available on the internet.

Naming Your Snapshot Set

You are free to name your snapshot set anything you like within the constraints enforced by snapm and its provider plugins:

  1. The _ (underscore) character is not permitted.

  2. The valid characters for snapshot set names are the ASCII lowercase and uppercase alphabetic characters, the digits, and the symbols ., +, and -.

  3. Certain strings may be forbidden by particular provider plugins: for instance, LVM2 reserves the string _mlog (and others) for internal logical volume names (refer to lvm(8), lvmraid(7) and related manual pages for more information).

  4. Certain providers may enforce a maximum name length; for example, for LVM2 volumes this is 127 characters. Note that this limit includes parts of the internal name generated by snapm (which varies according to the mount point the snapshot refers to), so the usable name length is somewhat shorter. Nobody likes long device names anyway, so this should not be a concern in most situations.

  5. Any provider that represents the snapshot name as an entry in the file system (which includes all current and planned snapshot providers) is naturally limited by the maximum allowed file name length on Linux (255 characters). Again, this absolute maximum is reduced somewhat by the need for snapm to encode various information into the names it generates.

  6. If you include a . followed by a nonnegative number at the end of your snapshot set name, snapm will consider it to be an index. This is useful when taking recurring snapshot sets of the same things with a common base name. You can also have snapm add one automatically using the --autoindex option (more on that in a later post).

  7. It is generally best not to include a date or time reference in the name unless you really, really want to: snapm already stores the creation time of the snapshot set as a UNIX epoch timestamp value and renders it as human readable date and time strings in the local time zone.

  8. The name . is reserved by snapm for referencing the live root file system.
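
Rules 1 and 2 above are easy to check before you run the tool. The sketch below validates a candidate name against the permitted character set with grep(1); note that it knows nothing about provider-reserved strings such as _mlog, which snapm and its plugins will reject on their own:

```shell
# Validate a candidate snapshot set name: ASCII letters, digits,
# '.', '+', and '-' only (rule 2), which also rules out '_' (rule 1).
name="before-upgrade"
if printf '%s' "$name" | grep -Eq '^[A-Za-z0-9.+-]+$'; then
    echo "ok: $name"       # prints "ok: before-upgrade"
else
    echo "invalid: $name"
fi
```

A name like my_set would fail this check, matching snapm's own behaviour for rule 1.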

Aim for memorable and meaningful names: quux-foo-backup might seem like a good idea at the time, when you're typing in a hurry, but future you will thank present you for coming up with something more self-explanatory.

Good names might be:

  • before-upgrade

  • after-upgrade

  • pre-deployment

  • post-deployment

  • hourly.N (where N is an integer uniquifying index, either appended manually or added automatically with --autoindex; see snapm(8))

Bad names would include:

  • backup

  • thing

  • thisone

  • thatone

  • something-something-something-snapshot-side

  • foo, fubar, plugh, quux, qwerty, thud, xyzzy, etc.

Overriding The Default Size Policy

Snapshot Manager provides size policies that allow the user to request that a certain amount of space be available when creating snapshots. For some providers (LVM2 Thin, Stratis) this is simply a check that this space is available at the time the snapshot set is created. For LVM2 CoW it determines the actual size of the snapshot exception store that the tool will allocate.

The default size policy is different depending on whether you are creating a snapshot from a mount point source or a block device source. For mount points the default size policy specifies twice the space currently used on the mount point (200%USED in size policy notation). For block devices it is one quarter the physical size of the device (25%SIZE).

To override these defaults, either append a colon and size policy to individual sources (:POLICY) or use --size-policy=POLICY for all sources without explicit policies.

For example:

  • /:2GiB — specify a source of / (the root file system) with a fixed 2GiB size policy

  • /:100%SIZE /home:50%SIZE — specify sources for root and home with a policy requesting 100% and 50% of their respective device sizes

  • --size-policy=100%SIZE — set the policy for all sources not otherwise specified on the command line to be 100% of the corresponding device size
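
To get a feel for what the 200%USED default would request for a given mount point, you can double the used figure reported by df(1). This is only a rough planning aid, not the exact calculation snapm performs:

```shell
# Approximate the 200%USED default policy for a mount point:
# double the used space reported by df(1) (POSIX output, 1 KiB blocks).
mountpoint=/
used_kib=$(df -P "$mountpoint" | awk 'NR == 2 { print $3 }')
echo "200%USED for $mountpoint is roughly $((used_kib * 2)) KiB"
```

Comparing this figure with the free space in your volume group or pool tells you in advance whether the default policy is likely to succeed.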

The snapm snapset create command

With the basics explained, it's time to actually create a snapshot set with the snapm snapset create command. Take your chosen name and list of sources and append them to the command line, then hit Enter to run the program and create the snapshot set:

# snapm snapset create my-first-set / /var
SnapsetName: my-first-set
Sources: /, /var
NrSnapshots: 2
Time: 2026-01-13 12:44:12
UUID: 0ea16412-8ceb-59d4-af0b-9afa79d306e9
Status: Inactive
Categories: daily, hourly
Autoactivate: no
OriginMounted: yes
Mounted: no
Bootable: no

That's all there is to it! Before we conclude, let's take a brief look at the output the tool produces:

  • SnapsetName — the name you gave to the snapshot set

  • Sources — the list of sources (minus any size policy that was specified on the command line)

  • NrSnapshots — the number of snapshots contained in this set

  • Time — the time and date the snapshot set was created in your local time zone

  • UUID — a unique identifier for the snapshot set (a Version 5 UUID to be exact [2])

  • Status — the status of this snapshot set: one of Active, Inactive, Invalid, or Reverting

  • Categories — the timeline categories that this snapshot set belongs to: we will cover timeline categories, scheduling, and automatic garbage collection in a future blog post.

  • Autoactivate — whether automatic boot time activation is enabled for this snapshot set: yes or no

  • OriginMounted — the mount status of this snapshot set's origin devices: yes or no

  • Mounted — the mount status of this snapshot set: yes or no

  • Bootable — whether this snapshot set has boom(8) boot entries or not: yes or no

We will get to all of these properties and their precise meaning in good time, but for now the Status field deserves a little more explanation: some snapshot providers (LVM2 Thin, Stratis) support optional activation for snapshot devices.

Optional activation means that snapshot devices do not clutter the system or consume resources when they are not required, which is more efficient, especially when dealing with very large numbers of snapshots. For LVM2 CoW, by contrast, the snapshot volume must be active at all times that the origin is active.

If your snapshot set includes thin snapshots (LVM2 Thin or Stratis), it will begin its life in the Inactive state [3]. If it only includes LVM2 CoW snapshots, it will begin in the Active state (and cannot, in fact, become Inactive unless the origin volume is also inactive).

If you receive an error message instead of the expected output shown above, study it carefully to try to understand what it is telling you. Most error conditions have a clear message that explains exactly what is wrong. The most common error is ‘not enough space’: if you run into this you will see a message like this rather than the snapshot set summary:

# snapm snapset create my-first-set / /var
ERROR - Error creating lvm2-cow snapshot: Volume group fedora has insufficient free space to snapshot / (14.5GiB < 15.1GiB)
ERROR - Command failed: Insufficient free space for snapshot set my-first-set

If this happens you can resolve the problem in one of these three ways:

  • Add more space to the volume group or thin pool

  • Delete one or more unneeded volumes to reclaim space

  • Change the size policy to create a smaller snapshot

If you receive some other error, refer to the snapm(8) manual page and the FAQ: you may find that your problem already has a straightforward answer. If not, you can reach out to your distribution's support channels or the developers for assistance [4].

Care and Feeding of Your Snapshot Set

Now that you have a snapshot set that captures the state of your system at some moment in time you need to know how to look after it. This is essentially the same task as you would need to carry out for any other volume managed with your tool of choice. The major point to bear in mind is that both thin and LVM2 CoW snapshots can run out of space. It is important to ensure that this does not happen for the reasons explained in the Performance & Safety section of the Snapshot Manager FAQ.

As an example, using the snapshot set created in the previous step, we see the following output when running the lvs(8) program [5]:

# lvs -o lv_name,lv_size,data_percent
LV                                           LSize Data% 
root                                        10.00g 
root-snapset_my-first-set_1768308252_-       1.00g  0.19 
swap                                       512.00m 
var                                         12.00g 
var-snapset_my-first-set_1768308252_-var     1.20g  0.19 

Note the 0.19% value in the Data% column: this is the percentage of the snapshot exception store that has been used so far. In this case it's a very healthy value. If the value begins to approach 100% you need to take action using lvresize(8) or snapm snapset resize in order to avoid the snapshot being invalidated.

We will discuss more comprehensive monitoring and maintenance strategies in later blog updates. For now, you can use your regular storage administration tools (whether that is lvs(8) for LVM2 volumes, or the stratis(8) command for Stratis storage), or the snapm snapshot list command to keep an eye on the space available to your snapshots. You can resize individual snapshots using the corresponding tool (for example, lvresize(8) for LVM2), or you can use the snapm snapset resize command to apply new size policies to an already created snapshot set.
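
As a simple monitoring aid, you can filter the lvs(8) report through awk(1) and flag any snapshot whose Data% climbs above a threshold. The sketch below uses canned sample output (based on the listing above, with one value raised for illustration) so that it is self-contained; in practice you would pipe lvs --noheadings -o lv_name,lv_size,data_percent in place of the sample text:

```shell
# Flag snapshot volumes whose exception store usage (Data%)
# exceeds a threshold. The sample text stands in for live output
# from: lvs --noheadings -o lv_name,lv_size,data_percent
threshold=80
sample='root-snapset_my-first-set_1768308252_-      1.00g   0.19
var-snapset_my-first-set_1768308252_-var    1.20g  85.00'
warnings=$(echo "$sample" | awk -v t="$threshold" '$3 + 0 > t { print "WARNING: " $1 " at " $3 "%" }')
echo "$warnings"   # prints: WARNING: var-snapset_my-first-set_1768308252_-var at 85.00%
```

A loop like this in a cron job or systemd timer gives you early warning before a CoW snapshot fills up and is invalidated.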

Visualizing Changes

Before we conclude, let's take a look at some of the advanced features in recent versions of snapm. For this part of the exercise, it helps if you can make some small change to the system. We will assume in this example that your snapshot set included the root file system; if that isn't the case, you can adjust the paths used accordingly. We suggest creating a single new file — something simple made with the echo command is fine for our purposes:

# echo "Something changed!" > /etc/my-new-file.conf

Now that you have a definitive change in the file system state, use the snapm snapset diff command to view it:

# snapm snapset diff my-first-set . -s /etc -i /etc/my-new-file.conf
Gathering paths from my-first-set /etc: found 1879 paths
Scanned 1879 paths in 0:00:00.784354 (excluded 0)
Gathering paths from System Root /etc: found 1880 paths
Scanned 1880 paths in 0:00:00.539372 (excluded 0)
Found 7 differences in 0:00:00.012717
Found 0 moves in 0:00:00.004574
Saved 7 records to diffcache in 0:00:00.014500
Built tree with 7 nodes
/
└── [*] etc
    └── [+] my-new-file.conf

Note that my-new-file.conf appears with the [+] annotation (in green if your terminal supports color output) indicating that it is a new file system entry added since my-first-set was created.

To see the exact change in the file content you can use the -o/--output-format diff option of the snapm snapset diff command:

# snapm snapset diff my-first-set . -o diff -s /etc -i /etc/my-new-file.conf
Loaded 1 records from diffcache in 0:00:00.000226
Found 1 content differences
diff a/etc/my-new-file.conf b/etc/my-new-file.conf
new file mode 0o100644
--- /dev/null	
+++ b/etc/my-new-file.conf	2026-01-20 16:59:42.257636
@@ -0,0 +1 @@
+Something changed!

There are two key details to notice here:

  • The -o diff output is displayed in unified diff format (again with color coding if your terminal supports that)

  • The results appear quickly when the second command is run. This is because the differences from the first run are cached and reused.

To see all the differences found between the snapshot set and the live system, remove the -i /etc/my-new-file.conf argument and run the command again:

# snapm snapset diff my-first-set . -s /etc
Gathering paths from my-first-set /etc: found 1879 paths
Scanned 1879 paths in 0:00:00.662213 (excluded 0)
Gathering paths from System Root /etc: found 1880 paths
Scanned 1880 paths in 0:00:00.527266 (excluded 0)
Found 7 differences in 0:00:00.012590
Found 0 moves in 0:00:00.004542
Saved 7 records to diffcache in 0:00:00.014054
Built tree with 7 nodes
/
└── [*] etc
    ├── lvm
    │   ├── [*] archive
    │   │   ├── [-] fedora_00018-1908271671.vg
    │   │   └── [+] fedora_00028-486902226.vg
    │   └── [*] backup
    │       └── [*] fedora
    └── [+] my-new-file.conf

Note that in this example we see changes in the /etc/lvm directory: we are using LVM2 on this system and this is a natural consequence of LVM2's default metadata archival and backup settings. Do not be concerned if you see similar changes on your system when trying this out. In this example we also see that the cache is not reused: this snapset diff invocation uses different options from the previous one, so the paths are scanned again rather than read from the cache.

Conclusion

That's it for now! We will look at more advanced uses of snapm and the full set of Difference Engine features in future blog posts. In the meantime check out the snapm(8) manual page and the User Guide if this post whetted your appetite for more Snapshot Manager features!

If you would like to clean up and remove the snapshot set on your system, use the snapm snapset delete command:

# snapm snapset delete my-first-set

  1. The snapshots making up a set are created sequentially, since there is presently no interface in Linux for creating multiple snapshots simultaneously. For this reason the individual snapshots will all have very slightly different timestamps. Snapshot Manager quantises these timestamps to provide a single unambiguous creation time for the set as a whole.

  2. https://en.wikipedia.org/wiki/Universally_unique_identifier#Versions_3_and_5_(namespace_name-based)

  3. Unless creating bootable snapshot sets, in which case autoactivate is always enabled and the set will start out in the Active state (We will cover bootable snapshot sets in detail in a future post).

  4. https://github.com/snapshotmanager/snapm/issues/

  5. We are using the lvs program's -o/--options argument here to make sure the displayed fields fit on the page. By default you will see more columns in the tool output.