Bug 235132 - bectl destroy … cannot destroy …: dataset already exists
Summary: bectl destroy … cannot destroy …: dataset already exists
Status: Closed FIXED
Alias: None
Product: Base System
Classification: Unclassified
Component: bin
Version: CURRENT
Hardware: Any Any
Importance: --- Affects Only Me
Assignee: Kyle Evans
URL:
Keywords:
Depends on:
Blocks:
 
Reported: 2019-01-22 16:36 UTC by Graham Perrin
Modified: 2020-02-21 14:12 UTC
CC List: 2 users

See Also:


Description Graham Perrin freebsd_committer freebsd_triage 2019-01-22 16:36:57 UTC
<https://www.freebsd.org/cgi/man.cgi?query=bectl&sektion=8&manpath=FreeBSD+13-current>

Below: is it normal for simple destruction, without specifying a snapshot, to behave in this way? (Given the 'unknown error', I imagine that it's not normal.)

If it helps: I do not recall creating a snapshot.

TIA

----

root@momh167-gjp4-8570p:~ # bectl list
BE                 Active Mountpoint Space Created
r343023            NR     /          34.3G 2019-01-14 17:49
r342851            -      -          1.72M 2019-01-07 19:31
r342851-with-locks -      -          8.93G 2019-01-22 01:34
r342466            -      -          34.7M 2019-01-07 07:53
default            -      -          2.16M 2018-12-22 05:01
root@momh167-gjp4-8570p:~ # bectl destroy r342851-with-locks
cannot destroy 'copperbowl/ROOT/r342851-with-locks@2019-01-22-01:34:50': dataset already exists
unknown error
root@momh167-gjp4-8570p:~ # date ; uname -v
Tue Jan 22 16:30:47 GMT 2019
FreeBSD 13.0-CURRENT r343023 GENERIC-NODEBUG 
root@momh167-gjp4-8570p:~ # bectl list -s
BE/Dataset/Snapshot                                        Active Mountpoint Space Created

r343023
  copperbowl/ROOT/r343023                                  NR     /          34.3G 2019-01-14 17:49
  r343023@2019-01-07                                       -      -          1.08M 2019-01-07 07:48
  r343023@2019-01-07-07:53:37                              -      -          1.07M 2019-01-07 07:53
  r343023@2019-01-07-19:31:37                              -      -          33.6M 2019-01-07 19:31
  r343023@2019-01-14-17:49:17                              -      -          5.56G 2019-01-14 17:49

r342851
  copperbowl/ROOT/r342851                                  -      -          728K  2019-01-07 19:31
    copperbowl/ROOT/r342851-with-locks@2019-01-22-01:34:50 -      -          1.01M 2019-01-22 01:34

r342851-with-locks
  copperbowl/ROOT/r342851-with-locks                       -      -          3.36G 2019-01-22 01:34
    copperbowl/ROOT/r343023@2019-01-14-17:49:17            -      -          5.56G 2019-01-14 17:49
  r342851-with-locks@2019-01-22-01:34:50                   -      -          1.01M 2019-01-22 01:34

r342466
  copperbowl/ROOT/r342466                                  -      -          1.13M 2019-01-07 07:53
    copperbowl/ROOT/r343023@2019-01-07-19:31:37            -      -          33.6M 2019-01-07 19:31

default
  copperbowl/ROOT/default                                  -      -          1.09M 2018-12-22 05:01
    copperbowl/ROOT/r343023@2019-01-07-07:53:37            -      -          1.07M 2019-01-07 07:53
root@momh167-gjp4-8570p:~ #
Comment 1 Robert Wing freebsd_committer freebsd_triage 2019-01-28 18:59:18 UTC
 (In reply to Graham Perrin from comment #0)

This is normal behavior, although the error message could be more descriptive.

One thing to remember is, bectl creates a snapshot behind the scenes when you create a boot environment. A boot environment is cloned from the given snapshot.

Here's a quick summary of your output. We can see that the 'r342851-with-locks' boot environment is backed by the 'r343023@2019-01-14-17:49:17' snapshot. We can also see that a snapshot named 'r342851-with-locks@2019-01-22-01:34:50' was created from the 'r342851-with-locks' boot environment. Looking at boot environment 'r342851', we see that it was created from snapshot 'r342851-with-locks@2019-01-22-01:34:50', and a boot environment is dependent on the snapshot it was created from.

You aren't able to destroy 'r342851-with-locks' because 'r342851' depends on snapshot 'r342851-with-locks@2019-01-22-01:34:50', which in turn depends on 'r342851-with-locks'. In other words, 'r342851' depends on 'r342851-with-locks'.
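The dependency chain described above can be inspected directly with zfs(8). This is a sketch, not part of the original report; it assumes the pool layout shown in comment #0 and uses the standard `origin` and `clones` properties:

```shell
# Show the snapshot a clone was created from (its origin).
# Per the listing above, this should name a snapshot of r342851-with-locks.
zfs get -H -o value origin copperbowl/ROOT/r342851

# List snapshots under the BE we want to destroy, along with any
# clones that depend on them (non-empty 'clones' blocks destruction):
zfs list -t snapshot -o name,clones -r copperbowl/ROOT/r342851-with-locks
```

If the second command shows a clone hanging off one of the snapshots, that clone is what makes the plain `zfs destroy` (and hence `bectl destroy`) fail.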


A better error message may be:

root@momh167-gjp4-8570p:~ # bectl destroy r342851-with-locks
cannot destroy 'r342851-with-locks': boot environment has dependent clones/boot environments
    r342851
Comment 2 Kyle Evans freebsd_committer freebsd_triage 2019-03-19 18:47:48 UTC
Take
Comment 3 Graham Perrin freebsd_committer freebsd_triage 2019-04-22 10:05:22 UTC
Thanks … I don't understand this: 

root@momh224-gjp4-8570p:~ # bectl list -aDs
BE/Dataset/Snapshot                               Active Mountpoint Space Created

r346536
  copperbowl/ROOT/r346536                         NR     /          31.6G 2019-04-22 10:23
  r346536@2019-04-10-23:28:17                     -      -          11.1G 2019-04-10 23:28
  r346536@2019-04-22-10:23:47-0                   -      -          2.19G 2019-04-22 10:23

20190131-0125
  copperbowl/ROOT/20190131-0125                   -      -          8K    2019-01-31 01:25

default
  copperbowl/ROOT/default                         -      -          1.09M 2018-12-22 05:01

r346108
  copperbowl/ROOT/r346108                         -      -          1.36M 2019-04-10 23:28

r343663
  copperbowl/ROOT/r343663                         -      -          35.3G 2019-02-01 18:04
  r343663@2019-01-07-07:53:37                     -      -          9.60G 2019-01-07 07:53
  r343663@2019-01-31-01:25:46                     -      -          487M  2019-01-31 01:25
  r343663@2019-02-01-18:04:56                     -      -          211M  2019-02-01 18:04
root@momh224-gjp4-8570p:~ # bectl destroy r343663@2019-02-01-18:04:56
cannot destroy 'copperbowl/ROOT/r343663@2019-02-01-18:04:56': dataset already exists
root@momh224-gjp4-8570p:~ # bectl destroy r343663@2019-01-31-01:25:46
cannot destroy 'copperbowl/ROOT/r343663@2019-01-31-01:25:46': dataset already exists
root@momh224-gjp4-8570p:~ # bectl destroy r343663@2019-01-07-07:53:37
cannot destroy 'copperbowl/ROOT/r343663@2019-01-07-07:53:37': dataset already exists
root@momh224-gjp4-8570p:~ # bectl destroy r343663
cannot destroy 'copperbowl/ROOT/r343663@2019-01-31-01:25:46': dataset already exists
unknown error
root@momh224-gjp4-8570p:~ #
Comment 4 Robert Wing freebsd_committer freebsd_triage 2019-04-24 18:33:47 UTC
(In reply to Graham Perrin from comment #3)

Uhh.. yea, that is a bit confusing. I'm curious why the origins of your boot environments are not displayed in the output.

I don't know what's happening here, but my guess is that some dependencies between boot environments/snapshots are preventing any of the r343663 snapshots from being destroyed. Without seeing the boot environment origins, it's hard to say.

Looking at the timestamps, though, it appears BE '20190131-0125' may depend on snapshot 'r343663@2019-01-31-01:25:46', which would explain why you couldn't delete that snapshot.

BE r343663 appears to have been created from 'r343663@2019-02-01-18:04:56', which seems odd since there are earlier snapshots. It would be helpful to know the true origin of BE r343663.

I'm curious what the output of 'bectl list -s' is after the failed 'bectl destroy r343663' command.

As a side note, the '-D' flag is ignored when either the '-s' or '-a' flag is used.
Comment 5 Kyle Evans freebsd_committer freebsd_triage 2020-02-21 14:12:36 UTC
This situation should have been resolved with r356279; we'll now promote dependent clones when attempting to destroy a BE.
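For reference, the manual equivalent of what r356279 automates is promoting the dependent clone before destroying the origin BE. This is a sketch using the dataset names from comment #0, not a command sequence from the report itself:

```shell
# Promote the dependent clone. Promotion reparents the shared snapshot
# (r342851-with-locks@2019-01-22-01:34:50) onto r342851, making
# r342851-with-locks a clone of r342851 instead of the other way around.
zfs promote copperbowl/ROOT/r342851

# With no clones depending on its snapshots any more, the old BE
# can now be destroyed:
bectl destroy r342851-with-locks
```

After r356279, `bectl destroy` performs this promotion step itself when it detects dependent clones, so the "dataset already exists" failure no longer occurs.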