My killers

2025.01.20 05:32 _but-y_ My killers

My killers Almost all my rescue animals in one photo. Weasley (orange tabby), then Al (black and white), and most recently Poppy (rottie mix). My heart is so full🖤🖤
submitted by _but-y_ to pitbulls [link] [comments]


2025.01.20 05:32 Potential-Revenue813 Some of my recent rooms

Some of my recent rooms I tried focusing on the color scheme !
submitted by Potential-Revenue813 to CatsAndSoup [link] [comments]


2025.01.20 05:32 Spare-Clock-4803 I wish I was getting that much battle points

I wish I was getting that much battle points submitted by Spare-Clock-4803 to Rainbow6 [link] [comments]


2025.01.20 05:32 8ackwardsReddit Repost with (hopefully) better pictures. Can anybody help me identify these coins? I got them as part of a random unidentified lot, I believe one of them is Indian and the others are Greek? Much appreciated, thank you. 🙏

Repost with (hopefully) better pictures. Can anybody help me identify these coins? I got them as part of a random unidentified lot, I believe one of them is Indian and the others are Greek? Much appreciated, thank you. 🙏 submitted by 8ackwardsReddit to AncientCoins [link] [comments]


2025.01.20 05:32 icydata DET @ DAL - Matěj Blümel, snap

DET @ DAL - Matěj Blümel, snap submitted by icydata to icydata [link] [comments]


2025.01.20 05:32 Tazx3 Briar mechanic i havent seen anyone talk about yet

Briar mechanic i havent seen anyone talk about yet submitted by Tazx3 to BriarMains [link] [comments]


2025.01.20 05:32 jvc72 Buy Signal Wrapped Bitcoin USD - 20 Jan 2025 @ 00:29 -> USD101 033

Ticker: WBTCUSD
Exchange: CRYPTO
Time: 20 Jan 2025 @ 00:29
Price: USD101 033
Link: https://getagraph.com/crypto-currencies/WBTCUSD/ENG
submitted by jvc72 to getagraph [link] [comments]


2025.01.20 05:32 runnerforever3 I’m curious

To everyone who attended the recent assembly, was the attendance low? I’m curious. I’m hoping it was. It’s like light at the end of the tunnel. I want this cult to have barely anyone showing up.
submitted by runnerforever3 to exjw [link] [comments]


2025.01.20 05:32 sunflowersprinkles98 People that have lived here longer than me - is this weather normal?

Particularly the temps on Monday and Tuesday. I’ve heard the last few winters have been relatively mild, but are these temperatures in the negative considered typical? Will I be laughed at if I ask my boss if I can work from home on Tuesday because of the cold?
I feel like I’ve acclimated decently, but a high of 2 degrees just seems insane. Is this not a big deal to you all? And what about schools, do they close because of the cold? They definitely do where I’m from. But maybe these frigid temperatures are considered normal here and the last few winters have been mild.
submitted by sunflowersprinkles98 to AskChicago [link] [comments]


2025.01.20 05:32 FiacreLaforest First Date?

First Date? submitted by FiacreLaforest to meowskulls [link] [comments]


2025.01.20 05:32 Ferocious-Ford Plane Jump Scare

Plane Jump Scare submitted by Ferocious-Ford to gtaonline [link] [comments]


2025.01.20 05:32 RIP_GerlonTwoFingers Who needs mult?

Who needs mult? Hanging Chad Stone Joker +200 Odd Todd Brainstorm Square Joker +80 Lusty Joker (My only Mult, it’s negative so it stays) Marble Joker
submitted by RIP_GerlonTwoFingers to balatro [link] [comments]


2025.01.20 05:32 EI-Joe Cold outside but sun is so warm

Cold outside but sun is so warm In
submitted by EI-Joe to labrador [link] [comments]


2025.01.20 05:32 himbo_top question about hormones after hysto

i’m trying to get a hysto scheduled (ideally with both ovaries gone) but i’m sort of worried about some things hormone-wise and was hoping some elder trans guys might be able to help me.
i’ve been on T for about 2 years now and i’ve settled at a level that i’m happy with. that includes using finasteride to negate body and facial hair growth which i don’t want for sensory reasons. recently we increased my dose because my levels weren’t consistently high enough and i’m feeling great mentally with it now which is super important.
that being said, i’m wondering if taking the estrogen out of my body via hysto is going to change things. my doctor said that normally they don’t prescribe both e and t to trans people, but i’m concerned that this is the perfect hormone balance to make me feel great and give me the changes i DO want and then after hysto it’s going to go downhill.
so anyway, what changes did y’all experience after your ovaries were out and you were just T-based? mental AND physical, including voice please!
submitted by himbo_top to FTMHysto [link] [comments]


2025.01.20 05:32 SunVast343 Need Money Click Here

https://s.binance.com/h2dBiBFj?utm_medium=app_share_link_reddit
submitted by SunVast343 to space [link] [comments]


2025.01.20 05:32 Radiant-Ad617 Coldplay Mumbai jan 21 ground standing tickets available if anyone need tell me

submitted by Radiant-Ad617 to Coldplaytickets [link] [comments]


2025.01.20 05:32 GuileFem I don't know why, but I still think of this as one of Peter's most insane feats.

I don't know why, but I still think of this as one of Peter's most insane feats. submitted by GuileFem to KillerPeter [link] [comments]


2025.01.20 05:32 Guthix_Wraith [H] Tyranids [W] AOS Storm cast, PayPal [Loc] Missouri

Verification
https://imgur.com/a/IHkZmNX
Have: 1x Broodlord 36x genestealers
Want:
PayPal $90
AOS Stormcast
A single Stormcast dragon dude (or Drake if that's a thing)
Tyranid warriors (highest priority)
submitted by Guthix_Wraith to Miniswap [link] [comments]


2025.01.20 05:32 esiy0676 Taking advantage of ZFS on root with Proxmox VE


Better formatted at: https://free-pmx.github.io/insights/zfs-root/ No tracking. No ads.
Proxmox seem to be heavily in favour of ZFS, including for the root filesystem. In fact, it is the only option in the stock installer 1 if you want to make use of e.g. a mirror. However, the only benefit of ZFS in terms of the Proxmox VE feature set lies in the support for replication 2 across nodes, which for smaller clusters is a perfectly viable alternative to shared storage. Beyond that, Proxmox do not cater for ZFS. For instance, if you make use of Proxmox Backup Server (PBS), 3 there is absolutely no benefit to ZFS in terms of its native snapshot support. 4

NOTE The designations of the various ZFS setups in the Proxmox installer are incorrect - there are no RAID0, RAID1 or other such levels in ZFS. Instead, these are single, striped or mirrored virtual devices that the pool is made up of (and they all still allow for redundancy), while the so-called (and correctly designated) RAIDZ levels are not directly comparable to classical parity RAID (the numbering does not mean what you might expect). This is where Proxmox prioritised ease of onboarding over the opportunity to educate their users, to the users' detriment when consulting the authoritative documentation. 5
ZFS on root
In turn, there are seemingly few benefits to ZFS on root with a stock Proxmox VE install. If you require replication of guests, you absolutely do NOT need ZFS for the host install itself. Instead, creating a ZFS pool (just for the guests) after a bare install would be advisable. Many find this confusing because non-ZFS installs set you up with LVM 6 instead, a configuration you would then need to revert, i.e. delete the superfluous partitioning prior to creating a non-root ZFS pool.
Further, if it were just about mirroring the root filesystem itself, one would get a much simpler setup with a traditional no-frills Linux/md software RAID 7 solution, which does NOT suffer from the write amplification inevitable with any copy-on-write filesystem.
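For illustration only - a minimal sketch of what such an md mirror for a root filesystem could look like, assuming two hypothetical, identically sized partitions /dev/sdX2 and /dev/sdY2 on separate disks; this is not something the stock Proxmox installer sets up:

apt install -y mdadm
# Assemble the two hypothetical partitions into a RAID1 (mirror) array
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdX2 /dev/sdY2
# Put a plain filesystem on top of the mirror
mkfs.ext4 /dev/md0
# Persist the array definition and make the initramfs aware of it
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u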
No support
None of Proxmox's built-in backup features take advantage of the fact that ZFS on root specifically allows for convenient snapshotting, serialisation and sending away of the very filesystem the operating system is running off - and does so very efficiently, both in terms of space utilisation and performance.
Finally, since ZFS is not reliably (over time) supported by common bootloaders - certainly not the bespoke versions of ZFS shipped by Proxmox - it is also necessary to keep juggling the initramfs 8 and the available kernels between the regular /boot directory (which might be inaccessible to the bootloader when it resides on an unusual filesystem such as ZFS) and the EFI System Partition (ESP), which was not originally meant to hold full images of about-to-be-booted systems - something that further complicates the already non-standard setup by requiring bespoke tools. 9
So what are the actual out-of-the-box benefits of ZFS on root with a Proxmox VE install? None whatsoever.
A better way
This might be an opportunity to take a step back and migrate your install away from ZFS on root, or - as we will have a closer look at here - to take real advantage of it. The good news is that it is NOT at all complicated; it only requires a different bootloader solution, one that happens to come with lots of bells and whistles. That, and some understanding of ZFS concepts - but then again, using ZFS only makes sense if we want to put such understanding to good use, as Proxmox do not do this for us.
ZFS-friendly bootloader
A staple of any sensible on-root ZFS install, at least on a UEFI system, is the aptly named ZFSBootMenu (ZBM) 10 - a solution that is actually an easy add-on for an existing system such as Proxmox VE. It will not only allow us to boot with our root filesystem directly off the actual /boot location within it - so no more intimate knowledge of Proxmox's use of bootloaders needed - but also let us keep multiple root filesystems to choose from at any given time. Moreover, it will allow us to create e.g. a snapshot of a cold system before it boots up, similarly to the somewhat more manual (and seemingly tedious) process we went through with the Proxmox installer before - but with just a couple of keystrokes, and natively in ZFS.
Install ZFSBootMenu
Getting an extra bootloader is straightforward. We place it onto the EFI System Partition (ESP), where it belongs (unlike kernels - changing the contents of this partition as infrequently as possible is arguably a great benefit of the approach) and update the EFI variables - our firmware will then default to it the next time we boot. We do not even have to remove the existing bootloader(s); they can stay behind as a backup, and in any case they are easy to install back later on.
As Proxmox do not casually mount the ESP on a running system, we have to do that first. We identify it by its type:
sgdisk -p /dev/sda

Disk /dev/sda: 268435456 sectors, 128.0 GiB
Sector size (logical/physical): 512/512 bytes
Disk identifier (GUID): 6EF43598-4B29-42D5-965D-EF292D4EC814
Partition table holds up to 128 entries
Main partition table begins at sector 2 and ends at sector 33
First usable sector is 34, last usable sector is 268435422
Partitions will be aligned on 2-sector boundaries
Total free space is 0 sectors (0 bytes)

Number  Start (sector)    End (sector)  Size        Code  Name
   1                34            2047  1007.0 KiB  EF02
   2              2048         2099199  1024.0 MiB  EF00
   3           2099200       268435422  127.0 GiB   BF01
It is the one with the partition type shown as EF00 by sgdisk, 11 typically the second partition on a stock PVE install.
TIP Alternatively, you can look for the sole FAT32 partition with lsblk -f, 12 which will also show whether it is already mounted - which is NOT the case on a regular setup. Additionally, you can check with findmnt /boot/efi. 13
Let's mount it: 14
mount /dev/sda2 /boot/efi 
Create a separate directory for our new bootloader and download it there:
mkdir /boot/efi/EFI/zbm
wget -O /boot/efi/EFI/zbm/zbm.efi https://get.zfsbootmenu.org/efi
The only thing left is to tell UEFI where to find it, which in our case is disk /dev/sda and partition 2: 15
efibootmgr -c -d /dev/sda -p 2 -l "EFI\zbm\zbm.efi" -L "Proxmox VE ZBM"

BootCurrent: 0004
Timeout: 0 seconds
BootOrder: 0001,0004,0002,0000,0003
Boot0000* UiApp
Boot0002* UEFI Misc Device
Boot0003* EFI Internal Shell
Boot0004* Linux Boot Manager
Boot0001* Proxmox VE ZBM
We named our boot entry Proxmox VE ZBM and it became the default, i.e. the first one to be attempted at boot. We can now reboot and will be presented with the new bootloader.
---8<---
If we do not press anything, it will simply boot off our root filesystem stored in the rpool/ROOT/pve-1 dataset - more on this term below. It is that easy.
ZFS does things differently
While a proper introduction to ZFS is well beyond the scope here, it is important to summarise the basics in terms of how it differs from a "regular" setup.
ZFS is not a mere filesystem; it doubles as a volume manager (much like LVM). If it were not for UEFI's requirement of a separate EFI System Partition with a FAT filesystem - something that ordinarily shares the same (or the sole) disk in the system - it would be possible to present the entire physical medium to ZFS and skip regular partitioning 16 altogether.
In fact, the OpenZFS docs boast 17 that a ZFS pool is a "full storage stack capable of replacing RAID, partitioning, volume management, fstab/exports files and traditional single-disk file systems." This is because a pool can indeed be made up of multiple so-called virtual devices (vdevs). This is just a matter of conceptual approach, as the most basic vdev is nothing more than what would otherwise be considered a block device: a disk, a traditional partition of a disk, or even just a file.
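As a hedged illustration of that point - not something the stock installer does - handing whole disks straight to ZFS could look as follows, with /dev/sdX and /dev/sdY standing in for two hypothetical disks dedicated to a data pool:

# Hypothetical example: a mirrored pool created directly on two whole disks,
# no manual partitioning needed - ZFS labels the devices itself
zpool create -o ashift=12 tank mirror /dev/sdX /dev/sdY
# A dataset appears mounted at /tank/guests right away
zfs create tank/guests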
IMPORTANT It is often overlooked that vdevs, when combined (e.g. into a mirror), themselves constitute a vdev, which is why it is possible to create e.g. striped mirrors without much thinking about it.
Vdevs are organised in a tree-like structure, and the top-most vdev in such a hierarchy is therefore considered the root vdev. The simpler and more commonly used term for the entirety of this structure is, however, a pool.
We are not particularly interested in the substructure of the pool here - after all, a typical PVE install with a single-vdev pool (but also any other setup) results in a single pool named rpool being created, which can simply be seen as a single entry: 18
zpool list

NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
rpool   126G  1.82G   124G        -         -     0%     1%  1.00x  ONLINE  -
But a pool is not a filesystem in the traditional sense, even though it can appear as one. Without any special options specified, creating a pool such as rpool indeed results in a filesystem getting mounted at the /rpool location, which can be checked as well:
findmnt /rpool

TARGET SOURCE FSTYPE OPTIONS
/rpool rpool  zfs    rw,relatime,xattr,noacl,casesensitive
But this pool as a whole is not really our root filesystem per se, i.e. rpool is not what is mounted to / upon system start. If you explore further, there is a structure to the /rpool mountpoint:
apt install -y tree
tree /rpool

/rpool
├── data
└── ROOT
    └── pve-1

4 directories, 0 files
These are called datasets in ZFS parlance (and they are indeed equivalent to regular filesystems, except for special types such as zvols) and would ordinarily be mounted into their respective (or intuitive) locations - but if you went to explore the directories further on a PVE install specifically, you would find them empty.
The existence of datasets can also be confirmed with another command: 19
zfs list

NAME               USED  AVAIL  REFER  MOUNTPOINT
rpool             1.82G   120G   104K  /rpool
rpool/ROOT        1.81G   120G    96K  /rpool/ROOT
rpool/ROOT/pve-1  1.81G   120G  1.81G  /
rpool/data          96K   120G    96K  /rpool/data
rpool/var-lib-vz    96K   120G    96K  /var/lib/vz
This also gives a hint of where each of them will have its mountpoint - and the mountpoints do NOT have to be analogous to the dataset hierarchy.
IMPORTANT A mountpoint as listed by zfs list does not necessarily mean that the filesystem is actually mounted there at the given moment.
Datasets may look like directories, but they can - as in this case - be independently mounted (or not) anywhere into the filesystem at runtime. The root filesystem mounted at / is a perfect example: it is actually held by the rpool/ROOT/pve-1 dataset.
IMPORTANT Do note that paths of datasets start with a pool name, which can be arbitrary (the rpool here has no special meaning to it), but they do not contain the leading / as an absolute filesystem path would.
Mounting of regular datasets happens automatically, something that in the case of the PVE installer results in superfluous directories like /rpool/ROOT, which are virtually empty. You can confirm such an empty dataset is mounted and even unmount it without any ill effects:
findmnt /rpool/ROOT

TARGET      SOURCE     FSTYPE OPTIONS
/rpool/ROOT rpool/ROOT zfs    rw,relatime,xattr,noacl,casesensitive

umount -v /rpool/ROOT

umount: /rpool/ROOT (rpool/ROOT) unmounted
Some default datasets for Proxmox VE are simply not mounted and/or accessed under /rpool - a testament to how disentangled datasets and mountpoints can be.
You can even go about deleting such (unmounted) subdirectories. You will however notice that - even if the umount command does not fail - the mountpoints will keep reappearing.
But there is nothing in the usual mounts list as defined in /etc/fstab 20 which would imply where they are coming from:
cat /etc/fstab

#
proc /proc proc defaults 0 0
The issue is that mountpoints are handled differently when it comes to ZFS. Everything goes by the properties of the datasets, which can be examined: 21
zfs get mountpoint rpool

NAME   PROPERTY    VALUE   SOURCE
rpool  mountpoint  /rpool  default
This will be the case for all of them except the explicitly specified ones, such as the root dataset:
NAME              PROPERTY    VALUE  SOURCE
rpool/ROOT/pve-1  mountpoint  /      local
When you do NOT specify a property on a dataset, it is typically inherited by child datasets from their parent (that is what the tree structure is for), and there are defaults for when all of them (in the path) are left unspecified. This is generally meant to provide the friendly behaviour of a new dataset immediately appearing as a mounted filesystem in a predictable path - so with ZFS, we should not be caught by surprise by this.
It is completely benign to stop mounting empty parent datasets when all their children have a locally specified mountpoint property, and we can absolutely do that right away: 22
zfs set mountpoint=none rpool/ROOT 
Even the empty directories will NOW disappear. And this will be remembered upon reboot.
TIP It is actually possible to specify mountpoint=legacy, in which case the dataset can be managed like a regular filesystem via /etc/fstab.
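A brief, purely hypothetical illustration of that option, using the stock rpool/data dataset - we keep the native behaviour in this guide:

# Hypothetical example of legacy mounting - NOT applied in this guide
zfs set mountpoint=legacy rpool/data
# From now on the dataset is mounted the traditional way, e.g. via /etc/fstab:
#   rpool/data  /rpool/data  zfs  defaults  0  0
mount -t zfs rpool/data /rpool/data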
So far we have not really changed any behaviour - we have just learned some ZFS basics and ended up with a neater mountpoint situation:
rpool             1.82G   120G    96K  /rpool
rpool/ROOT        1.81G   120G    96K  none
rpool/ROOT/pve-1  1.81G   120G  1.81G  /
rpool/data          96K   120G    96K  /rpool/data
rpool/var-lib-vz    96K   120G    96K  /var/lib/vz
Forgotten reservation
It is fairly strange that PVE takes up the entire disk by default and calls such a pool rpool, as it is obvious that the pool WILL have to be shared with datasets other than the one(s) holding the root filesystem(s).
That said, you can create separate pools, even with the standard installer - by giving it a size smaller than the full available disk:
---8<---
But the issue for us is not so much the naming or the separation as the fact that a non-root dataset, e.g. a guest without any quota set, CAN fill up the entire rpool. We should at least do the minimum to ensure there is always ample space for the root filesystem. We could meticulously set quotas on all the other datasets, but instead we really should make a reservation for the root one - or, more precisely, a refreservation: 23
zfs set refreservation=16G rpool/ROOT/pve-1 
This guarantees that 16G is reserved for the root dataset under all circumstances. Of course it does not prevent us from filling up that space with some runaway process, but it cannot be usurped by other datasets, such as the guests.
TIP The refreservation reserves space for the dataset itself, i.e. the filesystem occupying it. If we were to set a plain reservation instead, the limit would also cover e.g. all possible snapshots and clones of the dataset, which we do NOT want.
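For completeness, the quota approach mentioned above (and decided against here) would look something like this - a sketch only, with an arbitrary example value applied to the stock guests' dataset:

# Alternative, not used in this guide: cap the guests' dataset instead of
# reserving space for root - an arbitrary 100G limit on rpool/data
zfs set quota=100G rpool/data
# Verify the property took effect
zfs get quota rpool/data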
A fairly useful command to make sense of space utilisation in a ZFS pool and all its datasets is:
zfs list -ro space  
This makes a distinction between USEDDS (space used by the dataset itself), USEDCHILD (used only by the children datasets), USEDSNAP (used by snapshots), USEDREFRESERV (the buffer kept available when a refreservation was set) and USED (everything together). None of these should be confused with AVAIL, the space available to each particular dataset and to the pool itself - for datasets that have a refreservation set, AVAIL includes their USEDREFRESERV, but not for the others.
Snapshots and clones
We almost forgot about our new bootloader, but the whole point of doing all this was to take advantage of ZFS features without much extra tooling. It would be great if we could e.g. take a copy of a filesystem at an exact point in time - say, before a risky upgrade - and know we can revert to it, i.e. boot from it, should anything go wrong. ZFS allows for this with its snapshots, which record exactly the kind of state we need. They are very fast to create, as they initially take up no space - a snapshot is simply a marker on the filesystem state from which point on changes will be tracked. As more changes accumulate, the snapshot keeps taking up more space. Once it is no longer needed, it is just a matter of ditching the snapshot, which drops the "tracked changes" data.
ZFS snapshots, however, are read-only. They are great for e.g. recovering a customised configuration file that has since been accidentally overwritten, or for rolling back to - but not for booting from if we also want to retain the current dataset state, as a rollback would take us back in time without the ability to jump forward again. For that, a snapshot can be turned into a clone.
It is very easy to create a snapshot off an existing dataset and then check for its existence: 24
zfs snapshot rpool/ROOT/pve-1@snapshot1
zfs list -t snapshot

NAME                         USED  AVAIL  REFER  MOUNTPOINT
rpool/ROOT/pve-1@snapshot1   300K      -  1.81G  -
Note the naming convention: the @ separator follows the preceding dataset name - the snapshot belongs to that dataset.
We can then perform some operation, such as an upgrade, and check again to see the used space increasing:
NAME                         USED  AVAIL  REFER  MOUNTPOINT
rpool/ROOT/pve-1@snapshot1  46.8M      -  1.81G  -
Clones can only be created from a snapshot. Let's create one now as well: 25
zfs clone rpool/ROOT/pve-1@snapshot1 rpool/ROOT/pve-2 
As clones are as capable as a regular dataset, they are listed as such:
zfs list

NAME               USED  AVAIL  REFER  MOUNTPOINT
rpool             17.8G   104G    96K  /rpool
rpool/ROOT        17.8G   104G    96K  none
rpool/ROOT/pve-1  17.8G   120G  1.81G  /
rpool/ROOT/pve-2     8K   104G  1.81G  none
rpool/data          96K   104G    96K  /rpool/data
rpool/var-lib-vz    96K   104G    96K  /var/lib/vz
Do notice that both pve-1 and the cloned pve-2 refer to the same amount of data, and the available space did not drop - except that pve-1 has our refreservation set, which guarantees it its very own claim on extra space, whilst that is not the case for the clone. Clones simply take up no extra space until they start to refer to data other than the original's.
Importantly, the mountpoint was inherited from the parent - the rpool/ROOT dataset, which we had previously set to none.
TIP It is quite safe NOT to have unused clones mounted at all times, and it does not preclude us from mounting them on demand, if need be:
mount -t zfs -o zfsutil rpool/ROOT/pve-2 /mnt 
Running system backup
There is always one issue with the approach above, however. When creating a snapshot, even at a fixed point in time, there might be processes running whose state is partly not on disk but e.g. resides in RAM, yet is crucial to the system's consistency - i.e. such a snapshot might capture a corrupt state, as we are not capturing anything that was in-flight. A prime candidate for such a fragile component is a database, something Proxmox heavily relies on with its own pmxcfs. Indeed, the proper way to snapshot a system like this while it is running is more convoluted: the database has to be given special consideration, e.g. be temporarily shut down, or the configuration state presented under /etc/pve has to be backed up by its own means.
This can, however, be resolved in a more streamlined way - by performing all the backup operations from a different environment, i.e. not from the running system itself. For the root filesystem, we would have to boot into a different environment, such as when we created a full backup from a rescue-like boot. But that is relatively inconvenient - and, in our case, not necessary, because we have a ZFS-aware bootloader with extra tools.
We will ditch the potentially inconsistent clone and snapshot and redo them - as they depend on each other, we need to go in reverse order:
WARNING Exercise EXTREME CAUTION when issuing zfs destroy 26 commands - there is NO confirmation prompt and it is easy to execute them without due care. In particular, omitting the snapshot name part following @ can remove an entire dataset when the -r and -f switches are passed, which is why we will NOT use them here.
It might be a good idea to prepend these commands with a space character, which on a common Bash shell setup keeps them from being recorded in the history and accidentally re-executed. This is also one of the reasons to avoid running everything under the root user all of the time.
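Note that the leading-space trick only works if the shell is configured for it; a quick check on a typical Bash setup:

# The leading-space trick depends on Bash's HISTCONTROL setting
echo "$HISTCONTROL"              # should contain ignorespace or ignoreboth
export HISTCONTROL=ignoreboth    # enable it for the current session if needed
 zfs list -t snapshot            # note the leading space - kept out of history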
zfs destroy rpool/ROOT/pve-2
zfs destroy rpool/ROOT/pve-1@snapshot1
Booting directly off ZFS
Before we start exploring our bootloader and its convenient features, let us first appreciate how it knew how to boot us into the current system right after installation. We did NOT have to update any boot entries, as would have been the case with other bootloaders.
Boot environments
We simply let EFI know where to find the bootloader itself, and it then found our root filesystem, just like that. It did this by sweeping the available pools, looking for datasets with / mountpoints and then looking for kernels in the /boot directory - of which we have only one instance. There are further rules at play regarding the so-called boot environments - which you are free to explore further 27 - but we happen to have satisfied them.
Kernel command line
The bootloader also happens to append kernel command line parameters 28 - as we can check for the current boot: 29
cat /proc/cmdline

root=zfs:rpool/ROOT/pve-1 quiet loglevel=4 spl.spl_hostid=0x7a12fa0a
Where did these come from? Well, rpool/ROOT/pve-1 was intelligently found by our bootloader. The hostid parameter is added for the kernel - something we briefly touched on before in the post on rescue boot. This is part of the Solaris Porting Layer (SPL) and lets the kernel learn the /etc/hostid 30 value even though the file would not be accessible within the initramfs - something we will keep out of scope here.
The rest are defaults, which we can change to our own liking. You might have already sensed that this will be as elegant as the overall approach, i.e. no initramfs rebuilds needed - that is, after all, the objective of this entire escapade with ZFS booting. And indeed it is, via the ZFS dataset property org.zfsbootmenu:commandline - obviously specific to our bootloader. 31
We can make our boot verbose by simply omitting quiet from the command line:
zfs set org.zfsbootmenu:commandline="loglevel=4" rpool/ROOT/pve-1 
The effect could be observed on the next boot off this dataset.
IMPORTANT Do note that we did NOT include the root= parameter. If we had, it would have been ignored, as this is determined and injected by the bootloader.
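To double-check what the bootloader will pick up, the property can simply be read back - a small verification step not in the original walkthrough:

# Read back the user property we just set on the root dataset
zfs get org.zfsbootmenu:commandline rpool/ROOT/pve-1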
Forgotten default
Proxmox VE comes with a very unfortunate default for the ROOT dataset, and thus for all its children. It does not cause any issues insofar as we do not start adding multiple child datasets with alternative root filesystems, but it is unclear what the reason for it was, since even the default install invites us to create more of them by naming the stock one pve-1.
More precisely, if we went on and added more datasets with mountpoint=/ - something we actually WANT so that our bootloader can recognise them as menu options - there is another tricky property that should NOT really be set on any root dataset, namely canmount=on, which is a perfectly reasonable default for any OTHER dataset.
The canmount property 32 determines whether a dataset can be mounted and whether it will be auto-mounted on a pool import event. The current value of on would cause all the children of rpool/ROOT to be automounted when calling zpool import -a - and this is exactly what Proxmox set us up with via its zfs-import-scan.service, i.e. such an import happens on every startup.
It is nice to have pools auto-imported and mounted, but this is a horrible idea when multiple datasets are set up with the same mountpoint, as is the case with a root pool. We will set it to noauto so that this does not happen to us once we later have multiple root filesystems. This will apply to all future child datasets, but we also set it explicitly on the existing one. Unfortunately, there appears to be a ZFS bug whereby it is impossible to issue zfs inherit 33 on a dataset that is currently mounted.
zfs set canmount=noauto rpool/ROOT
zfs set -u canmount=noauto rpool/ROOT/pve-1
NOTE Setting root datasets not to be automatically mounted does not really cause any issues, as the pool is already imported and the root filesystem mounted based on the kernel command line.
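A quick way to confirm the result across the whole ROOT subtree - an extra check, not part of the original steps:

# All current and future root datasets should now report canmount=noauto
zfs get -r canmount rpool/ROOT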
ZFSBootMenu
Now, finally, let's reboot and press ESC before the 10-second timeout passes on our bootloader screen. The boot menu could not be more self-explanatory; after what we have learnt above, we should be able to orient ourselves easily:
---8<---
We can see the only dataset available, pve-1; we see that the kernel 6.8.12-6-pve is about to be used, as well as the complete command line. What is particularly neat, however, are all the other options (and shortcuts) here. One can also switch between the different screens with the left and right arrow keys.
For instance, on the Kernels screen we would see (and be able to choose) an older kernel:
---8<---
We can even make it the default with C^D (i.e. the CTRL+D key combination), as the footer hints - this is what Proxmox call "pinning a kernel" and wrap into their own extra tooling, which we do not need.
We can also see the Pool Status, explore the logs with C^L, or get into a Recovery Shell with C^R - all without any need for an installer, let alone a bespoke one that supports ZFS to begin with. We can even hop into a chroot environment with ease using C^J. This bootloader simply doubles as a rescue shell.
Snapshot and clone
But we are not here for that now. We will navigate to the Snapshots screen and create a new one with C^N; we will name it snapshot1. Wait a brief moment - and we have one:
---8<---
If we were to just press ENTER on it, it would "duplicate" it into a fully fledged standalone dataset (that would be an actual copy), but we are smarter than that - we only want a clone - so we press C^C and name it pve-2. This is a quick operation, and we get what we expected:
---8<---
We can now make the pve-2 dataset our default boot option with a simple press of C^D on the selected entry - this sets the bootfs property on the pool (NOT the dataset), something we had not talked about before, but it is so conveniently transparent to us that we can abstract from it entirely.
Clone boot
If we boot into pve-2 now, nothing will appear any different, except that our root filesystem is running off a cloned dataset:
findmnt /

TARGET SOURCE           FSTYPE OPTIONS
/      rpool/ROOT/pve-2 zfs    rw,relatime,xattr,posixacl,casesensitive
And both datasets are available:
zfs list

NAME               USED  AVAIL  REFER  MOUNTPOINT
rpool             33.8G  88.3G    96K  /rpool
rpool/ROOT        33.8G  88.3G    96K  none
rpool/ROOT/pve-1  17.8G   104G  1.81G  /
rpool/ROOT/pve-2    16G   104G  1.81G  /
rpool/data          96K  88.3G    96K  /rpool/data
rpool/var-lib-vz    96K  88.3G    96K  /var/lib/vz
We can also check the new default we set through the bootloader: 34
zpool get bootfs

NAME   PROPERTY  VALUE             SOURCE
rpool  bootfs    rpool/ROOT/pve-2  local
Yes, this means there is also an easy way to change the default boot dataset for the next reboot from a running system: 35
zpool set bootfs=rpool/ROOT/pve-1 rpool 
And if you wonder about the default kernel, it is set in the org.zfsbootmenu:kernel property.
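Presumably it can be set the same way as the command line property; the exact value format (a kernel version or filename substring to match) is best confirmed against the ZFSBootMenu documentation. A hedged example, reusing the kernel version seen earlier in the menu:

# Hedged example - prefer the 6.8.12-6-pve kernel for this boot environment
zfs set org.zfsbootmenu:kernel=6.8.12-6-pve rpool/ROOT/pve-2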
Clone promotion
Now suppose we have not only tested what we needed in our clone, but we are so happy with the result that we want to keep it instead of the original dataset off which its snapshot was created. That sounds like a problem, as a clone depends on a snapshot, which in turn depends on its dataset. This is exactly what promotion is for. We can simply:
zfs promote rpool/ROOT/pve-2 
Nothing will appear to have happened, but if we check on pve-1:
zfs get origin rpool/ROOT/pve-1

NAME              PROPERTY  VALUE                       SOURCE
rpool/ROOT/pve-1  origin    rpool/ROOT/pve-2@snapshot1  -
Its origin now appears to be a snapshot of pve-2 instead - the very snapshot that was previously made off pve-1.
And indeed it is now pve-2 that has the snapshot:
zfs list -t snapshot rpool/ROOT/pve-2

NAME                         USED  AVAIL  REFER  MOUNTPOINT
rpool/ROOT/pve-2@snapshot1  5.80M      -  1.81G  -
We can now even destroy pve-1 and the snapshot as well:
zfs destroy rpool/ROOT/pve-1
zfs destroy rpool/ROOT/pve-2@snapshot1
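One loose end worth tying up: earlier we noted the clone did not inherit the refreservation that pve-1 had. If pve-2 is now our primary root filesystem, it seems sensible to give it the same guarantee - the value below simply mirrors the one used before:

# The promoted clone is now our root filesystem - re-apply the space guarantee
zfs set refreservation=16G rpool/ROOT/pve-2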
And if you wonder - yes, there is an option in the boot menu itself to clone and promote in one go: the C^X shortcut.
Accomplishments
We now have quite a complete feature set for a ZFS-on-root install. We can create snapshots before risky operations and roll back to them, and - on a more sophisticated level - keep several clones of our root dataset, any of which we can decide to boot off on a whim.
None of this requires intricate bespoke boot tools that copy files around from /boot to the EFI System Partition and keep it "synchronised", or that need the menu options rebuilt every time a new kernel comes along.
Most importantly, we can do all the sophisticated operations NOT on a running system, but from a separate environment while the host system is not running, thus achieving the best possible backup quality in which we do not risk any corruption. And the host system? Does not know a thing. And does not need to.
In fact, we did not even bother to remove the original bootloader, and it would continue to boot if we were to re-select it in UEFI - well, as long as it finds its target at rpool/ROOT/pve-1. But we could just as well go and remove it, similarly to when we installed GRUB instead of systemd-boot.
Finally, there are popular tokens of "wisdom" around, such as "a snapshot is not a backup", but they are not particularly meaningful. A backup is only as good as it is safe from the consequences of the inadvertent actions we expect. A snapshot is as safe as the system that has access to it - no less so than a tar archive stored in a separate location whilst still accessible from the same system. Of course, that does not mean it would be futile to send our snapshots somewhere else entirely - something we can still easily do with the serialisation ZFS provides for. But that is for another time.
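As a small taste of what that serialisation could look like - a hedged sketch only, with a hypothetical backup host and pool name; the details are left for another time, as the author says:

# Hypothetical example: serialise a snapshot of the root dataset and ship it
# to another machine over SSH ("backup-host" and "backuppool" are made up)
zfs snapshot rpool/ROOT/pve-2@offsite1
zfs send rpool/ROOT/pve-2@offsite1 | ssh root@backup-host zfs receive backuppool/pve-2-copy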
Enjoy your proper ZFS-friendly bootloader - one that actually understands your storage stack better than a stock Debian install ever would.
submitted by esiy0676 to ProxmoxQA [link] [comments]


2025.01.20 05:31 Bangarz Rate my setup

iPad Pro m4 11”
I think I’m good with the setup now, thoughts?
submitted by Bangarz to ipad [link] [comments]


2025.01.20 05:31 playboyyss BUIDL Automatic LP.

5% of the trading fees return to the liquidity pool, ensuring BUIDL's collateral value keeps increasing. There is no manual override able to pause or stop liquidity from being added. This allows for complete APY sustainability until the maximum supply is reached.
submitted by playboyyss to DigitalCryptoWorld [link] [comments]


2025.01.20 05:31 TRWars Skeleton Crew - Animation style of End Credits?

The animation used to convey the stories on Wim's tablet, and the end credits montage, was absolutely fascinating. I'm not 100% convinced it's CGI versus photographed laser-cut acrylic, but it's likely digital.
Does anyone have a name for that style or method of diorama? I really enjoyed the aesthetic and the details of the lighting and reflections.
submitted by TRWars to StarWars [link] [comments]


2025.01.20 05:31 Bigjiggle87 Jalen green

Take jalen green on prediction strike for a 2x spark.
Green has gone over 22 points in 8 STRAIGHT GAMES and has 22+ points in 9/9 games in 2025
And he’s over in 11/12 without Jabari Smith (only miss was 19 points shooting 5/14)
On top of that, he's averaging 20 shots per game during this 8-game over streak, which is huge as he's gone over in 42/50 games with 20+ shots
Which gives him a great chance to have a ton of fantasy points as the Pistons DO NOT have anyone on defense with the speed and quickness to guard Green
submitted by Bigjiggle87 to PredictionStrike [link] [comments]


2025.01.20 05:31 bryle_m 阪神淡路大震災 阪急全線記録復旧への1405日 (The Great Hanshin-Awaji Earthquake: 1,405 Days to Restore the Hankyu Line)

submitted by bryle_m to transit [link] [comments]


2025.01.20 05:31 Professional_Cap7660 Career ideas

Hey my fellow managers!
Since people often ask what realistic transfers they could do, I thought I'd sit down and create some transfer ideas for several clubs. Maybe this also gives you folks some inspiration on which team to lead next! :)
Little note and advice: The outgoing transfers are based on what is currently happening in real life, for example Salah leaving at the end of the season. If you want it to be realistic, you could transfer the players to the correct club before you start the career mode and give yourself a financial boost to simulate the incoming transfer sum.
Also, disable the first transfer window!!! Pretend to start after the 25/26 summer window.
Stars indicating difficulty: 1 easy up to 5 super hard
Liverpool - Not much activity, but replacing the gaps *
Out: Salah (Saudi?), TAA (Madrid)
In: Nico Williams (Athletic Bilbao), Davies (Bayern, on a free) / Aina (Forest) / Tiago Santos (Lille), Martin Zubimendi (Sociedad)
ManUtd - Big overhaul with Ruben Amorim ****
Out: Rashford (Bayern/PSG/Milan), Zirkzee (loan, Juve), Antony (loan, Ajax), Casemiro (Brazil/Saudi), Eriksen (Fulham), Lindelöf (Benfica)
In: Ederson (Atalanta), Adam Wharton (Palace), Chris Rigg (Sunderland), Patrick Dorgu (Lecce)
If you're unhappy with Onana, you can replace him with Verbruggen or D. Costa.
Arsenal - Injecting youth into the squad after losing a top CB to Madrid **
Out: Gabriel (Real Madrid), Trossard (Brighton return), Jorginho (Roma), Partey (Valencia)
In: Murillo (Forest), Enzo Millot (Stuttgart), Christos Mouzakitis (Olympiakos)
ManCity - new hungry players, actually cutting a little cost of the wage bill *
Out: De Bruyne (Saudi), Grealish (Villa return), Bernardo Silva (Bayern)
In: Dani Olmo (Free), Jeremie Frimpong (Bayer 04)
Dortmund - Reinforcing the squad to attack top 3 by selling top players to England ***
Out: Gittens (Chelsea), Malen (Newcastle), Nmecha (Bournemouth)
In: Jobe Bellingham (Sunderland), Sverre Nypan (Rosenborg), Martin Baturina (Zagreb), Taylor Harwood Bellis (Southampton), Gianluca Prestianni (Benfica)
Bayern Munich - The era after Neuer, bringing in top talent to challenge for UCL glory **
Out: Goretzka (Newcastle), Sane (Arsenal), Upamecano (PSG), Davies (Liverpool/Real), Neuer (Schalke return)
In: Rashford (ManUtd), Theo Hernandez (Milan), Verbruggen (Brighton), Florian Wirtz (Bayer 04), Ousmane Diomande (Sporting), Frenkie de Jong (Barca), Samu (Porto)
Leipzig - Klopp's impact forging RB into a world-beating machine ***
Out: Kampl (Austria Wien), Poulsen (RB Salzburg), Andre Silva (Villareal), Klostermann (Gladbach)
In: Xavi (permanent transfer), Lucas Chevalier (Lille), Antonio Silva (Benfica), J.P. van Hecke (Brighton), Cheick Doucoure (Palace)
Werder Bremen - Make Bremen great again with smart transfers and loans (challenge for european places) *****
Out: Milos Veljkovic (Sevilla), Marvin Duksch (Leeds United)
In: Issa Kabore (ManCity), Mathys Tel (Loan), Jaouen Hadjam (Young Boys), Noah Markmann (Nordsjaelland), Giorgi Chakvetadze (Watford), Jordan James (Rennes, loan)
Barcelona - Wallet is tight, superstar Lewa old, Olmo and Pau Victor not registered. Can you challenge Madrid? ***
Out: Olmo (ManCity), Pau Victor (Inter Milan), de Jong (Bayern)
In: Gyokeres (Sporting)
If you want to up the difficulty, edit ter Stegen and drop his stats, transfer him anywhere (career ending injury). Apart from Gyokeres, you must promote La Masia talent.
Real Madrid - Freshen the squad for the era after Kroos, Modric etc. **
Out: Modric (Zagreb), Carvajal (Sevilla), Rodrygo (Arsenal or ManUtd), Vallejo (Granada)
In: Gabriel (Arsenal), Trent Alexander Arnold (Liverpool), Grimaldo (Bayer 04)
AC Milan - Conceição revolution, make Milan feared again in Europe ***
Out: Florenzi (Monza), Hernandez (Bayern), Jovic (Frankfurt), Jimenez (Watford)
In: Diogo Leite (Union), Pepe (Porto), Rashford (ManUtd), Marc Guehi (Palace)
submitted by Professional_Cap7660 to seriousfifacareers [link] [comments]

