6.9.0
Version 6.9.0 2021-02-27
Summary of New Features
Multiple Pools
This feature permits you to define up to 35 named pools, each containing up to 30 storage devices. Pools are created and managed via the Main page.
- Note: A pre-6.9.0 cache disk/pool is now simply a pool named "cache". When you upgrade a server which has a cache disk/pool defined, a backup of config/disk.cfg will be saved to config/disk.cfg.bak, and then the cache device assignment settings are moved out of config/disk.cfg and into a new file, config/pools/cache.cfg. If you later revert to a pre-6.9.0 Unraid OS release you will lose your cache device assignments and will have to manually re-assign devices to cache. As long as you re-assign the correct devices, the data should remain intact. (A quick way to verify the migration is sketched below.)
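If you want to confirm the migration after upgrading, you can inspect the flash device from the console. This is only a sketch; the paths assume the stock Unraid layout, where the flash drive is mounted at /boot:

```
# Sketch: verify the 6.9.0 pool-config migration (assumes the Unraid
# flash device is mounted at /boot, the stock location).
ls -l /boot/config/disk.cfg.bak     # backup of the pre-6.9.0 disk.cfg
ls -l /boot/config/pools/cache.cfg  # cache assignments in their new home
```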
When you create a user share, or modify an existing user share, you can specify which pool is to be associated with that share. The assigned pool functions identically to the current cache pool operation.
Note that when a directory listing is obtained for a share, the Unraid array disk volumes and all pools which contain that share are merged in this order:
- the pool assigned to the share
- disk1 through disk28
- all other pools, in [strverscmp()](https://man7.org/linux/man-pages/man3/strverscmp.3.html) order
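To see what strverscmp() ordering means in practice, GNU coreutils' version sort uses a closely related comparison, so it can serve as a quick approximation; note that pool2 sorts before pool10, unlike in a plain lexical sort:

```
# Sketch: strverscmp()-style ordering of some hypothetical pool names.
printf '%s\n' pool10 pool2 cache | sort -V
# cache
# pool2
# pool10
```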
A single-device pool may be formatted with either xfs, btrfs, or (deprecated) reiserfs. A multiple-device pool may only be formatted with btrfs. A future release will include support for multiple "Unraid array" pools, as well as a number of other pool types.
- Note: Something else to be aware of: let's say you have a 2-device btrfs pool. This will be what btrfs calls "raid1", and what most people would understand to be "mirrored disks". This is mostly true, in that the same data exists on both disks, but not necessarily at the block level. Now let's say you create another pool, and what you do is un-assign one of the devices from the existing 2-device btrfs pool and assign it to this new pool. You now have two single-device btrfs pools. Upon array Start, a user might understandably assume there are now two pools with exactly the same data. However, this is not the case. Instead, when Unraid OS sees that a btrfs device has been removed from an existing multi-device pool, upon array Start it will run wipefs on that device so that, upon mount, it will not be included in the old pool. This, of course, effectively deletes all the data on the moved device.
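Before un-assigning a device from a multi-device btrfs pool, it can be worth confirming from the console which devices actually belong to the filesystem; `btrfs filesystem show` lists the member devices of a btrfs volume (the mount point below is illustrative, /mnt/cache being the usual location of a pool named "cache"):

```
# Sketch: list the member devices of a btrfs pool before un-assigning one.
btrfs filesystem show /mnt/cache
```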
Additional btrfs balance options
A multiple-device pool is still created by default using the btrfs raid1 profile. If you have three or more devices in a pool, you may now re-balance to the raid1c3 profile (3 copies of the data, each on a different device). If you have four or more devices in a pool, you may re-balance to raid1c4 (4 copies of the data, each on a different device). We have also modified the raid6 balance operation to set metadata to raid1c3 (it was previously raid1).
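Unraid drives these conversions from the webGUI, but under the hood this is an ordinary btrfs balance with convert filters; a rough console equivalent (with an illustrative mount point) looks like:

```
# Sketch: convert a pool's data and metadata to the raid1c3 profile
# (requires 3 or more devices; use raid1c4 with 4 or more devices).
btrfs balance start -dconvert=raid1c3 -mconvert=raid1c3 /mnt/cache
```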
However, we have noticed that applying one of these balance filters to a completely empty volume can leave behind data chunks with the previous profile. The solution is simply to run the same balance again. We consider this a btrfs bug, and if no solution is forthcoming we will add a second balance by default. For now, it is left as-is.
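Whether any chunks with the old profile remain can be checked from the console, since `btrfs filesystem df` reports the profile of each allocated chunk type; if an old profile still appears in the output, simply run the same balance a second time:

```
# Sketch: check which chunk profiles are currently allocated.
# A leftover line such as "Data, RAID1: ..." after a raid1c3 convert
# means the balance should be run again.
btrfs filesystem df /mnt/cache
```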
SSD 1 MiB partition alignment
We have added another partition layout, where the start of partition 1 is aligned on a 1 MiB boundary. That is, for devices which present 512-byte sectors, partition 1 will start in sector 2048; for devices which present 4096-byte sectors, in sector 256. This partition layout is now used when formatting any unformatted non-rotational storage (and only non-rotational storage).
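You can check which layout an existing device uses from the console; on a device presenting 512-byte sectors, partition 1 starting at sector 2048 indicates the new 1 MiB-aligned layout:

```
# Sketch: inspect the start sector of partition 1 (device name illustrative).
fdisk -l /dev/sdb
# A "Start" value of 2048 for /dev/sdb1 means 2048 * 512 B = 1 MiB alignment.
```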
It is not clear what benefit 1 MiB alignment offers. For some SSD devices you won't see any difference; for others there may be a big performance difference. LimeTech does not recommend re-partitioning an existing SSD device unless you have a compelling reason to do so (or your OCD just won't let it be).
To re-partition an SSD, the existing partition structure on the device must first be erased. Of course, this will erase all data on the device. Probably the easiest way, with the array Stopped, is to identify the device to be erased and use the 'blkdiscard' command:
blkdiscard /dev/xxx # for example /dev/sdb or /dev/nvme0n1, etc.
Warning: be sure you type the correct device identifier, because all data on that device will be lost!
On the next array Start, the device will appear as unformatted; since there is now no partition structure on the device, Unraid OS will create one.
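If you want to confirm the discard worked before starting the array, a wiped device should report no partitions and no filesystem signatures (device name illustrative):

```
# Sketch: confirm the device is now blank.
fdisk -l /dev/sdb   # should list no partitions
blkid /dev/sdb      # should print nothing (no filesystem signature found)
```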
- Note: If you want to re-partition an SSD-based cache disk/pool and preserve the data, consider posting on the Unraid community forums to get help specific to your configuration. Also refer to this post in the Prereleases board.