Version 7.0.0 2025-01-09
This version of Unraid OS includes significant improvements across all subsystems, while attempting to maintain backward compatibility as much as possible.
Special thanks to:
- @bonienl, @dlandon, @ich777, @JorgeB, @SimonF, and @Squid for their direction, support, and development work on this release
- @bonienl for merging their Dynamix File Manager plugin into the webgui
- @Squid for merging their GUI Search and Unlimited Width Plugin plugins into the webgui
- @ludoux (Proxy Editor plugin) and @Squid (Community Applications plugin) for pioneering the work on http proxy support, of which several ideas have been incorporated into the webgui
- @ich777 for maintaining third-party driver plugins, and for the Tailscale Docker integration
- @SimonF for significant new features in the Unraid OS VM Manager
- @EDACerton for development of the Tailscale plugin
View the contributors to Unraid on GitHub. Shoutouts to these community members who have contributed PRs (listed by GitHub ID):
- almightyYantao
- baumerdev
- Commifreak
- desertwitch
- dkaser
- donbuehl
- FunkeCoder23
- Garbee
- jbtwo
- jski
- Leseratte10
- Mainfrezzer
- mtongnz
- othyn
- serisman
- suzukua
- thecode
And sincere thanks to everyone who has requested features, reported bugs, and tested pre-releases!
Upgrading
Known issues
ZFS pools
If you are using ZFS pools, please take note of the following:
- You will see a warning about unsupported features in your existing ZFS pools. This is because the version of ZFS in 7.0 is newer than the one in 6.12 and includes new features. The warning is harmless; your pool will still function normally. A button will appear letting you upgrade a pool to support the new ZFS features; however, Unraid OS does not make use of these features, and once a pool is upgraded, previous versions of Unraid OS will not be able to mount it.
- Similarly, new pools created in 7.0 will not mount in 6.12 due to ZFS not supporting downgrades. There is no way around this.
- If you decide to downgrade from 7.0 to 6.12, any previously existing hybrid pools will not be recognized upon reboot into 6.12. To work around this, first click Tools → New Config in 7.0, preserving all slots, then reboot into 6.12; your hybrid pools should import correctly.
- ZFS spares are not supported in this release. If you have created a hybrid pool in 6.12 which includes spares, please remove the 'spares' vdev before upgrading to v7.0. This will be fixed in a future release.
- TrueNAS pools currently cannot be imported. This will be fixed in a future release.
- If you are using Docker data-root=directory on a ZFS volume, see Add support for overlay2 storage driver.
- We check that VM names do not include characters that are invalid for ZFS. Existing VMs are not modified, but if invalid characters are found, an error is thrown and updates are disabled.
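For reference, the pool-upgrade button in the first bullet corresponds to the standard ZFS feature-flag upgrade. This sketch (with a hypothetical pool name) shows the CLI equivalent and why the operation is one-way:

```shell
zpool status tank    # warns: "Some supported features are not enabled"
zpool upgrade tank   # enables all supported feature flags; after this,
                     # older releases (6.12) can no longer import 'tank'
```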
General pool issues
If your existing pools fail to import with "Wrong Pool State, invalid expansion" or "Wrong Pool State, too many wrong or missing devices", see this forum post.
Drive spindown issues
Drives may not spin down when connected to older Marvell drive controllers that use the sata_mv driver (e.g., Supermicro SASLP and SAS2LP) or to older Intel controllers (e.g., ICH7-ICH10). This may be resolved by a future kernel update.
Excessive flash drive activity slows the system down
If the system is running slowly, check the Main page and see if it shows significant continuous reads from the flash drive during normal operation. If so, the system may be experiencing sufficient memory pressure to push the OS out of RAM and cause it to be re-read from the flash drive. From the web terminal type:
touch /boot/config/fastusr
and then reboot. This will use around 500 MB of RAM to ensure the OS files always stay in memory. Please let us know if this helps.
New Windows changes may result in loss of access to Public shares
Due to recent security changes in Windows 11 24H2, "guest" access of Unraid public shares may not work. The easiest way around this is to create a user in Unraid with the same name as the Windows account you are using to connect. If the Unraid user password is not the same as the Windows account password, Windows will prompt for credentials.
If you are using a Microsoft account, it may be better to create a user in Unraid with a simple username, set a password, then in Windows go to Control Panel → Credential Manager → Windows credentials → Add a Windows Credential and add the correct Unraid server name and credentials.
Alternatively, you can re-enable Windows guest fallback (not recommended).
Problems due to Realtek network cards
There have been multiple reports of issues with the Realtek driver plugin after upgrading to recent kernels. You may want to preemptively uninstall it before upgrading, or remove it afterwards if you have networking issues.
A virtual NIC is being assigned to eth0 on certain systems
On some systems with IPMI KVM, a virtual NIC is being assigned to eth0 instead of the expected NIC. See this forum post for options.
Issues using Docker custom networks
If certain custom Docker networks are not available for use by your Docker containers, navigate to Settings → Docker and fix the CIDR definitions for the subnet mask and DHCP pool on those custom networks. The underlying systems have gotten more strict and invalid CIDR definitions which worked in earlier releases no longer work.
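As an illustration of what a consistent definition looks like, this sketch checks that a DHCP pool is a valid sub-range of its subnet. The addresses are hypothetical, and this is not Unraid's actual validation code:

```shell
#!/bin/bash
# Hypothetical custom-network values; the DHCP pool must be a valid
# sub-range of the subnet for the network to pass the stricter checks.
subnet="192.168.100.0/24"
dhcp_pool="192.168.100.128/25"

# Convert a dotted-quad address to a 32-bit integer.
ip_to_int() {
  local IFS=.
  read -r a b c d <<< "$1"
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

# Verify the pool's prefix is at least as long as the subnet's and that
# the pool's network address falls inside the subnet.
pool_in_subnet() {
  local net=${1%/*} nbits=${1#*/} pool=${2%/*} pbits=${2#*/}
  local mask=$(( (0xffffffff << (32 - nbits)) & 0xffffffff ))
  (( pbits >= nbits )) &&
    (( ( $(ip_to_int "$pool") & mask ) == ( $(ip_to_int "$net") & mask ) ))
}

if pool_in_subnet "$subnet" "$dhcp_pool"; then
  echo "DHCP pool fits inside subnet"
else
  echo "invalid CIDR definitions"
fi
```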
Rolling back
See the warnings under Known Issues above.
The Dynamix File Manager, GUI Search, and Unlimited Width Plugin plugins are now built into Unraid. If you roll back to an earlier version you will need to reinstall those plugins to retain their functionality.
If you disabled the unRAID array we recommend enabling it again before rolling back.
If you previously had Outgoing Proxies set up using the Proxy Editor plugin or some other mechanism, you will need to re-enable that mechanism after rolling back.
If you roll back after enabling the overlay2 storage driver you will need to delete the Docker directory and let Docker re-download the image layers.
If you roll back after installing Tailscale in a Docker container, you will need to edit the container, make a dummy change, and Apply to rebuild the container without the Tailscale integration.
After rolling back, make a dummy change to each WireGuard config to get the settings appropriate for that version of Unraid.
If rolling back earlier than 6.12.14, also see the 6.12.14 release notes.
Storage
unRAID array optional
You can now set the number of unRAID array slots to 'none'. This will allow the array to Start without any devices assigned to the unRAID array itself.
If you are running an all-SSD/NVMe server, we recommend assigning all devices to one or more ZFS/BTRFS pools, since Trim/Discard is not supported on unRAID array devices.
To unassign the unRAID array from an existing server, first unassign all Array slots on the Main page, and then set the Slots to 'none'.
For new installs, the default number of slots to reserve for the unRAID array is now 'none'.
Share secondary storage may be assigned to a pool
Shares can now be configured with pools for both primary and secondary storage, and mover will move files between those pools.
ReiserFS file system option has been disabled
Since ReiserFS is scheduled to be removed from the Linux kernel, the option to format a device with ReiserFS has also been disabled. You may use the mover function described below to empty an array disk prior to reformatting it with another file system. We will add a webGUI button for this in a future release.
Using 'mover' to empty an array disk
Mover can now be used to empty an array disk. With the array started, run this at a web terminal:
mover start -e diskN |& logger & # where N is [1..28]
Mover will look at each top-level directory (share) and then move files one-by-one to other disks in the array, following the usual config settings (include/exclude, split-level, allocation method). Move targets are restricted to the unRAID array.
Watch the syslog for status. When the mover process ends, the syslog will show a list of files which could not be moved:
- maybe file was in-use
- maybe file is at the top-level of /mnt/diskN
- maybe we ran out of space
Predefined shares handling
The Unraid OS Docker Manager is configured by default to use these predefined shares:
- system - used to store Docker image layers in a loopback image at system/docker.
- appdata - used by Docker applications to store application data.
The Unraid OS VM Manager is configured by default to use these predefined shares:
- system - used to store the libvirt loopback image at system/libvirt
- domains - used to store VM vdisk images
- isos - used to store ISO boot images
When either Docker or VMs are enabled, the required predefined shares are created if necessary according to these rules:
- if a pool named 'cache' is present, predefined shares are created with 'cache' as the Primary storage with no Secondary storage.
- if no pool named 'cache' is present, the predefined shares are created with the first alphabetically present pool as Primary with no Secondary storage.
- if no pools are present, the predefined shares are created on the unRAID array as Primary with no Secondary storage.
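These rules can be sketched as a small selection function. The pool names are hypothetical and this is an illustration only, not Unraid's actual code:

```shell
# Pick the Primary storage for a predefined share, per the rules above:
# a pool named 'cache' wins, else the first pool alphabetically, else
# the unRAID array.
pick_primary() {
  local p first=""
  for p in $(printf '%s\n' $1 | sort); do
    if [ "$p" = "cache" ]; then echo cache; return; fi
    if [ -z "$first" ]; then first=$p; fi
  done
  if [ -n "$first" ]; then echo "$first"; else echo array; fi
}

pick_primary "tank cache nvme"   # a pool named 'cache' wins
pick_primary "tank nvme"         # first alphabetically: nvme
pick_primary ""                  # no pools: the unRAID array
```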
ZFS implementation
- Support Hybrid ZFS pools aka subpools (except 'spares')
- Support recovery from multiple drive failures in a ZFS pool with sufficient protection
- Support LUKS encryption on ZFS pools and drives
- Set reasonable default profiles for new ZFS pools and subpools
- Support upgrading ZFS pools when viewing the pool status. Note: after upgrading, the volume may not be mountable in previous versions of Unraid
Allocation profiles for btrfs, zfs, and zfs subpools
When a btrfs or zfs pool/subpool is created, the default storage allocation is determined by the number of slots (devices) initially assigned to the pool:
- for zfs main (root) pool:
  - slots == 1 => single
  - slots == 2 => mirror (1 group of 2 devices)
  - slots >= 3 => raidz1 (1 group of 'slots' devices)
- for zfs special, logs, and dedup subpools:
  - slots == 1 => single
  - slots%2 == 0 => mirror (slots/2 groups of 2 devices)
  - slots%3 == 0 => mirror (slots/3 groups of 3 devices)
  - otherwise => stripe (1 group of 'slots' devices)
- for zfs cache and spare subpools:
  - slots == 1 => single
  - slots >= 2 => stripe (1 group of 'slots' devices)
- for btrfs pools:
  - slots == 1 => single
  - slots >= 2 => raid1 (i.e., what btrfs calls "raid1")
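The subpool rules above can be sketched as follows. This is an illustration only; note that the checks apply in the order listed, so an even slot count is matched before divisibility by three:

```shell
# Default profile for a zfs special/logs/dedup subpool, given its
# initial slot count (per the rules listed above).
subpool_profile() {
  local slots=$1
  if   (( slots == 1 ));     then echo "single"
  elif (( slots % 2 == 0 )); then echo "mirror: $((slots / 2)) groups of 2"
  elif (( slots % 3 == 0 )); then echo "mirror: $((slots / 3)) groups of 3"
  else                            echo "stripe: 1 group of $slots"
  fi
}

subpool_profile 4   # mirror: 2 groups of 2
subpool_profile 9   # mirror: 3 groups of 3
subpool_profile 5   # stripe: 1 group of 5
```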
Pool considerations
When adding devices to (expanding) a single-slot pool, these rules apply:
For btrfs: adding one or more devices to a single-slot pool will result in converting the pool to raid1 (that is, what btrfs defines as raid1). Adding any number of devices to an existing multiple-slot btrfs pool increases the storage capacity of the pool and does not change the storage profile.
For zfs: adding one, two, or three devices to a single-slot pool will result in converting the pool to 2-way, 3-way, or 4-way mirror. Adding a single device to an existing 2-way or 3-way mirror converts the pool to a 3-way or 4-way mirror.
Changing the file system type of a pool:
For all single-slot pools, the file system type can be changed when the array is Stopped.
For btrfs/zfs multi-slot pools, the file system type cannot be changed. To repurpose the devices you must click the Erase pool button.
Other features
- Add Spin up/down devices of a pool in parallel
- Add "Delete Pool" button, which unassigns all devices of a pool and then removes the pool. The devices themselves are not modified. This is useful when physically removing devices from a server.
- Add ability to change encryption phrase/keyfile for LUKS encrypted disks
- Introduce 'config/share.cfg' variable 'shareNOFILE' which sets maximum open file descriptors for shfs process (see the Known Issues)
VM Manager
Improvements
Added support for VM clones, snapshots, and evdev passthru.
The VM editor now has a new read-only inline XML mode for advanced users, making it clear how the GUI choices affect the underlying XML used by the VM.
Big thanks to @SimonF for his ongoing enhancements to VMs.
Other changes
- VM Tab
- Show all graphics cards and IP addresses assigned to VMs
- noVNC version: 1.5
- VM Manager Settings
- Added VM autostart disable option
- Add/edit VM template
- Added "inline xml view" option
- Support user-created VM templates
- Add qemu ppc64 target
- Add qemu:override support
- Add "QEMU command-line passthrough" feature
- Add VM multifunction support, including "PCI Other"
- VM template enhancements for Windows VMs, including hypervclock support
- Add "migratable" on/off option for emulated CPU
- Add offset and timer support
- Add no keymap option and set Virtual GPU default keyboard to use it
- Add nogpu option
- Add SR-IOV support for Intel iGPU
- Add storage override to specify where images are created at add VM
- Add SSD flag for vdisks
- Add Unmap Support
- Check that VM name does not include characters that are not valid for ZFS.
- Dashboard
- Add VM usage statistics to the dashboard, enable on Settings → VM Manager → Show VM Usage
Docker
Docker fork bomb prevention
To prevent "Docker fork bombs" we introduced a new setting, Settings → Docker → Docker PID Limit, which specifies the maximum number of process IDs (PIDs) any container may have active (default 2048).
If you have a container that requires more PIDs, you may either increase this setting or override it for a specific container by adding, for example, --pids-limit 3000 to the Extra Parameters setting in the container's Docker template.
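For context, the per-container override is Docker's standard --pids-limit flag, which Extra Parameters effectively appends to the underlying docker run invocation. A hypothetical equivalent command line (the container name and image are made up):

```shell
# Run a container with its own, higher PID ceiling
# (name and image are hypothetical examples):
docker run -d --name=busy-app --pids-limit 3000 busy/app:latest
```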
Add support for overlay2 storage driver
If you are using Docker data-root=directory on a ZFS volume, we recommend that you navigate to Settings → Docker and switch the Docker storage driver to overlay2, then delete the directory contents and let Docker re-download the image layers. The legacy native setting causes significant stability issues on ZFS volumes.
If retaining the ability to downgrade to earlier releases is important, then switch to Docker data-root=xfs vDisk instead.
Other changes
- See Tailscale integration
- Allow custom registry with a port specification
- Use "lazy unmount" when unmounting the docker image to prevent blocking array Stop
- Updated to address multiple security issues (CVE-2024-21626, CVE-2024-24557)
- Docker Manager:
- Allow users to select Container networks in the WebUI
- Correctly identify and show containers not managed by dockerman
- rc.docker:
- Only stop Unraid managed containers
- Honor restart policy from 3rd party containers
- Set MTU of Docker Wireguard bridge to match Wireguard default MTU
Networking
Tailscale integration
Unraid OS supports Tailscale through the use of a plugin created by Community Developer EDACerton. When this plugin is installed, Tailscale certificates are supported for https webGUI access, and the Tailnet URLs will be displayed on the Settings → Management Access page.
Natively, Unraid now lets you optionally install Tailscale in almost any Docker container, giving you the ability to share containers with specific people, access them using valid https certificates, and give them alternate routes to the Internet via Exit Nodes.
For more details, see the docs.
Support iframing the webGUI
Added "Content-Security-Policy frame-ancestors" support to automatically allow the webGUI to be iframed by domains it has certificates for. Further customization is not officially supported, but is possible by using a script to modify NGINX_CUSTOMFA in /etc/defaults/nginx.
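A minimal sketch of such a script, using a temp file in place of /etc/defaults/nginx. The variable name and path come from the note above; the file contents and domain are hypothetical:

```shell
# Stand-in for /etc/defaults/nginx with an empty frame-ancestors list:
conf=$(mktemp)
echo 'NGINX_CUSTOMFA=""' > "$conf"

# Allow an additional (hypothetical) domain to embed the webGUI:
sed -i 's|^NGINX_CUSTOMFA=.*|NGINX_CUSTOMFA="https://dash.example.com"|' "$conf"
cat "$conf"
```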
Other changes
- Upgraded to OpenSSL 3
- Allow ALL IPv4/IPv6 addresses as listener. This solves the issue when IPv4 or IPv6 addresses change dynamically
- Samba:
- Add ipv6 listening address only when NetBIOS is disabled
- Fix macOS being unable to write to the 'flash' share and restore Time Machine compatibility (fruit changes)
- The VPN manager now adds all interfaces to WireGuard tunnels; after upgrading or changing network settings, make a dummy change to each tunnel to update its config.
webGUI
Integrated Dynamix File Manager plugin
Click the file manager icon and navigate through your directory structure with the ability to perform common operations such as copy, move, delete, and rename files and directories.
Integrated GUI Search plugin
Click the search icon on the Menu bar and type the name of the setting you are looking for.
Outgoing Proxy Manager
If you previously used the Proxy Editor plugin or had an outgoing proxy set up for CA, those settings will automatically be imported and the old mechanism removed. You can then adjust them on Settings → Outgoing Proxy Manager.
For more details, see the manual.
Note: this feature is completely unrelated to any reverse proxies you may be using.
Notification Agents
Notification agent definitions are now stored as individual XML files, making it easier to add notification agents via plugins.
See this sample plugin by @Squid.
- fix: Agent notifications do not work if there is a problem with email notifications
NTP Configuration
For new installs, a single default NTP server is set to 'time.google.com'.
If your server is using our previous NTP defaults of time1.google.com, time2.google.com etc, you may notice some confusing NTP-related messages in your syslog. To avoid this, consider changing to our new defaults: navigate to Settings → Date & Time and configure NTP server 1 to be time.google.com, leaving all the others blank.
Of course, you are welcome to use any time servers you prefer, this is just to let you know that we have tweaked our defaults.
NFS Shares
We have added a few new settings to help resolve issues with NFS shares. On Settings → Global Share Settings you can adjust the number of fuse file descriptors and on Settings → NFS you can adjust the NFS protocol version and number of threads it uses. See the inline help for details.
- Added support for NFS 4.1 and 4.2, and permit NFSv4 mounts by default
- Add a text box to configure multi-line NFS rules
- Bug fix: nfsd doesn't restart properly
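As a client-side illustration of the new protocol support, a Linux client can now request NFSv4.2 explicitly. The server name and paths here are hypothetical examples:

```shell
# Mount an Unraid share over NFSv4.2 (names are examples):
mount -t nfs -o vers=4.2 tower.local:/mnt/user/media /mnt/media
```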