Disk Manager
Snare Central includes a Disk Manager utility that allows the administrator to easily increase storage capacity for event data by adding extra hard drives to the existing system or by connecting the server to an existing NAS system.
Disk Manager also gives the administrator transparent access to data backups on CD, DVD or USB media created with the Snare Central Data Backup utility. There is no longer any need to restore old data to the hard disk in order to view it.
This flexibility makes it easier for the administrator to cope with Snare Central's growing storage requirements in large and busy networks.
Snare Central disk layout
Snare Central complies with the DoD “Red Hat Enterprise Linux 6 Security Technical Implementation Guide (STIG)” recommendation, using the following independent file system structure built on the Linux Logical Volume Manager (LVM):
File system | Size | Mount options
/ | 3.60 GB |
/boot | 0.25 GB |
/usr | 2.00 GB |
/var | 2.00 GB |
/var/log | 4.00 GB |
/var/log/audit | 0.50 GB |
/home | 0.50 GB |
/tmp | 1.00 GB | nodev,nosuid,noexec
/data/snareArchive00 | rest of disk space |
Because Snare Central uses LVM for its file systems, users can easily resize any of them as long as enough free disk space is available. Note that the Snare Central main data storage takes up the rest of the server's disk capacity. If a new physical disk is added to the system, it can be fully or partially assigned to this file system within the Disk Manager.
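For reference, growing a file system this way amounts to extending the logical volume and then resizing the file system on top of it. The following is a minimal sketch of the equivalent manual steps, assuming the vg00-snareArchive volume named in the definitions later in this document, an ext4 file system, and at least 10 GB free in the volume group:

    # Check free space in the volume group first
    vgs vg00
    # Grow the archive volume by 10 GB, then resize the ext4 file system to match
    lvextend -L +10G /dev/mapper/vg00-snareArchive
    resize2fs /dev/mapper/vg00-snareArchive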
Interface
The Disk Manager user interface shows the existing file systems represented as cylinders together with their current usage (in the example above, the root file system is shown in black and is currently at 53% of its capacity).
The menu includes:
- Reset (circular arrow icon). To reset the disks to their original sizes.
- Submit (right-pointing arrow). To submit disk resize changes.
- NAS (cloud icon). To mount or unmount a NAS system.
- DVD (CD icon). To mount or unmount a CD, DVD or USB data backup.
The following image shows the disk summary, available by clicking on the corresponding disk or hovering the mouse over it.
Mounting a CD, DVD or USB
The following image shows the DVD dialogue, which allows the user to mount and/or unmount a data backup device, making it available directly within Snare storage. The archived data becomes immediately available to Snare, so the user can run any objective right after mounting the corresponding device [SC1].
All that is needed is to specify what kind of device to mount (or unmount) and whether access to this device is needed after a reboot (this checkbox updates the /etc/fstab system file so the mount persists across reboots if desired).
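For illustration only, persistent entries written to /etc/fstab could look like the following (addresses, share names and credential paths are hypothetical; the mount points follow the conventions defined later in this document):

    # Hypothetical examples; the actual entries are written by Disk Manager
    //192.0.2.10/snare_share  /data/SnareArchive01  cifs     credentials=/root/.nas1cred,_netdev  0  0
    /dev/sr0                  /data/SnareArchive10  iso9660  ro                                   0  0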
NOTE: When mounting or unmounting any device, all Snare back-end processes need to be stopped manually.
Mounting a NAS
The above image shows the NAS dialogue, which allows the user to mount and/or unmount a Network Attached Storage device, making it available directly for Snare to use.
A NAS can be mounted to increase Snare Central capacity: any new data will be stored on the network device, while all previous data stored on the server's local hard drive remains accessible to the system. Be aware that a NAS device will never be as fast as a local hard drive, and this could lead to performance constraints if the system has a high EPS demand on it. Most NAS systems do not implement the synchronous write acceleration that SAN disk systems do, so they will perform worse than a conventional local disk or a fibre-attached SAN disk. Another consideration is that if Snare Central loses network connectivity to the NAS, all data stored there becomes inaccessible and the system may experience long time-outs when trying to retrieve data, or become unresponsive.
In order to mount a NAS the user needs to provide the following (an illustrative mount command follows these notes):
- A name to identify this device (e.g. NAS1 or central_storage).
- The NAS IP address or name (FQDN) and the port number to use.
- The type of NAS to attach to (CIFS or NFS).
- The share name inside the NAS (or directory name in the case of NFS).
- User name and password.
- Workgroup, if required (CIFS only).
- Whether access to this device is needed after a reboot (this checkbox updates the /etc/fstab system file so the mount becomes persistent [SC2]).
Because mounted devices are merged with the local archive using OverlayFS (see the overlay file system section below), remember that:
- When you create files in the lower or upper directory, they will appear in the overlay folder.
- Any files you create in the overlay folder are only created in the upper directory.
- Changes you make to existing files in the overlay folder only appear in the upper directory.
- If you delete a file in the overlay folder that was in the lower directory, a "whiteout" is created so that the file does not appear in the overlay folder, but it still remains in the lower directory. The same applies to folders.
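As an illustration, the parameters above map onto a CIFS mount along the following lines (host, share, credentials and port are hypothetical; the actual command is issued internally by Disk Manager):

    # Hypothetical CIFS mount of share "snare_share" as NAS1
    mount -t cifs //192.0.2.10/snare_share /data/SnareArchive01 \
        -o username=snare,password=secret,domain=WORKGROUP,port=445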
NOTE: When mounting or unmounting any device, all Snare back-end processes need to be stopped manually.
Resizing a local file system
IMPORTANT: Before changing the size of any file system, unmount any NAS, DVD, CD or USB device from the server, as it may interfere with the resizing process and lead to unpredictable results.
As mentioned at the beginning, each of the local file systems on the server is represented by a cylinder. There is one cylinder for each file system plus another that represents the available “Free Space” on the server. Some of these file systems can be modified (grown or shrunk) by dragging the handle in the top left corner of the cylinder. It is also possible to change the file system size by entering a size directly in the entry at the top of the cylinder. The user can enter a new size in G (GB, the default), T (TB), M (MB) or K (KB); if no units are specified the manager uses GB [SC3].
The user will notice that when growing a file system the free space shrinks, and when shrinking a file system the free space grows.
Any editable file system can grow up to the available free space, so when there is no more free space available no file system can be expanded further.
Any editable file system can be shrunk to a maximum of 20% of its available free space. If there is no free space in the file system (100% in use), no shrinking is possible.
At any moment the user can reset the cylinders to their original values by clicking the reset icon in the Disk Manager menu.
Once all the editable file system sizes are set as required, the user must submit the changes to the server with the submit button (right-pointing arrow). It is highly recommended to resize only one file system at a time. Upon submitting, a warning as shown in the next image will be displayed.
NOTE: When resizing any file system, all Snare back-end processes need to be stopped, and depending on the size of the file system this could take several minutes.
Adding a new hard disk
If no more disk space is available, the administrator can add another physical disk (or disks) to the server; after a system reboot the new drive will be available as free space in the Disk Manager, ready to be assigned to existing file systems as described above.
---------------- Intersect Alliance ONLY ----------------
Overlay file system [SC4]
Remember that:
· OverlayFS provides a way to merge directories or file systems such that one of them (the "lower" one) never gets written to, while all changes are made to the "upper" one.
After a fresh Snare Central 7.2 install, the SnareArchive file system is mounted on /data/SnareArchive00 and is made accessible as the upper layer of /data/SnareArchive by means of an overlay with the empty directory /data/SnareArchive01.
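As a sketch, that default union mount corresponds to an overlay along these lines (the workdir path is an assumption; OverlayFS requires an empty work directory on the same file system as the upper layer):

    # Default layout after a fresh install: local archive on top, empty lower layer
    mount -t overlay overlay \
        -o lowerdir=/data/SnareArchive01,upperdir=/data/SnareArchive00,workdir=/data/.overlay-work \
        /data/SnareArchive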
The reason behind this is to make /data/SnareArchive the union mount for all the possible set-ups of layered directories. In order to achieve this, I made some definitions to create our own “standard” for easier administration of the overlays. The combinations and rules that are supported and implemented (not all tested yet) at the moment are:
DEFINITIONS:
- The local HD /dev/mapper/vg00-snareArchive is always mounted on /data/SnareArchive00
- NAS devices will be mounted on /data/SnareArchive01-09
- Optical devices will be mounted on /data/SnareArchive10-19
- USB devices will be mounted on /data/SnareArchive20-29
- overlay1 will be /data/SnareArchive30
- overlay2 will be /data/SnareArchive31
- The main overlay is /data/SnareArchive
NOTE: Programmatically (in the PHP and JavaScript disk manager code) there is a parameter called lowerflag that, when true, switches a mounting device to the lower layer. It only applies to NAS and USB device types and is set to false by default. It was supposed to be another check box in the mounting form, but it is not implemented in the UI as I do not know how to explain its usage to the user (maybe as a read-only check box [SC5]?). It is mentioned here because it affects the behaviour of the following scenarios.
SETUPS:
a) Just the local disk (SnareArchive00):
· SnareArchive01 (empty) and SnareArchive00 in overlay on SnareArchive. This is the default after a fresh install.
b) Just one device (mounted according to the DEFINITIONS above):
· 1) If the device is a CD or DVD it goes in the bottom layer;
· 2) otherwise (USB or NAS) the device goes on the top layer.
· The device and SnareArchive00 in overlay on SnareArchive.
c) DVD and USB mounted:
· 1) DVD goes in the bottom layer.
· 2) SnareArchive00 in the middle (or on top if lowerflag is true).
· 3) USB goes on top (or in the middle if lowerflag is true).
· 2) and 3) in overlay on SnareArchive30.
· 1) and SnareArchive30 in overlay on SnareArchive.
d) NAS and DVD or USB mounted:
· 1) DVD (or USB) goes in the bottom layer.
· 2) SnareArchive00 in the middle (or on top if lowerflag is true).
· 3) NAS goes on top (or in the middle if lowerflag is true).
· 2) and 3) in overlay on SnareArchive30.
· 1) and SnareArchive30 in overlay on SnareArchive.
e) NAS, DVD and USB mounted:
· 1) DVD goes in the bottom layer.
· 2) USB goes second.
· 3) SnareArchive00 third (or on top if lowerflag is true).
· 4) NAS goes on top (or third if lowerflag is true).
· 1) and 2) in overlay on SnareArchive30.
· 3) and 4) in overlay on SnareArchive31.
· SnareArchive30 and SnareArchive31 in overlay on SnareArchive.
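For example, set-up b) with a USB device and lowerflag at its default of false corresponds to something like the following (workdir path again assumed):

    # Set-up b): USB backup (upper layer) merged over the local archive (lower layer)
    mount -t overlay overlay \
        -o lowerdir=/data/SnareArchive00,upperdir=/data/SnareArchive20,workdir=/data/.overlay-work \
        /data/SnareArchive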
At the moment, this is the testing matrix for the above set-ups (the directory contents column is just the verification that the merged directory contains the contents of all layers):
Disk Manager API
Disk Manager uses the SnareWizard Angular-based REST API. The following are the “routes” for the different verbs that the disk manager supports:
"routes": [
{
"href": "/api/v1/config/diskm",
"method": "GET",
"command": "show",
"callback": "/SNARE/ServerConf/DataManagement/DiskManager::AJAX_diskm"
},
{
"href": "/api/v1/config/diskm",
"method": "POST",
"command": "resize",
"callback": "/SNARE/ServerConf/DataManagement/DiskManager::AJAX_diskm"
},
{
"href": "/api/v1/config/diskm",
"method": "PUT",
"command": "mount",
"callback": "/SNARE/ServerConf/DataManagement/DiskManager::AJAX_diskm"
},
{
"href": "/api/v1/config/diskm",
"method": "DELETE",
"command": "umount",
"callback": "/SNARE/ServerConf/DataManagement/DiskManager::AJAX_diskm"
},
The GET method sends back a JSON structure with all file system details in a single call.
The POST method receives a JSON structure containing only the changes to apply to the modified file systems.
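As an illustration (the server host and any authentication are hypothetical and omitted), fetching the current disk layout could look like:

    # Hypothetical call; returns the JSON file system details described above
    curl -X GET 'https://snare.example.com/SNARE/ServerConf/DataManager/DiskManager.php/api/v1/config/diskm'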
It is important to note that DELETE method requests unmount the resource identified by the Request-URI, so the form data is embedded in the URL as follows:
/SNARE/ServerConf/DataManager/DiskManager.php/api/v1/config/diskm/type/ID/fstabflag
So, when unmounting NAS1 (a CIFS NAS mounted on /data/SnareArchive01) and also deleting the corresponding entry in /etc/fstab, the request will look like this:
…...Conf/DataManager/DiskManager.php/api/v1/config/diskm/CIFS/NAS1/1
In the case of DVD/CD, since there is no identifier associated with the device, the mount directory is used instead:
…...Conf/DataManager/DiskManager.php/api/v1/config/diskm/DVD/SnareArchive10/1
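Put together as a full request (server host hypothetical), the NAS1 unmount example above would be issued as:

    # Unmount the CIFS NAS "NAS1" and remove its /etc/fstab entry (fstabflag=1)
    curl -X DELETE 'https://snare.example.com/SNARE/ServerConf/DataManager/DiskManager.php/api/v1/config/diskm/CIFS/NAS1/1'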
Disk Manager current status [SC6]
1. NFS: Not tested
2. USB: Not tested
3. Adding new disk: Needs more testing
4. Performance: Not tested
5. FYI: control over which file systems are editable and how much a file system can be shrunk is available through two variables in the list_disks.pl perl script.
6. Even though the UI allows you to resize all editable file systems, internally Disk Manager will limit shrinking/expanding to only the first disk for safety reasons. This can be changed easily.
7. Definition needed on what to do with the overlayfs when there is more than one NAS.
8. Definition needed on what to do with the overlayfs when there is more than one USB.
9. Definition needed on what to do with the overlayfs when there is more than one DVD.
10. Definition needed on what to do with the overlayfs for any combination of 7, 8 and 9.
11. At the moment, resizing of /, /usr, /var, /tmp, /var/log and /var/log/audit is restricted, even though expanding (not shrinking) these file systems in a live environment is possible; this has not been tested.
12. lowerflag implementation in the UI is pending.
13. NFS is not supported as an upper layer, so when the NAS type is NFS the disk manager will set lowerflag to true; NFS therefore can't be used to increase disk space, only as a data backup repository (like a DVD). I need a decision on whether we keep NFS as an available option for the user or not. This is a restriction of overlayfs.
14. For this Disk Manager to work, Snare Central must use LVM. In the case of upgrades from 7.1 to 7.2 there won't be LVM available. In this case it is possible to make the upgrade script move SnareArchive to SnareArchive00, create an empty directory SnareArchive01 and mount them in an overlay on SnareArchive (see the sketch after this list). This will allow adding an extra hard drive and, after partitioning it, creating an ext4 file system and mounting it on SnareArchive01 to make it available to Snare. In order to do this we need to add partitioning and formatting capabilities to Disk Manager. This of course won't be as flexible as LVM, since the user will only be able to increase SnareArchive capacity exclusively.
15. Disk failure: Not tested.
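A minimal sketch of the non-LVM rearrangement described in item 14, assuming the new drive appears as /dev/sdb and reusing the fresh-install layering (workdir path assumed; which layer the new disk should occupy is still to be confirmed):

    # Move the existing archive aside and union-mount it as described in item 14
    mv /data/SnareArchive /data/SnareArchive00
    mkdir -p /data/SnareArchive /data/SnareArchive01
    # Partition the new drive first (e.g. with fdisk), then format and mount it
    mkfs.ext4 /dev/sdb1
    mount /dev/sdb1 /data/SnareArchive01
    mount -t overlay overlay \
        -o lowerdir=/data/SnareArchive01,upperdir=/data/SnareArchive00,workdir=/data/.overlay-work \
        /data/SnareArchive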
Nice to have
1. Because the disk-resizing interface animation behaves as a linear equation, the UI does not look good when the difference in disk sizes is too big. I think a logarithmic approach would solve this.
2. A support button that collects configuration files and all data required for troubleshooting any issues with Disk Manager, packs it in gzip format and emails it back to us.
Testing procedure
In order to be able to test the Disk Manager follow these steps:
1. Make a fresh install of Snare Central 7.2 in a VM.
2. After the install, go to Disk Manager and make sure that all 8 file systems are displayed.
3. Attach an ISO with a Snare Central backup image to your VM and mount it from the Disk Manager. Verify that the data is available to Snare by running an objective with the dates of the archived data, or log in to the server, go to /data/SnareArchive and make sure the directories with the old data are there.
4. Attach a USB with a Snare Central backup image to your VM and mount it from the Disk Manager. Verify that the data is available to Snare by running an objective with the dates of the archived data, or log in to the server, go to /data/SnareArchive and make sure the directories with the old data are there.
5. Mount a NAS and make sure that new incoming data is stored on the NAS; do this by mounting the same share on another system, where you should see only the new data. Also make sure that the local data is still available.
6. Shut down the Snare Central and attach a new HD to your VM. Turn on the VM and you should see the new free disk in the Disk Manager.
7. Assign some free space to the SnareArchive disk and submit your changes. If everything is OK, the new space will be shown in the Disk Wizard. Reboot the system to make sure the new sizes were set correctly.
8. If you have a data backup on DVD and another on USB, you can mount both and test that all their data, along with the current data, is available in /data/SnareArchive.
Release Notes
· Snare Central now includes a Disk Manager utility that allows the administrator to easily increase storage capacity. This interface offers a flexible method for easy expansion of local hard disk storage and allows easy connection to an available NAS via the management interface. Direct access to data backups on optical or USB media created with the Data Backup utility is also available, so there is no need to restore a data backup to access archived data.
· Snare Central 7.2 now complies with the “Red Hat Enterprise Linux 6 Security Technical Implementation Guide (STIG)” recommendation from DoD for disk layout and minimum sizing.
[SC1] Need to look at running the metadata index re-creation after loading DVD or other backups, and also purging the indexes when the device is unmounted.
[SC2] Do we use a hard mount or a soft mount? This can help with system hang issues if the NAS becomes unresponsive or there is a network error.
[SC3] Do we use the option for high inode counts in the file system for the /data/SnareArchive partition? We have a special option for the normal install that covers this need, as the system makes lots of files and directories.
[SC4] This section is good to be put in Confluence so we can share it.
[SC5] Yes, I think that would be a good idea.
[SC6] Yes, need to look at these as part of the QA process as well.