Velociraptor can collect a lot of data quickly, but that data is usually only relevant for a short period of time. Disk space management is therefore an important part of a Velociraptor administrator's tasks. You can keep an eye on disk utilization on the dashboard.
If you need to grow the disk during an investigation and your server runs on an AWS VM using Elastic Block Store (EBS) volumes, this is very easy: EBS volumes can be resized dynamically, without even restarting the server. See the AWS documentation on Requesting Modifications to Your EBS Volumes and Extending a Linux File System After Resizing a Volume.
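After growing the volume in the AWS console, the partition and filesystem inside the VM still need to be extended. The following is a minimal sketch assuming an ext4 filesystem on a Nitro instance; the device and partition names (`/dev/nvme0n1`, partition 1) are examples and will differ on your system.

```bash
# The volume should already report its new size here.
lsblk

# Grow the partition to fill the resized volume (device/partition are examples).
sudo growpart /dev/nvme0n1 1

# Extend the filesystem. For ext4:
sudo resize2fs /dev/nvme0n1p1

# For XFS, grow via the mount point instead, e.g.:
# sudo xfs_growfs /
```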
If you must attach a new volume, you can migrate data from the old datastore directory (as specified in the config file) to the new directory by simply copying all the files. You must ensure that permissions remain the same (typically the files are owned by the `velociraptor` low-privilege local Linux account).
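A minimal migration sketch, assuming the datastore lives at `/data/velociraptor`, the new volume is mounted at `/newdata/velociraptor`, and the server was installed from the Debian package (which creates the `velociraptor` account and a `velociraptor_server` systemd unit). Adjust paths, the account name and the service name to your installation.

```bash
# Stop the server so the datastore is not modified while copying.
sudo systemctl stop velociraptor_server

# Copy everything, preserving ownership, permissions and timestamps.
sudo rsync -a /data/velociraptor/ /newdata/velociraptor/

# Make sure the low-privilege account still owns the files.
sudo chown -R velociraptor:velociraptor /newdata/velociraptor

# Update Datastore.location and Datastore.filestore_directory in the
# server config to point at the new directory, then restart.
sudo systemctl start velociraptor_server
```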
It is also possible to start with an empty datastore directory and only copy selected files (see the sketch after this list):

* The `users` directory contains user accounts (hashed password etc).
* The `acl` directory contains user ACLs.
* The `artifact_definitions` directory contains custom artifacts.
* The `config` directory contains various configuration settings.
* The `orgs` directory contains data from the various orgs.

Velociraptor will automatically re-enroll clients with the same client id (the client id is set by the client itself) as needed.
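A sketch of the selective copy, under the same assumptions as above (example paths, `velociraptor` service account):

```bash
OLD=/data/velociraptor        # existing datastore (example path)
NEW=/newdata/velociraptor     # new, empty datastore (example path)

sudo mkdir -p "$NEW"

# Copy only the directories listed above; client data and old
# collections stay behind, and clients simply re-enroll.
for d in users acl artifact_definitions config orgs; do
    sudo cp -a "$OLD/$d" "$NEW/"
done

sudo chown -R velociraptor:velociraptor "$NEW"
```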
You can also check the `backups` directory to recover from backup.
You can automatically delete old collections using the `Server.Utils.DeleteManyFlows` and `Server.Utils.DeleteMonitoringData` artifacts. These server artifacts delete flows and monitoring data older than the specified time.
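These artifacts are normally launched from the GUI (Server Artifacts), but they can also be scheduled from the command line with the `collect_client()` VQL function and the special `server` client id. A minimal sketch; the artifact's parameters (such as the age cutoff) are passed via the `env` argument of `collect_client()`, and their exact names should be taken from the artifact definition in the GUI rather than from this example.

```bash
# Schedule the server artifact from the CLI (set parameters via env=dict(...)).
velociraptor --config server.config.yaml query '
  SELECT collect_client(
      client_id="server",
      artifacts="Server.Utils.DeleteManyFlows")
  FROM scope()
'
```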