I am very excited to announce that the latest Velociraptor release 0.73 is available for download.
In this post I will discuss some of the interesting new features.
We would like to extend our thanks to the entire Velociraptor Community, with a special mention for Andreas Misje and Justin Welgemoed who provided invaluable testing, feedback and ideas to make this release awesome!
Previously Velociraptor was able to acquire physical memory on Windows using the Winpmem binary as an external tool, which was delivered to the endpoint and executed to obtain the memory image.
In this release, the Winpmem driver is incorporated into the Velociraptor binary itself, so there is no need to introduce additional binaries to the endpoint. The driver is inserted on demand, when an image is required, using the new winpmem() VQL function. This function can compress the memory image, making it faster to acquire (less IO) and deliver over the network (less network bandwidth required).
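As a rough illustration, acquiring a compressed image from a collection or notebook might look like the following minimal sketch (the parameter names image_path and compression are assumptions here, so check the winpmem() function reference for the exact signature):

-- Acquire physical memory into a local file, compressing it on the fly.
-- image_path and compression are assumed parameter names, shown for illustration only.
SELECT winpmem(image_path="C:/Windows/Temp/physmem.raw", compression="s2")
FROM scope()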
The ability to access physical memory easily is also leveraged by the new winpmem accessor, which allows direct Yara scans with the Windows.Detection.Yara.PhysicalMemory artifact.
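From VQL, for example inside a custom client artifact, the same scan can be invoked by calling the artifact directly, as in this sketch (the YaraRule parameter name is an assumption; check the artifact’s parameter list before relying on it):

-- Illustrative rule only; substitute real detection rules.
LET DemoRule = 'rule Demo { strings: $a = "mimikatz" condition: $a }'

-- Run the physical memory Yara scan artifact with the placeholder rule.
SELECT * FROM Artifact.Windows.Detection.Yara.PhysicalMemory(YaraRule=DemoRule)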
Journald is Linux’s answer to structured logging. Previously Velociraptor implemented a simple parser using pure VQL. In this release Velociraptor introduces a dedicated journald parser.
The new parser emulates the Windows event log format, with common fields grouped under the System column and variable fields in EventData.
This release also introduces a new VQL plugin, watch_journald(), which follows journald logs and forwards events to the server.
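A minimal client event artifact that streams journald entries to the server might look like the following sketch (it assumes the plugin can run with its defaults; it may also accept arguments such as a journal directory):

name: Custom.Linux.Events.Journald
type: CLIENT_EVENT
sources:
  - query: |
      -- Follow journald and forward each new entry to the server.
      SELECT * FROM watch_journald()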
Attackers commonly use Remote Desktop (RDP) to laterally move between systems. The Microsoft RDP client maintains a tile cache with fragments of the screen.
Sometimes the RDP cache holds crucial evidence as to the activity of the attacker on systems that ran the RDP client. This information is now easily accessible using the new Windows.Forensics.RDPCache artifact contributed by Matt Green.
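The artifact can simply be collected or hunted from the GUI; from a server notebook, a collection could also be scheduled with a query along these lines (a sketch: collect_client() is a server-side VQL function, but treat the argument names as assumptions and substitute a real client id for the placeholder):

-- Schedule the RDP cache artifact on a single endpoint (placeholder client id).
SELECT collect_client(client_id="C.1234567890abcdef",
                      artifacts="Windows.Forensics.RDPCache")
FROM scope()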
Velociraptor clients are often deployed in complex networks. It is sometimes difficult to debug why network communications fail.
This release introduces the ability for the client to record the plain text communications between the client and server to a local file for debugging purposes.
Network communications are usually wrapped in TLS making network captures useless for debugging. Because of the way Velociraptor pins the TLS communications it is not easy to insert a MITM interceptor proxy either.
Adding the following to the client’s config will write plain text communications into the specified file:
Client:
insecure_network_trace_file: /tmp/trace.txt
Running the client will show the following log message:
[INFO] 2024-09-19T11:50:07Z Insecure Spying on network connections in /tmp/trace.txt
Make sure to disable this trace in production and only use it for debugging communications, as it weakens network security.
Velociraptor uses two layers of encryption - messages between client and server are encrypted using Velociraptor’s internal PKI scheme, and in addition, a HTTP over TLS connection is used to exchange those messages.
This means that the trace file is still not completely plain text: it contains the encrypted messages mixed in among the clear text HTTP messages.
However this should help debug issues around reverse proxies and MITM proxies in production.
In previous versions, flows could only be in the RUNNING, FINISHED or ERROR states. When the user schedules a collection from an endpoint, the collection is in the RUNNING state, and when it is completed it is either in the FINISHED or ERROR state.
However, this has proved to be insufficient when things go wrong, leaving users wondering what is happening when the client crashes, reboots, or becomes unresponsive. In such cases flows sometimes remained stuck in the RUNNING state indefinitely, so it was not easy for users to know whether to re-launch them.
In this release, flows go through additional states, each shown with its own icon in the GUI: Scheduled, In Progress, Unresponsive, and Error.
Previously, the server sent all outstanding requests to the client at the same time. This meant that if there were many hunts scheduled, all requests were delivered immediately. If the client subsequently timed out, crashed or disappeared from the network during execution, all requests were lost, leaving flows in the hung RUNNING state indefinitely.
In this release the server only sends 2 requests simultaneously, waiting until they complete before sending further requests. This means that if the client reboots, only the currently executing queries are lost, and further queries will continue once the client reconnects.
Velociraptor enables powerful automation in everyday DFIR work. Some users start many hunts automatically via the API or VQL queries.
Over time there can be many hunts active simultaneously, serving different purposes. In this release, the GUI’s hunt view is streamlined by allowing hunts to carry labels.
Clicking on a hunt label in the table will automatically filter the table for that label. Hunt labels are a way to group large numbers of hunts and clean up the display.
The Velociraptor GUI presents most data in tabular form, so it is important that tables are easy to navigate and use. This release brings many updates to the table view.
The navigation pager is now placed at the top of the table.
If a filter term starts with !, rows matching it will now be excluded (i.e. it acts as a negative search term).
Many tables have varying width columns. By default, Velociraptor will try to fit column width automatically to make them more readable, but sometimes it is necessary to manually adjust column widths for optimal viewing.
Columns can now be resized by dragging the right edge of a cell or header.
Column ordering usually depends on the VQL query that produces the table. However, it is sometimes easier to reorder columns on an ad hoc basis.
You can now reorder columns by dragging the column header and dropping it on the new position.
Sometimes columns contain a lot of data, taking up significant vertical space. The extra row height makes it difficult to review the table quickly because fewer rows fit on the screen at once.
Velociraptor is often used to fetch potentially malicious binaries from endpoints for further analysis. Users can schedule a collection from the endpoint and then download the binaries using the browser.
However, this can sometimes result in analyst workstations triggering virus scanners or other warnings as they download potential malware.
As in previous versions, the user can set a download password in their preferences. However, previously the password only applied to hunt or collection exports.
In this release, the password setting also applies to individual file downloads, such as files downloaded from the VFS or from the uploads tab of a specific collection.
The Windows.KapeFiles.Targets artifact allows collecting many bulk forensic artifacts, such as registry hives, and is often used to build offline collections for preservation of hosts.
Although best practice is to also collect parsing artifacts at the same time, sometimes this is left out (see Preserving Forensic Evidence for a full discussion). This is particularly problematic when using the offline collector to collect the Windows.KapeFiles.Targets artifact, because once the collection is imported back into Velociraptor there is no possibility of returning to the endpoint to collect other artifacts.
In this case the user needs to parse the collected raw files (for example, collecting the $MFT and then applying Windows.NTFS.MFT to parse it).
In the new release, a notebook suggestion was added to Windows.KapeFiles.Targets to apply a remapping to the collection, in such a way that some regular artifacts designed to run on the live system can work, to some extent, off the raw collection.
Let’s examine a typical workflow. I will begin by preparing an offline collector with the Windows.KapeFiles.Targets artifact configured to collect all event logs.
Once the collection is complete I receive a ZIP file containing all the collected files. I will now import it into Velociraptor.
Since this is an offline client and not a real client, Velociraptor will create a new client id to contain the collections.
Of course we can not schedule new collections for the client because it is not a real client, but once imported, the offline collection appears as just another collection in the GUI.
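The import can be driven from the GUI, or from a server notebook with a query roughly like this sketch (the import_collection() argument names and the "auto" client id convention are assumptions to verify against the VQL reference; the ZIP path is a placeholder):

-- Import an offline collection ZIP and attach it to a newly created client id.
SELECT import_collection(client_id="auto", filename="/tmp/Collection-HOSTNAME.zip")
FROM scope()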
Suppose now I wanted to use the Windows.Hayabusa.Rules artifact to triage the system according to the Hayabusa Sigma ruleset. Ordinarily, with a connected endpoint, I would just schedule a new collection on the endpoint and receive the triaged data in a few minutes.
However, this is not a real client since I used the offline collector to retrieve the event logs. I can not schedule new collections on it as easily (without preparing a new offline collector and manually running it on the endpoint).
Instead, the Windows.KapeFiles.Targets artifact now offers a VQL snippet as a notebook suggestion to post process the collection. I access this from the collection’s notebook.
The new cell contains some template VQL which I can modify to run other artifacts. In this case I will collect the Windows.Hayabusa.Rules artifact with all the rules (even noisy ones) and the Windows.NTFS.MFT artifact.
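The generated template varies between versions: it first sets up a remapping built from the uploaded triage files and then runs the selected artifacts against that remapped view. The tail end of my modified cell is conceptually similar to this sketch (the remapping setup is omitted here, and the exact generated VQL will differ):

-- The generated template first applies a remapping of the uploaded
-- KapeFiles collection (omitted in this sketch); the modified cell then
-- runs the parsing artifacts against that remapped view.
SELECT * FROM chain(
  hayabusa={ SELECT * FROM Artifact.Windows.Hayabusa.Rules() },
  mft={ SELECT * FROM Artifact.Windows.NTFS.MFT() })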
The post processing steps added a new, distinct collection to the offline client, as if we had collected it directly from the endpoint. However, the artifacts were collected from the triage files imported from the offline bundle.
Although this new workflow makes it more convenient to post process bulk file triage collections, note that this is not an ideal workflow for a number of reasons (for example parsing event logs on systems other than where they were written will result in a loss of some log messages).
It is always better to collect and parse the required artifacts directly from the endpoint (even in an offline collection) and not rely on bulk file collections.
Timelines have been part of the Velociraptor GUI for a few releases now. In this release we have expanded their functionality into a complete end-to-end timelining analysis tool.
The details of the new workflow are described in the Timelines in Velociraptor blog post, but below is a screenshot to illustrate the final product - an annotated timeline derived from analysis of multiple artifacts.
In addition to the improved built-in timelining feature, this release also offers enhanced integration with Timesketch, a popular open source timelining tool. The details of the integration are also discussed in the blog post above, but here is a view of Timesketch with some Velociraptor timelines exported.
Velociraptor allows arbitrary key/value pairs to be added to the Client record. We call this the Client Metadata. Previously the metadata could be set in the GUI but there was no way to search for it from the main search bar.
In this release client metadata can be searched directly in the search box. Additionally, the user can specify custom metadata fields in the configuration file to have all clients present this information.
Consider this example: I want to record the department that each endpoint belongs to. I will add the following to the server’s configuration file:
defaults:
indexed_client_metadata:
- department
This tells the server to index the client metadata field department, allowing the user to search all clients by department.
Indexed metadata fields exist on all clients. Additional non-indexed fields can be added by the user.
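Metadata can be set in the GUI, or from a server notebook with something like the following sketch (client_set_metadata() is assumed here to take a metadata dict; verify the exact signature in the VQL reference and substitute a real client id for the placeholder):

-- Record the department for one client (placeholder client id).
SELECT client_set_metadata(client_id="C.1234567890abcdef",
                           metadata=dict(department="Finance"))
FROM scope()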
Velociraptor’s user permission system ensures that only users who are granted certain permissions are able to carry out the actions that require them. For example, launching an external binary on the server is highly privileged (it basically gives a server shell), so the execve() plugin requires the special EXECVE permission to run. This is normally only given to administrators on the server.
If a user has a lower role (e.g. investigator) they are not able to shell out by calling the execve() VQL plugin in a notebook or a server artifact.
While this is what we want in most cases, sometimes we want to provide the low privileged user a mechanism for performing privileged operations in a safe manner. For example, say we want to allow the investigator user to call the timesketch CLI tool to upload some timelines. It clearly would not be appropriate to allow the investigator user to call any arbitrary programs, but it is probably ok to allow them to call the timesketch program selectively, in a controlled way.
This idea is very similar to Linux’s SUID or Windows’s impersonation mechanisms - both mechanisms allow a low privileged user to run a program as another high privileged user, taking on their privileges for the duration of the task. The program itself controls access to the privileged commands by suitably filtering user input.
In the 0.73 release, server artifacts may specify that they will run with an impersonated user.
Consider the following artifact:
name: Server.Utils.StartHuntExample
type: SERVER
impersonate: admin
sources:
  - query: |
      -- This query will run with admin ACLs.
      SELECT hunt(
          description="A general hunt",
          artifacts='Generic.Client.Info')
      FROM scope()
This artifact launches a new hunt for the Generic.Client.Info artifact. Usually a user needs the START_HUNT permission to actually create a new hunt.
Ordinarily, if a user has the COLLECT_SERVER permission allowing them to collect server artifacts, they will be able to start this server artifact, but unless they also have the START_HUNT permission they will be unable to schedule the new hunt.
With the impersonate field, any user that is able to start collecting this artifact will be able to schedule a hunt.
This feature allows an administrator to carefully delegate higher privilege tasks to users with lower roles. This makes it easier to create users with lower levels of access and supports a least privilege permission model.
There are many more new features and bug fixes in the latest release.
If you like the new features, take Velociraptor for a spin! It is available on GitHub under an open source license. As always, please file issues on the bug tracker or ask questions on our mailing list velociraptor-discuss@googlegroups.com. You can also chat with us directly on Discord at https://www.velocidex.com/discord.