# Hunting at scale

---

## Module overview

* We have learned how to write some powerful VQL. This module is all about getting ready to put it into practice!
* We will be writing some artifacts to detect common adversarial attack patterns. This is the true strength of Velociraptor: applying your knowledge through VQL and uncovering real anomalies.
---

## Typical hunting workflow

1. Get an idea for a new artifact by reading blogs and articles and doing research!
2. Explore the VQL in the notebook
3. Convert the VQL into an artifact
4. Go hunting!
5. Back in the notebook: post-process and analyze

---

## MITRE ATT&CK framework

* The MITRE ATT&CK framework is the industry standard for documenting and identifying attacker Tactics, Techniques and Procedures (TTPs)

---

## Atomic Red Team

* We will use Atomic Red Team to help develop our artifacts!

---

## Exercise: Detect ATT&CK techniques

https://attack.mitre.org/techniques/T1183/

---

## First plant a signal on your machine

```
REG ADD "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\notepad.exe" /v Debugger /t REG_SZ /d "\"C:\Program Files\Notepad++\notepad++.exe\" -notepadStyleCmdline -z" /f
```

* Type `notepad` - you get Notepad++ (useful, but...)

---

## Windows.Persistence.Debug

* Write an artifact to detect this modification.
* Hash and upload a copy of each binary specified by this key.
* Create a whitelist of binaries that are OK to have here - use digital signatures to verify the authenticity of the binary.
* The exercise demonstrates extending the basic detection capability with enrichment and customization.

---

## Hunting - mass collections

Hunting is Velociraptor's strength - collect the same artifact from thousands of endpoints in minutes!

* Two types of hunts:
   * Detection hunts are very targeted, aimed at a yes/no answer
   * Collection hunts collect a lot more data and can be used to build a baseline

---

## Exercise - baseline event logs

For this exercise we start a few more clients.
```text
c:\Users\test>cd c:\Users\test\AppData\Local\Temp\

c:\Users\test\AppData\Local\Temp>Velociraptor.exe --config client.config.yaml pool_client --number 100
```

This starts 100 virtual clients so we can hunt them.

* We use pool clients to simulate load on the server

---

## Pool clients

Simply multiple instances of the same client

![](../../modules/bit_log_disable_hunting/pool_clients.png)

---

## Create a hunt

![](../../modules/bit_log_disable_hunting/create-hunt_2.png)

---

## Select hunt artifacts

![](../../modules/bit_log_disable_hunting/create-hunt_3.png)

---

## Collect results

![](../../modules/bit_log_disable_hunting/create-hunt.png)

---

## Exercise - stacking

* The previous collection may be considered the baseline.
* For this exercise we want to create a few different clients:
   * Stop the pool client
   * Disable a log channel
   * Start the pool client with an additional number of clients

```
Velociraptor.exe --config client.config.yaml pool_client --number 110
```

---

## Stacking can reveal results that stand out

![](../../modules/bit_log_disable_hunting/stacking-a-hunt.png)

---

## Exercise: Labeling suspicious hits

* After stacking it becomes obvious which machines are out of place.
* We can label those machines in order to narrow further hunting to them.
* Use the `label()` function to add a label to all machines with the disabled log sources.

---

## Exercise: Post-processing a large hunt

* For this exercise, collect all services from all systems:
   * Start the pool client with 100 clients
   * Run a hunt on the clean system

---

## Create a malicious service

* Let's create a malicious service

```
sc.exe create backdoor binpath= "c:\Windows\Notepad.exe"
```

* And start a couple more clients

```
velociraptor.exe --config client.config.yaml pool_client --number 102 --writeback_dir filestore
```

---

## Optimizing filtering

* By default, VQL runs each query on one core, examining one row at a time.
* You can speed up filtering by using the `parallelize()` plugin.
* It takes the same parameters as the `source()` plugin, with the addition of a query.
* The specified query will run on multiple workers and receive rows from the `source()` plugin.
* This is faster than `foreach(workers=30)` because the result set parsing is also parallelized.

---

## Recollecting failed hunts

* Sometimes a collection may have failed (e.g. timeout exceeded).
* We might want to redo the same collection in that hunt:
   * Find the failed collection
   * Press the "Copy Collection" button in the toolbar
   * Modify the collection parameters (e.g. the timeout)
   * Relaunch the new collection
* When satisfied, simply add the new collection to the hunt manually.
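---

## Example: parallelized post-processing

A minimal sketch of the `parallelize()` pattern described above, as it might appear in a notebook cell. The artifact name, hunt ID, and filter are illustrative, and the parameter names (same as `source()`, plus `query` and `workers`) are taken from the description on the previous slide - verify them against your Velociraptor version:

```
SELECT * FROM parallelize(
    artifact="Windows.System.Services",
    hunt_id="H.1234",
    workers=30,
    query={
        SELECT Name, PathName FROM source()
        WHERE PathName =~ "notepad"
    })
```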
---

## Review and summary

* Velociraptor's hunting feature allows collecting the same artifact from many systems at the same time.
* Hunts are a logical set of collections.
* You can dynamically add collections to a hunt.

---

## Review and summary

* There are two main approaches to hunting:
   * Baselining: collect information about how the system should look.
   * Detection: zero in on anomalous behavior.
* Hunts can be automated and repeated easily.
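---

## Appendix: sketch of the IFEO detection exercise

One possible shape for the Windows.Persistence.Debug exercise artifact. This is a sketch, not a model answer: the registry glob, the `Debugger` column produced by `read_reg_key()`, and the `hash()`/`authenticode()` enrichment are assumptions to adapt to your Velociraptor version, and a real artifact would also strip any arguments and quoting from the `Debugger` value before hashing it:

```yaml
name: Custom.Windows.Persistence.Debug
description: |
   Detect Debugger values planted under Image File Execution
   Options, and enrich each hit with a hash and signature check.
sources:
  - query: |
      LET Hits = SELECT Key.OSPath AS KeyPath, Debugger
        FROM read_reg_key(globs="HKEY_LOCAL_MACHINE/SOFTWARE/Microsoft/Windows NT/CurrentVersion/Image File Execution Options/*")
        WHERE Debugger

      SELECT KeyPath, Debugger,
             hash(path=Debugger).SHA256 AS Hash,
             authenticode(filename=Debugger).Trusted AS Trusted
      FROM Hits
```

Whitelisting can then be layered on top, e.g. by filtering out rows where `Trusted` indicates a validly signed binary.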