I built a compact mini SOC on my laptop using Oracle VirtualBox and four virtual machines. I ran a manager and dashboard on Ubuntu, a Windows 11 endpoint, an Ubuntu endpoint, and a Kali attacker machine. This project recreated a full security workflow in a small environment.
My aim was practical validation. I enabled vulnerability detection, integrated Sysmon on Windows, added custom detections for PowerShell, and linked VirusTotal to verify alerts using a safe EICAR file. I also proved active response by blocking an attacker IP during an SSH brute-force test.
Throughout the build I faced networking trade-offs, agent/version mismatches, and a file integrity monitoring issue that I fixed by editing the agent config. The result is a reproducible system to collect logs, analyze information, trigger automated containment, and view everything in a live dashboard.
I wanted a practical platform where I could trigger simulated threats and measure how fast I found and contained them.
I built this project as a focused training ground. It let me run realistic case scenarios and collect clear information from endpoints.
Testing exposed key network differences, such as how VirtualBox NAT vs NAT Network handled east‑west traffic. A host antivirus silently blocked inter‑VM packets, and agent/server version mismatch produced an “Incompatible version” error on a Windows endpoint.
I documented fixes so readers can reproduce the same steps and reduce mean time to repair.
| Issue | Symptom | Action |
|---|---|---|
| Inter‑VM traffic blocked | Missing logs between hosts | Disable host AV filter or adjust adapter |
| Agent mismatch | “Incompatible version” alerts | Align agent and server versions |
| File integrity false failure | Stale FIM alerts | Edit local ossec.conf to bypass faulty sync |
My architecture centered on clear telemetry flows and simple management channels for reliable testing.
I ran four machines in Oracle VirtualBox: a Wazuh server on Ubuntu, a Windows 11 endpoint, an Ubuntu endpoint, and a Kali attacker. Each machine had a role so I could trace data from source to dashboard.
I mapped the design so the server receives telemetry and orchestrates responses while endpoints generate logs. The attacker acted as an external source to simulate lateral movement and validate detections.
NAT gave isolation but limited east‑west tests. NAT Network let the VMs reach each other on a shared internal network. Bridged placed machines directly on the LAN for realism.
“A Host‑Only adapter provided a dependable management channel when the host firewall blocked VM traffic.”
| Mode | Benefit | Drawback |
|---|---|---|
| NAT | Isolated testing | Limited VM-to-VM traffic |
| NAT Network | Shared VM connectivity | Less realism than bridged |
| Bridged | Real LAN presence | Depends on external network |
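To make those trade-offs concrete, here is a minimal sketch of scripting the adapter mix with VBoxManage. The VM name wazuh-server, the network name soc-natnet, and the 10.10.10.0/24 range are placeholders I chose for illustration, not values lifted from my build.

```bash
# Create a NAT Network so VMs stay isolated from the LAN but can still talk east-west
VBoxManage natnetwork add --netname soc-natnet --network "10.10.10.0/24" --enable --dhcp on

# Create a Host-Only interface to use as the out-of-band management plane
VBoxManage hostonlyif create            # typically appears as vboxnet0 on a Linux host

# Attach the server VM: NIC1 on the NAT Network for tests, NIC2 on Host-Only for management
VBoxManage modifyvm "wazuh-server" --nic1 natnetwork --nat-network1 soc-natnet
VBoxManage modifyvm "wazuh-server" --nic2 hostonly --hostonlyadapter2 vboxnet0

# Swap NIC1 to bridged when LAN realism matters (the physical adapter name varies per host)
# VBoxManage modifyvm "wazuh-server" --nic1 bridged --bridgeadapter1 eth0
```

Keeping test traffic on NIC1 and management on NIC2 mirrors the split I rely on later when host firewalls interfere with VM traffic.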
I started by listing every image, version, and tool so I could reproduce the environment reliably.
I used Oracle VirtualBox as the virtualization platform. The machines were: Ubuntu Server for the manager and dashboard, Windows 10/11 for the Windows endpoint, Ubuntu Server for the Linux endpoint, and Kali Linux as the attacker machine.
I installed Sysmon on Windows using the SwiftOnSecurity configuration. I accessed the dashboard at https://<Ubuntu-IP>:443. On Windows I verified agent status with the Agent Manager. On Ubuntu I ran agent_control -l to list agents and used systemctl to manage wazuh-manager and wazuh-dashboard services.
“Documenting image names and service names saved me hours when a version mismatch popped up.”
I kept a short checklist of critical files to edit, especially local_rules.xml and configuration templates. I also noted where to find authoritative information for each component so I could follow vendor docs if needed.
| Item | Version / Name | Purpose |
|---|---|---|
| Oracle VirtualBox | 6.x / 7.x | Virtualization platform |
| Ubuntu Server | 20.04 LTS | Manager & dashboard system |
| Windows | 10 / 11 images | Endpoint telemetry and Sysmon |
| Kali Linux | Rolling | Attacker machine for validation |
My process focused on reliable telemetry: deploy the manager, attach agents, and confirm events flow to the dashboard. I kept each phase small so I could validate quickly and iterate on detection logic.
I deployed the Wazuh server and opened the dashboard at https://<Ubuntu-IP>:443. Then I added at least two endpoints and verified connections.
On Windows I installed the Wazuh agent and Sysmon using Sysmon64.exe -accepteula -i sysmonconfig.xml.
On Ubuntu I confirmed agents with sudo /var/ossec/bin/agent_control -l and checked the manager for healthy heartbeats.
I added a simple local rule in /var/ossec/etc/rules/local_rules.xml to detect Nmap, Ncat, and Nping. After editing the file I restarted the wazuh-manager and tested detection by running nmap -sS against a target.
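For readers who want a starting point, here is a minimal sketch of what such a rule can look like. The rule ID 100100, the severity level, and the match pattern are assumptions on my part rather than a copy of the exact rule I deployed, so adapt them to your environment.

```bash
# Append an example rule block to local_rules.xml (custom rule IDs live in the 100000+ range)
sudo tee -a /var/ossec/etc/rules/local_rules.xml > /dev/null <<'EOF'
<group name="recon,local,">
  <!-- Illustrative rule: flag Nmap-family tooling seen in collected logs -->
  <rule id="100100" level="10">
    <match>nmap|ncat|nping</match>
    <description>Recon tooling (Nmap, Ncat, Nping) observed on a monitored host</description>
  </rule>
</group>
EOF

# Reload the ruleset, then trigger the pattern from the attacker VM
sudo systemctl restart wazuh-manager
# nmap -sS <target-IP>    # run from Kali; the alert should surface in the dashboard
```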
“Start small: get logs flowing, then tune alerts to cut noise and keep meaningful detections.”
Result: a reproducible project that turns raw logs into alerts and automated actions while keeping files, configs, and notes organized for future growth.
I provisioned four virtual machines and tuned each one to handle realistic event loads.
I sized the server and Windows guest first since they process the most events. I gave the server extra CPU cores and RAM so services stayed responsive during ingestion.
Tip: keep plenty of disk space for indexes and snapshots. Snapshots let me roll back a file or config change quickly.
I used Bridged mode for realistic inter‑VM visibility and added a Host-Only adapter as an out‑of‑band management plane. That kept console access stable when host firewalls interfered with test network traffic.
I validated connectivity early with ping and curl probes and reviewed packet flows when a case required deeper inspection. I also aligned NIC order so management and test traffic always used the right interfaces.
| Item | Recommended | Reason |
|---|---|---|
| Server CPU / RAM | 4 vCPU / 8–12 GB | Handle ingestion and dashboard tasks |
| Windows endpoint | 2 vCPU / 6–8 GB | Support Sysmon and event bursts |
| Storage | 40+ GB per VM | Index growth and snapshots |
| Network modes | Bridged + Host-Only | Real traffic + stable management |
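If you script the provisioning, those sizing targets translate into a few VBoxManage calls. The VM names below are placeholders and the numbers simply restate the table, so treat this as a sketch rather than my exact commands.

```bash
# Manager/dashboard VM: 4 vCPU and 8 GB RAM (memory is given in MB)
VBoxManage modifyvm "wazuh-server" --cpus 4 --memory 8192

# Windows endpoint: 2 vCPU and 6 GB RAM to absorb Sysmon event bursts
VBoxManage modifyvm "win11-endpoint" --cpus 2 --memory 6144

# Snapshot before risky changes so a rollback is one command
VBoxManage snapshot "wazuh-server" take "pre-rule-tuning"
```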
My initial priority was getting the manager healthy and the dashboard served over TLS. I installed the wazuh server on Ubuntu and ran basic checks immediately after package setup.
I verified each service with systemctl status, confirming wazuh-manager and wazuh-dashboard were active and running.
I reviewed logs for startup errors and made small adjustments to the config files when a process failed to bind to the expected interface.
I configured TLS certificates and validated the chain so the dashboard loads cleanly at https://<Ubuntu-IP>:443. I rotated default admin credentials and created named admin accounts for access control.
Firewall rules were minimized to allow agent connections, admin HTTPS, and SSH for remote management. I checked that edits to the ossec.conf file stayed under version control.
“Confirm service outputs, open ports, and manager queue metrics during routine checks.”
| Check | Command / File | Purpose |
|---|---|---|
| Service status | systemctl status wazuh-manager | Verify manager is active |
| Dashboard TLS | HTTPS at :443 | Prevent browser warnings |
| Config backup | /var/ossec/etc/ | Track edits and recover quickly |
| Firewall | ufw or iptables rules | Limit exposed ports |
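Here is a short sketch of those routine checks, assuming the default Wazuh port layout (1514/tcp for agent traffic, 1515/tcp for enrollment, 443 for the dashboard) and ufw as the host firewall.

```bash
# Verify the core services are active
sudo systemctl status wazuh-manager wazuh-dashboard --no-pager

# Confirm the expected listeners are open
sudo ss -tlnp | grep -E ':(1514|1515|443)'

# Minimal firewall policy: agents, enrollment, dashboard, and SSH only
sudo ufw allow 1514/tcp    # agent event traffic
sudo ufw allow 1515/tcp    # agent enrollment
sudo ufw allow 443/tcp     # dashboard over HTTPS
sudo ufw allow 22/tcp      # remote management
sudo ufw enable
```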
The first verification step was ensuring every host reported a name, IP, and a healthy heartbeat to the manager.
I installed the Wazuh agent on Windows, then opened the Agent Manager GUI to confirm the agent name, assigned ID, IP address, and that the service showed Running.
In the Windows console I checked that the listed agent name and ID matched my inventory, and looked for the Running state and a recent heartbeat time.
On Ubuntu I registered the agent and ran sudo /var/ossec/bin/agent_control -l to list hosts. The manager then showed both the Windows and Linux hosts as Active and sending heartbeats.
When I hit an “Incompatible version” case, I aligned package versions on the agent and manager. Reinstalling the matching package restored the handshake and cleared connection errors.
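A quick way to catch that mismatch before enrollment stalls, assuming a Wazuh 4.x layout where wazuh-control lives in /var/ossec/bin, is to compare version strings on both sides.

```bash
# On the manager
sudo /var/ossec/bin/wazuh-control info | grep WAZUH_VERSION

# On the Linux agent (same binary path in the agent package)
sudo /var/ossec/bin/wazuh-control info | grep WAZUH_VERSION

# On Windows, check the installed agent version in Apps & Features before reinstalling
```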
“Aligning versions early prevents long enrollment delays and reduces false negatives in event delivery.”
| Task | Check | Outcome |
|---|---|---|
| Windows GUI | Name, ID, IP, Running | Agent visible and healthy |
| Linux agent | agent_control -l | Manager recognizes host and logs |
| Version mismatch | Package alignment | Handshake restored |
| FIM issue | Edit ossec.conf file locally | File monitoring resumed |
I installed a system-level monitor on the Windows machine to capture detailed process, network, and file telemetry. I used the official installer command Sysmon64.exe -accepteula -i sysmonconfig.xml and applied the SwiftOnSecurity configuration for broad coverage.
The added visibility changed detection quality quickly. I could see richer data in every event that crossed the pipeline.
I confirmed local output in Event Viewer under Applications and Services Logs > Microsoft > Windows > Sysmon > Operational, then watched the same entries appear in the manager dashboard. This proved end-to-end flow from the agent on the machine to the central display.
I created a custom rule that elevates PowerShell execution to a high-priority alert. I tested by launching a simple PowerShell command, observed the event IDs and fields, and verified the alert fired in the dashboard.
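Below is a minimal sketch of what that kind of elevation rule can look like in local_rules.xml. It assumes the Sysmon channel is already forwarded from the agent and decoded into win.eventdata.* fields; the rule ID 100120, the level, and the sysmon_event1 group reference are illustrative choices, not the exact rule I deployed.

```bash
sudo tee -a /var/ossec/etc/rules/local_rules.xml > /dev/null <<'EOF'
<group name="windows,powershell,local,">
  <!-- Illustrative rule: raise the priority of PowerShell process creations seen by Sysmon -->
  <rule id="100120" level="12">
    <if_group>sysmon_event1</if_group>
    <field name="win.eventdata.image">powershell.exe$</field>
    <description>PowerShell execution captured by Sysmon (elevated for review)</description>
  </rule>
</group>
EOF
sudo systemctl restart wazuh-manager
```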
“Map Sysmon fields to your decoders early. That saves time when you author more detections.”
| Action | Where to check | Expected result |
|---|---|---|
| Install Sysmon | Windows Event Viewer | Sysmon events appear locally |
| Forward logs | Manager dashboard | Same events visible upstream |
| Create custom rule | Rule file on manager | High-priority alert on PowerShell |
| Map fields | Decoder config | Reusable tokens for future detections |
My edits concentrated on compact, testable rules that trigger on specific tool signatures and behaviors.
I add a focused custom rule in /var/ossec/etc/rules/local_rules.xml to detect PowerShell misuse and reconnaissance tools like Nmap, Ncat, and Nping. I place new entries under a clear comment block so future changes stay traceable.
How I validate: restart the manager after changes, run a known command to trigger the pattern, and inspect the alert details in the dashboard. I tune severity and conditional fields to reduce noise while keeping meaningful alerts.
When a persistent Linux file monitoring problem appeared, I bypassed a faulty server sync by editing the agent’s local ossec.conf file. That local edit restored FIM checks without a full server-side rollback.
“Keep a versioned copy of every file you edit so you can revert fast if a detection change breaks coverage.”
| Change | File edited | Purpose |
|---|---|---|
| Detect Nmap family | /var/ossec/etc/rules/local_rules.xml | Alert on process names and command patterns |
| PowerShell activity | /var/ossec/etc/rules/local_rules.xml | Elevate suspicious script execution |
| FIM persistence fix | Agent local ossec.conf | Bypass server sync to restore file checks |
| Validation | Manager restart & event replay | Confirm parsing and dashboard alerts |
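I have not reproduced my exact edit here, but as a rough sketch of the idea: defining the FIM scope directly in the agent's local ossec.conf, instead of relying on the centrally synced configuration, looks roughly like this. The directories and frequency are illustrative, and the block belongs inside the agent's existing <ossec_config> section.

```bash
# Back up the agent config before touching it
sudo cp /var/ossec/etc/ossec.conf /var/ossec/etc/ossec.conf.bak

# Example syscheck block to place inside <ossec_config> (values are illustrative)
cat <<'EOF'
<syscheck>
  <disabled>no</disabled>
  <frequency>3600</frequency>
  <directories check_all="yes" report_changes="yes">/etc,/usr/bin,/usr/sbin</directories>
</syscheck>
EOF

# Apply the change
sudo systemctl restart wazuh-agent
```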
I split detection logic between fast agent checks and heavier indexed queries to balance speed and precision. This hybrid design keeps immediate matching near the endpoint and reserves broader analytic searches for the indexed store.
Which detections live where? I run simple string matches and login patterns at the agent for instant action. I convert standardized rules into Elastalert queries for periodic scans against Elasticsearch when I need richer context.
That mix supports my security operations center goals without overwhelming a single engine. It also makes alerts more meaningful and reduces noise during a case review.
I verify that every field a Sigma rule references exists in my pipeline and mappings. I test converted rule runtime, confirm output includes actionable context, and log the source of truth for reproduction.
“Keep fast-path detections lean, and let indexed queries do the heavy lifting for hunts.”
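As a sketch of that conversion path, assuming the legacy sigmac compiler from the SigmaHQ tooling and a Winlogbeat-style field mapping (the rule filename is a placeholder, and sigma-cli is the newer alternative):

```bash
# Install the legacy Sigma tooling
pip install sigmatools

# Convert a Sigma rule into an ElastAlert rule using the winlogbeat field mapping
sigmac -t elastalert -c winlogbeat \
  rules/windows/process_creation/suspicious_powershell.yml \
  > elastalert/rules/suspicious_powershell.yaml
```

After conversion I still check that every referenced field exists in the index mapping, as described above.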
I tied global threat feeds into the dashboard so file indicators show context the moment an event appears. That enrichment gave me quick scoring and references for every suspicious file that touched the pipeline.
I integrated VirusTotal as an automated lookup so new files are checked against a global source before analysts act. I validated the flow using the EICAR test file and observed a high-severity alert when multiple engines flagged the file.
Validation mattered: the dashboard showed engine verdicts, a risk score, and links back to the original detection event. I confirmed that both Windows and Linux logs captured lookup metadata for auditing.
“Enrichment turned raw logs into prioritized alerts and helped me decide whether to run a sandbox or issue a containment command.”
| Check | Where | Outcome |
|---|---|---|
| Test file | EICAR on endpoint | High-severity alert and engine hits |
| Logs preserved | Endpoint & server logs | Lookup metadata for audits |
| Rate limits | Integration config | Stability and key rotation plan |
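The Wazuh-side wiring for that lookup is an <integration> block on the manager. A minimal sketch, assuming the API key placeholder is replaced with a real key and that FIM (syscheck) alerts are the trigger source:

```bash
# ossec.conf accepts additional <ossec_config> sections, so the block can be appended
sudo tee -a /var/ossec/etc/ossec.conf > /dev/null <<'EOF'
<ossec_config>
  <integration>
    <name>virustotal</name>
    <api_key>YOUR_VIRUSTOTAL_API_KEY</api_key>  <!-- placeholder -->
    <group>syscheck</group>                     <!-- enrich new/changed file alerts -->
    <alert_format>json</alert_format>
  </integration>
</ossec_config>
EOF
sudo systemctl restart wazuh-manager
```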
I simulated an SSH credential attack to measure how detection triggered an automated firewall block and produced an auditable trail.
I ran a brute-force from the Kali host against the Linux server. Repeated failed logins generated a Level 10 alert that invoked Active Response.
The manager issued a command to add 10.0.2.15 to the server firewall. I confirmed connectivity by pinging the host: reachable before the attack and “Destination host unreachable” after the block.
Both the manager and the local log files captured the event, the firewall change, and timestamps so the case could be reviewed later.
I tuned a threshold-based rule so only repeated failures within a short time window raise a high-severity alert. That keeps legitimate users from being blocked on the network.
I also added rollback guidance to remove a block if the case is a false positive and tested manager queue behavior under bursts so automation stays stable.
“Automated containment worked fast, but careful thresholds and clear artifacts made the action safe and auditable.”
| Test | Expected | Outcome |
|---|---|---|
| Attack | Level 10 alert & block | IP blocked; ping failed |
| Verification | Logs show command and file change | Manager and host logs preserved |
| Tuning | Minimize false positives | Thresholds and cool-downs applied |
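A sketch of the containment wiring on the manager, assuming the stock firewall-drop active-response command that ships with Wazuh and a simple level-based trigger; the timeout and the hydra test line are illustrative.

```bash
sudo tee -a /var/ossec/etc/ossec.conf > /dev/null <<'EOF'
<ossec_config>
  <active-response>
    <disabled>no</disabled>
    <command>firewall-drop</command>  <!-- predefined command: blocks the source IP -->
    <location>local</location>        <!-- run on the agent that raised the alert -->
    <level>10</level>                 <!-- matches the Level 10 brute-force alert -->
    <timeout>600</timeout>            <!-- auto-unblock after 10 minutes -->
  </active-response>
</ossec_config>
EOF
sudo systemctl restart wazuh-manager

# Controlled test from the Kali VM (wordlist and target are placeholders)
# hydra -l testuser -P /usr/share/wordlists/rockyou.txt ssh://<linux-endpoint-IP>
```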
I treated every outage like a small case: collect facts, name the hypothesis, and test fixes.
I diagnosed lost traffic by testing each path and reviewing VirtualBox NIC modes. NAT blocked some east‑west flows while NAT Network behaved differently, so I mapped connectivity before I changed anything.
A host antivirus (Kaspersky) silently filtered inter‑VM packets. I created a Host-Only management plane so I always had a reliable channel for updates and remote checks.
A persistent Linux FIM alert turned out to be a ghost failure. I fixed it by editing the agent’s local ossec.conf file to bypass the broken central sync and restored expected file checks.
I standardized a service restart checklist and used systemctl to verify each service state. That avoided inconsistent states after updates.
To detect when config sync partially applied, I compare component names and version metadata and look for missing entries in the manager. This small check saved a lot of wasted debugging time.
I tamed log volume by filtering chatty sources while keeping the signals that matter. After each fix I reran the test case and confirmed critical rules still fired and that alerting information reached the dashboard.
“Diagnose methodically, keep a reliable management channel, and close the loop by retesting every fix.”
| Issue | Action | Outcome |
|---|---|---|
| Inter‑VM traffic loss | Map NIC modes; add Host-Only | Stable management connectivity |
| Ghost FIM failure | Edit agent local ossec.conf | File monitoring resumed |
| Excessive logs | Filter noisy sources; preserve alerts | Cleaner logs; key alerts remain |
My focus shifted from a single prototype to a repeatable operational design. I balanced capacity, alerting, and validation so the environment could scale without breaking daily work.
I keep rule sets lean by reviewing, de‑duplicating, versioning, and retiring old entries. This lowers noise and speeds triage.
Alert routing sends urgent items to Slack and creates Jira tickets for tracked investigation. Less urgent items queue as email digests so analysts can schedule work.
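For the Slack leg of that routing, Wazuh ships a slack integration that forwards alerts above a chosen level to an incoming webhook. A minimal sketch with the webhook URL as a placeholder:

```bash
sudo tee -a /var/ossec/etc/ossec.conf > /dev/null <<'EOF'
<ossec_config>
  <integration>
    <name>slack</name>
    <hook_url>https://hooks.slack.com/services/XXX/YYY/ZZZ</hook_url>  <!-- placeholder -->
    <level>10</level>                 <!-- only urgent alerts reach the channel -->
    <alert_format>json</alert_format>
  </integration>
</ossec_config>
EOF
sudo systemctl restart wazuh-manager
```

Jira has no bundled Wazuh integration, so ticket creation typically goes through a custom integration script or the alert JSON feed.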
I schedule content updates and validate them in the Wazuh dashboard before promotion. I also back up key files like ossec.conf and keep a short rollback plan for every change.
I distribute agents across multiple managers and place managers regionally to reduce latency. This prevents a thundering herd when services restart.
I set per‑host log caps to protect ingestion pipelines and monitor agent flaps so I can act fast on instability.
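On the agent side, a per-host cap can be expressed with the client buffer settings in ossec.conf. A sketch with illustrative numbers, placed inside the agent's <ossec_config> section:

```bash
# Example <client_buffer> block for an agent's ossec.conf
cat <<'EOF'
<client_buffer>
  <disabled>no</disabled>
  <queue_size>5000</queue_size>               <!-- events held locally during bursts -->
  <events_per_second>500</events_per_second>  <!-- throttle toward the manager -->
</client_buffer>
EOF
```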
Periodic red‑team exercises mapped to MITRE ATT&CK expose gaps. I use those results to tune thresholds and improve the detection set.
| Topic | Action | Benefit |
|---|---|---|
| Manager distribution | Agents attach to nearest collector | Lower latency; steady ingestion |
| Alert routing | Slack for urgent; Jira for tracking | Faster response; auditable cases |
| Content updates | Validate in dashboard; backup ossec.conf | Safe promotion; quick rollback |
I completed the build by proving reliable telemetry, verified lookups, and automated containment across the environment. The final state is a compact operations center that produced testable alerts, audit trails, and repeatable responses.
Key value: building the project taught me how to turn obstacles into durable runbooks and safer practices. The Wazuh dashboard made it fast to verify changes and to confirm that agents and detections behaved as expected.
Useful artifacts for triage were process logs, network captures, and file hashes. I stored configs and my custom source on GitHub so others can fork and adapt the design. I plan to add more scenarios and deeper integrations over time.
If you want to collaborate or ask questions, reach out on LinkedIn and review the repo for concrete configs and notes.
I planned a lightweight manager on Ubuntu, Windows and Linux endpoints, and an attacker VM. I separated management traffic on a host-only adapter and used bridged networking for realistic traffic. That split keeps monitoring stable while letting me test real network scenarios.
I give the manager at least 2 CPUs, 4–8 GB RAM, and 40 GB disk. Agents run with 1 CPU and 2–4 GB RAM each. Heavy tasks like Sigma translation or VirusTotal enrichment get more memory. Those targets keep the lab responsive without wasting host resources.
I follow the official install, enable and test the Wazuh service, then restrict dashboard access to HTTPS, use strong certs, and limit IPs via firewall rules. I also disable unused services and enable automatic updates for the OS and manager.
For Windows I run the MSI and register the agent with the manager key; for Linux I install the package and use agent_control to register and check status. I verify version compatibility, confirm heartbeat messages, and watch the manager dashboard for agent health.
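For the Linux path, a sketch of that enrollment flow, assuming the Wazuh apt repository is already configured and using a placeholder manager IP:

```bash
# Run as root: the package reads WAZUH_MANAGER at install time and pre-fills enrollment
WAZUH_MANAGER="10.0.2.10" apt-get install -y wazuh-agent

# Start the agent, then confirm registration from the manager
systemctl enable --now wazuh-agent
/var/ossec/bin/agent_control -l     # run on the manager; the new agent should show as Active
```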
Sysmon provides rich process, network, and file activity needed for reliable detection. I use the SwiftOnSecurity profile as a baseline, then tune it to reduce noise. Once installed, I confirm Sysmon events arrive in the manager and map to meaningful rules.
I write local rules for environment-specific needs like detecting PowerShell abuse, Nmap scans, or proprietary scripts. I convert Sigma rules for broader behavioral detections when I want standardized, platform-agnostic coverage that I can reuse across projects.
I always back up ossec.conf before edits, make changes on a test manager, and restart the service to validate. For per-host needs I prefer agent-local ossec.conf modifications, but I document every change to avoid drift and config conflicts.
I select relevant Sigma signatures, convert them using a translator tool, then map fields to the manager schema. I test by generating representative events, tweak the detection logic, and then deploy to avoid false positives.
I integrate VirusTotal API lookups in alert workflows to check hashes and URLs. When alerts trigger, enrichment adds context to the dashboard so I can triage faster. I rate-limit API calls to avoid quotas and cache results for common indicators.
I configure high-confidence rules to trigger active-response scripts that add IPs to firewall deny lists. I test blocks with controlled SSH brute-force attempts and include whitelists and cooldown windows to reduce accidental lockouts.
I adjust rule severity, use aggregation windows, and suppress repetitive alerts tied to benign processes. I run weekly reviews of alert logs, raise thresholds for noisy detections, and create exception lists for known safe behavior.
I checked adapter types, resolved IP conflicts, disabled third-party host firewalls, and ensured time sync across systems. For FIM ghost errors I validated file permissions and restarted the manager. When configs failed to sync, I re-registered agents and verified keys.
I separate managers for load balancing, offload heavy enrichment to dedicated workers, and use regional manager designs for segmented networks. I also integrate alert routing to email, Slack, or Jira, and keep Sigma and rule libraries updated.
I use EICAR for antivirus test alerts, scripted PowerShell abuse patterns, and controlled Nmap scans. I log each test, monitor resulting alerts, and refine rules until detections match expected behavior without causing noise.
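For repeatability, the EICAR drop can be scripted. This sketch assumes the target directory is covered by FIM so the new file raises a syscheck alert for the VirusTotal lookup to enrich; the path is a placeholder.

```bash
# Write the standard EICAR test string into a FIM-monitored directory
printf '%s' 'X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*' \
  | sudo tee /root/eicar.com > /dev/null

# Remove the file once the alert and enrichment have been confirmed
sudo rm -f /root/eicar.com
```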