This page explains how I planned and executed a compact lab deployment to gain clear security visibility.
The central server combines the Wazuh manager, which analyses agent telemetry, with Filebeat, which forwards alerts and archived events to the Wazuh indexer. The dashboard presents actionable data and makes triage faster.
I outline two installation paths: a single all-in-one OVA for quick labs and a manual route for deeper learning. Recommended resources for the OVA are 4 CPU cores, 8 GB RAM and 50 GB storage so the indexer and dashboard stay responsive.
The guide maps each component to its role, shows the sequence you’ll follow — planning, installation choice, core configuration, first-boot checks, agent rollout and optional NIDS — and flags where precise commands and snippets will appear.
Expect immediate benefits: centralised information, faster triage and clearer insight into risky behaviour. Later sections cover maintenance tasks for indices, storage and performance.

Key Points
- One concise lab option is the all-in-one OVA for fast deployment.
- The manager, Filebeat and indexer form the core data flow to the dashboard.
- Resource sizing matters: 4 CPU cores, 8 GB RAM, 50 GB storage recommended.
- The guide follows repeatable steps to reduce guesswork and errors.
- Maintenance for indices and storage is essential for steady performance.
What I planned before installing Wazuh at home
Before installing anything, I sketched clear objectives so each component had a measurable role.
My primary goal was broad visibility across hosts, timely alerts and concrete evidence of attacks or misconfigurations. These outcomes defined success: dashboard access, agents connected and initial events flowing.
My home lab goals, scope and security outcomes
Security goals were simple and testable: detect suspicious activity, collect forensic data and reduce false positives. I kept the scope small so validation remained fast.
Prerequisites: hardware, operating system, IPs and required ports
I confirmed CPU and RAM headroom so the system did not struggle while indexing. A static address plan reserved entries for the server, wazuh indexer and dashboard to ensure agents always reach the manager.
- Choose the operating system family (Debian or RHEL) for the server host.
- Decide co‑location or distribution across nodes for later scaling.
- Plan admin access (root/sudo), keystore handling and certificate distribution.
- Ensure firewalls allow required ports between components before installation.
| Item | Recommendation | Reason |
|---|---|---|
| CPU / RAM | 4 vCPU / 8 GB+ | Headroom for indexing and rule processing |
| Addressing | Static IPs + DNS entries | Reliable agent‑to‑server connectivity |
| Ports | Open required ports per guide | Firewalls can block component communication |
| Credential handling | Keystore + certificates | Encrypted management and indexer access |
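The port list itself depends on your Wazuh version, but a minimal sketch of the allow rules looks like this, assuming the standard 4.x port assignments (1514 agent traffic, 1515 enrolment, 1516 cluster, 9200 indexer, 55000 server API, 443 dashboard). The loop only prints the firewalld commands so you can review them before running; on Debian hosts swap in the equivalent ufw rules.

```shell
# Assumed default Wazuh 4.x ports; verify against your version's documentation
WAZUH_PORTS="1514/tcp 1515/tcp 1516/tcp 9200/tcp 55000/tcp 443/tcp"

# Print the firewalld commands that would allow them (review, then run as root)
for p in $WAZUH_PORTS; do
  echo "firewall-cmd --permanent --add-port=$p"
done
echo "firewall-cmd --reload"
```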
Choosing an installation path: all‑in‑one OVA vs. manual stack
Your choice of installation method sets the pace: rapid verification with a bundled appliance or a measured, production‑like build that teaches component relationships and upgrades.
All‑in‑one OVA for quick lab trials
The OVA bundles the indexer, manager and dashboard so you can validate indexing, alerting and visualisation within minutes.
Import the OVA and allocate at least 4 CPU cores, 8 GB RAM and 50 GB storage. Set the Display adapter to VMSVGA, attach one NIC to the LAN (DHCP) and a second to the capture/SPAN interface, and set Promiscuous Mode on that capture NIC to Allow All.
Default logins are admin/admin for the dashboard and wazuh-user/wazuh on the console. If the dashboard is “not yet ready”, restarting wazuh-indexer and wazuh-manager services usually fixes it.
Manual installation for control and production‑like learning
Installing each node separately forces you to handle certificates, keystores and service dependencies.
This installation method mirrors real deployments: the Wazuh server, Wazuh indexer and dashboard live on distinct hosts and can be upgraded independently.
Resource considerations and version alignment
Plan CPU, RAM and storage to avoid indexer bottlenecks and keep versions consistent: manager, agents and dashboard must remain compatible to prevent unexpected behaviour.
Pick the option that fits your time, resources and learning goals, and document every step so migration or replication is straightforward.
Installing and configuring the Wazuh server, indexer and dashboard

Stepwise commands add trusted repositories, bring up the manager and install Filebeat so events flow to the indexer.
Add repositories and keys. On Debian, install gnupg and apt-transport-https, import the GPG key with curl | gpg and add: deb [signed-by=/usr/share/keyrings/wazuh.gpg] https://packages.wazuh.com/4.x/apt/ stable main. Run apt-get update.
Package installation
On RHEL, import the RPM GPG key and create /etc/yum.repos.d/wazuh.repo with baseurl https://packages.wazuh.com/4.x/yum/.
Install the Wazuh manager and Filebeat with apt-get -y install wazuh-manager filebeat or yum/dnf -y install wazuh-manager filebeat. These commands require root privileges.
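On a Debian-family host, the repository and install steps above come together as a sequence like the following. This is a sketch based on the Wazuh 4.x packaging documentation; run it as root and confirm the key URL and repo line against the docs for your version.

```shell
# Prerequisites and the Wazuh GPG signing key
apt-get install -y gnupg apt-transport-https
curl -s https://packages.wazuh.com/key/GPG-KEY-WAZUH \
  | gpg --no-default-keyring --keyring gnupg-ring:/usr/share/keyrings/wazuh.gpg --import
chmod 644 /usr/share/keyrings/wazuh.gpg

# Add the repository and refresh package lists
echo "deb [signed-by=/usr/share/keyrings/wazuh.gpg] https://packages.wazuh.com/4.x/apt/ stable main" \
  | tee /etc/apt/sources.list.d/wazuh.list
apt-get update

# Install the manager and the Filebeat shipper
apt-get install -y wazuh-manager filebeat
```

On RHEL-family hosts the equivalent is a `wazuh.repo` file plus `yum -y install wazuh-manager filebeat`, as described above.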
TLS, Filebeat and ossec.conf
Download the provided /etc/filebeat/filebeat.yml, set the output.elasticsearch hosts entry to ["https://&lt;indexer-ip&gt;:9200"] (substituting your indexer's address), use the https protocol and keep credentials in the Filebeat keystore. Load the alerts template and enable the Wazuh module.
Deploy certificates from wazuh-certificates.tar into /etc/filebeat/certs, rename to filebeat.pem and filebeat-key.pem and tighten permissions. Store indexer username/password in the wazuh-keystore.
Edit /var/ossec/etc/ossec.conf: add an <indexer> block with host entries (https://10.0.0.1:9200), and reference /etc/filebeat/certs/root-ca.pem, filebeat.pem and filebeat-key.pem.
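Assembled from the paths above, the `<indexer>` block can look like the following sketch (10.0.0.1 stands in for your indexer's address; verify element names against the Wazuh reference for your version):

```xml
<!-- Inside <ossec_config> in /var/ossec/etc/ossec.conf -->
<indexer>
  <enabled>yes</enabled>
  <hosts>
    <host>https://10.0.0.1:9200</host>
  </hosts>
  <ssl>
    <certificate_authorities>
      <ca>/etc/filebeat/certs/root-ca.pem</ca>
    </certificate_authorities>
    <certificate>/etc/filebeat/certs/filebeat.pem</certificate>
    <key>/etc/filebeat/certs/filebeat-key.pem</key>
  </ssl>
</indexer>
```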
| Step | Command / File | Purpose |
|---|---|---|
| Repository | apt or yum repo file | Trust packages and updates |
| Install | apt-get/yum install wazuh-manager filebeat | Deploy manager and shipper |
| Certificates | /etc/filebeat/certs/* | TLS for indexer communications |
| Configuration | /var/ossec/etc/ossec.conf | Indexer hosts and cert paths |
| Validation | systemctl start/enable + filebeat test output | Confirm TLS and version compatibility |
Ports to allow: ensure manager↔indexer and agents↔manager ports are open so components communicate without drops. Verify service status and run filebeat test output; TLS v1.3 and the expected indexer version should appear.
I set up my Wazuh network at home: first boot, access and basic checks
When the appliance finished booting, the priority was to reach the dashboard and verify the data pipeline.
Accessing the Wazuh Dashboard and logging in
I browse to the assigned address (https://&lt;assigned_ip&gt;) and log in with the default credentials admin/admin on the Wazuh dashboard. After confirming the page renders, I plan an immediate password rotation.
Confirming indexer connectivity and Filebeat output
The first technical check is Filebeat’s test output. A successful run shows a TLS handshake OK and version 7.10.2 when talking to the Wazuh indexer. That result proves the pipeline from server to indexer is working.
- Check systemctl for wazuh-manager and wazuh-indexer; restart both if the dashboard shows “not ready”.
- Ensure the manager writes alerts and archives, and that Filebeat ships them without errors in logs.
- Verify node time synchronisation and confirm certificates and keystore credentials match configured values.
- Confirm the firewall allows bidirectional traffic between manager and indexer as required by the chosen deployment.
Note that empty widgets on a new page are normal; they populate once agents send data. I keep a brief record of these steps so the same checks can be repeated after changes, ensuring the core service state is a known baseline before onboarding agents.
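Condensed into commands, the first-boot checks look roughly like this (run on the server as root; service names assume the standard packages):

```shell
# Confirm core services are active; restart if the dashboard reports "not ready"
systemctl status wazuh-manager wazuh-indexer --no-pager

# Verify the Filebeat -> indexer path: expect a TLS handshake and the indexer version
filebeat test output
```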
Deploying Wazuh agents and validating events in the dashboard
Agent deployment starts in the UI where a tailored install command is created for the target endpoint.
Generate and run the install snippet
Open Deploy a new agent in the Wazuh dashboard, select the operating system and architecture, then set the server address. Copy the generated script to the host and run it over SSH with administrative rights.
Start the service and confirm enrolment
Run the following command sequence on Debian‑family systems: systemctl daemon-reload; systemctl enable wazuh-agent; systemctl start wazuh-agent. Wait a few minutes and check the dashboard’s agent list for the endpoint status.
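For copy-paste convenience, that sequence on a Debian-family host is:

```shell
# Pick up the new unit file, then enable and start the agent
systemctl daemon-reload
systemctl enable wazuh-agent
systemctl start wazuh-agent

# Optional: confirm the agent service is running before checking the dashboard
systemctl status wazuh-agent --no-pager
```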
Validate events and investigate alerts
Generate simple test events on the host, then use Discover to search by rule.id. For example, filter for rule.id:5760 to locate failed SSH authentication events. Inspect the agent detail view for inventory, vulnerabilities and configuration state to confirm the endpoint profile is complete.
“Confirm version parity between manager and agents to avoid unexpected behaviour.”
- Repeat enrolment for critical hosts first.
- Verify the install commands generated in the UI match each endpoint’s operating system.
- Document versions and remedial steps for stale agents.
Optional NIDS: pfSense SPAN and Suricata feeding Wazuh
An optional NIDS can copy traffic into the SIEM so packet-level detection complements host telemetry.
Why add a mirror port? Passive capture reveals east–west flows and complements agent data. It is a low-risk option that increases visibility for threat hunting.
Adding the SPAN and VM interfaces
Add an extra interface to pfSense and bridge each subnet to the SPAN target so copies of traffic reach the SIEM/IDS VM. In VirtualBox, attach two NICs: LAN for management and the SPAN network for capture.
Set Promiscuous Mode on the capture NIC to Allow All so frames from multiple segments are visible.
Installing and configuring Suricata
Build and install Suricata with make install-full and edit suricata.yaml to enable EVE JSON logging to /var/log/suricata/eve.json. Configure AF_PACKET on eth1 for high-speed capture.
- Rules: enable et/open with suricata-update and schedule daily updates via crontab.
- Service: run Suricata as a systemd service so it auto-starts and is manageable with standard commands.
- Ingest: in ossec.conf add a <localfile> entry with <log_format>json</log_format> and <location>/var/log/suricata/eve.json</location> then restart the manager.
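The ingest entry from the list above is a short config fragment:

```xml
<!-- Inside <ossec_config> in /var/ossec/etc/ossec.conf on the manager -->
<localfile>
  <log_format>json</log_format>
  <location>/var/log/suricata/eve.json</location>
</localfile>
```

Restart the wazuh-manager service after adding it so Suricata events start flowing.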
Tuning and housekeeping matter: prune logs older than 30 days with cron jobs, monitor CPU/memory to reduce packet drops and trim rulesets to balance detection and performance.
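A minimal sketch of the 30-day pruning job, assuming logs live under /var/log/suricata; the snippet only prints the crontab line so you can review it before installing it with `crontab -e`:

```shell
# Prune rotated Suricata logs older than 30 days (path is an assumption)
LOG_DIR="/var/log/suricata"
PRUNE_CMD="find $LOG_DIR -name 'eve.json*' -mtime +30 -delete"

# Candidate crontab entry: run daily at 02:30
echo "30 2 * * * $PRUNE_CMD"
```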
Validation: generate test traffic, confirm Suricata writes to eve.json and that events reach the Wazuh indexer for search and correlation.
Scaling up: clustering, nodes and ongoing management
A clustered design shares processing and removes a single point of failure, but it requires consistent configuration and routine checks across hosts.
When to stay single‑node: keep one server for small labs or low event volume. Move to multiple nodes when alerting delays or CPU spikes appear.
Single‑node vs. multi‑node configuration
For a multi‑node cluster, edit /var/ossec/etc/ossec.conf and add a <cluster> stanza. Set <name>, a unique <node_name>, and <node_type> as master or worker.
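A sketch of that stanza for a worker node follows; the cluster name, node name and master address are placeholders, and the element set should be checked against the cluster reference for your Wazuh version:

```xml
<!-- Inside <ossec_config> in /var/ossec/etc/ossec.conf on a worker -->
<cluster>
  <name>wazuh</name>
  <node_name>worker-01</node_name>
  <node_type>worker</node_type>
  <key>REPLACE_WITH_32_HEX_CHARS</key>
  <port>1516</port>
  <bind_addr>0.0.0.0</bind_addr>
  <nodes>
    <node>10.0.0.1</node> <!-- master node address -->
  </nodes>
  <hidden>no</hidden>
  <disabled>no</disabled>
</cluster>
```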
Keys, roles, addresses and testing cluster status
Generate a 32‑character key (for example: openssl rand -hex 16) and insert it in the <key> field on every node. Bind to the correct address, set <port> to 1516 and ensure that firewalls allow that port.
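Generating the key is a one-liner: 16 random bytes hex-encoded yields exactly the 32 hexadecimal characters the `<key>` field expects.

```shell
# 16 random bytes -> 32 hex characters, suitable for the <key> field
CLUSTER_KEY=$(openssl rand -hex 16)
echo "$CLUSTER_KEY"
```

Paste the same value into the `<key>` field on every node; mismatched keys prevent nodes from joining.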
Restart the service on each host after changes. Validate membership and versions with the following command:
/var/ossec/bin/cluster_control -l
| Task | Action | Why it matters |
|---|---|---|
| Cluster stanza | Edit /var/ossec/etc/ossec.conf | Defines role and peers for discovery |
| Key | Generate 32 hex chars, apply to all nodes | Secures intra‑cluster messages |
| Port | Open 1516 between hosts | Enables membership and messaging |
| Validation | Run cluster_control -l | Shows master/worker list, versions and addresses |
Document components, key rotation steps and a short runbook for adding nodes. Add simple service checks (systemd status and health scripts) to catch issues early in a multi‑node deployment.
Dashboards, threat hunting and maintenance routines
A clear set of dashboards and saved searches turns raw alerts into repeatable threat hunts.
Saving queries in Discover is a simple habit that pays back. I build focused searches (for example, rule.id:5760) and save them so they become tiles on a dashboard.
Dashboards surface the events and data that matter most. Create panels for authentication failures, suspicious process activity and Suricata detections; the Wazuh dashboard then serves as the central page for these views.
Prioritise findings by severity, then open the agent page for inventory and details. Research CVEs linked to alerts and decide remediation order based on exposure and exploitability.
Extending visibility and housekeeping
Integrate Windows Sysmon by adding the community ruleset, installing Sysmon on endpoints and grouping Windows agents so enriched telemetry flows with standard agent logs.
For housekeeping, add crontab tasks to rotate Suricata and Wazuh logs, prune indices older than 30 days and test file retention scripts. Disable the package repo until scheduled upgrades are planned.
“Turn ad-hoc hunts into repeatable workflows by saving searches and baking them into dashboards.”
Conclusion
This section summarises outcomes and practical next steps after the installation and validation pass.
I designed a compact system, chose an installation path that matched goals and completed a clean configuration so telemetry is reliable. The Wazuh indexer, server and dashboard now form a cohesive monitoring capability that covers key endpoints.
Agent rollout finished with enrolment checks, version parity and secure key storage in keystores to keep the service stable. An optional NIDS was added for packet‑level context where required.
Plan for scaling: add a node and enable a cluster only when load demands it, and use simple cluster_control checks. Maintain dashboards, saved searches and documentation so management stays predictable. Finally, schedule periodic reviews of version, capacity and configuration as the environment grows.
