This page explains how I planned and executed a compact lab deployment to gain clear security visibility.
The central server combines the Wazuh manager, which analyses agent telemetry, with Filebeat, which forwards alerts and archived events to the Wazuh indexer. The dashboard presents actionable data and makes triage faster.
I outline two installation paths: a single all-in-one OVA for quick labs and a manual route for deeper learning. Recommended resources for the OVA are 4 CPU cores, 8 GB RAM and 50 GB storage so the indexer and dashboard stay responsive.
The guide maps each component to its role, shows the sequence you’ll follow — planning, installation choice, core configuration, first-boot checks, agent rollout and optional NIDS — and flags where precise commands and snippets will appear.
Expect immediate benefits: centralised information, faster triage and clearer insight into risky behaviour. Later sections cover maintenance tasks for indices, storage and performance.
Before installing anything, I sketched clear objectives so each component had a measurable role.
My primary goal was broad visibility across hosts, timely alerts and concrete evidence of attacks or misconfigurations. These outcomes defined success: dashboard access, agents connected and initial events flowing.
Security goals were simple and testable: detect suspicious activity, collect forensic data and reduce false positives. I kept the scope small so validation remained fast.
I confirmed CPU and RAM headroom so the system would not struggle while indexing. A static address plan reserved entries for the server, Wazuh indexer and dashboard so agents can always reach the manager.
| Item | Recommendation | Reason |
|---|---|---|
| CPU / RAM | 4 vCPU / 8 GB+ | Headroom for indexing and rule processing |
| Addressing | Static IPs + DNS entries | Reliable agent‑to‑server connectivity |
| Ports | Open required ports per guide | Firewalls can block component communication |
| Credential handling | Keystore + certificates | Encrypted management and indexer access |
Your choice of installation method sets the pace: rapid verification with a bundled appliance or a measured, production‑like build that teaches component relationships and upgrades.
The OVA bundles the indexer, manager and dashboard so you can validate indexing, alerting and visualisation within minutes.
Import the OVA, allocate at least 4 CPU cores, 8 GB RAM and 50 GB storage, set Display to VMSVGA and attach one NIC to LAN (DHCP) and a second to the capture/SPAN interface with Promiscuous Mode set to Allow All.
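If you prefer to script the VirtualBox side, a minimal sketch along these lines covers the same settings; the VM name, host adapter and internal network name (wazuh, eth0, span) are placeholders for this lab, so adjust them to your environment.

```bash
# Import the appliance and apply the sizing and NIC layout described above.
VBoxManage import wazuh.ova --vsys 0 --vmname wazuh
VBoxManage modifyvm wazuh --cpus 4 --memory 8192 --graphicscontroller vmsvga
VBoxManage modifyvm wazuh --nic1 bridged --bridgeadapter1 eth0                    # LAN / management
VBoxManage modifyvm wazuh --nic2 intnet --intnet2 span --nicpromisc2 allow-all    # capture/SPAN
```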
Default logins are admin/admin for the dashboard and wazuh-user/wazuh on the console. If the dashboard is “not yet ready”, restarting wazuh-indexer and wazuh-manager services usually fixes it.
Installing each node separately forces you to handle certificates, keystores and service dependencies.
This installation method mirrors real deployments: the Wazuh server, Wazuh indexer and dashboard live on distinct hosts and can be upgraded independently.
Plan CPU, RAM and storage to avoid indexer bottlenecks and keep versions consistent: manager, agents and dashboard must remain compatible to prevent unexpected behaviour.
Pick the option that fits your time, resources and learning goals, and document every step so migration or replication is straightforward.
Stepwise commands add trusted repositories, bring up the manager and install Filebeat so events flow to the indexer.
Add repositories and keys. On Debian, install gnupg and apt-transport-https, import the GPG key with curl | gpg and add: deb [signed-by=/usr/share/keyrings/wazuh.gpg] https://packages.wazuh.com/4.x/apt/ stable main. Run apt-get update.
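As a concrete sketch of those Debian steps (run as root; the key URL follows the standard packages.wazuh.com layout, so verify it against the current documentation for your release):

```bash
apt-get install -y gnupg apt-transport-https
curl -s https://packages.wazuh.com/key/GPG-KEY-WAZUH \
  | gpg --no-default-keyring --keyring gnupg-ring:/usr/share/keyrings/wazuh.gpg --import
chmod 644 /usr/share/keyrings/wazuh.gpg
echo "deb [signed-by=/usr/share/keyrings/wazuh.gpg] https://packages.wazuh.com/4.x/apt/ stable main" \
  | tee /etc/apt/sources.list.d/wazuh.list
apt-get update
```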
On RHEL, import the RPM GPG key and create /etc/yum.repos.d/wazuh.repo with baseurl https://packages.wazuh.com/4.x/yum/.
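The RHEL-family equivalent, again as a hedged sketch with the same key URL assumption:

```bash
rpm --import https://packages.wazuh.com/key/GPG-KEY-WAZUH
cat > /etc/yum.repos.d/wazuh.repo <<'EOF'
[wazuh]
name=EL-$releasever - Wazuh
baseurl=https://packages.wazuh.com/4.x/yum/
gpgkey=https://packages.wazuh.com/key/GPG-KEY-WAZUH
gpgcheck=1
enabled=1
EOF
```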
Install the Wazuh manager and Filebeat with apt-get -y install wazuh-manager filebeat or yum/dnf -y install wazuh-manager filebeat. These commands require root privileges.
Download the provided /etc/filebeat/filebeat.yml, set output.elasticsearch hosts to ["https://<indexer-address>:9200"], set the protocol to https and use the Filebeat keystore for credentials. Load the alerts template and enable the Wazuh module.
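A sketch of that Filebeat preparation; the template URL and the placeholders (indexer address, <INDEXER_USERNAME>, <INDEXER_PASSWORD>) are illustrative and should be checked against your release:

```bash
curl -so /etc/filebeat/filebeat.yml https://packages.wazuh.com/4.x/tpl/wazuh/filebeat/filebeat.yml
filebeat keystore create
echo "<INDEXER_USERNAME>" | filebeat keystore add username --stdin --force
echo "<INDEXER_PASSWORD>" | filebeat keystore add password --stdin --force
# In filebeat.yml, point the output at the indexer over HTTPS, e.g.:
#   output.elasticsearch:
#     hosts: ["https://10.0.0.1:9200"]
#     protocol: https
# Then load the Wazuh alerts template and enable the Wazuh Filebeat module per the official guide.
```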
Deploy certificates from wazuh-certificates.tar into /etc/filebeat/certs, rename to filebeat.pem and filebeat-key.pem and tighten permissions. Store indexer username/password in the wazuh-keystore.
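For the certificate step, a sketch that assumes wazuh-certificates.tar already exists and that the node was named wazuh-server when the certificates were generated; the wazuh-keystore calls mirror the credential step described above:

```bash
NODE_NAME=wazuh-server
mkdir -p /etc/filebeat/certs
tar -xf ./wazuh-certificates.tar -C /etc/filebeat/certs \
  ./$NODE_NAME.pem ./$NODE_NAME-key.pem ./root-ca.pem
mv /etc/filebeat/certs/$NODE_NAME.pem /etc/filebeat/certs/filebeat.pem
mv /etc/filebeat/certs/$NODE_NAME-key.pem /etc/filebeat/certs/filebeat-key.pem
chmod 500 /etc/filebeat/certs
chmod 400 /etc/filebeat/certs/*
chown -R root:root /etc/filebeat/certs

# Store the indexer credentials for the manager (placeholders to replace).
/var/ossec/bin/wazuh-keystore -f indexer -k username -v "<INDEXER_USERNAME>"
/var/ossec/bin/wazuh-keystore -f indexer -k password -v "<INDEXER_PASSWORD>"
```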
Edit /var/ossec/etc/ossec.conf: add an <indexer> block with host entries (https://10.0.0.1:9200), and reference /etc/filebeat/certs/root-ca.pem, filebeat.pem and filebeat-key.pem.
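As a reference for that edit, a sketch of the <indexer> block written to a scratch file; merge it by hand inside the existing <ossec_config> element, with the address below being the example host from above:

```bash
cat <<'EOF' > /tmp/ossec-indexer-block.xml
<indexer>
  <enabled>yes</enabled>
  <hosts>
    <host>https://10.0.0.1:9200</host>
  </hosts>
  <ssl>
    <certificate_authorities>
      <ca>/etc/filebeat/certs/root-ca.pem</ca>
    </certificate_authorities>
    <certificate>/etc/filebeat/certs/filebeat.pem</certificate>
    <key>/etc/filebeat/certs/filebeat-key.pem</key>
  </ssl>
</indexer>
EOF
```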
| Step | Command / File | Purpose |
|---|---|---|
| Repository | apt or yum repo file | Trust packages and updates |
| Install | apt-get/yum install wazuh-manager filebeat | Deploy manager and shipper |
| Certificates | /etc/filebeat/certs/* | TLS for indexer communications |
| Configuration | /var/ossec/etc/ossec.conf | Indexer hosts and cert paths |
| Validation | systemctl start/enable + filebeat test output | Confirm TLS and version compatibility |
Ports to allow: ensure manager↔indexer and agents↔manager ports are open so components communicate without drops. Verify service status and run filebeat test output; TLS v1.3 and the expected indexer version should appear.
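Putting that validation together, roughly:

```bash
systemctl daemon-reload
systemctl enable --now wazuh-manager filebeat
systemctl status wazuh-manager filebeat --no-pager
filebeat test output    # expect the connection, TLS handshake and indexer version checks to pass
```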
When the appliance finished booting, the priority was to reach the dashboard and verify the data pipeline.
Accessing the Wazuh Dashboard and logging in
I browse to the assigned address (https://<assigned_ip>) and log in to the Wazuh dashboard with the default credentials admin/admin. After confirming the page renders, I plan an immediate password rotation.
The first technical check is Filebeat's test output. A successful run shows a successful TLS handshake and version 7.10.2 when talking to the Wazuh indexer. That result proves the pipeline from server to indexer is working.
Note that empty widgets on a new page are normal; they populate once agents send data. I keep a brief record of these steps so the same checks can be repeated after changes, ensuring the core service state is a known baseline before onboarding agents.
Agent deployment starts in the UI where a tailored install command is created for the target endpoint.
Open Deploy a new agent in the Wazuh dashboard, select the operating system and architecture, then set the server address. Copy the generated script to the host and run it over SSH with administrative rights.
Run the following command sequence on Debian‑family systems: systemctl daemon-reload; systemctl enable wazuh-agent; systemctl start wazuh-agent. Wait a few minutes and check the dashboard’s agent list for the endpoint status.
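A typical generated command for a Debian-family host looks roughly like the sketch below; the package version, manager address and agent name are placeholders copied from the dashboard screen:

```bash
wget https://packages.wazuh.com/4.x/apt/pool/main/w/wazuh-agent/wazuh-agent_4.7.0-1_amd64.deb
sudo WAZUH_MANAGER='10.0.0.2' WAZUH_AGENT_NAME='web-01' dpkg -i ./wazuh-agent_4.7.0-1_amd64.deb
sudo systemctl daemon-reload
sudo systemctl enable wazuh-agent
sudo systemctl start wazuh-agent
```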
Generate simple test events on the host, then use Discover to search by rule.id. For example, filter for rule.id:5760 to locate failed SSH authentication events. Inspect the agent detail view for inventory, vulnerabilities and configuration state to confirm the endpoint profile is complete.
“Confirm version parity between manager and agents to avoid unexpected behaviour.”
An optional NIDS can copy traffic into the SIEM so packet-level detection complements host telemetry.
Why add a mirror port? Passive capture reveals east–west flows and complements agent data. It is a low-risk option that increases visibility for threat hunting.
Add an extra interface to pfSense and bridge each subnet to the SPAN target so copies of traffic reach the SIEM/IDS VM. In VirtualBox, attach two NICs: LAN for management and the SPAN network for capture.
Set the capture NIC's Promiscuous Mode to Allow All so frames from multiple segments are visible.
Build and install Suricata with make install-full and edit suricata.yaml to enable EVE JSON logging to /var/log/suricata/eve.json. Configure AF_PACKET on eth1 for high-speed capture.
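A hedged sketch of that build and capture setup; the capture interface (eth1) and paths are lab placeholders, and the configure options should be checked against your Suricata release:

```bash
# From the unpacked Suricata source tree:
./configure --prefix=/usr --sysconfdir=/etc --localstatedir=/var
make && sudo make install-full      # install-full also pulls in a default ruleset
# In /etc/suricata/suricata.yaml: enable the eve-log output to /var/log/suricata/eve.json
# and set the af-packet interface to eth1, then run against that interface:
sudo suricata -c /etc/suricata/suricata.yaml --af-packet=eth1 -D
```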
Tuning and housekeeping matter: prune logs older than 30 days with cron jobs, monitor CPU/memory to reduce packet drops and trim rulesets to balance detection and performance.
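For example, a cron entry along these lines (added with crontab -e) covers the 30-day log pruning:

```bash
# m h dom mon dow  command
0 2 * * * find /var/log/suricata -name 'eve.json*' -mtime +30 -delete
```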
Validation: generate test traffic, confirm Suricata writes to eve.json and check that events reach the Wazuh indexer for search and correlation.
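One low-risk way to exercise that path, assuming jq is installed on the sensor:

```bash
# Fetch a well-known NIDS test page to trigger a benign signature...
curl -s http://testmynids.org/uid/index.html > /dev/null
# ...then confirm Suricata logged an alert for it.
tail -n 200 /var/log/suricata/eve.json \
  | jq -c 'select(.event_type=="alert") | {timestamp, signature: .alert.signature}'
```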
A clustered design shares processing and removes a single point of failure, but it requires consistent configuration and routine checks across hosts.
When to stay single‑node: keep one server for small labs or low event volume. Move to multiple nodes when alerting delays or CPU spikes appear.
For a multi‑node cluster, edit /var/ossec/etc/ossec.conf and add a <cluster> stanza. Set <name>, a unique <node_name>, and <node_type> as master or worker.
Generate a 32‑character key (for example: openssl rand -hex 16) and insert it in the <key> field on every node. Bind to the correct address, set <port> to 1516 and ensure that firewalls allow that port.
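A reference sketch of the resulting stanza, written to a scratch file (master node shown); reuse the generated key on every node and substitute your own names and addresses before merging it inside <ossec_config>:

```bash
openssl rand -hex 16     # 32-character key to paste into <key> on all nodes
cat <<'EOF' > /tmp/cluster-stanza.xml
<cluster>
  <name>wazuh</name>
  <node_name>master-node</node_name>
  <node_type>master</node_type>
  <key>REPLACE_WITH_32_CHAR_KEY</key>
  <port>1516</port>
  <bind_addr>0.0.0.0</bind_addr>
  <nodes>
    <node>10.0.0.2</node>
  </nodes>
  <hidden>no</hidden>
  <disabled>no</disabled>
</cluster>
EOF
```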
Restart the service on each host after changes. Validate membership and versions with the following command:
/var/ossec/bin/cluster_control -l
| Task | Action | Why it matters |
|---|---|---|
| Cluster stanza | Edit /var/ossec/etc/ossec.conf | Defines role and peers for discovery |
| Key | Generate 32 hex chars, apply to all nodes | Secures intra‑cluster messages |
| Port | Open 1516 between hosts | Enables membership and messaging |
| Validation | Run cluster_control -l | Shows master/worker list, versions and addresses |
Document components, key rotation steps and a short runbook for adding nodes. Add simple service checks (systemd status and health scripts) to catch issues early in a multi‑node deployment.
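A minimal health-check sketch that can run from cron (adjust the service list to each host's role):

```bash
#!/bin/bash
# Flag any stopped service and a failing cluster query; output goes to stderr
# so cron mail or logging picks it up.
for svc in wazuh-manager wazuh-indexer wazuh-dashboard filebeat; do
  systemctl is-active --quiet "$svc" || echo "$(date -Is) ${svc} is not running" >&2
done
/var/ossec/bin/cluster_control -l >/dev/null 2>&1 || echo "$(date -Is) cluster query failed" >&2
```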
A clear set of dashboards and saved searches turns raw alerts into repeatable threat hunts.
Saving queries in Discover is a simple habit that pays back. I build focused searches (for example, rule.id:5760) and save them so they become tiles on a dashboard.
Dashboards surface the events and data that matter most. Create panels for authentication failures, suspicious process activity and Suricata detections; the Wazuh dashboard then serves as the central page for these views.
Prioritise findings by severity, then open the agent page for inventory and details. Research CVEs linked to alerts and decide remediation order based on exposure and exploitability.
Integrate Windows Sysmon by adding the community ruleset, installing Sysmon on endpoints and grouping Windows agents so enriched telemetry flows with standard agent logs.
For housekeeping, add crontab tasks to rotate Suricata and Wazuh logs, prune indices older than 30 days and test file retention scripts. Disable the package repo until scheduled upgrades are planned.
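As one hedged example of the index pruning, using the indexer's REST API; the index name pattern (wazuh-alerts-4.x-YYYY.MM.DD) and the credentials are placeholders to verify against your deployment before automating anything destructive:

```bash
OLD=$(date -d '30 days ago' +%Y.%m.%d)
curl -sk -u admin:'<PASSWORD>' -X DELETE "https://10.0.0.1:9200/wazuh-alerts-4.x-${OLD}"
```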
“Turn ad-hoc hunts into repeatable workflows by saving searches and baking them into dashboards.”
This section summarises outcomes and practical next steps after the installation and validation pass.
I designed a compact system, chose an installation path that matched goals and completed a clean configuration so telemetry is reliable. The Wazuh indexer, server and dashboard now form a cohesive monitoring capability that covers key endpoints.
Agent rollout finished with enrolment checks, version parity and secure key storage in keystores to keep the service stable. An optional NIDS was added for packet‑level context where required.
Plan for scaling: add a node and enable a cluster only when load demands it, and use simple cluster_control checks. Maintain dashboards, saved searches and documentation so management stays predictable. Finally, schedule periodic reviews of version, capacity and configuration as the environment grows.