Home Lab 1 e4

Wazuh SIEM: A Comprehensive Guide to Installation, Configuration, and Advanced Threat Detection

Wazuh stands as a robust, open-source security platform, uniquely combining the capabilities of Security Information and Event Management (SIEM) and Extended Detection and Response (XDR).1 This dual nature allows it to offer comprehensive protection for endpoints and cloud workloads by monitoring, detecting, and alerting on a multitude of security events and incidents.1

This guide aims to provide a detailed, step-by-step walkthrough for understanding, installing, configuring, and utilizing the Wazuh platform, Version 4.12.0.4 A particular focus will be placed on practical threat detection, specifically for identifying network reconnaissance activities like Nmap scans and mitigating Distributed Denial of Service (DDoS) attacks. By grounding the instructions in a specific version, this guide ensures that commands and feature discussions remain relevant and accurate for users implementing this version.

I. Understanding Wazuh: Core Concepts and Architecture

A foundational understanding of Wazuh’s capabilities and its architectural components is essential before diving into deployment and configuration.

Detailed Explanation of Wazuh SIEM and XDR Capabilities

Wazuh’s power stems from its integration of SIEM and XDR functionalities. As a SIEM, it excels at collecting logs from a myriad of sources, normalizing this data for consistent analysis, correlating events to identify patterns, and providing real-time alerts for potential threats. This is complemented by threat visualization through intuitive dashboards and features to aid in compliance management.6

Beyond traditional SIEM, Wazuh incorporates XDR capabilities, extending its reach to endpoints and cloud environments for more proactive threat detection and response.1 This means Wazuh is not just about logging and alerting; it’s about providing a comprehensive security monitoring solution that can also initiate responses to detected incidents.1

Wazuh Architecture Deep Dive

The Wazuh platform is composed of several key components that work in concert to deliver its security monitoring and response features.

  • Wazuh Server (Manager): This is the central nervous system of the Wazuh deployment. It collects data from monitored endpoints (via Wazuh agents) and agentless devices (like firewalls or routers through syslog).2 The server processes this incoming data using a sophisticated analysis engine that employs decoders to parse logs and rules to identify security events. It leverages threat intelligence, including Indicators of Compromise (IOCs), to enhance detection accuracy.2 The Wazuh Server also centrally manages agent configurations and monitors their operational status.2 For larger environments, Wazuh Servers can be clustered to provide horizontal scalability and high availability.2 Key services within the Wazuh Server include the agent connection service (for communication with agents), the analysis engine (wazuh-analysisd), a RESTful API for management and integration, a cluster daemon for multi-node setups, and Filebeat, which is responsible for forwarding processed alerts and events to the Wazuh Indexer.2

  • Wazuh Indexer: This component is a highly scalable, full-text search and analytics engine specifically optimized for security data.9 Built upon OpenSearch (a fork of Elasticsearch, with Wazuh 4.12.0 upgrading to OpenSearch 2.19.1 4), the Indexer stores all alerts and archived events generated by the Wazuh Server.2 Data is stored as JSON documents, allowing for flexible and powerful querying.2 The Indexer can be deployed as a single node for smaller setups or as a multi-node cluster to ensure high availability, data redundancy, and increased query capacity for larger environments.9 It achieves this by distributing documents across different containers called shards, which are then distributed across the cluster nodes.9 The Wazuh Indexer maintains several distinct indices for different types of data:

    • wazuh-alerts: Stores alerts generated by the Wazuh server when an event triggers a rule of sufficient priority.
    • wazuh-archives: Stores all events received by the Wazuh server, regardless of whether they trigger a rule (if archiving is enabled).
    • wazuh-monitoring: Stores data related to Wazuh agent status over time.
    • wazuh-statistics: Stores performance metrics for the Wazuh server.2
  • Wazuh Dashboard: This is the primary web user interface for interacting with the Wazuh platform.2 Built on OpenSearch Dashboards (a fork of Kibana 11), it provides a flexible and intuitive way to visualize security data, analyze events, and manage the Wazuh deployment.2 The Dashboard comes with numerous out-of-the-box visualizations and dashboards tailored for security events, regulatory compliance (such as PCI DSS, GDPR, CIS, HIPAA, NIST 800-53), detected vulnerabilities, file integrity monitoring (FIM) data, configuration assessment results, and cloud infrastructure monitoring events.2 It is also used to manage Wazuh configurations and monitor the overall status of the system.12

  • Wazuh Agents: These are lightweight software components installed on the endpoints that require monitoring. Supported platforms include Linux, Windows, macOS, Solaris, AIX, and others, covering laptops, desktops, servers, cloud instances, containers, and virtual machines.2 Agents are responsible for collecting logs, security events, system inventory data, performing FIM, and detecting vulnerabilities. This data is then securely forwarded to the Wazuh Server for analysis.7 The agent ensures that all relevant security events are captured and transmitted securely, typically using an encrypted and authenticated channel.7

Data Flow: The typical data flow in a Wazuh environment begins with the Wazuh Agent on a monitored endpoint collecting local logs and security-relevant data. This data is securely transmitted to the Wazuh Server (Manager). The Manager’s analysis engine decodes and analyzes the data against its ruleset, generating alerts for suspicious activities or policy violations. These alerts, along with raw logs (if archiving is enabled), are then sent by Filebeat (part of the Manager) to the Wazuh Indexer. The Indexer indexes this data, making it searchable. Finally, the Wazuh Dashboard queries the Indexer to present the alerts and events in a human-readable format, allowing security analysts to visualize trends, investigate incidents, and manage the platform.7
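To verify this flow end to end after installation, you can ask the Indexer which Wazuh indices it holds and how many documents they contain (a quick sanity check; the admin credentials and Indexer IP below are placeholders):

Bash

 # List the wazuh-* indices (alerts, archives, monitoring, statistics) with document counts.
 curl -k -u admin:<ADMIN_PASSWORD> "https://<WAZUH_INDEXER_IP>:9200/_cat/indices/wazuh-*?v"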

Table: Wazuh Core Components and Their Roles

   
| Component Name | Primary Role | Key Responsibilities/Features |
| --- | --- | --- |
| Wazuh Agent | Endpoint Data Collection & Local Detection | Collects logs/events, FIM, Security Configuration Assessment (SCA), system inventory, vulnerability data, local command execution, container security monitoring. |
| Wazuh Server (Manager) | Centralized Analysis, Alerting & Agent Management | Analyzes data from agents/agentless sources, generates alerts based on rules/decoders, manages agent configurations, integrates threat intelligence. |
| Wazuh Indexer | Data Storage & Search/Analytics Engine | Stores and indexes log data and alerts (JSON format), enables fast full-text search and complex analytics, supports clustering for HA and scalability. |
| Wazuh Dashboard | Visualization, Management & Reporting UI | Visualizes security data and alerts, allows platform configuration management, provides UI for threat hunting, compliance reporting, and agent status. |

The architectural design of Wazuh presents a blend of centralized and distributed characteristics. The fundamental agent-server model provides centralized analysis and management, which simplifies oversight.2 However, for larger deployments, both the Wazuh Server and Wazuh Indexer components can be clustered, introducing distributed processing and storage. This distributed capability is crucial for handling high data volumes and ensuring high availability, though it does add a layer of complexity to the initial setup and maintenance.2 Organizations must therefore carefully consider their current and future scale when planning their Wazuh architecture.

A critical element often underestimated in its importance is the Wazuh Indexer. More than just a database, it functions as a high-performance search and analytics engine.8 Its ability to perform “almost instantaneous searches” through vast amounts of security data is fundamental to Wazuh’s effectiveness as a SIEM.8 Consequently, the health, proper sizing (CPU, RAM, disk), and configuration (shards, replicas) of the Indexer directly dictate the platform’s responsiveness and overall utility.9 Any performance degradation in the Indexer will inevitably lead to sluggish dashboards and delayed alert retrieval, significantly hampering security operations.
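A quick, hedged way to keep an eye on Indexer health (shard allocation, node count) is the standard OpenSearch cluster health API; the credentials and IP below are placeholders:

Bash

 # "green" means all primary and replica shards are allocated; "yellow" or "red" indicates degradation.
 curl -k -u admin:<ADMIN_PASSWORD> "https://<WAZUH_INDEXER_IP>:9200/_cluster/health?pretty"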

Furthermore, while Wazuh is predominantly an agent-based monitoring solution, offering deep visibility into endpoint activities, it also accommodates agentless monitoring.2 This is particularly useful for network devices like firewalls, routers, and switches, from which Wazuh can ingest logs via syslog. This flexibility allows organizations to extend their monitoring reach to a wider array of IT assets, although the depth of information gathered from agentless sources is typically less comprehensive than that from agent-equipped endpoints. A balanced approach, leveraging both agent-based and agentless methods, will provide the most holistic security view.

II. Why Choose Wazuh? Key Benefits and Use Cases

Wazuh has gained significant traction in the cybersecurity community due to a compelling set of advantages and a wide range of practical applications.

Advantages of Using Wazuh

  • Open-Source and Cost-Effectiveness: Being an open-source platform, Wazuh is free to download and use, which significantly reduces initial software licensing costs often associated with commercial SIEM solutions.2 This makes it an attractive option for organizations of all sizes, particularly those with budget constraints. However, it’s important to recognize that while the software is free, implementing and managing a Wazuh deployment internally requires specialized knowledge for setup, ongoing maintenance, and system upgrades. The “cost” effectively shifts from licensing fees to operational expenditure, including hardware, personnel training, and the time investment for tuning and upkeep.2
  • Comprehensive Threat Detection and Response: Wazuh offers a unified platform combining SIEM and XDR functionalities. This allows for robust log data analysis, intrusion detection, file integrity monitoring (FIM), vulnerability detection, malware detection, and even active response capabilities to mitigate threats in real-time.2
  • Scalability: The platform is architected for scalability. A standalone Wazuh manager, with adequate hardware, can support up to 10,000 endpoints.7 For larger environments, Wazuh managers and indexers can be deployed in clusters, enabling the system to monitor hundreds of thousands of endpoints effectively.7 This ensures that the security infrastructure can adapt to evolving business requirements and growing data volumes.
  • Regulatory Compliance: Wazuh is a powerful ally in meeting various regulatory compliance mandates. It provides features and out-of-the-box content (dashboards, rulesets) mapped to standards such as PCI DSS, HIPAA, GDPR, NIST 800-53, CIS benchmarks, and TSC.2 This significantly simplifies the process of demonstrating adherence to these complex requirements.
  • Flexibility and Customization: Its open architecture facilitates integration with a wide array of existing security tools and technologies, enhancing its capabilities within a broader security ecosystem.2 Moreover, users can create custom rules and decoders to tailor Wazuh’s detection logic to their specific environment and threat landscape.22
  • Real-time Alerting and Visualization: Wazuh provides immediate notifications for detected security incidents. Its dashboard offers intuitive visualizations of security data, enabling security teams to quickly identify anomalies, trends, and potential threats, thereby enhancing threat detection, investigation, and response efficiency.6

Common Use Cases

Wazuh’s versatility allows it to address a multitude of security challenges:

  • Endpoint Security: Provides robust endpoint protection through configuration assessment (ensuring systems adhere to security policies and hardening guides), malware detection (identifying malicious activities and IOCs), and file integrity monitoring (tracking changes to critical files and directories).3
  • Log Data Analysis: Acts as a centralized log management system, collecting, normalizing, parsing, correlating, and analyzing log data from diverse sources including operating systems, applications, and network devices. This provides comprehensive visibility for threat detection and forensic analysis.3
  • Vulnerability Detection: Wazuh agents collect software inventory data from monitored endpoints. The Wazuh server then correlates this inventory with continuously updated Common Vulnerabilities and Exposures (CVE) databases to identify known vulnerable software installations.3
  • Threat Hunting: Offers comprehensive visibility into monitored endpoints and infrastructure. Its log retention, indexing, and querying capabilities empower security teams to proactively investigate potential threats that may have bypassed initial security controls. Wazuh also maps detected events to tactics, techniques, and procedures (TTPs) in the MITRE ATT&CK framework, simplifying threat hunting investigations.3
  • Incident Response: Provides out-of-the-box active response capabilities to perform countermeasures against ongoing threats, such as blocking network access from a threat source. Users can also create custom active responses to tailor actions to specific incident types.3
  • Cloud Security Monitoring: Extends its monitoring capabilities to cloud environments. Wazuh can integrate with cloud platforms like Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP), as well as services like Microsoft 365 and GitHub, to monitor services, virtual machines, and activities occurring on these platforms.3
  • Container Security: Provides security visibility into Docker hosts and containers by monitoring their behavior and detecting threats, vulnerabilities, and anomalies. It achieves this through native integration with the Docker engine and Kubernetes API, allowing monitoring of images, volumes, network settings, and running containers.3
  • IT Hygiene: Helps organizations optimize asset visibility and maintain good IT hygiene by building an up-to-date system inventory of all monitored endpoints. This, combined with vulnerability detection, SCA, and malware detection, improves the overall security posture.3

The true strength of Wazuh is significantly amplified by its capacity for integration. It is designed to ingest data from a vast array of sources—operating systems, custom applications, network hardware, and various cloud platforms—and to interoperate with other security tools such as threat intelligence platforms, Intrusion Prevention Systems (IPS), and ticketing systems.2 This interconnectedness allows Wazuh to serve as a central hub for security data, providing a more holistic view than if it were used in isolation. Organizations should therefore plan to integrate Wazuh deeply within their existing security infrastructure to unlock its full potential.

A significant driver for Wazuh adoption, particularly in regulated industries, is its robust support for compliance. Features like File Integrity Monitoring (FIM), Security Configuration Assessment (SCA), vulnerability detection, and specialized compliance dashboards and rulesets directly address requirements from standards such as PCI DSS, HIPAA, and GDPR.2 This can be a compelling factor, potentially justifying the investment in expertise and operational effort required for an open-source solution, as it helps streamline complex auditing and reporting processes.

Beyond merely reacting to alerts, Wazuh empowers organizations to adopt a more proactive security stance. Capabilities such as threat hunting, which allows analysts to search for subtle signs of compromise, continuous vulnerability detection to identify weaknesses before they are exploited, and configuration assessment to ensure systems are hardened against attacks, enable security teams to actively reduce their attack surface and improve their overall defensive posture.18 This shift from reactive to proactive security is a key benefit of a well-implemented Wazuh deployment.

III. Installing the Wazuh Platform: Central Components

Deploying the Wazuh central components—Server (Manager), Indexer, and Dashboard—requires careful planning regarding operating system compatibility and hardware resources.

Operating System Support for Central Components

The Wazuh Server, Indexer, and Dashboard are designed to run on Linux-based systems. Specifically, they require a 64-bit Intel (x86_64/AMD64) or ARM (AARCH64/ARM64) processor architecture.25 The introduction of ARM architecture support in Wazuh version 4.12.0 expanded its deployment flexibility.4

Recommended Linux distributions include:

  • Amazon Linux 2, Amazon Linux 2023
  • CentOS 7, 8
  • Red Hat Enterprise Linux (RHEL) 7, 8, 9
  • Ubuntu 16.04, 18.04, 20.04, 22.04, 24.04.25

Hardware Requirements and Sizing

Hardware requirements for Wazuh are highly dependent on the scale of the deployment, primarily the number of monitored endpoints and the volume of events per second (EPS) they generate.7

  • Wazuh Server (Manager):

    • For small deployments (1-25 agents), a system with 4 vCPU and 8 GiB RAM is a good starting point.25
    • Minimum recommendations often cite 2 vCPU and 2GB RAM, but 4GB RAM and 8 CPU cores are generally preferred for better performance.19
    • As the agent count increases, so do the resource needs. For instance, managing 1000 agents on a single server might demand 8 vCPUs and 16GB RAM, or potentially up to 32 CPU cores and 64GB RAM for environments with high EPS rates to ensure smooth operation.19
    • A robust standalone manager can handle up to 10,000 agents if adequately resourced.7
  • Wazuh Indexer:

    • The Java Virtual Machine (JVM) heap size is a critical parameter for the Indexer, typically recommended to be set to half of the total system RAM.16 (A sample heap setting is sketched after this list.)
    • Storage capacity is determined by the EPS rate and the desired data retention period. As a general estimate, 250 servers might generate close to 500GB of data over a 90-day retention period.7 Solid-State Drives (SSDs) are highly recommended for optimal performance.
  • Wazuh Dashboard:

    • Minimum: 2 CPU cores, 4GB RAM.
    • Recommended: 4 CPU cores, 8GB RAM.26
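    As referenced above, a minimal sketch of pinning the Indexer heap, assuming a host with 16 GiB of RAM and the default /etc/wazuh-indexer/jvm.options layout (both assumptions; adjust the value to roughly half of your system RAM):

      Bash

       # Set the minimum and maximum JVM heap to 8 GiB, then restart the Indexer to apply it.
       sudo sed -i 's/^-Xms.*/-Xms8g/; s/^-Xmx.*/-Xmx8g/' /etc/wazuh-indexer/jvm.options
       sudo systemctl restart wazuh-indexer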

Table: Recommended Hardware for Wazuh All-in-One Deployment (Single Host)

This table provides a quick reference for initial hardware planning for smaller environments or testing purposes, based on an all-in-one deployment where the Server, Indexer, and Dashboard reside on the same host.25

    
| Agents | CPU (vCPU) | RAM (GiB) | Storage (90 days) |
| --- | --- | --- | --- |
| 1–25 | 4 | 8 | 50 GB |
| 25–50 | 8 | 8 | 100 GB |
| 50–100 | 8 | 8 | 200 GB |

Table: Recommended Hardware for Distributed Wazuh Server (Manager) Nodes

For larger deployments, distributing components is necessary. This table guides resource allocation for the Wazuh Server (Manager) component as endpoint counts grow significantly, underscoring the need for clustering in such scenarios.7

     
| Endpoints | CPU (Cores) | RAM (GB) | Storage (SSD) | Clustered? |
| --- | --- | --- | --- | --- |
| 1,000 | 8 | 16-32 GB | 500 GB | No/Optional |
| 10,000 | 16 | 64 GB | 1 TB | Optional/Yes |
| 50,000+ | 32+ | 128+ GB | 2 TB+ | Yes (Recommended) |

A key consideration for scaling is that Wazuh managers tend to scale more effectively horizontally (by adding more manager nodes to a cluster) rather than vertically (by continually increasing the resources of a single manager node).19 For substantial environments, planning a clustered architecture from the beginning is often more efficient and resilient than attempting to upgrade a single, monolithic server repeatedly.

Installation Approaches

Wazuh offers two primary methods for installing its central components:

  • Assisted Installation: This method utilizes the wazuh-install.sh script to perform an all-in-one deployment, installing the Wazuh Server, Indexer, and Dashboard on a single host. It is the quickest way to get a Wazuh environment up and running, ideal for testing or small deployments.25 The command typically looks like:

    curl -sO https://packages.wazuh.com/4.12/wazuh-install.sh && sudo bash ./wazuh-install.sh -a

    (Note: The version in the URL, here 4.12, should match the desired Wazuh version).25

  • Step-by-Step Installation: This approach involves installing each central component (Indexer, Server, Dashboard) individually. It offers greater flexibility, allowing for distributed deployments where components reside on separate servers, and provides more control over customization. This is the recommended method for production environments.29 The general workflow is to install the Wazuh Indexer first, followed by the Wazuh Server, and then the Wazuh Dashboard.26

A. Step-by-Step Installation on Linux (Debian/Ubuntu Focus, applicable to Kali Linux)

The following instructions detail the step-by-step installation process for a single-node setup on Debian/Ubuntu systems. Since Kali Linux is Debian-based, these commands are generally applicable. Root privileges are required for these operations.

Prerequisites:

  • Root user privileges.
  • Ensure the system meets the hardware requirements outlined previously.
  • All nodes (Indexer, Server, Dashboard) must be able to communicate with each other over the network.

1. Installing the Wazuh Indexer (Single Node Example)

The Wazuh Indexer is the foundation of the data layer. SSL certificates are crucial for securing communication between all Wazuh components.

  • Certificates Creation:

    1. Download the certificate generation tool and configuration template for Wazuh 4.12 30:

      Bash

       curl -sO https://packages.wazuh.com/4.12/wazuh-certs-tool.sh
       curl -sO https://packages.wazuh.com/4.12/config.yml
      
    2. Edit the config.yml file. This file defines the nodes in your Wazuh cluster. Replace the placeholder names and IP addresses with the actual values for your Wazuh Indexer, Server, and Dashboard nodes. For a single-node setup of each component, you’ll define one of each.30 Example config.yml for a simple setup:

      YAML

       nodes:
         indexer:
           - name: node-1 # Or your chosen indexer node name
             ip: <YOUR_INDEXER_NODE_IP>
         server:
           - name: wazuh-1 # Or your chosen server node name
             ip: <YOUR_SERVER_NODE_IP>
         dashboard:
           - name: dashboard-1 # Or your chosen dashboard node name
             ip: <YOUR_DASHBOARD_NODE_IP>
      
    3. Generate the certificates by running the script:

      Bash

        bash ./wazuh-certs-tool.sh -A
      

      This will create a directory named wazuh-certificates containing the necessary SSL certificates and keys for each defined node.30

    4. Compress the generated certificates into a tarball:

      Bash

        tar -cvf ./wazuh-certificates.tar -C ./wazuh-certificates/ .
        rm -rf ./wazuh-certificates
      

      This wazuh-certificates.tar file will need to be copied to each server that will host a Wazuh component, for example with scp as shown below.30
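      For example, the tarball can be copied with scp (host addresses are placeholders):

      Bash

       # Copy the certificate bundle to every host that will run a Wazuh component.
       scp ./wazuh-certificates.tar root@<INDEXER_NODE_IP>:~/
       scp ./wazuh-certificates.tar root@<SERVER_NODE_IP>:~/
       scp ./wazuh-certificates.tar root@<DASHBOARD_NODE_IP>:~/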

  • Node Installation (Debian/Ubuntu):

    1. Install prerequisite packages:

      Bash

       sudo apt-get update
       sudo apt-get install -y gnupg apt-transport-https
      

      30

    2. Add the Wazuh GPG key and repository (using version 4.x, which covers 4.12):

      Bash

       curl -s https://packages.wazuh.com/key/GPG-KEY-WAZUH | gpg --no-default-keyring --keyring gnupg-ring:/usr/share/keyrings/wazuh.gpg --import && chmod 644 /usr/share/keyrings/wazuh.gpg
       echo "deb [signed-by=/usr/share/keyrings/wazuh.gpg] https://packages.wazuh.com/4.x/apt/ stable main" | sudo tee -a /etc/apt/sources.list.d/wazuh.list
      

      30

    3. Update the package list again:

      Bash

       sudo apt-get update
      

      30

    4. Install the Wazuh Indexer package:

      Bash

       sudo apt-get -y install wazuh-indexer
      

      30

    5. Configure the Wazuh Indexer by editing /etc/wazuh-indexer/opensearch.yml. Key settings to adjust include 30:
      • network.host: Set to the IP address of your Indexer node (e.g., 0.0.0.0 to bind to all interfaces, or the specific IP used in config.yml).
      • node.name: The name of this Indexer node (e.g., node-1, matching config.yml).
      • cluster.initial_master_nodes: For a single-node Indexer cluster, this should be the node.name (e.g., ["node-1"]). For multi-node, list all master-eligible nodes.
      • discovery.seed_hosts: For multi-node clusters, list the IP addresses of other master-eligible nodes. For single-node, this can often be commented out or set to the local node.
      • plugins.security.nodes_dn: Update with the Distinguished Names (DNs) from the generated certificates for all Indexer nodes.
    6. Deploy the certificates. Copy wazuh-certificates.tar to the Indexer node. Then, extract and place the relevant certificates into /etc/wazuh-indexer/certs/. The specific files needed are root-ca.pem, admin.pem, admin-key.pem, and the node-specific certificate (e.g., node-1.pem) and key (e.g., node-1-key.pem), which should be renamed to indexer.pem and indexer-key.pem respectively.30

      Bash

       # Example commands, adjust NODE_NAME
       NODE_NAME="node-1" # Or your indexer node name from config.yml
       sudo mkdir /etc/wazuh-indexer/certs
        sudo tar -xf ./wazuh-certificates.tar -C /etc/wazuh-indexer/certs/ ./$NODE_NAME.pem ./$NODE_NAME-key.pem ./admin.pem ./admin-key.pem ./root-ca.pem
       sudo mv /etc/wazuh-indexer/certs/$NODE_NAME.pem /etc/wazuh-indexer/certs/indexer.pem
       sudo mv /etc/wazuh-indexer/certs/$NODE_NAME-key.pem /etc/wazuh-indexer/certs/indexer-key.pem
       sudo chmod 500 /etc/wazuh-indexer/certs
       sudo chmod 400 /etc/wazuh-indexer/certs/*
       sudo chown -R wazuh-indexer:wazuh-indexer /etc/wazuh-indexer/certs
      
    7. Enable and start the Wazuh Indexer service:

      Bash

       sudo systemctl daemon-reload
       sudo systemctl enable wazuh-indexer
       sudo systemctl start wazuh-indexer
      

      30

  • Cluster Initialization (Run on one Indexer node ONLY if setting up a cluster, or on the single node):

    1. Initialize the security plugin for the cluster:

      Bash

       sudo /usr/share/wazuh-indexer/bin/indexer-security-init.sh
      

      This step is crucial for loading the security configurations and certificates.10

    2. Test the Indexer installation:

      Bash

       curl -k -u admin:admin https://<YOUR_WAZUH_INDEXER_IP>:9200
      

      You should receive a JSON response with cluster information.30 The default password for admin after indexer-security-init.sh might be admin, but it is strongly recommended to change it immediately; one way to do so is sketched below. If using the assisted installation script, passwords are often generated and stored in wazuh-passwords.txt.25
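      One way to rotate these credentials is the Wazuh passwords tool; this sketch assumes the 4.12 package URL and the -u/-p options described in the Wazuh password-management documentation:

      Bash

       # Download the passwords tool and change the Indexer admin password.
       curl -sO https://packages.wazuh.com/4.12/wazuh-passwords-tool.sh
       sudo bash wazuh-passwords-tool.sh -u admin -p <NEW_STRONG_PASSWORD>

      Remember to update Filebeat (keystore) and the Dashboard with the new password afterwards.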

  • Alternative Indexer Installation using wazuh-install.sh (Simplified for single or multiple nodes) 31:

    1. Download wazuh-install.sh and config.yml (use the 4.12 package URLs to match the version covered in this guide).
    2. Edit config.yml with all node IPs (Indexer, Server, Dashboard).
    3. Generate configuration files and certificates: sudo bash wazuh-install.sh --generate-config-files. This creates wazuh-install-files.tar.
    4. Copy wazuh-install-files.tar to all relevant server nodes.
    5. On each Indexer node: sudo bash wazuh-install.sh --wazuh-indexer <node-name-from-config.yml>.
    6. On one Indexer node: sudo bash wazuh-install.sh --start-cluster.

2. Installing the Wazuh Server (Manager) (Single Node Example)

The Wazuh Server analyzes data and manages agents. It requires Filebeat to send its alerts and events to the Indexer.

  • Adding Wazuh Repository (if on a new machine or not done previously for Debian/Ubuntu):

    Follow steps 1-3 from the Indexer installation for adding the GPG key and repository.29

  • Installing Wazuh Manager and Filebeat:

    Bash

      sudo apt-get -y install wazuh-manager filebeat
    

    29

  • Configuring Filebeat (/etc/filebeat/filebeat.yml):

    1. Ensure Filebeat is configured to send data to your Wazuh Indexer. Modify output.elasticsearch.hosts to point to your Indexer’s IP and port (default 9200) 29:

      YAML

        output.elasticsearch:
          hosts: ["<YOUR_INDEXER_IP>:9200"]
      
    2. If your Indexer uses SSL (which it should, given the certificate generation):

      YAML

       protocol: "https"
       ssl.certificate_authorities: ["/etc/filebeat/certs/root-ca.pem"]
       ssl.certificate: "/etc/filebeat/certs/filebeat.pem" # Certificate for Filebeat client
       ssl.key: "/etc/filebeat/certs/filebeat-key.pem"   # Private key for Filebeat client
      

      (The server node’s certificate and key, e.g., wazuh-1.pem and wazuh-1-key.pem from the initial wazuh-certs-tool.sh run, must be extracted from wazuh-certificates.tar into /etc/filebeat/certs/ and renamed to filebeat.pem and filebeat-key.pem, together with root-ca.pem; a deployment sketch follows at the end of this Filebeat subsection.)

    3. Configure authentication credentials for Filebeat to connect to the Indexer. It’s recommended to use the Filebeat keystore for passwords 29:

      Bash

       sudo filebeat keystore create
       # Assuming default user 'admin' and password 'admin' for indexer, change as appropriate
       echo 'admin' | sudo filebeat keystore add username --stdin --force
       echo '<YOUR_INDEXER_ADMIN_PASSWORD>' | sudo filebeat keystore add password --stdin --force
      

      Then, in filebeat.yml:

      YAML

       username: "${username}"
       password: "${password}"
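
    As noted earlier, a minimal sketch of deploying the server node’s certificates for Filebeat, mirroring the Indexer certificate steps (the node name wazuh-1 is an assumption taken from the earlier config.yml example):

      Bash

       NODE_NAME="wazuh-1" # Your server node name from config.yml
       sudo mkdir /etc/filebeat/certs
       sudo tar -xf ./wazuh-certificates.tar -C /etc/filebeat/certs/ ./$NODE_NAME.pem ./$NODE_NAME-key.pem ./root-ca.pem
       sudo mv /etc/filebeat/certs/$NODE_NAME.pem /etc/filebeat/certs/filebeat.pem
       sudo mv /etc/filebeat/certs/$NODE_NAME-key.pem /etc/filebeat/certs/filebeat-key.pem
       sudo chmod 500 /etc/filebeat/certs
       sudo chmod 400 /etc/filebeat/certs/*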
      
  • Configuring Wazuh Manager (/var/ossec/etc/ossec.conf):

    1. Configure the <indexer> block to specify the Wazuh Indexer node(s) 29:

      XML

       <ossec_config>
         ...
         <indexer>
           <node>
             <address>YOUR_INDEXER_IP</address>
           </node>
         </indexer>
         ...
       </ossec_config>
      
  • Starting Services:

    Bash

      sudo systemctl daemon-reload
      sudo systemctl enable wazuh-manager
      sudo systemctl start wazuh-manager
      sudo systemctl enable filebeat
      sudo systemctl start filebeat
    

    29

    Test Filebeat connection:

    Bash

      sudo filebeat test output
    

    This should show a successful connection to Elasticsearch (Wazuh Indexer).32

  • Alternative Server Installation using wazuh-install.sh 33:

    1. Ensure wazuh-install-files.tar (generated earlier) is present on the server node.
    2. Run: sudo bash wazuh-install.sh --wazuh-server <wazuh-1> (replace <wazuh-1> with your server node name from config.yml).

3. Installing the Wazuh Dashboard

The Dashboard provides the web interface.

  • Adding Wazuh Repository (if on a new machine or not done previously for Debian/Ubuntu):

    Follow steps 1-3 from the Indexer installation.

  • Installing Package Dependencies:

    Bash

      sudo apt-get -y install debhelper tar curl libcap2-bin
    

    34 (Ensure debhelper is version 9 or later).

  • Installing Wazuh Dashboard:

    Bash

      sudo apt-get -y install wazuh-dashboard
    

    34

  • Configuring Wazuh Dashboard (/etc/wazuh-dashboard/opensearch_dashboards.yml):

    1. server.host: Set to 0.0.0.0 to listen on all interfaces, or a specific IP address for the dashboard to bind to.34
    2. opensearch.hosts: Point to your Wazuh Indexer URL(s) 34:

      YAML

        opensearch.hosts: ["https://<YOUR_INDEXER_IP>:9200"]
      
    3. If using SSL for the Indexer (recommended):

      YAML

       opensearch.ssl.verificationMode: certificate # or full or none
       opensearch.ssl.certificateAuthorities: [ "/etc/wazuh-dashboard/certs/root-ca.pem" ]
      
  • Deploying Certificates: Copy wazuh-certificates.tar to the Dashboard node. Extract root-ca.pem, and the dashboard node’s certificate (e.g., dashboard-1.pem) and key (e.g., dashboard-1-key.pem) into /etc/wazuh-dashboard/certs/. Rename the node certificate to dashboard.pem and key to dashboard-key.pem.34

    Bash

      # Example commands, adjust NODE_NAME
      NODE_NAME="dashboard-1" # Or your dashboard node name from config.yml
      sudo mkdir /etc/wazuh-dashboard/certs
      sudo tar -xf ./wazuh-certificates.tar -C /etc/wazuh-dashboard/certs/ ./$NODE_NAME.pem ./$NODE_NAME-key.pem ./root-ca.pem
      sudo mv /etc/wazuh-dashboard/certs/$NODE_NAME.pem /etc/wazuh-dashboard/certs/dashboard.pem
      sudo mv /etc/wazuh-dashboard/certs/$NODE_NAME-key.pem /etc/wazuh-dashboard/certs/dashboard-key.pem
      sudo chmod 500 /etc/wazuh-dashboard/certs
      sudo chmod 400 /etc/wazuh-dashboard/certs/*
      sudo chown -R wazuh-dashboard:wazuh-dashboard /etc/wazuh-dashboard/certs
    
  • Configure Wazuh API Connection in Dashboard: Edit /usr/share/wazuh-dashboard/data/wazuh/config/wazuh.yml and set the url to your Wazuh Server (Manager) IP address 34:

    YAML

      hosts:
        - default:
            url: https://<WAZUH_SERVER_IP_ADDRESS>
            port: 55000 # Default Wazuh API port
            # username: wazuh-wui # Default, ensure this user exists in Wazuh API
            # password: <WAZUH_WUI_PASSWORD> # Default, ensure this user exists
    
  • Starting Service:

    Bash

      sudo systemctl daemon-reload
      sudo systemctl enable wazuh-dashboard
      sudo systemctl start wazuh-dashboard
    

    34

  • Access the dashboard via https://<WAZUH_DASHBOARD_IP>. Default credentials are often admin for username and password, or check wazuh-passwords.txt if generated by the assisted script.11 A quick service check is sketched at the end of this list.

  • Alternative Dashboard Installation using wazuh-install.sh 11:

    1. Ensure wazuh-install-files.tar is present.
    2. Run: sudo bash wazuh-install.sh --wazuh-dashboard <dashboard-node-name-from-config.yml>.
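
  A quick way to confirm the Dashboard is up before logging in (standard service and HTTPS checks; the IP is a placeholder):

    Bash

     # The service should report active (running) and HTTPS should answer on the default port 443.
     sudo systemctl status wazuh-dashboard --no-pager
     curl -k -I https://<WAZUH_DASHBOARD_IP>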

B. Wazuh Server on Windows: Is it Possible?

It is important to clarify that the Wazuh central components—namely the Wazuh Server (Manager), Wazuh Indexer, and Wazuh Dashboard—are not officially supported for installation on Windows operating systems.25 These core components are designed and built to run in a Linux environment. Windows is, however, a fully supported operating system for deploying Wazuh agents.

The foundational role of SSL certificates in a Wazuh deployment cannot be overstated. The step-by-step installation process meticulously details their creation and deployment, underscoring their necessity for secure, encrypted communication between the Indexer, Server, Dashboard, and Filebeat components.10 Any misconfiguration or error in certificate management will likely result in components being unable to communicate, effectively crippling the SIEM’s functionality. Therefore, a thorough understanding of the config.yml file used for certificate generation and precise execution of deployment steps are paramount, especially in distributed environments.

When choosing an installation method, a trade-off exists between speed and control. The assisted installation script (wazuh-install.sh -a) offers a rapid deployment for all-in-one setups, suitable for learning environments or small proof-of-concepts.25 However, for production environments, particularly those requiring scalability and customization, the step-by-step installation method is preferred. It provides the granular control needed to configure each component individually, potentially on separate hosts, and to fully understand the interplay between them.26

Within the Wazuh ecosystem, Filebeat plays a critical, often overlooked, role when performing a step-by-step installation. It is not merely an auxiliary tool but the essential link responsible for shipping alerts and events from the Wazuh Manager to the Wazuh Indexer.2 If Filebeat is not correctly installed, configured (with the correct Indexer address, credentials, and certificates), or running on the Wazuh Server node(s), no data will reach the Indexer, rendering the analysis and visualization components useless.
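Two standard Filebeat subcommands on the Wazuh Server node help rule this failure mode out quickly:

Bash

 # Validate the Filebeat configuration syntax, then test connectivity to the Wazuh Indexer.
 sudo filebeat test config
 sudo filebeat test output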

IV. Wazuh Agents: Your Eyes on the Endpoints

Wazuh agents are the primary data collectors in the Wazuh ecosystem, providing the raw information necessary for security monitoring and threat detection.

A. What are Wazuh Agents?

Wazuh agents are lightweight, multi-platform software components designed to be deployed on the endpoints you wish to monitor. This includes a wide range of systems such as laptops, desktops, servers, cloud instances, virtual machines, and even containers.14

Detailed Role and Data Collection:

Agents are responsible for a diverse array of data collection and local security tasks through their various modules 15:

  • Log Collection (Logcollector module): Gathers log messages from the operating system (e.g., /var/log/secure, Windows Event Logs) and various applications.15
  • File Integrity Monitoring (FIM): Detects changes to specified files and directories, including modifications to content, permissions, ownership, and attributes. It can identify when files are created or deleted.15
  • System Inventory (Syscollector module): Periodically collects information about the endpoint, such as OS version, running processes, installed applications, network interfaces, and open ports.2 This data is crucial for vulnerability detection and asset management.
  • Security Configuration Assessment (SCA): Scans the endpoint’s configuration against predefined policies, often based on benchmarks like those from the Center for Internet Security (CIS), to identify misconfigurations and security weaknesses. Custom policies can also be created.2
  • Malware Detection: Employs non-signature-based techniques to detect anomalies, potential rootkits, hidden processes, and suspicious system calls.15
  • Vulnerability Detection: The collected software inventory is sent to the Wazuh server, which correlates it with CVE databases to identify known vulnerabilities affecting the endpoint.18
  • Command Execution: Agents can be configured to run specific commands periodically and send their output to the server for analysis (e.g., checking disk space).15 A sample configuration is shown after this list.
  • Container Security Monitoring: Integrates with Docker to monitor containerized environments for changes to images, volumes, network settings, and detects risky configurations like privileged containers.15
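
  As referenced in the Command Execution item above, a minimal ossec.conf sketch for periodic command monitoring (the command and interval are illustrative, not a recommendation):

    XML

     <!-- Run df -P every 2 minutes and forward each output line to the manager for rule matching. -->
     <localfile>
       <log_format>command</log_format>
       <command>df -P</command>
       <frequency>120</frequency>
     </localfile>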

Beyond data collection, agents also provide local threat prevention, detection, and response capabilities on the endpoint itself.14

B. The Indispensable Role of Wazuh Agents

Wazuh agents are not merely optional add-ons; they are fundamental to the Wazuh security model for several reasons:

  • Endpoint Visibility: Agents provide deep, granular visibility into the activities occurring directly on endpoints. This is essential for detecting threats that might not be visible at the network level, such as local privilege escalation, malware execution, or unauthorized file modifications.15 Without agents, Wazuh’s ability to understand and respond to endpoint-specific threats would be severely limited.
  • Primary Data Source for Analysis: The rich telemetry collected by agents serves as the primary input for the Wazuh server’s analysis engine. This data is what the server decodes and correlates against rules to identify threats, anomalies, and policy violations.7
  • Agent-Server Communication:
    • Agents establish a connection with the Wazuh server to transmit collected data and to receive configuration updates or commands.15
    • This communication occurs over a secure channel, typically using TCP port 1514 by default, though UDP is also supported. The connection is encrypted and authenticated to ensure data integrity and confidentiality.15
    • Enrollment Process: A critical step before an agent can communicate with the server is enrollment. During enrollment, the agent is registered with the Wazuh manager, and a unique pre-shared key is generated and assigned to the agent. This key is used for authenticating the agent to the server and encrypting the communication between them.8
    • For environments with multiple Wazuh server nodes (a cluster), agents can be configured to connect to these nodes. This can be done by listing multiple server addresses in the agent’s configuration (failover mode) or, more robustly, by pointing agents to a load balancer that distributes connections across the available server nodes. Using a load balancer is the recommended approach for larger deployments as it improves load distribution and high availability.40 (A failover configuration sketch follows this list.)
    • The connection status of an agent (e.g., Active, Disconnected, Pending, Never connected) can be monitored through various means: the Wazuh dashboard, the agent_control utility on the server, the Wazuh API, or by inspecting the wazuh-agentd.state file on the agent itself.39
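
    As referenced above, a minimal agent-side sketch of listing multiple manager nodes for failover (the hostnames are placeholders; this goes in the agent's ossec.conf):

      XML

       <!-- The agent tries the servers in order and fails over if the first becomes unreachable. -->
       <client>
         <server>
           <address>wazuh-server-1.example.local</address>
           <port>1514</port>
           <protocol>tcp</protocol>
         </server>
         <server>
           <address>wazuh-server-2.example.local</address>
           <port>1514</port>
           <protocol>tcp</protocol>
         </server>
       </client>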

The modular design of the Wazuh agent, encompassing Logcollector, FIM, SCA, Syscollector, and other specialized modules 15, is what underpins its ability to ingest such a diverse range of security-relevant data. This inherent diversity at the agent level is key to Wazuh’s comprehensive monitoring capabilities. Consequently, users must understand and thoughtfully configure these agent modules, often through the local ossec.conf file or the centralized agent.conf on the manager, to tailor data collection precisely to their security requirements and to avoid the overhead of collecting unnecessary or low-value data.

The integrity and confidentiality of the data flowing from endpoints to the server hinge on the security of the agent enrollment process and the subsequent communication channel.8 Ensuring a secure enrollment, for example, by using password authentication or manager/agent identity verification options 36, and diligently protecting the generated agent keys are critical operational security measures. A compromised agent communication channel could lead to data tampering, unauthorized access, or the injection of false information, undermining the reliability of the entire SIEM.

C. Deploying Wazuh Agents: Step-by-Step Installation

Deploying Wazuh agents across various operating systems can be streamlined using deployment variables, especially WAZUH_MANAGER, which specifies the IP address or hostname of the Wazuh server the agent should register with.41 Other useful variables include WAZUH_AGENT_NAME and WAZUH_AGENT_GROUP for better organization.41
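As a hedged illustration, all three variables can be combined in a single installation command (values are placeholders; Debian/Ubuntu shown):

Bash

 # Register the agent with a name and group at install time instead of configuring it afterwards.
 sudo WAZUH_MANAGER="<MANAGER_IP>" WAZUH_AGENT_NAME="web-01" WAZUH_AGENT_GROUP="linux,webservers" apt-get install -y wazuh-agent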

1. Linux Agent Installation

  • Debian/Ubuntu & Kali Linux (using APT):

    1. Import the Wazuh GPG key:

      Bash

       curl -s https://packages.wazuh.com/key/GPG-KEY-WAZUH | gpg --no-default-keyring --keyring gnupg-ring:/usr/share/keyrings/wazuh.gpg --import && chmod 644 /usr/share/keyrings/wazuh.gpg
      

      For older systems (e.g., Debian 8, Ubuntu 14.04), an alternative key import method might be needed: curl -s https://packages.wazuh.com/key/GPG-KEY-WAZUH | apt-key add -.41

    2. Add the Wazuh repository (ensure the URL reflects the desired Wazuh major version, e.g., 4.x for version 4.12):

      Bash

       echo "deb [signed-by=/usr/share/keyrings/wazuh.gpg] https://packages.wazuh.com/4.x/apt/ stable main" | tee -a /etc/apt/sources.list.d/wazuh.list
      

    .41

    3. Update the package list:

      Bash

       sudo apt-get update

    .41

    4. Install the Wazuh agent, providing the manager’s IP address:

      Bash

       sudo WAZUH_MANAGER="<MANAGER_IP_OR_HOSTNAME>" apt-get install -y wazuh-agent

    .25

    5. Enable and start the Wazuh agent service:

      Bash

       sudo systemctl daemon-reload
       sudo systemctl enable wazuh-agent
       sudo systemctl start wazuh-agent

    .41

  • RPM-based (CentOS/RHEL/Fedora - using YUM/DNF):

    1. Import the Wazuh GPG key:

      Bash

       sudo rpm --import https://packages.wazuh.com/key/GPG-KEY-WAZUH
      

    .41

    2. Add the Wazuh repository. For RHEL 8 and earlier 41:

      Bash

       sudo tee /etc/yum.repos.d/wazuh.repo << EOF
       [wazuh]
       gpgcheck=1
       gpgkey=https://packages.wazuh.com/key/GPG-KEY-WAZUH
       enabled=1
       name=EL-\$releasever - Wazuh
       baseurl=https://packages.wazuh.com/4.x/yum/
       protect=1
       EOF

      For RHEL 9 and later, priority=1 might be used instead of protect=1.41

    3. Install the Wazuh agent:

      Bash

       sudo WAZUH_MANAGER="<MANAGER_IP_OR_HOSTNAME>" yum install -y wazuh-agent

      (Use dnf if yum is not available, e.g., on modern Fedora.)41 General guidance often points to the main Linux agent installation page.46

    4. Enable and start the Wazuh agent service:

      Bash

       sudo systemctl daemon-reload
       sudo systemctl enable wazuh-agent
       sudo systemctl start wazuh-agent

    .41

2. Windows Agent Installation

  1. Download the Wazuh agent MSI installer (e.g., wazuh-agent-4.12.0-1.msi) from the official Wazuh packages repository.42
  2. Using Command Line (Recommended for automation):

    • Open Command Prompt (CMD) as Administrator:

      DOS

        wazuh-agent-4.12.0-1.msi /q WAZUH_MANAGER="<MANAGER_IP_OR_HOSTNAME>"
      

    .42

    • Or, open PowerShell as Administrator:

      PowerShell

       .\wazuh-agent-4.12.0-1.msi /q WAZUH_MANAGER="<MANAGER_IP_OR_HOSTNAME>"

    .42 The /q flag enables quiet (unattended) installation.

  3. Using Graphical User Interface (GUI):
    • Double-click the downloaded .msi file and follow the installation wizard prompts. You will be asked to enter the Wazuh Manager’s IP address or hostname during the setup.42
  4. Start the Wazuh agent service:
    • From CMD or PowerShell as Administrator:

      DOS

        NET START Wazuh
      
    • Or, open the Services application (services.msc), find the “Wazuh” service, and click “Start”.42

  5. The default installation path for the Wazuh agent on Windows is C:\Program Files (x86)\ossec-agent.42 An alternative method for deployment involves using the PowerShell command generated from the Wazuh Dashboard under “Deploy new Agent”.49

3. macOS Agent Installation

  1. Download the appropriate Wazuh agent .pkg installer for your macOS architecture (Intel or Apple Silicon) for the desired Wazuh version (e.g., 4.12.0).43
    • Intel: wazuh-agent-4.12.0-1.intel64.pkg
    • Apple Silicon: wazuh-agent-4.12.0-1.arm64.pkg
  2. Using Command Line (Recommended):

    • Open Terminal.
    • For Intel-based Macs:

      Bash

        echo "WAZUH_MANAGER='<MANAGER_IP_OR_HOSTNAME>'" > /tmp/wazuh_envs && sudo installer -pkg wazuh-agent-4.12.0-1.intel64.pkg -target /
      

    .43

    • For Apple Silicon-based Macs:

      Bash

        echo "WAZUH_MANAGER='<MANAGER_IP_OR_HOSTNAME>'" > /tmp/wazuh_envs && sudo installer -pkg wazuh-agent-4.12.0-1.arm64.pkg -target /
      

    .43

  3. Using Graphical User Interface (GUI):
    • Double-click the downloaded .pkg file and follow the installation wizard.43 You will likely need to configure the manager address post-installation if not prompted.
  4. Start the Wazuh agent service:

    Bash

     sudo launchctl bootstrap system /Library/LaunchDaemons/com.wazuh.agent.plist
    

.24

  5. The default installation path for the Wazuh agent on macOS is /Library/Ossec/.43

Table: Wazuh Agent Installation Commands Summary

   
| Operating System Family | Package Manager/Type | Key Installation Command(s) with WAZUH_MANAGER variable |
| --- | --- | --- |
| Debian/Ubuntu/Kali Linux | APT | WAZUH_MANAGER="<IP_OR_HOSTNAME>" apt-get install -y wazuh-agent |
| RPM-based (RHEL/CentOS/Fedora) | YUM/DNF | WAZUH_MANAGER="<IP_OR_HOSTNAME>" yum install -y wazuh-agent (or dnf) |
| Windows | MSI | wazuh-agent-<VERSION>.msi /q WAZUH_MANAGER="<IP_OR_HOSTNAME>" (CMD) |
| macOS | PKG | echo "WAZUH_MANAGER='<IP_OR_HOSTNAME>'" > /tmp/wazuh_envs && sudo installer -pkg wazuh-agent-<ARCH>.pkg -target / |

For any large-scale deployment, leveraging deployment variables like WAZUH_MANAGER directly within installation commands is paramount for efficiency, automation, and consistency.41 This approach is highly recommended when using scripting or configuration management tools such as Ansible or Puppet for agent rollouts.29 Manually installing and configuring agents on hundreds or thousands of endpoints is impractical and error-prone.
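As a minimal, hypothetical sketch of such automation (it assumes a hosts.txt file with one hostname per line, passwordless SSH/sudo, and that the Wazuh APT repository is already configured on each host; all of these are assumptions):

Bash

 #!/bin/bash
 # Push the Wazuh agent to a list of Debian/Ubuntu hosts over SSH.
 MANAGER_IP="10.0.0.10"   # replace with your Wazuh server address
 while read -r host; do
   echo "Deploying Wazuh agent to ${host}..."
   ssh "${host}" "sudo WAZUH_MANAGER='${MANAGER_IP}' WAZUH_AGENT_NAME='${host}' apt-get install -y wazuh-agent && \
     sudo systemctl daemon-reload && sudo systemctl enable --now wazuh-agent"
 done < hosts.txt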

V. Configuring Wazuh for Advanced Threat Detection

Once the Wazuh platform and agents are installed, the next critical phase is configuring them to collect relevant logs and detect specific threats like Nmap scans and DDoS attacks. This often involves customizing decoders and rules.

A. Mastering Log Data Collection

Effective security monitoring begins with comprehensive log collection. Wazuh’s Logcollector module, present in its agents, is central to this process.50

  • Logcollector Module Overview:

    The Logcollector module is responsible for reading logs from various sources on monitored endpoints, including OS logs, application logs, and even the output of specific commands. It supports multiple log formats such as plain text (syslog-like), Windows Event Log, Windows Event Channel (which provides richer, often JSON-formatted events), native JSON logs, and multi-line logs common in applications like Java stack traces.15 Configuration of what Logcollector monitors is typically done in the ossec.conf file on the agent, or it can be managed centrally from the Wazuh server using agent.conf files pushed to agent groups.51 Key configuration options within the <localfile> block include <location> (path to the log file or event channel name), <log_format> (specifying the log type), <command> (to execute a command and capture its output), <query> (for filtering Windows Event Channel events using XPath), and options for handling logs that span multiple lines.51 An example <localfile> block is shown below.
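    For example, a <localfile> block that collects only failed Windows logons (Event ID 4625) from the Security channel, following the eventchannel/query pattern (the event ID filter is illustrative):

      XML

       <localfile>
         <location>Security</location>
         <log_format>eventchannel</log_format>
         <query>Event/System[EventID=4625]</query>
       </localfile>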

  • Receiving Syslog from Network Devices:

    For devices where a Wazuh agent cannot be installed (e.g., firewalls, routers, switches), Wazuh can act as a syslog receiver. The Wazuh server can be configured to listen for syslog messages on a specified UDP or TCP port (commonly port 514).50 This is configured in the Wazuh server’s /var/ossec/etc/ossec.conf file within a <remote> block:

    XML

      <ossec_config>
       ...
        <remote>
          <connection>syslog</connection>
          <port>514</port>
          <protocol>udp</protocol>
          <allowed-ips>192.168.1.0/24</allowed-ips>
        </remote>
       ...
      </ossec_config>
    

    The <allowed-ips> directive is mandatory and specifies which IP addresses or ranges are permitted to send syslog messages to the Wazuh server.56 This agentless collection method is vital for gaining visibility into network infrastructure events.

  • Collecting Local Application Logs:

    To monitor logs from applications running on agent-equipped endpoints, the <localfile> directive is used in the agent's ossec.conf. The <location> tag points to the application's log file, and <log_format> specifies its format. For standard syslog-formatted logs, syslog can be used. For other formats, like JSON, the json log format can be specified. If the application produces logs in a unique, non-standard format, custom decoders may be necessary to parse them effectively.51 For instance, to monitor Apache access logs, the configuration might look like:

    XML

      <localfile>
        <location>/var/log/apache2/access.log</location>
        <log_format>apache</log_format>
      </localfile>
    

.58

  • Integrating Suricata for Network IDS Data:

    Suricata, a powerful Network Intrusion Detection System (NIDS), generates detailed alerts about network traffic, often in the eve.json format.60 To integrate these alerts into Wazuh, a Wazuh agent is typically installed on the same host as Suricata (or on a dedicated log collector that receives Suricata logs). The agent’s ossec.conf is then configured to read the eve.json file 60:

    XML

      <localfile>
        <log_format>json</log_format>
        <location>/var/log/suricata/eve.json</location>
      </localfile>
    

    Wazuh’s built-in JSON decoder can parse these logs, and Wazuh’s Suricata-specific rules are then used to generate alerts based on the content of the eve.json events.61

  • Log Parsing and Normalization:

    Once logs are collected, they are sent to the Wazuh server for processing by the wazuh-analysisd engine.50 This involves several stages:

    1. Pre-decoding: The engine first attempts to extract common header information from syslog-like messages, such as the timestamp, hostname, and program name.64
    2. Decoding: Decoders, which are XML-based and utilize regular expressions (<regex>), pre-match conditions (<prematch>), and field ordering (<order>), parse the log messages. They extract relevant pieces of information and assign them to standardized fields (e.g., srcip, dstip, user, id, status). This normalization step is crucial for consistent rule application across diverse log sources.64
     3. Rule Matching: The decoded fields are then compared against the defined ruleset. If a log’s fields match the conditions of a rule, an alert is generated.64 The wazuh-logtest utility, shown below, can be used to trace these phases interactively.
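     For example, the wazuh-logtest utility on the Wazuh server runs a sample log line through these same phases interactively:

      Bash

       # Start the interactive tester, then paste any raw log line to see its
       # pre-decoding, decoding, and rule-matching output (Ctrl+C to exit).
       sudo /var/ossec/bin/wazuh-logtest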

Table: Key Log Sources and Wazuh Integration Methods

   
| Log Source Type | Collection Method | Key Wazuh Configuration (ossec.conf or server-side) |
| --- | --- | --- |
| Endpoint OS Logs (Linux, Windows) | Wazuh Agent (Logcollector) | <localfile> with <log_format>syslog</log_format> (Linux) or <log_format>eventchannel</log_format> / <log_format>eventlog</log_format> (Windows) |
| Custom Application Logs | Wazuh Agent (Logcollector) | <localfile> with <log_format>syslog</log_format>, <log_format>json</log_format>, or a custom decoder as needed |
| Firewall Logs (e.g., iptables, pfSense) | Remote Syslog to Wazuh Server, or Agent on syslog host | Server: <remote><connection>syslog</connection>...<allowed-ips>...</allowed-ips></remote>. Agent: <localfile> for the syslog file |
| Web Server Logs (Apache, Nginx) | Wazuh Agent (Logcollector) | <localfile> with <log_format>apache</log_format>/<log_format>nginx</log_format> or custom for non-standard formats |
| Suricata IDS Logs (eve.json) | Wazuh Agent (Logcollector) | <localfile> with <log_format>json</log_format> and <location>/var/log/suricata/eve.json</location> |

B. Detecting Nmap Scans

Network Mapper (Nmap) is a widely used tool for network discovery and security auditing. Detecting its use can indicate reconnaissance activity, a precursor to targeted attacks.

  • Typical Nmap Scan Log Signatures:

    • Firewall Logs (e.g., iptables): A common indicator is a rapid succession of connection attempts from a single source IP address to many different destination ports on one or more target hosts. These attempts may be logged as “DROP” or “ACCEPT” by the firewall, depending on its ruleset. iptables logs typically contain fields like SRC= (source IP), DST= (destination IP), PROTO= (protocol, usually TCP or UDP for scans), SPT= (source port, often ephemeral), and DPT= (destination port, which will vary during a scan).73
    • Suricata eve.json Logs: Suricata includes specific signatures designed to detect various Nmap scan types (e.g., SYN scan, FIN scan, Xmas scan, OS detection probes). When such a scan is detected, Suricata generates an alert in eve.json with event_type: "alert". The alert.signature field will describe the detected scan (e.g., “ET SCAN Potential Nmap Scan”, “ET SCAN NMAP OS Detection Probe”). Other relevant fields include src_ip, dest_ip, and dest_port.60 Nmap’s SYN scan (-sS) is a common stealthy technique that Suricata can flag.60
    • System Logs (if Nmap is executed on a monitored host): If Nmap is run directly on a system with a Wazuh agent, process execution logs (e.g., from Auditd on Linux, or Sysmon on Windows if configured for process creation events) might show the nmap command being launched.76
  • Utilizing Pre-built Wazuh Rules & Decoders:

    Wazuh’s default ruleset contains rules that can identify some forms of scanning activity, particularly when logs from firewalls or IDS like Suricata are properly decoded and ingested.80 The integration with Suricata is especially potent, as Wazuh has rules that specifically match Suricata alert signatures; these are often grouped under the suricata rule group.61 For detecting Nmap run locally, Wazuh’s command monitoring capability can be configured to alert on the execution of the nmap process, similar to how Netcat execution can be monitored.79
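    As a hedged sketch of that command-monitoring approach (adapted from the documented netcat example; the command, interval, rule ID, and match string are assumptions and may need tuning to avoid false positives):

      XML

       <!-- Agent ossec.conf: capture the full process list periodically. -->
       <localfile>
         <log_format>full_command</log_format>
         <command>ps -e -o pid,ppid,command</command>
         <frequency>60</frequency>
       </localfile>

       <!-- Manager /var/ossec/etc/rules/local_rules.xml: flag nmap in that output. -->
       <group name="local,command_monitoring,">
         <rule id="100110" level="7">
           <if_sid>530</if_sid>
           <match>nmap</match>
           <description>Nmap process detected on the endpoint via command monitoring.</description>
         </rule>
       </group>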

  • Crafting Custom Rules and Decoders for Nmap:

    To enhance Nmap detection, especially from firewall logs that might not have highly specific default decoders, custom rules and decoders are often necessary.

    • Log Sources: iptables logs (often sent via syslog), other firewall syslogs, Suricata eve.json files.

    • Custom Decoder Location: /var/ossec/etc/decoders/local_decoder.xml or new .xml files within the /var/ossec/etc/decoders/ directory.66

    • Custom Rule Location: /var/ossec/etc/rules/local_rules.xml or new .xml files within the /var/ossec/etc/rules/ directory. Custom rule IDs should typically be in the range 100000-120000.83

    • Analyzing iptables Logs:

      • Decoder for iptables: A custom decoder needs to parse the specific format of your iptables logs. Key fields to extract include the source IP (srcip), destination IP (dstip), destination port (dstport), protocol (protocol), and the firewall action (e.g., DROP, ACCEPT).

        Example (conceptual) iptables log: Jul 20 10:00:00 fw_host kernel: IN=eth0 OUT= MAC=xx:xx:xx:xx:xx:xx:yy:yy:yy:yy:yy:yy SRC=192.168.1.100 DST=192.168.1.200 PROTO=TCP SPT=12345 DPT=22...

        Example (conceptual) decoder snippet for iptables:

        XML

          <decoder name="iptables-custom">
            <program_name>^kernel</program_name>
            <prematch_pcre2>IN=\S* OUT=</prematch_pcre2>
          </decoder>

          <decoder name="iptables-fields">
            <parent>iptables-custom</parent>
            <regex_pcre2>SRC=(\S+) DST=(\S+) PROTO=(\S+).* DPT=(\d+)</regex_pcre2>
            <order>srcip, dstip, protocol, dstport</order>
          </decoder>
        
      • Rule for iptables Port Scan Detection: This rule aims to detect multiple connection attempts from the same source IP to different destination ports on a target. A simple way to express this is a low-level base rule that matches each decoded iptables event, plus a frequency-based correlation rule built on top of it.

        XML

          <group name="firewall,portscan,nmap_scan,">
            <rule id="100099" level="3">
              <decoded_as>iptables-custom</decoded_as>
              <description>iptables connection event.</description>
            </rule>
            <rule id="100100" level="8" frequency="30" timeframe="60">
              <if_matched_sid>100099</if_matched_sid>
              <same_source_ip />
              <different_dstport />
              <description>Potential port scan from $(srcip) detected by iptables (multiple destination ports).</description>
              <group>pci_dss_11.4,nist_800_53_CA.7,</group>
            </rule>
          </group>
        

        An example from a GitHub repository 73 shows a rule for Nmap output converted to JSON; while this differs from direct iptables log analysis, it illustrates the same field-matching approach. Another resource 120 describes blocking IPs after port scan detection, implying that such rules either exist or can be created.

    • Analyzing Suricata eve.json Logs:

      • Wazuh’s default JSON decoder generally handles eve.json logs well, extracting fields like event_type, src_ip, dest_ip, dest_port, and alert.signature.60
      • Rule for Specific Suricata Nmap Signatures: Create rules that match known Nmap-related signatures from Suricata.

        XML

          <group name="suricata,ids,nmap_scan,">
            <rule id="100105" level="10">
              <decoded_as>json</decoded_as>
              <field name="event_type">alert</field>
              <field name="alert.signature" type="pcre2">(?i)NMAP|ET SCAN</field>
              <description>Nmap activity detected by Suricata: $(alert.signature). Source: $(src_ip), Destination: $(dest_ip):$(dest_port).</description>
              <group>pci_dss_11.4,gdpr_IV_32.2,nist_800_53_SI.4,</group>
            </rule>
          </group>
        

        This rule looks for alerts from Suricata where the signature contains “NMAP” or “ET SCAN”, common in Nmap-related detections.62

    • Testing Rules and Decoders: Always use the /var/ossec/bin/wazuh-logtest utility on the Wazuh server to test new or modified decoders and rules with sample log entries before putting them into production. This tool shows how logs are pre-decoded, decoded, and which rules they match.66

C. Detecting DDoS Attacks

Distributed Denial of Service (DDoS) attacks aim to overwhelm a target system with traffic, making it unavailable to legitimate users. Detecting these often involves looking for anomalous traffic patterns.

  • Common DDoS Log Patterns:

    • Web Server Logs (Apache/Nginx Access Logs): A sudden, massive spike in the number of requests, often from a distributed set of IP addresses (or a few very aggressive IPs for DoS). These requests might target specific, resource-intensive URLs, or be a flood of requests for large files. An increase in HTTP error codes like 403 (Forbidden), 429 (Too Many Requests), or 503 (Service Unavailable) can also indicate that the server is struggling to cope. Sometimes, attackers use common or unusual User-Agent strings.86
    • Firewall Logs: A large volume of incoming connections, often SYN packets in the case of a SYN flood attack, originating from many different source IPs (or a single IP for DoS) and targeting a specific destination IP and port. Firewall logs might show these connections being accepted (if they bypass initial filters) or, more likely, being dropped if rate-limiting or flood protection rules are in place. High packet and byte counts to the target are also indicative.88
  • Utilizing Pre-built Wazuh Rules & Decoders:

    Wazuh includes default rules for common web server events, including various HTTP error codes and access patterns (e.g., Apache rules are in 0250-apache_rules.xml 59). These can serve as a basis for DDoS detection. Firewall log analysis rules can also help identify connection flood patterns. Furthermore, Wazuh’s active response capability includes scripts like firewall-drop (for iptables) or netsh.exe (for Windows Firewall) that can be triggered to block attacking IP addresses once identified.89
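    As a sketch of how such an active response could be wired up on the Wazuh server (this assumes the default firewall-drop command shipped in the manager's ossec.conf and references the custom DDoS rule IDs defined below; the timeout value is illustrative):

      XML

        <active-response>
          <command>firewall-drop</command>
          <location>local</location>
          <rules_id>100200,100201,100202</rules_id>
          <timeout>600</timeout>
        </active-response>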

  • Crafting Custom Rules and Decoders for DDoS:

    • Log Sources: Apache/Nginx access logs, firewall logs (e.g., iptables, commercial firewalls sending syslog).
    • Analyzing Web Server Logs (Apache/Nginx):
      • Decoders: Ensure your web server access log decoders correctly extract fields like source IP (srcip), HTTP status code (status_code or id in some decoders), requested URI (url or request_uri), and user agent (agent or user_agent). Wazuh has default decoders for common Apache and Nginx formats 59, but custom formats may require custom decoders. The generic JSON decoder 63 can be used if logs are output in JSON.
      • Rule for High Request Frequency from a Single IP (DoS): This rule triggers if a single IP address sends an unusually high number of requests within a short period.

        XML

          <group name="web,accesslog,ddos,">
            <rule id="100200" level="10" frequency="100" timeframe="60">
              <if_matched_group>accesslog</if_matched_group>
              <same_source_ip />
              <description>Potential DoS: High request rate from IP $(srcip) to web server.</description>
              <group>pci_dss_1.0,gdpr_IV_35.7.d,nist_800_53_SC.5,</group>
            </rule>
          </group>

        This rule structure, using frequency and timeframe, is common for detecting repeated events indicative of brute-force or DoS.90

      • Rule for Spike in HTTP Error Codes (e.g., 4xx, 5xx) (DDoS indicator): This rule looks for a high volume of client-side or server-side errors, which can occur when a server is under duress from a DDoS attack.

        XML

          <group name="web,accesslog,ddos,">
            <rule id="100201" level="8" frequency="200" timeframe="120">
              <if_matched_group>accesslog</if_matched_group>
              <field name="status_code" type="pcre2">^[45]\d\d$</field>
              <description>Potential DDoS: High volume of HTTP error codes $(status_code) from web server.</description>
              <group>pci_dss_11.4,gdpr_IV_32.2,nist_800_53_SI.4,</group>
            </rule>
          </group>
    • Analyzing Firewall Logs for Connection Floods (e.g., SYN Flood):
      • Decoder: The firewall log decoder must extract source IP (srcip), destination IP (dstip), protocol (protocol), and ideally TCP flags (specifically the SYN flag).
      • Rule for High Rate of SYN Packets:

        XML

          <group name="firewall,connection_flood,ddos,">
            <rule id="100202" level="12" frequency="500" timeframe="30">
              <if_matched_group>firewall_connection</if_matched_group>
              <field name="protocol" type="pcre2">(?i)^TCP$</field>
              <field name="tcp_flags" type="pcre2">(?i)SYN</field>
              <description>Potential SYN Flood DDoS: High rate of SYN packets from $(srcip) targeting $(dstip) detected by firewall.</description>
              <group>pci_dss_1.0,gdpr_IV_35.7.d,nist_800_53_SC.5,</group>
            </rule>
          </group>
    • Testing: As always, use /var/ossec/bin/wazuh-logtest to verify custom decoders and rules.84

D. Visualizing Threats and Setting Up Notifications

Visualizing alerts and receiving timely notifications are crucial for effective incident response.

  • Wazuh Dashboard for Alert Visualization:

    The Wazuh Dashboard provides powerful tools for visualizing security data. It comes with pre-built dashboards for general security events, threat hunting, MITRE ATT&CK mapping, and compliance.12 Users can create custom visualizations and dashboards by navigating to Explore > Visualize.92 The wazuh-alerts-* index pattern is the primary source for alert data visualizations.92

  • Creating Custom Dashboards for Nmap & DDoS:

    • Nmap Scan Visualization:

      • Geographical Map: Visualize the source IPs of Nmap scans using the GeoLocation.location field (if geolocation is enabled and data is present).92
      • Data Table: Display key details of Nmap alerts, including timestamp, rule.description, src_ip, dest_ip, dest_port, and alert.signature (if alerts originate from Suricata).92
      • Top Scanners (Bar Chart): Show the top source IP addresses (src_ip) generating Nmap scan alerts (Aggregation: Terms on src_ip, Metric: Count).92
      • Scan Activity Timeline (Line Chart): Plot the number of Nmap alerts over time (Aggregation: Date Histogram on timestamp, Metric: Count).92
      • Examples of visualizing Suricata Nmap alerts can be found in community resources and guides.62
    • DDoS Attack Visualization:

      • Traffic Volume Over Time (Line Chart): Monitor the overall request volume to web servers (Metric: Count of web access log entries, X-axis: Date Histogram on timestamp) to spot unusual spikes.92
      • Top Attacking IPs (Bar Chart): Identify the source IP addresses (src_ip) most frequently triggering DDoS-related rules.92
      • HTTP Status Code Distribution (Pie Chart): Visualize the distribution of HTTP status codes (e.g., 2xx, 4xx, 5xx) during a suspected attack period to see if error rates are abnormally high.92
      • DDoS Alert Table: List DDoS alerts with details like timestamp, rule.description, src_ip, targeted request_uri (for web DDoS), and rule-specific fields like rule.frequency or rule.timeframe.89
      • The Wazuh dashboard can display alerts related to DoS attacks where active response has blocked malicious IPs.89
  • Configuring Email Notifications for Critical Alerts:

    Wazuh manager can be configured to send email notifications when specific alerts are triggered.95

    1. Global Email Configuration: In /var/ossec/etc/ossec.conf, within the <global> section, configure your SMTP server details:

      XML

        <global>
          <email_notification>yes</email_notification>
          <smtp_server>your.smtp.server.com</smtp_server>
          <email_from>wazuh-alerts@yourdomain.com</email_from>
          <email_to>security_team@yourdomain.com</email_to>
        </global>
      


    2. Alert Level Threshold for Emails: In the <alerts> section of ossec.conf, set the minimum severity level required for an alert to trigger an email (<email_alert_level>). The default is often 12, but this can be adjusted.95

      XML

        <alerts>
          <log_alert_level>3</log_alert_level>
          <email_alert_level>10</email_alert_level>
        </alerts>

    3. Granular Email Alerting: For more specific notifications, use the <email_alerts> block. This allows sending emails based on rule ID, rule group, or event location, and can direct alerts to different recipients.95

    Example for Nmap and DDoS alerts:

      XML

        <ossec_config>
          <email_alerts>
            <email_to>nmap-notifications@yourdomain.com</email_to>
            <rule_id>100100, 100105</rule_id>
            <do_not_delay/>
          </email_alerts>

          <email_alerts>
            <email_to>ddos-notifications@yourdomain.com</email_to>
            <rule_id>100200, 100201, 100202</rule_id>
            <do_not_delay/>
          </email_alerts>
        </ossec_config>
    4. Rule-Specific Email Option: Within a specific rule definition (in /var/ossec/etc/rules/local_rules.xml or other custom rule files), you can add <options>alert_by_email</options> to ensure an email is always sent when that particular rule fires, irrespective of the global email_alert_level (see the sketch after this list).95
    5. Restart the Wazuh manager service (for example, systemctl restart wazuh-manager) after making changes to ossec.conf.
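    As a minimal sketch of that per-rule option, the custom Suricata Nmap rule defined earlier could be written in local_rules.xml with the option added inside the rule body (the rule content is unchanged apart from the <options> line):

      XML

        <group name="suricata,ids,nmap_scan,">
          <rule id="100105" level="10">
            <decoded_as>json</decoded_as>
            <field name="event_type">alert</field>
            <field name="alert.signature" type="pcre2">(?i)NMAP|ET SCAN</field>
            <options>alert_by_email</options>
            <description>Nmap activity detected by Suricata: $(alert.signature).</description>
          </rule>
        </group>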

The effectiveness of threat detection in Wazuh is deeply rooted in the synergy between decoders and rules.64 Decoders must accurately parse raw log data and extract the crucial fields. Without this, even the most sophisticated rules will fail. For instance, to detect a SYN flood from firewall logs, the decoder must reliably extract the source IP and TCP flags. If it cannot, a rule looking for a high frequency of SYN packets from a single source will be ineffective. This implies that users must often invest significant effort in understanding their log sources and potentially developing custom decoders. The wazuh-logtest utility is an indispensable tool in this iterative process of decoder and rule refinement.84

Furthermore, for alerts to be actionable, contextualization is paramount. A simple alert for “Nmap scan detected” or “high traffic volume” can lead to alert fatigue. Rules should be designed to incorporate context that helps prioritize responses. For example, an Nmap scan targeting critical servers, or originating from an IP address on a known threat intelligence feed, is far more concerning than a routine internal vulnerability scan.76 Similarly, a DDoS alert should ideally provide information about the targeted service, the nature of the traffic, and its volume. Custom rules should leverage specific fields and conditions to differentiate between benign activity and genuine threats, and alert severity levels should reflect this enriched context.

Finally, dashboards are not static entities; they require iterative development. As new threat vectors emerge, monitoring priorities shift, or new log sources are integrated, dashboards must be adapted to reflect these changes. Visualizing the right data in the most effective way is critical for enabling security analysts to quickly comprehend the situation during an incident.12 Users should begin by creating basic visualizations for Nmap and DDoS alerts—such as tables of source IPs, timelines of attack frequency, or charts of traffic volume—and then iteratively refine these dashboards based on operational feedback and the specific insights they need to derive quickly during investigations.

VI. Best Practices: Wazuh Management and Maintenance

Effective management and consistent maintenance are crucial for ensuring the long-term reliability, performance, and security of a Wazuh deployment.

A. Effective Agent Management

Managing a fleet of Wazuh agents, especially in larger environments, requires a strategic approach.36

  • Agent Grouping: A cornerstone of efficient agent management is the use of agent groups. Wazuh allows administrators to group agents logically (e.g., by operating system, server function like “webservers”, or geographical location). Centralized configurations can then be applied to these groups via agent.conf files stored on the Wazuh manager. This approach ensures consistent policy enforcement and simplifies mass configuration updates, as changes made in agent.conf are automatically pushed to the relevant agents.52 For example, one could define a specific FIM policy for all “Linux Servers” or a unique set of SCA checks for “PCI DSS Scope Systems.”

    XML

      <agent_config os="Linux">
        <localfile>
          <location>/var/log/custom_linux_app.log</location>
          <log_format>syslog</log_format>
        </localfile>
      </agent_config>

      <agent_config profile="webserver">
        <syscheck>
          <directories check_all="yes">/var/www/html</directories>
        </syscheck>
      </agent_config>
    


  • Secure Remote Agent Upgrades: Keeping Wazuh agents up-to-date with the latest versions is important for security patches and new features. Wazuh provides mechanisms for remote agent upgrades using Wazuh Signed Package (WPK) files. The agent upgrade module on the server facilitates this process, allowing administrators to roll out updates to agents without manual intervention on each endpoint.36
  • Monitoring Agent Health and Connectivity: Continuous monitoring of agent status is vital to ensure uninterrupted data collection.
    • The Wazuh Dashboard (typically under an “Agents” or “Endpoints” section) provides an overview of agent statuses: Active, Disconnected, Pending, or Never connected.39
    • On the Wazuh server, the agent_control utility can be used to query the status of a specific agent: /var/ossec/bin/agent_control -i <AGENT_ID>.39
    • The Wazuh API also exposes endpoints to retrieve agent status and statistics (e.g., GET /agents/<AGENT_ID>/stats/agent).39
    • Locally on an agent, the /var/ossec/var/run/wazuh-agentd.state file (on Linux) contains the current connection status.39
    • Regularly checking agent logs (/var/ossec/logs/ossec.log on Linux, C:\Program Files (x86)\ossec-agent\ossec.log on Windows, /Library/Ossec/logs/ossec.log on macOS) for errors related to connectivity or module function is crucial for troubleshooting.39
  • Events Per Second (EPS) Limits: Each Wazuh agent has a default EPS limit (typically 500) for sending events to the manager. This is a throttling mechanism to prevent a single noisy agent from overwhelming the server. The agent also has an internal buffer (default 5000 events) to queue events if the sending rate is exceeded.52 In environments with high event volumes, these limits might need adjustment, but this should be done cautiously while monitoring the Wazuh manager and indexer resource utilization to avoid creating bottlenecks upstream.52
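    A sketch of the corresponding agent-side block in ossec.conf (the values shown are the defaults described above; raise them only while watching manager and indexer resource usage):

      XML

        <client_buffer>
          <disabled>no</disabled>
          <queue_size>5000</queue_size>
          <events_per_second>500</events_per_second>
        </client_buffer>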

B. Robust Server Maintenance

Maintaining the health and integrity of the Wazuh central components (Server, Indexer, Dashboard) is paramount.

  • Update Strategies:

    • The official Wazuh quickstart installation guide often recommends disabling automatic package updates for Wazuh components.25 This is a precautionary measure to prevent automated updates from potentially breaking a stable environment, especially if customizations have been made.
    • A manual update strategy allows for testing new versions in a non-production environment before rolling them out to production. This provides a controlled approach to adopting new features and security patches.
    • Starting from Wazuh version 4.8, the Wazuh Dashboard may display notifications when a new version is available, linking to the release notes.98
    • It is essential to regularly consult the official Wazuh release notes to understand the changes, new features, bug fixes, and any potential breaking changes in new versions.98
  • Backup and Restore Procedures:

    A comprehensive backup strategy is critical for disaster recovery, migration, or rolling back from failed upgrades.99

    • Wazuh Manager/Server: Key data and configurations to back up include:
      • Configuration files: /var/ossec/etc/ (this directory contains ossec.conf, custom rules and decoders, agent keys in client.keys, lists, etc.).101
      • Agent databases and state information: /var/ossec/queue/ (contains subdirectories for agent information, FIM databases, Syscheck databases, etc.). The global.db is particularly important and is automatically backed up by Wazuh-DB to /var/ossec/backup/db/ before upgrades.103
      • Log files (alerts and archives): /var/ossec/logs/ (optional, depending on retention strategy, as these can be very large).
      • Custom integration scripts: /var/ossec/integrations/.
    • Wazuh Indexer: The primary method for backing up Wazuh Indexer data (which is OpenSearch/Elasticsearch data) is through its native snapshot and restore mechanism. Index State Management (ISM) policies can be configured to automate snapshot creation.104 The actual data directories are defined in the Indexer’s opensearch.yml (e.g., /var/lib/wazuh-indexer/).
    • Wazuh Dashboard: The main configuration file is /etc/wazuh-dashboard/opensearch_dashboards.yml. Saved objects like custom dashboards, visualizations, and searches are stored within the Wazuh Indexer itself (in specific OpenSearch indices like .kibana or .opensearch_dashboards). Backing up the Indexer effectively backs up these dashboard objects.

    Table: Critical Wazuh File Paths for Backup

   
| Component | Key Paths/Files | Description |
| --- | --- | --- |
| Wazuh Manager | /var/ossec/etc/ | All configurations, rules, decoders, agent keys (client.keys), CDB lists. |
|  | /var/ossec/queue/agent-info/, /var/ossec/queue/db/, /var/ossec/queue/fim/, /var/ossec/queue/syscollector/ | Agent information, main operational database, FIM/Syscheck databases. |
|  | /var/ossec/api/configuration/ | Wazuh API configuration (if customized). |
|  | /var/ossec/integrations/ | Custom integration scripts. |
| Wazuh Indexer | Data directories (from opensearch.yml, e.g., /var/lib/wazuh-indexer/); configuration files /etc/wazuh-indexer/opensearch.yml and /etc/wazuh-indexer/jvm.options | Primarily use the Indexer’s snapshot/restore mechanism for data; back up configuration files separately. |
| Wazuh Dashboard | /etc/wazuh-dashboard/opensearch_dashboards.yml and /usr/share/wazuh-dashboard/data/wazuh/config/wazuh.yml | Main configuration file and Wazuh API connection config. Saved objects (dashboards, visualizations) are stored in the Wazuh Indexer. |

C. Performance Optimization Strategies

Optimizing Wazuh performance involves tuning its various components to handle the specific load of an environment.7

  • Wazuh Manager (ossec.conf, analysisd, EPS):

    • Hardware Resources: Adequate CPU, ample RAM, and fast SSD storage are fundamental. Resource allocation should scale with the number of agents and their EPS generation.7
    • ossec.conf Tuning:
      • The <queue_size> setting within the <remote> block of the Wazuh server’s ossec.conf controls the capacity of the queue that holds events from agents. The default is 131072, with a maximum of 262144. Increasing this can help absorb temporary bursts but doesn’t solve underlying processing bottlenecks.108
      • For the File Integrity Monitoring (Syscheck) module, options like <max_eps> can throttle the rate of FIM events, and <max_files_per_second> can control scanning speed to reduce load.107
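      A sketch combining these two server-side settings in ossec.conf (the values are illustrative, not recommendations):

        XML

          <remote>
            <connection>secure</connection>
            <queue_size>262144</queue_size>
          </remote>

          <syscheck>
            <max_eps>100</max_eps>
            <max_files_per_second>200</max_files_per_second>
          </syscheck>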
    • wazuh-analysisd Tuning (via /var/ossec/etc/local_internal_options.conf): The analysis daemon (wazuh-analysisd) is responsible for decoding logs and matching them against rules.
      • Queue Sizes: analysisd maintains several internal queues for different types of events (e.g., analysisd.decode_syscheck_queue_size for FIM events, analysisd.decode_event_queue_size for general events). If event drops are observed (monitored via API or daemon state files), increasing the relevant queue sizes (default 16384) might be necessary. However, this consumes more RAM.108
      • Decoder Threads: The number of threads dedicated to decoding different event types (e.g., analysisd.event_threads, analysisd.syscheck_threads) and rule matching (analysisd.rule_matching_threads) can be adjusted. By default (0), Wazuh attempts to auto-configure these based on available CPU cores. Manually increasing these might help if CPU is underutilized and queues are backing up, but too many threads can lead to contention.109
    • EPS Management: Monitor the EPS rate from agents. If the Wazuh manager is consistently overloaded (indicated by event drops in wazuh-analysisd or full queues), strategies include:
      • Optimizing agent configurations to send fewer, more relevant logs.
      • Filtering noisy rules or logs at the source or server.
      • Scaling the Wazuh manager horizontally by adding more nodes to the cluster.19
  • Wazuh Indexer (OpenSearch/Elasticsearch):

    • Memory Locking: This is crucial to prevent the JVM from being swapped to disk, which severely degrades Indexer performance. Enable bootstrap.memory_lock: true in opensearch.yml and adjust system limits (LimitMEMLOCK=infinity for systemd service).16
    • JVM Heap Size: Configure the JVM heap size in jvm.options. A common recommendation is to set the initial (-Xms) and maximum (-Xmx) heap size to the same value, and this value should be no more than 50% of the system’s total RAM (and generally not exceeding ~30-32GB even on larger servers due to JVM limitations with compressed ordinary object pointers).16
    • Shards and Replicas:
      • Primary Shards: The number of primary shards for an index is defined at its creation and cannot be changed without re-indexing. A general guideline for optimal performance is to have the number of primary shards align with the number of data nodes in the Indexer cluster (e.g., a 3-node cluster might have 3 primary shards per index).16 Over-sharding (too many small shards) or under-sharding (too few large shards) can both lead to performance issues.
      • Replica Shards: Replicas provide data redundancy (high availability) and can improve search performance by distributing query load. In a multi-node cluster, at least one replica is recommended. The number of replicas can be changed dynamically.16
    • Index Lifecycle Management (ILM/ISM): Implement ISM policies to automate the lifecycle of indices, especially wazuh-alerts-* and wazuh-archives-*. This includes actions like rolling over to a new index when the current one reaches a certain size or age, moving older indices to different storage tiers (hot-warm-cold architecture), and eventually deleting very old data to manage storage space and maintain query performance.104
    • Hot-Warm-Cold Architecture: For environments with large volumes of data and long retention requirements, a tiered storage architecture can be beneficial. Hot nodes use fast storage for recent, frequently accessed data. Warm nodes use slower, more cost-effective storage for older, less frequently queried data. Cold nodes might use even cheaper storage for archival purposes.104 ISM policies manage the data transition between these tiers.
    • Node Roles: In larger Indexer clusters, explicitly define node roles (cluster manager, data, ingest, coordinating) in opensearch.yml. This allows for specialized optimization of nodes based on their function.106 For example, data nodes require significant disk space and RAM, while dedicated cluster manager nodes can be less resource-intensive.

D. Common Troubleshooting Scenarios

Addressing common issues promptly is key to maintaining a healthy Wazuh deployment.39

  • Agent Connection Issues:
    • Symptoms: Agent shows as “Disconnected” or “Never connected” in the dashboard.
    • Checks:
      • Verify network connectivity between agent and manager on the designated ports (by default 1514/TCP for agent communication, with UDP as an option, and 1515/TCP for enrollment/registration).39 Firewalls (host-based or network) might be blocking traffic.
      • Examine agent logs (ossec.log) for errors related to authentication (e.g., “Invalid key”), certificate issues (if using manager/agent identity verification), or inability to resolve/reach the manager’s address.36
      • Confirm the Wazuh manager’s IP address or hostname is correctly configured in the agent’s ossec.conf.
      • Ensure the agent has been properly enrolled and its key exists in the manager’s client.keys file.
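      A sketch of the agent-side settings these checks refer to, in the agent's ossec.conf (address, port, and protocol must match the manager's listener):

        XML

          <client>
            <server>
              <address>MANAGER_IP_OR_HOSTNAME</address>
              <port>1514</port>
              <protocol>tcp</protocol>
            </server>
          </client>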
  • Wazuh Dashboard Unavailability or Errors:
    • Symptoms: Dashboard web page doesn’t load, shows errors, or data is missing.
    • Checks:
      • Verify the wazuh-dashboard service is running on its host. Check its logs for errors (e.g., using journalctl -u wazuh-dashboard).103
      • Confirm the Dashboard can connect to the Wazuh Indexer. The Indexer address is configured in /etc/wazuh-dashboard/opensearch_dashboards.yml. Test connectivity (e.g., using curl from the dashboard host to the indexer IP:port).103
      • Ensure the wazuh-indexer service is running and healthy. Check its logs (e.g., /var/log/wazuh-indexer/<CLUSTER_NAME>.log) for errors.103
      • Verify the Wazuh API connection configured in /usr/share/wazuh-dashboard/data/wazuh/config/wazuh.yml is correct and the API is responsive.
  • Vulnerability Detector Issues (e.g., “Dangling entries,” missing vulnerability data):
    • Symptoms: Vulnerabilities are shown for agents that have been removed, or for packages that are no longer installed/vulnerable.
    • Causes: Often due to agent removal events not being processed correctly by the vulnerability detector, or issues during inventory synchronization, especially if content updates trigger re-scans while other events are queued.113
    • Fixes (Wazuh 4.10.0+ improved handling, but pre-existing issues might need cleanup):
      • Ensure the agent is active and reporting its inventory correctly (check Syscollector).
      • Verify manager-indexer communication is healthy.
      • If issues persist, a reset of the vulnerability detector module might be needed. This involves stopping the manager, disabling the vulnerability-detector in ossec.conf, deleting its state databases (e.g., /var/ossec/queue/vd/), cleaning the wazuh-states-vulnerabilities-* index in the Wazuh Indexer, restarting the manager (to process the disabled state), re-enabling the module, and restarting again to trigger a full re-scan.113
  • Event Drops (Events lost before processing):
    • Agent-side: The agent’s internal buffer (default 5000 events) might overflow if it cannot send events to the manager quickly enough, or its EPS limit to the manager (default 500 EPS) is consistently exceeded. Consider adjusting agent-side log filtering or, cautiously, the leaky bucket parameters in local_internal_options.conf on the agent.52
    • Manager-side (wazuh-analysisd): Internal queues within the analysis engine might be full, leading to events being discarded. This is often logged with messages indicating queue overruns or event drops due to overload. Solutions include increasing specific queue sizes in the manager’s /var/ossec/etc/local_internal_options.conf (e.g., analysisd.decode_event_queue_size), optimizing rules and decoders for efficiency, or scaling out by adding more Wazuh manager nodes to the cluster.97

A proactive approach to monitoring is fundamental for effective Wazuh maintenance. Regularly checking agent connectivity, server and indexer logs for errors, monitoring EPS rates, and observing queue utilization statistics can help identify and resolve potential problems before they escalate into significant outages or data loss.39 This preventative stance is far more effective than merely reacting to failures.

Discipline in configuration management is another critical best practice. Utilizing centralized agent configuration via agent.conf for agent groups promotes consistency and simplifies updates.52 All server and indexer configuration files (ossec.conf, opensearch.yml, local_internal_options.conf, custom rule/decoder files) should be managed carefully, ideally with version control for custom elements. Regular and verified backups of these configurations and critical data stores are non-negotiable for disaster recovery.52

Finally, performance tuning should be viewed as an ongoing, iterative process rather than a one-time setup task. The optimal configuration for hardware resources, OS settings, and Wazuh component parameters (like JVM heap size, indexer shard/replica counts, analysisd queues, and EPS limits) will change as the monitored environment evolves, agent counts grow, and data volumes fluctuate.7 A cycle of monitoring key performance indicators, identifying bottlenecks, making informed adjustments, and re-evaluating is essential to ensure Wazuh continues to perform effectively at scale.

Conclusion

Wazuh emerges as a highly capable and versatile open-source security platform, offering a powerful combination of SIEM and XDR functionalities. Its ability to collect and analyze data from a wide array of sources, coupled with its robust detection mechanisms for intrusions, vulnerabilities, malware, and configuration anomalies, makes it an invaluable asset for organizations seeking to enhance their security posture. The platform’s scalability, from single-host deployments to extensive multi-node clusters, allows it to adapt to diverse environmental needs.

This guide has provided a comprehensive walkthrough, covering the fundamental architecture of Wazuh, the benefits it offers, and detailed, step-by-step instructions for installing its central components (Server, Indexer, Dashboard) on Linux systems. It has also delved into the critical role of Wazuh agents, detailing their installation across various operating systems including Linux, Windows, and macOS. Furthermore, practical guidance has been offered on configuring Wazuh for advanced threat detection, with a specific focus on identifying Nmap scans and DDoS attacks through log analysis and the creation of custom rules and decoders. Finally, best practices for ongoing agent management, server maintenance, performance optimization, and troubleshooting have been outlined to ensure the long-term efficacy of a Wazuh deployment.

By leveraging the detailed instructions and insights provided, users can effectively deploy and manage Wazuh, transforming it into a cornerstone of their security operations. The open-source nature of Wazuh, combined with its extensive capabilities and active community, provides a solid foundation for building a resilient and responsive security monitoring solution.

Further Resources

To continue your journey with Wazuh and deepen your understanding, the official Wazuh documentation, release notes, and community channels are highly recommended.

To get in touch with me or for general discussion please visit ZeroDayMindset Discussion

This post is licensed under CC BY 4.0 by the author.