Forwarders
The forwarder is a standalone agent that transmits log data to Security Data Lake. It typically runs as a background service, continuously streaming log data to the configured Security Data Lake destination.
The Forwarder connects to Security Data Lake over Transport Layer Security (TLS) to ensure that log data is transmitted securely. It also uses API token authentication so that only authorized forwarders can connect to your environment.
Installing and configuring forwarders
At least one Forwarder is required to ingest data, though multiple forwarders can be configured for load distribution or redundancy. To function properly, the forwarder must be connected to your environment using Transport Layer Security (TLS) for encrypted communication and authenticated with an API token to ensure that only authorized Forwarders can transmit log data.
To set up a forwarder, install it, start it, and then configure it with the required credentials so it can register with Security Data Lake and begin securely sending log data.
Obtain the API token and hostname (ingestion) URL
Create an API token that will be used during the installation and configuration of the forwarder:
Select the user icon on the upper right side of the screen and select Edit profile.

Select the Edit tokens button on the upper right side of the screen.

Enter the token details and select Create Token.

Obtain your hostname (ingestion) URL, which will be used for the same purpose:
Log in to your Security Data Lake console.
From the home page URL, copy this part of the address:
https://***.gravityzone.bitdefender.com/welcome
Attach it to the ingest- prefix:
ingest-***.gravityzone.bitdefender.com
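As a sketch, the ingest hostname can be derived from your console URL with plain shell string manipulation. The example.gravityzone.bitdefender.com domain below is a placeholder, not a real account:

```shell
# Placeholder console URL; substitute your own account's address.
CONSOLE_URL="https://example.gravityzone.bitdefender.com/welcome"

# Strip the scheme and the path, leaving only the domain.
CONSOLE_HOST="${CONSOLE_URL#https://}"
CONSOLE_HOST="${CONSOLE_HOST%%/*}"

# Prepend the "ingest-" prefix to get the ingestion hostname.
INGEST_HOST="ingest-${CONSOLE_HOST}"
echo "$INGEST_HOST"   # → ingest-example.gravityzone.bitdefender.com
```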
Installing a forwarder
The Forwarder is distributed using the same packaging and installation methods as the Security Data Lake server. You can choose between operating system packages, Docker, and a standalone binary. If you plan to install from an OS package or the binary, ensure that Java is installed on your operating system. See the Forwarder Installation page for more information.
You can install a forwarder using one of the following methods:
Operating system package installation:
DEB package – For Debian-based distributions such as Ubuntu.
RPM package – For Red Hat-based distributions such as RHEL, CentOS, or Fedora.
Docker – For containerized environments where the forwarder runs as a managed container.
Standalone binary – For manual installation or custom environments where package management is not used.
Prerequisites
For optimal performance, it is recommended to allocate at least 2 CPU cores (3 GHz) and 4 GB of RAM for the Forwarder.
Binary Installation
To install the Forwarder as a binary, download the binary files and manually install them on your system.
For the latest version of the Forwarder, download this file and install the forwarder.
Note
Use the information obtained during the Obtain the API token and hostname (ingestion) URL step.
Operating system package installation
The most common installation method for the Forwarder is using Linux operating system packages. You can choose between DEB and RPM formats.
Before installation, ensure that:
Java is installed on your system. Refer to System Requirements for the supported version.
You have access to a valid TLS certificate.
You have an API token generated from Security Data Lake for authentication.
Install via DEB (Debian Package)
To install the Forwarder on a Debian-based Linux system using the DEB package, follow these steps:
Download the DEB package and install required dependencies.
sudo apt-get install apt-transport-https openjdk-17-jdk-headless wget
wget https://packages.graylog2.org/repo/packages/graylog-forwarder-repository_6-5_all.deb
sudo dpkg -i graylog-forwarder-repository_6-5_all.deb
sudo apt-get update
Install the Security Data Lake forwarder package.
sudo apt-get install graylog-forwarder
Create the TLS certificate and update the configuration file.
sudo vi /etc/graylog/forwarder/forwarder.conf
Note
Use the information obtained during the Obtain the API token and hostname (ingestion) URL step.
Start the Security Data Lake forwarder service.
sudo systemctl start graylog-forwarder.service
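After starting the service, it is worth confirming that it came up cleanly. A quick check, assuming a systemd-based distribution:

```shell
# Confirm the service is active and inspect recent log output for errors.
sudo systemctl status graylog-forwarder.service
sudo journalctl -u graylog-forwarder.service -n 50 --no-pager
```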
Install via RPM (Red Hat Package Manager)
To install the Forwarder on a Red Hat–based Linux system using the RPM package, follow these steps:
Install Java.
sudo yum install java-17-openjdk-headless
Install the Security Data Lake repository configuration.
sudo rpm -Uvh https://packages.graylog2.org/repo/packages/graylog-forwarder-repository-6-5.noarch.rpm
Install the Security Data Lake forwarder package.
sudo yum install graylog-forwarder
Create the TLS certificate and update the configuration file.
sudo vi /etc/graylog/forwarder/forwarder.conf
Note
Use the information obtained during the Obtain the API token and hostname (ingestion) URL step.
Start the Security Data Lake Forwarder service:
sudo systemctl start graylog-forwarder.service
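Optionally, enable the service so it starts again after a reboot and confirm the package installed correctly. These are standard systemd and RPM commands, not product-specific ones:

```shell
# Start the forwarder automatically at boot.
sudo systemctl enable graylog-forwarder.service

# Show package metadata, including the installed version.
rpm -qi graylog-forwarder
```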
Install via Docker
The forwarder is also available as a Docker image. Regardless of the installation method, you must create a digital certificate to secure communication between the forwarder and Security Data Lake.
To download the image, run:
docker pull graylog/graylog-forwarder:<release-version>
Note
We suggest using the latest stable version.
When running the container, you must provide the following environment variables:
Note
When configuring container options, all option names must be written in uppercase and prefixed with GRAYLOG_.
GRAYLOG_FORWARDER_SERVER_HOSTNAME
GRAYLOG_FORWARDER_GRPC_API_TOKEN
You must also mount the certificate file as a volume when running the container.
docker run \
  -e GRAYLOG_FORWARDER_SERVER_HOSTNAME=ingest-<SERVER_NAME> \
  -e GRAYLOG_FORWARDER_GRPC_API_TOKEN=<API_TOKEN> \
  -v /path/to/cert/cert.pem:/etc/graylog/forwarder/cert.pem \
  graylog/graylog-forwarder:<RELEASE_VERSION>
Note
Use the information obtained during the Obtain the API token and hostname (ingestion) URL step.
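To verify that the container started and reached the ingest endpoint, standard Docker commands can be used. The image-name filter below assumes the image from the pull command above:

```shell
# List running containers started from the forwarder image.
docker ps --filter "ancestor=graylog/graylog-forwarder" --format "{{.ID}} {{.Status}}"

# Tail the most recent forwarder container's logs to check for connection errors.
docker logs --tail 50 "$(docker ps -q --filter ancestor=graylog/graylog-forwarder | head -n 1)"
```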
Example: Docker Compose Configuration for Graylog Forwarder
The following example shows how to deploy the Security Data Lake Forwarder using Docker Compose along with a supporting .env file.
.env file
# The Forwarder ingest hostname is composed of an "ingest-" string plus the domain
# of your Security Data Lake cluster (for example
# ingest-<your-account>.datainsights.gravityzone.bitdefender.com).
# Provided in the Forwarder Setup Wizard in Security Data Lake. [required]
GRAYLOG_FORWARDER_SERVER_HOSTNAME=""

# The API token used to authenticate the forwarder.
# Provided in the Forwarder Setup Wizard in Security Data Lake. [required]
GRAYLOG_FORWARDER_GRPC_API_TOKEN=""

# Enables TLS for forwarder communication.
# Always enable TLS for production environments.
GRAYLOG_FORWARDER_GRPC_ENABLE_TLS="true"
docker-compose.yml
services:
  forwarder:
    image: "graylog/graylog-forwarder:6.6"
    environment:
      GRAYLOG_FORWARDER_SERVER_HOSTNAME: "${GRAYLOG_FORWARDER_SERVER_HOSTNAME:?Please configure GRAYLOG_FORWARDER_SERVER_HOSTNAME in the .env file}"
      GRAYLOG_FORWARDER_GRPC_API_TOKEN: "${GRAYLOG_FORWARDER_GRPC_API_TOKEN:?Please configure GRAYLOG_FORWARDER_GRPC_API_TOKEN in the .env file}"
      GRAYLOG_FORWARDER_GRPC_ENABLE_TLS: "${GRAYLOG_FORWARDER_GRPC_ENABLE_TLS:-true}"
    ports:
      - "5044:5044/tcp" # Beats
      - "5140:5140/udp" # Syslog UDP
      - "5140:5140/tcp" # Syslog TCP
      - "5555:5555/tcp" # RAW TCP
      - "5555:5555/udp" # RAW UDP
      - "12201:12201/tcp" # GELF TCP
      - "12201:12201/udp" # GELF UDP
      #- "10000:10000/tcp" # Custom TCP port
      #- "10000:10000/udp" # Custom UDP port
    volumes:
      - "forwarder-data:/var/lib/graylog-forwarder"
      #- "/path/to/custom/jvm.options:/etc/graylog/forwarder/jvm.options"
    restart: "on-failure"
volumes:
  forwarder-data:
Starting a forwarder
After completing configuration, start the forwarder so that the Setup Wizard can detect it in the next step.
If you installed the Forwarder using OS packages or Docker, the start command is included in the installation process, and the Forwarder should already be running.
If you installed the forwarder using binaries, follow these steps to start it:
Open a terminal window.
Go to the application directory.
Run the startup script using the following command:
./bin/graylog-forwarder run --configfile forwarder.conf
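Running the startup script this way keeps the forwarder attached to your terminal session. As a sketch, assuming the binary was unpacked under /opt/graylog-forwarder (a hypothetical path), you can run it in the background and capture its output:

```shell
# Run the forwarder detached from the terminal; log file and paths are
# assumptions, adjust them to your installation directory.
cd /opt/graylog-forwarder
nohup ./bin/graylog-forwarder run --configfile forwarder.conf > forwarder.log 2>&1 &

# Record the process ID so the forwarder can be stopped later.
echo $! > forwarder.pid
```

For production use, consider wrapping the run command in a systemd unit instead, so the forwarder restarts automatically after failures and reboots.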
Configuring a forwarder
Forwarder Setup Wizard
Security Data Lake includes a Forwarder Setup Wizard that simplifies the process of configuring and connecting forwarders to your environment.
To configure a forwarder, follow these steps:
Go to System > Forwarders.
The Forwarders page is displayed.

Select New Forwarder or Get Started to launch the wizard.
The New Forwarder window is displayed, with the Welcome tab selected.
Under the Welcome tab, select Start configuration.

Under the Select forwarder tab, follow these steps:
Select Start new Forwarder or select one that was recently installed and select Configure selected Forwarder.

Under the 1. Install Forwarder section, select Continue.

Under the 2. Create API Token section, select Skip this step if you have performed the steps under Obtain the API token and hostname (ingestion) URL, then continue with step 5.
Alternatively, enter a Token Name and a Token TTL, then select Create Token.

Under the 3. Configure Forwarder section, fill in the following fields and select Continue:

forwarder_server_hostname - Mandatory. The forwarder ingest hostname (for example, ingest-<your-account>.datainsights.gravityzone.bitdefender.com).
forwarder_grpc_api_token - The API token used to authenticate the forwarder.
Under the 4. Select Forwarder, select the forwarder you just added, and select Configure selected Forwarder.
Under the Configure Forwarder section, fill in the following fields and select Add Forwarder inputs.
Title - Enter a descriptive name to identify this forwarder.
Description (Optional) - Provide additional details to help identify the Forwarder and describe its purpose.
Note
The Configure Forwarder section also contains the Hostname field, which is filled in automatically. It displays the hostname of the system where the forwarder is running.
Under the Add Inputs tab, follow these steps:
Select Create Input Profile.

Enter a title and description for the input and select Add Inputs.

Select the type of input you want to create and select Create Input.

Fill in the following fields to configure the input, and select Create Input.

Input Type – Select the input type to create (e.g., Bitdefender GravityZone).
Title – Enter a descriptive name to identify this input.
Bind Address – Specify the network address for the input to listen on (e.g., 0.0.0.0 or 127.0.0.1).
Port – Define the TCP port number used by the input to receive data.
Timezone – Set the timezone for message timestamps (e.g., Etc/UTC or America/Chicago).
Receive Buffer Size (optional) – Specify the size, in bytes, of the receive buffer for network connections.
No. of Worker Threads (optional) – Define how many worker threads process incoming connections.
TLS Cert File (optional) – Path to the TLS certificate file for secure connections.
TLS Private Key File (optional) – Path to the private key file associated with the TLS certificate.
Enable TLS – Enables TLS encryption for incoming connections to secure communication.
TLS Key Password (optional) – Password for the encrypted TLS key file, if applicable.
TLS Client Authentication (optional) – Determines whether clients must authenticate using TLS certificates.
TLS Client Auth Trusted Certs (optional) – File or directory containing trusted client certificates for TLS authentication.
TCP Keepalive – Enables keepalive packets to maintain idle TCP connections.
Enable Bulk Receiving (must always be checked) – Allows batch processing of messages received together, separated by newlines.
Enable CORS – Enables Cross-Origin Resource Sharing headers for browser-based requests.
Max. HTTP Chunk Size (optional) – Maximum HTTP chunk size in bytes for handling incoming request bodies.
Idle Writer Timeout (optional) – Time, in seconds, before closing an idle client connection (use 0 to disable).
Authorization Header Name – Name of the HTTP header used for authorization. It is used by Event Push to authenticate its requests.
Authorization Header Value – Expected authorization header value (e.g., bearer token) used for request validation. It is used by Event Push to authenticate its requests.
Locale (optional) – Defines the locale for parsing timestamps (e.g., en or en_US).
Use Full Field Names – Enables full Common Event Format (CEF) field names for compatibility with CEF specifications.
Select Add another Input and repeat the steps above or select Save Inputs.


Under the Summary tab, select Configure Another forwarder and repeat the steps above or select Exit configuration.

The forwarder will now appear in the list on the Forwarders page.

Setting up a new input profile
Input profiles act as templates that forwarders use to know what data to collect and how to send it to Security Data Lake.
An input profile is a reusable configuration that groups one or more inputs for collecting log data from different sources. Each input profile contains a set of input configurations that can be applied to one or multiple forwarders, providing a consistent and efficient way to manage data collection across your environment.
Input profiles simplify large-scale deployments by allowing you to:
Standardize input configurations across multiple forwarders.
Reuse the same input definitions for different locations, systems, or environments.
Easily modify data collection settings in one place instead of configuring each forwarder manually.
How input profiles work
When you create a forwarder, you assign an input profile to it. The forwarder then uses the inputs defined in that profile to collect logs from the configured sources (for example, Syslog, Beats, or Raw/Plaintext).
You can assign the same input profile to multiple forwarders if they need to collect identical types of data, such as:
Windows event logs from several hosts.
Syslog traffic from multiple routers.
Filebeat logs from servers in different regions.
Note
If you need to separate or organize data flows—for instance, by business unit or geographic region—you can create multiple input profiles and assign them accordingly.
Creating a new input profile
To create a new input profile, follow these steps:
Go to System > Forwarders.

Select the Input Profiles tab from the upper left side of the page, then select New Input Profile.
Enter a descriptive title and description for the profile, then select Create.

Select Create Input to create and configure a new input that will be assigned to this profile.
Note
You can find a list of all compatible inputs and configuration instructions under Input types.
Repeat this step for each input you want to add to the profile.
When finished, go back to System > Forwarders. The new input profile will appear under the Inputs table.

Editing and deleting input profiles
To edit or delete an input profile, go to System > Forwarders, then select the Input Profiles tab from the upper left side of the page.
To edit the name or description of an input profile, click the profile name under the Name column.

To enter the configuration window, select the Edit button under the Actions column.

To delete an input profile, select Choose action under the Actions column, then select Delete.

Note
When an input profile is deleted, all inputs created from that profile are also removed from any forwarders where they were assigned. If a forwarder is actively using the profile, the associated inputs will stop collecting or forwarding data immediately. Deleting a profile cannot be undone, so make sure no forwarder depends on it before confirming the deletion.
Monitoring forwarders
After your forwarders are connected to Security Data Lake, you can monitor them and their inputs to track activity and performance. This can be done by viewing active forwarders in your Security Data Lake environment, using REST API calls to check health status and input details, and exporting forwarder metrics to monitoring tools such as Prometheus.
Monitoring forwarders in Security Data Lake
To access the Forwarders page, select the Enterprise menu on the upper right side of the console, and select Forwarders.

The Forwarders page provides a list of all forwarders that have been added to your Security Data Lake instance. Each forwarder is displayed in a table containing relevant details organized under the following columns:

Title – The unique identifier of the forwarder.
Status – Displays the current connection state. A green Connected badge indicates that the forwarder is actively sending messages to your cloud instance.
Description – A short description of the forwarder.
Hostname – The hostname of the system where the forwarder or the forwarder input is installed.
Version – The forwarder’s version number.
Input Profile – The input profile associated with the forwarder.
Metrics – Shows the current message rate (msg/s), which helps verify activity.
Note
Active message rates in the Metrics column also indicate that a forwarder is functioning properly.
Actions – Allows you to edit the forwarder or perform other available actions.
Monitoring forwarders using REST API calls
The Forwarder Agent REST API allows you to check the health status and inputs of your forwarders and export Prometheus metrics for monitoring. To enable the API, update your forwarder configuration following these steps:
Open the forwarder.conf file.
Add the following line:
forwarder_api_enabled = true
When enabled, the API listens on a Unix domain socket defined by the forwarder_api_socket_path parameter, unless a TCP address is specified using forwarder_api_tcp_bind_address. You can use tools such as curl to query the endpoint and retrieve forwarder information. If you’re unfamiliar with Unix sockets, refer to the official guide for usage details.
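With the API enabled, you can query these endpoints with curl. The socket path and TCP address below are illustrative assumptions; use the actual values of forwarder_api_socket_path or forwarder_api_tcp_bind_address from your forwarder.conf:

```shell
# Query the health endpoint over the Unix domain socket (path is an
# assumption; see forwarder_api_socket_path in forwarder.conf).
curl --unix-socket /var/run/graylog-forwarder/forwarder-api.sock http://localhost/api/health

# Or over TCP, if forwarder_api_tcp_bind_address is set (e.g., 127.0.0.1:8080).
curl http://127.0.0.1:8080/api/health
```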
Checking the status of a forwarder
To verify the health status of your forwarder, query the following REST endpoint:
GET /api/health
A successful response returns a JSON object that summarizes the forwarder’s overall health, input status, and upstream connectivity. For example:
{
"healthy": true,
"inputs": {
"healthy": true,
"running": 2,
"failed": 0,
"not running": 0
},
"upstream": {
"healthy": true
}
}
In this example, all components are healthy, and both configured inputs are running without issues.
Getting a list of inputs for a specific forwarder
To retrieve a list of inputs running on your forwarder, query the following REST endpoint:
GET /api/inputs
A successful response returns a JSON object that lists each configured input along with its ID and title. For example:
{
"inputs": [
{
"id": "5fc91564d44bfd2000249e8c",
"title": "Random"
},
{
"id": "5fc91550d44bfd2000249e74",
"title": "Beats"
}
]
}
You can further explore input details by drilling down into the input profile and checking the Forwarder section to ensure it is receiving messages. If no data is visible, create a new input.
To review configuration details, click the input name to open its main profile — this page displays the settings you defined during the initial setup.
If an input is in a failed state, the /api/inputs endpoint does not return any information for its ID or title.
For additional insights into forwarder activity and node health, select the Details button on your node to view message cache (buffer) information. The number of messages in the journal represents the forwarder’s on-disk storage queue, which temporarily holds messages to ensure no data is lost, even if other Security Data Lake components experience issues.
Monitoring forwarders using Prometheus metrics exports
The forwarder itself does not provide a built-in interface to view internal metrics. To access this information, you can configure a local Prometheus container, which serves as the interface for forwarder metrics. These metrics are similar to traditional Security Data Lake metrics but are exported in the standard Prometheus format.
To set up Prometheus for forwarder monitoring, follow these steps:
Download and start Prometheus.
Install Docker on your machine.
Create a Prometheus configuration file, for example:
touch /tmp/prometheus.yml
Add the following configuration to the file:
global:
  scrape_interval: 15s
  scrape_timeout: 10s
  evaluation_interval: 15s
alerting:
  alertmanagers:
    - static_configs:
        - targets: []
      scheme: http
      timeout: 10s
      api_version: v1
scrape_configs:
  - job_name: prometheus
    honor_timestamps: true
    scrape_interval: 15s
    scrape_timeout: 10s
    metrics_path: /api/metrics/prometheus
    scheme: http
    static_configs:
      - targets:
          - host.docker.internal:9001
Run the following Docker command to start the Prometheus container:
docker run \
  -p 9090:9090 \
  -v /tmp/prometheus.yml:/etc/prometheus/prometheus.yml \
  prom/prometheus
Once the container is running, Prometheus will begin scraping forwarder metrics from the specified endpoint, allowing you to monitor forwarder performance in real time.
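You can confirm the scrape is working through the standard Prometheus HTTP API, which the container exposes on the port mapped above:

```shell
# List the scrape targets Prometheus knows about, including their health.
curl -s http://localhost:9090/api/v1/targets

# Run a PromQL query; the "up" series is 1 while the scrape succeeds.
curl -s 'http://localhost:9090/api/v1/query?query=up'
```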
Forwarder reliability
To improve reliability and ensure continuity during major outages, whether widespread, long-lasting, or destructive, you can deploy additional forwarders as part of a resiliency strategy. This requires incorporating appropriate tools, procedures, and policies to maintain operations even under adverse conditions.
If your environment is at risk of disruption, consider implementing both message recovery mechanisms and load balancing to protect data flow and maintain performance.
Message recovery
The forwarder’s disk journal provides built-in caching to ensure data resiliency during network outages. When a connection to Security Data Lake is interrupted, incoming messages are temporarily stored on disk.
The forwarder continues to receive and store messages even if the internet connection is unavailable. Once connectivity is restored, the forwarder automatically resumes normal operation, sending the cached messages from the journal to Security Data Lake.
Load balancing options
As your deployment grows, data throughput increases, meaning more requests are processed across your systems. In larger or multi-forwarder environments, it’s recommended to configure a load balancer to evenly distribute data traffic. This helps manage high request volumes, reduce latency, and maintain overall resiliency.
A load balancer automatically routes requests to healthy nodes within your local or external datacenters, ensuring consistent performance and fault tolerance.
Versioning
The forwarder follows a MAJOR.MINOR versioning scheme and is released independently from Security Data Lake.
Regularly update your forwarders to the latest available release.
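To see which forwarder version is currently installed on a package-managed host, the standard dpkg and rpm queries apply (these are generic package-manager commands, not product-specific ones):

```shell
# Debian/Ubuntu installs:
dpkg -s graylog-forwarder | grep '^Version:'

# Red Hat-based installs:
rpm -q graylog-forwarder
```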