---
title: Elastic Integration filter plugin
description: Plugin version: v9.2.0 (Other versions), Released on: 2025-10-02, Changelog. For questions about the plugin, open a topic in the Discuss forums. For bugs...
url: https://www.elastic.co/docs/reference/logstash/plugins/plugins-filters-elastic_integration
---

# Elastic Integration filter plugin

- Plugin version: v9.2.0 ([Other versions](https://www.elastic.co/docs/reference/logstash/versioned-plugins/filter-elastic_integration-index))
- Released on: 2025-10-02
- [Changelog](https://github.com/elastic/logstash-filter-elastic_integration/blob/v9.2.0/CHANGELOG.md)

## Getting help

For questions about the plugin, open a topic in the [Discuss](http://discuss.elastic.co) forums. For bugs or feature requests, open an issue in [GitHub](https://github.com/elastic/logstash-filter-elastic_integration). For the list of Elastic supported plugins, please consult the [Elastic Support Matrix](https://www.elastic.co/support/matrix#matrix_logstash_plugins).

**Elastic Enterprise License**

Use of this plugin requires an active Elastic Enterprise [subscription](https://www.elastic.co/subscriptions).

## Description

Use this filter to process Elastic integrations powered by Elasticsearch Ingest Node in Logstash.

**Extending Elastic integrations with Logstash**

This plugin can help you take advantage of the extensive, built-in capabilities of [Elastic Integrations](https://docs.elastic.co/en/integrations)—such as managing data collection, transformation, and visualization—and then use Logstash for additional data processing and output options. For more info about extending Elastic integrations with Logstash, check out [Using Logstash with Elastic Integrations](https://www.elastic.co/guide/en/logstash/current/ea-integrations.html).

When you configure this filter to point to an Elasticsearch cluster, it detects which ingest pipeline (if any) should be executed for each event, using an explicitly-defined [`pipeline_name`](#plugins-filters-elastic_integration-pipeline_name) or auto-detecting the event’s data-stream and its default pipeline. It then loads that pipeline’s definition from Elasticsearch and runs that pipeline inside Logstash without transmitting the event to Elasticsearch.

Events that are successfully handled by their ingest pipeline will have `[@metadata][target_ingest_pipeline]` set to `_none` so that any downstream Elasticsearch output in the Logstash pipeline will avoid running the event’s default pipeline *again* in Elasticsearch.

Some multi-pipeline configurations such as logstash-to-logstash over http(s) do not maintain the state of `[@metadata]` fields. In these setups, you may need to explicitly configure your downstream pipeline’s Elasticsearch output with `pipeline => "_none"` to avoid re-running the default pipeline, as shown in the sketch below.

Events that *fail* ingest pipeline processing will be tagged with `_ingest_pipeline_failure`, and their `[@metadata][_ingest_pipeline_failure]` will be populated with details as a key/value map.
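As a minimal sketch, a downstream pipeline’s Elasticsearch output configured to bypass the default ingest pipeline might look like this (the host value is illustrative):

```
output {
  elasticsearch {
    hosts    => ["https://es.example.com:9200"]  # illustrative host
    # Skip the data stream's default pipeline, since the upstream
    # elastic_integration filter has already executed it.
    pipeline => "_none"
  }
}
```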
### Requirements and upgrade guidance

- This plugin requires a minimum of Java 17 with Logstash `8.x` versions, and a minimum of Java 21 with Logstash `9.x` versions.
- When you upgrade the Elastic Stack, upgrade Logstash (or this plugin specifically) *before* you upgrade Kibana. (Note that this requirement is a departure from the typical Elastic Stack [installation order](https://www.elastic.co/guide/en/elastic-stack/current/installing-elastic-stack.html#install-order-elastic-stack).)

The Elasticsearch-Logstash-Kibana installation order recommended here ensures the best experience with Elastic Agent-managed pipelines, and embeds functionality from a version of Elasticsearch Ingest Node that is compatible with the plugin version (`major`.`minor`).

### Using `filter-elastic_integration` with `output-elasticsearch`

Elastic Integrations are designed to work with [data streams](https://www.elastic.co/guide/en/logstash/current/plugins-outputs-elasticsearch.html#plugins-outputs-elasticsearch-data-streams) and [ECS-compatible](https://www.elastic.co/guide/en/logstash/current/plugins-outputs-elasticsearch.html#_compatibility_with_the_elastic_common_schema_ecs) output. Be sure that these features are enabled in the [`output-elasticsearch`](https://www.elastic.co/guide/en/logstash/current/plugins-outputs-elasticsearch.html) plugin.

- Set [`data_stream`](https://www.elastic.co/guide/en/logstash/current/plugins-outputs-elasticsearch.html#plugins-outputs-elasticsearch-data_stream) to `true`. (Check out [Data streams](https://www.elastic.co/guide/en/logstash/current/plugins-outputs-elasticsearch.html#plugins-outputs-elasticsearch-data-streams) for additional data streams settings.)
- Set [`ecs_compatibility`](https://www.elastic.co/guide/en/logstash/current/plugins-outputs-elasticsearch.html#plugins-outputs-elasticsearch-ecs_compatibility) to `v1` or `v8`.

Check out the [`output-elasticsearch` plugin](https://www.elastic.co/guide/en/logstash/current/plugins-outputs-elasticsearch.html) docs for additional settings.

## Minimum configuration

You will need to configure this plugin to connect to Elasticsearch, and may also need to provide local GeoIp databases.

```
filter {
  elastic_integration {
    cloud_id   => "YOUR_CLOUD_ID_HERE"
    cloud_auth => "YOUR_CLOUD_AUTH_HERE"
    geoip_database_directory => "/etc/your/geoip-databases"
  }
}
```

Read on for a guide to configuration, or jump to the [complete list of configuration options](#plugins-filters-elastic_integration-options).

## Connecting to Elasticsearch

This plugin communicates with Elasticsearch to identify which ingest pipeline should be run for a given event, and to retrieve the ingest pipeline definitions themselves. You must configure this plugin to point to Elasticsearch using exactly one of:

- A Cloud Id (see [`cloud_id`](#plugins-filters-elastic_integration-cloud_id))
- A list of one or more host URLs (see [`hosts`](#plugins-filters-elastic_integration-hosts))

Communication will be made securely over SSL unless you explicitly configure this plugin otherwise. You may need to configure how this plugin establishes trust of the server that responds, and will likely need to configure how this plugin presents its own identity or credentials.

### SSL Trust Configuration

When communicating over SSL, this plugin fully-validates the proof-of-identity presented by Elasticsearch using the system trust store. You can provide an *alternate* source of trust with one of:

- A PEM-formatted list of trusted certificate authorities (see [`ssl_certificate_authorities`](#plugins-filters-elastic_integration-ssl_certificate_authorities))
- A JKS- or PKCS12-formatted Keystore containing trusted certificates (see [`ssl_truststore_path`](#plugins-filters-elastic_integration-ssl_truststore_path))

You can also configure which aspects of the proof-of-identity are verified (see [`ssl_verification_mode`](#plugins-filters-elastic_integration-ssl_verification_mode)).
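For example, a minimal sketch pointing this plugin at a self-managed cluster whose certificate is signed by an internal CA (the host and file path are illustrative):

```
filter {
  elastic_integration {
    hosts => ["https://es.internal.example:9200"]                    # illustrative host
    ssl_certificate_authorities => ["/etc/pki/tls/internal-ca.pem"]  # illustrative path to your CA bundle
    # Request credentials (api_key, cloud_auth, or username/password)
    # are configured separately; see Request Identity below.
  }
}
```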
### SSL Identity Configuration

When communicating over SSL, you can also configure this plugin to present a certificate-based proof-of-identity to the Elasticsearch cluster it connects to using one of:

- A PKCS8 Certificate/Key pair (see [`ssl_certificate`](#plugins-filters-elastic_integration-ssl_certificate))
- A JKS- or PKCS12-formatted Keystore (see [`ssl_keystore_path`](#plugins-filters-elastic_integration-ssl_keystore_path))

### Request Identity

You can configure this plugin to present authentication credentials to Elasticsearch in one of several ways:

- ApiKey: (see [`api_key`](#plugins-filters-elastic_integration-api_key))
- Cloud Auth: (see [`cloud_auth`](#plugins-filters-elastic_integration-cloud_auth))
- HTTP Basic Auth: (see [`username`](#plugins-filters-elastic_integration-username) and [`password`](#plugins-filters-elastic_integration-password))

Your request credentials are only as secure as the connection they are being passed over. They provide neither privacy nor secrecy on their own, and can easily be recovered by an adversary when SSL is disabled.

## Minimum required privileges

This plugin communicates with Elasticsearch to resolve events into pipeline definitions and needs to be configured with credentials that have appropriate privileges to read from the relevant APIs. At startup, this plugin confirms that the current user has sufficient privileges, including:

| Privilege name | Description |
|---|---|
| `monitor` | A read-only privilege for cluster operations such as cluster health or state. Required when the plugin checks the Elasticsearch license. |
| `read_pipeline` | Read-only `get` and `simulate` access to ingest pipelines. Required when the plugin reads Elasticsearch ingest pipeline definitions. |
| `manage_index_templates` | Privilege for all operations on index templates. Required when the plugin resolves the default pipeline from an event’s data stream name. |

This plugin cannot determine whether an anonymous user has the required privileges when it connects to an Elasticsearch cluster that has security features disabled or when the user does not provide credentials. In that case, the plugin starts in an unsafe mode, raises a runtime error indicating that API permissions are insufficient, and prevents events from being processed by the ingest pipeline. To avoid these issues, set up user authentication and ensure that security in Elasticsearch is enabled (default).
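As a hedged sketch, a dedicated role granting these cluster privileges could be created in the Kibana Dev Tools console using the Elasticsearch role API (the role name `logstash_integration_reader` is illustrative, not prescribed by this plugin):

```
PUT _security/role/logstash_integration_reader
{
  "cluster": ["monitor", "read_pipeline", "manage_index_templates"]
}
```

A user or API key granted this role would satisfy the startup privilege check described above.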
## Supported Ingest Processors

This filter can run Elasticsearch Ingest Node pipelines that are *wholly* comprised of the supported subset of processors. It has access to the Painless and Mustache scripting engines where applicable:

| Source | Processor | Caveats |
|---|---|---|
| **Ingest Common** | `append` | *none* |
| | `bytes` | *none* |
| | `community_id` | *none* |
| | `convert` | *none* |
| | `csv` | *none* |
| | `date` | *none* |
| | `date_index_name` | *none* |
| | `dissect` | *none* |
| | `dot_expander` | *none* |
| | `drop` | *none* |
| | `fail` | *none* |
| | `fingerprint` | *none* |
| | `foreach` | *none* |
| | `grok` | *none* |
| | `gsub` | *none* |
| | `html_strip` | *none* |
| | `join` | *none* |
| | `json` | *none* |
| | `kv` | *none* |
| | `lowercase` | *none* |
| | `network_direction` | *none* |
| | `pipeline` | resolved pipeline *must* be wholly-composed of supported processors |
| | `registered_domain` | *none* |
| | `remove` | *none* |
| | `rename` | *none* |
| | `reroute` | *none* |
| | `script` | `lang` must be `painless` (default) |
| | `set` | *none* |
| | `sort` | *none* |
| | `split` | *none* |
| | `trim` | *none* |
| | `uppercase` | *none* |
| | `uri_parts` | *none* |
| | `urldecode` | *none* |
| | `user_agent` | side-loading a custom regex file is not supported; the processor will use the default user agent definitions as specified in [Elasticsearch processor definition](https://www.elastic.co/guide/en/elasticsearch/reference/current/user-agent-processor.html) |
| **Redact** | `redact` | *none* |
| **GeoIp** | `geoip` | requires MaxMind GeoIP2 databases, which may be provided by Logstash’s Geoip Database Management *OR* configured using [`geoip_database_directory`](#plugins-filters-elastic_integration-geoip_database_directory) |

### Field Mappings

During execution the Ingest pipeline works with a temporary mutable *view* of the Logstash event called an ingest document. This view contains all of the as-structured fields from the event with minimal type conversions. It also contains additional metadata fields as required by ingest pipeline processors:

- `_version`: a `long`-value integer equivalent to the event’s `@version`, or a sensible default value of `1`.
- `_ingest.timestamp`: a `ZonedDateTime` equivalent to the event’s `@timestamp` field

After execution completes the event is sanitized to ensure that Logstash-reserved fields have the expected shape, providing sensible defaults for any missing required fields. When an ingest pipeline has set a reserved field to a value that cannot be coerced, the value is made available in an alternate location on the event as described below.
| Logstash field | Type | Value |
|---|---|---|
| `@timestamp` | `Timestamp` | First coercible value of the ingest document’s `@timestamp`, `event.created`, `_ingest.timestamp`, or `_now` fields; or the current timestamp. When the ingest document has a value for `@timestamp` that cannot be coerced, it will be available in the event’s `_@timestamp` field. |
| `@version` | String-encoded integer | First coercible value of the ingest document’s `@version` or `_version` fields; or a sensible default value of `1`. When the ingest document has a value for `@version` that cannot be coerced, it will be available in the event’s `_@version` field. |
| `@metadata` | key/value map | The ingest document’s `@metadata`; or an empty map. When the ingest document has a value for `@metadata` that cannot be coerced, it will be available in the event’s `_@metadata` field. |
| `tags` | a String or a list of Strings | The ingest document’s `tags`. When the ingest document has a value for `tags` that cannot be coerced, it will be available in the event’s `_tags` field. |

Additionally, these Elasticsearch IngestDocument Metadata fields are made available on the resulting event *if-and-only-if* they were set during pipeline execution:

| Elasticsearch document metadata | Logstash field |
|---|---|
| `_id` | `[@metadata][_ingest_document][id]` |
| `_index` | `[@metadata][_ingest_document][index]` |
| `_routing` | `[@metadata][_ingest_document][routing]` |
| `_version` | `[@metadata][_ingest_document][version]` |
| `_version_type` | `[@metadata][_ingest_document][version_type]` |
| `_ingest.timestamp` | `[@metadata][_ingest_document][timestamp]` |

## Resolving Pipeline Definitions

This plugin uses Elasticsearch to resolve pipeline names into their pipeline definitions. When configured *without* an explicit [`pipeline_name`](#plugins-filters-elastic_integration-pipeline_name), or when a pipeline uses the Reroute Processor, it also uses Elasticsearch to establish mappings of data stream names to their respective default pipeline names.

It uses hit/miss caches to avoid querying Elasticsearch for every single event. It also works to update these cached mappings *before* they expire. The result is that when Elasticsearch is responsive this plugin is able to pick up changes quickly without impacting its own performance, and it can survive periods of Elasticsearch issues without interruption by continuing to use potentially-stale mappings or definitions.

To achieve this, mappings are cached for a maximum of 24 hours, and cached values are reloaded every 1 minute with the following effect:

- when a reloaded mapping is non-empty and is the *same* as its already-cached value, its time-to-live is reset to ensure that subsequent events can continue using the confirmed-unchanged value
- when a reloaded mapping is non-empty and is *different* from its previously-cached value, the entry is *updated* so that subsequent events will use the new value
- when a reloaded mapping is newly *empty*, the previous non-empty mapping is *replaced* with a new empty entry so that subsequent events will use the empty value
- when the reload of a mapping *fails*, this plugin emits a log warning but the existing cache entry is unchanged and gets closer to its expiry

## Troubleshooting

Troubleshooting ingest pipelines associated with data streams requires a pragmatic approach, involving thorough analysis and debugging techniques. To identify the root cause of issues with pipeline execution, you need to enable debug-level logging. The debug logs allow monitoring the plugin’s behavior and help to detect issues. The plugin operates through the following phases: pipeline *resolution*, ingest pipeline *creation*, and pipeline *execution*.
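As a hedged sketch, debug logging for this plugin can be raised at runtime through Logstash’s logging API (the logger name is taken from the sample log output later in this section; adjust the host and port to match your Logstash API endpoint):

```
curl -XPUT 'localhost:9600/_node/logging?pretty' -H 'Content-Type: application/json' -d '
{
  "logger.co.elastic.logstash.filters.elasticintegration": "DEBUG"
}'
```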
### Ingest Pipeline Resolution Errors

**Plugin does not resolve the ingest pipeline associated with a data stream**

If you encounter `No pipeline resolved for event ...` messages in the debug logs, the plugin is unable to resolve the ingest pipeline from the data stream. To diagnose and resolve the issue, verify whether the data stream’s index settings include a `default_pipeline` or `final_pipeline` configuration. You can inspect the index settings by running a `POST _index_template/_simulate_index/{type}-{dataset}-{namespace}` query in the Kibana Dev Tools console. Make sure to replace `{type}-{dataset}-{namespace}` with values corresponding to your data stream.

For further guidance, we recommend exploring [Manage Elastic Agent Integrations](https://www.elastic.co/guide/en/fleet/current/integrations.html), Elasticsearch [Ingest pipelines for fleet](https://www.elastic.co/guide/en/elasticsearch/reference/current/ingest.html#pipelines-for-fleet-elastic-agent), and [Elastic Integrations](https://docs.elastic.co/en/integrations) resources.

**Ingest pipeline does not exist**

If you notice `pipeline not found: ...` messages in the debug logs or `Pipeline {pipeline-name} could not be loaded` warning messages, the plugin has successfully resolved the ingest pipeline from `default_pipeline` or `final_pipeline`, but the specified pipeline does not exist. To confirm whether the pipeline exists, run a `GET _ingest/pipeline/{ingest-pipeline-name}` query in the Kibana Dev Tools console.

For further guidance, we recommend exploring [Manage Elastic Agent Integrations](https://www.elastic.co/guide/en/fleet/current/integrations.html), Elasticsearch [Ingest pipelines for fleet](https://www.elastic.co/guide/en/elasticsearch/reference/current/ingest.html#pipelines-for-fleet-elastic-agent), and [Elastic Integrations](https://docs.elastic.co/en/integrations) resources.

### Ingest Pipeline Creation Errors

If you encounter `failed to create ingest pipeline {pipeline-name} from pipeline configuration` error messages, the plugin is unable to create an ingest pipeline from the resolved pipeline configuration. This issue typically arises when the pipeline configuration contains unsupported or invalid processor(s) that the plugin cannot execute. In such situations, the log output includes information about the issue. For example, the following error message indicates that the pipeline configuration contains an `inference` processor, which is not a supported processor type:

```
2025-01-21 12:29:13 [2025-01-21T20:29:13,986][ERROR][co.elastic.logstash.filters.elasticintegration.IngestPipelineFactory][main] failed to create ingest pipeline logs-my.custom-1.0.0 from pipeline configuration
2025-01-21 12:29:13 org.elasticsearch.ElasticsearchParseException: No processor type exists with name [inference]
2025-01-21 12:29:13     at org.elasticsearch.ingest.ConfigurationUtils.newConfigurationException(ConfigurationUtils.java:470) ~[logstash-filter-elastic_integration-0.1.16.jar:?]
2025-01-21 12:29:13     at org.elasticsearch.ingest.ConfigurationUtils.readProcessor(ConfigurationUtils.java:635)
```

For further guidance, we recommend exploring [Manage Elastic Agent Integrations](https://www.elastic.co/guide/en/fleet/current/integrations.html) and Elasticsearch [Handling pipeline failures](https://www.elastic.co/guide/en/elasticsearch/reference/current/ingest.html#handling-pipeline-failures) resources.
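To see which processors a resolved pipeline actually contains, you can retrieve its definition in the Kibana Dev Tools console and compare it against the supported processors table above (the pipeline name here is the one from the sample log and is illustrative):

```
GET _ingest/pipeline/logs-my.custom-1.0.0
```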
### Ingest Pipeline Execution Errors

These errors typically fall into two main categories, each requiring specific investigation and resolution steps:

**Logstash catches issues while running ingest pipelines**

When errors occur during the execution of ingest pipelines, Logstash attaches the `_ingest_pipeline_failure` tag to the event, making it easier to identify and investigate problematic events. Detailed logs are available in the Logstash logs for your investigation. The root cause may depend on the configuration, the environment, or the integration you are running.

For further guidance, we recommend exploring [Manage Elastic Agent Integrations](https://www.elastic.co/guide/en/fleet/current/integrations.html) and Elasticsearch [Handling pipeline failures](https://www.elastic.co/guide/en/elasticsearch/reference/current/ingest.html#handling-pipeline-failures) resources.

**Errors that occur internally in the ingest pipeline**

If an ingest pipeline is configured with `on_failure` conditions, failures during pipeline execution are handled internally by the ingest pipeline itself and are not visible to Logstash. This means that errors are captured and processed within the pipeline, rather than being passed to Logstash for logging or tagging.

To identify and analyze such cases, go to Kibana → Stack Management → Ingest Pipelines and find the ingest pipeline you are using. Click on it and navigate to the *Failure processors* section. If processors are configured, they may specify which field contains the failure details. For example, the pipeline might store error information in an `error.message` field or a custom field defined in the *Failure processors* configuration. Then go to the Kibana Dev Tools console, search the data (`GET {index-ingest-pipeline-is-writing}/_search`), and look for the fields mentioned in the failure processors. These fields contain error details that help you analyze the root cause.

For further guidance, we recommend exploring [Manage Elastic Agent Integrations](https://www.elastic.co/guide/en/fleet/current/integrations.html) and Elasticsearch [Handling pipeline failures](https://www.elastic.co/guide/en/elasticsearch/reference/current/ingest.html#handling-pipeline-failures) resources.

## Elastic Integration Filter Configuration Options

This plugin supports the following configuration options plus the [Common options](#plugins-filters-elastic_integration-common-options) described later.
| Setting | Input type | Required |
|---|---|---|
| [`api_key`](#plugins-filters-elastic_integration-api_key) | [password](/docs/reference/logstash/plugins/value-types#password) | No |
| [`cloud_auth`](#plugins-filters-elastic_integration-cloud_auth) | [password](/docs/reference/logstash/plugins/value-types#password) | No |
| [`cloud_id`](#plugins-filters-elastic_integration-cloud_id) | [string](/docs/reference/logstash/plugins/value-types#string) | No |
| [`geoip_database_directory`](#plugins-filters-elastic_integration-geoip_database_directory) | [path](/docs/reference/logstash/plugins/value-types#path) | No |
| [`hosts`](#plugins-filters-elastic_integration-hosts) | [array](/docs/reference/logstash/plugins/value-types#array) | No |
| [`password`](#plugins-filters-elastic_integration-password) | [password](/docs/reference/logstash/plugins/value-types#password) | No |
| [`pipeline_name`](#plugins-filters-elastic_integration-pipeline_name) | [string](/docs/reference/logstash/plugins/value-types#string) | No |
| [`proxy`](#plugins-filters-elastic_integration-proxy) | [uri](/docs/reference/logstash/plugins/value-types#uri) | No |
| [`ssl_certificate`](#plugins-filters-elastic_integration-ssl_certificate) | [path](/docs/reference/logstash/plugins/value-types#path) | No |
| [`ssl_certificate_authorities`](#plugins-filters-elastic_integration-ssl_certificate_authorities) | [array](/docs/reference/logstash/plugins/value-types#array) | No |
| [`ssl_enabled`](#plugins-filters-elastic_integration-ssl_enabled) | [boolean](/docs/reference/logstash/plugins/value-types#boolean) | No |
| [`ssl_key`](#plugins-filters-elastic_integration-ssl_key) | [path](/docs/reference/logstash/plugins/value-types#path) | No |
| [`ssl_keystore_password`](#plugins-filters-elastic_integration-ssl_keystore_password) | [password](/docs/reference/logstash/plugins/value-types#password) | No |
| [`ssl_keystore_path`](#plugins-filters-elastic_integration-ssl_keystore_path) | [path](/docs/reference/logstash/plugins/value-types#path) | No |
| [`ssl_key_passphrase`](#plugins-filters-elastic_integration-ssl_key_passphrase) | [password](/docs/reference/logstash/plugins/value-types#password) | No |
| [`ssl_truststore_path`](#plugins-filters-elastic_integration-ssl_truststore_path) | [path](/docs/reference/logstash/plugins/value-types#path) | No |
| [`ssl_truststore_password`](#plugins-filters-elastic_integration-ssl_truststore_password) | [password](/docs/reference/logstash/plugins/value-types#password) | No |
| [`ssl_verification_mode`](#plugins-filters-elastic_integration-ssl_verification_mode) | [string](/docs/reference/logstash/plugins/value-types#string), one of `["full", "certificate", "none"]` | No |
| [`username`](#plugins-filters-elastic_integration-username) | [string](/docs/reference/logstash/plugins/value-types#string) | No |

### `api_key`

- Value type is [password](/docs/reference/logstash/plugins/value-types#password)
- There is no default value for this setting.

The encoded form of an API key that is used to authenticate this plugin to Elasticsearch.
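For example, a minimal sketch using an API key, with the encoded key supplied through an environment variable rather than hardcoded (the host and variable name are illustrative):

```
filter {
  elastic_integration {
    hosts   => ["https://es.example.com:9200"]  # illustrative host
    api_key => "${ES_API_KEY}"                  # encoded API key, resolved from the environment
  }
}
```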
### `cloud_auth`

- Value type is [password](/docs/reference/logstash/plugins/value-types#password)
- There is no default value for this setting.

Cloud authentication string (`<username>:<password>` format) is an alternative for the `username`/`password` pair and can be obtained from the Elastic Cloud web console.

### `cloud_id`

- Value type is [string](/docs/reference/logstash/plugins/value-types#string)
- There is no default value for this setting.
- Cannot be combined with [`ssl_enabled => false`](#plugins-filters-elastic_integration-ssl_enabled).

Cloud Id, from the Elastic Cloud web console. When connecting with a Cloud Id, communication to Elasticsearch is secured with SSL.

For more details, check out the [Logstash-to-Cloud documentation](https://www.elastic.co/guide/en/logstash/current/connecting-to-cloud.html).

### `geoip_database_directory`

- Value type is [path](/docs/reference/logstash/plugins/value-types#path)
- There is no default value for this setting.

When running in a Logstash process that has Geoip Database Management enabled, integrations that use the Geoip Processor will use managed MaxMind databases by default. By using managed databases you accept and agree to the [MaxMind EULA](https://www.maxmind.com/en/geolite2/eula).

You may instead configure this plugin with the path to a local directory containing database files. This plugin will discover all regular files with the `.mmdb` suffix in the provided directory, and make each available by its file name to the GeoIp processors in integration pipelines. It expects the files it finds to be in the MaxMind DB format with one of the following database types:

- `AnonymousIp`
- `ASN`
- `City`
- `Country`
- `ConnectionType`
- `Domain`
- `Enterprise`
- `Isp`

Most integrations rely on databases being present named *exactly*:

- `GeoLite2-ASN.mmdb`,
- `GeoLite2-City.mmdb`, or
- `GeoLite2-Country.mmdb`

### `hosts`

- Value type is a list of [uri](/docs/reference/logstash/plugins/value-types#uri)s
- There is no default value for this setting.
- Constraints:
  - When any URL contains a protocol component, all URLs must have the same protocol as each other.
  - `https`-protocol hosts use HTTPS and cannot be combined with [`ssl_enabled => false`](#plugins-filters-elastic_integration-ssl_enabled).
  - `http`-protocol hosts use unsecured HTTP and cannot be combined with [`ssl_enabled => true`](#plugins-filters-elastic_integration-ssl_enabled).
  - When any URL omits a port component, the default `9200` is used.
  - When any URL contains a path component, all URLs must have the same path as each other.

A non-empty list of Elasticsearch hosts to connect to. Examples:

- `"127.0.0.1"`
- `["127.0.0.1:9200","127.0.0.2:9200"]`
- `["http://127.0.0.1"]`
- `["https://127.0.0.1:9200"]`
- `["https://127.0.0.1:9200/subpath"]` (if using a proxy on a subpath)

When connecting with a list of hosts, communication to Elasticsearch is secured with SSL unless configured otherwise.

**Disabling SSL is dangerous**

The security of this plugin relies on SSL to avoid leaking credentials and to avoid running illegitimate ingest pipeline definitions. There are two ways to disable SSL:

- Provide a list of `http`-protocol hosts
- Set [`ssl_enabled => false`](#plugins-filters-elastic_integration-ssl_enabled)

### `password`

- Value type is [password](/docs/reference/logstash/plugins/value-types#password)
- There is no default value for this setting.
- Required when request auth is configured with [`username`](#plugins-filters-elastic_integration-username)

A password when using HTTP Basic Authentication to connect to Elasticsearch.

### `pipeline_name`

- Value type is [string](/docs/reference/logstash/plugins/value-types#string)
- There is no default value for this setting.
- When present, the event’s initial pipeline will *not* be auto-detected from the event’s data stream fields.
- Value may be a [sprintf-style](https://www.elastic.co/guide/en/logstash/current/event-dependent-configuration.html#sprintf) template, as shown in the sketch below; if any referenced fields cannot be resolved the event will not be routed to an ingest pipeline.
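For example, a minimal sketch routing events to per-dataset pipelines with a sprintf-style template (the field path and pipeline naming scheme are illustrative):

```
filter {
  elastic_integration {
    cloud_id      => "YOUR_CLOUD_ID_HERE"
    cloud_auth    => "YOUR_CLOUD_AUTH_HERE"
    # Resolve the pipeline per event; events missing [event][dataset]
    # will not be routed to an ingest pipeline.
    pipeline_name => "logs-%{[event][dataset]}-default"
  }
}
```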
### `proxy`

- Value type is [uri](/docs/reference/logstash/plugins/value-types#uri)
- There is no default value for this setting.

Address of the HTTP forward proxy used to connect to the Elasticsearch cluster. An empty string is treated as if the proxy was not set. Environment variables may be used to set this value, e.g. `proxy => '${LS_PROXY:}'`.

### `ssl_certificate`

- Value type is [path](/docs/reference/logstash/plugins/value-types#path)
- There is no default value for this setting.
- When present, [`ssl_key`](#plugins-filters-elastic_integration-ssl_key) and [`ssl_key_passphrase`](#plugins-filters-elastic_integration-ssl_key_passphrase) are also required.
- Cannot be combined with configurations that disable SSL.

Path to a PEM-encoded certificate or certificate chain with which to identify this plugin to Elasticsearch.

### `ssl_certificate_authorities`

- Value type is a list of [path](/docs/reference/logstash/plugins/value-types#path)s
- There is no default value for this setting.
- Cannot be combined with configurations that disable SSL.
- Cannot be combined with [`ssl_verification_mode => none`](#plugins-filters-elastic_integration-ssl_verification_mode).

One or more PEM-formatted files defining certificate authorities. This setting can be used to *override* the system trust store for verifying the SSL certificate presented by Elasticsearch.

### `ssl_enabled`

- Value type is [boolean](/docs/reference/logstash/plugins/value-types#boolean)
- There is no default value for this setting.

Secure SSL communication to Elasticsearch is enabled unless:

- it is explicitly disabled with `ssl_enabled => false`; OR
- it is implicitly disabled by providing `http`-protocol [`hosts`](#plugins-filters-elastic_integration-hosts).

Specifying `ssl_enabled => true` can be a helpful redundant safeguard to ensure this plugin cannot be configured to use non-SSL communication.

### `ssl_key`

- Value type is [path](/docs/reference/logstash/plugins/value-types#path)
- There is no default value for this setting.
- Required when connection identity is configured with [`ssl_certificate`](#plugins-filters-elastic_integration-ssl_certificate)
- Cannot be combined with configurations that disable SSL.

A path to a PKCS8-formatted SSL certificate key.

### `ssl_keystore_password`

- Value type is [password](/docs/reference/logstash/plugins/value-types#password)
- There is no default value for this setting.
- Required when connection identity is configured with [`ssl_keystore_path`](#plugins-filters-elastic_integration-ssl_keystore_path)
- Cannot be combined with configurations that disable SSL.

Password for the [`ssl_keystore_path`](#plugins-filters-elastic_integration-ssl_keystore_path).

### `ssl_keystore_path`

- Value type is [path](/docs/reference/logstash/plugins/value-types#path)
- There is no default value for this setting.
- When present, [`ssl_keystore_password`](#plugins-filters-elastic_integration-ssl_keystore_password) is also required.
- Cannot be combined with configurations that disable SSL.

A path to a JKS- or PKCS12-formatted keystore with which to identify this plugin to Elasticsearch.
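As a hedged sketch, presenting a client certificate with a PKCS8 key pair (the host, file paths, and passphrase variable are illustrative):

```
filter {
  elastic_integration {
    hosts => ["https://es.internal.example:9200"]         # illustrative host
    ssl_certificate    => "/etc/logstash/tls/client.crt"  # illustrative path
    ssl_key            => "/etc/logstash/tls/client.pk8"  # PKCS8-formatted key
    ssl_key_passphrase => "${SSL_KEY_PASSPHRASE}"         # resolved from the environment or keystore
  }
}
```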
### `ssl_key_passphrase`

- Value type is [password](/docs/reference/logstash/plugins/value-types#password)
- There is no default value for this setting.
- Required when connection identity is configured with [`ssl_certificate`](#plugins-filters-elastic_integration-ssl_certificate)
- Cannot be combined with configurations that disable SSL.

A password or passphrase for the [`ssl_key`](#plugins-filters-elastic_integration-ssl_key).

### `ssl_truststore_path`

- Value type is [path](/docs/reference/logstash/plugins/value-types#path)
- There is no default value for this setting.
- When present, [`ssl_truststore_password`](#plugins-filters-elastic_integration-ssl_truststore_password) is required.
- Cannot be combined with configurations that disable SSL.
- Cannot be combined with [`ssl_verification_mode => none`](#plugins-filters-elastic_integration-ssl_verification_mode).

A path to a JKS- or PKCS12-formatted keystore where trusted certificates are located. This setting can be used to *override* the system trust store for verifying the SSL certificate presented by Elasticsearch.

### `ssl_truststore_password`

- Value type is [password](/docs/reference/logstash/plugins/value-types#password)
- There is no default value for this setting.
- Required when connection trust is configured with [`ssl_truststore_path`](#plugins-filters-elastic_integration-ssl_truststore_path)
- Cannot be combined with configurations that disable SSL.

Password for the [`ssl_truststore_path`](#plugins-filters-elastic_integration-ssl_truststore_path).

### `ssl_verification_mode`

- Value type is [string](/docs/reference/logstash/plugins/value-types#string)
- There is no default value for this setting.
- Cannot be combined with configurations that disable SSL.

Level of verification of the certificate provided by Elasticsearch. SSL certificates presented by Elasticsearch are fully-validated by default.

Available modes:

- `none`: performs no validation, implicitly trusting any server that this plugin connects to (insecure)
- `certificate`: validates that the server-provided certificate is signed by a trusted certificate authority and that the server can prove possession of its associated private key (less secure)
- `full` (default): performs the same validations as `certificate` and also verifies that the provided certificate has an identity claim matching the server we are attempting to connect to (most secure)

### `username`

- Value type is [string](/docs/reference/logstash/plugins/value-types#string)
- There is no default value for this setting.
- When present, [`password`](#plugins-filters-elastic_integration-password) is also required.

A user name when using HTTP Basic Authentication to connect to Elasticsearch.
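For example, a minimal sketch using HTTP Basic Authentication, with the password supplied from the Logstash keystore or environment (the host and user name are illustrative):

```
filter {
  elastic_integration {
    hosts    => ["https://es.example.com:9200"]  # illustrative host
    username => "logstash_integration_user"      # illustrative user name
    password => "${ES_PASSWORD}"                 # resolved from the environment or keystore
  }
}
```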
## Common options

These configuration options are supported by all filter plugins:

| Setting | Input type | Required |
|---|---|---|
| [`add_field`](#plugins-filters-elastic_integration-add_field) | [hash](/docs/reference/logstash/plugins/value-types#hash) | No |
| [`add_tag`](#plugins-filters-elastic_integration-add_tag) | [array](/docs/reference/logstash/plugins/value-types#array) | No |
| [`enable_metric`](#plugins-filters-elastic_integration-enable_metric) | [boolean](/docs/reference/logstash/plugins/value-types#boolean) | No |
| [`id`](#plugins-filters-elastic_integration-id) | [string](/docs/reference/logstash/plugins/value-types#string) | No |
| [`periodic_flush`](#plugins-filters-elastic_integration-periodic_flush) | [boolean](/docs/reference/logstash/plugins/value-types#boolean) | No |
| [`remove_field`](#plugins-filters-elastic_integration-remove_field) | [array](/docs/reference/logstash/plugins/value-types#array) | No |
| [`remove_tag`](#plugins-filters-elastic_integration-remove_tag) | [array](/docs/reference/logstash/plugins/value-types#array) | No |

### `add_field`

- Value type is [hash](/docs/reference/logstash/plugins/value-types#hash)
- Default value is `{}`

If this filter is successful, add any arbitrary fields to this event. Field names can be dynamic and include parts of the event using the `%{field}` syntax.

Example:

```
filter {
  elastic_integration {
    add_field => { "foo_%{somefield}" => "Hello world, from %{host}" }
  }
}
```

```
# You can also add multiple fields at once:
filter {
  elastic_integration {
    add_field => {
      "foo_%{somefield}" => "Hello world, from %{host}"
      "new_field" => "new_static_value"
    }
  }
}
```

If the event has field `"somefield" == "hello"` this filter, on success, would add field `foo_hello`, with the value above and the `%{host}` piece replaced with that value from the event. The second example would also add a hardcoded field.

### `add_tag`

- Value type is [array](/docs/reference/logstash/plugins/value-types#array)
- Default value is `[]`

If this filter is successful, add arbitrary tags to the event. Tags can be dynamic and include parts of the event using the `%{field}` syntax.

Example:

```
filter {
  elastic_integration {
    add_tag => [ "foo_%{somefield}" ]
  }
}
```

```
# You can also add multiple tags at once:
filter {
  elastic_integration {
    add_tag => [ "foo_%{somefield}", "taggedy_tag"]
  }
}
```

If the event has field `"somefield" == "hello"` this filter, on success, would add a tag `foo_hello` (and the second example would of course add a `taggedy_tag` tag).

### `enable_metric`

- Value type is [boolean](/docs/reference/logstash/plugins/value-types#boolean)
- Default value is `true`

Disable or enable metric logging for this specific plugin instance. By default we record all the metrics we can, but you can disable metrics collection for a specific plugin.

### `id`

- Value type is [string](/docs/reference/logstash/plugins/value-types#string)
- There is no default value for this setting.

Add a unique `ID` to the plugin configuration. If no ID is specified, Logstash will generate one. It is strongly recommended to set this ID in your configuration. This is particularly useful when you have two or more plugins of the same type, for example, if you have 2 elastic_integration filters. Adding a named ID in this case will help in monitoring Logstash when using the monitoring APIs.
```
filter {
  elastic_integration {
    id => "ABC"
  }
}
```

### `periodic_flush`

- Value type is [boolean](/docs/reference/logstash/plugins/value-types#boolean)
- Default value is `false`

Call the filter flush method at regular interval. Optional.

### `remove_field`

- Value type is [array](/docs/reference/logstash/plugins/value-types#array)
- Default value is `[]`

If this filter is successful, remove arbitrary fields from this event. Field names can be dynamic and include parts of the event using the `%{field}` syntax.

Example:

```
filter {
  elastic_integration {
    remove_field => [ "foo_%{somefield}" ]
  }
}
```

```
# You can also remove multiple fields at once:
filter {
  elastic_integration {
    remove_field => [ "foo_%{somefield}", "my_extraneous_field" ]
  }
}
```

If the event has field `"somefield" == "hello"` this filter, on success, would remove the field with name `foo_hello` if it is present. The second example would remove an additional, non-dynamic field.

### `remove_tag`

- Value type is [array](/docs/reference/logstash/plugins/value-types#array)
- Default value is `[]`

If this filter is successful, remove arbitrary tags from the event. Tags can be dynamic and include parts of the event using the `%{field}` syntax.

Example:

```
filter {
  elastic_integration {
    remove_tag => [ "foo_%{somefield}" ]
  }
}
```

```
# You can also remove multiple tags at once:
filter {
  elastic_integration {
    remove_tag => [ "foo_%{somefield}", "sad_unwanted_tag"]
  }
}
```

If the event has field `"somefield" == "hello"` this filter, on success, would remove the tag `foo_hello` if it is present. The second example would remove a sad, unwanted tag as well.