docs/shipping/Code/dotnet-traces-kafka.md (4 changes: 2 additions & 2 deletions)
````diff
@@ -74,7 +74,7 @@ builder.Services.AddOpenTelemetry()
 var app = builder.Build();
 ```

-You can configure the otlp endpoint and protocol using envurinment variables:
+You can configure the otlp endpoint and protocol using environment variables:
 - OTEL_EXPORTER_OTLP_ENDPOINT (The destination of your otel collector 'http://localhost:4318`)
 - OTEL_EXPORTER_OTLP_PROTOCOL (`grpc` or `http/protubuf`)

@@ -180,7 +180,7 @@ It will deploy a small pod that extracts your cluster domain name from your Kube

 ### Configure your .NET Kafka application to send spans to `logzio-monitoring`

-You can configure the otlp endpoint and protocol using envurinment variables:
+You can configure the otlp endpoint and protocol using environment variables:
 - OTEL_EXPORTER_OTLP_ENDPOINT (The destination of your otel collector `<<logzio-monitoring-service-dns>>`)
 - OTEL_EXPORTER_OTLP_PROTOCOL (`grpc` or `http/protubuf`)
````
@@ -10,6 +10,173 @@ slug: /distributed-tracing/troubleshooting/tracing-troubleshooting/

Setting up a Distributed Tracing account might require some additional help. In this doc, we'll walk through troubleshooting some common issues.

## Post-install troubleshooting

There are a few common issues that may prevent data from reaching Logz.io. Use the following checklist to validate your setup:

**1. No traces showing in the UI**

Even if installation succeeded, you might not see any trace data. To troubleshoot:

* Confirm that your application is sending traces to the correct collector endpoint.

* Check the collector/agent logs for connection errors or dropped spans (see the sketch after this list).

* Ensure that the correct token is configured for the tracing account.
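
For a Kubernetes-based installation, a quick pass over this checklist might look like the following sketch. The namespace, deployment, and workload names are placeholders, not necessarily what your chart created.

```shell
# Placeholder names; adjust the namespace and workload names to your deployment.
kubectl get pods -n monitoring                                   # collector pods should be Running/Ready
kubectl logs -n monitoring deploy/otel-collector --tail=200 \
  | grep -iE "error|refused|unauthorized|drop"                   # look for export or auth failures
# Confirm the application was actually given the collector endpoint you expect:
kubectl exec -n default deploy/my-app -- printenv OTEL_EXPORTER_OTLP_ENDPOINT
```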

**2. Missing service or operation names**

If the UI doesn’t show services or operations in the dropdown menus:

* Restart the collector to re-send metadata (especially in OTEL setups).
  > **Contributor comment (on lines +29 to +31):** looks like there's a header below in the page "Distributed Tracing is not showing data in Service/Operations dropdown lists" that is also referring to the same issue, is the intention for this to act as an overview section?

* Verify that your instrumentation sets the `service.name` resource attribute (see the sketch after this list).

* Search for recent trace logs in OpenSearch Dashboards to confirm data is reaching Logz.io.
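
If the attribute is missing, one common way to set it is through the standard OpenTelemetry SDK environment variables, as in this sketch (the service name and endpoint values are placeholders):

```shell
# OTEL_SERVICE_NAME takes precedence over a service.name entry in OTEL_RESOURCE_ATTRIBUTES.
export OTEL_SERVICE_NAME="checkout-service"                       # placeholder service name
export OTEL_RESOURCE_ATTRIBUTES="deployment.environment=staging"  # optional extra resource attributes
export OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4317"        # or :4318 for otlp/http
```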

**3. Metrics dashboards showing "No data"**

If you’re using Service Performance Monitoring and dashboards show no results:

* Make sure your metrics account is configured to receive tracing-related metrics (like `calls_total`).

* Confirm that the collector is configured to export metrics as well as traces (a sketch follows this list).
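
As a rough illustration only, and not the Logz.io chart's actual configuration, span-derived metrics such as `calls_total` are typically produced by a collector pipeline along these lines. The `debug` exporter here is a stand-in for whatever exporters actually ship your traces and metrics.

```yaml
receivers:
  otlp:
    protocols:
      grpc:
      http:

connectors:
  spanmetrics: {}        # derives call/latency metrics from incoming spans

exporters:
  debug: {}              # stand-in; replace with your real traces/metrics exporters

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [spanmetrics, debug]   # feed spans to the connector and to the traces exporter
    metrics:
      receivers: [spanmetrics]          # the connector acts as the receiver of the metrics pipeline
      exporters: [debug]
```

If no metrics pipeline like this exists in your collector configuration, dashboards that rely on metrics such as `calls_total` will stay empty even when traces arrive correctly.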

**4. Use debug mode to test setup**

If you’re still unsure whether trace data is being collected:

* Temporarily enable a debug exporter in your collector config to log incoming spans locally (see **Debugging with the Collector** below for a sample configuration).

* Run a trace from your application and check the collector output.

This can help confirm that data is generated by your application and received by the collector before it is forwarded to Logz.io.

**5. Missing data after installing the chart**

If you've installed the Logz.io Helm chart for tracing but no data appears in the UI:

* Check that the collector pods are running and healthy.

* Ensure your instrumented applications are correctly configured to send traces to the OTLP endpoint exposed by the chart (e.g., `otel-collector.monitoring.svc.cluster.local:4317`); see the sketch after this list.
  > **Contributor comment:**
  > 1. in logzio-monitoring versions +7.x.x, they need to use `logzio-apm-collector.monitoring.svc.cluster.local:<port>`
  > 2. the port can be either 4317 or 4318, depending if the output protocol of the traces that their instrumentation generates is otlp/grpc or otlp/http

* Verify that your token and endpoint are set correctly in your app’s environment variables.

* Use the debug exporter to confirm data is reaching the collector.
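
A quick in-cluster check might look like this sketch. The service and namespace names are assumptions and, per the comment above, vary with the chart version.

```shell
# List the collector services and the OTLP ports they actually expose.
kubectl get svc -n monitoring | grep -i collector
# From an application pod, confirm which endpoint the app was given.
kubectl exec -n <APP_NAMESPACE> <APP_POD> -- printenv OTEL_EXPORTER_OTLP_ENDPOINT
```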

## Routing data to the collector

When Tracing data doesn't appear in your account, one of the most common root causes is misconfigured instrumentation or data not reaching the collector. Before diving into deeper debugging steps, it’s important to verify that your application is correctly set up to export tracing data.

Your instrumented application needs to send telemetry data (traces, metrics, or logs) to the OpenTelemetry Collector using the OTLP protocol.

Set environment variables to configure the destination collector and other tracing options.

If your collector runs on the same host or as a sidecar, set the OTLP endpoint to localhost:

```shell
ENV OTEL_TRACES_SAMPLER=always_on
ENV OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4317"
ENV OTEL_RESOURCE_ATTRIBUTES="service.name=<SERVICE_NAME>"
```

> **Contributor comment** (on the `OTEL_EXPORTER_OTLP_ENDPOINT` line): the port may be 4318 too depending on the instrumentation

If you're running your app on ECS, define the environment variables in your task definition:

```json
"environment": [
  {
    "name": "OTEL_TRACES_SAMPLER",
    "value": "always_on"
  },
  {
    "name": "OTEL_EXPORTER_OTLP_ENDPOINT",
    "value": "http://localhost:4317"
  },
  {
    "name": "OTEL_RESOURCE_ATTRIBUTES",
    "value": "service.name=<SERVICE_NAME>"
  }
]
```

Replace `<SERVICE_NAME>` with the name you want to appear in the Tracing UI.


If the collector runs in another service or namespace, set the OTLP endpoint to its DNS or internal IP:

```shell
ENV OTEL_EXPORTER_OTLP_ENDPOINT="http://otel-collector.monitoring.svc.cluster.local:4317"
```

Make sure the collector is reachable and exposes the OTLP receiver on that port.
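
One way to sanity-check reachability from inside the cluster is the sketch below. It assumes the collector also has the OTLP/HTTP receiver enabled on port 4318; an empty POST to the traces endpoint should return HTTP 200 if the receiver is up. The service DNS name is the example used above and may differ in your setup.

```shell
# Run a throwaway curl pod and POST an empty request to the OTLP/HTTP traces endpoint.
kubectl run otlp-check --rm -it --restart=Never --image=curlimages/curl -- \
  curl -s -o /dev/null -w "%{http_code}\n" \
  -X POST -H "Content-Type: application/json" -d '{}' \
  http://otel-collector.monitoring.svc.cluster.local:4318/v1/traces
```

A `200` indicates the receiver is reachable; connection refused or timeouts point to a service, port, or network policy problem rather than an instrumentation problem.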


## Debug mode for instrumentation and collector

To enable debug mode for the OpenTelemetry Operator, add the `OTEL_LOG_LEVEL` environment variable with the value `debug`.

### Enable debug mode for a single pod
To enable debug mode for a specific pod, add the following environment variable directly to its spec:

```yaml
spec:
  template:
    spec:
      containers:
        - name: "<CONTAINER_NAME>"
          env:
            - name: OTEL_LOG_LEVEL
              value: "debug"
```
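
Equivalently, if you prefer not to edit the manifest, the same change can be made imperatively. This is a sketch; the deployment and container names are placeholders:

```shell
# Triggers a rolling restart of the deployment with the new variable set.
kubectl set env deployment/<DEPLOYMENT_NAME> -c <CONTAINER_NAME> OTEL_LOG_LEVEL=debug
# Remove the variable again once you're done debugging.
kubectl set env deployment/<DEPLOYMENT_NAME> -c <CONTAINER_NAME> OTEL_LOG_LEVEL-
```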

### Enable debug mode for all instrumented pods
> **Contributor comment:** this is relevant specifically only if they are using our logzio-monitoring helm chart and they enabled the opentelemetry operator

To apply debug mode to all pods instrumented by the OpenTelemetry Operator, update your Logz.io Helm chart with the following configuration, replacing `<APP_LANGUAGE>` with your application's programming language:

```yaml
instrumentation:
  <APP_LANGUAGE>:
    extraEnv:
      - name: OTEL_LOG_LEVEL
        value: "debug"
```

:::tip
`<APP_LANGUAGE>` can be one of `dotnet`, `java`, `nodejs` or `python`.
:::
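
To roll the change out to an existing release, an upgrade along these lines may work; the release name, chart reference, namespace, and values file are assumptions and should match the ones used in your original installation:

```shell
# Reapply the release with the extra instrumentation values merged in.
helm upgrade --reuse-values -f debug-values.yaml \
  logzio-monitoring logzio-helm/logzio-monitoring -n monitoring
```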

:::caution
Enabling debug mode generates highly verbose logs. It is recommended to apply it to individual pods rather than to all pods.
:::

### Debugging with the Collector

To troubleshoot data routing issues in your OpenTelemetry Collector, you can use the built-in debug exporter. This exporter prints telemetry data directly to the console or logs, making it easy to validate what the collector is receiving.

To enable it, add this to your Collector config:

```yaml
exporters:
  debug:
    verbosity: detailed # or: basic, normal
    sampling_initial: 5
    sampling_thereafter: 200
```

Then attach it to the relevant pipeline:

```yaml
service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [debug]
```

With `verbosity: detailed`, you'll get full trace/span information, which is useful for checking attributes like `service.name`, the trace ID, and `span.kind`, and for verifying routing logic.

This debug output is for local validation only and not intended for production.

For more, see [OpenTelemetry Collector Debug Exporter](https://github.com/open-telemetry/opentelemetry-collector/blob/v0.111.0/exporter/debugexporter/README.md).


## Distributed Tracing is not showing data in Service/Operations dropdown lists