
feat(opentelemetry source): support trace ingestion #19728

Merged
27 commits merged into vectordotdev:master on Jun 4, 2024

Conversation

caibirdme
Contributor

This PR adds support for ingesting trace data in the OpenTelemetry protocol (OTLP).

Here's the example config:

sources:
  foo:
    type: opentelemetry
    grpc:
      address: 0.0.0.0:4317
    http:
      address: 0.0.0.0:4318
      keepalive:
        max_connection_age_jitter_factor: 0.1
        max_connection_age_secs: 300
sinks:
  bar:
    type: console
    inputs: [foo.traces]
    encoding:
      codec: json
  baz:
    type: console
    inputs: [foo.logs]
    encoding:
      codec: logfmt

We can use telemetrygen to test it:

  1. install: go install github.com/open-telemetry/opentelemetry-collector-contrib/cmd/telemetrygen@latest
  2. produce some demo traces: telemetrygen traces --otlp-insecure --duration 30s
  3. produce some demo logs: telemetrygen logs --duration 30s --otlp-insecure

@caibirdme caibirdme requested a review from a team January 27, 2024 03:05
@github-actions github-actions bot added the domain: sources Anything related to the Vector's sources label Jan 27, 2024
@gcuberes

@caibirdme, by any chance are you also working on an opentelemetry sink for traces?

@caibirdme
Contributor Author

> @caibirdme, by any chance are you also working on an opentelemetry sink for traces?

@gcuberes In technical terms, converting data to the OpenTelemetry (OTel) format should not be difficult: it only involves serializing the data into the specified Protocol Buffers (PB) format and sending it via gRPC or HTTP to a fixed endpoint. However, there are some challenges around how this would actually be used.

Firstly, users can modify the received trace data with VRL, which makes it difficult to ensure that the final data still conforms to the OTel protocol specification (some fields may be missing or have incorrect types). Currently, I have not found any way to constrain the schema of sink data.
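For example, here's a purely illustrative config (not part of this PR; the span field names are hypothetical) showing how easily a remap transform can break schema conformance:

transforms:
  mangle_spans:
    type: remap
    inputs: [foo.traces]
    source: |
      # nothing enforces the OTLP span schema here:
      .trace_id = 123            # wrong type: an integer instead of a hex string
      del(.start_time_unix_nano) # a field an OTel sink would require is now gone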

Secondly, if we want to convert arbitrary logs into OTel traces, users would need to configure every field required by OTel traces, which is cumbersome to use.

If the goal is simply to receive OpenTelemetry traces and then forward them unchanged to another location using the OTel protocol, similar to a proxy, that may be acceptable for some use cases, but I don't know if that is what people want.

In summary, while technically feasible, there are some challenges in terms of ensuring data conformance to the OTel protocol and ease of use for the end-users.

@caibirdme caibirdme requested a review from a team as a code owner March 18, 2024 08:19
@caibirdme
Contributor Author

Could anybody take a look at this PR?

@StephenWakely
Contributor

Hi, sorry this one slipped my attention. Vector has pretty poor support for traces; the only sink that can send out traces is the Datadog sink, and even then the format of those traces needs to be pretty precise. Anything else would be fairly error prone. I'm curious what your use case for this source would be?

@caibirdme
Contributor Author

We're using ClickHouse to store the logs and traces, so the clickhouse sink is enough for us. The path looks like: client -> send OTLP trace/log -> vector -> vrl -> clickhouse. We also want to try Loki in the future, and the loki sink is already supported.
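For reference, here's a rough sketch of that pipeline as a Vector config (the endpoint, table name, and VRL field path are placeholders, and the exact trace event layout depends on what this source emits):

sources:
  otel:
    type: opentelemetry
    grpc:
      address: 0.0.0.0:4317
    http:
      address: 0.0.0.0:4318
transforms:
  shape:
    type: remap
    inputs: [otel.traces, otel.logs]
    source: |
      # reshape events to match our ClickHouse table; the path below is illustrative
      .service = .resources."service.name"
sinks:
  ch:
    type: clickhouse
    inputs: [shape]
    endpoint: http://clickhouse:8123
    table: otel_events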

@jszwedko
Member

> We're using ClickHouse to store the logs and traces, so the clickhouse sink is enough for us. The path looks like: client -> send OTLP trace/log -> vector -> vrl -> clickhouse. We also want to try Loki in the future, and the loki sink is already supported.

Gotcha. So you are converting the trace to a log event to send it to one of those sinks? I'm ok with us adding this, it's just that the user experience will be pretty rough so I'd want to label it as experimental in the docs. Do you mind making the docs updates by adding a new "how it works" section here:

how_it_works: {
	tls: {
		title: "Transport Layer Security (TLS)"
		body: """
			Vector uses [OpenSSL](\(urls.openssl)) for TLS protocols due to OpenSSL's maturity. You can
			enable and adjust TLS behavior via the `grpc.tls.*` and `http.tls.*` options and/or via an
			[OpenSSL configuration file](\(urls.openssl_conf)). The file location defaults to
			`/usr/local/ssl/openssl.cnf` or can be specified with the `OPENSSL_CONF` environment variable.
			"""
	}
}
?

@caibirdme
Contributor Author

> Gotcha. So you are converting the trace to a log event to send it to one of those sinks? I'm ok with us adding this, it's just that the user experience will be pretty rough so I'd want to label it as experimental in the docs. Do you mind making the docs updates by adding a new "how it works" section here:

Sure, but I don't know exactly what should be put into "how it works". Do you mean how the source ingests trace data and converts it into log events?

@jszwedko
Member

> Gotcha. So you are converting the trace to a log event to send it to one of those sinks? I'm ok with us adding this, it's just that the user experience will be pretty rough so I'd want to label it as experimental in the docs. Do you mind making the docs updates by adding a new "how it works" section here:

> Sure, but I don't know exactly what should be put into "how it works". Do you mean how the source ingests trace data and converts it into log events?

Right, I would just add something about it being experimental, having limited processing functionality available, and noting that the internal data format may change in the future.
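Something along these lines, perhaps (just a sketch of the requested section; the entry name and wording are placeholders, following the pattern of the existing how_it_works entries):

how_it_works: {
	tracing: {
		title: "Ingesting traces (experimental)"
		body: """
			Trace support in this source is experimental: traces are converted to Vector
			events with limited processing functionality available, and the internal data
			format may change in future releases.
			"""
	}
}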

Contributor

@StephenWakely StephenWakely left a comment


Nice. The code is looking pretty solid, just a fairly minor suggestion: can you add an integration test that covers this here?

This will also need a changelog entry. Can you add one here.

.gitignore: review comment (outdated, resolved)
src/sources/util/grpc/mod.rs: review comment (outdated, resolved)
@caibirdme caibirdme requested review from a team as code owners April 19, 2024 10:53
@github-actions github-actions bot added the domain: external docs Anything related to Vector's external, public documentation label Apr 19, 2024

Regression Detector Results

Run ID: 71d27108-0499-4b38-b646-71f9e463d748
Baseline: 234b126
Comparison: 4a31e4e
Total CPUs: 7

Performance changes are noted in the perf column of each table:

  • ✅ = significantly better comparison variant performance
  • ❌ = significantly worse comparison variant performance
  • ➖ = no significant change in performance

Significant changes in experiment optimization goals

Confidence level: 90.00%
Effect size tolerance: |Δ mean %| ≥ 5.00%

perf experiment goal Δ mean % Δ mean % CI
syslog_log2metric_tag_cardinality_limit_blackhole ingress throughput -6.06 [-6.18, -5.94]

Fine details of change detection per experiment

perf experiment goal Δ mean % Δ mean % CI
otlp_http_to_blackhole ingress throughput +4.49 [+4.38, +4.61]
otlp_grpc_to_blackhole ingress throughput +2.02 [+1.92, +2.11]
datadog_agent_remap_blackhole ingress throughput +1.61 [+1.52, +1.71]
socket_to_socket_blackhole ingress throughput +0.76 [+0.70, +0.83]
http_text_to_http_json ingress throughput +0.51 [+0.39, +0.63]
fluent_elasticsearch ingress throughput +0.30 [-0.18, +0.78]
http_to_s3 ingress throughput +0.21 [-0.07, +0.49]
datadog_agent_remap_datadog_logs_acks ingress throughput +0.19 [+0.11, +0.27]
http_to_http_acks ingress throughput +0.18 [-1.18, +1.54]
http_to_http_noack ingress throughput +0.10 [+0.02, +0.18]
http_to_http_json ingress throughput +0.03 [-0.05, +0.10]
splunk_hec_indexer_ack_blackhole ingress throughput -0.00 [-0.15, +0.14]
splunk_hec_to_splunk_hec_logs_acks ingress throughput -0.01 [-0.14, +0.13]
splunk_hec_to_splunk_hec_logs_noack ingress throughput -0.06 [-0.17, +0.05]
enterprise_http_to_http ingress throughput -0.07 [-0.15, +0.02]
http_elasticsearch ingress throughput -0.53 [-0.60, -0.45]
syslog_log2metric_humio_metrics ingress throughput -0.73 [-0.86, -0.60]
file_to_blackhole egress throughput -0.95 [-3.44, +1.55]
datadog_agent_remap_blackhole_acks ingress throughput -1.22 [-1.30, -1.13]
datadog_agent_remap_datadog_logs ingress throughput -1.23 [-1.34, -1.12]
syslog_regex_logs2metric_ddmetrics ingress throughput -1.41 [-1.50, -1.31]
splunk_hec_route_s3 ingress throughput -1.46 [-1.92, -1.00]
syslog_humio_logs ingress throughput -2.62 [-2.74, -2.50]
syslog_log2metric_splunk_hec_metrics ingress throughput -4.56 [-4.70, -4.41]
syslog_loki ingress throughput -4.80 [-4.85, -4.75]
syslog_splunk_hec_logs ingress throughput -4.96 [-5.06, -4.86]
syslog_log2metric_tag_cardinality_limit_blackhole ingress throughput -6.06 [-6.18, -5.94]

Explanation

A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".

For each experiment, we decide whether a change in performance is a "regression" -- a change worth investigating further -- if all of the following criteria are true:

  1. Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.

  2. Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that if our statistical model is accurate, there is at least a 90.00% chance there is a difference in performance between baseline and comparison variants.

  3. Its configuration does not mark it "erratic".

@github-merge-queue github-merge-queue bot removed this pull request from the merge queue due to failed status checks May 30, 2024
@StephenWakely StephenWakely added this pull request to the merge queue May 30, 2024

Regression Detector Results

Run ID: 5681deea-26f8-4dfa-9ba5-bc7cdf745941
Baseline: 4a4fc2e
Comparison: eb67ee0
Total CPUs: 7

Performance changes are noted in the perf column of each table:

  • ✅ = significantly better comparison variant performance
  • ❌ = significantly worse comparison variant performance
  • ➖ = no significant change in performance

Significant changes in experiment optimization goals

Confidence level: 90.00%
Effect size tolerance: |Δ mean %| ≥ 5.00%

perf experiment goal Δ mean % Δ mean % CI
syslog_splunk_hec_logs ingress throughput -5.22 [-5.28, -5.16]
syslog_log2metric_splunk_hec_metrics ingress throughput -5.43 [-5.59, -5.27]
syslog_loki ingress throughput -7.18 [-7.25, -7.11]

Fine details of change detection per experiment

perf experiment goal Δ mean % Δ mean % CI
otlp_http_to_blackhole ingress throughput +4.67 [+4.54, +4.81]
splunk_hec_route_s3 ingress throughput +2.70 [+2.22, +3.18]
file_to_blackhole egress throughput +2.49 [+0.03, +4.96]
otlp_grpc_to_blackhole ingress throughput +2.47 [+2.38, +2.56]
http_text_to_http_json ingress throughput +2.44 [+2.32, +2.57]
syslog_log2metric_humio_metrics ingress throughput +2.00 [+1.84, +2.17]
socket_to_socket_blackhole ingress throughput +1.05 [+0.96, +1.13]
http_to_http_acks ingress throughput +0.96 [-0.40, +2.33]
datadog_agent_remap_datadog_logs ingress throughput +0.80 [+0.68, +0.92]
fluent_elasticsearch ingress throughput +0.58 [+0.09, +1.06]
http_to_s3 ingress throughput +0.34 [+0.06, +0.62]
datadog_agent_remap_blackhole_acks ingress throughput +0.10 [+0.00, +0.19]
http_to_http_noack ingress throughput +0.09 [-0.01, +0.19]
http_to_http_json ingress throughput +0.04 [-0.04, +0.12]
splunk_hec_to_splunk_hec_logs_acks ingress throughput -0.00 [-0.14, +0.14]
splunk_hec_indexer_ack_blackhole ingress throughput -0.01 [-0.15, +0.14]
datadog_agent_remap_datadog_logs_acks ingress throughput -0.03 [-0.11, +0.06]
enterprise_http_to_http ingress throughput -0.03 [-0.13, +0.06]
splunk_hec_to_splunk_hec_logs_noack ingress throughput -0.05 [-0.16, +0.07]
http_elasticsearch ingress throughput -0.15 [-0.22, -0.08]
syslog_regex_logs2metric_ddmetrics ingress throughput -0.78 [-0.94, -0.62]
datadog_agent_remap_blackhole ingress throughput -1.66 [-1.74, -1.57]
syslog_humio_logs ingress throughput -1.68 [-1.80, -1.56]
syslog_log2metric_tag_cardinality_limit_blackhole ingress throughput -2.83 [-2.96, -2.71]
syslog_splunk_hec_logs ingress throughput -5.22 [-5.28, -5.16]
syslog_log2metric_splunk_hec_metrics ingress throughput -5.43 [-5.59, -5.27]
syslog_loki ingress throughput -7.18 [-7.25, -7.11]

Explanation

A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".

For each experiment, we decide whether a change in performance is a "regression" -- a change worth investigating further -- if all of the following criteria are true:

  1. Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.

  2. Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that if our statistical model is accurate, there is at least a 90.00% chance there is a difference in performance between baseline and comparison variants.

  3. Its configuration does not mark it "erratic".

@github-merge-queue github-merge-queue bot removed this pull request from the merge queue due to failed status checks May 30, 2024
@StephenWakely StephenWakely added this pull request to the merge queue Jun 3, 2024
@github-merge-queue github-merge-queue bot removed this pull request from the merge queue due to failed status checks Jun 3, 2024

github-actions bot commented Jun 3, 2024

Regression Detector Results

Run ID: b472ba00-ef5c-4d15-974d-c0c39c40e873
Baseline: 4a4fc2e
Comparison: b1d6043
Total CPUs: 7

Performance changes are noted in the perf column of each table:

  • ✅ = significantly better comparison variant performance
  • ❌ = significantly worse comparison variant performance
  • ➖ = no significant change in performance

Significant changes in experiment optimization goals

Confidence level: 90.00%
Effect size tolerance: |Δ mean %| ≥ 5.00%

perf experiment goal Δ mean % Δ mean % CI
otlp_http_to_blackhole ingress throughput +5.75 [+5.62, +5.88]
syslog_log2metric_splunk_hec_metrics ingress throughput -5.07 [-5.21, -4.93]

Fine details of change detection per experiment

perf experiment goal Δ mean % Δ mean % CI
otlp_http_to_blackhole ingress throughput +5.75 [+5.62, +5.88]
syslog_log2metric_humio_metrics ingress throughput +1.86 [+1.78, +1.94]
otlp_grpc_to_blackhole ingress throughput +1.33 [+1.24, +1.42]
splunk_hec_route_s3 ingress throughput +0.97 [+0.52, +1.42]
http_text_to_http_json ingress throughput +0.56 [+0.45, +0.68]
datadog_agent_remap_blackhole_acks ingress throughput +0.49 [+0.41, +0.58]
socket_to_socket_blackhole ingress throughput +0.46 [+0.38, +0.54]
datadog_agent_remap_datadog_logs_acks ingress throughput +0.23 [+0.15, +0.31]
http_to_s3 ingress throughput +0.15 [-0.13, +0.43]
datadog_agent_remap_blackhole ingress throughput +0.14 [+0.04, +0.23]
http_to_http_noack ingress throughput +0.13 [+0.05, +0.20]
http_to_http_json ingress throughput +0.06 [-0.02, +0.14]
splunk_hec_to_splunk_hec_logs_acks ingress throughput +0.00 [-0.14, +0.14]
splunk_hec_indexer_ack_blackhole ingress throughput -0.01 [-0.15, +0.14]
splunk_hec_to_splunk_hec_logs_noack ingress throughput -0.06 [-0.17, +0.06]
http_to_http_acks ingress throughput -0.10 [-1.47, +1.26]
enterprise_http_to_http ingress throughput -0.11 [-0.20, -0.01]
datadog_agent_remap_datadog_logs ingress throughput -0.36 [-0.46, -0.25]
fluent_elasticsearch ingress throughput -0.47 [-0.95, +0.01]
file_to_blackhole egress throughput -0.49 [-2.99, +2.01]
http_elasticsearch ingress throughput -0.96 [-1.03, -0.89]
syslog_splunk_hec_logs ingress throughput -3.47 [-3.53, -3.42]
syslog_regex_logs2metric_ddmetrics ingress throughput -3.88 [-3.97, -3.80]
syslog_humio_logs ingress throughput -4.14 [-4.23, -4.04]
syslog_log2metric_tag_cardinality_limit_blackhole ingress throughput -4.41 [-4.53, -4.29]
syslog_loki ingress throughput -4.41 [-4.46, -4.36]
syslog_log2metric_splunk_hec_metrics ingress throughput -5.07 [-5.21, -4.93]

Explanation

A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".

For each experiment, we decide whether a change in performance is a "regression" -- a change worth investigating further -- if all of the following criteria are true:

  1. Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.

  2. Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that if our statistical model is accurate, there is at least a 90.00% chance there is a difference in performance between baseline and comparison variants.

  3. Its configuration does not mark it "erratic".

@StephenWakely StephenWakely added this pull request to the merge queue Jun 3, 2024
@github-merge-queue github-merge-queue bot removed this pull request from the merge queue due to failed status checks Jun 3, 2024

github-actions bot commented Jun 3, 2024

Regression Detector Results

Run ID: 8bac803b-8bb3-46ab-b00a-d3ff6f4159a0
Baseline: 4a4fc2e
Comparison: 911824f
Total CPUs: 7

Performance changes are noted in the perf column of each table:

  • ✅ = significantly better comparison variant performance
  • ❌ = significantly worse comparison variant performance
  • ➖ = no significant change in performance

Significant changes in experiment optimization goals

Confidence level: 90.00%
Effect size tolerance: |Δ mean %| ≥ 5.00%

perf experiment goal Δ mean % Δ mean % CI
otlp_http_to_blackhole ingress throughput +5.21 [+5.08, +5.34]
syslog_splunk_hec_logs ingress throughput -5.21 [-5.26, -5.16]
syslog_log2metric_tag_cardinality_limit_blackhole ingress throughput -5.75 [-5.87, -5.63]

Fine details of change detection per experiment

perf experiment goal Δ mean % Δ mean % CI
otlp_http_to_blackhole ingress throughput +5.21 [+5.08, +5.34]
otlp_grpc_to_blackhole ingress throughput +1.86 [+1.77, +1.95]
file_to_blackhole egress throughput +1.52 [-1.02, +4.06]
syslog_log2metric_humio_metrics ingress throughput +1.35 [+1.25, +1.44]
socket_to_socket_blackhole ingress throughput +1.22 [+1.15, +1.29]
http_text_to_http_json ingress throughput +0.38 [+0.26, +0.51]
syslog_regex_logs2metric_ddmetrics ingress throughput +0.30 [+0.17, +0.43]
datadog_agent_remap_blackhole ingress throughput +0.29 [+0.20, +0.37]
datadog_agent_remap_blackhole_acks ingress throughput +0.26 [+0.17, +0.35]
http_to_http_noack ingress throughput +0.15 [+0.06, +0.24]
datadog_agent_remap_datadog_logs_acks ingress throughput +0.12 [+0.04, +0.21]
http_to_s3 ingress throughput +0.08 [-0.20, +0.36]
http_to_http_json ingress throughput +0.05 [-0.03, +0.13]
splunk_hec_indexer_ack_blackhole ingress throughput +0.00 [-0.14, +0.15]
splunk_hec_to_splunk_hec_logs_acks ingress throughput -0.00 [-0.14, +0.14]
datadog_agent_remap_datadog_logs ingress throughput -0.00 [-0.11, +0.10]
splunk_hec_to_splunk_hec_logs_noack ingress throughput -0.03 [-0.15, +0.08]
enterprise_http_to_http ingress throughput -0.08 [-0.14, -0.03]
http_to_http_acks ingress throughput -0.29 [-1.65, +1.07]
splunk_hec_route_s3 ingress throughput -0.62 [-1.07, -0.17]
fluent_elasticsearch ingress throughput -0.79 [-1.27, -0.32]
http_elasticsearch ingress throughput -0.85 [-0.92, -0.78]
syslog_humio_logs ingress throughput -3.31 [-3.43, -3.20]
syslog_log2metric_splunk_hec_metrics ingress throughput -3.85 [-3.98, -3.71]
syslog_loki ingress throughput -4.87 [-4.92, -4.81]
syslog_splunk_hec_logs ingress throughput -5.21 [-5.26, -5.16]
syslog_log2metric_tag_cardinality_limit_blackhole ingress throughput -5.75 [-5.87, -5.63]

Explanation

A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".

For each experiment, we decide whether a change in performance is a "regression" -- a change worth investigating further -- if all of the following criteria are true:

  1. Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.

  2. Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that if our statistical model is accurate, there is at least a 90.00% chance there is a difference in performance between baseline and comparison variants.

  3. Its configuration does not mark it "erratic".

@StephenWakely
Contributor

These regression tests should really not keep failing...

@StephenWakely StephenWakely added this pull request to the merge queue Jun 3, 2024

github-actions bot commented Jun 3, 2024

Regression Detector Results

Run ID: 0d3c08af-975e-487d-b242-57a79b63824d
Baseline: 3da355b
Comparison: 7ce8fd5
Total CPUs: 7

Performance changes are noted in the perf column of each table:

  • ✅ = significantly better comparison variant performance
  • ❌ = significantly worse comparison variant performance
  • ➖ = no significant change in performance

No significant changes in experiment optimization goals

Confidence level: 90.00%
Effect size tolerance: |Δ mean %| ≥ 5.00%

There were no significant changes in experiment optimization goals at this confidence level and effect size tolerance.

Fine details of change detection per experiment

perf experiment goal Δ mean % Δ mean % CI
syslog_log2metric_splunk_hec_metrics ingress throughput +3.08 [+2.93, +3.22]
syslog_splunk_hec_logs ingress throughput +1.50 [+1.45, +1.56]
syslog_humio_logs ingress throughput +1.42 [+1.29, +1.55]
datadog_agent_remap_datadog_logs_acks ingress throughput +1.36 [+1.28, +1.44]
syslog_regex_logs2metric_ddmetrics ingress throughput +1.30 [+1.21, +1.38]
syslog_log2metric_tag_cardinality_limit_blackhole ingress throughput +1.12 [+1.01, +1.23]
fluent_elasticsearch ingress throughput +0.67 [+0.18, +1.15]
syslog_loki ingress throughput +0.49 [+0.41, +0.57]
datadog_agent_remap_blackhole_acks ingress throughput +0.41 [+0.30, +0.51]
datadog_agent_remap_datadog_logs ingress throughput +0.31 [+0.20, +0.41]
http_to_http_acks ingress throughput +0.28 [-1.08, +1.65]
http_elasticsearch ingress throughput +0.25 [+0.18, +0.32]
http_to_http_noack ingress throughput +0.12 [+0.02, +0.21]
http_to_http_json ingress throughput +0.05 [-0.03, +0.13]
splunk_hec_indexer_ack_blackhole ingress throughput +0.00 [-0.15, +0.15]
splunk_hec_to_splunk_hec_logs_acks ingress throughput -0.00 [-0.14, +0.14]
splunk_hec_to_splunk_hec_logs_noack ingress throughput -0.03 [-0.14, +0.08]
enterprise_http_to_http ingress throughput -0.06 [-0.14, +0.02]
http_to_s3 ingress throughput -0.33 [-0.60, -0.05]
otlp_grpc_to_blackhole ingress throughput -0.35 [-0.44, -0.26]
syslog_log2metric_humio_metrics ingress throughput -0.89 [-0.98, -0.80]
http_text_to_http_json ingress throughput -0.94 [-1.06, -0.83]
file_to_blackhole egress throughput -1.01 [-3.46, +1.45]
splunk_hec_route_s3 ingress throughput -1.15 [-1.61, -0.69]
socket_to_socket_blackhole ingress throughput -1.30 [-1.38, -1.21]
datadog_agent_remap_blackhole ingress throughput -1.53 [-1.64, -1.42]
otlp_http_to_blackhole ingress throughput -2.11 [-2.23, -1.99]

Explanation

A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".

For each experiment, we decide whether a change in performance is a "regression" -- a change worth investigating further -- if all of the following criteria are true:

  1. Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.

  2. Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that if our statistical model is accurate, there is at least a 90.00% chance there is a difference in performance between baseline and comparison variants.

  3. Its configuration does not mark it "erratic".

@github-merge-queue github-merge-queue bot removed this pull request from the merge queue due to failed status checks Jun 3, 2024
@caibirdme
Contributor Author

It's weird... my code should only affect the opentelemetry source.

@StephenWakely
Contributor

This is now failing with doc tests:

failures:

---- target/debug/build/opentelemetry-proto-d2ae8386e81e6113/out/opentelemetry.proto.trace.v1.rs - proto::trace::v1::Span::attributes (line 123) stdout ----
error: expected one of `.`, `;`, `?`, `}`, or an operator, found `:`
 --> target/debug/build/opentelemetry-proto-d2ae8386e81e6113/out/opentelemetry.proto.trace.v1.rs:124:19
  |
3 | "/http/user_agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36"
  |                   ^ expected one of `.`, `;`, `?`, `}`, or an operator

error: aborting due to 1 previous error

On this doc comment in the generated proto file:

    /// attributes is a collection of key/value pairs. Note, global attributes
    /// like server name can be set using the resource API. Examples of attributes:
    ///
    ///      "/http/user_agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36"
    ///      "/http/server_latency": 300
    ///      "example.com/my_attribute": true
    ///      "example.com/score": 10.239
    ///
    /// The OpenTelemetry API specification further restricts the allowed value types:
    /// <https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/common/README.md#attribute>
    /// Attribute keys MUST be unique (it is not allowed to have more than one
    /// attribute with the same key).
    #[prost(message, repeated, tag = "9")]

I'm not sure I understand why...

Contributor

@StephenWakely StephenWakely left a comment


The doc tests are failing because the comments here are indented by 4 spaces, so the doc tester thinks that it is Rust code in the comments. This PR didn't change that code, but I'm guessing that since nothing used it prior to this PR the code wasn't pulled in.

It's annoying to have to do, but I think the only way to fix this would be to update those lines in the comments so they only indent by 2 spaces.

Would you be able to make this change?

@StephenWakely StephenWakely added this pull request to the merge queue Jun 4, 2024

github-actions bot commented Jun 4, 2024

Regression Detector Results

Run ID: 7db41c8d-c248-43c7-8f89-e4c83d33f959
Baseline: 3da355b
Comparison: d1d122e
Total CPUs: 7

Performance changes are noted in the perf column of each table:

  • ✅ = significantly better comparison variant performance
  • ❌ = significantly worse comparison variant performance
  • ➖ = no significant change in performance

No significant changes in experiment optimization goals

Confidence level: 90.00%
Effect size tolerance: |Δ mean %| ≥ 5.00%

There were no significant changes in experiment optimization goals at this confidence level and effect size tolerance.

Fine details of change detection per experiment

perf experiment goal Δ mean % Δ mean % CI
syslog_splunk_hec_logs ingress throughput +2.32 [+2.27, +2.37]
syslog_loki ingress throughput +1.96 [+1.91, +2.01]
syslog_humio_logs ingress throughput +1.88 [+1.76, +2.00]
http_to_http_acks ingress throughput +1.24 [-0.13, +2.60]
http_elasticsearch ingress throughput +1.21 [+1.13, +1.29]
datadog_agent_remap_datadog_logs ingress throughput +0.85 [+0.75, +0.96]
datadog_agent_remap_datadog_logs_acks ingress throughput +0.79 [+0.71, +0.87]
fluent_elasticsearch ingress throughput +0.78 [+0.31, +1.26]
syslog_regex_logs2metric_ddmetrics ingress throughput +0.71 [+0.63, +0.78]
http_to_http_noack ingress throughput +0.15 [+0.06, +0.24]
http_to_http_json ingress throughput +0.01 [-0.06, +0.08]
splunk_hec_indexer_ack_blackhole ingress throughput -0.00 [-0.15, +0.14]
splunk_hec_to_splunk_hec_logs_acks ingress throughput -0.00 [-0.14, +0.14]
splunk_hec_to_splunk_hec_logs_noack ingress throughput -0.04 [-0.16, +0.07]
enterprise_http_to_http ingress throughput -0.07 [-0.12, -0.02]
syslog_log2metric_tag_cardinality_limit_blackhole ingress throughput -0.14 [-0.25, -0.02]
http_to_s3 ingress throughput -0.20 [-0.47, +0.08]
syslog_log2metric_splunk_hec_metrics ingress throughput -0.37 [-0.50, -0.23]
file_to_blackhole egress throughput -0.52 [-2.99, +1.95]
syslog_log2metric_humio_metrics ingress throughput -0.68 [-0.79, -0.57]
socket_to_socket_blackhole ingress throughput -0.71 [-0.79, -0.64]
datadog_agent_remap_blackhole ingress throughput -0.90 [-1.00, -0.79]
otlp_http_to_blackhole ingress throughput -0.95 [-1.09, -0.82]
otlp_grpc_to_blackhole ingress throughput -0.98 [-1.07, -0.89]
datadog_agent_remap_blackhole_acks ingress throughput -1.17 [-1.26, -1.07]
http_text_to_http_json ingress throughput -1.27 [-1.40, -1.14]
splunk_hec_route_s3 ingress throughput -2.20 [-2.65, -1.74]

Explanation

A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".

For each experiment, we decide whether a change in performance is a "regression" -- a change worth investigating further -- if all of the following criteria are true:

  1. Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.

  2. Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that if our statistical model is accurate, there is at least a 90.00% chance there is a difference in performance between baseline and comparison variants.

  3. Its configuration does not mark it "erratic".

Merged via the queue into vectordotdev:master with commit d1d122e Jun 4, 2024
49 checks passed
paomian pushed a commit to paomian/vector that referenced this pull request Jun 12, 2024
* feat: otlp trace

* feat: add tuple grpc services

* fmt

* add changelog.d

* add intergration tests

* remove .vscode in gitignore

* cue

* add int test

* hex trace_id span_id

* fix int test

* add a newline to pass the check

* Update changelog.d/opentelemetry_trace.feature.md

Co-authored-by: Stephen Wakely <[email protected]>

* Update website/cue/reference/components/sources/opentelemetry.cue

Co-authored-by: Jesse Szwedko <[email protected]>

* Update website/cue/reference/components/sources/opentelemetry.cue

Co-authored-by: Stephen Wakely <[email protected]>

* Fix cue error

Signed-off-by: Stephen Wakely <[email protected]>

* Update trace.proto

fix ident

---------

Signed-off-by: Stephen Wakely <[email protected]>
Co-authored-by: Stephen Wakely <[email protected]>
Co-authored-by: Jesse Szwedko <[email protected]>
Co-authored-by: Stephen Wakely <[email protected]>
AndrooTheChen pushed a commit to discord/vector that referenced this pull request Sep 23, 2024