1. Introduction
In this codelab, you'll take a simple gRPC HelloWorld client and server written in C++ and instrument them with the gRPC OpenTelemetry plugin.
Once you finish this tutorial, you will have a gRPC HelloWorld application instrumented via the gRPC OpenTelemetry plugin, and you'll be able to see the exported observability metrics in Prometheus.
What you'll learn
- How to set up the OpenTelemetry plugin for an existing gRPC C++ application
- Running a local Prometheus instance
- Exporting metrics to Prometheus
- Viewing the metrics in the Prometheus dashboard
2. Before you begin
What you'll need
git, curl, build-essential, clang, and bazel to build the examples in this codelab
Install the prerequisites:
sudo apt-get update -y
sudo apt-get upgrade -y
sudo apt-get install -y git curl build-essential clang
bazel can be installed via bazelisk. You can find the latest release here.
A simple way to set it up is to install it as the bazel binary in your PATH, as follows:
sudo cp bazelisk-linux-amd64 /usr/local/bin/bazel
sudo chmod a+x /usr/local/bin/bazel
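To verify the setup, you can check the version; on first use, bazelisk downloads a matching bazel release (a quick sanity check, not part of the codelab itself):
bazel --version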
Alternatively, you can use CMake. For instructions on using CMake, see here.
Get the code
To simplify the learning process, this codelab provides a pre-built source-code skeleton to help you get started quickly. The following steps will walk you through instrumenting your application with the gRPC OpenTelemetry plugin.
The skeleton source code for this codelab is located in this GitHub directory. If you would rather not implement the code yourself, the completed source code is available in the completed directory.
First, clone the grpc codelabs repository and cd into the grpc-cpp-opentelemetry folder:
git clone https://github.com/grpc-ecosystem/grpc-codelabs.git
cd grpc-codelabs/codelabs/grpc-cpp-opentelemetry/
Alternatively, you can download a .zip file containing only the codelab directory and unzip it manually.
Build the codelab sources (this also builds the gRPC libraries) with bazel:
bazel build start_here/...
3. Register the OpenTelemetry plugin
We need a gRPC application to add the gRPC OpenTelemetry plugin to. In this codelab, we'll use a simple gRPC HelloWorld client and server and instrument them with the gRPC OpenTelemetry plugin.
The first step is to register an OpenTelemetry plugin configured with a Prometheus exporter in the client. Open codelabs/grpc-cpp-opentelemetry/start_here/greeter_callback_client.cc with your favorite editor and transform main() to look like this -
int main(int argc, char **argv) {
  absl::ParseCommandLine(argc, argv);
  // Codelab Solution: Register a global gRPC OpenTelemetry plugin configured
  // with a prometheus exporter.
  opentelemetry::exporter::metrics::PrometheusExporterOptions opts;
  opts.url = absl::GetFlag(FLAGS_prometheus_endpoint);
  auto prometheus_exporter =
      opentelemetry::exporter::metrics::PrometheusExporterFactory::Create(opts);
  auto meter_provider =
      std::make_shared<opentelemetry::sdk::metrics::MeterProvider>();
  // The default histogram boundaries are not granular enough for RPCs.
  // Override the "grpc.client.attempt.duration" view as recommended by
  // https://github.com/grpc/proposal/blob/master/A66-otel-stats.md.
  AddLatencyView(meter_provider.get(), "grpc.client.attempt.duration", "s");
  meter_provider->AddMetricReader(std::move(prometheus_exporter));
  auto status = grpc::OpenTelemetryPluginBuilder()
                    .SetMeterProvider(std::move(meter_provider))
                    .BuildAndRegisterGlobal();
  if (!status.ok()) {
    std::cerr << "Failed to register gRPC OpenTelemetry Plugin: "
              << status.ToString() << std::endl;
    return static_cast<int>(status.code());
  }
  // Continuously send RPCs.
  RunClient(absl::GetFlag(FLAGS_target));
  return 0;
}
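For reference, the AddLatencyView helper used above is provided in the util directory. A minimal sketch of what such a helper does, assuming the OpenTelemetry C++ SDK view factories and an abbreviated boundary list, looks roughly like this:

#include <grpcpp/grpcpp.h>  // for grpc::Version()

#include "opentelemetry/sdk/metrics/aggregation/aggregation_config.h"
#include "opentelemetry/sdk/metrics/meter_provider.h"
#include "opentelemetry/sdk/metrics/view/instrument_selector_factory.h"
#include "opentelemetry/sdk/metrics/view/meter_selector_factory.h"
#include "opentelemetry/sdk/metrics/view/view_factory.h"

void AddLatencyView(opentelemetry::sdk::metrics::MeterProvider* provider,
                    const std::string& name, const std::string& unit) {
  auto config = std::make_shared<
      opentelemetry::sdk::metrics::HistogramAggregationConfig>();
  // Finer-grained histogram boundaries (in seconds) than the SDK defaults, as
  // recommended by gRFC A66. Abbreviated here; see util/ for the full list.
  config->boundaries_ = {0,   0.00001, 0.0001, 0.001, 0.005, 0.01, 0.05,
                         0.1, 0.5,     1,      5,     10,    50,   100};
  provider->AddView(
      opentelemetry::sdk::metrics::InstrumentSelectorFactory::Create(
          opentelemetry::sdk::metrics::InstrumentType::kHistogram, name, unit),
      opentelemetry::sdk::metrics::MeterSelectorFactory::Create(
          "grpc-c++", grpc::Version(), ""),
      opentelemetry::sdk::metrics::ViewFactory::Create(
          name, "", unit,
          opentelemetry::sdk::metrics::AggregationType::kHistogram, config));
}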
The next step is to add the OpenTelemetry plugin to the server. Open codelabs/grpc-cpp-opentelemetry/start_here/greeter_callback_server.cc and transform main to look like the following:
int main(int argc, char** argv) {
  absl::ParseCommandLine(argc, argv);
  // Register a global gRPC OpenTelemetry plugin configured with a prometheus
  // exporter.
  opentelemetry::exporter::metrics::PrometheusExporterOptions opts;
  opts.url = absl::GetFlag(FLAGS_prometheus_endpoint);
  auto prometheus_exporter =
      opentelemetry::exporter::metrics::PrometheusExporterFactory::Create(opts);
  auto meter_provider =
      std::make_shared<opentelemetry::sdk::metrics::MeterProvider>();
  // The default histogram boundaries are not granular enough for RPCs.
  // Override the "grpc.server.call.duration" view as recommended by
  // https://github.com/grpc/proposal/blob/master/A66-otel-stats.md.
  AddLatencyView(meter_provider.get(), "grpc.server.call.duration", "s");
  meter_provider->AddMetricReader(std::move(prometheus_exporter));
  auto status = grpc::OpenTelemetryPluginBuilder()
                    .SetMeterProvider(std::move(meter_provider))
                    .BuildAndRegisterGlobal();
  if (!status.ok()) {
    std::cerr << "Failed to register gRPC OpenTelemetry Plugin: "
              << status.ToString() << std::endl;
    return static_cast<int>(status.code());
  }
  RunServer(absl::GetFlag(FLAGS_port));
  return 0;
}
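Both binaries read the exporter's listen address from an absl flag (FLAGS_prometheus_endpoint in the snippets above), so if the default ports are already taken you should be able to override the endpoint on the command line. The exact flag name is defined in the codelab sources; assuming it is prometheus_endpoint, an override would look like:
bazel run start_here:greeter_callback_server -- --prometheus_endpoint=localhost:9466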
For convenience, the required header files and build dependencies have already been added.
#include "opentelemetry/exporters/prometheus/exporter_factory.h"
#include "opentelemetry/exporters/prometheus/exporter_options.h"
#include "opentelemetry/sdk/metrics/meter_provider.h"
#include <grpcpp/ext/otel_plugin.h>
The build dependencies have also been added to the BUILD file -
cc_binary(
    name = "greeter_callback_client",
    srcs = ["greeter_callback_client.cc"],
    defines = ["BAZEL_BUILD"],
    deps = [
        "//util:util",
        "@com_github_grpc_grpc//:grpc++",
        "@com_github_grpc_grpc//:grpcpp_otel_plugin",
        "@com_google_absl//absl/flags:flag",
        "@com_google_absl//absl/flags:parse",
        "@io_opentelemetry_cpp//exporters/prometheus:prometheus_exporter",
        "@io_opentelemetry_cpp//sdk/src/metrics",
    ],
)
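The server binary is declared the same way (a sketch for reference; the actual BUILD file under start_here is authoritative):

cc_binary(
    name = "greeter_callback_server",
    srcs = ["greeter_callback_server.cc"],
    defines = ["BAZEL_BUILD"],
    deps = [
        "//util:util",
        "@com_github_grpc_grpc//:grpc++",
        "@com_github_grpc_grpc//:grpcpp_otel_plugin",
        "@com_google_absl//absl/flags:flag",
        "@com_google_absl//absl/flags:parse",
        "@io_opentelemetry_cpp//exporters/prometheus:prometheus_exporter",
        "@io_opentelemetry_cpp//sdk/src/metrics",
    ],
)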
4. Run the example and view metrics
To run the server, run the following command:
bazel run start_here:greeter_callback_server
If the setup succeeded, you'll see the following server output -
Server listening on 0.0.0.0:50051
While the server is running, run the client in another terminal -
bazel run start_here:greeter_callback_client
On a successful run, the output will look like this:
Greeter received: Hello world
Greeter received: Hello world
Greeter received: Hello world
Greeter received: Hello world
Greeter received: Hello world
Greeter received: Hello world
Greeter received: Hello world
Greeter received: Hello world
Greeter received: Hello world
Greeter received: Hello world
Greeter received: Hello world
Since we've set up the gRPC OpenTelemetry plugin to export metrics using Prometheus, the metrics are available at localhost:9464 (server) and localhost:9465 (client).
To view the client metrics:
curl localhost:9465/metrics
The result should be of the form:
# HELP exposer_transferred_bytes_total Transferred bytes to metrics services
# TYPE exposer_transferred_bytes_total counter
exposer_transferred_bytes_total 0
# HELP exposer_scrapes_total Number of times metrics were scraped
# TYPE exposer_scrapes_total counter
exposer_scrapes_total 0
# HELP exposer_request_latencies Latencies of serving scrape requests, in microseconds
# TYPE exposer_request_latencies summary
exposer_request_latencies_count 0
exposer_request_latencies_sum 0
exposer_request_latencies{quantile="0.5"} Nan
exposer_request_latencies{quantile="0.9"} Nan
exposer_request_latencies{quantile="0.99"} Nan
# HELP target Target metadata
# TYPE target gauge
target_info{otel_scope_name="grpc-c++",otel_scope_version="1.67.0-dev",service_name="unknown_service",telemetry_sdk_version="1.13.0",telemetry_sdk_name="opentelemetry",telemetry_sdk_language="cpp"} 1 1721958543107
# HELP grpc_client_attempt_rcvd_total_compressed_message_size_bytes Compressed message bytes received per call attempt
# TYPE grpc_client_attempt_rcvd_total_compressed_message_size_bytes histogram
grpc_client_attempt_rcvd_total_compressed_message_size_bytes_count{grpc_method="helloworld.Greeter/SayHello",grpc_status="OK",grpc_target="dns:///localhost:50051",otel_scope_name="grpc-c++",otel_scope_version="1.67.0-dev"} 96 1721958543107
grpc_client_attempt_rcvd_total_compressed_message_size_bytes_sum{grpc_method="helloworld.Greeter/SayHello",grpc_status="OK",grpc_target="dns:///localhost:50051",otel_scope_name="grpc-c++",otel_scope_version="1.67.0-dev"} 1248 1721958543107
grpc_client_attempt_rcvd_total_compressed_message_size_bytes_bucket{grpc_method="helloworld.Greeter/SayHello",grpc_status="OK",grpc_target="dns:///localhost:50051",otel_scope_name="grpc-c++",otel_scope_version="1.67.0-dev",le="0"} 0 1721958543107
grpc_client_attempt_rcvd_total_compressed_message_size_bytes_bucket{grpc_method="helloworld.Greeter/SayHello",grpc_status="OK",grpc_target="dns:///localhost:50051",otel_scope_name="grpc-c++",otel_scope_version="1.67.0-dev",le="5"} 0 1721958543107
grpc_client_attempt_rcvd_total_compressed_message_size_bytes_bucket{grpc_method="helloworld.Greeter/SayHello",grpc_status="OK",grpc_target="dns:///localhost:50051",otel_scope_name="grpc-c++",otel_scope_version="1.67.0-dev",le="10"} 0 1721958543107
grpc_client_attempt_rcvd_total_compressed_message_size_bytes_bucket{grpc_method="helloworld.Greeter/SayHello",grpc_status="OK",grpc_target="dns:///localhost:50051",otel_scope_name="grpc-c++",otel_scope_version="1.67.0-dev",le="25"} 96 1721958543107
grpc_client_attempt_rcvd_total_compressed_message_size_bytes_bucket{grpc_method="helloworld.Greeter/SayHello",grpc_status="OK",grpc_target="dns:///localhost:50051",otel_scope_name="grpc-c++",otel_scope_version="1.67.0-dev",le="50"} 96 1721958543107
grpc_client_attempt_rcvd_total_compressed_message_size_bytes_bucket{grpc_method="helloworld.Greeter/SayHello",grpc_status="OK",grpc_target="dns:///localhost:50051",otel_scope_name="grpc-c++",otel_scope_version="1.67.0-dev",le="75"} 96 1721958543107
Similarly, for the server-side metrics -
curl localhost:9464/metrics
5. View metrics on Prometheus
For this example, we'll set up a Prometheus instance that scrapes the example gRPC client and server, which export their metrics using Prometheus.
Download the latest Prometheus release for your platform, then extract and run it:
tar xvfz prometheus-*.tar.gz
cd prometheus-*
Create a Prometheus configuration file with the following contents:
cat > grpc_otel_cpp_prometheus.yml <<EOF
scrape_configs:
  - job_name: "prometheus"
    scrape_interval: 5s
    static_configs:
      - targets: ["localhost:9090"]
  - job_name: "grpc-otel-cpp"
    scrape_interval: 5s
    static_configs:
      - targets: ["localhost:9464", "localhost:9465"]
EOF
Start Prometheus with the new configuration -
./prometheus --config.file=grpc_otel_cpp_prometheus.yml
This configures Prometheus to scrape the metrics from the client and server codelab processes every 5 seconds.
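To confirm that Prometheus is scraping all three targets, you can open http://localhost:9090/targets in a browser, or query its HTTP API:
curl localhost:9090/api/v1/targets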
Go to http://localhost:9090/graph to view the metrics. For example, the following query:
histogram_quantile(0.5, rate(grpc_client_attempt_duration_seconds_bucket[1m]))
will show a graph of the median attempt latency, using a 1-minute window for the quantile calculation.
To query rates -
increase(grpc_client_attempt_duration_seconds_bucket[1m])
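histogram_quantile works for any quantile, not just the median. For example, to graph the 99th-percentile attempt latency over the same window (assuming the metric name exported above):
histogram_quantile(0.99, rate(grpc_client_attempt_duration_seconds_bucket[1m]))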
6. (Optional) Exercise for the user
In the Prometheus dashboard, you'll notice that the QPS is low. See if you can find the suspect code in the example that is limiting QPS.
For the eager developer, the client code limits itself to a single pending RPC at any given moment. You can modify this so that the client sends more RPCs without waiting for earlier ones to complete. (A solution for this is not yet provided.)