Using OpenZipkin Brave

Overview

OpenZipkin Brave is a distributed tracing implementation written in Java and compatible with the Twitter Zipkin backend services. For quite a while OpenZipkin Brave has offered a dedicated module to integrate with the Apache CXF framework, namely brave-cxf3. Recently, however, a discussion was initiated about making this integration part of the Apache CXF codebase, with the CXF team responsible for maintaining it. As such, it is going to be available in the upcoming 3.2.0/3.1.12 releases under the cxf-integration-tracing-brave module, with both the client side and server side supported. This section gives a complete overview of how distributed tracing using OpenZipkin Brave can be integrated into JAX-RS / JAX-WS applications built on top of Apache CXF.

OpenZipkin Brave is inspired by Twitter Zipkin and the Dapper, a Large-Scale Distributed Systems Tracing Infrastructure paper, and is a full-fledged distributed tracing framework. The section dedicated to Apache HTrace has a pretty good introduction to distributed tracing basics. However, there are a few key differences between Apache HTrace and OpenZipkin Brave. In Brave, every Span is associated with a 128-bit or 64-bit Trace ID, which logically groups the spans related to the same distributed unit of work. Within a process, spans are collected by reporters (which could be a console, a local file, a data store, ...). OpenZipkin Brave provides span reporters for Twitter Zipkin and java.util.logging loggers.

Under the hood, spans are attached to their threads (in general, the thread which created the span should close it), the same technique employed by other distributed tracing implementations. What is unique, however, is that OpenZipkin Brave distinguishes three different types of tracers:

  • server tracer (com.github.kristofa.brave.ServerTracer)
  • client tracer (com.github.kristofa.brave.ClientTracer)
  • local tracer (com.github.kristofa.brave.LocalTracer)

The Apache CXF integration uses the client tracer to instantiate spans on the client side (providers and interceptors) to demarcate the send / receive cycle, the server tracer on the server side (providers and interceptors) to demarcate the receive / send cycle, and the local tracer for any spans instantiated within a process.

Distributed Tracing in Apache CXF using OpenZipkin Brave

The current integration of distributed tracing in Apache CXF supports OpenZipkin Brave (4.3.x+ release branch) in JAX-RS 2.x+ and JAX-WS applications, including applications deployed in OSGi containers. From a high-level perspective, the JAX-RS 2.x+ integration consists of three main parts:

  • TracerContext (injectable through @Context annotation)
  • BraveProvider (server-side JAX-RS provider) and BraveClientProvider (client-side JAX-RS provider)
  • BraveFeature (server-side JAX-RS feature to simplify OpenZipkin Brave configuration and integration)

Similarly, from a high-level perspective, the JAX-WS integration includes:

  • BraveStartInterceptor / BraveStopInterceptor / BraveFeature Apache CXF feature (server-side JAX-WS support)
  • BraveClientStartInterceptor / BraveClientStopInterceptor / BraveClientFeature Apache CXF feature (client-side JAX-WS support)

Apache CXF uses HTTP headers to hand off the tracing context from client to service and from service to service. These headers are used internally by OpenZipkin Brave and are not configurable at the moment. The header names are declared in the BraveHttpHeaders class and currently include:

  • X-B3-TraceId: 128 or 64-bit trace ID
  • X-B3-SpanId: 64-bit span ID
  • X-B3-ParentSpanId: 64-bit parent span ID
  • X-B3-Sampled: "1" means report this span to the tracing system, "0" means do not

By default, BraveClientProvider will try to pass the currently active span through HTTP headers on each service invocation. If there is no active span, a new span will be created and passed through HTTP headers on a per-invocation basis. Essentially, for JAX-RS applications, just registering BraveClientProvider on the client and BraveProvider on the server is enough to have the tracing context properly passed everywhere. The only configuration which is necessary is the span reporter(s) and sampler(s).

It is also worth mentioning the way Apache CXF attaches descriptions to spans. For the client integration, the description becomes the full URL being invoked, prefixed by the HTTP method, for example: GET http://localhost:8282/books. For the server-side integration, the description becomes the relative JAX-RS resource path prefixed by the HTTP method, e.g.: GET books, POST book/123.

Configuring Client

There are a couple of ways the JAX-RS client could be configured, depending on the client implementation. Apache CXF provides its own WebClient, which could be configured like this (future versions may provide a simpler way to do that using client-specific features):
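A minimal sketch of such a WebClient configuration might look as follows. The service name and address are illustrative, and the BraveClientProvider constructor accepting a brave Tracing instance is an assumption about the exact signature:

```java
import java.util.Arrays;

import org.apache.cxf.jaxrs.client.WebClient;
import org.apache.cxf.tracing.brave.jaxrs.BraveClientProvider;

import brave.Tracing;

// Build the brave Tracing instance; a span reporter and sampler can also be
// configured on this builder (Brave falls back to its defaults otherwise).
final Tracing brave = Tracing
    .newBuilder()
    .localServiceName("tracer-client")
    .build();

// Registering the client-side provider propagates the tracing context
// through the X-B3-* HTTP headers on every invocation.
final WebClient client = WebClient.create("http://localhost:8282/books",
    Arrays.asList(new BraveClientProvider(brave)));
```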

The configuration based on the standard JAX-RS Client is very similar:
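A comparable sketch for the standard JAX-RS Client, under the same assumptions as the WebClient example:

```java
import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;

import org.apache.cxf.tracing.brave.jaxrs.BraveClientProvider;

import brave.Tracing;

final Tracing brave = Tracing
    .newBuilder()
    .localServiceName("tracer-client")
    .build();

// The provider is registered like any other JAX-RS client provider.
final Client client = ClientBuilder
    .newClient()
    .register(new BraveClientProvider(brave));
```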

Configuring Server

Server configuration is a bit simpler than the client one thanks to the available feature class, BraveFeature. Depending on the way Apache CXF is used to configure JAX-RS services, it could be part of the JAX-RS application configuration, for example:
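A sketch of such an application class; CatalogRestService is a placeholder for your own JAX-RS resource, and the constructor shapes are assumptions:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

import javax.ws.rs.ApplicationPath;
import javax.ws.rs.core.Application;

import org.apache.cxf.tracing.brave.jaxrs.BraveFeature;

import brave.Tracing;

@ApplicationPath("/")
public class CatalogApplication extends Application {
    @Override
    public Set<Object> getSingletons() {
        final Tracing brave = Tracing
            .newBuilder()
            .localServiceName("tracer-server")
            .build();

        // Registering BraveFeature installs the server-side provider(s).
        return new HashSet<>(Arrays.asList(
            new CatalogRestService(),
            new BraveFeature(brave)));
    }
}
```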

Or it could be configured using JAXRSServerFactoryBean, for example:
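A sketch of the factory-bean variant, with the same placeholder resource class and assumed constructor shapes:

```java
import org.apache.cxf.jaxrs.JAXRSServerFactoryBean;
import org.apache.cxf.tracing.brave.jaxrs.BraveFeature;

import brave.Tracing;

final Tracing brave = Tracing
    .newBuilder()
    .localServiceName("tracer-server")
    .build();

final JAXRSServerFactoryBean factory = new JAXRSServerFactoryBean();
factory.setResourceClasses(CatalogRestService.class); // placeholder resource class
factory.setProvider(new BraveFeature(brave));
factory.setAddress("http://localhost:8282/");
factory.create();
```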

Once the span reporter and sampler are properly configured, all generated spans are going to be collected and available for analysis and/or visualization.

Distributed Tracing In Action: Usage Scenarios

In the following subsections we are going to walk through many different scenarios to illustrate distributed tracing in action, starting from the simplest ones and finishing with asynchronous JAX-RS services. All examples assume that the configuration has been done (please see the Configuring Client and Configuring Server sections above).

Example #1: Client and Server with default distributed tracing configured

In the first example we are going to see the effect of using the default configuration on the client and on the server, with only BraveClientProvider and BraveProvider registered. The JAX-RS resource endpoint is a pretty basic stubbed method:
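A sketch of such a stubbed endpoint; the Book class, path, and values are illustrative placeholders:

```java
@GET
@Path("/books")
@Produces(MediaType.APPLICATION_JSON)
public Collection<Book> getBooks() {
    // Stubbed response: no real data access, just a canned list of books.
    return Arrays.asList(
        new Book("Apache CXF Web Service Development", "978-1-847195-40-1"));
}
```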

The client is as simple as this:
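A sketch of the invocation, assuming the JAX-RS Client configured earlier (with BraveClientProvider registered) is available as client:

```java
// The client-side provider adds the X-B3-* tracing headers automatically.
final Response response = client
    .target("http://localhost:8282/books")
    .request()
    .accept(MediaType.APPLICATION_JSON)
    .get();
```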

The actual invocation of the request by the client (with service name tracer-client) and consequent invocation of the service on the server side (service name tracer-server) is going to generate the following sample traces:

 

Please notice that the client and server traces are collapsed under one trace, with client send / receive and server send / receive demarcation visible in the details.

Example #2: Client and Server with nested trace

In this example the server-side implementation of the JAX-RS service is going to call an external system (simulated as a simple delay of 500ms) within its own span. The client-side code stays unchanged.
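A sketch of the service method using the injectable TracerContext; it is assumed here that startSpan returns a closeable scope, and Book remains a placeholder:

```java
@GET
@Path("/books")
@Produces(MediaType.APPLICATION_JSON)
public Collection<Book> getBooks(@Context final TracerContext tracer) throws Exception {
    // Open a nested (child) span around the simulated external call.
    try (final Closeable span = tracer.startSpan("Calling external system")) {
        Thread.sleep(500); // simulated external system latency
    }
    return Arrays.asList(
        new Book("Apache CXF Web Service Development", "978-1-847195-40-1"));
}
```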

The actual invocation of the request by the client (with service name tracer-client) and consequent invocation of the service on the server side (service name tracer-server) is going to generate the following sample traces:

Example #3: Client and Server trace with annotations

In this example the server-side implementation of the JAX-RS service is going to add a timeline to the active span. The client-side code stays unchanged.
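A sketch of the service method, assuming the TracerContext timeline method attaches a timestamped message to the active span:

```java
@GET
@Path("/books")
@Produces(MediaType.APPLICATION_JSON)
public Collection<Book> getBooks(@Context final TracerContext tracer) {
    // Attach a timestamped (timeline) annotation to the currently active span.
    tracer.timeline("Preparing books");
    return Arrays.asList(
        new Book("Apache CXF Web Service Development", "978-1-847195-40-1"));
}
```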

The actual invocation of the request by the client (with service name tracer-client) and consequent invocation of the service on the server side (service name tracer-server) is going to generate the following sample traces:

Example #4: Client and Server with binary annotations (key/value)

In this example the server-side implementation of the JAX-RS service is going to add key/value annotations to the active span. The client-side code stays unchanged.
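A sketch of the service method, assuming the TracerContext annotate method accepts a key/value pair of strings:

```java
@GET
@Path("/books")
@Produces(MediaType.APPLICATION_JSON)
public Collection<Book> getBooks(@Context final TracerContext tracer) {
    final Collection<Book> books = Arrays.asList(
        new Book("Apache CXF Web Service Development", "978-1-847195-40-1"));
    // Attach a key/value (binary) annotation to the currently active span.
    tracer.annotate("book.count", String.valueOf(books.size()));
    return books;
}
```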

The actual invocation of the request by the client (with service name tracer-client) and consequent invocation of the service on the server side (service name tracer-server) is going to generate the following sample server trace properties:

Example #5: Client and Server with parallel trace (involving thread pools)

In this example the server-side implementation of the JAX-RS service is going to offload some work into a thread pool and then return the response to the client, simulating parallel execution. The client-side code stays unchanged.
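A sketch of the service method, assuming the Traceable callback interface and the TracerContext wrap method, which hands the active span over to the worker thread; the executor is a placeholder:

```java
private final ExecutorService executor = Executors.newFixedThreadPool(2);

@GET
@Path("/books")
@Produces(MediaType.APPLICATION_JSON)
public Collection<Book> getBooks(@Context final TracerContext tracer) {
    // tracer.wrap transfers the active span into the worker thread, so the
    // offloaded work shows up as a parallel child span in the trace.
    executor.submit(tracer.wrap("Processing books", new Traceable<Void>() {
        @Override
        public Void call(final TracerContext context) throws Exception {
            Thread.sleep(100); // simulated background work
            return null;
        }
    }));
    return Arrays.asList(
        new Book("Apache CXF Web Service Development", "978-1-847195-40-1"));
}
```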

The actual invocation of the request by the client (with service name tracer-client) and consequent invocation of the service on the server side (service name tracer-server) is going to generate the following sample traces:

Example #6: Client and Server with asynchronous JAX-RS service (server-side)

In this example the server-side implementation of the JAX-RS service is going to be executed asynchronously. This poses a challenge from the tracing perspective, as the request and the response are (in general) processed in different threads. At the moment, Apache CXF does not support transparent tracing span management (except for the default use case) but provides simple ways to do it manually (by allowing spans to be transferred from thread to thread). The client-side code stays unchanged.
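A sketch of such an asynchronous resource method, under the same assumptions about Traceable and wrap as in Example #5 (the executor and Book remain placeholders):

```java
@GET
@Path("/books")
@Produces(MediaType.APPLICATION_JSON)
public void getBooks(@Suspended final AsyncResponse response,
                     @Context final TracerContext tracer) {
    // tracer.wrap carries the active span into the worker thread, so the
    // asynchronous response processing stays inside the same trace.
    executor.submit(tracer.wrap("Processing books", new Traceable<Void>() {
        @Override
        public Void call(final TracerContext context) throws Exception {
            response.resume(Arrays.asList(
                new Book("Apache CXF Web Service Development", "978-1-847195-40-1")));
            return null;
        }
    }));
}
```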

The actual invocation of the request by the client (with service name tracer-client) and consequent invocation of the service on the server side (service name tracer-server) is going to generate the following sample traces:

Example #7: Client and Server with asynchronous invocation (client-side)

In this example the server-side implementation of the JAX-RS service is going to be the default one:
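That is, the same shape of stubbed method as in Example #1 (Book remains a placeholder):

```java
@GET
@Path("/books")
@Produces(MediaType.APPLICATION_JSON)
public Collection<Book> getBooks() {
    return Arrays.asList(
        new Book("Apache CXF Web Service Development", "978-1-847195-40-1"));
}
```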

The JAX-RS client implementation, however, is going to perform an asynchronous invocation:
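A sketch of the asynchronous invocation, assuming the configured JAX-RS Client from the earlier section is available as client (the timeout value is illustrative):

```java
// async() returns immediately with a Future; the Brave client filters still
// demarcate the send / receive cycle even though they run on different threads.
final Future<Response> future = client
    .target("http://localhost:8282/books")
    .request()
    .accept(MediaType.APPLICATION_JSON)
    .async()
    .get();

final Response response = future.get(1, TimeUnit.SECONDS);
```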

In this respect, there is no difference from the caller's perspective; however, a bit more work goes on under the hood to transfer the active tracing span from the JAX-RS client request filter to the client response filter, as in general those are executed in different threads (similarly to server-side asynchronous JAX-RS resource invocation). The actual invocation of the request by the client (with service name tracer-client) and consequent invocation of the service on the server side (service name tracer-server) is going to generate the following sample traces:

Distributed Tracing with OpenZipkin Brave and JAX-WS support

Distributed tracing in Apache CXF is built primarily around the JAX-RS 2.x implementation. However, JAX-WS is also supported, though it requires writing some boilerplate code and using the OpenZipkin Brave API directly (the JAX-WS integration is going to be enhanced in the future). Essentially, from the server-side perspective, the in/out interceptors, BraveStartInterceptor and BraveStopInterceptor respectively, should be configured as part of the interceptor chains, either manually or using BraveFeature. For example:
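A sketch of a JAX-WS server configuration using BraveFeature; the SEI class, address, and constructor shapes are assumptions:

```java
import org.apache.cxf.jaxws.JaxWsServerFactoryBean;
import org.apache.cxf.tracing.brave.BraveFeature;

import brave.Tracing;

final Tracing brave = Tracing
    .newBuilder()
    .localServiceName("tracer-server")
    .build();

final JaxWsServerFactoryBean factory = new JaxWsServerFactoryBean();
factory.setServiceClass(CatalogService.class); // placeholder service implementation
factory.setAddress("http://localhost:9000/catalog");
// BraveFeature installs BraveStartInterceptor / BraveStopInterceptor
// into the in/out interceptor chains.
factory.getFeatures().add(new BraveFeature(brave));
factory.create();
```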

Similarly to the server side, the client side needs its own set of out/in interceptors, BraveClientStartInterceptor and BraveClientStopInterceptor (or BraveClientFeature). Please notice the difference from the server side: BraveClientStartInterceptor becomes an out-interceptor while BraveClientStopInterceptor becomes an in-interceptor. For example:
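A sketch of the corresponding client-side configuration, under the same assumptions about class names and signatures:

```java
import org.apache.cxf.jaxws.JaxWsProxyFactoryBean;
import org.apache.cxf.tracing.brave.BraveClientFeature;

import brave.Tracing;

final Tracing brave = Tracing
    .newBuilder()
    .localServiceName("tracer-client")
    .build();

final JaxWsProxyFactoryBean factory = new JaxWsProxyFactoryBean();
factory.setServiceClass(CatalogService.class); // placeholder SEI
factory.setAddress("http://localhost:9000/catalog");
// BraveClientFeature installs BraveClientStartInterceptor (out) and
// BraveClientStopInterceptor (in) on the client's interceptor chains.
factory.getFeatures().add(new BraveClientFeature(brave));
final CatalogService service = (CatalogService) factory.create();
```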

Distributed Tracing with OpenZipkin Brave and OSGi

OpenZipkin Brave could be deployed into an OSGi container and, as such, the distributed tracing integration is fully available for Apache CXF services running inside the container. For a complete example, including the typical OSGi Blueprint configuration, please take a look at the jax_ws_tracing_brave_osgi sample project.

Migrating from brave-cxf3

The migration path from the OpenZipkin Brave / CXF integration (brave-cxf3) to the Apache CXF integration is pretty straightforward and essentially boils down to using, for JAX-RS, BraveFeature for the server side and BraveClientFeature for the client side (imported from the org.apache.cxf.tracing.brave.jaxrs package), for example:
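A sketch of the client-side migration, assuming the jaxrs package's BraveClientFeature accepts a brave Tracing instance:

```java
import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;

import org.apache.cxf.tracing.brave.jaxrs.BraveClientFeature;

import brave.Tracing;

final Tracing brave = Tracing
    .newBuilder()
    .localServiceName("tracer-client")
    .build();

// Replaces brave-cxf3's client-side filters with the CXF-provided feature.
final Client client = ClientBuilder
    .newClient()
    .register(new BraveClientFeature(brave));
```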

Although you may continue to use the OpenZipkin Brave API directly, on the server side it is preferable to inject @Context TracerContext into your JAX-RS services in order to interface with the tracer.

 

Similarly, for JAX-WS, use BraveFeature for the server side and BraveClientFeature for the client side (imported from the org.apache.cxf.tracing.brave package), for example:

 

final Tracing brave = Tracing
            .newBuilder()
            .localServiceName("tracer-server") // service name is an assumption: the original snippet is truncated here
            .build();

// the resulting Tracing instance is then passed to BraveFeature / BraveClientFeature
