Centralized Logging with Stackdriver


Adam Łukaszczyk


Stackdriver logs are not only for Google Cloud Platform. It is possible to drop logs into Stackdriver from anywhere, and this docker-compose will help you achieve that.

This OnePager has its own repo: https://github.com/ClearcodeHQ/docker-logs-to-stackdriver

What is Centralized Logging?

The main idea of Centralized Logging is collecting log data from across the system into one storage space. By parsing, filtering and searching these logs, you can monitor system health from a single place. With all the logs in one location, you gain a comprehensive source of information about your system: you can search for patterns, dig for details and prepare system reports that can later feed into the decision-making process. Still not convinced? Check this reddit thread for some inspiration.

Why Stackdriver?

Centralized Logging can be provisioned on-premises, usually with the ELK stack or the emerging Kubernetes logging tooling. Maintaining your own logging stack entails various performance and cost issues, and mitigating them usually means reducing log retention.

On the other hand, Centralized Logging as a Service, like Sentry or Papertrail, can also be costly. How much? It depends on the amount of data you push in and the data retention period.

With the above in mind, Stackdriver Logging comes into the game. Its pricing plan guarantees a free allotment of the first 50 GiB per month per project and a 30-day retention period (last checked August 12th, 2019).

50 GiB per month? Is that a lot? Again, it depends. We are currently working on a project which generates at least 5 GiB worth of logs per day, but let’s face it: your project would need to bloat out of proportion to reach that size.
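To put these numbers in perspective, here is a back-of-the-envelope estimate. The $0.50/GiB overage rate is an assumption based on Stackdriver’s price list around the time of writing, not a figure from this article; verify it against the current pricing page before relying on it.

```python
# Back-of-the-envelope Stackdriver Logging cost estimate.
# ASSUMPTION: overage beyond the free tier is billed at $0.50/GiB
# (approximate mid-2019 rate -- check the current price list).

FREE_TIER_GIB = 50.0          # free allotment per project per month
OVERAGE_USD_PER_GIB = 0.50    # assumed rate, not taken from this article

def monthly_log_cost(gib_per_day: float, days: int = 30) -> float:
    """Estimated monthly logging cost for a given daily log volume."""
    total = gib_per_day * days
    billable = max(0.0, total - FREE_TIER_GIB)
    return billable * OVERAGE_USD_PER_GIB

# A typical small project stays well inside the free tier:
print(monthly_log_cost(1.0))   # 30 GiB/month -> 0.0
# The 5 GiB/day project mentioned above would exceed it:
print(monthly_log_cost(5.0))   # 150 GiB/month -> 50.0
```

In other words, even a project logging an order of magnitude more than most would pay on the order of tens of dollars a month for storage alone.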

Stackdriver fees increase when you automate monitoring based on the collected logs, but the logs themselves are almost free.


Disclaimer: This package is for educational purposes only. It is not a production-ready solution. To run it locally you will need to install Docker 18.06.0+.


  1. Clone this repo
  2. Register an account in GCP and create a new project
  3. Generate a Service Account Key and place it in ./config/service-key.json
  4. In the docker-compose.yml file, set the STACKDRIVER_VM_ID and STACKDRIVER_ZONE_ID environment variables according to your needs
  5. Start Docker Compose: $ docker-compose up
  6. Open the default Nginx page in your browser; logs from access_log will be sent to Stackdriver in the background
  7. Open your project in GCP and navigate to the Logging tab
  8. From the resources dropdown choose GCE VM Instance
  9. Your Nginx logs should appear
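Steps 4–5 hinge on the logging configuration in docker-compose.yml. A minimal sketch of what such a file might look like is below; the service names, published ports, image tags and the fluentd build context are illustrative assumptions, not copied from the repo:

```yaml
version: "3.7"

services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
    # Route this container's STDOUT/STDERR to the fluentd container below.
    logging:
      driver: fluentd
      options:
        fluentd-address: localhost:24224
        tag: nginx.access
    depends_on:
      - fluentd

  fluentd:
    # Assumption: an image with the fluent-plugin-google-cloud gem installed,
    # built from a local directory.
    build: ./fluentd
    ports:
      - "24224:24224"
    volumes:
      - ./config/service-key.json:/etc/google/service-key.json:ro
    environment:
      - GOOGLE_APPLICATION_CREDENTIALS=/etc/google/service-key.json
      - STACKDRIVER_VM_ID=my-fake-vm
      - STACKDRIVER_ZONE_ID=europe-west1-b
```

Note that the fluentd logging driver is configured on the Docker daemon side, so the fluentd container must be up and listening before the web container starts emitting logs.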

Technologies used

It is always cool to read about the elements used to build a solution. Here is a quick reference: Docker and Docker Compose, Nginx, Fluentd with its Google Cloud plugin, and Stackdriver Logging.

How it works

  1. The Nginx Docker image is configured to push logs to STDOUT/STDERR by default
  2. Docker Compose is configured to use the Fluentd Logging Driver
  3. The Fluentd container grabs all the logs exposed by the Logging Driver…
  4. …and forwards them via Fluentd’s Google Cloud plugin
  5. Inside the plugin, the Stackdriver Logging API is used to push logs to Stackdriver
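The Fluentd side of steps 3–5 boils down to a forward input plus the google_cloud output plugin. A hedged sketch follows; the tag pattern is an assumption, and the exact parameter names should be checked against the fluent-plugin-google-cloud documentation:

```
# fluent.conf -- illustrative sketch, not copied from the repo

# Accept log records from the Docker fluentd logging driver.
<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>

# Ship everything tagged nginx.* to Stackdriver Logging.
<match nginx.**>
  @type google_cloud
  # Outside GCP there is no metadata server, so identify the "instance"
  # manually (these map to the STACKDRIVER_VM_ID / STACKDRIVER_ZONE_ID
  # variables from docker-compose.yml).
  use_metadata_service false
  vm_id "#{ENV['STACKDRIVER_VM_ID']}"
  zone "#{ENV['STACKDRIVER_ZONE_ID']}"
</match>
```

Disabling the metadata service lookup is what makes the plugin usable outside GCP: instead of asking the (nonexistent) metadata server who it is, it reports the identity you supply.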

Remember that the official Stackdriver Logging Agent supports only GCP and AWS instances. Likewise, the documentation of Fluentd’s Google Cloud plugin doesn’t mention the possibility of pushing logs from outside these platforms.

However, the plugin’s configuration allows for it. The Stackdriver Logging API itself is also open, and there is a lot of movement around Knative, which states that Kubernetes clusters can push logs to Stackdriver no matter which Cloud Provider they are provisioned in.
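What the plugin ultimately sends is an entries.write call against the Logging API’s v2 REST endpoint. A minimal sketch of such a request body is below; the project, VM and zone values are placeholders, and build_write_request is a hypothetical helper, not part of any library:

```python
import json

# Sketch of the JSON body for the Logging API's entries.write method
# (POST https://logging.googleapis.com/v2/entries:write).
# Project/vm/zone values are placeholders, not taken from the article.
def build_write_request(project, vm_id, zone, lines):
    return {
        "logName": f"projects/{project}/logs/nginx.access",
        # The same "pretend to be a GCE VM" trick the Fluentd plugin uses:
        "resource": {
            "type": "gce_instance",
            "labels": {"instance_id": vm_id, "zone": zone},
        },
        "entries": [{"textPayload": line} for line in lines],
    }

body = build_write_request("my-project", "my-fake-vm", "europe-west1-b",
                           ['127.0.0.1 - - "GET / HTTP/1.1" 200'])
print(json.dumps(body, indent=2))
```

Because the resource is declared as gce_instance, the entries land under the GCE VM Instance resource in the Logging tab, which is exactly why step 8 of the setup tells you to pick that resource type.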

