Chapter 18: Jiuxi takes you through helm3 efk - fluentd with ease

This series of articles:


Chapter 1: Jiuxi takes you through helm3 installation with ease

Chapter 2: Jiuxi takes you through the helm3 public repository with ease

Chapter 3: Jiuxi takes you through the helm3 private repository with ease

Chapter 4: Jiuxi takes you through helm3 chart with ease

Chapter 5: Jiuxi takes you through helm3 release with ease

Chapter 6: Jiuxi takes you through helm3 gitlab with ease

Chapter 7: Jiuxi takes you through helm3 nginx-ingress with ease

Chapter 8: Jiuxi takes you through helm3 gitlab nfs with ease

Chapter 9: Jiuxi takes you through helm3 nexus with ease

Chapter 10: Jiuxi takes you through helm3 heapster with ease

Chapter 11: Jiuxi takes you through helm3 kubernetes-dashboard with ease

Chapter 12: Jiuxi takes you through helm3 harbor with ease

Chapter 13: Jiuxi takes you through helm3 prometheus with ease

Chapter 14: Jiuxi takes you through helm3 grafana with ease

Chapter 15: Jiuxi takes you through linking grafana to prometheus with ease

Chapter 16: Jiuxi takes you through helm3 efk - elasticsearch with ease

Chapter 17: Jiuxi takes you through helm3 efk - kibana with ease

Chapter 18: Jiuxi takes you through helm3 efk - fluentd with ease

Catalog

1 Preface

2 Download fluentd

3 Create efk namespace

4 Configure fluentd

5 Install fluentd

6 Verify fluentd

1 Preface

This article uses helm v3.0.0 and Kubernetes v1.16.3. The helm repository is configured as follows:
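A minimal sketch of a matching repository setup, assuming the chart is served from the legacy stable chart mirror under the repository name google (the name used by helm fetch below); adjust the name and URL to whatever your cluster actually uses:

# assumed repo name and URL, for illustration only
helm repo add google https://kubernetes-charts.storage.googleapis.com
helm repo update
helm repo list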

2 Download fluentd

Search for fluentd with helm:

helm search repo fluentd

Download and unpack the fluentd chart with helm:

helm fetch google/fluentd

tar -zxvf fluentd-2.3.2.tgz
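After unpacking, the chart sits in a local fluentd directory with the standard Helm chart layout (a rough sketch; auxiliary files may differ between chart versions):

fluentd/
├── Chart.yaml      # chart metadata
├── values.yaml     # default values, edited in section 4
├── templates/      # Kubernetes manifests, including deployment.yaml
└── charts/         # bundled dependency charts, if any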

3 Create efk namespace

kubectl create ns efk
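An optional check to confirm the namespace exists:

kubectl get ns efk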

4 Configure fluentd

Edit the values.yaml file and add a containers.input.conf field below system.conf, as follows:

containers.input.conf: |-
    <source>
        @id fluentd-containers.log
        @type tail
        path /var/log/containers/*.log
        pos_file /var/log/es-containers.log.pos
        tag raw.kubernetes.*
        read_from_head true
        <parse>
            @type multi_format
            <pattern>
                format json
                time_key time
                time_format %Y-%m-%dT%H:%M:%S.%NZ
            </pattern>
            <pattern>
                format /^(?<time>.+) (?<stream>stdout|stderr) [^ ]* (?<log>.*)$/
                time_format %Y-%m-%dT%H:%M:%S.%N%:z
            </pattern>
        </parse>
    </source>
    <match raw.kubernetes.**>
        @id raw.kubernetes
        @type detect_exceptions
        remove_tag_prefix raw
        message log
        stream stream
        multiline_flush_interval 5
        max_bytes 500000
        max_lines 1000
    </match>

Modify the following information in output.conf:

output.conf: |
    <match **>
        @id elasticsearch
        @type elasticsearch
        @log_level info
        include_tag_key true
        # Replace with the host/port to your Elasticsearch cluster.
        host "elasticsearch-client"
        port "9200"
        logstash_format true
        <buffer>
            @type file
            path /var/log/fluentd-buffers/kubernetes.system.buffer
            flush_mode interval
            retry_type exponential_backoff
            flush_thread_count 2
            flush_interval 5s
            retry_forever
            retry_max_interval 30
            chunk_limit_size "2M"
            queue_limit_length "8"
            overflow_action block
        </buffer>
    </match>
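output.conf points fluentd at the Elasticsearch client service installed in the elasticsearch chapter. Before installing, it is worth confirming that a service with that name exposes port 9200 (a hedged check that assumes elasticsearch was installed into the efk namespace):

kubectl get svc -n efk elasticsearch-client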

Edit the templates/deployment.yaml file to add the following mount information:
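As a rough sketch of what such mounts typically look like for a fluentd log collector (the volume names here are illustrative, and the indentation must match the surrounding pod spec in templates/deployment.yaml), the fluentd container needs the node's log directories from the host; /var/log must stay writable because the pos_file and buffer path configured above live under it:

        volumeMounts:
          # contains /var/log/containers/*.log, tailed by containers.input.conf
          - name: varlog
            mountPath: /var/log
          # the real container log files; /var/log/containers entries symlink here
          - name: varlibdockercontainers
            mountPath: /var/lib/docker/containers
            readOnly: true
      volumes:
        - name: varlog
          hostPath:
            path: /var/log
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers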

5 Install fluentd

In the command below, efk is the namespace, the first fluentd is the helm release name, and the second fluentd is the directory the chart was unpacked into:

helm install -n efk fluentd fluentd

Check with helm to confirm that fluentd has been installed successfully:

helm list -n efk
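You can also confirm that the fluentd pods are up in the efk namespace:

kubectl get pods -n efk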

6 Verify fluentd

Open your browser and visit https://jiuxi.kibana.org (you need to add a resolution record for this domain to your /etc/hosts file; note that the IP must be the IP of the host where the nginx-ingress-controller pod runs).
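For example, the /etc/hosts entry takes this form (the IP below is a placeholder; substitute the IP of the host running the nginx-ingress-controller pod):

<nginx-ingress-controller-host-ip>   jiuxi.kibana.org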

Select "management" -> "Index Patterns":

Click the link "Create index pattern":

Select the specified index pattern:
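Because logstash_format true is set in output.conf, fluentd writes daily indices named logstash-YYYY.MM.DD, so the pattern to enter is typically logstash-*, and the time filter field on the next screen is typically @timestamp. If you want to confirm which indices exist, you can query Elasticsearch directly (a hedged check that assumes you first port-forward the elasticsearch-client service):

kubectl port-forward -n efk svc/elasticsearch-client 9200:9200 &

curl http://localhost:9200/_cat/indices?v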

Click "next step":

Select a filter field name:

After the index pattern is created successfully, select Discover:

Select the specified index pattern:


With that, the helm3 installation of fluentd is complete, and the entire EFK stack is configured successfully.
