InfluxDB lets developers build IoT, analytics, and monitoring software. It is purpose-built to handle the massive volumes of time-stamped data produced by countless sensors, applications, and pieces of infrastructure. I had seen forum posts about feeding Home Assistant data into this database, so I decided to give it a try.
0. Installation
I installed InfluxDB in Docker on my Synology NAS: just pull the image, set up the container, and start it.
Once it is running, open the Synology's address on port 8086 in a browser, then follow the prompts to set a username, password, and the other initial details.
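For reference, the same setup can be done from the command line. This is only a sketch: the image tag, container name, and host volume path are assumptions for a typical Synology layout, not values from my actual setup.

```shell
# Pull the InfluxDB 2.x image and start it, exposing the UI/API on 8086.
# The host path /volume1/docker/influxdb is an assumed location for
# persisting the database across container restarts.
docker pull influxdb:2.0
docker run -d --name influxdb \
  -p 8086:8086 \
  -v /volume1/docker/influxdb:/var/lib/influxdb2 \
  influxdb:2.0
```

After this, the browser-based onboarding at port 8086 works the same as when the container is created through the Synology Docker UI.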
1. Breathing Life into It
The database is installed, but there's no data in it yet. No worries, that's next: we'll use Telegraf to monitor system status.
Telegraf, part of the TICK stack, is a plugin-driven server agent for collecting and reporting metrics and data.
Telegraf can pull a wide range of metrics, events, and logs directly from the containers and systems it runs on, fetch metrics from third-party APIs, and even listen for metrics via StatsD and Kafka consumer services.
Telegraf also goes into Docker, following the same steps as before. Just take care when configuring the volume: map docker/telegraf/telegraf.conf on the host to /etc/telegraf/telegraf.conf inside the container, then start it.
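If you prefer the command line over the Synology UI, the same volume mapping can be expressed roughly like this (the container name and host path are assumptions):

```shell
# Mount the edited config file over Telegraf's default config path,
# read-only so the container cannot modify it.
docker run -d --name telegraf \
  -v /volume1/docker/telegraf/telegraf.conf:/etc/telegraf/telegraf.conf:ro \
  telegraf
```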
In the InfluxDB UI, click Load Data -> Telegraf -> Create Configuration
Select System
Fill in the relevant information
Once configured, record the Token and the other details shown on this page
Then click the title of the newly created configuration to view the generated settings.
Paste the contents directly into docker/telegraf/telegraf.conf and restart telegraf. The exported configuration looks like this:
# Configuration for telegraf agent
[agent]
## Default data collection interval for all inputs
interval = "10s"
## Rounds collection interval to 'interval'
## ie, if interval="10s" then always collect on :00, :10, :20, etc.
round_interval = true
## Telegraf will send metrics to outputs in batches of at most
## metric_batch_size metrics.
## This controls the size of writes that Telegraf sends to output plugins.
metric_batch_size = 1000
## For failed writes, telegraf will cache metric_buffer_limit metrics for each
## output, and will flush this buffer on a successful write. Oldest metrics
## are dropped first when this buffer fills.
## This buffer only fills when writes fail to output plugin(s).
metric_buffer_limit = 10000
## Collection jitter is used to jitter the collection by a random amount.
## Each plugin will sleep for a random time within jitter before collecting.
## This can be used to avoid many plugins querying things like sysfs at the
## same time, which can have a measurable effect on the system.
collection_jitter = "0s"
## Default flushing interval for all outputs. Maximum flush_interval will be
## flush_interval + flush_jitter
flush_interval = "10s"
## Jitter the flush interval by a random amount. This is primarily to avoid
## large write spikes for users running a large number of telegraf instances.
## ie, a jitter of 5s and interval 10s means flushes will happen every 10-15s
flush_jitter = "0s"
## By default or when set to "0s", precision will be set to the same
## timestamp order as the collection interval, with the maximum being 1s.
## ie, when interval = "10s", precision will be "1s"
## when interval = "250ms", precision will be "1ms"
## Precision will NOT be used for service inputs. It is up to each individual
## service input to set the timestamp at the appropriate precision.
## Valid time units are "ns", "us" (or "µs"), "ms", "s".
precision = ""
## Logging configuration:
## Run telegraf with debug log messages.
debug = false
## Run telegraf in quiet mode (error log messages only).
quiet = false
## Specify the log file name. The empty string means to log to stderr.
logfile = ""
## Override default hostname, if empty use os.Hostname()
hostname = ""
## If set to true, do not set the "host" tag in the telegraf agent.
omit_hostname = false
[[outputs.influxdb_v2]]
## The URLs of the InfluxDB cluster nodes.
##
## Multiple URLs can be specified for a single cluster, only ONE of the
## urls will be written to each interval.
## ex: http://127.0.0.1:8086
urls = ["http://192.XXX.XX.XXX:8086"]
## Token for authentication.
token = "XXXXXXXX"
## Organization is the name of the organization you wish to write to; must exist.
organization = "QQQQQ.com"
## Destination bucket to write into.
bucket = "qqqqq"
[[inputs.cpu]]
## Whether to report per-cpu stats or not
percpu = true
## Whether to report total system cpu stats or not
totalcpu = true
## If true, collect raw CPU time metrics.
collect_cpu_time = false
## If true, compute and report the sum of all non-idle CPU states.
report_active = false
[[inputs.disk]]
## By default stats will be gathered for all mount points.
## Set mount_points will restrict the stats to only the specified mount points.
# mount_points = ["/"]
## Ignore mount points by filesystem type.
ignore_fs = ["tmpfs", "devtmpfs", "devfs", "overlay", "aufs", "squashfs"]
[[inputs.diskio]]
[[inputs.mem]]
[[inputs.net]]
[[inputs.processes]]
[[inputs.swap]]
[[inputs.system]]
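Before relying on the restarted container, the exported file can be sanity-checked with Telegraf's built-in test mode, which runs every input once and prints the gathered metrics to stdout without writing anything to InfluxDB (the container name `telegraf` is an assumption):

```shell
# Run all configured inputs once and print the resulting metrics;
# parse errors in telegraf.conf will surface here immediately.
docker exec telegraf telegraf --config /etc/telegraf/telegraf.conf --test
```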
2. Viewing the Results
That completes the setup. Head over to the Dashboards tab to see charts of the collected metrics.
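Besides the Dashboards page, you can also confirm that data is arriving with a quick Flux query through the influx CLI inside the container. The bucket name matches the sample config above; the org and token are assumed to already be configured for the CLI:

```shell
# Fetch a few recent CPU measurements to verify Telegraf is writing data.
docker exec influxdb influx query '
from(bucket: "qqqqq")
  |> range(start: -15m)
  |> filter(fn: (r) => r._measurement == "cpu")
  |> limit(n: 5)
'
```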
3. Wrap-up
Next I plan to push Home Assistant's data over as well and see how that works out.