Commit ad0c9533 authored by Carlos Camacho

Fix typos across the docs repo.

Fixed some blog and documentation typos.
parent 3db16748
@@ -161,7 +161,7 @@ instance or entire service disappears however, we call `srvs.handle` using the
 `srvs.delete` method instead.
 We finish each update by another call to `srvs.persist` to write out the
-changes to the file Promtheus is watching.
+changes to the file Prometheus is watching.
 ### Modification methods
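For context, "the file Prometheus is watching" is what file-based service discovery (`file_sd_configs`) consumes: each call to `srvs.persist` rewrites a target file that Prometheus re-reads on change. A minimal sketch of that wiring, with the file name, targets, and labels as illustrative assumptions rather than values from the post:

```yaml
# prometheus.yml -- scrape job that watches the file the integration keeps
# rewriting; the job name and file name are illustrative assumptions.
scrape_configs:
  - job_name: 'custom-sd'
    file_sd_configs:
      - files:
          - 'custom_sd.yml'   # re-read by Prometheus whenever it changes

# custom_sd.yml -- the target file written out on each update
# (file_sd accepts YAML or JSON); the targets and labels are made up:
# - targets: ['10.0.0.5:9100', '10.0.0.6:9100']
#   labels:
#     service: 'billing'
```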
...
@@ -112,7 +112,7 @@ container, so any changes should be kept out of the container. Another problem
 was the lack of change tracking in Grafana.
 We have thus decided to write a generator which takes YAML maintained within
-git and generates JSON configs for Grafana dashboards. It is furthemore able to
+git and generates JSON configs for Grafana dashboards. It is furthermore able to
 deploy dashboards to Grafana started in a fresh container without the need for
 persisting changes made into the container. This provides you with automation,
 repeatability, and auditing.
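A purely hypothetical sketch of the idea (this is not the generator's actual schema): a small, reviewable YAML definition lives in git, is rendered into Grafana's JSON dashboard model at deploy time, and is uploaded to a freshly started Grafana over its HTTP API rather than saved inside the container.

```yaml
# Hypothetical dashboard definition, for illustration only -- not the schema
# of the generator described above. It lives in git, goes through review,
# and is rendered into Grafana's JSON dashboard model when deploying.
dashboard:
  title: Service overview
  rows:
    - title: HTTP
      panels:
        - title: Request rate
          expr: rate(http_requests_total[5m])
        - title: Error ratio
          expr: rate(http_requests_total{code=~"5.."}[5m]) / rate(http_requests_total[5m])
```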
@@ -125,7 +125,7 @@ We are pleased to announce that this tool is also now available under an Apache
 An improvement which we saw immediately was the stability of Prometheus. We
 were fighting with stability and scalability of Graphite prior to this, so
-getting that sorted was a great win for us. Furthemore the speed and stability
+getting that sorted was a great win for us. Furthermore the speed and stability
 of Prometheus made access to metrics very easy for developers. Prometheus is
 really helping us to embrace the DevOps culture.
...
@@ -6,7 +6,7 @@ author_name: Brian Brazil
 ---
 *Next in our series of interviews with users of Prometheus, DigitalOcean talks
-about how they use Promethus. Carlos Amedee also talked about [the social
+about how they use Prometheus. Carlos Amedee also talked about [the social
 aspects of the rollout](https://www.youtube.com/watch?v=ieo3lGBHcy8) at PromCon
 2016.*
...
@@ -240,7 +240,7 @@ Azure SD configurations allow retrieving scrape targets from Azure VMs.
 The following meta labels are available on targets during relabeling:
 * `__meta_azure_machine_id`: the machine ID
-* `__meta_azure_machine_rescource_group`: the machine's resource group
+* `__meta_azure_machine_resource_group`: the machine's resource group
 * `__meta_azure_machine_location`: the location the machine runs in
 * `__meta_azure_machine_private_ip`: the machine's private IP
 * `__meta_azure_tag_<tagname>`: each tag value of the machine
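The meta labels listed above are typically consumed via `relabel_configs`. A sketch of that usage; the job name, subscription placeholder, and the tag name `environment` are illustrative assumptions, not part of the documented label set:

```yaml
scrape_configs:
  - job_name: 'azure-nodes'            # illustrative job name
    azure_sd_configs:
      - subscription_id: '<subscription id>'
        # tenant_id, client_id, and client_secret omitted here for brevity
    relabel_configs:
      # Copy selected __meta_azure_* labels onto the scraped targets.
      - source_labels: [__meta_azure_machine_resource_group]
        target_label: resource_group
      - source_labels: [__meta_azure_machine_location]
        target_label: location
      # Tags surface as __meta_azure_tag_<tagname>; 'environment' is assumed.
      - source_labels: [__meta_azure_tag_environment]
        target_label: environment
```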
...@@ -580,7 +580,7 @@ Nerve SD configurations allow retrieving scrape targets from [AirBnB's Nerve] ...@@ -580,7 +580,7 @@ Nerve SD configurations allow retrieving scrape targets from [AirBnB's Nerve]
The following meta labels are available on targets during relabeling: The following meta labels are available on targets during relabeling:
* `__meta_nerve_path`: the full path to the emdpoint node in Zookeeper * `__meta_nerve_path`: the full path to the endpoint node in Zookeeper
* `__meta_nerve_endpoint_host`: the host of the endpoint * `__meta_nerve_endpoint_host`: the host of the endpoint
* `__meta_nerve_endpoint_port`: the port of the endpoint * `__meta_nerve_endpoint_port`: the port of the endpoint
* `__meta_nerve_endpoint_name`: the name of the endpoint * `__meta_nerve_endpoint_name`: the name of the endpoint
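As with the Azure labels, a short sketch of the consuming side may help; the ZooKeeper address, registration path, and job name below are assumptions for illustration:

```yaml
scrape_configs:
  - job_name: 'nerve-services'               # illustrative job name
    nerve_sd_configs:
      - servers:
          - 'zookeeper.example.org:2181'     # assumed ZooKeeper address
        paths:
          - '/nerve/services'                # assumed registration path
    relabel_configs:
      # Expose the registered endpoint name as a regular target label.
      - source_labels: [__meta_nerve_endpoint_name]
        target_label: endpoint
```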
...
@@ -33,7 +33,7 @@ will come in handy. As a rule of thumb, you should have at least three
 times more RAM available than needed by the memory chunks alone.
 PromQL queries that involve a high number of time series will make heavy use of
-the LevelDB backed indices. If you need to run queries of that kind, tweaking
+the LevelDB backed indexes. If you need to run queries of that kind, tweaking
 the index cache sizes might be required. The following flags are relevant:
 * `-storage.local.index-cache-size.label-name-to-label-values`: For regular
@@ -115,7 +115,7 @@ value. Three times the number of series is a good first approximation. But keep
 the implication for memory usage (see above) in mind.
 If you have more active time series than configured memory chunks, Prometheus
-will inevitably run into a sitation where it has to keep more chunks in memory
+will inevitably run into a situation where it has to keep more chunks in memory
 than configured. If the number of chunks goes more than 10% above the
 configured limit, Prometheus will throttle ingestion of more samples (by
 skipping scrapes and rule evaluations) until the configured value is exceeded
...