Commit 426fab7c authored by Julius Volz

Initial commit of the Prometheus docs site.

# For projects using nanoc (http://nanoc.ws/)
# Default location for output, needs to match output_dir's value found in config.yaml
output/
# Temporary file directory
tmp/
# Crash Log
crash.log
source 'https://rubygems.org'
gem 'nanoc'
gem 'adsf'
gem 'kramdown'
gem 'guard-nanoc'
gem 'nokogiri'
gem 'redcarpet'
gem 'pygments.rb'
GEM
remote: https://rubygems.org/
specs:
adsf (1.2.0)
rack (>= 1.0.0)
celluloid (0.16.0)
timers (~> 4.0.0)
coderay (1.1.0)
colored (1.2)
cri (2.6.1)
colored (~> 1.2)
ffi (1.9.6)
formatador (0.2.5)
guard (2.6.1)
formatador (>= 0.2.4)
listen (~> 2.7)
lumberjack (~> 1.0)
pry (>= 0.9.12)
thor (>= 0.18.1)
guard-nanoc (1.0.2)
guard (>= 1.8.0)
nanoc (>= 3.6.3)
hitimes (1.2.2)
kramdown (1.4.2)
listen (2.7.11)
celluloid (>= 0.15.2)
rb-fsevent (>= 0.9.3)
rb-inotify (>= 0.9)
lumberjack (1.0.9)
method_source (0.8.2)
mini_portile (0.6.0)
nanoc (3.7.3)
cri (~> 2.3)
nokogiri (1.6.3.1)
mini_portile (= 0.6.0)
posix-spawn (0.3.9)
pry (0.10.1)
coderay (~> 1.1.0)
method_source (~> 0.8.1)
slop (~> 3.4)
pygments.rb (0.6.0)
posix-spawn (~> 0.3.6)
yajl-ruby (~> 1.1.0)
rack (1.5.2)
rb-fsevent (0.9.4)
rb-inotify (0.9.5)
ffi (>= 0.5.0)
redcarpet (3.2.0)
slop (3.6.0)
thor (0.19.1)
timers (4.0.1)
hitimes
yajl-ruby (1.1.0)
PLATFORMS
ruby
DEPENDENCIES
adsf
guard-nanoc
kramdown
nanoc
nokogiri
pygments.rb
redcarpet
# A sample Guardfile
# More info at https://github.com/guard/guard#readme
guard 'nanoc' do
watch('nanoc.yaml') # Change this to config.yaml if you use the old config file name
watch('Rules')
watch(%r{^(content|layouts|lib|static)/.*$})
end
# Prometheus Documentation
This repository contains both the content and the static-site generator code for the
Prometheus documentation site.
## Prerequisites
You need to have a working Ruby environment set up and then install the
necessary gems:
cd docs
bundle
## Building
To generate the static site, run:
bundle exec nanoc
The resulting static site will be stored in the `output` directory.
## Development Server
To run a local server that displays the generated site, run:
# Rebuild the site whenever relevant files change:
bundle exec guard
# Start the local development server:
bundle exec nanoc view
You should now be able to view the generated site at
[http://localhost:3000/](http://localhost:3000/).
## License
Apache License 2.0, see [LICENSE](LICENSE).
#!/usr/bin/env ruby
# A few helpful tips about the Rules file:
#
# * The string given to #compile and #route are matching patterns for
# identifiers--not for paths. Therefore, you can’t match on extension.
#
# * The order of rules is important: for each item, only the first matching
# rule is applied.
#
# * Item identifiers start and end with a slash (e.g. “/about/” for the file
# “content/about.html”). To select all children, grandchildren, … of an
# item, use the pattern “/about/*/”; “/about/*” will also select the parent,
# because “*” matches zero or more characters.
compile '/assets/*' do
end
route '/assets/*' do
# /assets/foo.html/ → /foo.html
item.identifier[0..-2]
end
compile '*' do
if item[:extension] == 'md'
#filter :kramdown
filter :redcarpet, options: {hard_wrap: true, filter_html: true, autolink: true, no_intraemphasis: true, fenced_code_blocks: true, gh_blockcode: true}
filter :add_anchors
filter :bootstrappify
filter :admonition
filter :colorize_syntax, :default_colorizer => :pygmentsrb
layout 'default'
elsif item[:extension] == 'css'
# don’t filter stylesheets
elsif item.binary?
# don’t filter binary items
else
layout 'default'
end
end
route '*' do
if item[:extension] == 'css'
# Write item with identifier /foo/ to /foo.css
item.identifier.chop + '.css'
elsif item.binary?
# Write item with identifier /foo/ to /foo.ext
item.identifier.chop + '.' + item[:extension]
else
# Write item with identifier /foo/ to /foo/index.html
item.identifier + 'index.html'
end
end
#passthrough '/assets/*'
layout '*', :erb
---
title: Community
sort_rank: 6
nav_icon: users
---
---
title: Automatic Labels and Synthetic Metrics
sort_rank: 5
---
# Automatic Labels and Synthetic Metrics
## Automatically Attached Labels
When Prometheus scrapes a target, it attaches some labels automatically to the
scraped metrics timeseries which serve to identify the scraped target:
* `job`: The Prometheus job name from which the timeseries was scraped.
* `instance`: The specific instance/endpoint of the job which was scraped.
If either of these labels is already present in the scraped data, it is not
replaced. Instead, Prometheus adds a new label with `exporter_` prepended to
the label name: `exporter_job` or `exporter_instance`. The same pattern holds
true for any base labels manually supplied for a target group.
## Synthetic Timeseries
Prometheus also generates some timeseries internally which are not directly
taken from the scraped data:
* `up`: for each endpoint scrape, a sample of the form `up{job="...", instance="..."}` is stored, with a value of `1.0` indicating that the target was successfully scraped (it is up) and `0.0` indicating that the endpoint is down.
* `ALERTS`: for pending and firing alerts, a timeseries of the form `ALERTS{alertname="...", alertstate="pending|firing",...alertlabels...}` is written out. The sample value is 1.0 as long as the alert is in the indicated active (pending/firing) state, but a single 0.0 value gets written out when an alert transitions from active to inactive state.
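As a quick illustration (assuming a scrape job named `prometheus`, as in the codelab elsewhere in these docs), the synthetic `up` series can be queried like any other metric:

```
up{job="prometheus"}
```

A value of `1.0` for an element of the result means that the corresponding endpoint was up at its last scrape.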
---
title: Concepts
sort_rank: 4
nav_icon: flask
---
---
title: Contributing
sort_rank: 7
nav_icon: code-fork
---
---
title: Prometheus Documentation
is_hidden: true
---
This is the central home for all Prometheus documentation.
TODO: put some nice intro text here.
---
title: Configuration
nav_icon: sliders
---
# Configuration
The canonical definition of each configuration field is contained in the
protocol buffer schema definition at
https://github.com/prometheus/prometheus/blob/master/config/config.proto.
---
title: For Operators
sort_rank: 3
nav_icon: cog
---
---
title: Storage
nav_icon: database
---
This is about the storage.
---
title: Best Practices
sort_rank: 5
nav_icon: thumbs-o-up
---
---
title: Start Here
sort_rank: 1
nav_icon: hand-o-right
---
---
title: Download and Install
---
# Download and Install Prometheus
## Downloading
## Installing
---
title: Why Prometheus?
---
## What is Prometheus?
[Prometheus](https://github.com/prometheus) is an open-source systems
monitoring and alerting toolkit built at [SoundCloud](http://soundcloud.com).
Since its inception in 2012, it has become the standard for instrumenting new
services at SoundCloud. Prometheus' main distinguishing features as compared to
other monitoring systems are:
- a **multi-dimensional** data model (via key/value pairs attached to timeseries)
- a [**flexible query language**](/using/querying/basics/)
to leverage this dimensionality
- no reliance on distributed storage; **single server nodes are autonomous**
- timeseries collection happens via a **pull model** over HTTP
- **pushing timeseries** is supported via an intermediary gateway
- targets are discovered via **service discovery** or **static configuration**
- multiple modes of **graphing and dashboarding support**
- **federation support** coming soon
The Prometheus ecosystem consists of multiple components, many of which are
optional:
- the main [Prometheus server](https://github.com/prometheus/prometheus) which scrapes and stores timeseries data
- client libraries for instrumenting application code
- a [push gateway](https://github.com/prometheus/pushgateway) for supporting short-lived jobs
- a GUI-based dashboard builder ([PromDash](https://github.com/prometheus/promdash)) based on Rails/SQL
- special-purpose exporters (for HAProxy, StatsD, Ganglia, etc.)
- an (experimental) [alert manager](https://github.com/prometheus/alertmanager)
- a [command-line querying tool](https://github.com/prometheus/prometheus_cli)
- various support tools
## When does it fit?
Prometheus works well both for machine-based monitoring as well as monitoring
of highly dynamic service-oriented architectures. In a world of microservices,
its support for multi-dimensional data collection and querying is a particular
strength.
TODO: highlight advantage of not depending on distributed storage.
---
title: Intro Codelab
sort_rank: 1
---
# Intro Codelab
This guide is a "Hello World"-style codelab which shows how to install,
configure, and use Prometheus in a simple example setup. You'll build and run
Prometheus locally, configure it to scrape itself and an example application,
and then work with queries, rules, and graphs to make use of the collected
timeseries data.
## Getting Prometheus
First, fetch the latest Prometheus collector server code:
```language-bash
git clone git@github.com:prometheus/prometheus.git
```
## Building Prometheus
Building Prometheus currently still requires a `make` step, as some parts of
the source are autogenerated (protobufs, web assets, lexer/parser files).
```language-bash
cd prometheus
make build
```
## Configuring Prometheus to Monitor Itself
Prometheus collects metrics from monitored targets by scraping metrics HTTP
endpoints on these targets. Since Prometheus also exposes data in the same
manner about itself, it may also be used to scrape and monitor its own health.
While a Prometheus server which collects only data about itself is not very
useful in practice, it's a good starting example. Save the following basic
Prometheus configuration as a file named `prometheus.conf`:
```
# Global default settings.
global: {
scrape_interval: "15s" # By default, scrape targets every 15 seconds.
evaluation_interval: "15s" # By default, evaluate rules every 15 seconds.
# Attach these extra labels to all timeseries collected by this Prometheus instance.
labels: {
label: {
name: "monitor"
value: "codelab-monitor"
}
}
}
# A job definition containing exactly one endpoint to scrape: Prometheus itself.
job: {
# The job name is added as a label `job={job-name}` to any timeseries scraped from this job.
name: "prometheus"
# Override the global default and scrape targets from this job every 5 seconds.
scrape_interval: "5s"
# Let's define a group of targets to scrape for this job. In this case, only one.
target_group: {
# These endpoints are scraped via HTTP.
target: "http://localhost:9090/metrics"
}
}
```
As you might have noticed, Prometheus configuration is supplied in an ASCII
form of
[protocol buffers](https://developers.google.com/protocol-buffers/docs/overview).
The protocol buffer schema definition contains [complete documentation of all
available configuration options](https://github.com/prometheus/prometheus/blob/master/config/config.proto).
## Starting Prometheus
To start Prometheus with your newly created configuration file, change to your
Prometheus build directory and run:
```language-bash
# Start Prometheus.
# By default, Prometheus stores its database in /tmp/metrics (flag -metricsStoragePath).
./prometheus -configFile=prometheus.conf
```
Prometheus should start up and show a status page about itself at
[http://localhost:9090/](http://localhost:9090/). Give it a couple of seconds to start collecting data
about itself from its own HTTP metrics endpoint.
You can also verify that Prometheus is serving metrics about itself by
navigating to its metrics exposure endpoint:
[http://localhost:9090/metrics](http://localhost:9090/metrics)
## Using the Expression Browser
Let's try looking at some data that Prometheus has collected about itself. To
use Prometheus' built-in expression browser, navigate to
[http://localhost:9090/](http://localhost:9090/) and choose the "Tabular" view in the "Graph & Console"
tab.
As you can gather from [http://localhost:9090/metrics](http://localhost:9090/metrics), one metric that
Prometheus exports about itself is called
`prometheus_metric_disk_latency_microseconds`. Go ahead and enter this into the
expression console:
```
prometheus_metric_disk_latency_microseconds
```
This should return a lot of different timeseries (along with the latest value
recorded for each), all with the metric name
`prometheus_metric_disk_latency_microseconds`, but with different labels. These
labels designate different latency percentiles, operation types, and operation
results (success, failure).
To count the number of returned timeseries, you could write:
```
count(prometheus_metric_disk_latency_microseconds)
```
If we were only interested in the 99th percentile latencies of, say,
`get_value_at_time` operations, we could use this query to retrieve that
information:
```
prometheus_metric_disk_latency_microseconds{operation="get_value_at_time", percentile="0.990000"}
```
For further details about the expression language, see the
[expression language documentation](/using/querying/basics/).
## Using the Graphing Interface
To graph expressions, navigate to [http://localhost:9090/](http://localhost:9090/) and use the
"Graph" tab.
For example, enter the following expression to graph all latency percentiles
for `get_value_at_time` operations in Prometheus:
```
prometheus_metric_disk_latency_microseconds{operation="get_value_at_time"}
```
Experiment with the graph range parameters and other settings.
## Starting Up Some Sample Targets
Let's make this more interesting and start some example targets for Prometheus
to scrape.
Download the Go client library for Prometheus, and run some random examples
from it that export timeseries with random data:
```bash
# Fetch the client library code:
git clone git@github.com:prometheus/client_golang.git
# You might also want to do this if you didn't download the above repo into your Go package path already:
go get github.com/prometheus/client_golang
# Start 3 example targets in separate terminals (or screen sessions):
cd client_golang/examples/random
go run main.go -listeningAddress=:8080
go run main.go -listeningAddress=:8081
go run main.go -listeningAddress=:8082
```
You should now have example targets listening on
[http://localhost:8080/metrics](http://localhost:8080/metrics), [http://localhost:8081/metrics](http://localhost:8081/metrics), and
[http://localhost:8082/metrics](http://localhost:8082/metrics).
## Configuring Prometheus to Monitor the Sample Targets
Now we'll configure Prometheus to scrape these new targets. Let's group these
three endpoints into a job we call `random-example`. However, imagine that the
first two endpoints are production targets, while the third one represents a
canary instance. To model this in Prometheus, we can add several groups of
endpoints to a single job, adding extra labels to each group of targets. In
this example, we'll add the `group="production"` label to the first group of
targets, while adding `group="canary"` to the second.
To achieve this, add the following job definition to your `prometheus.conf` and
restart your Prometheus instance:
```
job: {
name: "random-example"
# The "production" targets for this job.
target_group: {
target: "http://localhost:8080/metrics"
target: "http://localhost:8081/metrics"
labels: {
label: {
name: "group"
value: "production"
}
}
}
# The "canary" targets for this job.
target_group: {
target: "http://localhost:8082/metrics"
labels: {
label: {
name: "group"
value: "canary"
}
}
}
}
```
Go to the expression browser and verify that Prometheus now has information
about timeseries that these example endpoints expose, e.g. the
`rpc_calls_total` metric.
## Configure Rules For Aggregating Scraped Data into New Timeseries
Manually entering expressions every time you need them can get cumbersome
and might also be slow to compute in some cases. Prometheus allows you to
periodically record expressions into completely new timeseries via configured
rules. Let's say we're interested in recording the per-second rate of
`rpc_calls_total` averaged over all instances as measured over the last 5
minutes. We could write this as:
```
avg(rate(rpc_calls_total[5m]))
```
To record this expression as a new timeseries called `rpc_calls_rate`, create a
file with the following recording rule and save it as `prometheus.rules`:
```
rpc_calls_rate_mean = avg(rate(rpc_calls_total[5m]))
```
To make Prometheus pick up this new rule, add a `rule_files` statement to the
global configuration section in your `prometheus.conf`. The global section
should now look like this:
```
# Global default settings.
global: {
scrape_interval: "15s" # By default, scrape targets every 15 seconds.
evaluation_interval: "15s" # By default, evaluate rules every 15 seconds.
# Attach these extra labels to all timeseries collected by this Prometheus instance.
labels: {
label: {
name: "monitor"
value: "codelab-monitor"
}
}
# Load and evaluate rules in this file every 'evaluation_interval' seconds. This field may be repeated.
rule_file: "prometheus.rules"
}
```
Restart Prometheus with the new configuration and verify that a new timeseries
with the metric name `rpc_calls_rate_mean` is now available by querying it
through the expression browser or graphing it.
---
title: Graphing and Dashboards
sort_rank: 3
---
---
title: For Users
sort_rank: 2
nav_icon: line-chart
---
---
title: Instrumenting Your Code
sort_rank: 2
---
# Instrumenting your code
If you want to monitor services which do not have existing Prometheus
instrumentation, you will need to instrument your application's code via one of
the Prometheus client libraries.
First, familiarize yourself with the Prometheus-supported
[metrics types](/concepts/metric_types/). To use these types programmatically, see
your specific client library's documentation.
Choose a Prometheus client library that matches the language in which your
application is written. This lets you define and expose internal metrics via an
HTTP endpoint on your application’s instance:
- [Go](https://github.com/prometheus/client_golang)
- [Java or Scala](https://github.com/prometheus/client_java)
- [Ruby](https://github.com/prometheus/client_ruby)
When Prometheus scrapes your instance's HTTP endpoint, the client library
sends the current state of all tracked metrics to the server.
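For illustration, here is what a scraped endpoint's output might look like in Prometheus' text-based exposition format (the metric name and values here are made up; the client libraries generate this format for you):

```
# HELP http_requests_total The total number of HTTP requests handled.
# TYPE http_requests_total counter
http_requests_total{method="post",code="200"} 1027
http_requests_total{method="get",code="200"} 5210
```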
---
title: Pushing Data
sort_rank: 6
---
# Pushing Data
Occasionally you will need to monitor components which cannot be scraped: they
might be behind a firewall, or they might be too short-lived to expose data
reliably via the pull model. The
[push gateway](https://github.com/prometheus/pushgateway) allows you to push
timeseries from these components to an intermediary job which Prometheus can
scrape.
For more information on installing and using the push gateway, see the
project's
[README.md](https://github.com/prometheus/pushgateway/blob/master/README.md).
---
title: The Basics
sort_rank: 1
---
# Querying Prometheus
## Overview
Prometheus provides a functional expression language that lets the user select
and aggregate timeseries data in real-time. The result of an expression can
either be shown as a graph, viewed as data in the expression browser, or
consumed and further processed by external systems via the HTTP API.
## Examples
This document is meant as a reference. For learning, it might be easier to
start with a couple of examples. See the [Expression Language Examples](/using/querying/examples).
## Basic Concepts
### Timeseries
Data in Prometheus is stored as timeseries, which are uniquely identified by a
metric name and a set of arbitrary label/value pairs. Each timeseries can have
one or more data points attached to it. Data points are timestamp/value pairs.
#### Metric name
The metric name of a timeseries (e.g. `http_requests_total`) specifies the
general feature of a system that is measured. It may contain alpha-numeric
characters, plus underscores and colons.
#### Labels
The label/value pairs which identify a timeseries allow later filtering and
aggregation by these dimensions (e.g. `endpoint`, `response_code`, `instance`). Label keys
are identifiers (alpha-numeric characters plus underscores, but no colons),
while their values may be arbitrary strings.
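Expressed as illustrative regular expressions, these naming rules correspond to roughly the following patterns (note that the first character may not be a digit):

```
metric names: [a-zA-Z_:][a-zA-Z0-9_:]*
label names:  [a-zA-Z_][a-zA-Z0-9_]*
```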
#### Data points
Each timeseries can have one or more data points attached to it, which are
timestamp/value pairs. Values are always encoded as floating-point numbers
(currently 64-bit precision).
## Expression Language Data Types
In Prometheus' expression language, an expression or sub-expression can
evaluate to one of four types:
* **string**
* **scalar** - simple numeric floating point value
* **instant vector** - vector of multiple timeseries, containing a single sample for each timeseries, with all samples sharing the same (instant) timestamp
* **range vector** - vector of multiple timeseries, containing a range of data points over time for each timeseries
Depending on the use-case (e.g. when graphing vs. displaying the output of an
expression), only some of these types are legal as the result from a
user-specified expression. For example, an expression that returns an instant
vector is the only type that can be directly graphed.
## Literals
### String Literals
Strings may be specified as literals in single or double quotes.
Example:
"this is a string"
### Float Literals
Scalar float values can be literally written as numbers of the form
`[-](digits)[.(digits)]`.
-2.43
## Timeseries Selectors
### Instant Vector Selectors
Instant vector selectors allow the selection of a set of timeseries and a
single sample value for each at a given timestamp (instant): in the simplest
form, only a metric name is specified. This results in an instant vector
containing elements for all timeseries that have this metric name.
This example selects all timeseries that have the `http_requests_total` metric
name:
http_requests_total
It is possible to filter these timeseries further by appending a set of labels
to match in curly braces (`{}`).
This example selects only those timeseries with the `http_requests_total`
metric name that also have the `job` label set to `prometheus` and their
`group` label set to `canary`:
http_requests_total{job="prometheus",group="canary"}
It is also possible to negatively match a label value, or to match label values
against regular expressions. The following label matching operators exist:
* `=`: Select labels that are exactly equal to the provided string.
* `!=`: Select labels that are not equal to the provided string.
* `=~`: Select labels that regex-match the provided string (or substring).
* `!~`: Select labels that do not regex-match the provided string (or substring).
For example, this selects all `http_requests_total` timeseries for `staging`,
`testing`, and `development` environments and HTTP methods other than `GET`.
http_requests_total{environment=~"staging|testing|development",method!="GET"}
### Range Vector Selectors
Range vector literals work like instant vector literals, except that they
select a range of samples back from the current instant. Syntactically, a range
duration is appended in square brackets (`[]`) at the end of a vector selector
to specify how far back in time values should be fetched for each resulting
range vector element.
Time durations are specified as a number, followed immediately by one of the
following units:
* `s` - seconds
* `m` - minutes
* `h` - hours
* `d` - days
* `w` - weeks
* `y` - years
In this example, we select all the values we have recorded within the last 5
minutes for all timeseries that have the metric name `http_requests_total` and
a `job` label set to `prometheus`:
http_requests_total{job="prometheus"}[5m]
## Operators
Prometheus supports many binary and aggregation operators. These are described
in detail on the [Operators](/using/querying/operators/) page.
## Functions
Prometheus supports several functions to operate on data. These are described
in detail on the [Functions](/using/querying/functions/) page.
## Gotchas
TODO:
* staleness and interpolation
* ...
---
title: Examples
sort_rank: 4
---
# Query Examples
## Simple literals
Return (as a sample vector) all timeseries with the metric
`http_requests_total`:
http_requests_total
Return (as a sample vector) all timeseries with the metric
`http_requests_total` and the given `job` and `group` labels:
http_requests_total{job="prometheus", group="canary"}
Return a whole range of time (in this case 5 minutes) for the same vector,
making it a range vector:
http_requests_total{job="prometheus", group="canary"}[5m]
## Using Functions, Operators, etc.
Return (as a sample vector) the per-second rate for all timeseries with the
`http_requests_total` metric name, as measured over the last 5 minutes:
rate(http_requests_total[5m])
Let's say that the `http_requests_total` timeseries all have the labels `job`
(fanout by job name) and `instance` (fanout by instance of the job). We might
want to sum over the rate of all instances, so we get fewer output timeseries:
sum(rate(http_requests_total[5m]))
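If we instead wanted one output timeseries per job, a sketch of the same aggregation preserving the `job` dimension with a `by` clause would be:

```
sum(rate(http_requests_total[5m])) by (job)
```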
---
title: Functions
sort_rank: 3
---
# Functions
## abs()
`abs(v vector)` returns the input vector with all sample values converted to
their absolute value.
## count_scalar()
`count_scalar(v instant-vector)` returns the number of elements in a timeseries
vector as a scalar. This is in contrast to the `count()` aggregation operator,
which always returns a vector (an empty one if the input vector is empty) and
allows grouping by labels via a `by` clause.
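For example, the following sketch (using the synthetic `up` metric and a hypothetical job name) returns the number of matching series as a scalar, yielding `0` rather than an empty vector when no series match:

```
count_scalar(up{job="random-example"})
```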
## delta()
`delta(v range-vector, counter bool)` calculates the difference between the
first and last value of each timeseries element in a range vector `v`,
returning an instant vector with the given deltas and equivalent labels. If
`counter` is set to `1` (`true`), the timeseries in the range vector are
treated as monotonically increasing counters. Breaks in monotonicity (such as
counter resets due to target restarts) are automatically adjusted for. Setting
`counter` to `0` (`false`) turns this behavior off.
Example which returns the total number of HTTP requests counted within the last
5 minutes, per timeseries in the range vector:
```
delta(http_requests{job="api-server"}[5m], 1)
```
Example which returns the difference in CPU temperature between now and 2 hours
ago:
```
delta(cpu_temp_celsius{host="zeus"}[2h], 0)
```
## drop_common_labels()
`drop_common_labels(instant-vector)` drops all labels that have the same name
and value across all series in the input vector.
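For example, if every series returned by a selector carries the same `job="api-server"` label (following the example metric used for the other functions on this page), that label adds no distinguishing information and is dropped from the output:

```
drop_common_labels(http_requests{job="api-server"})
```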
## rate()
`rate(v range-vector)` behaves like `delta()`, with two differences:
* the returned delta is converted into a per-second rate, according to the respective interval
* the `counter` argument is implicitly set to `1` (`true`)
Example call which returns the per-second rate of HTTP requests as measured
over the last 5 minutes, per timeseries in the range vector:
```
rate(http_requests{job="api-server"}[5m])
```
## scalar()
Given a single-element input vector, `scalar(v instant-vector)` returns the
sample value of that single element as a scalar. If the input vector doesn't
have exactly one element, `scalar` will return `NaN`.
## sort()
`sort(v instant-vector)` returns vector elements sorted by their sample values,
in ascending order.
## sort_desc()
Same as `sort`, but sorts in descending order.
## time()
`time()` returns the number of seconds since January 1, 1970 UTC. Note that
this doesn't actually return the current time, but the time at which the
expression is to be evaluated.
## *_over_time(): Aggregating values within series over time
The following functions allow aggregating each series of a given range vector
over time and return an instant vector with per-series aggregation results:
- `avg_over_time(range-vector)`: the average value of all points in the specified interval.
- `min_over_time(range-vector)`: the minimum value of all points in the specified interval.
- `max_over_time(range-vector)`: the maximum value of all points in the specified interval.
- `sum_over_time(range-vector)`: the sum of all values in the specified interval.
- `count_over_time(range-vector)`: the count of all values in the specified interval.
## topk() / bottomk()
`topk(k integer, v instant-vector)` returns the `k` largest elements of `v` by
sample value.
`bottomk(k integer, v instant-vector)` returns the `k` smallest elements of `v`
by sample value.
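For example, to show only the 3 series with the largest sample values (using the same example metric as above):

```
topk(3, http_requests{job="api-server"})
```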
---
title: Query Language
sort_rank: 3
---
---
title: Operators
sort_rank: 2
---
# Operators
## Arithmetic Binary Operators
The following binary arithmetic operators exist in Prometheus:
* `+` (addition)
* `-` (subtraction)
* `*` (multiplication)
* `/` (division)
* `%` (modulo)
Binary arithmetic operators are defined between scalar/scalar, vector/scalar,
and vector/vector value pairs.
**Between two scalars**, the behavior is obvious: they evaluate to another
scalar that is the result of the operator applied to both scalar operands.
**Between an instant vector and a scalar**, the operator is applied to the
value of every data sample in the vector. E.g. if a timeseries instant vector
is multiplied by 2, the result is another vector in which every sample value of
the original vector is multiplied by 2.
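As a sketch, this divides every sample value of the `http_requests_total` vector by a constant:

```
http_requests_total / 1000
```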
**Between two instant vectors**, a binary arithmetic operator only applies to
vector elements that have identical sets of labels between the two vectors.
Vector elements that don't find an exact label match on the other side get
dropped from the result. The metric name of the result vector is carried over
from the left hand side of the expression.
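For example, assuming two hypothetical metrics whose series have matching label sets, the per-series error ratio could be written as:

```
http_errors_total / http_requests_total
```

Elements that lack an exactly matching label set on the other side are dropped from the result.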
## Comparison / Filter Binary Operators
The following binary comparison/filter operators exist in Prometheus:
* `>` (greater-than)
* `<` (less-than)
* `>=` (greater-or-equal)
* `<=` (less-or-equal)
Comparison/filter operators are defined between scalar/scalar, vector/scalar,
and vector/vector value pairs.
**Between two scalars**, these operators result in another scalar that is
either `0` (`false`) or `1` (`true`), depending on the comparison result.
**Between an instant vector and a scalar**, these operators are applied to the
value of every data sample in the vector, and vector elements between which the
comparison result is `false` get dropped from the result vector.
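For example, this sketch returns only those elements of `http_requests_total` whose current sample value exceeds 1000; all other elements are filtered out:

```
http_requests_total > 1000
```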
**Between two instant vectors**, these operators behave as a filter: They apply
to vector elements that have identical sets of labels between the two vectors.
Vector elements for which the expression evaluates to `false` or which don't
find an exact label match on the other side of the expression get dropped from
the result, while the others get carried over into a result vector with their
original (left-hand-side) metric names and data values.
## Logical/Set Binary Operators
These logical/set binary operators are only defined between instant vectors:
* `and` (intersection)
* `or` (union)
`vector1 and vector2` results in a vector consisting of the elements of
`vector1` for which there are elements in `vector2` with exactly matching
labelsets. Other elements are dropped. The metric name and values are carried
over from the left-hand-side vector.
`vector1 or vector2` results in a vector that contains all original elements
(labelsets + values) of `vector1` and additionally all elements of `vector2`,
which don't have matching labelsets in `vector1`. The metric name is carried
over from the left-hand-side vector in both cases.
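A sketch using the `group` label from the codelab configuration:

```
http_requests_total{group="production"} or http_requests_total{group="canary"}
```

This yields the union of both sets of series; since their labelsets differ in the `group` label, no elements are dropped.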
## Aggregation Operators
Prometheus supports the following built-in aggregation operators that can be
used to aggregate the elements of a single instant vector, resulting in a new
vector of fewer elements with aggregated values:
* `sum` (calculate sum over dimensions)
* `min` (select minimum over dimensions)
* `max` (select maximum over dimensions)
* `avg` (calculate the average over dimensions)
* `count` (count number of elements in the vector)
These operators can either be used to aggregate over **all** label dimensions
or preserve distinct dimensions by including a `by`-clause.
<aggr-op>(<vector expression>) [by (<label list>)] [keeping_extra]
By default, labels that are not listed in the `by` clause will be dropped from
the result vector, even if their label values are identical between all
elements of the vector. The `keeping_extra` clause allows you to keep those extra
labels (labels that are identical between elements, but not in the `by`
clause).
Example:
If the metric `http_requests_total` had timeseries that fan out by
`application`, `instance`, and `group` labels, we could calculate the total
number of seen HTTP requests per application and group over all instances via:
sum(http_requests_total) by (application, group)
If we are just interested in the total of HTTP requests we've seen in **all**
applications, we could simply write:
sum(http_requests_total)
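To additionally retain labels that happen to be identical across all
aggregated elements, append the `keeping_extra` clause to the first example
above:

    sum(http_requests_total) by (application, group) keeping_extra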
---
title: Recording and Alerting Rules
sort_rank: 5
---
# Defining Recording and Alerting Rules
## Configuring Rules
Prometheus supports two types of rules which may be configured and then
evaluated at regular intervals: recording rules and alerting rules. To include
rules in Prometheus, create a file containing the necessary rule statements and
have Prometheus load the file via the `rule_files` field in the [Prometheus
configuration](https://github.com/prometheus/prometheus/blob/master/config/config.proto).
## Syntax-Checking Rules
To quickly check whether a rule file is syntactically correct without starting
a Prometheus server, install and run Prometheus' `rule_checker` tool:
```bash
# If $GOPATH/src/github.com/prometheus/prometheus already exists, update it first:
go get -u github.com/prometheus/prometheus
go install github.com/prometheus/prometheus/tools/rule_checker
rule_checker -ruleFile=/path/to/example.rules
```
When the file is syntactically valid, the checker prints a textual
representation of the parsed rules and exits with a `0` return status.
If there are any syntax errors, it prints an error message and exits with a
`255` return status.
## Recording Rules
Recording rules allow you to precompute frequently needed or computationally
expensive expressions and save their result as a new set of timeseries.
Querying the precomputed result will then often be much faster than executing
the original expression every time it is needed. This is especially useful for
dashboards, which need to query the same expression repeatedly every time they
refresh.
To add a new recording rule, add a line of the following syntax to your rule
file:
<new timeseries name>[{<label overrides>}] = <expression to record>
Some examples:
// Saving the per-job HTTP request count as a new set of timeseries:
job:api_http_requests_total:sum = sum(api_http_requests_total) by (job)
// Drop or rewrite labels in the result timeseries:
new_timeseries{label_to_change="new_value",label_to_drop=""} = old_timeseries
Recording rules are evaluated at the interval specified by the
`evaluation_interval` field in the Prometheus configuration. During each
evaluation cycle, the right-hand-side expression of the rule statement is
evaluated at the current instant in time and the resulting sample vector is
stored as a new set of timeseries with the current timestamp and a new metric
name (and perhaps an overridden set of labels).
## Alerting Rules
Alerting rules allow you to define alert conditions based on Prometheus
expression language expressions and to send notifications about firing alerts
to an external service. Whenever the alert expression results in one or more
vector elements at a given point in time, the alert counts as active for these
elements' label sets.
### Defining Alerting Rules
Alerting rules are defined in the following syntax:
ALERT <alert name>
IF <expression>
[FOR <duration>]
WITH <label set>
SUMMARY "<summary template>"
DESCRIPTION "<description template>"
The optional `FOR` clause causes Prometheus to wait for a certain duration
between first encountering a new expression output vector element (like an
instance with a high HTTP error rate) and counting an alert as firing for this
element. Elements that are active, but not firing yet, are in pending state.
The `WITH` clause allows specifying a set of additional labels to be attached
to the alert. Any existing conflicting labels will be overwritten.
The `SUMMARY` should be a short, human-readable summary of the alert (suitable,
for example, for an email subject line), while the `DESCRIPTION` clause should
provide a longer description. Both string fields allow the inclusion of
template variables derived from the firing vector elements of the alert:
// To insert a firing element's label values:
{{$labels.<labelname>}}
// To insert the numeric expression value of the firing element:
{{$value}}
Examples:
// Alert for any instance that is unreachable for >5 minutes.
ALERT InstanceDown
IF up == 0
FOR 5m
WITH {
severity="page"
}
SUMMARY "Instance {{$labels.instance}} down"
DESCRIPTION "{{$labels.instance}} of job {{$labels.job}} has been down for more than 5 minutes."
// Alert for any instance that has a median request latency >1s.
ALERT ApiHighRequestLatency
IF api_http_request_latencies_ms{quantile="0.5"} > 1000
FOR 1m
WITH {}
SUMMARY "High request latency on {{$labels.instance}}"
DESCRIPTION "{{$labels.instance}} has a median request latency above 1s (current value: {{$value}})"
### Inspecting Alerts During Runtime
To manually inspect which alerts are active (pending or firing), navigate to
the "Alerts" tab of your Prometheus instance. This will show you the exact
label sets for which each defined alert is currently active.
### Sending Alert Notifications
Prometheus's alerting rules are good at figuring out what is broken *right
now*, but they are not a fully-fledged notification solution. Another layer is
needed
to add summarization, notification rate limiting, silencing and alert
dependencies on top of the simple alert definitions. In Prometheus' ecosystem,
the [Alert Manager](http://github.com/prometheus/alertmanager) takes on this
role. Thus, Prometheus may be configured to periodically send information about
alert states to an Alert Manager instance, which then takes care of dispatching
the right notifications. The Alert Manager instance may be configured via the
`-alertmanager.url` command line flag.
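As a sketch of how this fits together (the `-config.file` flag name and the
Alert Manager's local port are assumptions for this example setup), a
Prometheus server pointed at a local Alert Manager might be started like this:

    // Hypothetical invocation; adjust paths and URL to your setup:
    ./prometheus -config.file=prometheus.conf -alertmanager.url=http://localhost:9093/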
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1">
<meta name="description" content="">
<meta name="author" content="">
<title>Prometheus Documentation</title>
<!-- Bootstrap Core CSS -->
<link href="/assets/bootstrap/css/bootstrap.min.css" rel="stylesheet">
<!-- SB-Admin CSS -->
<link href="/assets/sb-admin.css" rel="stylesheet">
<!-- Custom CSS -->
<link href="/assets/docs.css" rel="stylesheet">
<!-- Syntax Highlighting CSS -->
<link href="/assets/monokai.css" rel="stylesheet">
<!-- Custom Fonts -->
<link href="/assets/font-awesome-4.2.0/css/font-awesome.min.css" rel="stylesheet" type="text/css">
<!-- HTML5 Shim and Respond.js IE8 support of HTML5 elements and media queries -->
<!-- WARNING: Respond.js doesn't work if you view the page via file:// -->
<!--[if lt IE 9]>
<script src="https://oss.maxcdn.com/libs/html5shiv/3.7.0/html5shiv.js"></script>
<script src="https://oss.maxcdn.com/libs/respond.js/1.4.2/respond.min.js"></script>
<![endif]-->
</head>
<body>
<div id="wrapper">
<!-- Navigation -->
<nav class="navbar navbar-inverse navbar-fixed-top" role="navigation">
<!-- Brand and toggle get grouped for better mobile display -->
<div class="navbar-header">
<button type="button" class="navbar-toggle" data-toggle="collapse" data-target=".navbar-ex1-collapse">
<span class="sr-only">Toggle navigation</span>
<span class="icon-bar"></span>
<span class="icon-bar"></span>
<span class="icon-bar"></span>
</button>
<a class="navbar-brand" href="index.html">Prometheus Documentation</a>
</div>
<!-- Top Menu Items -->
<ul class="nav navbar-right top-nav">
<li>
<a href="/about"><i class="fa fa-info-circle"></i></a>
</li>
<li>
<a href="https://github.com/prometheus"><i class="fa fa-github"></i></a>
</li>
</ul>
<!-- Sidebar Menu Items - These collapse to the responsive navigation menu on small screens -->
<div class="collapse navbar-collapse navbar-ex1-collapse">
<ul class="nav navbar-nav side-nav">
<%= @items['/'].children.sort_by { |i| i[:sort_rank] || 0 }.map { |i| toc(i, @item) }.join('') %>
</ul>
</div>
<!-- /.navbar-collapse -->
</nav>
<div id="page-wrapper">
<div class="container-fluid">
<div class="col-lg-6">
<%= yield %>
</div>
</div>
<!-- /.container-fluid -->
</div>
<!-- /#page-wrapper -->
</div>
<!-- /#wrapper -->
<!-- jQuery Version 1.11.1 -->
<script src="//ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script>
<!-- Bootstrap Core JavaScript -->
<script src="/assets/bootstrap/js/bootstrap.min.js"></script>
</body>
</html>
# All files in the 'lib' directory will be loaded
# before nanoc starts compiling.
include Nanoc::Helpers::LinkTo
# encoding: utf-8
require 'nokogiri'
class AddAnchorsFilter < ::Nanoc::Filter
identifier :add_anchors
def run(content, params={})
# `#dup` is necessary because `.fragment` modifies the incoming string. Ew!
# See https://github.com/sparklemotion/nokogiri/issues/1077
doc = Nokogiri::HTML::DocumentFragment.parse(content.dup)
doc.css('h1,h2,h3,h4,h5,h6').each do |h_node|
next if h_node['id'].nil?
node = Nokogiri::XML::Node.new('a', doc).tap do |a|
a.content = ''
a['class'] = 'header-anchor'
a['href'] = '#' + h_node['id']
end
h_node.add_child(node)
end
doc.to_s
end
end
# encoding: utf-8
# Adapted from the admonition code on http://nanoc.ws/
class AdmonitionFilter < Nanoc::Filter
identifier :admonition
BOOSTRAP_MAPPING = {
'tip' => 'info',
'note' => 'info',
'caution' => 'danger',
'todo' => 'info',
}
def run(content, params = {})
# `#dup` is necessary because `.fragment` modifies the incoming string. Ew!
# See https://github.com/sparklemotion/nokogiri/issues/1077
doc = Nokogiri::HTML.fragment(content.dup)
doc.css('p').each do |para|
content = para.inner_html
next if content !~ /\A(TIP|NOTE|CAUTION|TODO): (.*)\Z/m
new_content = generate($1.downcase, $2)
para.replace(new_content)
end
doc.to_s
end
def generate(kind, content)
%[<div class="admonition-wrapper #{kind}">] +
%[<div class="admonition alert alert-#{BOOSTRAP_MAPPING[kind]}">] +
content +
%[</div></div>]
end
end
# encoding: utf-8
require 'nokogiri'
class Bootstrappify < ::Nanoc::Filter
identifier :bootstrappify
def run(content, params={})
# `#dup` is necessary because `.fragment` modifies the incoming string. Ew!
# See https://github.com/sparklemotion/nokogiri/issues/1077
doc = Nokogiri::HTML::DocumentFragment.parse(content.dup)
doc.css('h1').each do |h1|
h1['class'] = 'page-header'
end
doc.css('table').each do |table_node|
next if table_node['class'] && table_node['class'] =~ /table/
table_node['class'] = (table_node['class'] || '') + ' table table-bordered'
end
doc.to_s
end
end
def nav_title_of(i)
i[:nav_title] || i[:title] || ''
end
def decorate_title_for(i, title)
return i[:nav_icon] unless i[:nav_icon].nil?
if !i.children.empty?
if @item_rep.path.start_with?(i.path)
"chevron-down"
else
"chevron-right"
end
end
end
def toc(root_item, focused_item, buffer='', with_children=true)
# Skip non-written or hidden items
return buffer if root_item.nil? || root_item.path.nil? || root_item[:is_hidden]
# Open list element
is_active = @item_rep && @item_rep.path == root_item.path
if is_active
buffer << "<li class=\"active\">"
else
buffer << "<li>"
end
# Add link
title = nav_title_of(root_item)
if root_item[:nav_icon]
title = "<i class=\"fa fa-fw fa-#{root_item[:nav_icon]}\"></i> " + title
end
if !root_item.children.empty?
icon = if @item_rep.path.start_with?(root_item.path)
"chevron-down"
else
"chevron-right"
end
title = title + " <i class=\"pull-right fa fa-fw fa-#{icon}\"></i>"
end
buffer << link_to(title, root_item.path)
# Add children to sitemap, recursively
visible_children = root_item.children.select { |child| !child[:is_hidden] && child.path }
visible_children = visible_children.sort_by { |child| child[:sort_rank] || 0 }
visible_children = visible_children.select do |child|
focused_item.identifier.start_with?(child.identifier) ||
focused_item.identifier.start_with?(child.parent.identifier)
end
if with_children && visible_children.size > 0
buffer << '<ul class="nav">'
visible_children.each do |child|
toc(child, focused_item, buffer)
end
buffer << '</ul>'
end
# Close list element
buffer << '</li>'
# Return sitemap
buffer
end
# A list of file extensions that nanoc will consider to be textual rather than
# binary. If an item with an extension not in this list is found, the file
# will be considered as binary.
text_extensions: [ 'coffee', 'css', 'erb', 'haml', 'handlebars', 'hb', 'htm', 'html', 'js', 'less', 'markdown', 'md', 'ms', 'mustache', 'php', 'rb', 'sass', 'scss', 'slim', 'txt', 'xhtml', 'xml' ]
# The path to the directory where all generated files will be written to. This
# can be an absolute path starting with a slash, but it can also be path
# relative to the site directory.
output_dir: output
# A list of index filenames, i.e. names of files that will be served by a web
# server when a directory is requested. Usually, index files are named
# “index.html”, but depending on the web server, this may be something else,
# such as “default.htm”. This list is used by nanoc to generate pretty URLs.
index_filenames: [ 'index.html' ]
# Whether or not to generate a diff of the compiled content when compiling a
# site. The diff will contain the differences between the compiled content
# before and after the last site compilation.
enable_output_diff: false
prune:
# Whether to automatically remove files not managed by nanoc from the output
# directory. For safety reasons, this is turned off by default.
auto_prune: false
# Which files and directories you want to exclude from pruning. If you version
# your output directory, you should probably exclude VCS directories such as
# .git, .svn etc.
exclude: [ '.git', '.hg', '.svn', 'CVS' ]
# The data sources where nanoc loads its data from. This is an array of
# hashes; each array element represents a single data source. By default,
# there is only a single data source that reads data from the “content/” and
# “layout/” directories in the site directory.
data_sources:
-
# The type is the identifier of the data source. By default, this will be
# `filesystem_unified`.
type: filesystem_unified
# The path where items should be mounted (comparable to mount points in
# Unix-like systems). This is “/” by default, meaning that items will have
# “/” prefixed to their identifiers. If the items root were “/en/”
# instead, an item at content/about.html would have an identifier of
# “/en/about/” instead of just “/about/”.
items_root: /
# The path where layouts should be mounted. The layouts root behaves the
# same as the items root, but applies to layouts rather than items.
layouts_root: /
# Whether to allow periods in identifiers. When turned off, everything
# past the first period is considered to be the extension, and when
# turned on, only the characters past the last period are considered to
# be the extension. For example, a file named “content/about.html.erb”
# will have the identifier “/about/” when turned off, but when turned on
# it will become “/about.html/” instead.
allow_periods_in_identifiers: false
# The encoding to use for input files. If your input files are not in
# UTF-8 (which they should be!), change this.
encoding: utf-8
-
type: static
items_root: /assets/
# Configuration for the “check” command, which runs unit tests on the site.
checks:
# Configuration for the “internal_links” checker, which checks whether all
# internal links are valid.
internal_links:
# A list of patterns, specified as regular expressions, to exclude from the check.
# If an internal link matches this pattern, the validity check will be skipped.
# E.g.:
# exclude: ['^/server_status']
exclude: []
pre {
background-color: #333;
//color: #f8f8f2;
color: #ccc;
}
code {
color: #333;
}
// Bordered & Pulled
// -------------------------
.@{fa-css-prefix}-border {
padding: .2em .25em .15em;
border: solid .08em @fa-border-color;
border-radius: .1em;
}
.pull-right { float: right; }
.pull-left { float: left; }
.@{fa-css-prefix} {
&.pull-left { margin-right: .3em; }
&.pull-right { margin-left: .3em; }
}
// Base Class Definition
// -------------------------
.@{fa-css-prefix} {
display: inline-block;
font: normal normal normal 14px/1 FontAwesome; // shortening font declaration
font-size: inherit; // can't have font-size inherit on line above, so need to override
text-rendering: auto; // optimizelegibility throws things off #1094
-webkit-font-smoothing: antialiased;
-moz-osx-font-smoothing: grayscale;
}
// Fixed Width Icons
// -------------------------
.@{fa-css-prefix}-fw {
width: (18em / 14);
text-align: center;
}
/*!
* Font Awesome 4.2.0 by @davegandy - http://fontawesome.io - @fontawesome
* License - http://fontawesome.io/license (Font: SIL OFL 1.1, CSS: MIT License)
*/
@import "variables.less";
@import "mixins.less";
@import "path.less";
@import "core.less";
@import "larger.less";
@import "fixed-width.less";
@import "list.less";
@import "bordered-pulled.less";
@import "spinning.less";
@import "rotated-flipped.less";
@import "stacked.less";
@import "icons.less";
// Icon Sizes
// -------------------------
/* makes the font 33% larger relative to the icon container */
.@{fa-css-prefix}-lg {
font-size: (4em / 3);
line-height: (3em / 4);
vertical-align: -15%;
}
.@{fa-css-prefix}-2x { font-size: 2em; }
.@{fa-css-prefix}-3x { font-size: 3em; }
.@{fa-css-prefix}-4x { font-size: 4em; }
.@{fa-css-prefix}-5x { font-size: 5em; }
// List Icons
// -------------------------
.@{fa-css-prefix}-ul {
padding-left: 0;
margin-left: @fa-li-width;
list-style-type: none;
> li { position: relative; }
}
.@{fa-css-prefix}-li {
position: absolute;
left: -@fa-li-width;
width: @fa-li-width;
top: (2em / 14);
text-align: center;
&.@{fa-css-prefix}-lg {
left: (-@fa-li-width + (4em / 14));
}
}
// Mixins
// --------------------------
.fa-icon() {
display: inline-block;
font: normal normal normal 14px/1 FontAwesome; // shortening font declaration
font-size: inherit; // can't have font-size inherit on line above, so need to override
text-rendering: auto; // optimizelegibility throws things off #1094
-webkit-font-smoothing: antialiased;
-moz-osx-font-smoothing: grayscale;
}
.fa-icon-rotate(@degrees, @rotation) {
filter: progid:DXImageTransform.Microsoft.BasicImage(rotation=@rotation);
-webkit-transform: rotate(@degrees);
-ms-transform: rotate(@degrees);
transform: rotate(@degrees);
}
.fa-icon-flip(@horiz, @vert, @rotation) {
filter: progid:DXImageTransform.Microsoft.BasicImage(rotation=@rotation, mirror=1);
-webkit-transform: scale(@horiz, @vert);
-ms-transform: scale(@horiz, @vert);
transform: scale(@horiz, @vert);
}
/* FONT PATH
* -------------------------- */
@font-face {
font-family: 'FontAwesome';
src: url('@{fa-font-path}/fontawesome-webfont.eot?v=@{fa-version}');
src: url('@{fa-font-path}/fontawesome-webfont.eot?#iefix&v=@{fa-version}') format('embedded-opentype'),
url('@{fa-font-path}/fontawesome-webfont.woff?v=@{fa-version}') format('woff'),
url('@{fa-font-path}/fontawesome-webfont.ttf?v=@{fa-version}') format('truetype'),
url('@{fa-font-path}/fontawesome-webfont.svg?v=@{fa-version}#fontawesomeregular') format('svg');
// src: url('@{fa-font-path}/FontAwesome.otf') format('opentype'); // used when developing fonts
font-weight: normal;
font-style: normal;
}
// Rotated & Flipped Icons
// -------------------------
.@{fa-css-prefix}-rotate-90 { .fa-icon-rotate(90deg, 1); }
.@{fa-css-prefix}-rotate-180 { .fa-icon-rotate(180deg, 2); }
.@{fa-css-prefix}-rotate-270 { .fa-icon-rotate(270deg, 3); }
.@{fa-css-prefix}-flip-horizontal { .fa-icon-flip(-1, 1, 0); }
.@{fa-css-prefix}-flip-vertical { .fa-icon-flip(1, -1, 2); }
// Hook for IE8-9
// -------------------------
:root .@{fa-css-prefix}-rotate-90,
:root .@{fa-css-prefix}-rotate-180,
:root .@{fa-css-prefix}-rotate-270,
:root .@{fa-css-prefix}-flip-horizontal,
:root .@{fa-css-prefix}-flip-vertical {
filter: none;
}
// Spinning Icons
// --------------------------
.@{fa-css-prefix}-spin {
-webkit-animation: fa-spin 2s infinite linear;
animation: fa-spin 2s infinite linear;
}
@-webkit-keyframes fa-spin {
0% {
-webkit-transform: rotate(0deg);
transform: rotate(0deg);
}
100% {
-webkit-transform: rotate(359deg);
transform: rotate(359deg);
}
}
@keyframes fa-spin {
0% {
-webkit-transform: rotate(0deg);
transform: rotate(0deg);
}
100% {
-webkit-transform: rotate(359deg);
transform: rotate(359deg);
}
}
// Stacked Icons
// -------------------------
.@{fa-css-prefix}-stack {
position: relative;
display: inline-block;
width: 2em;
height: 2em;
line-height: 2em;
vertical-align: middle;
}
.@{fa-css-prefix}-stack-1x, .@{fa-css-prefix}-stack-2x {
position: absolute;
left: 0;
width: 100%;
text-align: center;
}
.@{fa-css-prefix}-stack-1x { line-height: inherit; }
.@{fa-css-prefix}-stack-2x { font-size: 2em; }
.@{fa-css-prefix}-inverse { color: @fa-inverse; }
// Bordered & Pulled
// -------------------------
.#{$fa-css-prefix}-border {
padding: .2em .25em .15em;
border: solid .08em $fa-border-color;
border-radius: .1em;
}
.pull-right { float: right; }
.pull-left { float: left; }
.#{$fa-css-prefix} {
&.pull-left { margin-right: .3em; }
&.pull-right { margin-left: .3em; }
}
// Base Class Definition
// -------------------------
.#{$fa-css-prefix} {
display: inline-block;
font: normal normal normal 14px/1 FontAwesome; // shortening font declaration
font-size: inherit; // can't have font-size inherit on line above, so need to override
text-rendering: auto; // optimizelegibility throws things off #1094
-webkit-font-smoothing: antialiased;
-moz-osx-font-smoothing: grayscale;
}
// Fixed Width Icons
// -------------------------
.#{$fa-css-prefix}-fw {
width: (18em / 14);
text-align: center;
}
// Icon Sizes
// -------------------------
/* makes the font 33% larger relative to the icon container */
.#{$fa-css-prefix}-lg {
font-size: (4em / 3);
line-height: (3em / 4);
vertical-align: -15%;
}
.#{$fa-css-prefix}-2x { font-size: 2em; }
.#{$fa-css-prefix}-3x { font-size: 3em; }
.#{$fa-css-prefix}-4x { font-size: 4em; }
.#{$fa-css-prefix}-5x { font-size: 5em; }
// List Icons
// -------------------------
.#{$fa-css-prefix}-ul {
padding-left: 0;
margin-left: $fa-li-width;
list-style-type: none;
> li { position: relative; }
}
.#{$fa-css-prefix}-li {
position: absolute;
left: -$fa-li-width;
width: $fa-li-width;
top: (2em / 14);
text-align: center;
&.#{$fa-css-prefix}-lg {
left: -$fa-li-width + (4em / 14);
}
}
// Mixins
// --------------------------
@mixin fa-icon() {
display: inline-block;
font: normal normal normal 14px/1 FontAwesome; // shortening font declaration
font-size: inherit; // can't have font-size inherit on line above, so need to override
text-rendering: auto; // optimizelegibility throws things off #1094
-webkit-font-smoothing: antialiased;
-moz-osx-font-smoothing: grayscale;
}
@mixin fa-icon-rotate($degrees, $rotation) {
filter: progid:DXImageTransform.Microsoft.BasicImage(rotation=#{$rotation});
-webkit-transform: rotate($degrees);
-ms-transform: rotate($degrees);
transform: rotate($degrees);
}
@mixin fa-icon-flip($horiz, $vert, $rotation) {
filter: progid:DXImageTransform.Microsoft.BasicImage(rotation=#{$rotation});
-webkit-transform: scale($horiz, $vert);
-ms-transform: scale($horiz, $vert);
transform: scale($horiz, $vert);
}
/* FONT PATH
* -------------------------- */
@font-face {
font-family: 'FontAwesome';
src: url('#{$fa-font-path}/fontawesome-webfont.eot?v=#{$fa-version}');
src: url('#{$fa-font-path}/fontawesome-webfont.eot?#iefix&v=#{$fa-version}') format('embedded-opentype'),
url('#{$fa-font-path}/fontawesome-webfont.woff?v=#{$fa-version}') format('woff'),
url('#{$fa-font-path}/fontawesome-webfont.ttf?v=#{$fa-version}') format('truetype'),
url('#{$fa-font-path}/fontawesome-webfont.svg?v=#{$fa-version}#fontawesomeregular') format('svg');
//src: url('#{$fa-font-path}/FontAwesome.otf') format('opentype'); // used when developing fonts
font-weight: normal;
font-style: normal;
}
// Rotated & Flipped Icons
// -------------------------
.#{$fa-css-prefix}-rotate-90 { @include fa-icon-rotate(90deg, 1); }
.#{$fa-css-prefix}-rotate-180 { @include fa-icon-rotate(180deg, 2); }
.#{$fa-css-prefix}-rotate-270 { @include fa-icon-rotate(270deg, 3); }
.#{$fa-css-prefix}-flip-horizontal { @include fa-icon-flip(-1, 1, 0); }
.#{$fa-css-prefix}-flip-vertical { @include fa-icon-flip(1, -1, 2); }
// Hook for IE8-9
// -------------------------
:root .#{$fa-css-prefix}-rotate-90,
:root .#{$fa-css-prefix}-rotate-180,
:root .#{$fa-css-prefix}-rotate-270,
:root .#{$fa-css-prefix}-flip-horizontal,
:root .#{$fa-css-prefix}-flip-vertical {
filter: none;
}
// Spinning Icons
// --------------------------
.#{$fa-css-prefix}-spin {
-webkit-animation: fa-spin 2s infinite linear;
animation: fa-spin 2s infinite linear;
}
@-webkit-keyframes fa-spin {
0% {
-webkit-transform: rotate(0deg);
transform: rotate(0deg);
}
100% {
-webkit-transform: rotate(359deg);
transform: rotate(359deg);
}
}
@keyframes fa-spin {
0% {
-webkit-transform: rotate(0deg);
transform: rotate(0deg);
}
100% {
-webkit-transform: rotate(359deg);
transform: rotate(359deg);
}
}
// Stacked Icons
// -------------------------
.#{$fa-css-prefix}-stack {
position: relative;
display: inline-block;
width: 2em;
height: 2em;
line-height: 2em;
vertical-align: middle;
}
.#{$fa-css-prefix}-stack-1x, .#{$fa-css-prefix}-stack-2x {
position: absolute;
left: 0;
width: 100%;
text-align: center;
}
.#{$fa-css-prefix}-stack-1x { line-height: inherit; }
.#{$fa-css-prefix}-stack-2x { font-size: 2em; }
.#{$fa-css-prefix}-inverse { color: $fa-inverse; }
/*!
* Font Awesome 4.2.0 by @davegandy - http://fontawesome.io - @fontawesome
* License - http://fontawesome.io/license (Font: SIL OFL 1.1, CSS: MIT License)
*/
@import "variables";
@import "mixins";
@import "path";
@import "core";
@import "larger";
@import "fixed-width";
@import "list";
@import "bordered-pulled";
@import "spinning";
@import "rotated-flipped";
@import "stacked";
@import "icons";
.hll { background-color: #49483e }
.c { color: #75715e } /* Comment */
.err { color: #960050; background-color: #1e0010 } /* Error */
.k { color: #66d9ef } /* Keyword */
.l { color: #ae81ff } /* Literal */
.n { color: #f8f8f2 } /* Name */
.o { color: #f92672 } /* Operator */
.p { color: #f8f8f2 } /* Punctuation */
.cm { color: #75715e } /* Comment.Multiline */
.cp { color: #75715e } /* Comment.Preproc */
.c1 { color: #75715e } /* Comment.Single */
.cs { color: #75715e } /* Comment.Special */
.ge { font-style: italic } /* Generic.Emph */
.gs { font-weight: bold } /* Generic.Strong */
.kc { color: #66d9ef } /* Keyword.Constant */
.kd { color: #66d9ef } /* Keyword.Declaration */
.kn { color: #f92672 } /* Keyword.Namespace */
.kp { color: #66d9ef } /* Keyword.Pseudo */
.kr { color: #66d9ef } /* Keyword.Reserved */
.kt { color: #66d9ef } /* Keyword.Type */
.ld { color: #e6db74 } /* Literal.Date */
.m { color: #ae81ff } /* Literal.Number */
.s { color: #e6db74 } /* Literal.String */
.na { color: #a6e22e } /* Name.Attribute */
.nb { color: #f8f8f2 } /* Name.Builtin */
.nc { color: #a6e22e } /* Name.Class */
.no { color: #66d9ef } /* Name.Constant */
.nd { color: #a6e22e } /* Name.Decorator */
.ni { color: #f8f8f2 } /* Name.Entity */
.ne { color: #a6e22e } /* Name.Exception */
.nf { color: #a6e22e } /* Name.Function */
.nl { color: #f8f8f2 } /* Name.Label */
.nn { color: #f8f8f2 } /* Name.Namespace */
.nx { color: #a6e22e } /* Name.Other */
.py { color: #f8f8f2 } /* Name.Property */
.nt { color: #f92672 } /* Name.Tag */
.nv { color: #f8f8f2 } /* Name.Variable */
.ow { color: #f92672 } /* Operator.Word */
.w { color: #f8f8f2 } /* Text.Whitespace */
.mf { color: #ae81ff } /* Literal.Number.Float */
.mh { color: #ae81ff } /* Literal.Number.Hex */
.mi { color: #ae81ff } /* Literal.Number.Integer */
.mo { color: #ae81ff } /* Literal.Number.Oct */
.sb { color: #e6db74 } /* Literal.String.Backtick */
.sc { color: #e6db74 } /* Literal.String.Char */
.sd { color: #e6db74 } /* Literal.String.Doc */
.s2 { color: #e6db74 } /* Literal.String.Double */
.se { color: #ae81ff } /* Literal.String.Escape */
.sh { color: #e6db74 } /* Literal.String.Heredoc */
.si { color: #e6db74 } /* Literal.String.Interpol */
.sx { color: #e6db74 } /* Literal.String.Other */
.sr { color: #e6db74 } /* Literal.String.Regex */
.s1 { color: #e6db74 } /* Literal.String.Single */
.ss { color: #e6db74 } /* Literal.String.Symbol */
.bp { color: #f8f8f2 } /* Name.Builtin.Pseudo */
.vc { color: #f8f8f2 } /* Name.Variable.Class */
.vg { color: #f8f8f2 } /* Name.Variable.Global */
.vi { color: #f8f8f2 } /* Name.Variable.Instance */
.il { color: #ae81ff } /* Literal.Number.Integer.Long */
.gh { } /* Generic Heading & Diff Header */
.gu { color: #75715e; } /* Generic.Subheading & Diff Unified/Comment? */
.gd { color: #f92672; } /* Generic.Deleted & Diff Deleted */
.gi { color: #a6e22e; } /* Generic.Inserted & Diff Inserted */