Update module prometheus/client_golang to v1
vendor/github.com/prometheus/procfs/.golangci.yml (generated, vendored): 2 lines changed
@@ -1,6 +1,4 @@
# Run only staticcheck for now. Additional linters will be enabled one-by-one.
linters:
  enable:
  - staticcheck
  - govet
  disable-all: true
vendor/github.com/prometheus/procfs/CONTRIBUTING.md (generated, vendored): 109 lines changed
@@ -2,17 +2,120 @@

Prometheus uses GitHub to manage reviews of pull requests.

* If you are a new contributor see: [Steps to Contribute](#steps-to-contribute)

* If you have a trivial fix or improvement, go ahead and create a pull request,
-  addressing (with `@...`) the maintainer of this repository (see
+  addressing (with `@...`) a suitable maintainer of this repository (see
  [MAINTAINERS.md](MAINTAINERS.md)) in the description of the pull request.

* If you plan to do something more involved, first discuss your ideas
  on our [mailing list](https://groups.google.com/forum/?fromgroups#!forum/prometheus-developers).
  This will avoid unnecessary work and surely give you and us a good deal
-  of inspiration.
+  of inspiration. Also please see our [non-goals issue](https://github.com/prometheus/docs/issues/149) on areas that the Prometheus community doesn't plan to work on.

* Relevant coding style guidelines are the [Go Code Review
  Comments](https://code.google.com/p/go-wiki/wiki/CodeReviewComments)
  and the _Formatting and style_ section of Peter Bourgon's [Go: Best
  Practices for Production
-  Environments](http://peter.bourgon.org/go-in-production/#formatting-and-style).
+  Environments](https://peter.bourgon.org/go-in-production/#formatting-and-style).

* Be sure to sign off on the [DCO](https://github.com/probot/dco#how-it-works)

## Steps to Contribute

Should you wish to work on an issue, please claim it first by commenting on the GitHub issue that you want to work on it. This is to prevent duplicated efforts from contributors on the same issue.

Please check the [`help-wanted`](https://github.com/prometheus/procfs/issues?q=is%3Aissue+is%3Aopen+label%3A%22help+wanted%22) label to find issues that are good for getting started. If you have questions about one of the issues, with or without the tag, please comment on them and one of the maintainers will clarify it. For a quicker response, contact us over [IRC](https://prometheus.io/community).

For quickly compiling and testing your changes do:
```
make test # Make sure all the tests pass before you commit and push :)
```

We use [`golangci-lint`](https://github.com/golangci/golangci-lint) for linting the code. If it reports an issue and you think that the warning needs to be disregarded or is a false-positive, you can add a special comment `//nolint:linter1[,linter2,...]` before the offending line. Use this sparingly though, fixing the code to comply with the linter's recommendation is in general the preferred course of action.
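As a hedged illustration (the helper function and the `errcheck` linter below are invented for this example, not taken from this repository), such a suppression can look like:

```
package example

import "os"

// cleanupTempFile deliberately ignores the error returned by os.Remove, so
// the errcheck warning for that call is silenced with a nolint comment.
func cleanupTempFile(path string) {
    os.Remove(path) //nolint:errcheck
}
```

golangci-lint also accepts the same comment on its own line directly above a declaration or block, in which case it applies to that whole declaration or block.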
## Pull Request Checklist

* Branch from the master branch and, if needed, rebase to the current master branch before submitting your pull request. If it doesn't merge cleanly with master you may be asked to rebase your changes.

* Commits should be as small as possible, while ensuring that each commit is correct independently (i.e., each commit should compile and pass tests).

* If your patch is not getting reviewed or you need a specific person to review it, you can @-reply a reviewer asking for a review in the pull request or a comment, or you can ask for a review on IRC channel [#prometheus](https://webchat.freenode.net/?channels=#prometheus) on irc.freenode.net (for the easiest start, [join via Riot](https://riot.im/app/#/room/#prometheus:matrix.org)).

* Add tests relevant to the fixed bug or new feature.

## Dependency management

The Prometheus project uses [Go modules](https://golang.org/cmd/go/#hdr-Modules__module_versions__and_more) to manage dependencies on external packages. This requires a working Go environment with version 1.12 or greater installed.

All dependencies are vendored in the `vendor/` directory.

To add or update a new dependency, use the `go get` command:

```bash
# Pick the latest tagged release.
go get example.com/some/module/pkg

# Pick a specific version.
go get example.com/some/module/pkg@vX.Y.Z
```

Tidy up the `go.mod` and `go.sum` files and copy the new/updated dependency to the `vendor/` directory:

```bash
# The GO111MODULE variable can be omitted when the code isn't located in GOPATH.
GO111MODULE=on go mod tidy

GO111MODULE=on go mod vendor
```

You have to commit the changes to `go.mod`, `go.sum` and the `vendor/` directory before submitting the pull request.

## API Implementation Guidelines

### Naming and Documentation

Public functions and structs should normally be named according to the file(s) being read and parsed. For example,
the `fs.BuddyInfo()` function reads the file `/proc/buddyinfo`. In addition, the godoc for each public function
should contain the path to the file(s) being read and a URL of the linux kernel documentation describing the file(s).

### Reading vs. Parsing

Most functionality in this library consists of reading files and then parsing the text into structured data. In most
cases reading and parsing should be separated into different functions/methods with a public `fs.Thing()` method and
a private `parseThing(r Reader)` function. This provides a logical separation and allows parsing to be tested
directly without the need to read from the filesystem. Using a `Reader` argument is preferred over other data types
such as `string` or `*File` because it provides the most flexibility regarding the data source. When a set of files
in a directory needs to be parsed, then a `path` string parameter to the parse function can be used instead.
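A minimal sketch of that split, in the spirit of this guideline (the `Thing` type, its field, and the `/proc/thing` path are invented for illustration; `util.ReadFileNoStat` and `fs.proc.Path` are the helpers already used elsewhere in this package):

```
package procfs

import (
    "bufio"
    "bytes"
    "io"
    "strconv"

    "github.com/prometheus/procfs/internal/util"
)

// Thing is a hypothetical stand-in for data parsed out of a /proc file.
type Thing struct {
    Value uint64
}

// Thing reads /proc/thing (an invented file for this sketch).
// See https://www.kernel.org/doc/Documentation/filesystems/proc.txt
func (fs FS) Thing() (Thing, error) {
    data, err := util.ReadFileNoStat(fs.proc.Path("thing"))
    if err != nil {
        return Thing{}, err
    }
    return parseThing(bytes.NewReader(data))
}

// parseThing does the actual text parsing and can be unit-tested against any
// io.Reader, without touching the filesystem.
func parseThing(r io.Reader) (Thing, error) {
    var t Thing
    s := bufio.NewScanner(r)
    for s.Scan() {
        v, err := strconv.ParseUint(s.Text(), 10, 64)
        if err != nil {
            return Thing{}, err
        }
        t.Value = v
    }
    return t, s.Err()
}
```

Note how the godoc comment names the file being read and links the kernel documentation, matching the naming guideline above.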
### /proc and /sys filesystem I/O

The `proc` and `sys` filesystems are pseudo file systems and work a bit differently from standard disk I/O.
Many of the files are changing continuously and the data being read can in some cases change between subsequent
reads in the same file. Also, most of the files are relatively small (less than a few KBs), and system calls
to the `stat` function will often return the wrong size. Therefore, for most files it's recommended to read the
full file in a single operation using an internal utility function called `util.ReadFileNoStat`.
This function is similar to `ioutil.ReadFile`, but it avoids the system call to `stat` to get the current size of
the file.

Note that parsing the file's contents can still be performed one line at a time. This is done by first reading
the full file, and then using a scanner on the `[]byte` or `string` containing the data.

```
data, err := util.ReadFileNoStat("/proc/cpuinfo")
if err != nil {
    return err
}
reader := bytes.NewReader(data)
scanner := bufio.NewScanner(reader)
```

The `/sys` filesystem contains many very small files which contain only a single numeric or text value. These files
can be read using an internal function called `util.SysReadFile` which is similar to `ioutil.ReadFile` but does
not bother to check the size of the file before reading.
```
data, err := util.SysReadFile("/sys/class/power_supply/BAT0/capacity")
```
vendor/github.com/prometheus/procfs/Makefile.common (generated, vendored): 7 lines changed
@@ -86,6 +86,7 @@ endif
PREFIX ?= $(shell pwd)
BIN_DIR ?= $(shell pwd)
DOCKER_IMAGE_TAG ?= $(subst /,-,$(shell git rev-parse --abbrev-ref HEAD))
+DOCKERFILE_PATH ?= ./
DOCKER_REPO ?= prom

DOCKER_ARCHS ?= amd64
@@ -212,7 +213,7 @@ $(BUILD_DOCKER_ARCHS): common-docker-%:
    docker build -t "$(DOCKER_REPO)/$(DOCKER_IMAGE_NAME)-linux-$*:$(DOCKER_IMAGE_TAG)" \
        --build-arg ARCH="$*" \
        --build-arg OS="linux" \
-        .
+        $(DOCKERFILE_PATH)

.PHONY: common-docker-publish $(PUBLISH_DOCKER_ARCHS)
common-docker-publish: $(PUBLISH_DOCKER_ARCHS)
@@ -247,7 +248,9 @@ proto:
ifdef GOLANGCI_LINT
$(GOLANGCI_LINT):
    mkdir -p $(FIRST_GOPATH)/bin
-    curl -sfL https://install.goreleaser.com/github.com/golangci/golangci-lint.sh | sh -s -- -b $(FIRST_GOPATH)/bin $(GOLANGCI_LINT_VERSION)
+    curl -sfL https://raw.githubusercontent.com/golangci/golangci-lint/$(GOLANGCI_LINT_VERSION)/install.sh \
+        | sed -e '/install -d/d' \
+        | sh -s -- -b $(FIRST_GOPATH)/bin $(GOLANGCI_LINT_VERSION)
endif

ifdef GOVENDOR
vendor/github.com/prometheus/procfs/README.md (generated, vendored): 16 lines changed
@@ -1,6 +1,6 @@
# procfs

-This procfs package provides functions to retrieve system, kernel and process
+This package provides functions to retrieve system, kernel, and process
metrics from the pseudo-filesystems /proc and /sys.

*WARNING*: This package is a work in progress. Its API may still break in
@@ -13,7 +13,8 @@ backwards-incompatible ways without warnings. Use it at your own risk.
## Usage

The procfs library is organized by packages based on whether the gathered data is coming from
-/proc, /sys, or both. Each package contains an `FS` type which represents the path to either /proc, /sys, or both. For example, current cpu statistics are gathered from
+/proc, /sys, or both. Each package contains an `FS` type which represents the path to either /proc,
+/sys, or both. For example, cpu statistics are gathered from
`/proc/stat` and are available via the root procfs package. First, the proc filesystem mount
point is initialized, and then the stat information is read.

@@ -29,10 +30,17 @@ Some sub-packages such as `blockdevice`, require access to both the proc and sys
    stats, err := fs.ProcDiskstats()
```
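For the root package, the flow described above looks roughly like the sketch below (a hedged example: the exact `NewFS`/`Stat` call names are assumed from the current procfs API rather than quoted from this README):

```
package main

import (
    "fmt"
    "log"

    "github.com/prometheus/procfs"
)

func main() {
    // Initialize the proc filesystem mount point.
    fs, err := procfs.NewFS("/proc")
    if err != nil {
        log.Fatalf("failed to open procfs: %v", err)
    }

    // Read and parse /proc/stat via the root package.
    stat, err := fs.Stat()
    if err != nil {
        log.Fatalf("failed to read /proc/stat: %v", err)
    }
    fmt.Printf("kernel boot time: %d\n", stat.BootTime)
}
```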
## Package Organization

The packages in this project are organized according to (1) whether the data comes from the `/proc` or
`/sys` filesystem and (2) the type of information being retrieved. For example, most process information
can be gathered from the functions in the root `procfs` package. Information about block devices such as disk drives
is available in the `blockdevices` sub-package.

## Building and Testing

-The procfs library is normally built as part of another application. However, when making
-changes to the library, the `make test` command can be used to run the API test suite.
+The procfs library is intended to be built as part of another application, so there are no distributable binaries.
+However, most of the API includes unit tests which can be run with `make test`.

### Updating Test Fixtures
vendor/github.com/prometheus/procfs/arp.go (generated, vendored, new file): 85 lines added
@@ -0,0 +1,85 @@
// Copyright 2019 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package procfs

import (
    "fmt"
    "io/ioutil"
    "net"
    "strings"
)

// ARPEntry contains a single row of the columnar data represented in
// /proc/net/arp.
type ARPEntry struct {
    // IP address
    IPAddr net.IP
    // MAC address
    HWAddr net.HardwareAddr
    // Name of the device
    Device string
}

// GatherARPEntries retrieves all the ARP entries, parse the relevant columns,
// and then return a slice of ARPEntry's.
func (fs FS) GatherARPEntries() ([]ARPEntry, error) {
    data, err := ioutil.ReadFile(fs.proc.Path("net/arp"))
    if err != nil {
        return nil, fmt.Errorf("error reading arp %s: %s", fs.proc.Path("net/arp"), err)
    }

    return parseARPEntries(data)
}

func parseARPEntries(data []byte) ([]ARPEntry, error) {
    lines := strings.Split(string(data), "\n")
    entries := make([]ARPEntry, 0)
    var err error
    const (
        expectedDataWidth   = 6
        expectedHeaderWidth = 9
    )
    for _, line := range lines {
        columns := strings.Fields(line)
        width := len(columns)

        if width == expectedHeaderWidth || width == 0 {
            continue
        } else if width == expectedDataWidth {
            entry, err := parseARPEntry(columns)
            if err != nil {
                return []ARPEntry{}, fmt.Errorf("failed to parse ARP entry: %s", err)
            }
            entries = append(entries, entry)
        } else {
            return []ARPEntry{}, fmt.Errorf("%d columns were detected, but %d were expected", width, expectedDataWidth)
        }

    }

    return entries, err
}

func parseARPEntry(columns []string) (ARPEntry, error) {
    ip := net.ParseIP(columns[0])
    mac := net.HardwareAddr(columns[3])

    entry := ARPEntry{
        IPAddr: ip,
        HWAddr: mac,
        Device: columns[5],
    }

    return entry, nil
}
vendor/github.com/prometheus/procfs/buddyinfo.go (generated, vendored): 2 lines changed
@@ -31,7 +31,7 @@ type BuddyInfo struct {
    Sizes []float64
}

-// NewBuddyInfo reads the buddyinfo statistics from the specified `proc` filesystem.
+// BuddyInfo reads the buddyinfo statistics from the specified `proc` filesystem.
func (fs FS) BuddyInfo() ([]BuddyInfo, error) {
    file, err := os.Open(fs.proc.Path("buddyinfo"))
    if err != nil {
vendor/github.com/prometheus/procfs/cpuinfo.go (generated, vendored, new file): 167 lines added
@@ -0,0 +1,167 @@
// Copyright 2019 The Prometheus Authors
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package procfs
|
||||
|
||||
import (
|
||||
"bufio"
|
||||
"bytes"
|
||||
"strconv"
|
||||
"strings"
|
||||
|
||||
"github.com/prometheus/procfs/internal/util"
|
||||
)
|
||||
|
||||
// CPUInfo contains general information about a system CPU found in /proc/cpuinfo
|
||||
type CPUInfo struct {
|
||||
Processor uint
|
||||
VendorID string
|
||||
CPUFamily string
|
||||
Model string
|
||||
ModelName string
|
||||
Stepping string
|
||||
Microcode string
|
||||
CPUMHz float64
|
||||
CacheSize string
|
||||
PhysicalID string
|
||||
Siblings uint
|
||||
CoreID string
|
||||
CPUCores uint
|
||||
APICID string
|
||||
InitialAPICID string
|
||||
FPU string
|
||||
FPUException string
|
||||
CPUIDLevel uint
|
||||
WP string
|
||||
Flags []string
|
||||
Bugs []string
|
||||
BogoMips float64
|
||||
CLFlushSize uint
|
||||
CacheAlignment uint
|
||||
AddressSizes string
|
||||
PowerManagement string
|
||||
}
|
||||
|
||||
// CPUInfo returns information about current system CPUs.
|
||||
// See https://www.kernel.org/doc/Documentation/filesystems/proc.txt
|
||||
func (fs FS) CPUInfo() ([]CPUInfo, error) {
|
||||
data, err := util.ReadFileNoStat(fs.proc.Path("cpuinfo"))
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return parseCPUInfo(data)
|
||||
}
|
||||
|
||||
// parseCPUInfo parses data from /proc/cpuinfo
|
||||
func parseCPUInfo(info []byte) ([]CPUInfo, error) {
|
||||
cpuinfo := []CPUInfo{}
|
||||
i := -1
|
||||
scanner := bufio.NewScanner(bytes.NewReader(info))
|
||||
for scanner.Scan() {
|
||||
line := scanner.Text()
|
||||
if strings.TrimSpace(line) == "" {
|
||||
continue
|
||||
}
|
||||
field := strings.SplitN(line, ": ", 2)
|
||||
switch strings.TrimSpace(field[0]) {
|
||||
case "processor":
|
||||
cpuinfo = append(cpuinfo, CPUInfo{}) // start of the next processor
|
||||
i++
|
||||
v, err := strconv.ParseUint(field[1], 0, 32)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
cpuinfo[i].Processor = uint(v)
|
||||
case "vendor_id":
|
||||
cpuinfo[i].VendorID = field[1]
|
||||
case "cpu family":
|
||||
cpuinfo[i].CPUFamily = field[1]
|
||||
case "model":
|
||||
cpuinfo[i].Model = field[1]
|
||||
case "model name":
|
||||
cpuinfo[i].ModelName = field[1]
|
||||
case "stepping":
|
||||
cpuinfo[i].Stepping = field[1]
|
||||
case "microcode":
|
||||
cpuinfo[i].Microcode = field[1]
|
||||
case "cpu MHz":
|
||||
v, err := strconv.ParseFloat(field[1], 64)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
cpuinfo[i].CPUMHz = v
|
||||
case "cache size":
|
||||
cpuinfo[i].CacheSize = field[1]
|
||||
case "physical id":
|
||||
cpuinfo[i].PhysicalID = field[1]
|
||||
case "siblings":
|
||||
v, err := strconv.ParseUint(field[1], 0, 32)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
cpuinfo[i].Siblings = uint(v)
|
||||
case "core id":
|
||||
cpuinfo[i].CoreID = field[1]
|
||||
case "cpu cores":
|
||||
v, err := strconv.ParseUint(field[1], 0, 32)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
cpuinfo[i].CPUCores = uint(v)
|
||||
case "apicid":
|
||||
cpuinfo[i].APICID = field[1]
|
||||
case "initial apicid":
|
||||
cpuinfo[i].InitialAPICID = field[1]
|
||||
case "fpu":
|
||||
cpuinfo[i].FPU = field[1]
|
||||
case "fpu_exception":
|
||||
cpuinfo[i].FPUException = field[1]
|
||||
case "cpuid level":
|
||||
v, err := strconv.ParseUint(field[1], 0, 32)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
cpuinfo[i].CPUIDLevel = uint(v)
|
||||
case "wp":
|
||||
cpuinfo[i].WP = field[1]
|
||||
case "flags":
|
||||
cpuinfo[i].Flags = strings.Fields(field[1])
|
||||
case "bugs":
|
||||
cpuinfo[i].Bugs = strings.Fields(field[1])
|
||||
case "bogomips":
|
||||
v, err := strconv.ParseFloat(field[1], 64)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
cpuinfo[i].BogoMips = v
|
||||
case "clflush size":
|
||||
v, err := strconv.ParseUint(field[1], 0, 32)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
cpuinfo[i].CLFlushSize = uint(v)
|
||||
case "cache_alignment":
|
||||
v, err := strconv.ParseUint(field[1], 0, 32)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
cpuinfo[i].CacheAlignment = uint(v)
|
||||
case "address sizes":
|
||||
cpuinfo[i].AddressSizes = field[1]
|
||||
case "power management":
|
||||
cpuinfo[i].PowerManagement = field[1]
|
||||
}
|
||||
}
|
||||
return cpuinfo, nil
|
||||
|
||||
}
|
vendor/github.com/prometheus/procfs/crypto.go (generated, vendored, new file): 131 lines added
@@ -0,0 +1,131 @@
// Copyright 2019 The Prometheus Authors
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package procfs
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"fmt"
|
||||
"io/ioutil"
|
||||
"strconv"
|
||||
"strings"
|
||||
|
||||
"github.com/prometheus/procfs/internal/util"
|
||||
)
|
||||
|
||||
// Crypto holds info parsed from /proc/crypto.
|
||||
type Crypto struct {
|
||||
Alignmask *uint64
|
||||
Async bool
|
||||
Blocksize *uint64
|
||||
Chunksize *uint64
|
||||
Ctxsize *uint64
|
||||
Digestsize *uint64
|
||||
Driver string
|
||||
Geniv string
|
||||
Internal string
|
||||
Ivsize *uint64
|
||||
Maxauthsize *uint64
|
||||
MaxKeysize *uint64
|
||||
MinKeysize *uint64
|
||||
Module string
|
||||
Name string
|
||||
Priority *int64
|
||||
Refcnt *int64
|
||||
Seedsize *uint64
|
||||
Selftest string
|
||||
Type string
|
||||
Walksize *uint64
|
||||
}
|
||||
|
||||
// Crypto parses an crypto-file (/proc/crypto) and returns a slice of
|
||||
// structs containing the relevant info. More information available here:
|
||||
// https://kernel.readthedocs.io/en/sphinx-samples/crypto-API.html
|
||||
func (fs FS) Crypto() ([]Crypto, error) {
|
||||
data, err := ioutil.ReadFile(fs.proc.Path("crypto"))
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("error parsing crypto %s: %s", fs.proc.Path("crypto"), err)
|
||||
}
|
||||
crypto, err := parseCrypto(data)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("error parsing crypto %s: %s", fs.proc.Path("crypto"), err)
|
||||
}
|
||||
return crypto, nil
|
||||
}
|
||||
|
||||
func parseCrypto(cryptoData []byte) ([]Crypto, error) {
|
||||
crypto := []Crypto{}
|
||||
|
||||
cryptoBlocks := bytes.Split(cryptoData, []byte("\n\n"))
|
||||
|
||||
for _, block := range cryptoBlocks {
|
||||
var newCryptoElem Crypto
|
||||
|
||||
lines := strings.Split(string(block), "\n")
|
||||
for _, line := range lines {
|
||||
if strings.TrimSpace(line) == "" || line[0] == ' ' {
|
||||
continue
|
||||
}
|
||||
fields := strings.Split(line, ":")
|
||||
key := strings.TrimSpace(fields[0])
|
||||
value := strings.TrimSpace(fields[1])
|
||||
vp := util.NewValueParser(value)
|
||||
|
||||
switch strings.TrimSpace(key) {
|
||||
case "async":
|
||||
b, err := strconv.ParseBool(value)
|
||||
if err == nil {
|
||||
newCryptoElem.Async = b
|
||||
}
|
||||
case "blocksize":
|
||||
newCryptoElem.Blocksize = vp.PUInt64()
|
||||
case "chunksize":
|
||||
newCryptoElem.Chunksize = vp.PUInt64()
|
||||
case "digestsize":
|
||||
newCryptoElem.Digestsize = vp.PUInt64()
|
||||
case "driver":
|
||||
newCryptoElem.Driver = value
|
||||
case "geniv":
|
||||
newCryptoElem.Geniv = value
|
||||
case "internal":
|
||||
newCryptoElem.Internal = value
|
||||
case "ivsize":
|
||||
newCryptoElem.Ivsize = vp.PUInt64()
|
||||
case "maxauthsize":
|
||||
newCryptoElem.Maxauthsize = vp.PUInt64()
|
||||
case "max keysize":
|
||||
newCryptoElem.MaxKeysize = vp.PUInt64()
|
||||
case "min keysize":
|
||||
newCryptoElem.MinKeysize = vp.PUInt64()
|
||||
case "module":
|
||||
newCryptoElem.Module = value
|
||||
case "name":
|
||||
newCryptoElem.Name = value
|
||||
case "priority":
|
||||
newCryptoElem.Priority = vp.PInt64()
|
||||
case "refcnt":
|
||||
newCryptoElem.Refcnt = vp.PInt64()
|
||||
case "seedsize":
|
||||
newCryptoElem.Seedsize = vp.PUInt64()
|
||||
case "selftest":
|
||||
newCryptoElem.Selftest = value
|
||||
case "type":
|
||||
newCryptoElem.Type = value
|
||||
case "walksize":
|
||||
newCryptoElem.Walksize = vp.PUInt64()
|
||||
}
|
||||
}
|
||||
crypto = append(crypto, newCryptoElem)
|
||||
}
|
||||
return crypto, nil
|
||||
}
|
vendor/github.com/prometheus/procfs/fixtures.ttar (generated, vendored): 3778 lines changed
File diff suppressed because it is too large.
vendor/github.com/prometheus/procfs/go.mod (generated, vendored): 7 lines changed
@@ -1,3 +1,8 @@
module github.com/prometheus/procfs

-require golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4
+go 1.12
+
+require (
+    github.com/google/go-cmp v0.3.1
+    golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e
+)
vendor/github.com/prometheus/procfs/go.sum (generated, vendored): 6 lines changed
@@ -1,2 +1,4 @@
-golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4 h1:YUO/7uOKsKeq9UokNS62b8FYywz3ker1l1vDZRCRefw=
-golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+github.com/google/go-cmp v0.3.1 h1:Xye71clBPdm5HgqGwUkwhbynsUJZhDbS20FvLhQ2izg=
+github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
+golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e h1:vcxGaoTs7kV8m5Np9uUNQin4BrLOthgV7252N8V+FwY=
+golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
vendor/github.com/prometheus/procfs/internal/fs/fs.go (generated, vendored): 3 lines changed
@@ -25,6 +25,9 @@ const (

    // DefaultSysMountPoint is the common mount point of the sys filesystem.
    DefaultSysMountPoint = "/sys"

+    // DefaultConfigfsMountPoint is the common mount point of the configfs
+    DefaultConfigfsMountPoint = "/sys/kernel/config"
)

// FS represents a pseudo-filesystem, normally /proc or /sys, which provides an
vendor/github.com/prometheus/procfs/internal/util/parse.go (generated, vendored, new file): 88 lines added
@@ -0,0 +1,88 @@
// Copyright 2018 The Prometheus Authors
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package util
|
||||
|
||||
import (
|
||||
"io/ioutil"
|
||||
"strconv"
|
||||
"strings"
|
||||
)
|
||||
|
||||
// ParseUint32s parses a slice of strings into a slice of uint32s.
|
||||
func ParseUint32s(ss []string) ([]uint32, error) {
|
||||
us := make([]uint32, 0, len(ss))
|
||||
for _, s := range ss {
|
||||
u, err := strconv.ParseUint(s, 10, 32)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
us = append(us, uint32(u))
|
||||
}
|
||||
|
||||
return us, nil
|
||||
}
|
||||
|
||||
// ParseUint64s parses a slice of strings into a slice of uint64s.
|
||||
func ParseUint64s(ss []string) ([]uint64, error) {
|
||||
us := make([]uint64, 0, len(ss))
|
||||
for _, s := range ss {
|
||||
u, err := strconv.ParseUint(s, 10, 64)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
us = append(us, u)
|
||||
}
|
||||
|
||||
return us, nil
|
||||
}
|
||||
|
||||
// ParsePInt64s parses a slice of strings into a slice of int64 pointers.
|
||||
func ParsePInt64s(ss []string) ([]*int64, error) {
|
||||
us := make([]*int64, 0, len(ss))
|
||||
for _, s := range ss {
|
||||
u, err := strconv.ParseInt(s, 10, 64)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
us = append(us, &u)
|
||||
}
|
||||
|
||||
return us, nil
|
||||
}
|
||||
|
||||
// ReadUintFromFile reads a file and attempts to parse a uint64 from it.
|
||||
func ReadUintFromFile(path string) (uint64, error) {
|
||||
data, err := ioutil.ReadFile(path)
|
||||
if err != nil {
|
||||
return 0, err
|
||||
}
|
||||
return strconv.ParseUint(strings.TrimSpace(string(data)), 10, 64)
|
||||
}
|
||||
|
||||
// ParseBool parses a string into a boolean pointer.
|
||||
func ParseBool(b string) *bool {
|
||||
var truth bool
|
||||
switch b {
|
||||
case "enabled":
|
||||
truth = true
|
||||
case "disabled":
|
||||
truth = false
|
||||
default:
|
||||
return nil
|
||||
}
|
||||
return &truth
|
||||
}
|
vendor/github.com/prometheus/procfs/internal/util/readfile.go (generated, vendored, new file): 38 lines added
@@ -0,0 +1,38 @@
// Copyright 2019 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package util

import (
    "io"
    "io/ioutil"
    "os"
)

// ReadFileNoStat uses ioutil.ReadAll to read contents of entire file.
// This is similar to ioutil.ReadFile but without the call to os.Stat, because
// many files in /proc and /sys report incorrect file sizes (either 0 or 4096).
// Reads a max file size of 512kB. For files larger than this, a scanner
// should be used.
func ReadFileNoStat(filename string) ([]byte, error) {
    const maxBufferSize = 1024 * 512

    f, err := os.Open(filename)
    if err != nil {
        return nil, err
    }
    defer f.Close()

    reader := io.LimitReader(f, maxBufferSize)
    return ioutil.ReadAll(reader)
}
vendor/github.com/prometheus/procfs/internal/util/sysreadfile.go (generated, vendored, new file): 48 lines added
@@ -0,0 +1,48 @@
// Copyright 2018 The Prometheus Authors
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
// +build linux,!appengine
|
||||
|
||||
package util
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"os"
|
||||
"syscall"
|
||||
)
|
||||
|
||||
// SysReadFile is a simplified ioutil.ReadFile that invokes syscall.Read directly.
|
||||
// https://github.com/prometheus/node_exporter/pull/728/files
|
||||
//
|
||||
// Note that this function will not read files larger than 128 bytes.
|
||||
func SysReadFile(file string) (string, error) {
|
||||
f, err := os.Open(file)
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
defer f.Close()
|
||||
|
||||
// On some machines, hwmon drivers are broken and return EAGAIN. This causes
|
||||
// Go's ioutil.ReadFile implementation to poll forever.
|
||||
//
|
||||
// Since we either want to read data or bail immediately, do the simplest
|
||||
// possible read using syscall directly.
|
||||
const sysFileBufferSize = 128
|
||||
b := make([]byte, sysFileBufferSize)
|
||||
n, err := syscall.Read(int(f.Fd()), b)
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
|
||||
return string(bytes.TrimSpace(b[:n])), nil
|
||||
}
|
vendor/github.com/prometheus/procfs/internal/util/sysreadfile_compat.go (generated, vendored, new file): 26 lines added
@@ -0,0 +1,26 @@
// Copyright 2019 The Prometheus Authors
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
// +build linux,appengine !linux
|
||||
|
||||
package util
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
)
|
||||
|
||||
// SysReadFile is here implemented as a noop for builds that do not support
|
||||
// the read syscall. For example Windows, or Linux on Google App Engine.
|
||||
func SysReadFile(file string) (string, error) {
|
||||
return "", fmt.Errorf("not supported on this platform")
|
||||
}
|
vendor/github.com/prometheus/procfs/internal/util/valueparser.go (generated, vendored, new file): 91 lines added
@@ -0,0 +1,91 @@
// Copyright 2019 The Prometheus Authors
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package util
|
||||
|
||||
import (
|
||||
"strconv"
|
||||
)
|
||||
|
||||
// TODO(mdlayher): util packages are an anti-pattern and this should be moved
|
||||
// somewhere else that is more focused in the future.
|
||||
|
||||
// A ValueParser enables parsing a single string into a variety of data types
|
||||
// in a concise and safe way. The Err method must be invoked after invoking
|
||||
// any other methods to ensure a value was successfully parsed.
|
||||
type ValueParser struct {
|
||||
v string
|
||||
err error
|
||||
}
|
||||
|
||||
// NewValueParser creates a ValueParser using the input string.
|
||||
func NewValueParser(v string) *ValueParser {
|
||||
return &ValueParser{v: v}
|
||||
}
|
||||
|
||||
// Int interprets the underlying value as an int and returns that value.
|
||||
func (vp *ValueParser) Int() int { return int(vp.int64()) }
|
||||
|
||||
// PInt64 interprets the underlying value as an int64 and returns a pointer to
|
||||
// that value.
|
||||
func (vp *ValueParser) PInt64() *int64 {
|
||||
if vp.err != nil {
|
||||
return nil
|
||||
}
|
||||
|
||||
v := vp.int64()
|
||||
return &v
|
||||
}
|
||||
|
||||
// int64 interprets the underlying value as an int64 and returns that value.
|
||||
// TODO: export if/when necessary.
|
||||
func (vp *ValueParser) int64() int64 {
|
||||
if vp.err != nil {
|
||||
return 0
|
||||
}
|
||||
|
||||
// A base value of zero makes ParseInt infer the correct base using the
|
||||
// string's prefix, if any.
|
||||
const base = 0
|
||||
v, err := strconv.ParseInt(vp.v, base, 64)
|
||||
if err != nil {
|
||||
vp.err = err
|
||||
return 0
|
||||
}
|
||||
|
||||
return v
|
||||
}
|
||||
|
||||
// PUInt64 interprets the underlying value as an uint64 and returns a pointer to
|
||||
// that value.
|
||||
func (vp *ValueParser) PUInt64() *uint64 {
|
||||
if vp.err != nil {
|
||||
return nil
|
||||
}
|
||||
|
||||
// A base value of zero makes ParseInt infer the correct base using the
|
||||
// string's prefix, if any.
|
||||
const base = 0
|
||||
v, err := strconv.ParseUint(vp.v, base, 64)
|
||||
if err != nil {
|
||||
vp.err = err
|
||||
return nil
|
||||
}
|
||||
|
||||
return &v
|
||||
}
|
||||
|
||||
// Err returns the last error, if any, encountered by the ValueParser.
|
||||
func (vp *ValueParser) Err() error {
|
||||
return vp.err
|
||||
}
|
vendor/github.com/prometheus/procfs/ipvs.go (generated, vendored): 12 lines changed
@@ -15,6 +15,7 @@ package procfs

import (
    "bufio"
+    "bytes"
    "encoding/hex"
    "errors"
    "fmt"
@@ -24,6 +25,8 @@ import (
    "os"
    "strconv"
    "strings"
+
+    "github.com/prometheus/procfs/internal/util"
)

// IPVSStats holds IPVS statistics, as exposed by the kernel in `/proc/net/ip_vs_stats`.
@@ -64,17 +67,16 @@ type IPVSBackendStatus struct {

// IPVSStats reads the IPVS statistics from the specified `proc` filesystem.
func (fs FS) IPVSStats() (IPVSStats, error) {
-    file, err := os.Open(fs.proc.Path("net/ip_vs_stats"))
+    data, err := util.ReadFileNoStat(fs.proc.Path("net/ip_vs_stats"))
    if err != nil {
        return IPVSStats{}, err
    }
-    defer file.Close()

-    return parseIPVSStats(file)
+    return parseIPVSStats(bytes.NewReader(data))
}

// parseIPVSStats performs the actual parsing of `ip_vs_stats`.
-func parseIPVSStats(file io.Reader) (IPVSStats, error) {
+func parseIPVSStats(r io.Reader) (IPVSStats, error) {
    var (
        statContent []byte
        statLines   []string
@@ -82,7 +84,7 @@ func parseIPVSStats(file io.Reader) (IPVSStats, error) {
        stats       IPVSStats
    )

-    statContent, err := ioutil.ReadAll(file)
+    statContent, err := ioutil.ReadAll(r)
    if err != nil {
        return IPVSStats{}, err
    }
vendor/github.com/prometheus/procfs/mdstat.go (generated, vendored): 111 lines changed
@@ -22,8 +22,8 @@ import (
)
|
||||
|
||||
var (
|
||||
statuslineRE = regexp.MustCompile(`(\d+) blocks .*\[(\d+)/(\d+)\] \[[U_]+\]`)
|
||||
buildlineRE = regexp.MustCompile(`\((\d+)/\d+\)`)
|
||||
statusLineRE = regexp.MustCompile(`(\d+) blocks .*\[(\d+)/(\d+)\] \[[U_]+\]`)
|
||||
recoveryLineRE = regexp.MustCompile(`\((\d+)/\d+\)`)
|
||||
)
|
||||
|
||||
// MDStat holds info parsed from /proc/mdstat.
|
||||
@@ -34,8 +34,12 @@ type MDStat struct {
|
||||
ActivityState string
|
||||
// Number of active disks.
|
||||
DisksActive int64
|
||||
// Total number of disks the device consists of.
|
||||
// Total number of disks the device requires.
|
||||
DisksTotal int64
|
||||
// Number of failed disks.
|
||||
DisksFailed int64
|
||||
// Spare disks in the device.
|
||||
DisksSpare int64
|
||||
// Number of blocks the device holds.
|
||||
BlocksTotal int64
|
||||
// Number of blocks on the device that are in sync.
|
||||
@@ -59,29 +63,38 @@ func (fs FS) MDStat() ([]MDStat, error) {
|
||||
|
||||
// parseMDStat parses data from mdstat file (/proc/mdstat) and returns a slice of
|
||||
// structs containing the relevant info.
|
||||
func parseMDStat(mdstatData []byte) ([]MDStat, error) {
|
||||
func parseMDStat(mdStatData []byte) ([]MDStat, error) {
|
||||
mdStats := []MDStat{}
|
||||
lines := strings.Split(string(mdstatData), "\n")
|
||||
for i, l := range lines {
|
||||
if strings.TrimSpace(l) == "" || l[0] == ' ' ||
|
||||
strings.HasPrefix(l, "Personalities") || strings.HasPrefix(l, "unused") {
|
||||
lines := strings.Split(string(mdStatData), "\n")
|
||||
|
||||
for i, line := range lines {
|
||||
if strings.TrimSpace(line) == "" || line[0] == ' ' ||
|
||||
strings.HasPrefix(line, "Personalities") ||
|
||||
strings.HasPrefix(line, "unused") {
|
||||
continue
|
||||
}
|
||||
|
||||
deviceFields := strings.Fields(l)
|
||||
deviceFields := strings.Fields(line)
|
||||
if len(deviceFields) < 3 {
|
||||
return nil, fmt.Errorf("not enough fields in mdline (expected at least 3): %s", l)
|
||||
return nil, fmt.Errorf("not enough fields in mdline (expected at least 3): %s", line)
|
||||
}
|
||||
mdName := deviceFields[0]
|
||||
activityState := deviceFields[2]
|
||||
mdName := deviceFields[0] // mdx
|
||||
state := deviceFields[2] // active or inactive
|
||||
|
||||
if len(lines) <= i+3 {
|
||||
return mdStats, fmt.Errorf("missing lines for md device %s", mdName)
|
||||
return nil, fmt.Errorf(
|
||||
"error parsing %s: too few lines for md device",
|
||||
mdName,
|
||||
)
|
||||
}
|
||||
|
||||
active, total, size, err := evalStatusLine(lines[i+1])
|
||||
// Failed disks have the suffix (F) & Spare disks have the suffix (S).
|
||||
fail := int64(strings.Count(line, "(F)"))
|
||||
spare := int64(strings.Count(line, "(S)"))
|
||||
active, total, size, err := evalStatusLine(lines[i], lines[i+1])
|
||||
|
||||
if err != nil {
|
||||
return nil, err
|
||||
return nil, fmt.Errorf("error parsing md device lines: %s", err)
|
||||
}
|
||||
|
||||
syncLineIdx := i + 2
|
||||
@@ -89,20 +102,38 @@ func parseMDStat(mdstatData []byte) ([]MDStat, error) {
|
||||
syncLineIdx++
|
||||
}
|
||||
|
||||
// If device is recovering/syncing at the moment, get the number of currently
|
||||
// If device is syncing at the moment, get the number of currently
|
||||
// synced bytes, otherwise that number equals the size of the device.
|
||||
syncedBlocks := size
|
||||
if strings.Contains(lines[syncLineIdx], "recovery") || strings.Contains(lines[syncLineIdx], "resync") {
|
||||
syncedBlocks, err = evalRecoveryLine(lines[syncLineIdx])
|
||||
if err != nil {
|
||||
return nil, err
|
||||
recovering := strings.Contains(lines[syncLineIdx], "recovery")
|
||||
resyncing := strings.Contains(lines[syncLineIdx], "resync")
|
||||
|
||||
// Append recovery and resyncing state info.
|
||||
if recovering || resyncing {
|
||||
if recovering {
|
||||
state = "recovering"
|
||||
} else {
|
||||
state = "resyncing"
|
||||
}
|
||||
|
||||
// Handle case when resync=PENDING or resync=DELAYED.
|
||||
if strings.Contains(lines[syncLineIdx], "PENDING") ||
|
||||
strings.Contains(lines[syncLineIdx], "DELAYED") {
|
||||
syncedBlocks = 0
|
||||
} else {
|
||||
syncedBlocks, err = evalRecoveryLine(lines[syncLineIdx])
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("error parsing sync line in md device %s: %s", mdName, err)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
mdStats = append(mdStats, MDStat{
|
||||
Name: mdName,
|
||||
ActivityState: activityState,
|
||||
ActivityState: state,
|
||||
DisksActive: active,
|
||||
DisksFailed: fail,
|
||||
DisksSpare: spare,
|
||||
DisksTotal: total,
|
||||
BlocksTotal: size,
|
||||
BlocksSynced: syncedBlocks,
|
||||
@@ -112,39 +143,51 @@ func parseMDStat(mdstatData []byte) ([]MDStat, error) {
|
||||
return mdStats, nil
|
||||
}
|
||||
|
||||
func evalStatusLine(statusline string) (active, total, size int64, err error) {
|
||||
matches := statuslineRE.FindStringSubmatch(statusline)
|
||||
if len(matches) != 4 {
|
||||
return 0, 0, 0, fmt.Errorf("unexpected statusline: %s", statusline)
|
||||
func evalStatusLine(deviceLine, statusLine string) (active, total, size int64, err error) {
|
||||
|
||||
sizeStr := strings.Fields(statusLine)[0]
|
||||
size, err = strconv.ParseInt(sizeStr, 10, 64)
|
||||
if err != nil {
|
||||
return 0, 0, 0, fmt.Errorf("unexpected statusLine %s: %s", statusLine, err)
|
||||
}
|
||||
|
||||
size, err = strconv.ParseInt(matches[1], 10, 64)
|
||||
if err != nil {
|
||||
return 0, 0, 0, fmt.Errorf("unexpected statusline %s: %s", statusline, err)
|
||||
if strings.Contains(deviceLine, "raid0") || strings.Contains(deviceLine, "linear") {
|
||||
// In the device deviceLine, only disks have a number associated with them in [].
|
||||
total = int64(strings.Count(deviceLine, "["))
|
||||
return total, total, size, nil
|
||||
}
|
||||
|
||||
if strings.Contains(deviceLine, "inactive") {
|
||||
return 0, 0, size, nil
|
||||
}
|
||||
|
||||
matches := statusLineRE.FindStringSubmatch(statusLine)
|
||||
if len(matches) != 4 {
|
||||
return 0, 0, 0, fmt.Errorf("couldn't find all the substring matches: %s", statusLine)
|
||||
}
|
||||
|
||||
total, err = strconv.ParseInt(matches[2], 10, 64)
|
||||
if err != nil {
|
||||
return 0, 0, 0, fmt.Errorf("unexpected statusline %s: %s", statusline, err)
|
||||
return 0, 0, 0, fmt.Errorf("unexpected statusLine %s: %s", statusLine, err)
|
||||
}
|
||||
|
||||
active, err = strconv.ParseInt(matches[3], 10, 64)
|
||||
if err != nil {
|
||||
return 0, 0, 0, fmt.Errorf("unexpected statusline %s: %s", statusline, err)
|
||||
return 0, 0, 0, fmt.Errorf("unexpected statusLine %s: %s", statusLine, err)
|
||||
}
|
||||
|
||||
return active, total, size, nil
|
||||
}
|
||||
|
||||
func evalRecoveryLine(buildline string) (syncedBlocks int64, err error) {
|
||||
matches := buildlineRE.FindStringSubmatch(buildline)
|
||||
func evalRecoveryLine(recoveryLine string) (syncedBlocks int64, err error) {
|
||||
matches := recoveryLineRE.FindStringSubmatch(recoveryLine)
|
||||
if len(matches) != 2 {
|
||||
return 0, fmt.Errorf("unexpected buildline: %s", buildline)
|
||||
return 0, fmt.Errorf("unexpected recoveryLine: %s", recoveryLine)
|
||||
}
|
||||
|
||||
syncedBlocks, err = strconv.ParseInt(matches[1], 10, 64)
|
||||
if err != nil {
|
||||
return 0, fmt.Errorf("%s in buildline: %s", err, buildline)
|
||||
return 0, fmt.Errorf("%s in recoveryLine: %s", err, recoveryLine)
|
||||
}
|
||||
|
||||
return syncedBlocks, nil
|
||||
|
vendor/github.com/prometheus/procfs/meminfo.go (generated, vendored, new file): 277 lines added
@@ -0,0 +1,277 @@
// Copyright 2019 The Prometheus Authors
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package procfs
|
||||
|
||||
import (
|
||||
"bufio"
|
||||
"bytes"
|
||||
"fmt"
|
||||
"io"
|
||||
"strconv"
|
||||
"strings"
|
||||
|
||||
"github.com/prometheus/procfs/internal/util"
|
||||
)
|
||||
|
||||
// Meminfo represents memory statistics.
|
||||
type Meminfo struct {
|
||||
// Total usable ram (i.e. physical ram minus a few reserved
|
||||
// bits and the kernel binary code)
|
||||
MemTotal uint64
|
||||
// The sum of LowFree+HighFree
|
||||
MemFree uint64
|
||||
// An estimate of how much memory is available for starting
|
||||
// new applications, without swapping. Calculated from
|
||||
// MemFree, SReclaimable, the size of the file LRU lists, and
|
||||
// the low watermarks in each zone. The estimate takes into
|
||||
// account that the system needs some page cache to function
|
||||
// well, and that not all reclaimable slab will be
|
||||
// reclaimable, due to items being in use. The impact of those
|
||||
// factors will vary from system to system.
|
||||
MemAvailable uint64
|
||||
// Relatively temporary storage for raw disk blocks shouldn't
|
||||
// get tremendously large (20MB or so)
|
||||
Buffers uint64
|
||||
Cached uint64
|
||||
// Memory that once was swapped out, is swapped back in but
|
||||
// still also is in the swapfile (if memory is needed it
|
||||
// doesn't need to be swapped out AGAIN because it is already
|
||||
// in the swapfile. This saves I/O)
|
||||
SwapCached uint64
|
||||
// Memory that has been used more recently and usually not
|
||||
// reclaimed unless absolutely necessary.
|
||||
Active uint64
|
||||
// Memory which has been less recently used. It is more
|
||||
// eligible to be reclaimed for other purposes
|
||||
Inactive uint64
|
||||
ActiveAnon uint64
|
||||
InactiveAnon uint64
|
||||
ActiveFile uint64
|
||||
InactiveFile uint64
|
||||
Unevictable uint64
|
||||
Mlocked uint64
|
||||
// total amount of swap space available
|
||||
SwapTotal uint64
|
||||
// Memory which has been evicted from RAM, and is temporarily
|
||||
// on the disk
|
||||
SwapFree uint64
|
||||
// Memory which is waiting to get written back to the disk
|
||||
Dirty uint64
|
||||
// Memory which is actively being written back to the disk
|
||||
Writeback uint64
|
||||
// Non-file backed pages mapped into userspace page tables
|
||||
AnonPages uint64
|
||||
// files which have been mapped, such as libraries
|
||||
Mapped uint64
|
||||
Shmem uint64
|
||||
// in-kernel data structures cache
|
||||
Slab uint64
|
||||
// Part of Slab, that might be reclaimed, such as caches
|
||||
SReclaimable uint64
|
||||
// Part of Slab, that cannot be reclaimed on memory pressure
|
||||
SUnreclaim uint64
|
||||
KernelStack uint64
|
||||
// amount of memory dedicated to the lowest level of page
|
||||
// tables.
|
||||
PageTables uint64
|
||||
// NFS pages sent to the server, but not yet committed to
|
||||
// stable storage
|
||||
NFSUnstable uint64
|
||||
// Memory used for block device "bounce buffers"
|
||||
Bounce uint64
|
||||
// Memory used by FUSE for temporary writeback buffers
|
||||
WritebackTmp uint64
|
||||
// Based on the overcommit ratio ('vm.overcommit_ratio'),
|
||||
// this is the total amount of memory currently available to
|
||||
// be allocated on the system. This limit is only adhered to
|
||||
// if strict overcommit accounting is enabled (mode 2 in
|
||||
// 'vm.overcommit_memory').
|
||||
// The CommitLimit is calculated with the following formula:
|
||||
// CommitLimit = ([total RAM pages] - [total huge TLB pages]) *
|
||||
// overcommit_ratio / 100 + [total swap pages]
|
||||
// For example, on a system with 1G of physical RAM and 7G
|
||||
// of swap with a `vm.overcommit_ratio` of 30 it would
|
||||
// yield a CommitLimit of 7.3G.
|
||||
// For more details, see the memory overcommit documentation
|
||||
// in vm/overcommit-accounting.
|
||||
CommitLimit uint64
|
||||
// The amount of memory presently allocated on the system.
|
||||
// The committed memory is a sum of all of the memory which
|
||||
// has been allocated by processes, even if it has not been
|
||||
// "used" by them as of yet. A process which malloc()'s 1G
|
||||
// of memory, but only touches 300M of it will show up as
|
||||
// using 1G. This 1G is memory which has been "committed" to
|
||||
// by the VM and can be used at any time by the allocating
|
||||
// application. With strict overcommit enabled on the system
|
||||
// (mode 2 in 'vm.overcommit_memory'),allocations which would
|
||||
// exceed the CommitLimit (detailed above) will not be permitted.
|
||||
// This is useful if one needs to guarantee that processes will
|
||||
// not fail due to lack of memory once that memory has been
|
||||
// successfully allocated.
|
||||
CommittedAS uint64
|
||||
// total size of vmalloc memory area
|
||||
VmallocTotal uint64
|
||||
// amount of vmalloc area which is used
|
||||
VmallocUsed uint64
|
||||
// largest contiguous block of vmalloc area which is free
|
||||
VmallocChunk uint64
|
||||
HardwareCorrupted uint64
|
||||
AnonHugePages uint64
|
||||
ShmemHugePages uint64
|
||||
ShmemPmdMapped uint64
|
||||
CmaTotal uint64
|
||||
CmaFree uint64
|
||||
HugePagesTotal uint64
|
||||
HugePagesFree uint64
|
||||
HugePagesRsvd uint64
|
||||
HugePagesSurp uint64
|
||||
Hugepagesize uint64
|
||||
DirectMap4k uint64
|
||||
DirectMap2M uint64
|
||||
DirectMap1G uint64
|
||||
}
|
||||
|
||||
// Meminfo returns an information about current kernel/system memory statistics.
|
||||
// See https://www.kernel.org/doc/Documentation/filesystems/proc.txt
|
||||
func (fs FS) Meminfo() (Meminfo, error) {
|
||||
b, err := util.ReadFileNoStat(fs.proc.Path("meminfo"))
|
||||
if err != nil {
|
||||
return Meminfo{}, err
|
||||
}
|
||||
|
||||
m, err := parseMemInfo(bytes.NewReader(b))
|
||||
if err != nil {
|
||||
return Meminfo{}, fmt.Errorf("failed to parse meminfo: %v", err)
|
||||
}
|
||||
|
||||
return *m, nil
|
||||
}
|
||||
|
||||
func parseMemInfo(r io.Reader) (*Meminfo, error) {
|
||||
var m Meminfo
|
||||
s := bufio.NewScanner(r)
|
||||
for s.Scan() {
|
||||
// Each line has at least a name and value; we ignore the unit.
|
||||
fields := strings.Fields(s.Text())
|
||||
if len(fields) < 2 {
|
||||
return nil, fmt.Errorf("malformed meminfo line: %q", s.Text())
|
||||
}
|
||||
|
||||
v, err := strconv.ParseUint(fields[1], 0, 64)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
switch fields[0] {
|
||||
case "MemTotal:":
|
||||
m.MemTotal = v
|
||||
case "MemFree:":
|
||||
m.MemFree = v
|
||||
case "MemAvailable:":
|
||||
m.MemAvailable = v
|
||||
case "Buffers:":
|
||||
m.Buffers = v
|
||||
case "Cached:":
|
||||
m.Cached = v
|
||||
case "SwapCached:":
|
||||
m.SwapCached = v
|
||||
case "Active:":
|
||||
m.Active = v
|
||||
case "Inactive:":
|
||||
m.Inactive = v
|
||||
case "Active(anon):":
|
||||
m.ActiveAnon = v
|
||||
case "Inactive(anon):":
|
||||
m.InactiveAnon = v
|
||||
case "Active(file):":
|
||||
m.ActiveFile = v
|
||||
case "Inactive(file):":
|
||||
m.InactiveFile = v
|
||||
case "Unevictable:":
|
||||
m.Unevictable = v
|
||||
case "Mlocked:":
|
||||
m.Mlocked = v
|
||||
case "SwapTotal:":
|
||||
m.SwapTotal = v
|
||||
case "SwapFree:":
|
||||
m.SwapFree = v
|
||||
case "Dirty:":
|
||||
m.Dirty = v
|
||||
case "Writeback:":
|
||||
m.Writeback = v
|
||||
case "AnonPages:":
|
||||
m.AnonPages = v
|
||||
case "Mapped:":
|
||||
m.Mapped = v
|
||||
case "Shmem:":
|
||||
m.Shmem = v
|
||||
case "Slab:":
|
||||
m.Slab = v
|
||||
case "SReclaimable:":
|
||||
m.SReclaimable = v
|
||||
case "SUnreclaim:":
|
||||
m.SUnreclaim = v
|
||||
case "KernelStack:":
|
||||
m.KernelStack = v
|
||||
case "PageTables:":
|
||||
m.PageTables = v
|
||||
case "NFS_Unstable:":
|
||||
m.NFSUnstable = v
|
||||
case "Bounce:":
|
||||
m.Bounce = v
|
||||
case "WritebackTmp:":
|
||||
m.WritebackTmp = v
|
||||
case "CommitLimit:":
|
||||
m.CommitLimit = v
|
||||
case "Committed_AS:":
|
||||
m.CommittedAS = v
|
||||
case "VmallocTotal:":
|
||||
m.VmallocTotal = v
|
||||
case "VmallocUsed:":
|
||||
m.VmallocUsed = v
|
||||
case "VmallocChunk:":
|
||||
m.VmallocChunk = v
|
||||
case "HardwareCorrupted:":
|
||||
m.HardwareCorrupted = v
|
||||
case "AnonHugePages:":
|
||||
m.AnonHugePages = v
|
||||
case "ShmemHugePages:":
|
||||
m.ShmemHugePages = v
|
||||
case "ShmemPmdMapped:":
|
||||
m.ShmemPmdMapped = v
|
||||
case "CmaTotal:":
|
||||
m.CmaTotal = v
|
||||
case "CmaFree:":
|
||||
m.CmaFree = v
|
||||
case "HugePages_Total:":
|
||||
m.HugePagesTotal = v
|
||||
case "HugePages_Free:":
|
||||
m.HugePagesFree = v
|
||||
case "HugePages_Rsvd:":
|
||||
m.HugePagesRsvd = v
|
||||
case "HugePages_Surp:":
|
||||
m.HugePagesSurp = v
|
||||
case "Hugepagesize:":
|
||||
m.Hugepagesize = v
|
||||
case "DirectMap4k:":
|
||||
m.DirectMap4k = v
|
||||
case "DirectMap2M:":
|
||||
m.DirectMap2M = v
|
||||
case "DirectMap1G:":
|
||||
m.DirectMap1G = v
|
||||
}
|
||||
}
|
||||
|
||||
return &m, nil
|
||||
}
|
180
vendor/github.com/prometheus/procfs/mountinfo.go
generated
vendored
Normal file
@ -0,0 +1,180 @@
|
||||
// Copyright 2019 The Prometheus Authors
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package procfs
|
||||
|
||||
import (
|
||||
"bufio"
|
||||
"bytes"
|
||||
"fmt"
|
||||
"strconv"
|
||||
"strings"
|
||||
|
||||
"github.com/prometheus/procfs/internal/util"
|
||||
)
|
||||
|
||||
// A MountInfo is a type that describes the details and options
// for each mount, parsed from /proc/self/mountinfo.
// The fields of each entry in /proc/self/mountinfo
// are described in the following man page:
// http://man7.org/linux/man-pages/man5/proc.5.html
|
||||
type MountInfo struct {
|
||||
// Unique Id for the mount
|
||||
MountId int
|
||||
// The Id of the parent mount
|
||||
ParentId int
|
||||
// The value of `st_dev` for the files on this FS
|
||||
MajorMinorVer string
|
||||
// The pathname of the directory in the FS that forms
|
||||
// the root for this mount
|
||||
Root string
|
||||
// The pathname of the mount point relative to the root
|
||||
MountPoint string
|
||||
// Mount options
|
||||
Options map[string]string
|
||||
// Zero or more optional fields
|
||||
OptionalFields map[string]string
|
||||
// The Filesystem type
|
||||
FSType string
|
||||
// FS specific information or "none"
|
||||
Source string
|
||||
// Superblock options
|
||||
SuperOptions map[string]string
|
||||
}
|
||||
|
||||
// Reads each line of the mountinfo file and returns a list of parsed MountInfo structs.
|
||||
func parseMountInfo(info []byte) ([]*MountInfo, error) {
|
||||
mounts := []*MountInfo{}
|
||||
scanner := bufio.NewScanner(bytes.NewReader(info))
|
||||
for scanner.Scan() {
|
||||
mountString := scanner.Text()
|
||||
parsedMounts, err := parseMountInfoString(mountString)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
mounts = append(mounts, parsedMounts)
|
||||
}
|
||||
|
||||
err := scanner.Err()
|
||||
return mounts, err
|
||||
}
|
||||
|
||||
// Parses a mountinfo file line and converts it to a MountInfo struct.
// An important check here is for the hyphen separator; if it is missing,
// the line is malformed.
|
||||
func parseMountInfoString(mountString string) (*MountInfo, error) {
|
||||
var err error
|
||||
|
||||
mountInfo := strings.Split(mountString, " ")
|
||||
mountInfoLength := len(mountInfo)
|
||||
if mountInfoLength < 11 {
|
||||
return nil, fmt.Errorf("couldn't find enough fields in mount string: %s", mountString)
|
||||
}
|
||||
|
||||
if mountInfo[mountInfoLength-4] != "-" {
|
||||
return nil, fmt.Errorf("couldn't find separator in expected field: %s", mountInfo[mountInfoLength-4])
|
||||
}
|
||||
|
||||
mount := &MountInfo{
|
||||
MajorMinorVer: mountInfo[2],
|
||||
Root: mountInfo[3],
|
||||
MountPoint: mountInfo[4],
|
||||
Options: mountOptionsParser(mountInfo[5]),
|
||||
OptionalFields: nil,
|
||||
FSType: mountInfo[mountInfoLength-3],
|
||||
Source: mountInfo[mountInfoLength-2],
|
||||
SuperOptions: mountOptionsParser(mountInfo[mountInfoLength-1]),
|
||||
}
|
||||
|
||||
mount.MountId, err = strconv.Atoi(mountInfo[0])
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("failed to parse mount ID")
|
||||
}
|
||||
mount.ParentId, err = strconv.Atoi(mountInfo[1])
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("failed to parse parent ID")
|
||||
}
|
||||
// Has optional fields, which is a space separated list of values.
|
||||
// Example: shared:2 master:7
|
||||
if mountInfo[6] != "" {
|
||||
mount.OptionalFields, err = mountOptionsParseOptionalFields(mountInfo[6 : mountInfoLength-4])
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
}
|
||||
return mount, nil
|
||||
}
|
||||
|
||||
// mountOptionsIsValidField checks a string against the list of valid optional field keys.
|
||||
func mountOptionsIsValidField(s string) bool {
|
||||
switch s {
|
||||
case
|
||||
"shared",
|
||||
"master",
|
||||
"propagate_from",
|
||||
"unbindable":
|
||||
return true
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
// mountOptionsParseOptionalFields parses a list of optional field strings into a map of keys to values.
|
||||
func mountOptionsParseOptionalFields(o []string) (map[string]string, error) {
|
||||
optionalFields := make(map[string]string)
|
||||
for _, field := range o {
|
||||
optionSplit := strings.SplitN(field, ":", 2)
|
||||
value := ""
|
||||
if len(optionSplit) == 2 {
|
||||
value = optionSplit[1]
|
||||
}
|
||||
if mountOptionsIsValidField(optionSplit[0]) {
|
||||
optionalFields[optionSplit[0]] = value
|
||||
}
|
||||
}
|
||||
return optionalFields, nil
|
||||
}
|
||||
|
||||
// Parses the mount options and superblock options into a map.
|
||||
func mountOptionsParser(mountOptions string) map[string]string {
|
||||
opts := make(map[string]string)
|
||||
options := strings.Split(mountOptions, ",")
|
||||
for _, opt := range options {
|
||||
splitOption := strings.Split(opt, "=")
|
||||
if len(splitOption) < 2 {
|
||||
key := splitOption[0]
|
||||
opts[key] = ""
|
||||
} else {
|
||||
key, value := splitOption[0], splitOption[1]
|
||||
opts[key] = value
|
||||
}
|
||||
}
|
||||
return opts
|
||||
}
|
||||
|
||||
// Retrieves mountinfo information from `/proc/self/mountinfo`.
|
||||
func GetMounts() ([]*MountInfo, error) {
|
||||
data, err := util.ReadFileNoStat("/proc/self/mountinfo")
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return parseMountInfo(data)
|
||||
}
|
||||
|
||||
// Retrieves mountinfo information from a process's `/proc/<pid>/mountinfo`.
|
||||
func GetProcMounts(pid int) ([]*MountInfo, error) {
|
||||
data, err := util.ReadFileNoStat(fmt.Sprintf("/proc/%d/mountinfo", pid))
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return parseMountInfo(data)
|
||||
}
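A hedged usage sketch for the new mountinfo API (not part of the change itself; it only relies on GetMounts and the MountInfo fields defined above):

```
package main

import (
	"fmt"
	"log"

	"github.com/prometheus/procfs"
)

func main() {
	// GetMounts reads /proc/self/mountinfo.
	mounts, err := procfs.GetMounts()
	if err != nil {
		log.Fatal(err)
	}
	for _, m := range mounts {
		// Options and SuperOptions are parsed into maps; options without a value map to "".
		fmt.Printf("%s on %s type %s (id=%d, parent=%d)\n",
			m.Source, m.MountPoint, m.FSType, m.MountId, m.ParentId)
	}
}
```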
|
1
vendor/github.com/prometheus/procfs/net_dev.go
generated
vendored
@ -183,7 +183,6 @@ func (netDev NetDev) Total() NetDevLine {
|
||||
names = append(names, ifc.Name)
|
||||
total.RxBytes += ifc.RxBytes
|
||||
total.RxPackets += ifc.RxPackets
|
||||
total.RxPackets += ifc.RxPackets
|
||||
total.RxErrors += ifc.RxErrors
|
||||
total.RxDropped += ifc.RxDropped
|
||||
total.RxFIFO += ifc.RxFIFO
|
||||
|
163
vendor/github.com/prometheus/procfs/net_sockstat.go
generated
vendored
Normal file
@ -0,0 +1,163 @@
|
||||
// Copyright 2019 The Prometheus Authors
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package procfs
|
||||
|
||||
import (
|
||||
"bufio"
|
||||
"bytes"
|
||||
"errors"
|
||||
"fmt"
|
||||
"io"
|
||||
"strings"
|
||||
|
||||
"github.com/prometheus/procfs/internal/util"
|
||||
)
|
||||
|
||||
// A NetSockstat contains the output of /proc/net/sockstat{,6} for IPv4 or IPv6,
|
||||
// respectively.
|
||||
type NetSockstat struct {
|
||||
// Used is non-nil for IPv4 sockstat results, but nil for IPv6.
|
||||
Used *int
|
||||
Protocols []NetSockstatProtocol
|
||||
}
|
||||
|
||||
// A NetSockstatProtocol contains statistics about a given socket protocol.
|
||||
// Pointer fields indicate that the value may or may not be present on any
|
||||
// given protocol.
|
||||
type NetSockstatProtocol struct {
|
||||
Protocol string
|
||||
InUse int
|
||||
Orphan *int
|
||||
TW *int
|
||||
Alloc *int
|
||||
Mem *int
|
||||
Memory *int
|
||||
}
|
||||
|
||||
// NetSockstat retrieves IPv4 socket statistics.
|
||||
func (fs FS) NetSockstat() (*NetSockstat, error) {
|
||||
return readSockstat(fs.proc.Path("net", "sockstat"))
|
||||
}
|
||||
|
||||
// NetSockstat6 retrieves IPv6 socket statistics.
|
||||
//
|
||||
// If IPv6 is disabled on this kernel, the returned error can be checked with
|
||||
// os.IsNotExist.
|
||||
func (fs FS) NetSockstat6() (*NetSockstat, error) {
|
||||
return readSockstat(fs.proc.Path("net", "sockstat6"))
|
||||
}
|
||||
|
||||
// readSockstat opens and parses a NetSockstat from the input file.
|
||||
func readSockstat(name string) (*NetSockstat, error) {
|
||||
// This file is small and can be read with one syscall.
|
||||
b, err := util.ReadFileNoStat(name)
|
||||
if err != nil {
|
||||
// Do not wrap this error so the caller can detect os.IsNotExist and
|
||||
// similar conditions.
|
||||
return nil, err
|
||||
}
|
||||
|
||||
stat, err := parseSockstat(bytes.NewReader(b))
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("failed to read sockstats from %q: %v", name, err)
|
||||
}
|
||||
|
||||
return stat, nil
|
||||
}
|
||||
|
||||
// parseSockstat reads the contents of a sockstat file and parses a NetSockstat.
|
||||
func parseSockstat(r io.Reader) (*NetSockstat, error) {
|
||||
var stat NetSockstat
|
||||
s := bufio.NewScanner(r)
|
||||
for s.Scan() {
|
||||
// Expect a minimum of a protocol and one key/value pair.
|
||||
fields := strings.Split(s.Text(), " ")
|
||||
if len(fields) < 3 {
|
||||
return nil, fmt.Errorf("malformed sockstat line: %q", s.Text())
|
||||
}
|
||||
|
||||
// The remaining fields are key/value pairs.
|
||||
kvs, err := parseSockstatKVs(fields[1:])
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("error parsing sockstat key/value pairs from %q: %v", s.Text(), err)
|
||||
}
|
||||
|
||||
// The first field is the protocol. We must trim its colon suffix.
|
||||
proto := strings.TrimSuffix(fields[0], ":")
|
||||
switch proto {
|
||||
case "sockets":
|
||||
// Special case: IPv4 has a sockets "used" key/value pair that we
|
||||
// embed at the top level of the structure.
|
||||
used := kvs["used"]
|
||||
stat.Used = &used
|
||||
default:
|
||||
// Parse all other lines as individual protocols.
|
||||
nsp := parseSockstatProtocol(kvs)
|
||||
nsp.Protocol = proto
|
||||
stat.Protocols = append(stat.Protocols, nsp)
|
||||
}
|
||||
}
|
||||
|
||||
if err := s.Err(); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
return &stat, nil
|
||||
}
|
||||
|
||||
// parseSockstatKVs parses a string slice into a map of key/value pairs.
|
||||
func parseSockstatKVs(kvs []string) (map[string]int, error) {
|
||||
if len(kvs)%2 != 0 {
|
||||
return nil, errors.New("odd number of fields in key/value pairs")
|
||||
}
|
||||
|
||||
// Iterate two values at a time to gather key/value pairs.
|
||||
out := make(map[string]int, len(kvs)/2)
|
||||
for i := 0; i < len(kvs); i += 2 {
|
||||
vp := util.NewValueParser(kvs[i+1])
|
||||
out[kvs[i]] = vp.Int()
|
||||
|
||||
if err := vp.Err(); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
}
|
||||
|
||||
return out, nil
|
||||
}
|
||||
|
||||
// parseSockstatProtocol parses a NetSockstatProtocol from the input kvs map.
|
||||
func parseSockstatProtocol(kvs map[string]int) NetSockstatProtocol {
|
||||
var nsp NetSockstatProtocol
|
||||
for k, v := range kvs {
|
||||
// Capture the range variable to ensure we get unique pointers for
|
||||
// each of the optional fields.
|
||||
v := v
|
||||
switch k {
|
||||
case "inuse":
|
||||
nsp.InUse = v
|
||||
case "orphan":
|
||||
nsp.Orphan = &v
|
||||
case "tw":
|
||||
nsp.TW = &v
|
||||
case "alloc":
|
||||
nsp.Alloc = &v
|
||||
case "mem":
|
||||
nsp.Mem = &v
|
||||
case "memory":
|
||||
nsp.Memory = &v
|
||||
}
|
||||
}
|
||||
|
||||
return nsp
|
||||
}
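To make the pointer semantics above concrete, a sketch of consuming NetSockstat; the procfs.FS value is assumed to be obtained as in the meminfo example:

```
package sockstatexample

import (
	"fmt"

	"github.com/prometheus/procfs"
)

func printSockstat(fs procfs.FS) error {
	stat, err := fs.NetSockstat()
	if err != nil {
		return err
	}
	if stat.Used != nil {
		// Only the IPv4 file carries the top-level "sockets: used" line.
		fmt.Println("sockets used:", *stat.Used)
	}
	for _, p := range stat.Protocols {
		fmt.Printf("%s inuse=%d", p.Protocol, p.InUse)
		if p.Mem != nil {
			fmt.Printf(" mem=%d", *p.Mem)
		}
		fmt.Println()
	}
	return nil
}
```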
|
91
vendor/github.com/prometheus/procfs/net_softnet.go
generated
vendored
Normal file
@ -0,0 +1,91 @@
|
||||
// Copyright 2019 The Prometheus Authors
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package procfs
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"io/ioutil"
|
||||
"strconv"
|
||||
"strings"
|
||||
)
|
||||
|
||||
// For the proc file format details,
|
||||
// see https://elixir.bootlin.com/linux/v4.17/source/net/core/net-procfs.c#L162
|
||||
// and https://elixir.bootlin.com/linux/v4.17/source/include/linux/netdevice.h#L2810.
|
||||
|
||||
// SoftnetEntry contains a single row of data from /proc/net/softnet_stat
|
||||
type SoftnetEntry struct {
|
||||
// Number of processed packets
|
||||
Processed uint
|
||||
// Number of dropped packets
|
||||
Dropped uint
|
||||
// Number of times processing packets ran out of quota
|
||||
TimeSqueezed uint
|
||||
}
|
||||
|
||||
// GatherSoftnetStats reads /proc/net/softnet_stat, parses the relevant columns,
// and then returns a slice of SoftnetEntry structs.
|
||||
func (fs FS) GatherSoftnetStats() ([]SoftnetEntry, error) {
|
||||
data, err := ioutil.ReadFile(fs.proc.Path("net/softnet_stat"))
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("error reading softnet %s: %s", fs.proc.Path("net/softnet_stat"), err)
|
||||
}
|
||||
|
||||
return parseSoftnetEntries(data)
|
||||
}
|
||||
|
||||
func parseSoftnetEntries(data []byte) ([]SoftnetEntry, error) {
|
||||
lines := strings.Split(string(data), "\n")
|
||||
entries := make([]SoftnetEntry, 0)
|
||||
var err error
|
||||
const (
|
||||
expectedColumns = 11
|
||||
)
|
||||
for _, line := range lines {
|
||||
columns := strings.Fields(line)
|
||||
width := len(columns)
|
||||
if width == 0 {
|
||||
continue
|
||||
}
|
||||
if width != expectedColumns {
|
||||
return []SoftnetEntry{}, fmt.Errorf("%d columns were detected, but %d were expected", width, expectedColumns)
|
||||
}
|
||||
var entry SoftnetEntry
|
||||
if entry, err = parseSoftnetEntry(columns); err != nil {
|
||||
return []SoftnetEntry{}, err
|
||||
}
|
||||
entries = append(entries, entry)
|
||||
}
|
||||
|
||||
return entries, nil
|
||||
}
|
||||
|
||||
func parseSoftnetEntry(columns []string) (SoftnetEntry, error) {
|
||||
var err error
|
||||
var processed, dropped, timeSqueezed uint64
|
||||
if processed, err = strconv.ParseUint(columns[0], 16, 32); err != nil {
|
||||
return SoftnetEntry{}, fmt.Errorf("Unable to parse column 0: %s", err)
|
||||
}
|
||||
if dropped, err = strconv.ParseUint(columns[1], 16, 32); err != nil {
|
||||
return SoftnetEntry{}, fmt.Errorf("Unable to parse column 1: %s", err)
|
||||
}
|
||||
if timeSqueezed, err = strconv.ParseUint(columns[2], 16, 32); err != nil {
|
||||
return SoftnetEntry{}, fmt.Errorf("Unable to parse column 2: %s", err)
|
||||
}
|
||||
return SoftnetEntry{
|
||||
Processed: uint(processed),
|
||||
Dropped: uint(dropped),
|
||||
TimeSqueezed: uint(timeSqueezed),
|
||||
}, nil
|
||||
}
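Similarly, a brief sketch of reading the per-CPU softnet counters via GatherSoftnetStats; fs is again an assumed procfs.FS value:

```
package softnetexample

import (
	"fmt"

	"github.com/prometheus/procfs"
)

func printSoftnet(fs procfs.FS) error {
	entries, err := fs.GatherSoftnetStats()
	if err != nil {
		return err
	}
	for i, e := range entries {
		// Each entry corresponds to one row (one CPU line) of /proc/net/softnet_stat.
		fmt.Printf("row %d: processed=%d dropped=%d time_squeeze=%d\n",
			i, e.Processed, e.Dropped, e.TimeSqueezed)
	}
	return nil
}
```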
|
4
vendor/github.com/prometheus/procfs/net_unix.go
generated
vendored
@ -207,10 +207,6 @@ func (u NetUnix) parseUsers(hexStr string) (uint64, error) {
|
||||
return strconv.ParseUint(hexStr, 16, 32)
|
||||
}
|
||||
|
||||
func (u NetUnix) parseProtocol(hexStr string) (uint64, error) {
|
||||
return strconv.ParseUint(hexStr, 16, 32)
|
||||
}
|
||||
|
||||
func (u NetUnix) parseType(hexStr string) (NetUnixType, error) {
|
||||
typ, err := strconv.ParseUint(hexStr, 16, 16)
|
||||
if err != nil {
|
||||
|
59
vendor/github.com/prometheus/procfs/proc.go
generated
vendored
@ -22,6 +22,7 @@ import (
|
||||
"strings"
|
||||
|
||||
"github.com/prometheus/procfs/internal/fs"
|
||||
"github.com/prometheus/procfs/internal/util"
|
||||
)
|
||||
|
||||
// Proc provides information about a running process.
|
||||
@ -121,13 +122,7 @@ func (fs FS) AllProcs() (Procs, error) {
|
||||
|
||||
// CmdLine returns the command line of a process.
|
||||
func (p Proc) CmdLine() ([]string, error) {
|
||||
f, err := os.Open(p.path("cmdline"))
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
defer f.Close()
|
||||
|
||||
data, err := ioutil.ReadAll(f)
|
||||
data, err := util.ReadFileNoStat(p.path("cmdline"))
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
@ -141,13 +136,7 @@ func (p Proc) CmdLine() ([]string, error) {
|
||||
|
||||
// Comm returns the command name of a process.
|
||||
func (p Proc) Comm() (string, error) {
|
||||
f, err := os.Open(p.path("comm"))
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
defer f.Close()
|
||||
|
||||
data, err := ioutil.ReadAll(f)
|
||||
data, err := util.ReadFileNoStat(p.path("comm"))
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
@ -247,6 +236,18 @@ func (p Proc) MountStats() ([]*Mount, error) {
|
||||
return parseMountStats(f)
|
||||
}
|
||||
|
||||
// MountInfo retrieves mount information for mount points in a
|
||||
// process's namespace.
|
||||
// It supplies information missing in `/proc/self/mounts` and
|
||||
// fixes various other problems with that file too.
|
||||
func (p Proc) MountInfo() ([]*MountInfo, error) {
|
||||
data, err := util.ReadFileNoStat(p.path("mountinfo"))
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return parseMountInfo(data)
|
||||
}
|
||||
|
||||
func (p Proc) fileDescriptors() ([]string, error) {
|
||||
d, err := os.Open(p.path("fd"))
|
||||
if err != nil {
|
||||
@ -265,3 +266,33 @@ func (p Proc) fileDescriptors() ([]string, error) {
|
||||
func (p Proc) path(pa ...string) string {
|
||||
return p.fs.Path(append([]string{strconv.Itoa(p.PID)}, pa...)...)
|
||||
}
|
||||
|
||||
// FileDescriptorsInfo retrieves information about all file descriptors of
|
||||
// the process.
|
||||
func (p Proc) FileDescriptorsInfo() (ProcFDInfos, error) {
|
||||
names, err := p.fileDescriptors()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
var fdinfos ProcFDInfos
|
||||
|
||||
for _, n := range names {
|
||||
fdinfo, err := p.FDInfo(n)
|
||||
if err != nil {
|
||||
continue
|
||||
}
|
||||
fdinfos = append(fdinfos, *fdinfo)
|
||||
}
|
||||
|
||||
return fdinfos, nil
|
||||
}
|
||||
|
||||
// Schedstat returns task scheduling information for the process.
|
||||
func (p Proc) Schedstat() (ProcSchedstat, error) {
|
||||
contents, err := ioutil.ReadFile(p.path("schedstat"))
|
||||
if err != nil {
|
||||
return ProcSchedstat{}, err
|
||||
}
|
||||
return parseProcSchedstat(string(contents))
|
||||
}
|
||||
|
37
vendor/github.com/prometheus/procfs/proc_environ.go
generated
vendored
Normal file
@ -0,0 +1,37 @@
|
||||
// Copyright 2019 The Prometheus Authors
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package procfs
|
||||
|
||||
import (
|
||||
"strings"
|
||||
|
||||
"github.com/prometheus/procfs/internal/util"
|
||||
)
|
||||
|
||||
// Environ reads process environments from /proc/<pid>/environ
|
||||
func (p Proc) Environ() ([]string, error) {
|
||||
environments := make([]string, 0)
|
||||
|
||||
data, err := util.ReadFileNoStat(p.path("environ"))
|
||||
if err != nil {
|
||||
return environments, err
|
||||
}
|
||||
|
||||
environments = strings.Split(string(data), "\000")
|
||||
if len(environments) > 0 {
|
||||
environments = environments[:len(environments)-1]
|
||||
}
|
||||
|
||||
return environments, nil
|
||||
}
|
125
vendor/github.com/prometheus/procfs/proc_fdinfo.go
generated
vendored
Normal file
@ -0,0 +1,125 @@
|
||||
// Copyright 2019 The Prometheus Authors
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package procfs
|
||||
|
||||
import (
|
||||
"bufio"
|
||||
"bytes"
|
||||
"regexp"
|
||||
|
||||
"github.com/prometheus/procfs/internal/util"
|
||||
)
|
||||
|
||||
// Regexp variables
|
||||
var (
|
||||
rPos = regexp.MustCompile(`^pos:\s+(\d+)$`)
|
||||
rFlags = regexp.MustCompile(`^flags:\s+(\d+)$`)
|
||||
rMntID = regexp.MustCompile(`^mnt_id:\s+(\d+)$`)
|
||||
rInotify = regexp.MustCompile(`^inotify`)
|
||||
)
|
||||
|
||||
// ProcFDInfo contains information about a file descriptor.
|
||||
type ProcFDInfo struct {
|
||||
// File descriptor
|
||||
FD string
|
||||
// File offset
|
||||
Pos string
|
||||
// File access mode and status flags
|
||||
Flags string
|
||||
// Mount point ID
|
||||
MntID string
|
||||
// List of inotify lines (structured) in the fdinfo file (kernel 3.8+ only)
|
||||
InotifyInfos []InotifyInfo
|
||||
}
|
||||
|
||||
// FDInfo constructor. On kernels older than 3.8, InotifyInfos will always be empty.
|
||||
func (p Proc) FDInfo(fd string) (*ProcFDInfo, error) {
|
||||
data, err := util.ReadFileNoStat(p.path("fdinfo", fd))
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
var text, pos, flags, mntid string
|
||||
var inotify []InotifyInfo
|
||||
|
||||
scanner := bufio.NewScanner(bytes.NewReader(data))
|
||||
for scanner.Scan() {
|
||||
text = scanner.Text()
|
||||
if rPos.MatchString(text) {
|
||||
pos = rPos.FindStringSubmatch(text)[1]
|
||||
} else if rFlags.MatchString(text) {
|
||||
flags = rFlags.FindStringSubmatch(text)[1]
|
||||
} else if rMntID.MatchString(text) {
|
||||
mntid = rMntID.FindStringSubmatch(text)[1]
|
||||
} else if rInotify.MatchString(text) {
|
||||
newInotify, err := parseInotifyInfo(text)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
inotify = append(inotify, *newInotify)
|
||||
}
|
||||
}
|
||||
|
||||
i := &ProcFDInfo{
|
||||
FD: fd,
|
||||
Pos: pos,
|
||||
Flags: flags,
|
||||
MntID: mntid,
|
||||
InotifyInfos: inotify,
|
||||
}
|
||||
|
||||
return i, nil
|
||||
}
|
||||
|
||||
// InotifyInfo represents a single inotify line in the fdinfo file.
|
||||
type InotifyInfo struct {
|
||||
// Watch descriptor number
|
||||
WD string
|
||||
// Inode number
|
||||
Ino string
|
||||
// Device ID
|
||||
Sdev string
|
||||
// Mask of events being monitored
|
||||
Mask string
|
||||
}
|
||||
|
||||
// InotifyInfo constructor. Only available on kernel 3.8+.
|
||||
func parseInotifyInfo(line string) (*InotifyInfo, error) {
|
||||
r := regexp.MustCompile(`^inotify\s+wd:([0-9a-f]+)\s+ino:([0-9a-f]+)\s+sdev:([0-9a-f]+)\s+mask:([0-9a-f]+)`)
|
||||
m := r.FindStringSubmatch(line)
|
||||
i := &InotifyInfo{
|
||||
WD: m[1],
|
||||
Ino: m[2],
|
||||
Sdev: m[3],
|
||||
Mask: m[4],
|
||||
}
|
||||
return i, nil
|
||||
}
|
||||
|
||||
// ProcFDInfos represents a list of ProcFDInfo structs.
|
||||
type ProcFDInfos []ProcFDInfo
|
||||
|
||||
func (p ProcFDInfos) Len() int { return len(p) }
|
||||
func (p ProcFDInfos) Swap(i, j int) { p[i], p[j] = p[j], p[i] }
|
||||
func (p ProcFDInfos) Less(i, j int) bool { return p[i].FD < p[j].FD }
|
||||
|
||||
// InotifyWatchLen returns the total number of inotify watches
|
||||
func (p ProcFDInfos) InotifyWatchLen() (int, error) {
|
||||
length := 0
|
||||
for _, f := range p {
|
||||
length += len(f.InotifyInfos)
|
||||
}
|
||||
|
||||
return length, nil
|
||||
}
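A sketch of how the fdinfo additions combine with the new Proc.FileDescriptorsInfo helper; procfs.Self is assumed to exist elsewhere in the package (it is defined in proc.go, outside this hunk):

```
package fdinfoexample

import (
	"fmt"

	"github.com/prometheus/procfs"
)

func printInotifyWatches() error {
	p, err := procfs.Self()
	if err != nil {
		return err
	}
	infos, err := p.FileDescriptorsInfo()
	if err != nil {
		return err
	}
	// InotifyWatchLen sums the inotify lines across all fdinfo files.
	n, err := infos.InotifyWatchLen()
	if err != nil {
		return err
	}
	fmt.Println("inotify watches:", n)
	return nil
}
```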
|
12
vendor/github.com/prometheus/procfs/proc_io.go
generated
vendored
@ -15,8 +15,8 @@ package procfs
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"io/ioutil"
|
||||
"os"
|
||||
|
||||
"github.com/prometheus/procfs/internal/util"
|
||||
)
|
||||
|
||||
// ProcIO models the content of /proc/<pid>/io.
|
||||
@ -43,13 +43,7 @@ type ProcIO struct {
|
||||
func (p Proc) IO() (ProcIO, error) {
|
||||
pio := ProcIO{}
|
||||
|
||||
f, err := os.Open(p.path("io"))
|
||||
if err != nil {
|
||||
return pio, err
|
||||
}
|
||||
defer f.Close()
|
||||
|
||||
data, err := ioutil.ReadAll(f)
|
||||
data, err := util.ReadFileNoStat(p.path("io"))
|
||||
if err != nil {
|
||||
return pio, err
|
||||
}
|
||||
|
21
vendor/github.com/prometheus/procfs/proc_psi.go
generated
vendored
@ -24,11 +24,13 @@ package procfs
|
||||
// > full avg10=0.00 avg60=0.13 avg300=0.96 total=8183134
|
||||
|
||||
import (
|
||||
"bufio"
|
||||
"bytes"
|
||||
"fmt"
|
||||
"io"
|
||||
"io/ioutil"
|
||||
"os"
|
||||
"strings"
|
||||
|
||||
"github.com/prometheus/procfs/internal/util"
|
||||
)
|
||||
|
||||
const lineFormat = "avg10=%f avg60=%f avg300=%f total=%d"
|
||||
@ -55,24 +57,21 @@ type PSIStats struct {
|
||||
// resource from /proc/pressure/<resource>. At time of writing this can be
|
||||
// either "cpu", "memory" or "io".
|
||||
func (fs FS) PSIStatsForResource(resource string) (PSIStats, error) {
|
||||
file, err := os.Open(fs.proc.Path(fmt.Sprintf("%s/%s", "pressure", resource)))
|
||||
data, err := util.ReadFileNoStat(fs.proc.Path(fmt.Sprintf("%s/%s", "pressure", resource)))
|
||||
if err != nil {
|
||||
return PSIStats{}, fmt.Errorf("psi_stats: unavailable for %s", resource)
|
||||
}
|
||||
|
||||
defer file.Close()
|
||||
return parsePSIStats(resource, file)
|
||||
return parsePSIStats(resource, bytes.NewReader(data))
|
||||
}
|
||||
|
||||
// parsePSIStats parses the specified file for pressure stall information
|
||||
func parsePSIStats(resource string, file io.Reader) (PSIStats, error) {
|
||||
func parsePSIStats(resource string, r io.Reader) (PSIStats, error) {
|
||||
psiStats := PSIStats{}
|
||||
stats, err := ioutil.ReadAll(file)
|
||||
if err != nil {
|
||||
return psiStats, fmt.Errorf("psi_stats: unable to read data for %s", resource)
|
||||
}
|
||||
|
||||
for _, l := range strings.Split(string(stats), "\n") {
|
||||
scanner := bufio.NewScanner(r)
|
||||
for scanner.Scan() {
|
||||
l := scanner.Text()
|
||||
prefix := strings.Split(l, " ")[0]
|
||||
switch prefix {
|
||||
case "some":
|
||||
|
12
vendor/github.com/prometheus/procfs/proc_stat.go
generated
vendored
@ -16,10 +16,10 @@ package procfs
|
||||
import (
|
||||
"bytes"
|
||||
"fmt"
|
||||
"io/ioutil"
|
||||
"os"
|
||||
|
||||
"github.com/prometheus/procfs/internal/fs"
|
||||
"github.com/prometheus/procfs/internal/util"
|
||||
)
|
||||
|
||||
// Originally, this USER_HZ value was dynamically retrieved via a sysconf call
|
||||
@ -106,20 +106,14 @@ type ProcStat struct {
|
||||
|
||||
// NewStat returns the current status information of the process.
|
||||
//
|
||||
// Deprecated: use NewStat() instead
|
||||
// Deprecated: use p.Stat() instead
|
||||
func (p Proc) NewStat() (ProcStat, error) {
|
||||
return p.Stat()
|
||||
}
|
||||
|
||||
// Stat returns the current status information of the process.
|
||||
func (p Proc) Stat() (ProcStat, error) {
|
||||
f, err := os.Open(p.path("stat"))
|
||||
if err != nil {
|
||||
return ProcStat{}, err
|
||||
}
|
||||
defer f.Close()
|
||||
|
||||
data, err := ioutil.ReadAll(f)
|
||||
data, err := util.ReadFileNoStat(p.path("stat"))
|
||||
if err != nil {
|
||||
return ProcStat{}, err
|
||||
}
|
||||
|
19
vendor/github.com/prometheus/procfs/proc_status.go
generated
vendored
@ -15,13 +15,13 @@ package procfs
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"io/ioutil"
|
||||
"os"
|
||||
"strconv"
|
||||
"strings"
|
||||
|
||||
"github.com/prometheus/procfs/internal/util"
|
||||
)
|
||||
|
||||
// ProcStat provides status information about the process,
|
||||
// ProcStatus provides status information about the process,
|
||||
// read from /proc/[pid]/stat.
|
||||
type ProcStatus struct {
|
||||
// The process ID.
|
||||
@ -29,6 +29,9 @@ type ProcStatus struct {
|
||||
// The process name.
|
||||
Name string
|
||||
|
||||
// Thread group ID.
|
||||
TGID int
|
||||
|
||||
// Peak virtual memory size.
|
||||
VmPeak uint64
|
||||
// Virtual memory size.
|
||||
@ -72,13 +75,7 @@ type ProcStatus struct {
|
||||
|
||||
// NewStatus returns the current status information of the process.
|
||||
func (p Proc) NewStatus() (ProcStatus, error) {
|
||||
f, err := os.Open(p.path("status"))
|
||||
if err != nil {
|
||||
return ProcStatus{}, err
|
||||
}
|
||||
defer f.Close()
|
||||
|
||||
data, err := ioutil.ReadAll(f)
|
||||
data, err := util.ReadFileNoStat(p.path("status"))
|
||||
if err != nil {
|
||||
return ProcStatus{}, err
|
||||
}
|
||||
@ -113,6 +110,8 @@ func (p Proc) NewStatus() (ProcStatus, error) {
|
||||
|
||||
func (s *ProcStatus) fillStatus(k string, vString string, vUint uint64, vUintBytes uint64) {
|
||||
switch k {
|
||||
case "Tgid":
|
||||
s.TGID = int(vUint)
|
||||
case "Name":
|
||||
s.Name = vString
|
||||
case "VmPeak":
|
||||
|
118
vendor/github.com/prometheus/procfs/schedstat.go
generated
vendored
Normal file
@ -0,0 +1,118 @@
|
||||
// Copyright 2019 The Prometheus Authors
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package procfs
|
||||
|
||||
import (
|
||||
"bufio"
|
||||
"errors"
|
||||
"os"
|
||||
"regexp"
|
||||
"strconv"
|
||||
)
|
||||
|
||||
var (
|
||||
cpuLineRE = regexp.MustCompile(`cpu(\d+) (\d+) (\d+) (\d+) (\d+) (\d+) (\d+) (\d+) (\d+) (\d+)`)
|
||||
procLineRE = regexp.MustCompile(`(\d+) (\d+) (\d+)`)
|
||||
)
|
||||
|
||||
// Schedstat contains scheduler statistics from /proc/schedstat
|
||||
//
|
||||
// See
|
||||
// https://www.kernel.org/doc/Documentation/scheduler/sched-stats.txt
|
||||
// for a detailed description of what these numbers mean.
|
||||
//
|
||||
// Note the current kernel documentation claims some of the time units are in
|
||||
// jiffies when they are actually in nanoseconds since 2.6.23 with the
|
||||
// introduction of CFS. A fix to the documentation is pending. See
|
||||
// https://lore.kernel.org/patchwork/project/lkml/list/?series=403473
|
||||
type Schedstat struct {
|
||||
CPUs []*SchedstatCPU
|
||||
}
|
||||
|
||||
// SchedstatCPU contains the values from one "cpu<N>" line
|
||||
type SchedstatCPU struct {
|
||||
CPUNum string
|
||||
|
||||
RunningNanoseconds uint64
|
||||
WaitingNanoseconds uint64
|
||||
RunTimeslices uint64
|
||||
}
|
||||
|
||||
// ProcSchedstat contains the values from /proc/<pid>/schedstat
|
||||
type ProcSchedstat struct {
|
||||
RunningNanoseconds uint64
|
||||
WaitingNanoseconds uint64
|
||||
RunTimeslices uint64
|
||||
}
|
||||
|
||||
// Schedstat reads data from /proc/schedstat
|
||||
func (fs FS) Schedstat() (*Schedstat, error) {
|
||||
file, err := os.Open(fs.proc.Path("schedstat"))
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
defer file.Close()
|
||||
|
||||
stats := &Schedstat{}
|
||||
scanner := bufio.NewScanner(file)
|
||||
|
||||
for scanner.Scan() {
|
||||
match := cpuLineRE.FindStringSubmatch(scanner.Text())
|
||||
if match != nil {
|
||||
cpu := &SchedstatCPU{}
|
||||
cpu.CPUNum = match[1]
|
||||
|
||||
cpu.RunningNanoseconds, err = strconv.ParseUint(match[8], 10, 64)
|
||||
if err != nil {
|
||||
continue
|
||||
}
|
||||
|
||||
cpu.WaitingNanoseconds, err = strconv.ParseUint(match[9], 10, 64)
|
||||
if err != nil {
|
||||
continue
|
||||
}
|
||||
|
||||
cpu.RunTimeslices, err = strconv.ParseUint(match[10], 10, 64)
|
||||
if err != nil {
|
||||
continue
|
||||
}
|
||||
|
||||
stats.CPUs = append(stats.CPUs, cpu)
|
||||
}
|
||||
}
|
||||
|
||||
return stats, nil
|
||||
}
|
||||
|
||||
func parseProcSchedstat(contents string) (stats ProcSchedstat, err error) {
|
||||
match := procLineRE.FindStringSubmatch(contents)
|
||||
|
||||
if match != nil {
|
||||
stats.RunningNanoseconds, err = strconv.ParseUint(match[1], 10, 64)
|
||||
if err != nil {
|
||||
return
|
||||
}
|
||||
|
||||
stats.WaitingNanoseconds, err = strconv.ParseUint(match[2], 10, 64)
|
||||
if err != nil {
|
||||
return
|
||||
}
|
||||
|
||||
stats.RunTimeslices, err = strconv.ParseUint(match[3], 10, 64)
|
||||
return
|
||||
}
|
||||
|
||||
err = errors.New("could not parse schedstat")
|
||||
return
|
||||
}
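A sketch tying the two schedstat entry points together, system-wide FS.Schedstat and per-process Proc.Schedstat; as noted in the comments above, the time values are nanoseconds:

```
package schedstatexample

import (
	"fmt"

	"github.com/prometheus/procfs"
)

func printSchedstat(fs procfs.FS, p procfs.Proc) error {
	sys, err := fs.Schedstat()
	if err != nil {
		return err
	}
	for _, cpu := range sys.CPUs {
		fmt.Printf("cpu%s running=%dns waiting=%dns\n",
			cpu.CPUNum, cpu.RunningNanoseconds, cpu.WaitingNanoseconds)
	}

	ps, err := p.Schedstat()
	if err != nil {
		return err
	}
	fmt.Printf("pid running=%dns timeslices=%d\n",
		ps.RunningNanoseconds, ps.RunTimeslices)
	return nil
}
```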
|
12
vendor/github.com/prometheus/procfs/stat.go
generated
vendored
@ -15,13 +15,14 @@ package procfs
|
||||
|
||||
import (
|
||||
"bufio"
|
||||
"bytes"
|
||||
"fmt"
|
||||
"io"
|
||||
"os"
|
||||
"strconv"
|
||||
"strings"
|
||||
|
||||
"github.com/prometheus/procfs/internal/fs"
|
||||
"github.com/prometheus/procfs/internal/util"
|
||||
)
|
||||
|
||||
// CPUStat shows how much time the cpu spend in various stages.
|
||||
@ -164,16 +165,15 @@ func (fs FS) NewStat() (Stat, error) {
|
||||
// Stat returns information about current cpu/process statistics.
|
||||
// See https://www.kernel.org/doc/Documentation/filesystems/proc.txt
|
||||
func (fs FS) Stat() (Stat, error) {
|
||||
|
||||
f, err := os.Open(fs.proc.Path("stat"))
|
||||
fileName := fs.proc.Path("stat")
|
||||
data, err := util.ReadFileNoStat(fileName)
|
||||
if err != nil {
|
||||
return Stat{}, err
|
||||
}
|
||||
defer f.Close()
|
||||
|
||||
stat := Stat{}
|
||||
|
||||
scanner := bufio.NewScanner(f)
|
||||
scanner := bufio.NewScanner(bytes.NewReader(data))
|
||||
for scanner.Scan() {
|
||||
line := scanner.Text()
|
||||
parts := strings.Fields(scanner.Text())
|
||||
@ -237,7 +237,7 @@ func (fs FS) Stat() (Stat, error) {
|
||||
}
|
||||
|
||||
if err := scanner.Err(); err != nil {
|
||||
return Stat{}, fmt.Errorf("couldn't parse %s: %s", f.Name(), err)
|
||||
return Stat{}, fmt.Errorf("couldn't parse %s: %s", fileName, err)
|
||||
}
|
||||
|
||||
return stat, nil
|
||||
|
210
vendor/github.com/prometheus/procfs/vm.go
generated
vendored
Normal file
@ -0,0 +1,210 @@
|
||||
// Copyright 2019 The Prometheus Authors
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
// +build !windows
|
||||
|
||||
package procfs
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"io/ioutil"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"strings"
|
||||
|
||||
"github.com/prometheus/procfs/internal/util"
|
||||
)
|
||||
|
||||
// The VM interface is described at
// https://www.kernel.org/doc/Documentation/sysctl/vm.txt
// Each setting is exposed as a single file.
// Each file contains one line with a single numerical value, except
// lowmem_reserve_ratio, which holds an array, and numa_zonelist_order
// (deprecated), which is a string.
|
||||
type VM struct {
|
||||
AdminReserveKbytes *int64 // /proc/sys/vm/admin_reserve_kbytes
|
||||
BlockDump *int64 // /proc/sys/vm/block_dump
|
||||
CompactUnevictableAllowed *int64 // /proc/sys/vm/compact_unevictable_allowed
|
||||
DirtyBackgroundBytes *int64 // /proc/sys/vm/dirty_background_bytes
|
||||
DirtyBackgroundRatio *int64 // /proc/sys/vm/dirty_background_ratio
|
||||
DirtyBytes *int64 // /proc/sys/vm/dirty_bytes
|
||||
DirtyExpireCentisecs *int64 // /proc/sys/vm/dirty_expire_centisecs
|
||||
DirtyRatio *int64 // /proc/sys/vm/dirty_ratio
|
||||
DirtytimeExpireSeconds *int64 // /proc/sys/vm/dirtytime_expire_seconds
|
||||
DirtyWritebackCentisecs *int64 // /proc/sys/vm/dirty_writeback_centisecs
|
||||
DropCaches *int64 // /proc/sys/vm/drop_caches
|
||||
ExtfragThreshold *int64 // /proc/sys/vm/extfrag_threshold
|
||||
HugetlbShmGroup *int64 // /proc/sys/vm/hugetlb_shm_group
|
||||
LaptopMode *int64 // /proc/sys/vm/laptop_mode
|
||||
LegacyVaLayout *int64 // /proc/sys/vm/legacy_va_layout
|
||||
LowmemReserveRatio []*int64 // /proc/sys/vm/lowmem_reserve_ratio
|
||||
MaxMapCount *int64 // /proc/sys/vm/max_map_count
|
||||
MemoryFailureEarlyKill *int64 // /proc/sys/vm/memory_failure_early_kill
|
||||
MemoryFailureRecovery *int64 // /proc/sys/vm/memory_failure_recovery
|
||||
MinFreeKbytes *int64 // /proc/sys/vm/min_free_kbytes
|
||||
MinSlabRatio *int64 // /proc/sys/vm/min_slab_ratio
|
||||
MinUnmappedRatio *int64 // /proc/sys/vm/min_unmapped_ratio
|
||||
MmapMinAddr *int64 // /proc/sys/vm/mmap_min_addr
|
||||
NrHugepages *int64 // /proc/sys/vm/nr_hugepages
|
||||
NrHugepagesMempolicy *int64 // /proc/sys/vm/nr_hugepages_mempolicy
|
||||
NrOvercommitHugepages *int64 // /proc/sys/vm/nr_overcommit_hugepages
|
||||
NumaStat *int64 // /proc/sys/vm/numa_stat
|
||||
NumaZonelistOrder string // /proc/sys/vm/numa_zonelist_order
|
||||
OomDumpTasks *int64 // /proc/sys/vm/oom_dump_tasks
|
||||
OomKillAllocatingTask *int64 // /proc/sys/vm/oom_kill_allocating_task
|
||||
OvercommitKbytes *int64 // /proc/sys/vm/overcommit_kbytes
|
||||
OvercommitMemory *int64 // /proc/sys/vm/overcommit_memory
|
||||
OvercommitRatio *int64 // /proc/sys/vm/overcommit_ratio
|
||||
PageCluster *int64 // /proc/sys/vm/page-cluster
|
||||
PanicOnOom *int64 // /proc/sys/vm/panic_on_oom
|
||||
PercpuPagelistFraction *int64 // /proc/sys/vm/percpu_pagelist_fraction
|
||||
StatInterval *int64 // /proc/sys/vm/stat_interval
|
||||
Swappiness *int64 // /proc/sys/vm/swappiness
|
||||
UserReserveKbytes *int64 // /proc/sys/vm/user_reserve_kbytes
|
||||
VfsCachePressure *int64 // /proc/sys/vm/vfs_cache_pressure
|
||||
WatermarkBoostFactor *int64 // /proc/sys/vm/watermark_boost_factor
|
||||
WatermarkScaleFactor *int64 // /proc/sys/vm/watermark_scale_factor
|
||||
ZoneReclaimMode *int64 // /proc/sys/vm/zone_reclaim_mode
|
||||
}
|
||||
|
||||
// VM reads the VM statistics from the specified `proc` filesystem.
|
||||
func (fs FS) VM() (*VM, error) {
|
||||
path := fs.proc.Path("sys/vm")
|
||||
file, err := os.Stat(path)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if !file.Mode().IsDir() {
|
||||
return nil, fmt.Errorf("%s is not a directory", path)
|
||||
}
|
||||
|
||||
files, err := ioutil.ReadDir(path)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
var vm VM
|
||||
for _, f := range files {
|
||||
if f.IsDir() {
|
||||
continue
|
||||
}
|
||||
|
||||
name := filepath.Join(path, f.Name())
|
||||
// Ignore errors on read; some files in /proc/sys/vm are write-only.
|
||||
value, err := util.SysReadFile(name)
|
||||
if err != nil {
|
||||
continue
|
||||
}
|
||||
vp := util.NewValueParser(value)
|
||||
|
||||
switch f.Name() {
|
||||
case "admin_reserve_kbytes":
|
||||
vm.AdminReserveKbytes = vp.PInt64()
|
||||
case "block_dump":
|
||||
vm.BlockDump = vp.PInt64()
|
||||
case "compact_unevictable_allowed":
|
||||
vm.CompactUnevictableAllowed = vp.PInt64()
|
||||
case "dirty_background_bytes":
|
||||
vm.DirtyBackgroundBytes = vp.PInt64()
|
||||
case "dirty_background_ratio":
|
||||
vm.DirtyBackgroundRatio = vp.PInt64()
|
||||
case "dirty_bytes":
|
||||
vm.DirtyBytes = vp.PInt64()
|
||||
case "dirty_expire_centisecs":
|
||||
vm.DirtyExpireCentisecs = vp.PInt64()
|
||||
case "dirty_ratio":
|
||||
vm.DirtyRatio = vp.PInt64()
|
||||
case "dirtytime_expire_seconds":
|
||||
vm.DirtytimeExpireSeconds = vp.PInt64()
|
||||
case "dirty_writeback_centisecs":
|
||||
vm.DirtyWritebackCentisecs = vp.PInt64()
|
||||
case "drop_caches":
|
||||
vm.DropCaches = vp.PInt64()
|
||||
case "extfrag_threshold":
|
||||
vm.ExtfragThreshold = vp.PInt64()
|
||||
case "hugetlb_shm_group":
|
||||
vm.HugetlbShmGroup = vp.PInt64()
|
||||
case "laptop_mode":
|
||||
vm.LaptopMode = vp.PInt64()
|
||||
case "legacy_va_layout":
|
||||
vm.LegacyVaLayout = vp.PInt64()
|
||||
case "lowmem_reserve_ratio":
|
||||
stringSlice := strings.Fields(value)
|
||||
pint64Slice := make([]*int64, 0, len(stringSlice))
|
||||
for _, value := range stringSlice {
|
||||
vp := util.NewValueParser(value)
|
||||
pint64Slice = append(pint64Slice, vp.PInt64())
|
||||
}
|
||||
vm.LowmemReserveRatio = pint64Slice
|
||||
case "max_map_count":
|
||||
vm.MaxMapCount = vp.PInt64()
|
||||
case "memory_failure_early_kill":
|
||||
vm.MemoryFailureEarlyKill = vp.PInt64()
|
||||
case "memory_failure_recovery":
|
||||
vm.MemoryFailureRecovery = vp.PInt64()
|
||||
case "min_free_kbytes":
|
||||
vm.MinFreeKbytes = vp.PInt64()
|
||||
case "min_slab_ratio":
|
||||
vm.MinSlabRatio = vp.PInt64()
|
||||
case "min_unmapped_ratio":
|
||||
vm.MinUnmappedRatio = vp.PInt64()
|
||||
case "mmap_min_addr":
|
||||
vm.MmapMinAddr = vp.PInt64()
|
||||
case "nr_hugepages":
|
||||
vm.NrHugepages = vp.PInt64()
|
||||
case "nr_hugepages_mempolicy":
|
||||
vm.NrHugepagesMempolicy = vp.PInt64()
|
||||
case "nr_overcommit_hugepages":
|
||||
vm.NrOvercommitHugepages = vp.PInt64()
|
||||
case "numa_stat":
|
||||
vm.NumaStat = vp.PInt64()
|
||||
case "numa_zonelist_order":
|
||||
vm.NumaZonelistOrder = value
|
||||
case "oom_dump_tasks":
|
||||
vm.OomDumpTasks = vp.PInt64()
|
||||
case "oom_kill_allocating_task":
|
||||
vm.OomKillAllocatingTask = vp.PInt64()
|
||||
case "overcommit_kbytes":
|
||||
vm.OvercommitKbytes = vp.PInt64()
|
||||
case "overcommit_memory":
|
||||
vm.OvercommitMemory = vp.PInt64()
|
||||
case "overcommit_ratio":
|
||||
vm.OvercommitRatio = vp.PInt64()
|
||||
case "page-cluster":
|
||||
vm.PageCluster = vp.PInt64()
|
||||
case "panic_on_oom":
|
||||
vm.PanicOnOom = vp.PInt64()
|
||||
case "percpu_pagelist_fraction":
|
||||
vm.PercpuPagelistFraction = vp.PInt64()
|
||||
case "stat_interval":
|
||||
vm.StatInterval = vp.PInt64()
|
||||
case "swappiness":
|
||||
vm.Swappiness = vp.PInt64()
|
||||
case "user_reserve_kbytes":
|
||||
vm.UserReserveKbytes = vp.PInt64()
|
||||
case "vfs_cache_pressure":
|
||||
vm.VfsCachePressure = vp.PInt64()
|
||||
case "watermark_boost_factor":
|
||||
vm.WatermarkBoostFactor = vp.PInt64()
|
||||
case "watermark_scale_factor":
|
||||
vm.WatermarkScaleFactor = vp.PInt64()
|
||||
case "zone_reclaim_mode":
|
||||
vm.ZoneReclaimMode = vp.PInt64()
|
||||
}
|
||||
if err := vp.Err(); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
}
|
||||
|
||||
return &vm, nil
|
||||
}
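Because most VM fields are *int64 (a sysctl file may be absent or write-only), callers need nil checks; a minimal sketch, again assuming a procfs.FS obtained as earlier:

```
package vmexample

import (
	"fmt"

	"github.com/prometheus/procfs"
)

func printVMSettings(fs procfs.FS) error {
	vm, err := fs.VM()
	if err != nil {
		return err
	}
	if vm.Swappiness != nil {
		fmt.Println("vm.swappiness =", *vm.Swappiness)
	}
	// lowmem_reserve_ratio is the one array-valued setting.
	for i, v := range vm.LowmemReserveRatio {
		if v != nil {
			fmt.Printf("lowmem_reserve_ratio[%d] = %d\n", i, *v)
		}
	}
	return nil
}
```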
|
196
vendor/github.com/prometheus/procfs/zoneinfo.go
generated
vendored
Normal file
@ -0,0 +1,196 @@
|
||||
// Copyright 2019 The Prometheus Authors
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
// +build !windows
|
||||
|
||||
package procfs
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"fmt"
|
||||
"io/ioutil"
|
||||
"regexp"
|
||||
"strings"
|
||||
|
||||
"github.com/prometheus/procfs/internal/util"
|
||||
)
|
||||
|
||||
// Zoneinfo holds info parsed from /proc/zoneinfo.
|
||||
type Zoneinfo struct {
|
||||
Node string
|
||||
Zone string
|
||||
NrFreePages *int64
|
||||
Min *int64
|
||||
Low *int64
|
||||
High *int64
|
||||
Scanned *int64
|
||||
Spanned *int64
|
||||
Present *int64
|
||||
Managed *int64
|
||||
NrActiveAnon *int64
|
||||
NrInactiveAnon *int64
|
||||
NrIsolatedAnon *int64
|
||||
NrAnonPages *int64
|
||||
NrAnonTransparentHugepages *int64
|
||||
NrActiveFile *int64
|
||||
NrInactiveFile *int64
|
||||
NrIsolatedFile *int64
|
||||
NrFilePages *int64
|
||||
NrSlabReclaimable *int64
|
||||
NrSlabUnreclaimable *int64
|
||||
NrMlockStack *int64
|
||||
NrKernelStack *int64
|
||||
NrMapped *int64
|
||||
NrDirty *int64
|
||||
NrWriteback *int64
|
||||
NrUnevictable *int64
|
||||
NrShmem *int64
|
||||
NrDirtied *int64
|
||||
NrWritten *int64
|
||||
NumaHit *int64
|
||||
NumaMiss *int64
|
||||
NumaForeign *int64
|
||||
NumaInterleave *int64
|
||||
NumaLocal *int64
|
||||
NumaOther *int64
|
||||
Protection []*int64
|
||||
}
|
||||
|
||||
var nodeZoneRE = regexp.MustCompile(`(\d+), zone\s+(\w+)`)
|
||||
|
||||
// Zoneinfo parses a zoneinfo file (/proc/zoneinfo) and returns a slice of
|
||||
// structs containing the relevant info. More information available here:
|
||||
// https://www.kernel.org/doc/Documentation/sysctl/vm.txt
|
||||
func (fs FS) Zoneinfo() ([]Zoneinfo, error) {
|
||||
data, err := ioutil.ReadFile(fs.proc.Path("zoneinfo"))
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("error reading zoneinfo %s: %s", fs.proc.Path("zoneinfo"), err)
|
||||
}
|
||||
zoneinfo, err := parseZoneinfo(data)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("error parsing zoneinfo %s: %s", fs.proc.Path("zoneinfo"), err)
|
||||
}
|
||||
return zoneinfo, nil
|
||||
}
|
||||
|
||||
func parseZoneinfo(zoneinfoData []byte) ([]Zoneinfo, error) {
|
||||
|
||||
zoneinfo := []Zoneinfo{}
|
||||
|
||||
zoneinfoBlocks := bytes.Split(zoneinfoData, []byte("\nNode"))
|
||||
for _, block := range zoneinfoBlocks {
|
||||
var zoneinfoElement Zoneinfo
|
||||
lines := strings.Split(string(block), "\n")
|
||||
for _, line := range lines {
|
||||
|
||||
if nodeZone := nodeZoneRE.FindStringSubmatch(line); nodeZone != nil {
|
||||
zoneinfoElement.Node = nodeZone[1]
|
||||
zoneinfoElement.Zone = nodeZone[2]
|
||||
continue
|
||||
}
|
||||
if strings.HasPrefix(strings.TrimSpace(line), "per-node stats") {
|
||||
zoneinfoElement.Zone = ""
|
||||
continue
|
||||
}
|
||||
parts := strings.Fields(strings.TrimSpace(line))
|
||||
if len(parts) < 2 {
|
||||
continue
|
||||
}
|
||||
vp := util.NewValueParser(parts[1])
|
||||
switch parts[0] {
|
||||
case "nr_free_pages":
|
||||
zoneinfoElement.NrFreePages = vp.PInt64()
|
||||
case "min":
|
||||
zoneinfoElement.Min = vp.PInt64()
|
||||
case "low":
|
||||
zoneinfoElement.Low = vp.PInt64()
|
||||
case "high":
|
||||
zoneinfoElement.High = vp.PInt64()
|
||||
case "scanned":
|
||||
zoneinfoElement.Scanned = vp.PInt64()
|
||||
case "spanned":
|
||||
zoneinfoElement.Spanned = vp.PInt64()
|
||||
case "present":
|
||||
zoneinfoElement.Present = vp.PInt64()
|
||||
case "managed":
|
||||
zoneinfoElement.Managed = vp.PInt64()
|
||||
case "nr_active_anon":
|
||||
zoneinfoElement.NrActiveAnon = vp.PInt64()
|
||||
case "nr_inactive_anon":
|
||||
zoneinfoElement.NrInactiveAnon = vp.PInt64()
|
||||
case "nr_isolated_anon":
|
||||
zoneinfoElement.NrIsolatedAnon = vp.PInt64()
|
||||
case "nr_anon_pages":
|
||||
zoneinfoElement.NrAnonPages = vp.PInt64()
|
||||
case "nr_anon_transparent_hugepages":
|
||||
zoneinfoElement.NrAnonTransparentHugepages = vp.PInt64()
|
||||
case "nr_active_file":
|
||||
zoneinfoElement.NrActiveFile = vp.PInt64()
|
||||
case "nr_inactive_file":
|
||||
zoneinfoElement.NrInactiveFile = vp.PInt64()
|
||||
case "nr_isolated_file":
|
||||
zoneinfoElement.NrIsolatedFile = vp.PInt64()
|
||||
case "nr_file_pages":
|
||||
zoneinfoElement.NrFilePages = vp.PInt64()
|
||||
case "nr_slab_reclaimable":
|
||||
zoneinfoElement.NrSlabReclaimable = vp.PInt64()
|
||||
case "nr_slab_unreclaimable":
|
||||
zoneinfoElement.NrSlabUnreclaimable = vp.PInt64()
|
||||
case "nr_mlock_stack":
|
||||
zoneinfoElement.NrMlockStack = vp.PInt64()
|
||||
case "nr_kernel_stack":
|
||||
zoneinfoElement.NrKernelStack = vp.PInt64()
|
||||
case "nr_mapped":
|
||||
zoneinfoElement.NrMapped = vp.PInt64()
|
||||
case "nr_dirty":
|
||||
zoneinfoElement.NrDirty = vp.PInt64()
|
||||
case "nr_writeback":
|
||||
zoneinfoElement.NrWriteback = vp.PInt64()
|
||||
case "nr_unevictable":
|
||||
zoneinfoElement.NrUnevictable = vp.PInt64()
|
||||
case "nr_shmem":
|
||||
zoneinfoElement.NrShmem = vp.PInt64()
|
||||
case "nr_dirtied":
|
||||
zoneinfoElement.NrDirtied = vp.PInt64()
|
||||
case "nr_written":
|
||||
zoneinfoElement.NrWritten = vp.PInt64()
|
||||
case "numa_hit":
|
||||
zoneinfoElement.NumaHit = vp.PInt64()
|
||||
case "numa_miss":
|
||||
zoneinfoElement.NumaMiss = vp.PInt64()
|
||||
case "numa_foreign":
|
||||
zoneinfoElement.NumaForeign = vp.PInt64()
|
||||
case "numa_interleave":
|
||||
zoneinfoElement.NumaInterleave = vp.PInt64()
|
||||
case "numa_local":
|
||||
zoneinfoElement.NumaLocal = vp.PInt64()
|
||||
case "numa_other":
|
||||
zoneinfoElement.NumaOther = vp.PInt64()
|
||||
case "protection:":
|
||||
protectionParts := strings.Split(line, ":")
|
||||
protectionValues := strings.Replace(protectionParts[1], "(", "", 1)
|
||||
protectionValues = strings.Replace(protectionValues, ")", "", 1)
|
||||
protectionValues = strings.TrimSpace(protectionValues)
|
||||
protectionStringMap := strings.Split(protectionValues, ", ")
|
||||
val, err := util.ParsePInt64s(protectionStringMap)
|
||||
if err == nil {
|
||||
zoneinfoElement.Protection = val
|
||||
}
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
zoneinfo = append(zoneinfo, zoneinfoElement)
|
||||
}
|
||||
return zoneinfo, nil
|
||||
}
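And analogously for the zoneinfo parser, whose per-zone counters are also pointer-valued:

```
package zoneinfoexample

import (
	"fmt"

	"github.com/prometheus/procfs"
)

func printFreePages(fs procfs.FS) error {
	zones, err := fs.Zoneinfo()
	if err != nil {
		return err
	}
	for _, z := range zones {
		if z.NrFreePages != nil {
			fmt.Printf("node %s zone %s: nr_free_pages=%d\n", z.Node, z.Zone, *z.NrFreePages)
		}
	}
	return nil
}
```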
|