2014-11-19

Getting Started with SaltStack

2014-11-16

冬岛

Table of Contents

  • Installation
  • Config
  • Key Acceptance
  • Target
  • Command Line
  • Top file
  • States
  • Grains
  • Pillar
  • Node Group
  • Jobs
  • Schedule jobs
  • References

Installation

Config

master

minion

Configuring the Salt Minion

  • In /etc/salt/minion, point the minion at the master: master: 10.0.0.1

Key Acceptance

[root@master ~]# salt-key -L
Unaccepted Keys:
alpha
bravo
charlie
delta
Accepted Keys:
[root@master ~]# salt-key -A
[root@master ~]# salt-key -L
Unaccepted Keys:
Accepted Keys:
alpha
bravo
charlie
delta

salt-key

Target

Targeting means selecting a set of minions by some criterion, so that on those minions you can:

  • run commands
  • apply states
  • query minion information
  • define groups

Targeting Minions

Example:

# run a command on minion web1
salt web1 apache.signal restart

# or assign states to web1 in the top file
base:
  'web1':
    - webserver

Targets can be selected in any of the following ways:

minion id

A minion's id defaults to its hostname, but it can also be set explicitly with the id option in the minion configuration file.

Examples:

salt '*' test.ping
salt '*.example.net' test.ping
salt '*.example.*' test.ping
salt 'web?.example.net' test.ping
salt 'web[1-5]' test.ping
salt 'web[1,3]' test.ping
salt 'web-[x-z]' test.ping

# regular expressions
salt -E 'web1-(prod|devel)' test.ping

# list
salt -L 'web1,web2,web3' test.ping
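The patterns above are shell-style globs applied to minion ids. As a rough illustration (not Salt code; a sketch using Python's fnmatch, which follows the same glob rules):

```python
from fnmatch import fnmatch

# hypothetical minion ids, for illustration only
minions = ["web1.example.net", "web2.example.net", "db1.example.net", "web-x"]

def match_glob(pattern, ids):
    # shell-style glob matching, as used for minion id targeting
    return [m for m in ids if fnmatch(m, pattern)]

print(match_glob("web?.example.net", minions))  # web1 and web2
print(match_glob("web-[x-z]", minions))         # web-x
```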

Grains

Grains are, simply put, a minion's system information. The Salt master can make different decisions based on each minion's environment (for example: operating system, uptime, system load, and so on).

Examples:

salt -G 'os:CentOS' test.ping
salt -G 'cpuarch:x86_64' grains.item num_cpus
salt -G 'ec2_tags:environment:*production*' test.ping
# list all available grains
salt '*' grains.ls
salt '*' grains.items

Grains can also be set in the minion configuration file; see GRAINS IN THE MINION CONFIG.

Subnet/IP Address

salt -S 192.168.40.20 test.ping
salt -S 10.0.0.0/24 test.ping
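-S matches by IP address or CIDR block. The underlying check is plain subnet membership; a sketch with Python's ipaddress module (an illustration, not Salt internals):

```python
import ipaddress

def in_subnet(ip, cidr):
    # True if ip falls inside the CIDR block
    return ipaddress.ip_address(ip) in ipaddress.ip_network(cidr)

print(in_subnet("10.0.0.17", "10.0.0.0/24"))        # True
print(in_subnet("192.168.41.5", "192.168.40.0/24"))  # False
```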

Compound matchers

Examples:

# -C accepts compound matchers combined with and/or/not
salt -C 'webserv* and G@os:Debian or E@web-dc1-srv.*' test.ping
salt -C '* and not G@kernel:Darwin' test.ping
salt -C '* and not web-dc1-srv' test.ping
salt -C '( ms-1 or G@id:ms-3 ) and G@id:ms-3' test.ping
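Each letter prefix selects a matcher type (G@ grains, E@ regex, L@ list; bare words are globs), and the results combine with ordinary boolean logic. A toy illustration of how the first compound expression above would evaluate for one hypothetical minion:

```python
import re
from fnmatch import fnmatch

# hypothetical minion, for illustration only
minion_id = "webserver-dc1"
grains = {"os": "Debian"}

# 'webserv* and G@os:Debian or E@web-dc1-srv.*'
glob_hit = fnmatch(minion_id, "webserv*")                      # glob on minion id
grain_hit = grains.get("os") == "Debian"                       # G@ grain matcher
regex_hit = re.match("web-dc1-srv.*", minion_id) is not None   # E@ regex matcher

matched = (glob_hit and grain_hit) or regex_hit
print(matched)  # True
```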

Node groups

salt -N group1 test.ping

Batch Size

salt '*' -b 10 test.ping
salt -G 'os:RedHat' --batch-size 25% apache.signal restart
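Batch mode runs the command on only a fixed number of minions at a time; a percentage is resolved against the number of targeted minions. A sketch of that arithmetic (the rounding here is an assumption, not Salt's exact code):

```python
import math

def batch_count(total_minions, batch):
    # '25%' style values are resolved against the targeted minion count;
    # plain integers are used as-is (rounding up is an assumption here)
    if isinstance(batch, str) and batch.endswith("%"):
        return max(1, math.ceil(total_minions * int(batch[:-1]) / 100))
    return int(batch)

print(batch_count(40, "25%"))  # 10 minions per batch
print(batch_count(40, 10))     # 10 minions per batch
```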

Command Line

The command line is used to run functions of various forms on the targeted minions.

USING THE SALT COMMAND

# The default target type is a bash glob:
salt '*foo.com' sys.doc
# Salt can also define the target minions with regular expressions:
salt -E '.*' cmd.run 'ls -l | grep foo'
# Or to explicitly list hosts, salt can take a list:
salt -L foo.bar.baz,quo.qux cmd.run 'ps aux | grep foo'

Calling the Function

# Functions may also accept arguments, space-delimited:
salt '*' cmd.exec_code python 'import sys; print sys.version'

# Optional, keyword arguments are also supported:
salt '*' pip.install salt timeout=5 upgrade=True

# Arguments are formatted as YAML:
salt '*' cmd.run 'echo "Hello: $FIRST_NAME"' env='{FIRST_NAME: "Joe"}'

# Finding available minion functions
salt '*' sys.doc

Compound Command Execution

# Compound Command Execution
salt '*' cmd.run,test.ping,test.echo 'cat /proc/cpuinfo',,foo

# You may change the arguments separator using the --args-separator option:
salt --args-separator=:: '*' some.fun,test.echo params with , comma :: foo

More

The Top File

The top file maps which SLS modules are applied to which minions via the state system. A minion's environment can be configured in /etc/salt/minion.

Single top file configuration

# /etc/salt/master 配置举例
file_roots:
  base:
    - /srv/salt/base
  dev:
    - /srv/salt/dev
  qa:
    - /srv/salt/qa
  prod:
    - /srv/salt/prod
# top file
base:
  '*':
    - core
    - edit
dev:
  'webserver*dev*':
    - webserver
  'db*dev*':
    - db
qa:
  'webserver*qa*':
    - webserver
  'db*qa*':
    - db
prod:
  'webserver*prod*':
    - webserver
  'db*prod*':
    - db

Multiple top file configuration

#  /srv/salt/base/top.sls
base:
  '*':
    - common
dev:
  'webserver*dev*':
    - webserver
  'db*dev*':
    - db

# /srv/salt/dev/top.sls:
dev:
  '10.10.100.0/24':
    - match: ipcidr
    - deployments.dev.site1
  '10.10.101.0/24':
    - match: ipcidr
    - deployments.dev.site2

With this configuration, the /srv/salt/dev/top.sls top file is ignored: when top files are resolved, base/top.sls is read first, and its dev section overrides the contents of /srv/salt/dev/top.sls.
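That resolution order can be pictured as a simple merge in which the base top file's per-environment sections win (a toy model of the behavior, not Salt's implementation):

```python
# sections parsed from each top file, simplified to plain dicts
base_top = {"dev": {"webserver*dev*": ["webserver"], "db*dev*": ["db"]}}
dev_top = {"dev": {"10.10.100.0/24": ["deployments.dev.site1"],
                   "10.10.101.0/24": ["deployments.dev.site2"]}}

merged = {}
for top in (dev_top, base_top):  # base/top.sls is applied last, so it wins
    for env, section in top.items():
        merged[env] = section    # the whole per-environment section is replaced

print(merged["dev"])  # the dev section from base/top.sls
```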

States

State example

apache:
  pkg:
    - installed
  service:
    - running
    - require:
      - pkg: apache

Adding file and user configuration

apache:
  pkg:
    - installed
  service:
    - running
    - watch:
      - pkg: apache
      - file: /etc/httpd/conf/httpd.conf
      - user: apache
  user.present:
    - uid: 87
    - gid: 87
    - home: /var/www/html
    - shell: /bin/nologin
    - require:
      - group: apache
  group.present:
    - gid: 87
    - require:
      - pkg: apache

/etc/httpd/conf/httpd.conf:
  file.managed:
    - source: salt://apache/httpd.conf
    - user: root
    - group: root
    - mode: 644

Using multiple SLS files

  • apache/init.sls
  • apache/httpd.conf

# ssh/init.sls:
openssh-client:
  pkg.installed
/etc/ssh/ssh_config:
  file.managed:
    - user: root
    - group: root
    - mode: 644
    - source: salt://ssh/ssh_config
    - require:
      - pkg: openssh-client

# ssh/server.sls:
include:
  - ssh
openssh-server:
  pkg.installed
sshd:
  service.running:
    - require:
      - pkg: openssh-client
      - pkg: openssh-server
      - file: /etc/ssh/banner
      - file: /etc/ssh/sshd_config
/etc/ssh/sshd_config:
  file.managed:
    - user: root
    - group: root
    - mode: 644
    - source: salt://ssh/sshd_config
    - require:
      - pkg: openssh-server
/etc/ssh/banner:
  file:
    - managed
    - user: root
    - group: root
    - mode: 644
    - source: salt://ssh/banner
    - require:
      - pkg: openssh-server

The resulting file tree:

  • apache/init.sls
  • apache/httpd.conf
  • ssh/init.sls
  • ssh/server.sls
  • ssh/banner
  • ssh/ssh_config
  • ssh/sshd_config

Extension and inheritance

# ssh/custom-server.sls:
include:
  - ssh.server
extend:
  /etc/ssh/banner:
    file:
      - source: salt://ssh/custom-banner

# python/mod_python.sls:
include:
  - apache
extend:
  apache:
    service:
      - watch:
        - pkg: mod_python
mod_python:
  pkg.installed

Note: in ssh/custom-server.sls the extend overrides the banner's source (salt://ssh/custom-banner replaces the original), while in mod_python.sls the watch entry is appended to apache's existing watch list, because requisite lists under extend are merged rather than overwritten.

Templating

# apache/init.sls:
apache:
  pkg.installed:
    {% if grains['os'] == 'RedHat'%}
    - name: httpd
    {% endif %}
  service.running:
    {% if grains['os'] == 'RedHat'%}
    - name: httpd
    {% endif %}
    - watch:
      - pkg: apache
      - file: /etc/httpd/conf/httpd.conf
      - user: apache
  user.present:
    - uid: 87
    - gid: 87
    - home: /var/www/html
    - shell: /bin/nologin
    - require:
      - group: apache
  group.present:
    - gid: 87
    - require:
      - pkg: apache

/etc/httpd/conf/httpd.conf:
  file.managed:
    - source: salt://apache/httpd.conf
    - user: root
    - group: root
    - mode: 644

Grains

Examples

# Match all CentOS minions:
salt -G 'os:CentOS' test.ping

# Match all minions with 64-bit CPUs
salt -G 'cpuarch:x86_64' grains.item num_cpus

# Available grains can be listed by using the 'grains.ls' module:
salt '*' grains.ls

# Grains data can be listed by using the 'grains.items' module:
salt '*' grains.items

Statically configuring grains on the minion

Grains can also be statically assigned within the minion configuration file

grains:
  roles:
    - webserver
    - memcache
  deployment: datacenter4
  cabinet: 13
  cab_u: 14-15

GRAINS IN /ETC/SALT/GRAINS

roles:
  - webserver
  - memcache
deployment: datacenter4
cabinet: 13
cab_u: 14-15

MATCHING GRAINS IN THE TOP FILE

base:
  'node_type:web':
    - match: grain
    - webserver

  'node_type:postgres':
    - match: grain
    - database

  'node_type:redis':
    - match: grain
    - redis

  'node_type:lb':
    - match: grain
    - lb

The same mapping can be generated dynamically with Jinja:

{% set node_type = salt['grains.get']('node_type', '') %}

{% if node_type %}
  'node_type:{{ node_type }}':
    - match: grain
    - {{ node_type }}
{% endif %}

WRITING GRAINS

#!/usr/bin/env python
def yourfunction():
    # initialize a grains dictionary
    grains = {}
    # some logic that sets grains, for example:
    grains['yourcustomgrain'] = True
    grains['anothergrain'] = 'somevalue'
    return grains

PRECEDENCE

  1. Core grains.
  2. Custom grains in /etc/salt/grains.
  3. Custom grains in /etc/salt/minion.
  4. Custom grain modules in _grains directory, synced to minions.

Each successive evaluation overrides the previous ones, so any grains defined in /etc/salt/grains that have the same name as a core grain will override that core grain.
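This precedence behaves like successive dictionary updates, with later sources overriding earlier ones key by key. A sketch with made-up grain values:

```python
# made-up example values for each grain source, in precedence order
core_grains = {"os": "CentOS", "cpuarch": "x86_64"}
etc_salt_grains = {"roles": ["webserver"], "cabinet": 13}
minion_config_grains = {"deployment": "datacenter4"}
custom_grain_modules = {"os": "MyPatchedOS"}  # hypothetical override

grains = {}
for source in (core_grains, etc_salt_grains,
               minion_config_grains, custom_grain_modules):
    grains.update(source)  # each successive evaluation overrides the previous

print(grains["os"])       # MyPatchedOS: the custom module wins over core
print(grains["cabinet"])  # 13: untouched keys survive from earlier sources
```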

Pillar

  • Storing Static Data in the Pillar

Pillars are tree-like structures of data defined on the Salt Master and passed through to minions. They allow confidential, targeted data to be securely sent only to the relevant minion.

Note Grains and Pillar are sometimes confused, just remember that Grains are data about a minion which is stored or generated from the minion. This is why information like the OS and CPU type are found in Grains. Pillar is information about a minion or many minions stored or generated on the Salt Master.

By default the contents of the master configuration file are loaded into pillar for all minions. This enables the master configuration file to be used for global configuration of minions.

DECLARING THE MASTER PILLAR

The configuration for the pillar_roots in the master config file is identical in behavior and function to file_roots:

pillar_roots:
  base:
    - /srv/pillar

The top file used matches the name of the top file used for States, and has the same structure:

# /srv/pillar/top.sls
base:
  '*':
    - packages

Usage example

Here is an example using the grains matcher to target pillars to minions by their os grain:

# /srv/pillar/packages.sls
{% if grains['os'] == 'RedHat' %}
apache: httpd
git: git
{% elif grains['os'] == 'Debian' %}
apache: apache2
git: git-core
{% endif %}
company: Foo Industries

# the pillar values can then be used in a state file:
apache:
  pkg:
    - installed
    - name: {{ pillar['apache'] }}
git:
  pkg:
    - installed
    - name: {{ pillar['git'] }}

Node Groups

The nodegroups master config file parameter is used to define nodegroups. Here’s an example nodegroup configuration within /etc/salt/master:

nodegroups:
  group1: 'L@foo.domain.com,bar.domain.com,baz.domain.com or bl*.domain.com'
  group2: 'G@os:Debian and foo.domain.com'

To match a nodegroup on the CLI, use the -N command-line option:

salt -N group1 test.ping

Jobs

Since Salt executes jobs on many systems at once, it needs a way to manage the jobs running across those systems.

THE MINION PROC SYSTEM

Each minion tracks its running jobs in the proc directory: /var/cache/salt/proc

FUNCTIONS IN THE SALTUTIL MODULE

  1. running: Returns the data of all running jobs found in the proc directory.
  2. find_job: Returns specific data about a certain job, based on job id.
  3. signal_job: Allows a given jid to be sent a signal.
  4. term_job: Sends a termination signal (SIGTERM, 15) to the process controlling the specified job.
  5. kill_job: Sends a kill signal (SIGKILL, 9) to the process controlling the specified job.

Usage examples

salt-run jobs.active
salt-run jobs.lookup_jid <job id number>
salt-run jobs.list_jobs

Scheduler

Scheduling is enabled via the schedule option on either the master or minion config files, or via a minion’s pillar data.

# run a highstate on the minion every 10 minutes
schedule:
  highstate:
    function: state.highstate
    seconds: 600

References