Tom Clark taught this course, a DevOps series on Puppet.
Table of Contents
- Preparation for Puppet
- Set up the puppetmaster
- Set up an agent
- Set up an alias command to run the agent
- Create sudo module
- Create NTP module
- Create MariaDB module
- Create Puppet Server module
- Create SSH module
- Create Nagios module
- NRPE
- Nagios Notifications
- Bacula
- Database backup
- DNS Update
- ownCloud
Preparation for Puppet
- On all servers, edit the file “/etc/dhcp/dhclient.conf” and put the code below after the request directive:
supersede domain-name "foo.org.nz";
supersede domain-search "foo.org.nz";
supersede domain-name-servers 10.25.100.110;
- Run the command “sudo dhclient”
- Then check that the change is working:
cat /etc/resolv.conf
Make sure the nameserver is 10.25.100.110.
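This check can be scripted; a minimal sketch, assuming the expected nameserver is 10.25.100.110:
```bash
# Fail loudly if resolv.conf does not point at the expected nameserver.
grep -q '^nameserver 10.25.100.110' /etc/resolv.conf \
  && echo "DNS OK" || echo "DNS NOT set - re-run sudo dhclient"
```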
Set up the puppetmaster
On the group21mgmt server:
sudo apt-get install puppetmaster
Edit the file “/etc/puppet/puppet.conf”:
[master]
certname=group21mgmt.foo.org.nz
Create the file “/etc/puppet/manifests/site.pp”:
sudo touch /etc/puppet/manifests/site.pp
Run “sudo systemctl restart puppetmaster”.
PS: to check and sign an agent’s certificate (on the puppetmaster):
sudo puppet cert --list
sudo puppet cert --sign group21db
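If several agents are pending at once, they can be signed in one go; a sketch, assuming the puppet cert interface used above supports the --all flag:
```bash
sudo puppet cert list          # pending requests show without a '+' prefix
sudo puppet cert sign --all    # sign every pending agent certificate
```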
Set up an agent
On any agent server (such as db, app, backups):
sudo apt-get install puppet
Then run:
sudo puppet agent --server=group21mgmt.foo.org.nz --no-daemonize --verbose --onetime
All-green output means the run succeeded.
Create the file “/etc/puppet/manifests/nodes.pp” on the puppetmaster, for example:
node 'group21db' {
package { 'vim': ensure => present }
}
Then put the line below into the file “/etc/puppet/manifests/site.pp”:
import 'nodes.pp'
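To confirm the node definition is applied, run the agent on group21db and check that the example package arrived; a quick sketch:
```bash
sudo puppet agent --server=group21mgmt.foo.org.nz --no-daemonize --verbose --onetime
dpkg -s vim | grep '^Status'   # expect "Status: install ok installed"
```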
Set up an alias command to run the agent
- Add the line below to the file “~/.bash_aliases”:
alias runagent='sudo puppet agent --server=group21mgmt.foo.org.nz --no-daemonize --verbose --onetime'
- Run the command “source ~/.bash_aliases”.
- Repeat this procedure on every agent server; one way to automate it is sketched below.
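A minimal sketch of pushing the alias to every agent, assuming passwordless SSH to each host (the hostnames below are this document’s examples):
```bash
# Append the runagent alias to ~/.bash_aliases on each agent host.
for host in group21db group21app group21backups; do
  ssh "$host" "echo \"alias runagent='sudo puppet agent --server=group21mgmt.foo.org.nz --no-daemonize --verbose --onetime'\" >> ~/.bash_aliases"
done
# The alias takes effect at the next login, or after running: source ~/.bash_aliases
```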
Create sudo module
- Create the file structure below under “/etc/puppet/modules”:
sudo
sudo/files
sudo/templates
sudo/manifests
sudo/manifests/init.pp
- Edit the init.pp file:
class sudo {
  package { 'sudo':
    ensure => present,
  }
  file { '/etc/sudoers':
    owner   => "root",
    group   => "root",
    mode    => 0440,
    source  => "puppet:///modules/sudo/etc/sudoers",
    require => Package['sudo'],
  }
}
- Copy the sudoers file:
cp /etc/sudoers /etc/puppet/modules/sudo/files/etc/sudoers
- Add the line “include sudo” to “/etc/puppet/manifests/nodes.pp”.
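With the class included, a run on the agent deploys the file; a quick verification sketch:
```bash
runagent                 # the alias defined in the previous section
ls -l /etc/sudoers       # expect mode 0440 (-r--r-----), owner root:root
sudo visudo -c           # syntax-check the deployed sudoers file
```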
Create NTP module
- Create the file structure below under “/etc/puppet/modules”:
ntp_service
ntp_service/files
ntp_service/templates
ntp_service/manifests
ntp_service/manifests/init.pp
- Edit the init.pp file:
```bash
class ntp_service {
  include ntp_service::install, ntp_service::config, ntp_service::service
}
class ntp_service::install {
  package { "ntp":
    ensure => present,
  }
}
class ntp_service::config {
  if $hostname == "group21mgmt" {
    $restrict = "restrict 10.25.0.0 mask 255.255.0.0 nomodify notrap"
    $server   = "server 127.127.1.0"
    $fudge    = "fudge 127.127.1.0 stratum 10"
  } else {
    $restrict = ""
    $server   = "server group21mgmt.foo.org.nz prefer"
    $fudge    = ""
  }
  file { "/etc/ntp.conf":
    ensure  => present,
    owner   => "root",
    group   => "root",
    mode    => 0444,
    content => template("ntp_service/ntp.conf.erb"),
  }
}
class ntp_service::service {
  service { "ntp":
    ensure => running,
    enable => true,
  }
}
```
- Create the file "ntp.conf.erb" under "/etc/puppet/modules/ntp_service/templates/" and add the code below:
```bash
# /etc/ntp.conf, configuration for ntpd; see ntp.conf(5) for help
driftfile /var/lib/ntp/ntp.drift
# Enable this if you want statistics to be logged.
#statsdir /var/log/ntpstats/
statistics loopstats peerstats clockstats
filegen loopstats file loopstats type day enable
filegen peerstats file peerstats type day enable
filegen clockstats file clockstats type day enable
# Specify one or more NTP servers.
# Use servers from the NTP Pool Project. Approved by Ubuntu Technical Board
# on 2011-02-08 (LP: #104525). See http://www.pool.ntp.org/join.html for
# more information.
pool 0.ubuntu.pool.ntp.org iburst
pool 1.ubuntu.pool.ntp.org iburst
pool 2.ubuntu.pool.ntp.org iburst
pool 3.ubuntu.pool.ntp.org iburst
# Use Ubuntu's ntp server as a fallback.
pool ntp.ubuntu.com
# Access control configuration; see /usr/share/doc/ntp-doc/html/accopt.html for
# details. The web page <http://support.ntp.org/bin/view/Support/AccessRestrictions>
# might also be helpful.
#
# Note that "restrict" applies to both servers and clients, so a configuration
# that might be intended to block requests from certain clients could also end
# up blocking replies from your own upstream servers.
# By default, exchange time with everybody, but don't allow configuration.
restrict -4 default kod notrap nomodify nopeer noquery limited
restrict -6 default kod notrap nomodify nopeer noquery limited
# allow local systems to query
<%= restrict %>
# Local users may interrogate the ntp server more closely.
restrict 127.0.0.1
restrict ::1
# Needed for adding pool entries
restrict source notrap nomodify noquery
# Clients from this (example!) subnet have unlimited access, but only if
# cryptographically authenticated.
#restrict 192.168.123.0 mask 255.255.255.0 notrust
# If you want to provide time to your local subnet, change the next line.
# (Again, the address is an example only.)
#broadcast 192.168.123.255
# If you want to listen to time broadcasts on your local subnet, de-comment the
# next lines. Please do this only if you trust everybody on the network!
#disable auth
#broadcastclient
#Changes recquired to use pps synchonisation as explained in documentation:
#http://www.ntp.org/ntpfaq/NTP-s-config-adv.htm#AEN3918
#server 127.127.8.1 mode 135 prefer # Meinberg GPS167 with PPS
#fudge 127.127.8.1 time1 0.0042 # relative to PPS for my hardware
#server 127.127.22.1 # ATOM(PPS)
#fudge 127.127.22.1 flag3 1 # enable PPS API
<%= server %>
<%= fudge %>
```
- In the file “/etc/puppet/manifests/nodes.pp”, add “include ntp_service”:
node 'group21db' {
  package { 'vim': ensure => present }
  include ntp_service
}
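After the agent applies the module, NTP can be checked on both sides; a sketch (ntpq ships with the ntp package):
```bash
runagent
ntpq -p                  # on an agent, group21mgmt should appear as the preferred peer
systemctl status ntp     # service should be active (running)
```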
Create MariaDB module
- Create the file structure below under “/etc/puppet/modules”:
mariadb
mariadb/files/50-server.cnf
mariadb/templates
mariadb/manifests/install.pp
mariadb/manifests/init.pp
mariadb/manifests/config.pp
mariadb/manifests/service.pp
The content of the 50-server.cnf file is below:
# These groups are read by MariaDB server.
# Use it for options that only the server (but not clients) should see
#
# See the examples of server my.cnf files in /usr/share/mysql/
#
# this is read by the standalone daemon and embedded servers
[server]
# this is only for the mysqld standalone daemon
[mysqld]
#
# * Basic Settings
#
user = mysql
pid-file = /var/run/mysqld/mysqld.pid
socket = /var/run/mysqld/mysqld.sock
port = 3306
basedir = /usr
datadir = /var/lib/mysql
tmpdir = /tmp
lc-messages-dir = /usr/share/mysql
skip-external-locking
# Instead of skip-networking the default is now to listen only on
# localhost which is more compatible and is not less secure.
bind-address = 127.0.0.1
#
# * Fine Tuning
#
key_buffer_size = 16M
max_allowed_packet = 16M
thread_stack = 192K
thread_cache_size = 8
# This replaces the startup script and checks MyISAM tables if needed
# the first time they are touched
myisam-recover = BACKUP
#max_connections = 100
#table_cache = 64
#thread_concurrency = 10
#
# * Query Cache Configuration
#
query_cache_limit = 1M
query_cache_size = 16M
#
# * Logging and Replication
#
# Both location gets rotated by the cronjob.
# Be aware that this log type is a performance killer.
# As of 5.1 you can enable the log at runtime!
#general_log_file = /var/log/mysql/mysql.log
#general_log = 1
#
# Error log - should be very few entries.
#
log_error = /var/log/mysql/error.log
#
# Enable the slow query log to see queries with especially long duration
#slow_query_log_file = /var/log/mysql/mariadb-slow.log
#long_query_time = 10
#log_slow_rate_limit = 1000
#log_slow_verbosity = query_plan
#log-queries-not-using-indexes
#
# The following can be used as easy to replay backup logs or for replication.
# note: if you are setting up a replication slave, see README.Debian about
# other settings you may need to change.
#server-id = 1
#log_bin = /var/log/mysql/mysql-bin.log
expire_logs_days = 10
max_binlog_size = 100M
#binlog_do_db = include_database_name
#binlog_ignore_db = include_database_name
#
# * InnoDB
#
# InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/.
# Read the manual for more InnoDB related options. There are many!
#
# * Security Features
#
# Read the manual, too, if you want chroot!
# chroot = /var/lib/mysql/
#
# For generating SSL certificates I recommend the OpenSSL GUI "tinyca".
#
# ssl-ca=/etc/mysql/cacert.pem
# ssl-cert=/etc/mysql/server-cert.pem
# ssl-key=/etc/mysql/server-key.pem
#
# * Character sets
#
# MySQL/MariaDB default is Latin1, but in Debian we rather default to the full
# utf8 4-byte character set. See also client.cnf
#
character-set-server = utf8mb4
collation-server = utf8mb4_general_ci
#
# * Unix socket authentication plugin is built-in since 10.0.22-6
#
# Needed so the root database user can authenticate without a password but
# only when running as the unix root user.
#
# Also available for other users if required.
# See https://mariadb.com/kb/en/unix_socket-authentication-plugin/
# this is only for embedded server
[embedded]
# This group is only read by MariaDB servers, not by MySQL.
# If you use the same .cnf file for MySQL and MariaDB,
# you can put MariaDB-only options here
[mariadb]
# This group is only read by MariaDB-10.0 servers.
# If you use the same .cnf file for MariaDB of different versions,
# use this group for options that older servers don't understand
[mariadb-10.0]
- Put the code below into the install.pp file:
class mariadb::install {
package { "mariadb-server" :
ensure => present,
require => User["mysql"],
}
user { "mysql":
ensure => present,
comment => "MariaDB user",
gid => "mysql",
shell => "/bin/false",
require => Group["mysql"],
}
group { "mysql" :
ensure => present,
}
}
- Put the code below into the config.pp file:
class mariadb::config {
  file { "/etc/mysql/mariadb.conf.d/50-server.cnf":
    ensure  => present,
    source  => "puppet:///modules/mariadb/50-server.cnf",
    mode    => 0444,
    owner   => "root",
    group   => "root",
    require => Class["mariadb::install"],
    notify  => Class["mariadb::service"],
  }
}
- Put the code below into the service.pp file:
class mariadb::service {
  service { "mysql":
    ensure     => running,
    hasstatus  => true,
    hasrestart => true,
    enable     => true,
    require    => Class["mariadb::config"],
  }
}
- Put the code below into the init.pp file:
class mariadb {
  include mariadb::install, mariadb::config, mariadb::service
}
- Update the nodes.pp file located at “/etc/puppet/manifests/nodes.pp”:
node 'group21db.foo.org.nz' {
  package { 'vim': ensure => present }
  include sudo
  include ntp_service
  include puppetasservice
  include ssh
  include mariadb
}
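After a run on group21db, confirm the service and the managed config are in place; a minimal check:
```bash
runagent
systemctl status mysql                      # the service managed by mariadb::service
sudo mysql -u root -e 'SELECT VERSION();'   # unix_socket auth lets root connect without a password
```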
Create Puppet Server module
All procedures for the “puppet as a service” module are the same as for MariaDB; refer to that section. The only difference is in config.pp: this module is applied to all nodes, whereas MariaDB is applied only to group21db.foo.org.nz. The managed puppet.conf is listed below:
[main]
logdir=/var/log/puppet
vardir=/var/lib/puppet
ssldir=/var/lib/puppet/ssl
rundir=/run/puppet
factpath=$vardir/lib/facter
prerun_command=/etc/puppet/etckeeper-commit-pre
postrun_command=/etc/puppet/etckeeper-commit-post
server=group21mgmt.foo.org.nz
[master]
# These are needed when the puppetmaster is run by passenger
# and can safely be removed if webrick is used.
ssl_client_header = SSL_CLIENT_S_DN
ssl_client_verify_header = SSL_CLIENT_VERIFY
certname=group21mgmt.foo.org.nz
Create SSH module
# Normal procedure:
* openssh-server
* ssh-keygen -t rsa -b 4096 on the client
* ssh-copy-id remote-user@remote-server-hostname on the client
* ssh remote-user@remote-server-hostname on the client
All procedures for the “ssh” module are the same as for MariaDB; refer to that section. The difference is that we need to put in place the SSH authorized_keys file containing the local computer’s public key, following the common procedure recorded earlier on the wiki pages. The “sshd_config” file is listed below:
# Package generated configuration file
# See the sshd_config(5) manpage for details
# What ports, IPs and protocols we listen for
Port 22
# Use these options to restrict which interfaces/protocols sshd will bind to
#ListenAddress ::
#ListenAddress 0.0.0.0
Protocol 2
# HostKeys for protocol version 2
HostKey /etc/ssh/ssh_host_rsa_key
HostKey /etc/ssh/ssh_host_dsa_key
HostKey /etc/ssh/ssh_host_ecdsa_key
HostKey /etc/ssh/ssh_host_ed25519_key
#Privilege Separation is turned on for security
UsePrivilegeSeparation yes
# Lifetime and size of ephemeral version 1 server key
KeyRegenerationInterval 3600
ServerKeyBits 1024
# Logging
SyslogFacility AUTH
LogLevel INFO
# Authentication:
LoginGraceTime 120
PermitRootLogin prohibit-password
StrictModes yes
RSAAuthentication yes
PubkeyAuthentication yes
#AuthorizedKeysFile %h/.ssh/authorized_keys
# Don't read the user's ~/.rhosts and ~/.shosts files
IgnoreRhosts yes
# For this to work you will also need host keys in /etc/ssh_known_hosts
RhostsRSAAuthentication no
# similar for protocol version 2
HostbasedAuthentication no
# Uncomment if you don't trust ~/.ssh/known_hosts for RhostsRSAAuthentication
#IgnoreUserKnownHosts yes
# To enable empty passwords, change to yes (NOT RECOMMENDED)
PermitEmptyPasswords no
# Change to yes to enable challenge-response passwords (beware issues with
# some PAM modules and threads)
ChallengeResponseAuthentication no
# Change to no to disable tunnelled clear text passwords
PasswordAuthentication no
# Kerberos options
#KerberosAuthentication no
#KerberosGetAFSToken no
#KerberosOrLocalPasswd yes
#KerberosTicketCleanup yes
# GSSAPI options
#GSSAPIAuthentication no
#GSSAPICleanupCredentials yes
X11Forwarding yes
X11DisplayOffset 10
PrintMotd no
PrintLastLog yes
TCPKeepAlive yes
#UseLogin no
#MaxStartups 10:30:60
#Banner /etc/issue.net
# Allow client to pass locale environment variables
AcceptEnv LANG LC_*
Subsystem sftp /usr/lib/openssh/sftp-server
# Set this to 'yes' to enable PAM authentication, account processing,
# and session processing. If this is enabled, PAM authentication will
# be allowed through the ChallengeResponseAuthentication and
# PasswordAuthentication. Depending on your PAM configuration,
# PAM authentication via ChallengeResponseAuthentication may bypass
# the setting of "PermitRootLogin without-password".
# If you just want the PAM account and session checks to run without
# PAM authentication, then enable this but set PasswordAuthentication
# and ChallengeResponseAuthentication to 'no'.
UsePAM yes
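Before Puppet pushes this file to the agents it is worth syntax-checking it, and after the run the policy can be probed from outside; a sketch (the module file path below is an assumption, following the MariaDB module layout):
```bash
sudo sshd -t -f /etc/puppet/modules/ssh/files/sshd_config   # -t validates the config file
# PasswordAuthentication no means a password-only login attempt must be refused:
ssh -o PreferredAuthentications=password user@group21db.foo.org.nz   # expect "Permission denied"
```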
Create Nagios module
- Create the file structure under “/etc/puppet/modules/” as shown below:
nagios
├── files
│   ├── htpasswd.users
│   └── nagios.cfg
├── manifests
│   ├── config.pp
│   ├── init.pp
│   ├── install.pp
│   └── service.pp
└── templates
“htpasswd.users” stores the automatically generated password for the web-interface login.
For the “nagios.cfg” file we use a template kept in this repository.
- install.pp
class nagios::install {
  package { "nagios3":
    ensure => present,
  }
  user { "nagios":
    ensure => present,
  }
  group { "nagios":
    ensure => present,
  }
}
- service.pp
class nagios::service {
  service { "nagios3":
    ensure     => running,
    hasstatus  => true,
    hasrestart => true,
    enable     => true,
    require    => Class["nagios::config"],
  }
}
- init.pp
class nagios {
  include nagios::install, nagios::config, nagios::service
}
- config.pp
class nagios::config {
file { "/etc/nagios3/nagios.cfg":
ensure => present,
source => "puppet:///modules/nagios/nagios.cfg",
mode => 0444,
owner => "root",
group => "root",
require => Class["nagios::install"],
notify => Class["nagios::service"],
}
file { "/etc/nagios3/htpasswd.users":
ensure => present,
source => "puppet:///modules/nagios/htpasswd.users",
mode => 0444,
owner => "root",
group => "root",
require => Class["nagios::install"],
notify => Class["nagios::service"],
}
file { "/etc/nagios3/conf.d":
ensure => present,
mode => 0775,
owner => "puppet",
group => "puppet",
require => Class["nagios::install"],
notify => Class["nagios::service"],
}
nagios_host { "group21db.foo.org.nz":
target => "/etc/nagios3/conf.d/ppt_hosts.cfg",
alias => "db",
check_period => "24x7",
max_check_attempts => 3,
check_command => "check-host-alive",
notification_interval => 30,
notification_period => "24x7",
notification_options => "d,u,r",
mode => 0444,
}
nagios_host { "group21app.foo.org.nz":
target => "/etc/nagios3/conf.d/ppt_hosts.cfg",
alias => "app",
check_period => "24x7",
max_check_attempts => 3,
check_command => "check-host-alive",
notification_interval => 30,
notification_period => "24x7",
notification_options => "d,u,r",
mode => 0444,
}
nagios_host { "group21backups.foo.org.nz":
target => "/etc/nagios3/conf.d/ppt_hosts.cfg",
alias => "backups",
check_period => "24x7",
max_check_attempts => 3,
check_command => "check-host-alive",
notification_interval => 30,
notification_period => "24x7",
notification_options => "d,u,r",
mode => 0444,
}
nagios_host { "group21mgmt.foo.org.nz":
target => "/etc/nagios3/conf.d/ppt_hosts.cfg",
alias => "mgmt",
check_period => "24x7",
max_check_attempts => 3,
check_command => "check-host-alive",
notification_interval => 30,
notification_period => "24x7",
notification_options => "d,u,r",
mode => 0444,
}
nagios_hostgroup { 'ssh-servers':
target => '/etc/nagios3/conf.d/ppt_hostgroups.cfg',
alias => 'ssh Servers',
members => 'group21db.foo.org.nz, group21app.foo.org.nz, group21backups.foo.org.nz, group21mgmt.foo.org.nz',
mode => 0444,
}
nagios_service { 'ssh':
service_description => 'ssh servers',
hostgroup_name => 'ssh-servers',
target => '/etc/nagios3/conf.d/ppt_services.cfg',
check_command => 'check_ssh',
max_check_attempts => 3,
retry_check_interval => 1,
normal_check_interval => 5,
check_period => '24x7',
notification_interval => 30,
notification_period => '24x7',
notification_options => 'w,u,c',
contact_groups => 'admins',
mode => 0444,
}
nagios_hostgroup { 'db-servers':
target => '/etc/nagios3/conf.d/ppt_hostgroups.cfg',
alias => 'db Servers',
members => 'group21db.foo.org.nz',
mode => 0444,
}
nagios_service { 'db':
service_description => 'database servers',
hostgroup_name => 'db-servers',
target => '/etc/nagios3/conf.d/ppt_services.cfg',
check_command => 'check_mysql_cmdlinecred!nagios!abc',
max_check_attempts => 3,
retry_check_interval => 1,
normal_check_interval => 5,
check_period => '24x7',
notification_interval => 30,
notification_period => '24x7',
notification_options => 'w,u,c',
contact_groups => 'admins',
mode => 0444,
}
nagios_hostgroup { 'remote-disks':
target => '/etc/nagios3/conf.d/ppt_hostgroups.cfg',
alias => 'Remote Disks',
members => 'group21db.foo.org.nz, group21app.foo.org.nz, group21backups.foo.org.nz',
mode => 0444,
}
nagios_service { 'diskcheck':
service_description => 'remote disk check',
hostgroup_name => 'remote-disks',
target => '/etc/nagios3/conf.d/ppt_services.cfg',
check_command => 'check_nrpe_1arg!check_sda1',
max_check_attempts => 3,
retry_check_interval => 1,
normal_check_interval => 5,
check_period => '24x7',
notification_interval => 30,
notification_period => '24x7',
notification_options => 'w,u,c',
contact_groups => 'admins',
mode => 0444,
}
nagios_hostgroup { 'remote-http':
target => '/etc/nagios3/conf.d/ppt_hostgroups.cfg',
alias => 'Remote HTTP',
members => 'group21app.foo.org.nz',
mode => 0444,
}
nagios_service { 'httpcheck':
service_description => 'remote http check',
hostgroup_name => 'remote-http',
target => '/etc/nagios3/conf.d/ppt_services.cfg',
check_command => 'check_http',
max_check_attempts => 3,
retry_check_interval => 1,
normal_check_interval => 5,
check_period => '24x7',
notification_interval => 30,
notification_period => '24x7',
notification_options => 'w,u,c',
contact_groups => 'admins',
mode => 0444,
}
nagios_contact { 'wangh21':
target => '/etc/nagios3/conf.d/ppt_contacts.cfg',
alias => 'Hua Wang',
service_notification_period => '24x7',
host_notification_period => '24x7',
service_notification_options => 'w,u,c,r',
host_notification_options => 'd,r',
service_notification_commands => 'notify-service-by-slack',
host_notification_commands => 'notify-host-by-slack',
email => 'root@localhost',
mode => 0444,
}
nagios_contact { 'kiselv1':
target => '/etc/nagios3/conf.d/ppt_contacts.cfg',
alias => 'Kiselv',
service_notification_period => '24x7',
host_notification_period => '24x7',
service_notification_options => 'w,u,c,r',
host_notification_options => 'd,r',
service_notification_commands => 'notify-service-by-slack',
host_notification_commands => 'notify-host-by-slack',
email => 'root@localhost',
mode => 0444,
}
nagios_contactgroup { 'sysadmins':
target => '/etc/nagios3/conf.d/ppt_contactgroups.cfg',
alias => 'Nagios Administrators',
members => 'wangh21, kiselv1',
mode => 0444,
}
}
Troubleshooting Nagios Checks
- Check whether the MariaDB server is running correctly:
sudo -i    # become root to log in to the local MariaDB
mysql -u root
GRANT SELECT ON nagiosdb.* TO 'nagios'@'<mgmt server ip address>' IDENTIFIED BY 'mypasswd';
sudo ufw allow from 10.25.137.155 to any port 3306
systemctl status mysql.service
- On the MGMT server, change /etc/puppet/modules/mariadb/files/50-server.cnf to make sure:
bind-address = 0.0.0.0
- The check command location:
/usr/lib/nagios/plugins/check_mysql
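The plugin can also be run by hand from the mgmt server, which separates Nagios problems from MySQL problems; a sketch using the credentials granted above:
```bash
/usr/lib/nagios/plugins/check_mysql -H group21db.foo.org.nz -u nagios -p mypasswd
echo $?    # 0 = OK, 1 = WARNING, 2 = CRITICAL (standard Nagios plugin exit codes)
```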
Notes: there are a couple of things we need to be really careful about.
- If there is a dependency error mentioning “postfix”, check the “/etc/postfix/main.cf” file: if the hostname ends with a dot, such as “myhostname = group21mgmt.foo.org.nz.”, the trailing dot should be removed.
- After any change to the config.pp file, we can run the command “sudo nagios3 -v /etc/nagios3/nagios.cfg”; most errors will be displayed on screen, which is good for troubleshooting.
- After any change we may need to run “systemctl restart nagios3.service” to restart the service.
- Regarding config.pp: if we have changed hosts, hostgroups, or services, we can move the generated files out of /etc/nagios3/conf.d/; this works like clearing a cache, and we can see the result of the change immediately.
- In the file “/etc/nagios3/conf.d/hostgroups_nagios2.cfg”, I commented out the “A list of your ssh-accessible servers” hostgroup; the original error was a duplicate hostgroup definition.
NRPE
- Create an nrpe module with Puppet.
- Package name: nagios-nrpe-server
- Config file with the commands we need to change: /etc/nagios/nrpe.cfg
allowed_hosts=group21mgmt.foo.org.nz
command[check_hda1]=/usr/lib/nagios/plugins/check_disk -w 20% -c 10% -p /dev/hda1   # the "df" command shows the device list
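After editing nrpe.cfg, restart the daemon and try the check locally on the agent; a sketch:
```bash
sudo systemctl restart nagios-nrpe-server
/usr/lib/nagios/plugins/check_disk -w 20% -c 10% -p /dev/hda1   # the same command NRPE will execute
```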
- Nagios Server Configuration
- Install nagios-nrpe-plugin package
sudo apt install nagios-nrpe-plugin
- Create the hostgroup and service (for the entire config.pp file see GitHub_puppet):
nagios_hostgroup { 'remote-disks':
  target  => '/etc/nagios3/conf.d/ppt_hostgroups.cfg',
  alias   => 'Remote Disks',
  members => 'group21db.foo.org.nz, group21app.foo.org.nz, group21backups.foo.org.nz',
  mode    => 0444,
}
nagios_service { 'diskcheck':
  service_description   => 'remote disk check',
  hostgroup_name        => 'remote-disks',
  target                => '/etc/nagios3/conf.d/ppt_services.cfg',
  check_command         => 'check_nrpe_1arg!check_sda1',
  max_check_attempts    => 3,
  retry_check_interval  => 1,
  normal_check_interval => 5,
  check_period          => '24x7',
  notification_interval => 30,
  notification_period   => '24x7',
  notification_options  => 'w,u,c',
  contact_groups        => 'admins',
  mode                  => 0444,
}
Notes: the check command here is check_nrpe_1arg!check_sda1. check_nrpe_1arg is defined in the file /etc/nagios-plugins/config/check_nrpe.cfg. The single argument (here check_sda1) must be the name of a command defined in the nrpe.cfg file on the remote system, like check_hda1 above.
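From the mgmt server the whole NRPE path can be tested without waiting for Nagios to poll; a sketch, using the command name defined in the agent’s nrpe.cfg (check_hda1 above):
```bash
/usr/lib/nagios/plugins/check_nrpe -H group21db.foo.org.nz -c check_hda1
# expect a "DISK OK - free space: ..." line; "Connection refused" usually means
# allowed_hosts in nrpe.cfg does not include the mgmt server
```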
Nagios Notifications
- Create a Slack workspace and a channel under it. Once that is done, install the Nagios app into the workspace, which gives us the domain name (workspace URL) and a token. In this particular case we just use our Systemadmin class workspace, and we created our team channel named group21chanel.
- Install the Perl modules “libwww-perl” and “libcrypt-ssleay-perl”.
- Download the plugin file. Copy it to /usr/lib/nagios/plugins and set its permissions to 0755. Also replace the values of $opt_domain and $opt_token with our own from step 1.
- Create a file /etc/nagios-plugins/config/slack.cfg with the contents below, replacing the value of slack_channel with our own channel name:
define command {
  command_name notify-service-by-slack
  command_line /usr/lib/nagios/plugins/nagios.pl -field slack_channel=#group21chanel -field \
    HOSTALIAS="$HOSTNAME$" -field SERVICEDESC="$SERVICEDESC$" \
    -field SERVICESTATE="$SERVICESTATE$" -field SERVICEOUTPUT="$SERVICEOUTPUT$" \
    -field NOTIFICATIONTYPE="$NOTIFICATIONTYPE$"
}
define command {
  command_name notify-host-by-slack
  command_line /usr/lib/nagios/plugins/nagios.pl -field slack_channel=#group21chanel -field \
    HOSTALIAS="$HOSTNAME$" -field HOSTSTATE="$HOSTSTATE$" \
    -field HOSTOUTPUT="$HOSTOUTPUT$" \
    -field NOTIFICATIONTYPE="$NOTIFICATIONTYPE$"
}
- Create Nagios contacts and contact groups so these commands are used when Nagios triggers an alert: put the nagios_contact entries for 'wangh21' and 'kiselv1' and the nagios_contactgroup 'sysadmins' into the Nagios module config.pp, exactly as already listed in the Create Nagios module section above.
- Test: change the disk check rule in /etc/puppet/modules/nrpe/files/nrpe.cfg; this will create a problem and should trigger a notification.
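The notification command can also be fired by hand to confirm the domain, token, and channel; a sketch mirroring the command_line above (the field values here are made up for the test):
```bash
/usr/lib/nagios/plugins/nagios.pl -field slack_channel=#group21chanel \
  -field HOSTALIAS=testhost -field HOSTSTATE=DOWN \
  -field HOSTOUTPUT="manual test" -field NOTIFICATIONTYPE=PROBLEM
```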
Bacula
Notes: this is done on the backups server. If the nagios package happened to be installed there, run “sudo apt-get --purge autoremove nagios” to remove it.
- sudo apt-get install bacula-server (agree to use the SQLite database), then sudo apt-get install bacula-client.
- In /etc/bacula/bacula-sd.conf, find the Device sections named FileChgr1-Dev1 and FileChgr1-Dev2 and change their Archive Device to /home/bacula/storage/dev1 and /home/bacula/storage/dev2.
- In /etc/bacula/bacula-dir.conf, find the Job section named RestoreFiles and change the Where property to /home/bacula/restores; also change the File property to /home/bacula/data-to-backup in the FileSet section.
- Run service bacula-director reload and service bacula-sd restart.
- Create a folder structure like the one below and make dev1 and dev2 owned by bacula:
bacula
├── data-to-backup
├── restores
└── storage
    ├── dev1
    └── dev2
- Create some test files in /home/bacula/data-to-backup.
- In a second SSH session run the bconsole command; useful commands there are show filesets, status dir, status client, status storage, and messages.
- Enter run in bconsole, enter 1 to run the BackupClient1 job, and answer yes.
- Enter label, choose File, name the new volume TestVolume1, and choose the File pool. The backup job is done.
- Now test it: delete some files under /home/bacula/data-to-backup.
- Enter restore all in bconsole; pick option 5, and enter done to restore everything.
- Bacula will place the restored files under /home/bacula/restores; verify that the files are correct before manually copying them to the desired location. What if you want to restore the files to their original locations? Start a new restore job as above; when you get to the Yes/No/Mod step, enter mod and set the Where property of the restore to nothing (empty) or /. Bacula will then restore the files directly to their original locations.
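bconsole also accepts piped input, which is handy for quick scripted checks; a sketch:
```bash
echo "status dir" | sudo bconsole   # director status: scheduled, running, and terminated jobs
echo "list jobs" | sudo bconsole    # job history from the catalog
```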
Database backup
Create a Puppet cron resource, dumpdbfile, that backs up MySQL by running mysqldump. Just put the code below into the MariaDB module’s config.pp file as a section.
cron {"dumpdbfile":
command => 'mysqldump --all-databases --add-drop-table > /home/hua/db-backup.sql',
user => root,
hour => 1,
minute => 20,
weekday => "*"
}
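Once Puppet applies it, the entry appears in root’s crontab under a “# Puppet Name:” marker, and the dump can be exercised by hand; a quick check:
```bash
sudo crontab -l | grep -A1 'Puppet Name: dumpdbfile'
sudo mysqldump --all-databases --add-drop-table > /home/hua/db-backup.sql   # manual run of the same command
ls -lh /home/hua/db-backup.sql
```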
DNS Update
To avoid DHCP overwriting /etc/resolv.conf:
Create a file /etc/dhcp/dhclient-enter-hooks.d/nodnsupdate with the code below:
#!/bin/sh
make_resolv_conf() {
:
}
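To confirm the hook works, renew the lease and check that the resolver settings survive; a quick check:
```bash
sudo dhclient            # renew the lease; the overridden make_resolv_conf() is now a no-op
cat /etc/resolv.conf     # should still show nameserver 10.25.100.110
```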
ownCloud
- sudo -i
- mysql -u root
- GRANT ALL PRIVILEGES ON owncloud.* TO ocuser@<app server address> IDENTIFIED BY 'abc';
- curl https://download.owncloud.org/download/repositories/production/Ubuntu_16.04/Release.key | sudo apt-key add -
- echo 'deb http://download.owncloud.org/download/repositories/production/Ubuntu_16.04/ /' | sudo tee /etc/apt/sources.list.d/owncloud.list
- Run apt-get update and install the packages below:
- apache2
- libapache2-mod-php
- php-mysql
- php-mbstring
- php-gettext
- php-intl
- php-redis
- php-imagick
- php-igbinary
- php-gmp
- php-curl
- php-gd
- php-zip
- php-imap
- php-ldap
- php-bz2
- php-phpseclib
- owncloud-files
- php-xml
- Create the file /etc/apache2/sites-available/owncloud.conf with the following contents:
```
Alias /owncloud "/var/www/owncloud/"

<Directory /var/www/owncloud/>
  Options +FollowSymlinks
  AllowOverride All

  SetEnv HOME /var/www/owncloud
  SetEnv HTTP_HOME /var/www/owncloud
</Directory>
```
- Create a symlink in /etc/apache2/sites-enabled that links to the file you just created.
sudo ln -s /etc/apache2/sites-available/owncloud.conf /etc/apache2/sites-enabled/owncloud.conf
- Restart the apache2 service.
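Equivalently, Apache’s own helper can enable the site, followed by the restart; a sketch:
```bash
sudo a2ensite owncloud           # same effect as the manual symlink above
sudo systemctl restart apache2
# then browse to http://<app server address>/owncloud to run the ownCloud setup wizard
```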