Hi,
I need to add lines to the .ssh/config file of a specific user.
I chose to use augeas to do so because, in reality, it is a collection of various exported resources. So basically:
augeas{"ssh_config_deployer_local":
context => '/files/deployer/.ssh/config',
changes => [
"set host ${hostname}",
"set user deploy"
],
}
It turns out this won't work, because augeas does not manage this path (/deployer/.ssh/config). So I tried it the other way around:
augeas{"ssh_config_deployer_local":
context => '/files',
incl => '/deployer/.ssh/config',
lens => 'Ssh.lns',
changes => [
"set host ${hostname}",
"set user deploy"
],
}
without success.
How can I tell the augeas resource to edit this file with the Ssh lens?
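For comparison, here is a minimal sketch of how `lens` and `incl` are usually combined. The path /home/deployer/.ssh/config is an assumption (adjust it to the real home directory), and the changes assume the Ssh lens models each entry as a `Host` node with option sub-nodes:
augeas { 'ssh_config_deployer_local':
  lens    => 'Ssh.lns',
  incl    => '/home/deployer/.ssh/config',        # absolute path of the file to manage
  context => '/files/home/deployer/.ssh/config',  # the in-tree path is /files plus incl
  changes => [
    # append a new Host block and set its User option
    "set Host[last()+1] ${hostname}",
    "set Host[.='${hostname}']/User deploy",
  ],
  # only apply when no Host block for this host exists yet, to stay idempotent
  onlyif  => "match Host[.='${hostname}'] size == 0",
}
With `incl` set, only that one file is parsed with the given lens, so it does not matter that the file lies outside the lens's default path filter.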
↧
Can the augeas resource edit a file outside the recognized paths?
↧
Unable to start Puppet Enterprise Puppetserver
Hi Guys,
I downloaded the Puppet Enterprise VM and was trying to start puppetserver, and I get the errors below.
Log file: puppetserver.log
2017-06-08 06:27:08,286 INFO [async-dispatch-2] [p.e.s.m.master-service] Puppet Server has successfully started and is now ready to handle requests
2017-06-08 06:27:08,288 INFO [async-dispatch-2] [p.e.s.l.pe-legacy-routes-service] The legacy routing service has successfully started and is now ready to handle requests
2017-06-08 06:27:08,294 INFO [async-dispatch-2] [p.e.s.a.analytics-service] Puppet Server Analytics has successfully started and will run in the background
2017-06-08 06:27:09,644 INFO [pool-2-thread-1] [p.d.version-check] Newer version 2017.2.1 is available! Visit http://links.puppet.com/enterpriseupgrade for details.
2017-06-08 06:27:09,680 WARN [pool-2-thread-1] [p.e.s.a.analytics-utils] Failed to reach server https://master.puppetlabs.vm:8081/metrics/v1/mbeans/puppetlabs.puppetdb.population:name=num-nodes: java.net.ConnectException: Connection refused
2017-06-08 06:27:09,693 WARN [pool-2-thread-1] [p.e.s.a.analytics-utils] Failed to reach server https://master.puppetlabs.vm:8081/pdb/query/v4/facts: java.net.ConnectException: Connection refused
2017-06-08 06:27:09,708 WARN [pool-2-thread-1] [p.e.s.a.analytics-utils] Failed to reach server https://master.puppetlabs.vm:8081/pdb/query/v4/facts: java.net.ConnectException: Connection refused
2017-06-08 06:27:09,924 WARN [pool-2-thread-1] [p.e.s.a.analytics-utils] Failed to reach server https://master.puppetlabs.vm:8081/pdb/query/v4/facts: java.net.ConnectException: Connection refused
2017-06-08 06:27:10,007 WARN [pool-2-thread-1] [p.e.s.a.analytics-utils] Failed to reach server https://master.puppetlabs.vm:4433/metrics/v1/mbeans/puppetlabs.classifier%3Aname%3Dpuppetlabs.puppetlabs.classifier.total-groups: java.net.ConnectException: Connection refused
2017-06-08 06:27:10,024 WARN [pool-2-thread-1] [p.e.s.a.analytics-utils] Failed to reach server https://master.puppetlabs.vm:8143/metrics/v1/mbeans/puppetlabs.orchestrator%3Aname%3Dpuppetlabs.localhost.job-nodes.histogram: java.net.ConnectException: Connection refused
2017-06-08 06:27:10,534 WARN [pool-2-thread-1] [p.e.s.a.analytics-utils] Failed to reach server https://master.puppetlabs.vm:8143/metrics/v1/mbeans/puppetlabs.orchestrator%3Aname%3Dpuppetlabs.localhost.created-job.counter: java.net.ConnectException: Connection refused
2017-06-08 06:27:10,859 WARN [clojure-agent-send-pool-0] [puppetserver] Puppet Support for ruby version 1.9.3 is deprecated and will be removed in a future release. See https://docs.puppet.com/puppet/latest/system_requirements.html#ruby for a list of supported ruby versions.
(at /opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet.rb:167:in `Puppet')
2017-06-08 06:27:10,886 INFO [clojure-agent-send-pool-0] [puppetserver] Puppet Puppet settings initialized; run mode: master
2017-06-08 06:27:11,406 INFO [clojure-agent-send-pool-0] [p.s.j.i.jruby-agents] Finished creating JRubyInstance 2 of 3
2017-06-08 06:27:11,406 INFO [clojure-agent-send-pool-0] [p.s.j.i.jruby-internal] Creating JRubyInstance with id 3.
2017-06-08 06:27:14,406 WARN [clojure-agent-send-pool-0] [puppetserver] Puppet Support for ruby version 1.9.3 is deprecated and will be removed in a future release. See https://docs.puppet.com/puppet/latest/system_requirements.html#ruby for a list of supported ruby versions.
(at /opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet.rb:167:in `Puppet')
2017-06-08 06:27:14,426 INFO [clojure-agent-send-pool-0] [puppetserver] Puppet Puppet settings initialized; run mode: master
2017-06-08 06:27:14,833 INFO [clojure-agent-send-pool-0] [p.s.j.i.jruby-agents] Finished creating JRubyInstance 3 of 3
systemctl status output below:
root@master:/opt/puppetlabs/server/bin # systemctl |grep pe
var-lib-nfs-rpc_pipefs.mount loaded active mounted RPC Pipe File System
session-1.scope loaded active running Session 1 of user root
session-10.scope loaded active running Session 10 of user root
session-3.scope loaded active running Session 3 of user root
session-4.scope loaded active running Session 4 of user root
session-9.scope loaded active running Session 9 of user root
● pe-activemq.service loaded failed failed Puppet Enterprise ActiveMQ
● pe-nginx.service loaded failed failed pe-nginx - Puppet Enterprise web server
● pe-postgresql.service loaded failed failed Puppet Enterprise PostgreSQL database server
● pe-puppetdb.service loaded failed failed pe-puppetdb Service
sshd.service loaded active running OpenSSH server daemon
dm-event.socket loaded active listening Device-mapper event daemon FIFOs
systemd-initctl.socket loaded active listening /dev/initctl Compatibility Named Pipe
dev-mapper-centos\x2dswap.swap loaded active active /dev/mapper/centos-swap
LOAD = Reflects whether the unit definition was properly loaded.
SUB = The low-level unit activation state, values depend on unit type.
root@master:/opt/puppetlabs/server/bin #
I deleted ssl/* from both puppet and puppetdb.
I also did `puppet cert clean master.puppetlabs.vm` and then `puppetdb ssl-setup -f`, but I am still getting the errors described above. Please assist.
Config files below:
root@master:/opt/puppetlabs/server/bin # cat /etc/puppetlabs/puppetdb/conf.d/config.ini
# See README.md for more thorough explanations of each section and
# option.
[global]
# Store mq/db data in a custom directory
vardir = /opt/puppetlabs/server/data/puppetdb
# Use an external logback config file
logging-config = /etc/puppetlabs/puppetdb/logback.xml
product-name = pe-puppetdb
[command-processing]
# How many command-processing threads to use, defaults to (CPUs / 2)
# threads = 4
threads = 1
# Maximum amount of disk space (in MB) to allow for ActiveMQ persistent message storage
# store-usage = 102400
# Maximum amount of disk space (in MB) to allow for ActiveMQ temporary message storage
# temp-usage = 51200
concurrent-writes = 1
root@master:/opt/puppetlabs/server/bin #
↧
fact vs variable (scope question)
If I have a fact called `datacenter`, and my class has a variable declared (in the console) called `datacenter`, which one takes precedence over the other? Or is this going to be a big "mess" where I have to use unique names? Hopefully not.
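For illustration, a hedged sketch (the class name and default value here are made up) showing how the two can be referenced unambiguously, which sidesteps the precedence question entirely:
class example (
  # a class parameter also named "datacenter", e.g. set from the console
  String $datacenter = 'dc-default',
) {
  # $datacenter          -> the class parameter (local scope wins inside the class)
  # $facts['datacenter'] -> always the fact, unambiguously
  # $::datacenter        -> the top-scope value (the fact, unless shadowed there)
  notify { "param=${datacenter} fact=${facts['datacenter']}": }
}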
thanks
↧
view my inheritance relationships?
Greetings.
I'm not sure if I'm using the word **inheritance** the way Puppet uses the word. It's the only word I can think of.
I have a site manifest with lots of profiles. For example:
class profiles::generic
(
  $patchhour   = 13,
  $patchminute = 0,
  $hostfile,
)
{
  class { 'etchostfile' :
    hostfile => $hostfile,
  }
  class { 'systempatch' :
    hour   => $patchhour,
    minute => $patchminute,
  }
}
class profiles::testsystem
(
  $patchhour   = 14,
  $patchminute = 30,
  $hostfile    = "primary",
)
{
  class { 'profiles::generic' :
    hostfile    => $hostfile,
    patchhour   => $patchhour,
    patchminute => $patchminute,
  }
  include testpackage
}
class profiles::prodsystem
(
  $patchhour   = 13,
  $patchminute = 0,
  $hostfile    = "primary",
)
{
  class { 'profiles::generic' :
    hostfile    => $hostfile,
    patchhour   => $patchhour,
    patchminute => $patchminute,
  }
  include prodpackage
}
node 'cutlass'
{
  class { 'profiles::prodsystem' :
    patchminute => 10,
  }
}
node 'volvo'
{
  class { 'profiles::testsystem' :
    patchminute => 25,
    hostfile    => "alternate",
  }
}
The intent, if it's not obvious, is that we have
- a "generic" profile with a set of default values (patch at 1pm, no specific host file)
- a "testsystem" profile that inherits all of the the "generic" profile defaults, but patches at 2:30pm by default and uses the "primary" host file by default
- a "prodsystem" profile that inherits all of the "generic" profile defaults, but uses the "primary" host file by default
- a production machine that patches at 1:10pm (keep the default hour but change the minute)
- a test machine that patches at 2:25pm and specifies the "alternate" host file
So Question 1 is ... is there a better way to do this? Should I be using (say) global variables instead of parameters passed from profile to profile? I'd prefer to stay away from hiera and keep all of my info in the manifest.
And Question 2 is ... does anybody have or recommend a script or a viewer or a recommended way that I can parse my config and specifically view
- cutlass specified patchminute as 10, which overrides the profiles::prodsystem default patchminute of 0, which overrides the profiles::generic default patchminute of (also) 0
- cutlass will use the profiles::prodsystem default patchhour of 13, which overrides the profiles::generic default patchhour of (also) 13
? I know that `puppet compile find cutlass` will give me the final catalog info, but not (as far as I can tell) the path it used to get those values.
↧
How to copy files from the server based on the underlying OS?
Hi Team,
If `OS=Linux`, I need to copy `splunk_forwarder_linux.tar.gz` to the /opt/splunk/ location.
If `OS=AIX`, I need to copy `splunk_forwarder_aix.tar.gz` to the /opt/splunk/ location.
If `OS=SOLARIS`, I need to copy `splunk_forwarder_solaris.tar.gz` to the /opt/splunk/ location.
------------------
Currently my Puppet code looks like this. How would I include the above feature in my code as well? Please help!
file { '/opt/splunk/splunk_forwarder.tar.gz':
  ensure => present,
  mode   => '0600',
  source => [
    "puppet:///modules/splunk_repo/splunk_forwarder.tar.gz",
  ],
  before => Exec['unpack_splunk_forwarder.tar.gz'],
}
exec { 'unpack_splunk_forwarder.tar.gz':
  unless  => 'test -f /opt/splunk/splunk_forwarder/bin/splunk',
  cwd     => '/opt/splunk',
  command => 'tar -zvxf splunk_forwarder.tar.gz',
}
exec { 'start_splunk_service':
  command => '/opt/splunk/splunk_forwarder/bin/splunk start --accept-license',
  onlyif  => 'test -f /opt/splunk/splunk_forwarder/bin/splunk',
}
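One way this is often handled (a hedged sketch, not tested here; the per-OS file names under splunk_repo and the `kernel` fact values are assumptions) is to pick the source name from a fact and keep the rest of the resources unchanged:
# Choose the per-OS tarball name based on the agent's kernel fact.
case $facts['kernel'] {
  'Linux': { $splunk_source = 'splunk_forwarder_linux.tar.gz' }
  'AIX':   { $splunk_source = 'splunk_forwarder_aix.tar.gz' }
  'SunOS': { $splunk_source = 'splunk_forwarder_solaris.tar.gz' }
  default: { fail("Unsupported kernel: ${facts['kernel']}") }
}

file { '/opt/splunk/splunk_forwarder.tar.gz':
  ensure => present,
  mode   => '0600',
  source => "puppet:///modules/splunk_repo/${splunk_source}",
  before => Exec['unpack_splunk_forwarder.tar.gz'],
}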
↧
vardir translation problem
Hello,
Just wondering if it's possible to reference the puppet agent's vardir in a puppet class instead of it defaulting to the master's value? We have a puppet class that creates a directory defined as follows:
"${settings::vardir}/dummy-dir"
However, when we include this class on our agents, it translates vardir to the puppet server's value of /opt/puppetlabs/server/data/puppetserver which obviously doesn't exist on our agents.
The agent's vardir is /opt/puppetlabs/puppet/cache, but I'd prefer not to hardcode that value in the puppet class if there's a way around it.
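A hedged sketch of one workaround, assuming the puppetlabs-stdlib module is available on the agents (its puppet_vardir fact is resolved on the agent, so it reports the agent's own value rather than the server's settings):
# $settings::vardir is evaluated on the compiling server, so read a fact
# gathered on the agent instead of the server-side setting.
$agent_vardir = $facts['puppet_vardir']

file { "${agent_vardir}/dummy-dir":
  ensure => directory,
}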
Any ideas?
Thanks
↧
augeas: update xml value based on previous tag
Hi, I have a simple issue while updating an XML file, but I'm not able to find the solution after trying many things. I am trying to update a password based on the username, which is defined in the tag just before the password tag. My XML file is below:
<ns2:tokens>
  <token><name>user1</name><value>pass1</value></token>
  <token><name>user2</name><value>pass2</value></token>
  <token><name>user3</name><value>pass3</value></token>
</ns2:tokens>
Here, based on the username in the <name> tag, I need to update the password in the <value> tag. I have tried the following, but it's not working:
augeas { 'Update external.xml':
  incl    => '/tmp/external.xml',
  context => '/files/tmp/external.xml/ns2:tokens',
  lens    => "xml.lns",
  changes => [
    "set /files/external.xml/token/[name/#text='user2'/name]/value/#text newpass2",
  ],
}
Any idea?
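For reference, a hedged sketch of how that change is often written (element names are taken from the paths in the question; note that the lens name is case-sensitive and the change path is usually given relative to `context`):
augeas { 'Update external.xml':
  lens    => 'Xml.lns',
  incl    => '/tmp/external.xml',
  context => '/files/tmp/external.xml/ns2:tokens',
  changes => [
    # select the token whose <name> text is user2, then rewrite its <value> text
    "set token[name/#text='user2']/value/#text newpass2",
  ],
}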
↧
Passenger problem
Hello
I've installed the Puppet server, but I couldn't get Apache with Passenger working alongside it.
Apache works fine, but I think there is something wrong with Passenger. I also have no access to http://192.168.1.1:8140; I'm pretty sure the port is open (I checked it via telnet). However, Puppet works fine with puppetserver.
[ 2017-06-09 15:45:27.5508 32086/b607eb70 age/Cor/App/Implementation.cpp:304 ]: Could not spawn process for application /usr/share/puppet/rack/puppetmasterd: An error occurred while starting up the preloader.
Error ID: 4af67d38
Error details saved to: /tmp/passenger-error-pUovSo.html
Message from application: no such file to load -- rack (LoadError)
/usr/lib/ruby/site_ruby/1.8/rubygems/core_ext/kernel_require.rb:55:in `gem_original_require'
/usr/lib/ruby/site_ruby/1.8/rubygems/core_ext/kernel_require.rb:55:in `require'
/usr/local/rvm/gems/ruby-2.2.2/gems/passenger-5.1.4/src/ruby_supportlib/phusion_passenger/loader_shared_helpers.rb:430:in `activate_gem'
/usr/local/rvm/gems/ruby-2.2.2/gems/passenger-5.1.4/src/helper-scripts/rack-preloader.rb:102:in `preload_app'
/usr/local/rvm/gems/ruby-2.2.2/gems/passenger-5.1.4/src/helper-scripts/rack-preloader.rb:156
↧
Passenger fails on RHEL/CentOS 7
I've been beating my head against RHEL 7 (CentOS 7) trying to set up a puppet master. I'm using IPA and NSS for certificate management, and those problems appear to be solved.
I can run the puppet master daemon (Webrick default) and connect from an agent successfully, but passenger throws 500 exceptions and a diagnostic page.
I tried changing the config.ru to a simple "hello world" web app, and that runs successfully, so the hook through Apache to Passenger seems successful.
I suspected problems with systemd and the fact that passenger makes heavy use of the /tmp directory. I tracked down the httpd service file and set PrivateTmp=false, and confirmed that passenger stopped using the private /tmp directory. This had no effect on the fail.
Here is the stack trace reported on the passenger 500 page:
exit (SystemExit)
/usr/share/ruby/vendor_ruby/puppet/util.rb:493:in `exit'
/usr/share/ruby/vendor_ruby/puppet/util.rb:493:in `rescue in exit_on_fail'
/usr/share/ruby/vendor_ruby/puppet/util.rb:479:in `exit_on_fail'
/usr/share/ruby/vendor_ruby/puppet/application.rb:369:in `run'
/usr/share/ruby/vendor_ruby/puppet/util/command_line.rb:137:in `run'
/usr/share/ruby/vendor_ruby/puppet/util/command_line.rb:91:in `execute'
config.ru:35:in `block in '
/usr/local/share/gems/gems/rack-1.5.2/lib/rack/builder.rb:55:in `instance_eval'
/usr/local/share/gems/gems/rack-1.5.2/lib/rack/builder.rb:55:in `initialize'
config.ru:1:in `new'
config.ru:1:in `'
/usr/local/share/gems/gems/passenger-4.0.48/helper-scripts/rack-preloader.rb:112:in `eval'
/usr/local/share/gems/gems/passenger-4.0.48/helper-scripts/rack-preloader.rb:112:in `preload_app'
/usr/local/share/gems/gems/passenger-4.0.48/helper-scripts/rack-preloader.rb:158:in `'
/usr/local/share/gems/gems/passenger-4.0.48/helper-scripts/rack-preloader.rb:29:in `'
/usr/local/share/gems/gems/passenger-4.0.48/helper-scripts/rack-preloader.rb:28:in `'
I'd love some suggestions for troubleshooting this problem. My fallback is to ditch RHEL 7 for 6.5.
↧
hiera lookup not working at module layer
I am not getting why my Hiera code is not working, though everything seems all right. I am using Puppet 2017.2 and have configured Hiera 5. I have defined module-layer Hiera and removed the global Hiera file. Below are my code and the issue:
[root@puppetserver fileops]# ls
data examples Gemfile hiera.yaml manifests metadata.json Rakefile README.md spec
[root@puppetserver fileops]# cat hiera.yaml
---
version: 5
defaults:
  datadir: data
  data_hash: yaml_data
hierarchy:
  - name: "User details"
    path: "common.yaml"
[root@puppetserver fileops]#
Content of fileops/data/common.yaml
[root@puppetserver fileops]# cat data/common.yaml
---
fileops::myname: rajeev
code of manifest:
class fileops::name (
$myname,
){
notify {"My name is $myname":}
}
I am clueless why I'm getting the following error:
> Info: Retrieving pluginfacts
> Info: Retrieving plugin
> Info: Loading facts
> Error: Could not retrieve catalog from remote server: Error 500 on SERVER: Server Error: Evaluation Error: Error while evaluating a Resource Statement, Class[Fileops::Name]: expects a value for parameter 'myname' on node puppetserver
> Warning: Not using cache on failed catalog
> Error: Could not retrieve catalog; skipping run
Let me know if you notice anything I have missed. Otherwise I'll have to use Hiera 3.
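A hedged observation, offered as an assumption rather than a confirmed diagnosis: automatic parameter lookup builds its key from the full class name, so a parameter of class fileops::name is looked up as fileops::name::myname rather than fileops::myname. A minimal sketch:
# With this class (as in the question) ...
class fileops::name (
  $myname,
) {
  notify { "My name is ${myname}": }
}

# ... automatic parameter lookup searches the module's data for the key
#
#   fileops::name::myname
#
# so data/common.yaml would need (hypothetical change):
#
#   ---
#   fileops::name::myname: rajeev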
↧
how to start with puppet in RHOS
`puppet-master --version` gives the version as 3.6.2.
How can I check the list of agents registered with it?
Please provide a link to get started.
Also, how do I enable the Puppet GUI?
↧
unable to connect agent to puppet master
using command:
puppet agent -t
error is:
Error: Could not request certificate: Connection refused - connect(2)
Exiting; failed to retrieve certificate and waitforcert is disabled
On the agent server, I can see that /etc/hosts contains the puppet master hostname and IP:
177.177.177.10 master pinocchio.dev.mobistar.be pinocchio.puppetlabs.lro
↧
hiera interpolation
I have a module called mdwcfg which is used for Hiera configuration:
mdwcfg/hiera.yaml:
---
version: 5
defaults:
  datadir: data
  data_hash: yaml_data
hierarchy:
  - name: "wld-pil-03"
    path: "wld-pil-03.yaml"
  - name: "wld-pil-04"
    path: "wld-pil-04.yaml"
  - name: "jdk"
    path: "jdk.yaml"
  - name: "common"
    path: "common.yaml"
mdwcfg/data/wld-pil-04.yaml:
---
domain_env: 'pil'
domain_number: '04'
mdwcfg::domain_env: &domain_env 'pil'
mdwcfg::domain_number: &domain_number '04'
mdwcfg::domain_name: &domain_name "wld-%{mdwcfg::domain_env}-%{mdwcfg::domain_number}"
and I create a test file /tmp/kk.pp
include mdwcfg
$a = lookup('mdwcfg::domain_name')
notify{"a: ${a}":}
Puppet can't interpolate the variable mdwcfg::domain_name:
/opt/puppetlabs/bin/puppet apply /tmp/kk.pp --modulepath=/etc/puppetlabs/code/environments/production/modules/ --debug
....
Debug: Facter: resolving Xen facts.
Debug: Evicting cache entry for environment 'production'
Debug: Caching environment 'production' (ttl = 0 sec)
Debug: importing '/etc/puppetlabs/code/environments/production/modules/mdwcfg/manifests/init.pp' in environment production
Debug: Automatically imported mdwcfg from mdwcfg into production
Warning: Defining "data_provider": "hiera" in metadata.json is deprecated. It is ignored since a 'hiera.yaml' with version >= 5 is present
(in /etc/puppetlabs/code/environments/production/modules/mdwcfg/metadata.json)
Warning: Module 'mdwcfg': Hierarchy entry "wld-pil-04" must use keys qualified with the name of the module
Warning: Module 'mdwcfg': Hierarchy entry "wld-pil-04" must use keys qualified with the name of the module
Warning: Undefined variable 'domain_env';
(file & line not available)
Warning: Undefined variable 'domain_number';
(file & line not available)
Debug: Automatic Parameter Lookup of 'mdwcfg::domain_name
Searching for "lookup_options"
Global Data Provider (hiera configuration version 5)
No such key: "lookup_options"
Module "mdwcfg" Data Provider (hiera configuration version 5)
Using configuration "/etc/puppetlabs/code/environments/production/modules/mdwcfg/hiera.yaml"
Merge strategy hash
Hierarchy entry "wld-pil-03"
Path "/etc/puppetlabs/code/environments/production/modules/mdwcfg/data/wld-pil-03.yaml"
Original path: "wld-pil-03.yaml"
No such key: "lookup_options"
Hierarchy entry "wld-pil-04"
Path "/etc/puppetlabs/code/environments/production/modules/mdwcfg/data/wld-pil-04.yaml"
Original path: "wld-pil-04.yaml"
No such key: "lookup_options"
Hierarchy entry "jdk"
Path "/etc/puppetlabs/code/environments/production/modules/mdwcfg/data/jdk.yaml"
Original path: "jdk.yaml"
No such key: "lookup_options"
Hierarchy entry "common"
Path "/etc/puppetlabs/code/environments/production/modules/mdwcfg/data/common.yaml"
Original path: "common.yaml"
No such key: "lookup_options"
Searching for "mdwcfg::domain_name"
Global Data Provider (hiera configuration version 5)
No such key: "mdwcfg::domain_name"
Module "mdwcfg" Data Provider (hiera configuration version 5)
Using configuration "/etc/puppetlabs/code/environments/production/modules/mdwcfg/hiera.yaml"
Hierarchy entry "wld-pil-03"
Path "/etc/puppetlabs/code/environments/production/modules/mdwcfg/data/wld-pil-03.yaml"
Original path: "wld-pil-03.yaml"
No such key: "mdwcfg::domain_name"
Hierarchy entry "wld-pil-04"
Path "/etc/puppetlabs/code/environments/production/modules/mdwcfg/data/wld-pil-04.yaml"
Original path: "wld-pil-04.yaml"
Interpolation on "wld-%{mdwcfg::domain_env}-%{mdwcfg::domain_number}"
Global Scope
Global Scope
Found key: "mdwcfg::domain_name" value: "wld--"
Debug: Lookup of 'mdwcfg::domain_name'
Searching for "lookup_options"
Global Data Provider (hiera configuration version 5)
No such key: "lookup_options"
Module "mdwcfg" Data Provider (hiera configuration version 5)
Using configuration "/etc/puppetlabs/code/environments/production/modules/mdwcfg/hiera.yaml"
Merge strategy hash
Hierarchy entry "wld-pil-03"
Path "/etc/puppetlabs/code/environments/production/modules/mdwcfg/data/wld-pil-03.yaml"
Original path: "wld-pil-03.yaml"
No such key: "lookup_options"
Hierarchy entry "wld-pil-04"
Path "/etc/puppetlabs/code/environments/production/modules/mdwcfg/data/wld-pil-04.yaml"
Original path: "wld-pil-04.yaml"
No such key: "lookup_options"
Hierarchy entry "jdk"
Path "/etc/puppetlabs/code/environments/production/modules/mdwcfg/data/jdk.yaml"
Original path: "jdk.yaml"
No such key: "lookup_options"
Hierarchy entry "common"
Path "/etc/puppetlabs/code/environments/production/modules/mdwcfg/data/common.yaml"
Original path: "common.yaml"
No such key: "lookup_options"
Searching for "mdwcfg::domain_name"
Global Data Provider (hiera configuration version 5)
No such key: "mdwcfg::domain_name"
Module "mdwcfg" Data Provider (hiera configuration version 5)
Using configuration "/etc/puppetlabs/code/environments/production/modules/mdwcfg/hiera.yaml"
Hierarchy entry "wld-pil-03"
Path "/etc/puppetlabs/code/environments/production/modules/mdwcfg/data/wld-pil-03.yaml"
Original path: "wld-pil-03.yaml"
No such key: "mdwcfg::domain_name"
Hierarchy entry "wld-pil-04"
Path "/etc/puppetlabs/code/environments/production/modules/mdwcfg/data/wld-pil-04.yaml"
Original path: "wld-pil-04.yaml"
Interpolation on "wld-%{mdwcfg::domain_env}-%{mdwcfg::domain_number}"
Global Scope
Global Scope
Found key: "mdwcfg::domain_name" value: "wld--"
Notice: Compiled catalog for vm-lab-linux-1.msc.es in environment production in 0.15 seconds
Debug: Creating default schedules
Debug: Loaded state in 0.00 seconds
Debug: Loaded state in 0.00 seconds
Debug: Loaded transaction store file in 0.00 seconds
Info: Applying configuration version '1497262320'
Notice: a: wld--
Notice: /Stage[main]/Main/Notify[a: wld--]/message: defined 'message' as 'a: wld--'
Debug: /Stage[main]/Main/Notify[a: wld--]: The container Class[Main] will propagate my refresh event
Debug: Class[Main]: The container Stage[main] will propagate my refresh event
Debug: Finishing transaction 31423700
Debug: Storing state
Debug: Stored state in 0.01 seconds
Notice: Applied catalog in 0.05 seconds
Debug: Applying settings catalog for sections reporting, metrics
Debug: Finishing transaction 34689140
Debug: Received report to process from vm-lab-linux-1.msc.es
Debug: Evicting cache entry for environment 'production'
Debug: Caching environment 'production' (ttl = 0 sec)
Debug: Processing report from vm-lab-linux-1.msc.es with processor Puppet::Reports::Store
Am I doing something wrong?
thanks in advance,
Raúl
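A hedged sketch of one likely cause (an assumption, though the "Undefined variable" warnings above point the same way): `%{mdwcfg::domain_env}` interpolates a Puppet *variable*, which is not set while the module's own data is being read, whereas other Hiera keys can be interpolated with the `lookup()` interpolation function:
# Hypothetical change to wld-pil-04.yaml, composing the name from the
# other two Hiera keys instead of from Puppet variables:
#
#   mdwcfg::domain_name: "wld-%{lookup('mdwcfg::domain_env')}-%{lookup('mdwcfg::domain_number')}"
#
# The original test manifest should then report the composed value:
include mdwcfg
$a = lookup('mdwcfg::domain_name')
notify { "a: ${a}": }  # expected: a: wld-pil-04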
↧
Require on class not working, class partly applied.
Hi.
I've run into a strange issue with class requirements. Puppet needs to add a private repository before it tries to install etcd. I'm using the role/profile setup, and the classes are defined as follows:
class roles::controlplane {
include profiles::controlplane
}
And in that class I include:
class profiles::controlplane {
include profiles::repo
class {'::etcd':
require => Class['::profiles::repo']
}
}
The etcd class only has a package resource. The repo class is as follows (edited for brevity). I also added the cowsay package for debugging (more on that later).
class profiles::repo {
  package { 'cowsay':
    ensure => 'present',
  }
  # Add APT repository
  apt::source { 'repo-ci':
    comment  => 'Repo CI stable repository',
    location => "http://repo.domain.tld/repo-${lsbdistcodename}",
    release  => "repo-${lsbdistcodename}",
    repos    => 'main',
    key      => 'XXXXXX',
  }
}
When I do a Puppet run, it fails the first time because it tries to install the package before the private apt repo has been added. The debug output shows that the cowsay package is installed before the etcd package, so the class seems to be included correctly, but for some reason the apt repo isn't added before the etcd package.
What am I missing?
**Version info**
*puppet*
puppet-agent 1.10.0-1jessie
puppetlabs-release-pc1 1.1.0-2jessie
*apt module*
puppetlabs-apt (= 4.0.0)
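A hedged sketch of one possible explanation (an assumption based on how puppetlabs-apt typically behaves, not something confirmed in the post): apt::source pulls in the apt::update class, and classes declared inside profiles::repo are not *contained* by it, so `require => Class['profiles::repo']` does not guarantee that `apt-get update` runs before the etcd package is installed. A common workaround:
class profiles::repo {
  package { 'cowsay':
    ensure => 'present',
  }

  apt::source { 'repo-ci':
    comment  => 'Repo CI stable repository',
    location => "http://repo.domain.tld/repo-${lsbdistcodename}",
    release  => "repo-${lsbdistcodename}",
    repos    => 'main',
    key      => 'XXXXXX',
  }

  # Make sure the repository metadata is refreshed before any package
  # managed on this node gets installed.
  Class['apt::update'] -> Package <| |>
}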
↧
Testing Puppet code and deployment
I am new to the Puppet world. For my project we have written manifests to configure Azure VMs accordingly, but the customer is expecting some testing tools for both the manifests and the deployment. Has anyone used testing tools like RSpec, puppet-lint, etc.? Please suggest a good testing tool for Puppet. The Puppet parser will do the basic syntax check, but I am looking for a tool that covers all sorts of testing.
↧
I'm confused about scope. Shouldn't this work?
I have a class
class test
{
notify { "The value of \$foo is $foo" : }
}
And I have a node
node agent1
{
$foo="bar"
include test
}
And when puppet runs, I get
> notice: The value of $foo is bar
Cool. Then I change my node to:
class profiles::myprofile
{
include test
}
node agent1
{
$foo="bar"
class { 'profiles::myprofile' : }
}
And I still get
> notice: The value of $foo is bar
So far so good. But when I change my node to this:
class profiles::myprofile
{
$foo="bar"
include test
}
node agent1
{
class { 'profiles::myprofile' : }
}
I end up with
> notice: The value of $foo is
So ... what am I not understanding about scope, that a variable can get passed from node to profile to class, but not from the profile to the class?
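For comparison, a hedged sketch (not from the original post) of the usual way to hand a value from a profile to another class: class-local variables are only visible inside their own class, so the explicit route is a class parameter:
class test (
  $foo = 'default',
) {
  notify { "The value of \$foo is ${foo}": }
}

class profiles::myprofile {
  # pass the value explicitly instead of relying on scope
  class { 'test':
    foo => 'bar',
  }
}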
↧
Autosigning puppet master certs in a multi-master environment
I'm running a multi-master setup with a centralized CA. The full stack runs in a docker swarm with containers running puppetserver as strictly non-CA masters, and a puppetserver-based CA with a mounted file system that contains all of the certificates. Everything is working great, but I'd like to automate the signing of master certificates so that scaling this system is completely automated. The problem is, my masters all contain DNS alt names, so the CA refuses to autosign their certificates despite the domains being on the whitelist. Has anyone tried a similar setup where certificate signing is automated for masters only? Is there some workaround for this? I've explored the puppet dockerhub, but it doesn't seem to contain much information on multi-master docker setups.
↧
easiest way to determine how a class is assigned to a node
I am trying to identify how a given Puppet class is assigned/matched to a node. I can't find a "classification" that has the class.
Thanks!
sb
↧
Overwriting Puppet params.pp variables with Hiera
I want to overwrite some default variables in the params.pp file of a Puppet module. The section I am looking to change is an if statement:
if $::osfamily == 'RedHat' {
  $defaultsiteconfig = {
    'appname'      => "${app_appname}",
    'Organization' => "${app_organization}",
    'WebPath'      => "/opt/${package}${package_maj_version}",
    'WebPort'      => "${app_web_port}",
    'DatabaseType' => "${app_database_type}",
  }
}
The variables:
${package}
${package_maj_version}
${app_web_port}
${app_database_type}
are working fine since they are being pulled from higher up in the params.pp file. The same is happening with:
${app_appname}
${app_organization}
(They come up blank since they are set to _undef_.) But I want them to be pulled from Hiera instead:
app::app_appname: "example.org"
app::app_organization: "example.org"
Is there something in Puppet that prevents this from occurring in the params.pp file since other manifests can pull those hiera variables without issue?
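A hedged guess at the cause, sketched under the assumption that the module is called `app` and follows the params.pp pattern: Hiera's automatic data binding only applies to *class parameters*, so keys like app::app_appname never override plain variables assigned inside params.pp; they do take effect once the values are exposed as parameters of the class that inherits app::params:
class app (
  $app_appname      = $app::params::app_appname,
  $app_organization = $app::params::app_organization,
) inherits app::params {
  # With these as class parameters, the Hiera keys app::app_appname and
  # app::app_organization are looked up automatically and override the
  # undef defaults coming from params.pp.
}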
↧