I have a Mac .dmg with multiple .app installers on it. These .app files are executable installers (similar to .pkg files) and should not be copied into the Applications directory but executed instead. Is there a provider that will handle this? It seems the appdmg provider uses ditto to copy the .app to the Applications directory, but I need the .app to be executed... Maybe this is the software company's problem (in that they are distributing the software incorrectly), but I wanted to see whether anyone else has encountered this or already found a solution.
I was thinking that the appdmg provider could be modified to take another parameter that tells it to execute the .app instead of ditto-ing it... but I wasn't sure how best to handle this case.
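For what it's worth, the workaround I had in mind in the meantime is just an exec that mounts the dmg and runs the .app, rather than a new provider. A rough, untested sketch; the dmg path, volume name, installer name and the creates guard are all placeholders:

exec { 'run-vendor-installer':
  # mount the dmg, run the installer app and wait for it to exit, then detach
  command  => '/usr/bin/hdiutil attach /tmp/vendor.dmg -nobrowse -quiet && /usr/bin/open -W "/Volumes/Vendor/Install Vendor.app" && /usr/bin/hdiutil detach /Volumes/Vendor -quiet',
  provider => shell,
  # only run if the installed payload isn't there yet (placeholder path)
  creates  => '/Applications/Vendor.app',
  timeout  => 1800,
}

Whether the installer .app will actually run unattended like that is another question, of course.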
Thanks,
Paul
↧
appdmg or pkgdmg modification?
↧
fileserver to agent
I want my agent to download an RPM and execute "rpm -Uvh" on it.
My master's fileserver.conf has
[fs]
path /etc/storage/
allow *
and /etc/storage has:
storage]$ ls /etc/storage/
jdk-8u45-linux-x64.rpm my.cnf
My .pp has:
file { "/home/oracle/Downloads/jdk-8u45-linux-x64.rpm":
owner => "root",
source => "puppet:///fs/jdk-8u45-linux-x64.rpm",
}
exec { 'java':
command => "sudo rpm -Uvh jdk-8u45-linux-x64.rpm",
path => "${srcdir}/",
logoutput => "on_failure",
} ->
The agent errors as follows:
jira ~]$ sudo puppet agent --verbose --no-daemonize --onetime
sudo: /etc/sudoers.d/proxy is world writable
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Loading facts
Info: Caching catalog for jira.oracle
Info: Applying configuration version '1432667489'
Error: Could not set 'file' on ensure: No such file or directory - /home/oracle/Downloads/jdk-8u45-linux-x64.rpm20150526-3191-1cphghf.lock at 16:/etc/puppet/environments/production/manifests/init.pp
Error: Could not set 'file' on ensure: No such file or directory - /home/oracle/Downloads/jdk-8u45-linux-x64.rpm20150526-3191-1cphghf.lock at 16:/etc/puppet/environments/production/manifests/init.pp
Wrapped exception:
No such file or directory - /home/oracle/Downloads/jdk-8u45-linux-x64.rpm20150526-3191-1cphghf.lock
Error: /Stage[main]/Fishcruc/File[/home/oracle/Downloads/jdk-8u45-linux-x64.rpm]/ensure: change from absent to file failed: Could not set 'file' on ensure: No such file or directory - /home/oracle/Downloads/jdk-8u45-linux-x64.rpm20150526-3191-1cphghf.lock at 16:/etc/puppet/environments/production/manifests/init.pp
Error: Could not find command 'sudo'
Error: /Stage[main]/Fishcruc/Exec[java]/returns: change from notrun to 0 failed: Could not find command 'sudo'
Notice: /Stage[main]/Fishcruc/File[/etc/environment]: Dependency Exec[java] has failures: true
Warning: /Stage[main]/Fishcruc/File[/etc/environment]: Skipping because of failed dependencies
Notice: /Stage[main]/Fishcruc/Wget::Fetch[ficr]/Exec[wget-ficr]: Dependency Exec[java] has failures: true
Warning: /Stage[main]/Fishcruc/Wget::Fetch[ficr]/Exec[wget-ficr]: Skipping because of failed dependencies
Notice: /Stage[main]/Fishcruc/Exec[ficr]: Dependency Exec[java] has failures: true
Warning: /Stage[main]/Fishcruc/Exec[ficr]: Skipping because of failed dependencies
Notice: /Stage[main]/Fishcruc/Exec[start]: Dependency Exec[java] has failures: true
Warning: /Stage[main]/Fishcruc/Exec[start]: Skipping because of failed dependencies
Notice: Finished catalog run in 57.12 seconds
What am I doing wrong here?
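For comparison, here is a rough sketch of the pattern I was aiming for, in case it helps frame the question. Untested; it assumes the agent runs as root (so no sudo), and the rpm package name in the unless check is a guess:

$rpm = '/home/oracle/Downloads/jdk-8u45-linux-x64.rpm'

file { $rpm:
  ensure => file,
  owner  => 'root',
  source => 'puppet:///fs/jdk-8u45-linux-x64.rpm',
}

exec { 'java':
  # full path to rpm, so neither path nor sudo is needed
  command   => "/bin/rpm -Uvh ${rpm}",
  # skip the install if the package is already there ('jdk1.8.0_45' is a guess)
  unless    => '/bin/rpm -q jdk1.8.0_45',
  logoutput => 'on_failure',
  require   => File[$rpm],
}

A package resource with provider => rpm and source => $rpm would probably be more idiomatic than the exec, but the shape is the same.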
↧
↧
mount bind option status check
mounting a "bind" mount works fine in puppet, but puppet doesn't think that it is mounted on further puppet runs and mounts it again.
Thus we ned up with something like this:
[root@epay1 prod ~]# mount|grep ofapp
/corp_invoice/JPM_DirectDebit on /mnt/sftp/ofapp/DD/out type none (rw,bind)
/corp_invoice/JPM_DirectDebit on /mnt/sftp/ofapp/DD/out type none (rw,bind)
/corp_invoice/JPM_DirectDebit on /mnt/sftp/ofapp/DD/out type none (rw,bind)
/corp_invoice/JPM_DirectDebit on /mnt/sftp/ofapp/DD/out type none (rw,bind)
/corp_invoice/JPM_DirectDebit on /mnt/sftp/ofapp/DD/out type none (rw,bind)
/corp_invoice/JPM_DirectDebit on /mnt/sftp/ofapp/DD/out type none (rw,bind)
/corp_invoice/JPM_DirectDebit on /mnt/sftp/ofapp/DD/out type none (rw,bind)
/corp_invoice/JPM_DirectDebit on /mnt/sftp/ofapp/DD/out type none (rw,bind)
/corp_invoice/JPM_DirectDebit on /mnt/sftp/ofapp/DD/out type none (rw,bind)
and on puppet run it says:
Info: /Stage[main]/Epay_class::Realtime/Mount[/home/ofapp/DD/in]: Scheduling refresh of Mount[/home/ofapp/DD/in]
Notice: /Stage[main]/Epay_class::Realtime/Mount[/home/ofapp/DD/out]/ensure: current_value unmounted, should be mounted (noop)
How do I get Puppet to correctly detect that the mount is indeed mounted, so it doesn't try to remount it?
Here's how I'm mounting it:
mount { '/home/ofapp/DD/out':
  ensure  => mounted,
  fstype  => 'none',
  device  => '/corp_invoice/JPM_DirectDebit',
  atboot  => true,
  options => 'bind',
  require => [File['/mnt/sftp/ofapp/DD/out'], Mount['/corp_invoice']],
}
The client is CentOS 6, puppet-3.2.4-1.el6.noarch.
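In case it's useful context, the workaround I'm leaning towards (untested, and it sidesteps rather than fixes the mount provider) is to do the bind mount through an exec guarded by mountpoint, so repeat runs become a no-op; the paths are taken from the mount output above:

exec { 'bind-mount-dd-out':
  command => '/bin/mount --bind /corp_invoice/JPM_DirectDebit /mnt/sftp/ofapp/DD/out',
  # mountpoint exits 0 if the path is already a mount point, so this is idempotent
  unless  => '/bin/mountpoint -q /mnt/sftp/ofapp/DD/out',
  require => [File['/mnt/sftp/ofapp/DD/out'], Mount['/corp_invoice']],
}

That obviously doesn't manage the fstab entry the way the mount resource does, which is why I'd rather get the mount resource itself to detect the bind mount.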
↧
Quest 2 Task 3 Add Rule does not find matching node
Going through the Learning VM tutorial, I get to Quest 2 - The Power of Puppet, Task 3 - Create a node group.
The snippet in the tutorial shows the rule 'name is learning.puppetlabs.vm' matching 1 node. That does not happen for me; it is always 0. But after a while the 'pinned nodes' section below the rules lists 'learning.puppetlabs.vm'.
Is this normal?
I'm running PE 4.2.1
↧
Error: Could not request certificate: Find /puppet-ca/v1/certificate/ca?environment=production
Info: Creating a new SSL key for testhost.0
Debug: Creating new connection for https://oraclevm:8140
Error: Could not request certificate: Find /puppet-ca/v1/certificate/ca?environment=production&fail_on_404=true resulted in 404 with the message: Error 404
HTTP ERROR: 404
Problem accessing /puppet-ca/v1/certificate/ca. Reason:
    Not Found
Powered by Jetty://
Exiting; failed to retrieve certificate and waitforcert is disabled
↧
↧
RMM vs. Puppet
Hey everyone,
This may be a basic question that's already covered by a sticky; if so, please point me to it! I understand Puppet is a deployment automation tool, but is there any overlap or possible duplication between Puppet and RMM tools like Kaseya, LabTech, N-able, etc.?
↧
how to use custom facts in hiera hierarchy
I defined my hierarchy like this:
---
:backends:
  - yaml
  - puppet
:hierarchy:
  - node/%{fqdn}
  - role/%{myrole}
  - subenvironment/%{subenv}
  - domain/%{domain}
  - common
:yaml:
  :datadir: /etc/puppetlabs/hiera
If I define classes in the node or domain tree, that works, but not in the role or subenvironment tree. It seems hiera does not know about the custom facts `%{subenv}` and `%{myrole}` for some reason.
On the agent node I can see these custom facts with `facter -p`, but not with the plain `facter` command.
When I run `puppet agent -t` I can see :
info: Loading facts in /var/opt/lib/pe-puppet/lib/facter/subenv.rb
Info: Loading facts in /var/opt/lib/pe-puppet/lib/facter/pe_version.rb
Info: Loading facts in /var/opt/lib/pe-puppet/lib/facter/concat_basedir.rb
Info: Loading facts in /var/opt/lib/pe-puppet/lib/facter/myrole.rb
etc ...
But hiera does not read the role dir tree?!
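Two things I'm going to try, in case anyone can confirm or rule them out: fully qualifying the fact names in the hierarchy, and restarting the puppet master after touching hiera.yaml (as far as I know it only reads hiera.yaml at startup). The qualified hierarchy would look like:

:hierarchy:
  - node/%{::fqdn}
  - role/%{::myrole}
  - subenvironment/%{::subenv}
  - domain/%{::domain}
  - common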
please help
Milan
↧
Creating custom facts: how to access structured facts Facter.value ?
Reading through [Facter 3.1 docs](https://docs.puppetlabs.com/facter/3.1/custom_facts.html) there is a section talking about using other facts by calling Facter.value. [Here is the exact section](https://docs.puppetlabs.com/facter/3.1/custom_facts.html#using-other-facts).
Here's the relevant line:
distid = Facter.value(:lsbdistid)
My question is how to access a structured fact in Facter.value, for example if I wanted to use the 'ec2_metadata->ami_id' fact in that line instead of lsbdistid.
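For concreteness, this is what I'm guessing it should look like (the fact name my_ami_id is just an example, and I'm assuming Facter.value returns the structured fact as a plain Ruby hash):

Facter.add(:my_ami_id) do
  setcode do
    ec2 = Facter.value(:ec2_metadata)   # structured fact, hopefully a Hash
    ec2['ami_id'] if ec2                # index into it like any other hash
  end
end

Is that the supported way, or is there a dotted-name form of Facter.value for this?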
↧
Install Security Updates via Puppet Master on Agents
Hi
We have a Puppet master/agent setup working fine.
We intend to do central security updates on all agents connected to the Puppet master.
The MCollective plugin is also installed, and we tried the following two approaches:
1.
Restart the puppet service on the agent via MCO from the master. When puppet is restarted it syncs with the master for its settings, and on the master (site.pp) we define two commands to execute (apt-get update & aptitude safe-upgrade).
cmd: mco rpc service restart service=puppet -S hostname=nodename
Result: puppet on the node gets restarted, and apt-get update & aptitude safe-upgrade run in the background.
Issue: on the Puppet master we only see that the puppet service is running. We have no clue whether those commands (apt-get update & aptitude safe-upgrade) ran successfully or not.
2.
Run a command on the puppet agent from the Puppet master via MCO.
CMD: mco rpc nrpe runcommand command=puppet_restart -I node -v
puppet_restart is defined as an NRPE command on the agent that contains "puppet agent -t".
Result: the agent syncs with the master and runs the commands in the background.
Issue: since it is an NRPE command and apt-get/aptitude take time, we get a time-out response on the Puppet master end, meaning we are not sure whether the commands executed successfully or not.
Is there any way to install security updates on all connected agents while staying on the Puppet master?
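One direction we're considering, in case it's on the right track: instead of driving apt over MCO/NRPE, let each agent run the upgrade itself as part of its normal catalog, so success or failure shows up in the agent's report on the master. A rough, untested sketch; as written it runs on every agent run, so a schedule or an onlyif check would be needed to tame that:

exec { 'apt-update':
  command => '/usr/bin/apt-get update',
}

exec { 'safe-upgrade':
  command   => '/usr/bin/aptitude -y safe-upgrade',
  timeout   => 3600,
  logoutput => true,
  require   => Exec['apt-update'],
}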
Any help or suggestion will be appreciated
Thanks
↧
↧
Using puppet on Vagrant
I have a puppet master which manages a bunch of our servers. I'm trying to set up some Vagrant boxes for development, but I'd like to do it without having to worry about signing certs on the master. What's the best way to get a regular Puppet setup to work masterless for Vagrant? I want to make it easy for developers to spin up a new server without needing me to sign their certs.
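To make the question concrete, this is the sort of thing I had in mind: the built-in Vagrant puppet provisioner running puppet apply against a checkout of our manifests and modules, so there is no master and no certs to sign. Paths and box name are placeholders:

Vagrant.configure("2") do |config|
  config.vm.box = "puppetlabs/debian-7.8-64-puppet"   # any box with the agent preinstalled

  config.vm.provision "puppet" do |puppet|
    puppet.manifests_path = "puppet/manifests"   # checkout of the puppet repo
    puppet.manifest_file  = "site.pp"
    puppet.module_path    = "puppet/modules"
  end
end

What I'm unsure about is the best way to keep that checkout in sync with what the master serves.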
↧
Learning VM behind proxy?
Hello!
I'm learning with the current Learning VM (Puppet 4.8.1). I deployed it at my company and configured the proxy:
/etc/puppetlabs/puppet/puppet.conf
[main]
(snip)
http_proxy_host = *myproxy*
http_proxy_port = 8080
And also in /root/.bashrc
export http_proxy=http://*myproxy*:8080
export https_proxy=http://*myproxy*:8080
This allowed me to download the Graphite module for the "Power of Puppet" quest, but when I try to run the agent:
root@learning ~]# puppet agent --test
Warning: Unable to fetch my node definition, but the agent run will continue:
Warning: 503 "Service Unavailable"
Info: Retrieving pluginfacts
Error: /File[/opt/puppetlabs/puppet/cache/facts.d]: Failed to generate additional resources using 'eval_generate': 503 "Service Unavailable"
Error: /File[/opt/puppetlabs/puppet/cache/facts.d]: Could not evaluate: Could not retrieve file metadata for puppet:///pluginfacts: 503 "Service Unavailable"
Info: Retrieving plugin
Error: /File[/opt/puppetlabs/puppet/cache/lib]: Failed to generate additional resources using 'eval_generate': 503 "Service Unavailable"
Error: /File[/opt/puppetlabs/puppet/cache/lib]: Could not evaluate: Could not retrieve file metadata for puppet:///plugins: 503 "Service Unavailable"
Info: Loading facts
Error: Could not retrieve catalog from remote server: 503 "Service Unavailable"
Warning: Not using cache on failed catalog
Error: Could not retrieve catalog; skipping run
Error: Could not send report: 503 "Service Unavailable"
I tried without the proxy but I get errors too when trying to connect to puppetlabs on the unusual port 8140. I'm pretty sure that port isn't allowed...
How can I manage that?
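One thing I plan to try, though I'm not sure it's the right fix: the agent on the Learning VM talks to its own local master on port 8140, so that traffic shouldn't go through the corporate proxy at all. My guess is to scope the proxy settings to the user run mode (which is what puppet module uses for Forge downloads) instead of [main]:

/etc/puppetlabs/puppet/puppet.conf
[main]
(snip)
[user]
http_proxy_host = *myproxy*
http_proxy_port = 8080

and to unset http_proxy/https_proxy in the shell (or at least exclude learning.puppetlabs.vm) before running puppet agent --test, in case the agent picks those environment variables up too.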
Thank you!!
↧
librarian generated stdlib module install fails
Hi,
I'm running puppetserver 2.1.1 with puppet 4.2.1 on vagrant puppetlabs/debian-7.8-64-puppet
I'm using librarian-puppet (v2.2.1 on Ruby 1.9.3) to manage modules.
My environment is called 'local' and is defined in the puppet.conf [Agent] section.
Currently the Vagrant box has direct internet access.
I've added stdlib to the Puppetfile, and when I run librarian-puppet I get an error message. It tells me to check whether the puppet module command below works, which it doesn't.
This is the puppet module command output. Can someone dissect why this is failing?
root@puppetserver:/etc/puppetlabs/code/environments/local# puppet module install \
  --debug \
  --version 4.9.0 \
  --target-dir /etc/puppetlabs/code/environments/local/.tmp/librarian/cache/source/puppet/forge/forgeapi_puppetlabs_com/puppetlabs-stdlib/4.9.0 \
  --module_repository https://forgeapi.puppetlabs.com \
  --modulepath /etc/puppetlabs/code/environments/local/.tmp/librarian/cache/source/puppet/forge/forgeapi_puppetlabs_com/puppetlabs-stdlib/4.9.0 \
  --module_working_dir /etc/puppetlabs/code/environments/local/.tmp/librarian/cache/source/puppet/forge/forgeapi_puppetlabs_com/puppetlabs-stdlib/4.9.0 \
  --ignore-dependencies puppetlabs-stdlib
Debug: Runtime environment: puppet_version=4.2.1, ruby_version=2.1.6, run_mode=user, default_encoding=UTF-8
Notice: Preparing to install into /etc/puppetlabs/code/environments/local/.tmp/librarian/cache/source/puppet/forge/forgeapi_puppetlabs_com/puppetlabs-stdlib/4.9.0 ...
Notice: Downloading from https://forgeapi.puppetlabs.com ...
Debug: HTTP GET https://forgeapi.puppetlabs.com/v3/releases?module=puppetlabs-stdlib
Debug: Evicting cache entry for environment 'production'
Debug: Caching environment 'production' (ttl = 0 sec)
Debug: Failed to load library 'pe_license' for feature 'pe_license'
Debug: HTTP GET https://forgeapi.puppetlabs.com/v3/releases?module=puppetlabs-stdlib&limit=20&offset=20
Debug: Failed to load library 'pe_license' for feature 'pe_license'
Debug: HTTP GET https://forgeapi.puppetlabs.com/v3/releases?module=puppetlabs-stdlib&limit=20&offset=40
Debug: Failed to load library 'pe_license' for feature 'pe_license'
Info: Resolving dependencies ...
Info: Preparing to install ...
Debug: HTTP GET https://forgeapi.puppetlabs.com/v3/files/puppetlabs-stdlib-4.9.0.tar.gz
Debug: Failed to load library 'pe_license' for feature 'pe_license'
Debug: Executing: 'gzip -dc /etc/puppetlabs/code/environments/local/.tmp/librarian/cache/source/puppet/forge/forgeapi_puppetlabs_com/puppetlabs-stdlib/4.9.0/cache/puppetlabs-stdlib20151001-3394-jycfat | tar xof -'
Error: Could not extract contents of module archive: Execution of 'gzip -dc /etc/puppetlabs/code/environments/local/.tmp/librarian/cache/source/puppet/forge/forgeapi_puppetlabs_com/puppetlabs-stdlib/4.9.0/cache/puppetlabs-stdlib20151001-3394-jycfat | tar xof -' returned 2: tar: puppetlabs-stdlib-4.9.0/spec/fixtures/modules/stdlib/manifests: Cannot mkdir: Protocol error
tar: puppetlabs-stdlib-4.9.0/spec/acceptance/nodesets/ubuntu-server-10044-x64.yml: Cannot open: Operation not permitted
tar: puppetlabs-stdlib-4.9.0/spec/acceptance/nodesets/ubuntu-server-12042-x64.yml: Cannot open: Operation not permitted
tar: puppetlabs-stdlib-4.9.0/spec/acceptance/nodesets/ubuntu-server-1404-x64.yml: Cannot open: Operation not permitted
tar: puppetlabs-stdlib-4.9.0/spec/acceptance/nodesets/windows-2008r2-x86_64.yml: Cannot open: Operation not permitted
tar: puppetlabs-stdlib-4.9.0/spec/acceptance/nodesets/windows-2012r2-x86_64.yml: Cannot open: Operation not permitted
tar: puppetlabs-stdlib-4.9.0/lib/puppet/parser/functions/defined_with_params.rb: Cannot open: Operation not permitted
tar: puppetlabs-stdlib-4.9.0/lib/puppet/parser/functions/delete_undef_values.rb: Cannot open: Operation not permitted
tar: puppetlabs-stdlib-4.9.0/lib/puppet/parser/functions/has_interface_with.rb: Cannot open: Operation not permitted
tar: puppetlabs-stdlib-4.9.0/lib/puppet/parser/functions/is_function_available.rb: Cannot open: Operation not permitted
tar: puppetlabs-stdlib-4.9.0/lib/puppet/parser/functions/join_keys_to_values.rb: Cannot open: Operation not permitted
tar: puppetlabs-stdlib-4.9.0/lib/puppet/parser/functions/load_module_metadata.rb: Cannot open: Operation not permitted
tar: puppetlabs-stdlib-4.9.0/lib/puppet/parser/functions/validate_absolute_path.rb: Cannot open: Operation not permitted
tar: puppetlabs-stdlib-4.9.0/lib/puppet/parser/functions/validate_ipv4_address.rb: Cannot open: Operation not permitted
tar: puppetlabs-stdlib-4.9.0/lib/puppet/parser/functions/validate_ipv6_address.rb: Cannot open: Operation not permitted
tar: Exiting with failure status due to previous errors
Error: Try 'puppet help module install' for usage
Puppetfile contains:
#!/usr/bin/env ruby
#^syntax detection
forge "https://forgeapi.puppetlabs.com"
# use dependencies defined in metadata.json
#metadata
# use dependencies defined in Modulefile
# modulefile
#shared
mod 'puppetlabs-apache', '1.6.0'
mod 'puppetlabs-apt'
mod 'puppetlabs-concat', '1.2.4'
mod 'puppetlabs-firewall', '1.7.1'
mod 'puppetlabs-inifile', '1.4.2'
mod 'puppetlabs-postgresql', '4.6.0'
mod 'puppetlabs-puppetdb', '5.0.0'
mod 'spotify-puppetexplorer', '1.0.1'
mod 'puppetlabs-stdlib', '4.9.0'
#mod 'elasticsearch-elasticsearch', '0.9.9'
mod 'puppetlabs-java', '1.4.1'
mod 'zuinnote-oraclejdk8', '1.0.1'
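A test I'm planning, in case a Vagrant synced folder is the culprit (the "Cannot mkdir: Protocol error" / "Operation not permitted" messages look like the kind of thing vboxsf produces): re-run the same puppet module install, but with the target/working dirs moved off /etc/puppetlabs/code to somewhere local, if that path happens to live on a synced folder. The /tmp paths are placeholders:

mkdir -p /tmp/stdlib-test /tmp/stdlib-test-work

puppet module install \
  --debug \
  --version 4.9.0 \
  --target-dir /tmp/stdlib-test \
  --module_repository https://forgeapi.puppetlabs.com \
  --modulepath /tmp/stdlib-test \
  --module_working_dir /tmp/stdlib-test-work \
  --ignore-dependencies puppetlabs-stdlib

If that extracts cleanly, the problem would be the filesystem the .tmp dir sits on, not librarian or the module itself.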
↧
puppet file server and custom mount issue
This is more of a question about the custom mount point, along the same lines as the above.
I have the following on the puppet master
[service_misc]
path /opt/service/scripts/misc
allow *
----------
On the puppet agent I run the command
sudo puppet apply --write-catalog-summary -e "file {'/home/virtual/test.sh': ensure => file, recurse => remote, owner => 'root', group => 'root', source => 'puppet:///service_misc/test.sh', mode => 755, replace => 'yes',}"
I see the following error
Error: /Stage[main]/Main/File[/home/virtual/test.sh]: Could not evaluate: Could not retrieve information from environment production source(s) puppet:///service_misc/test.sh
Not sure what I am doing wrong.
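For reference, the variant I plan to try next (a guess, not a confirmed fix): since puppet apply compiles locally, I suspect the bare puppet:/// URL never reaches the master's [service_misc] mount, so I'll either spell the master out in the source URL (puppetmaster.example.com is a placeholder for the real name) or run the same resource through puppet agent -t instead:

file { '/home/virtual/test.sh':
  ensure  => file,
  owner   => 'root',
  group   => 'root',
  mode    => '0755',
  source  => 'puppet://puppetmaster.example.com/service_misc/test.sh',
  replace => true,
}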
-Narahari
↧
↧
Compatibility Issues: Between Oracle Solaris11/Puppet( v3.6.2 ) and OracleLinux6/Puppet-Enterprise (4.2.1 )
**Compatibility Issues**: Oracle Solaris 11 / Puppet (v3.6.2) does not structurally line up with Oracle Linux 6 / Puppet Enterprise (4.2.1).
As an example: why is there no conf.d directory in the Solaris 11 Puppet (v3.6.2) directory structure?
**OracleLinux6**:
/etc/puppetlabs/nginx/conf.d
/etc/puppetlabs/puppetserver/conf.d
/etc/puppetlabs/puppetdb/conf.d
/etc/puppetlabs/console-services/conf.d
↧
PE 2015.2 - new env ignored
Guys,
I've added a new env test1 by adding the appropriate dirs on the Master and target node.
I've pinned the node to the test1 env group (and it's not listed in the pinned nodes for production), but whenever I run puppet agent -t it says production.
Can anybody help? NB: I want the node's env to be controlled by the Master.
Chris

@Greg: On the node:

puppet agent -t
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Loading facts
Info: Caching catalog for vm3.local
Info: Applying configuration version '1442966700'
Notice: Applied catalog in 1.01 seconds

puppet config print environment
production

Can you be a bit more specific about the console info please? Is this what you want?

Test1 environment
Parent: Production environment
Environment: test1 (env group)
1 node is pinned to this node group: Certname vm3.local

@Greg (2): Classification ("Create, edit, and remove node groups here."):

Node group name                Parent name              Environment
Agent-specified environment    Production environment   agent-specified (env group)
All Nodes                      All Nodes                production
NTP                            All Nodes                production
PE ActiveMQ Broker             PE Infrastructure        production
PE Agent                       PE Infrastructure        production
PE Certificate Authority       PE Infrastructure        production
PE Console                     PE Infrastructure        production
PE Infrastructure              All Nodes                production
PE Master                      PE Infrastructure        production
PE MCollective                 PE Infrastructure        production
PE PuppetDB                    PE Infrastructure        production
Production environment         All Nodes                production (env group)
Test1 environment              Production environment   test1 (env group)

vm3.local (Groups tab):

Class                                            Source group
puppet_enterprise::profile::agent                PE Agent
puppet_enterprise                                PE Agent
ntp                                              NTP
puppet_enterprise::profile::mcollective::agent   PE MCollective
puppet_enterprise                                PE MCollective

# cat puppet.conf
# This file can be used to override the default puppet settings.
# See the following links for more details on what settings are available:
# - https://docs.puppetlabs.com/puppet/latest/reference/config_important_settings.html
# - https://docs.puppetlabs.com/puppet/latest/reference/config_about_settings.html
# - https://docs.puppetlabs.com/puppet/latest/reference/config_file_main.html
# - https://docs.puppetlabs.com/references/latest/configuration.html
[main]
server = puppet.local
[agent]
certname = vm3.local

HTH
PS: how do you post the exact image like that?

@Greg: Interesting; I created a user to install on vm3 by basically copying user2 on vm2 (you solved my question about that) and got:

# puppet agent -t
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Loading facts
Info: Caching catalog for vm3.local
Info: Applying configuration version '1443059500'
Notice: Applied catalog in 0.86 seconds

However, it didn't create the user or home dir or group, i.e. nada...

Definitely sure :) Actually, I just realised that wasn't entirely clear (above); what I actually did was copy the user2 manifests and then edit them to user3 to match the hostname (for consistency). I also checked the passwd and group files and the /home dir.

Re classes: in the file /etc/puppetlabs/code/environments/test1/manifests/site.pp

node 'vm3.local' { include vm3 }

/etc/puppetlabs/code/environments/test1/modules/vm3/manifests
# ls
groups.pp init.pp users.pp
# cat *
class vm3::groups {
  group { "pupusers":
    ensure => present,
    gid    => 3000,
  }
}

# vm3.local defn
class vm3 {
  include vm3::groups
  include vm3::users
}

class vm3::users {
  user { 'user3':
    ensure     => present,
    managehome => true,
    home       => '/home/user3',
    password   => '$6$saltsaltsomemore$n/pv9PuSHwY4rl0lWajN6OceI7CqC9Uysy80WKW/44S45Rayu5AKBUom6LUheypqSGieOO47GUkf5SbrNVPDx.',
    uid        => '3000',
    gid        => '3000',
    shell      => '/bin/bash',
  }
  file { '/home/user3':
    ensure => directory,
    owner  => 'user3',
    group  => 'pupusers',
    mode   => '0700',
  }
}

@sahumphries: still says production. I've always found it odd that the docs say you should make other envs (e.g. test1) a child of the production env. When I was playing with PE 3.8 I made test1 a child of All Nodes (iirc) instead, which seemed to help. For this PE 2015.2/4.2 I tried that but it didn't help, so I reverted to by-the-book. If anyone has a working multiple Dir Envs setup where the nodes' envs are dictated by the Master, I'd be interested to see it.

Images: (attached in the original post)

@Greg - keep asking; I'm sure we'll get there eventually :)

Env print on the Master:

puppet agent --configprint manifest --environment test1
/etc/puppetlabs/code/environments/test1/manifests

On VM2:

puppet agent --configprint manifest --environment test1
(blank)

@Greg: don't give up now :)
anyway, this is self education (@home), so paid support is not an option. :( Can you show the (minimum) dir/file setup that's supposed to provide dir based envs? Maybe I can work it out from there.
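In case it helps anyone answer, this is the minimal layout I believe the docs describe for a directory environment on the Master (using the PE 2015.2 codedir), and it's what I think I have for test1:

/etc/puppetlabs/code/environments/test1/
    environment.conf        # optional; per-env modulepath/manifest overrides
    manifests/
        site.pp             # node 'vm3.local' { include vm3 }
    modules/
        vm3/
            manifests/
                init.pp
                groups.pp
                users.pp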
Thx
Chris
PS: can you explain why test1 should be a child of production (according to the docs)? As per my previous note above, when trying this with PE 3.8 I had to make it a child of All Nodes instead (i.e. same as prod) to make it work.
↧
facts: os is not a hash or array
I have a node and a master. Everything is OK and I can apply the catalog. Now I would like to use facts, but it seems that the fact values are always stringified.
In one of my module, I have this line:
$test = $os['name']
I'm running Puppet **3.8.2** (community edition), so on the node, in the file **/etc/puppet.conf**, I added this line:
[main]
...
stringify_facts=false
...
After adding this line, I restarted Puppet on this node:
# service puppet restart
On the node, I can run this command:
# facter --puppet os
{"name"=>"Debian", "family"=>"Debian", "release"=>{"major"=>"7", "minor"=>"8", "full"=>"7.8"}, "lsb"=>{"distcodename"=>"wheezy", "distid"=>"Debian", "distdescription"=>"Debian GNU/Linux 7.8 (wheezy)", "distrelease"=>"7.8", "majdistrelease"=>"7", "minordistrelease"=>"8"}}
However, when I apply the catalog I have this error:
Error: Could not retrieve catalog from remote server: Error 400 on SERVER: os is not a hash or array when accessing it with name
Instead, if in my module I treat "os" as a string, then it works:
$test = $os
# and in my template I can print the long string with <%= @test %>
I also added **stringify_facts=false** on the Master, but that doesn't solve my issue.
I also created a custom fact returning a hash, but I still get the same issue.
Is there any setting that I'm missing?
Facter version: 2.4.1
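As a stopgap I can read the old flat facts instead, which do come through fine; it just doesn't answer the structured-facts question:

$test    = $::operatingsystem          # flat legacy fact, e.g. "Debian"
$release = $::operatingsystemrelease   # e.g. "7.8"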
↧
How do you apply puppet run from the puppet enterprise console?
How do you apply puppet run from the puppet enterprise console?
↧
↧
puppet pdf and/or epub documentation
Hi,
is there PDF and/or EPUB documentation for Puppet 3.8?
Greetings and thanks
Tobias
↧
When the Puppet master receives a specific file, how can you get it to push it every time to my Windows hosts and overwrite the existing file?
Basically we randomly have to push a Windows build file (45 MB in size - it might require zipping, but that's not terribly important for now...) to the Puppet master, which in turn should be clever enough to push it to all my Windows hosts and overwrite the existing (or shall we say deprecated) file on each host.
Does this involve the following?
1. Some sort of crontab/crond/cronjob
2. Mcollective - orchestration
3. Manifest
4. Invoke resource
5. Ctime
6. Mtime
7. Batchfile
8. PSCP
9. Or ruby script - which I have very limited knowledge on.
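For what it's worth, the shape I was imagining (a sketch only; the module name, file name and target path are made up) is to drop the new build into a module's files directory on the Puppet master and let a plain file resource push it. The agent checksums the file, so whenever the master's copy changes, every Windows host overwrites its stale copy on the next run:

file { 'C:/builds/mybuild.exe':
  ensure  => file,
  source  => 'puppet:///modules/windows_build/mybuild.exe',
  replace => true,   # overwrite whatever is already there when the source changes
}

If the normal run interval isn't fast enough, that's presumably where MCollective comes in, to trigger the run on demand.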
Thanks
Michael
↧
remove module from private forge
I have been running my own Puppet forge for several years now.
How do I delete ancient published modules from it?
I am using an older version of Pulp (2.3.0), but I have not found any instructions for newer versions either.
↧