Category Archives: linux

Puppet and Foreman demarcation (Part II)

This describes our assignment of responsibility between Foreman and Puppet. For an overview, please see Part I.

Old Configuration

Our original configuration relied primarily on Foreman to define services, required classes, and supply their configuration parameters. This left puppet to provide only a mix of modules (ie, autofs) and profile-like classes, which foreman would glue together at the Host Group level. When we started down this path we were on Ubuntu 12.04 (~2013) and running Foreman 1.2 or 1.3. Config Groups were not yet an option, and the UI tended to force most configuration overrides to occur when configuring classes.

At first this configuration worked well; however, it soon became an unwieldy list of 400+ classes in foreman, and the per-host assignments started to get quite cluttered. For example, the configuration of a research VM running our standard R setup was three Host Groups deep and had 27 different classes whose configuration was keyed off a mix of host group and domain. Managing this, determining what got applied where, and ensuring configuration changes didn't have unintended side effects became a burden. Adding new classes meant weeding through the 400+ already included to find what you needed. And since the groupings and configuration were all in Foreman, creating a development environment was a fairly manual process of recreating the host groups and applying all the configuration overrides.

The configuration we were performing on classes fell into two categories: service-based config, where items like db names and who has access to a service vary per service; and static config, for items like overall domain configuration and core apt repos that would almost never vary once set up.

In hindsight, setting up ignored_environments.yml would have saved us some heartache and led to a cleaner class list. It still wouldn't have made it easy to tell, on the filesystem, which modules were top level (ie, applied directly by foreman) and which were installed only to fulfil dependencies.

New Configuration

In our new configuration, we realized that we needed to draw a line between where configuration and class application should occur. This can be a bit tricky, as there is substantial overlap between what foreman provides and what puppet provides.

[Diagram: foreman/puppet demarcation]

In deciding whether foreman or puppet should be responsible for a particular item we decided to use the following guidelines:

  • Use foreman to determine what a host is. Foreman should be the starting point for seeing what classes have been applied to a host and, at a quick glance, give someone an idea of what services/processes should be running.
  • There should be a single point of connection between foreman and puppet.
  • Only service-level config in foreman, not domain or global configs.

We started by looking at the Roles and Profiles pattern in Puppet and seeing how we could adapt it to Foreman. The first mapping that was pretty obvious: a Foreman config group is a puppet role. Neither allows parameters, and both are supposed to be composed only of classes. So, config groups or roles? To allow an admin logged into foreman to see what services are running on a host, we decided to use Foreman config groups in favor of Puppet roles.

The next step was to reduce the surface area between foreman and puppet to clearly defined lines of control. Previously we had included any puppet module directly in a config group and applied configuration in foreman via smart parameters. This time, following the profile pattern, we define one profile per service and expose only these profiles to foreman via a filter in ignored_environments.yml:

:filters:
 - !ruby/regexp '/^(?!role|profile).*$/'
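
As a sketch, a per-service profile then looks roughly like this (the class, parameter, and component-module names here are hypothetical, not our actual manifests):

    # Only profile (and role) classes survive the filter above, so
    # these are the only classes foreman ever lists.
    class profile::research_r (
      # service-level knobs exposed to foreman as smart parameters
      $db_name        = 'research',
      $allowed_groups = ['researchers']
    ) {
      # static/global config stays hard-coded in the profile,
      # never overridden from foreman
      include ::apt

      # glue the component module(s) together
      class { '::r::server':
        db_name => $db_name,
        admins  => $allowed_groups,
      }
    }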

Each profile exposes its configurable service settings to foreman as parameters. Where possible, sane defaults for our environments are provided, and in many cases we configure a value directly in the profile class rather than expose a parameter at all. These profiles are combined using config groups and applied to Host Groups. The diagram below shows roughly what this looks like:

[Diagram: foreman/puppet structure, profiles combined into config groups and applied to Host Groups]

What about Hiera?

We considered using Hiera to manage global configuration options, but after mocking up some workflows and seeing how little data we would actually keep in it versus foreman, we decided to just put those configuration values in the various profiles. A second reason for not using Hiera was to reduce the number of places to look for configuration. While not too bad, using Hiera would have led to a second code repo requiring careful synchronization with the main puppet code repo. We may revisit this in the future as the need arises.

Shotwell Plugins, Part I – Setup

Here’s a quick overview on how to start writing a custom publishing plugin. This is being done on Ubuntu 14.04, so no promises it will function on any other version.

  1. Install the build dependencies (a single apt-get line for this is sketched after this list): valac-0.22, libgphoto2-dev, gnome-doc-utils, libgstreamer-plugins-base1.0-dev, libgee-0.8-dev, libsqlite3-dev, libraw-dev, librest-dev, libwebkitgtk-3.0-dev, libgexiv2-dev, libgudev-1.0-dev, libgtk-3-dev, libjson-glib-dev
  2. Download the shotwell 0.20.2 sources, not the current version from github. The current version in git uses some new gtk features which are not available in ubuntu 14.04.
  3. Copy the shotwell/samples/simple-plugin from the shotwell git repo to a new directory
  4. Build/install shotwell 0.20
    $ ./configure --install-headers
    $ sudo make -j6 install


  5. In your new plugin, run ‘make; make install’ to ensure the basic build works.
  6. Rename simple-plugin.vala to your publishing plugin name (ie, OnedrivePublishing.vala)
  7. Modify the Makefile and set the PROGRAM to your plugin name (ie, OnedrivePublishing)
  8. Running make should compile your new empty plugin.
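
For step 1, the whole dependency install is a single apt-get line (package names exactly as listed above):

    $ sudo apt-get install valac-0.22 libgphoto2-dev gnome-doc-utils \
        libgstreamer-plugins-base1.0-dev libgee-0.8-dev libsqlite3-dev \
        libraw-dev librest-dev libwebkitgtk-3.0-dev libgexiv2-dev \
        libgudev-1.0-dev libgtk-3-dev libjson-glib-dev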

Now that that's done, we can start creating our publishing plugin.

The plugin sample implements the basic SPIT module interface. To create a publishing plugin, we'll use that module to return our publishing service, and we'll create a new class that implements both the Spit.Pluggable and Spit.Publishing.Service interfaces. Rename the sample class and include all the necessary interfaces. We'll use ShotwellPublishingCoreServices as a template for how to bootstrap our publishing service.

The basic do-nothing module, which compiles w/ one warning (the return null), now contains the following:

extern const string _VERSION;
private class OnedriveModule : Object, Spit.Module {
    private Spit.Pluggable[] pluggables = new Spit.Pluggable[0];

    public OnedriveModule() {
        pluggables += new OnedriveService();
    }
    
    public unowned string get_module_name() {
        return _("OneDrive Publishing Services");
    }
    
    public unowned string get_version() {
        return _VERSION;
    }
    
    public unowned string get_id() {
        return "org.yorba.shotwell.publishing.onedrive";
    }
    
    public unowned Spit.Pluggable[]? get_pluggables() {
        return pluggables;
    }
}
// This is our new publishing class
private class OnedriveService : Object, Spit.Pluggable, Spit.Publishing.Service {
        

    public OnedriveService() {
    }

    public unowned string get_id() {
        return "org.yorba.shotwell.publishing.onedrive";
    }
    
    public Spit.Publishing.Publisher.MediaType get_supported_media() {
        return (Spit.Publishing.Publisher.MediaType.PHOTO |
            Spit.Publishing.Publisher.MediaType.VIDEO);
    }
    public Spit.Publishing.Publisher create_publisher(Spit.Publishing.PluginHost host) {
        //TODO
        return null;
    }

    public void get_info(ref Spit.PluggableInfo info) {
        info.authors = "Mike Smorul";
        info.version = _VERSION;
        info.is_license_wordwrapped = false;
        
    }    
    public unowned string get_pluggable_name() {
        return "OneDrive";
    }

    public int get_pluggable_interface(int min_host_interface, int max_host_interface) {
        return Spit.negotiate_interfaces(min_host_interface, max_host_interface,
            Spit.Publishing.CURRENT_INTERFACE);
    }
    
    public void activation(bool enabled) {
    }
}
// This entry point is required for all SPIT modules.
public Spit.Module? spit_entry_point(Spit.EntryPointParams *params) {
    params->module_spit_interface = Spit.negotiate_interfaces(params->host_min_spit_interface,
        params->host_max_spit_interface, Spit.CURRENT_INTERFACE);

    return (params->module_spit_interface != Spit.UNSUPPORTED_INTERFACE)
        ? new OnedriveModule() : null;
}

// No-op; carried over from the simple-plugin sample.
private void dummy_main() {
}

You can now compile this by:

$ make clean; make ; make install

This will install your new module into your local modules directory. To make sure it works, open up shotwell, go to Edit -> Preferences -> Plugins, and you should see your new plugin listed under the Publishing section with a generic graphic next to it. If you enable the module you'll notice the following error, which will be fixed when we start implementing functionality:

 GSettingsEngine.vala:457: GSettingsConfigurationEngine: error: schema 'org.yorba.shotwell.plugins.enable-state' does not define key 'publishing-onedrive'


Default Linux Config

Sigh.. notes to self on the standard steps when installing a fresh ubuntu desktop:

  • Preserve shotwell indexes: backup/restore ~/.local/share/shotwell
  • add multi-user xhost access: add 'xhost +SI:localuser:testusers >& /dev/null' to .bashrc
  • Pulse audio: copy default.pa to ~/.pulse and add 'load-module module-native-protocol-tcp auth-ip-acl=127.0.0.1' to the end. In each test account that needs audio access, add 'default-server = 127.0.0.1' to ~/.pulse/client.conf

Windows NFS server permissions

One issue we recently ran into: linux nfs clients were blowing away inherited permissions on windows volumes. To allow rename/mv and chmod to work properly on an nfs (v4 or v3) mount, you need to grant clients 'full permissions' on the directory they will be working in. This has the lovely side effect that a chmod, rsync, tar -xpf, or anything else that touches permissions completely changes the local permissions on that directory for ALL users/groups you may have assigned on NTFS:

  1. Create a directory, set appropriate ntfs permissions (Full permissions) with inheritance for multiple security groups
  2. Share that directory out to an nfs client.
  3. On the nfs client, mount the volume, and run ‘chmod 700 /mountpoint’
  4. Go back into windows and notice you’ve lost all the inherited permissions you thought you assigned on that share.
  5. Scratch your head, check the KeepInheritance registry key, run tcpdump.
  6. Realize you need to place the permissions you wish to inherit in a place that the nfs client cannot change them.

We now share volumes out as follows: X:\[projectname]\[data]

  • projectname – a high-level directory that is NOT shared and holds all permissions for a project (subfolders, etc).
    • For groups/users that apply to your unix clients make sure they have full permission.
    • For your windows only folks, ‘Modify’ is generally good enough.
  • data – directory that is actually shared out via cifs/nfs

So far this scheme is working pretty well and allows unix clients to work properly and do horrible things on local files while preserving the broader group permissions you wish to see on your windows clients.
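
For reference, the corresponding grants can be scripted with icacls. A rough sketch, with made-up domain and group names; (OI)(CI) makes a grant inherit to subfolders and files:

    REM full control, with inheritance, for groups that map to unix clients
    icacls X:\projectname /grant "RESEARCH\unix-users:(OI)(CI)F"
    REM Modify is generally enough for the windows-only folks
    icacls X:\projectname /grant "RESEARCH\win-users:(OI)(CI)M"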

PBS, FD_CLOEXEC and Java

The PBS/Torque scheduler that ships w/ Ubuntu 12.04 uses an interesting method to ensure that user requests from a submission node cannot impersonate anyone else. In a nutshell, any Torque command (qsub, qstat, etc) calls a suid program (pbs_iff) which connects to the pbs server from a privileged port and tells the server the client port and what user will be sending commands from that port. pbs_iff gets this information by looking at the source port of the file handle passed to it when it is forked/exec'd. The whole handshake looks like this:

  1. Unprivileged client opens a socket to the pbs server
  2. Client forks and passes the file handle number to the suid pbs_iff as an argument
  3. pbs_iff reads the source port off of the file handle
  4. pbs_iff opens a socket from a privileged port to the pbs server and sends the invoking user and source port.
  5. The pbs server now trusts that commands from the initial socket belong to the user passed by pbs_iff
  6. pbs_iff terminates and the original client sends whatever commands it desires.

This works nicely in C, where the default is to pass all file handles to the child process on a fork. However, many languages frown on this file handle leaking for a number of reasons and have decided this default is a bad idea. Java is one of these, so it helpfully sets FD_CLOEXEC on all file handles it opens. This means that when you use ProcessBuilder or call Runtime.exec, the child can't see any file handles you previously had open, thereby breaking Torque's security mechanism.
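
To make the fd passing concrete, here is a minimal C sketch of the pattern. This is not Torque's actual source, and the real pbs_iff argument handling differs:

    /* sketch of the fd-inheritance handshake pattern used by pbs_iff */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    int main(void) {
        /* 1. unprivileged client opens a socket to the pbs server */
        int sock = socket(AF_INET, SOCK_STREAM, 0);
        /* ... connect() to the pbs server here ... */

        pid_t pid = fork();
        if (pid == 0) {
            /* 2. pass only the fd *number*; the descriptor itself is
               inherited across fork/exec by default in C, so the helper
               can getsockname() it to learn the client's source port */
            char fdarg[16];
            snprintf(fdarg, sizeof fdarg, "%d", sock);
            execl("/usr/sbin/pbs_iff", "pbs_iff", fdarg, (char *) NULL);
            _exit(127);
        }
        waitpid(pid, NULL, 0);
        /* 3-6. pbs_iff has vouched for this socket's user; the client
           can now send commands on it */
        return 0;
    }

Under Java the fork/exec is hidden behind ProcessBuilder, which leaves the child only stdin/stdout/stderr, so the descriptor number passed on the command line refers to nothing and the handshake fails.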

Sympa and Active Directory

Some basic steps on running sympa on Ubuntu 12.04 and using Active Directory's Global Catalog to auto-populate groups.

Ubuntu Notes 

  • apt-get install sympa will give you a 'mostly' working version
  • chown -R sympa /var/lib/sympa
  • The suid wrapper does not work on 12.04. You will need to create a sudo wrapper instead:
    • Set use_fast_cgi 1 in /etc/sympa/wwsympa.conf
    • Create /usr/lib/cgi-bin/sympa/wwsympa_sudo_wrapper.fcgi containing:
      #!/usr/bin/perl

      exec '/usr/bin/sudo', '-E', '-u', 'sympa', '/usr/lib/cgi-bin/sympa/wwsympa.fcgi';
    • In apache/conf.d/sympa, change the ScriptAlias to point at the wrapper:
      ScriptAlias /wws /usr/lib/cgi-bin/sympa/wwsympa_sudo_wrapper.fcgi
    • Add the following line to your sudoers file:
      www-data ALL = (sympa) SETENV: NOPASSWD: /usr/lib/cgi-bin/sympa/wwsympa.fcgi

LDAP/AD Bound Lists

  • If you only have one domain, you can just use the following config and point it at one of your domain controllers.
  • If you want to use forest-wide groups, you have two options for accessing those groups:
    • In the ldap config for the group, point at the dc the group resides in. Change suffix, host and user as appropriate, set use_ssl to yes, and drop the :3268.
    • Make the group universal and use the global catalog (the route I chose).
  • This will work with either security or distribution groups, but will NOT include nested membership.
  • LDAP Configuration
    include_ldap_query
    name any_name
    host dc1.mydomain.org:3268
    use_ssl no
    ssl_version tls
    ssl_ciphers ALL
    suffix DC=domain,DC=org
    scope sub
    filter (memberOf=CN=Some Group,OU=...,OU=...,DC=research,DC=domain,DC=org)
    attrs mail
    select first
    timeout 30
    user CN=Read Account,OU=...,DC=domain,DC=org
    passwd your_password

Isolating Big Blue Button Video

This is a quick how-to on manually connecting to a BBB video stream. Before we begin, here's a very, very quick background.

  • Video streams are grouped under a conference-room specific url that has the format rtmp://host/video/roomID
  • Each streaming component under BBB is available as a separate stream (ie, video, desktop, sip/audio, etc)
  • BBB uses red5 under the hood to manage these streams
  • Grab flowplayer and the flowplayer rtmp client
  1. Connect to your room and start your webcam.
  2. Tail /usr/share/red5/log/bigbluebutton.log and you should see the following log lines:
    2011-07-11 18:14:54,871 [NioProcessor-1] DEBUG o.b.c.s.p.ParticipantsEventRecorder - A participant's status has changed 141 streamName 640x480141
    2011-07-11 18:14:54,919 [NioProcessor-1] DEBUG o.b.c.s.p.ParticipantsService - Setting participant status ec0449a0-b5d1-4ca5-bfdf-d118d8bc2299 141 hasStream true
    • ec0449a0-b5d1-4ca5-bfdf-d118d8bc2299 or similar is the room id
    • 640x480141 is the stream id you need
  3. Download and place flowplayer-…swf, flowplayer.rtmp-…swf, and flowplayer-…min.js into a directory.
  4. Create a minimal Flowplayer web page that points at your stream, as sketched below.
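
A rough sketch of such a page, assuming flowplayer 3.x; the .js/.swf filenames are whatever you placed in the directory in step 3, and the host, room id, and stream id are the ones from your own server and log:

    <html>
    <head>
      <title>Minimal Flowplayer setup</title>
      <!-- the flowplayer-...min.js file from step 3 -->
      <script src="flowplayer-3.2.x.min.js"></script>
    </head>
    <body>
      <!-- flowplayer replaces this element with the player -->
      <a id="player" style="display:block;width:640px;height:480px;"></a>
      <script>
        flowplayer("player", "flowplayer-3.2.x.swf", {
          clip: {
            url: "640x480141",   // stream id from the red5 log
            live: true,
            provider: "rtmp"
          },
          plugins: {
            rtmp: {
              url: "flowplayer.rtmp-3.2.x.swf",
              // the room-specific url: rtmp://host/video/roomID
              netConnectionUrl: "rtmp://bbb.example.org/video/ec0449a0-b5d1-4ca5-bfdf-d118d8bc2299"
            }
          }
        });
      </script>
    </body>
    </html>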
  5. Load up your web page and you should see the streaming video.