Verification of Node installation in Openshift Origin M4

The OpenShift Origin Comprehensive Deployment Guide (http://openshift.github.io/documentation/oo_deployment_guide_comprehensive.html) states that there are several things you can do to ensure a node is ready for integration into the OpenShift cluster:

  • run the built-in script that checks the node:
    • oo-accept-node
  • check that facter runs properly:
    • /etc/cron.minutely/openshift-facts
  • check that mcollective communication works:
    • on the broker, run: oo-mco ping
I found that this is not enough. For example, /etc/cron.minutely/openshift-facts shows nothing even when facter itself is failing, so check facter directly:
  • facter
And oo-mco ping can report success even when something is wrong with the RPC channel, so I would suggest also running these on the broker:
  • oo-mco facts kernel
  • oo-mco inventory
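
The checks above can be combined into one script. A minimal sketch, where the run_check wrapper and its skip logic are my own illustration; only the oo-accept-node, facter, and oo-mco commands come from the OpenShift Origin M4 tooling:

```shell
#!/bin/sh
# Run each health check if its command is installed, otherwise note the skip.
run_check() {
    desc="$1"; shift
    if command -v "$1" >/dev/null 2>&1; then
        echo "== $desc =="
        "$@" || echo "FAILED: $desc"
    else
        echo "SKIP: $1 not installed"
    fi
}

# Run on the node:
run_check "node acceptance" oo-accept-node
run_check "facter sanity"   facter kernel
# Run on the broker:
run_check "mcollective facts"     oo-mco facts kernel
run_check "mcollective inventory" oo-mco inventory
```

Wrapping each check this way keeps one broken tool from aborting the whole pass, so you still see the results of the remaining checks.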

In one of our OpenShift Origin M4 clusters, I have these lines in /opt/rh/ruby193/root/etc/mcollective/server.cfg:

main_collective = mcollective
collectives = mcollective
direct_access = 1

When I changed direct_access to 0, the oo-mco facts command stopped working, and so did oo-admin-ctl-district -c add-node -n -i
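
A quick way to confirm which mode a node is in is to grep the setting. A small sketch; the check_direct_access helper is my own, and only the config path comes from the cluster above:

```shell
#!/bin/sh
# Report whether direct addressing is enabled in an mcollective server.cfg.
check_direct_access() {
    cfg="$1"
    if grep -q '^direct_access *= *1' "$cfg"; then
        echo "direct_access enabled"
    else
        echo "direct_access disabled or missing"
        return 1
    fi
}

# On a node:
# check_direct_access /opt/rh/ruby193/root/etc/mcollective/server.cfg
```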

On the other cluster, I have these lines:

topicprefix = /topic/
main_collective = mcollective
collectives = mcollective
direct_access = 0

And the nodes work, albeit with warnings about topicprefix.

Additional notes:
Facter errors in my VMs (where eth1 is the only working network interface) were fixed by ensuring /etc/openshift/node.conf contains these lines:
PUBLIC_NIC="eth1"
EXTERNAL_ETH_DEV="eth1"
INTERNAL_ETH_DEV="eth1"
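
To verify that a node actually carries these values, you can read them back out of node.conf. The conf_value helper below is a sketch of mine; the keys and the path are the ones from the fix above:

```shell
#!/bin/sh
# Read a KEY="value" setting out of a node.conf-style file.
conf_value() {
    key="$1"; conf="$2"
    grep "^${key}=" "$conf" | head -n 1 | cut -d= -f2- | tr -d '"'
}

# On a node, confirm all three keys point at the working interface:
# for k in PUBLIC_NIC EXTERNAL_ETH_DEV INTERNAL_ETH_DEV; do
#     echo "$k=$(conf_value "$k" /etc/openshift/node.conf)"
# done
```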
