- Published on
Red Hat Satellite
- Authors
- Name
- Jackson Chen
Red Hat Satellite
Red Hat Content Delivery Network (Red Hat hosted) -> Red Hat Satellite (on-prem)
1. Manifest
A manifest file is used to manage the licenses (subscriptions)
Create a manifest for each organization or department/team
Each satellite has an integrated capsule
# Red Hat Satellite Capsule Servers
Have a Satellite Capsule server at each location, similar to an SCCM DP server
https://access.redhat.com
-> Subscription Allocations
a. Create the manifest file
Create a new subscription allocation
and select the subscription (for example, Ansible)
Choose the available quantity
-> Export Manifest
Download the manifest, then upload it to the on-prem satellite server
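The upload can also be done from the CLI; a minimal sketch, assuming the manifest was saved as /root/manifest.zip and the organization is named Operations:
hammer subscription upload --organization Operations --file /root/manifest.zip # import the manifest into the organization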
# Content ISO (for offline / secure networks)
- Base (base content ISO)
- Incremental content (updated every 6 weeks)
Note:
We need a satellite server with an internet connection; it will download the content
Then it will deliver the content to the secure network
A web server (on-prem) hosts the content on site
## Red Hat documentation
access.redhat.com
# Satellite server specification
4 CPU cores
20 GB RAM
4 GB swap
# time sync
/var logical volume, scalable, using SSD and NVMe
# ******** Suggestion: use logical volumes for the following
/var/cache/pulp # initial download location, 20GB
Content then migrates to /var/lib/pulp, 300GB <------- size depends on the products that you have
/var/lib/mongodb/ 50GB
/var/lib/pgsql # PostgreSQL
/var/spool/squid
# ssh to the satellite server
lscpu
free -m # verify free memory
lsblk # verify disk
cat /etc/*-release # check version and OS
firewall-cmd --add-service RH-Satellite-6 # allow in firewall
!! --permanent # !! expands to the previous command
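For clarity, the history expansion above is equivalent to running:
firewall-cmd --add-service RH-Satellite-6 --permanent # same rule, persisted across reloads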
cat /usr/lib/firewalld/services/RH-Satellite-6.xml # verify the firewall ports required
subscription-manager status # verify registration status; on an offline/secure network this reports
unknown
1. register
2. attach to satellite pool
3. disable all repos
4. enable necessary repos
5. yum update
6. install satellite packages, dependencies will be taken care of
yum install satellite
foreman-rake permissions:reset # reset the admin password
Google Chrome and Firefox are supported for the Satellite management console
# access
https://satellite.lab.example.com
user: admin
pwd: password from foreman-rake reset
Then change the password after login: Satellite -> Administer ->
#*********** Verify the installation
ssh to the satellite server
lab deploy-install start
Ctrl+Shift+T (open a new tab)
satellite-maintain service list
satellite-maintain health check
firewall-cmd --list-all
systemctl status chronyd # verify the service status
chronyc sources # verify source
ping -c1 localhost # ping locally
ping -c1 satellite.lab.example.com
host 172.25.250.15 # reverse lookup
lab deploy-install finish # clean up the lab
# login to satellite (https://.....) <----------------- import the license manifest file
Content -> Subscriptions
upload the manifest (import a Manifest)
# ****************** Configure organization and content *************
lab deploy-organizations start
# create new organization
https://satellite.lab.example.com <------- login to the web client
# *********
Default Organization
-> New organization
Default Location
Content -> Organization -> add organization
Create new organization and location
Administer -> location -> new location
*Note: locations can be nested
Capsules (a satellite can have multiple capsules)
# access the website
https://material.example.com
1. download the manifest file
then upload the manifest (select the right context - organization and location)
# Change the cdn.redhat.com URL to the local CDN
Click "Manage Manifest"
Red Hat CDN URL https://cdn.redhat.com
-> change to http://content.example.com/rhs6.6/x86_64/cdn (on-prem location for the content)
Note: This is per organization manifest
# How to verify
Command line tool:
1. ssh to satellite server
hammer organization list
hammer --output json organization list # output in JSON format
hammer l<Tab> # tab completion
hammer location list
hammer location list --organization Operations
# back to web console
Go to the organization, edit
Location -> select the location
# back to command line
hammer location list --organization Operations
# In GUI
Navigate to "Hosts"
Change to "Any organization" and "Any location"; it will show the satellite server
Select the satellite server, and assign it to the organization and location
Now, when the required organization and location are selected, the satellite server shows as associated with them
Note: run lab deploy-organizations finish to clean up the lab
#************ Synchronize Red Hat Content *****************
Satellite Server (similar to a WSUS server)
It pulls the content from the CDN (content delivery network)
The satellite clients will download the content from the satellite server
# Download policies (per repository)
on demand
background
Immediate
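The download policy is set per repository; a minimal hammer sketch (repository id 5 is just an example, as used later in these notes):
hammer repository update --id 5 --download-policy on_demand # fetch packages only when a client requests them
hammer repository update --id 5 --download-policy immediate # mirror all packages at sync time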
# Synchronize Red Hat content
We have defined the products; we are using our on-prem satellite server
lab lifecycles-sync start # prepare the lab
# On web console
Verify the required organization # important to select the right organization
Content -> Red Hat Repositories
Administer -> Settings -> Content
Select the repository, and click "+" to enable the repository
du -sh /var/cache/pulp
du -sh /var/lib/pulp # It will grow when we sync the content
content -> sync status
# from command line
hammer repository list # list all the registered repos
hammer repository synchronize --id 5 # sync id 5 repo content, verify your environment
watch !! # !! references the previous command
# set up sync plan
GUI -> content
Create Sync plan
Interval "custom cron"
15 18 * * * # daily at 18:15
Select Products tab, click Add, and search the product to sync
Command
lab lifecycles-sync finish # clean up the lab
# ************* Create software lifecycle **************
Satellite server
- Library (contains all the software and repos)
DEV -> UAT -> PROD servers
We then create a software lifecycle, DEV -> UAT -> PROD (promote the software through the lifecycle, and test it)
Navigate to satellite GUI console
Content -> Lifecycle Environments
click Create Environment
a. Name
b. Label
c. Description
We can use the GUI, hammer, and the API to manage Satellite
hammer lifecycle-environment list
hammer lifecycle-environment list --organization Operations
hammer lifecycle-environment create --organization Operations --name Production --label Production --description Production --prior UAT
# verify
hammer lifecycle-environment list --organization Operations
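A sketch of building the whole DEV -> UAT chain with hammer first, assuming the path starts from the built-in Library environment (Production with --prior UAT is then created as shown above):
hammer lifecycle-environment create --organization Operations --name Development --label Development --prior Library
hammer lifecycle-environment create --organization Operations --name UAT --label UAT --prior Development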
#************* Publishing and promoting content views **************
In the satellite GUI
Content -> verify lifecycle environment
content -> content views
create new view
click "solve dependencies"
choose repos and add the repository
Publish New Version (publish the new version)
Description:
Add description, such as initial content view with BaseOS repository
Note:
This will publish the content to the library
Click Promote
Select the lifecycle environment, such as DEV, UAT and PROD
Yum Content -> Repository
Add (tab), and add other repositories
click Publish New Version
Version 2
Description: Add AppStream, Satellite Tools
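The same publish/promote flow from the CLI; a sketch, assuming a content view named Base in the Operations organization:
hammer content-view publish --organization Operations --name Base --description "Add AppStream, Satellite Tools"
hammer content-view version promote --organization Operations --content-view Base --version 2 --to-lifecycle-environment UAT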
#********************* Registering hosts / clients to the satellite server ***********
1. Manually register hosts
satellite server ----> gets content from the on-prem CDN
client ----> registers with the satellite and gets content from the satellite server
Install the client (similar to the SCCM client)
Kickstart or image as part of the installation (installs the client)
# Make an ssh connection to serverA (Ctrl+Shift+T opens a new tab)
subscription-manager status # verify registration status (not registered yet)
yum localinstall -y http://satellite.example.com/pub/katello-ca-consumer-latest.noarch.rpm # install the satellite client CA package
subscription-manager register --org Operations --environment Development/Base
Username: admin
Password: redhat
serverA is now registered with the satellite
subscription-manager status # verify status
GUI
Hosts -> Content Hosts -> Release version (select the content host content version)
Hosts -> Subscriptions
Add (this will consume 1 subscription license)
subscription-manager refresh
subscription-manager status
# Manual registration is not scalable
#*************** Managing hosts with host collections ****************
Create a group, grant the group access to resources, then add the users to the group
Create a content host group / collection
Add the hosts (clients) to the content host group
Grant access
Hosts -> Host Collection
Create Host collection
name
Description
Add serverA to the host collection
Hosts -> Host Collections -> OpsServers (collection), then select the client/host to add to the collection
There are different options for the collections
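An equivalent hammer sketch (the collection name OpsServers matches the GUI example; the host name is an assumption):
hammer host-collection create --organization Operations --name OpsServers --description "Operations servers"
hammer host-collection add-host --organization Operations --name OpsServers --hosts servera.lab.example.com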
# *************** Automating content host registration
Activation key
It specifies the information that hosts need; the clients inherit all of it:
release version
lifecycle environment
products
GUI (satellite)
Content -> Activation key
Create new Activation Key
Name: Operation servers
Unlimited hosts (select)
Select the environment
Select the content view
Save
Properties
Release version
Subscriptions
Add, and select the product, add selected
Host collections
Repository Sets
Then enable the activation key
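A hammer sketch of the same activation key setup (the environment and content view names are assumptions):
hammer activation-key create --organization Operations --name OpsServers --lifecycle-environment Development --content-view Base --unlimited-hosts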
ssh to serverA
subscription-manager status
subscription-manager repos --list
subscription-manager register --org="Operations" --activationkey="OpsServers" --force # force registration with the activation key
GUI
Content -> Base
Content -> Activation Keys -> OperationServers -> select environment "UAT"
Back in the ssh session to serverA
Force re-registration with the activation key
Check
subscription-manager repos --list
# install agent
yum localinstall -y http://satellite.example.com/pub/katello-ca-consumer-latest.noarch.rpm # install the satellite client CA package
GUI
Hosts -> Content Hosts
Once serverA has the agent installed, we can push content to serverA
#******* Registering hosts
serverb> yum localinstall -y http://satellite.example.com/pub/katello-ca-consumer-latest.noarch.rpm
subscription-manager status
#************ Deploy content to hosts
Software view
DEV -> UAT -> PROD for software lifecycle
errata
CDN --> satellite [Library - content library] --> DEV -> UAT -> PROD
# *************** Controlling software with content view
satellite GUI
In Operations organization
Content -> Content View
Hosts -> Content Hosts
servera.lab.example.com
ssh to servera
Ctrl+Shift+T (new tab)
subscription-manager repos --list
yum install -y ant # install software
ssh to serverc
ssh root@serverc
subscription-manager status
Install the satellite agent
yum localinstall -y http://...... /katello-ca-consumer-latest.noarch.rpm
subscription-manager register --org Operations --environment Development/Base # register the computer
Back to the GUI
Hosts -> Content Hosts
serverc
select serverc
Details tab
select Release version
Subscriptions
click Add
select and add the subscription
ssh to satellite server
subscription-manager refresh
ssh to serverc
subscription-manager repos --list
yum install -y ant # fails, as the repo is not associated with serverc
#*************** Create content view filters ****************
playbook - ansible
RHSA security advisory (security updates)
RHBA bug fix advisory
RHEA enhancement advisory
Filter to only install RHSA
Composite Content View
Base
- BaseOS
- AppStream
- Satellite Tools
Ansible
- ansible 2.8
Operations Software (other vendors' software)
- repo1
- repo2
software filter
GUI
Content -> Red Hat Repositories
Available: name ~ ansible
click + to enable the repository
Note
We need to get the content from CDN
Sync
select the content and sync the content
content -> Sync plan
select the sync plan -> product, select the product and add to the sync plan
Create a content view
Content -> Content view -> Create New view
Name: Ansible
Label
Select solve dependencies
Repository selection
Select the product
Yum Content
name: Ansible-older-2.8.4
Content type: RPM
Inclusion type
Exclude
Description
Exclude ansible older than 2.8.4
RPM name
ansible, version (less than) 2.8.4
Note: This will filter out ansible versions older than 2.8.4
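Roughly the same filter from the CLI; a sketch, assuming the content view is named Ansible (the filter and rule names are illustrative):
hammer content-view filter create --organization Operations --content-view Ansible --name exclude-old-ansible --type rpm --inclusion false
hammer content-view filter rule create --organization Operations --content-view Ansible --content-view-filter exclude-old-ansible --name ansible --max-version 2.8.3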
Then Publish New Version
Description
Initial publish of Ansible repository with exclusion filter only allowing Ansible 2.8.4 and newer
then Promote to the environment
Select "UAT"
Note:
We can only have one content view associated with a content host
Solution: use a composite content view
Create new Content view
Add all the required content view, Add Content View
Publish New version
Content View tab
List/Remove
Add
After creating the composite content view
promote the content view
ssh to serverc
subscription-manager clean
subscription-manager register --org Operations --environment Development/BaseAnsibleComposite
username
Password
Verify satellite GUI
Hosts -> Content Hosts -> select serverc
Subscription
click Add, and add the subscription (it will consume one subscription)
# ssh to serverc
subscription-manager repos --enable=<repo-id>
yum search ansible --showduplicates
# ***************** Apply errata to hosts
Managing errata using ansible
Content -> Errata
All Repositories
search
type = security # only show security errata
package_name ~ ant # show errata applicable to packages matching 'ant'
select
Applicable
installable
#************Managing and applying errata to hosts
satellite GUI
content -> content view
select the required content view
select Yum Content
Add new yum filter
name
content type
Erratum - Date and type
Inclusion type
Exclude
Description
Excluding non-security errata older than date
Erratum Date Range
select the date
After creating the filter, publish the new version
Hosts -> Content Hosts
Content -> Errata
ssh to servera
rpm -q katello-agent
Hosts -> content Hosts
Select servera
-> Errata tab
Installable Errata, search for the required errata, and click Apply Selected errata
ssh to servera
yum history info # shows the package installation history
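Errata can also be listed and applied per host with hammer; a sketch (the erratum id is illustrative, and applying requires the katello agent on the host):
hammer host errata list --host servera.lab.example.com # list errata applicable to the host
hammer host errata apply --host servera.lab.example.com --errata-ids RHSA-2019:XXXX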
# **************** Managing software module streams - collections of packages ********
They are installed and removed as a unit
Example: perl (module)
It has different streams; only one stream version can be enabled or installed at a time
yum install @<module>:<stream> # install a specific module stream
GUI
content -> content view
select the required content view
select module stream
it will show all the module stream
ssh to serverc
yum module list container-tools
yum module install container-tools:rhel8
#******************* Install custom software and yum repositories **************
Creating a custom product
Create a repository
Ensure you are in the Operations organization in the satellite GUI
Content -> Products
name:
label
SSL Cert
...
Description
Content -> Custom Software
New Repository
type: yum
...
After creating the repository, select it and upload the packages
You can change information about the repository
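Packages can also be uploaded from the CLI; a sketch, assuming a product named "Operations Software" and a repo named custom-repo (both illustrative):
hammer repository upload-content --organization Operations --product "Operations Software" --name custom-repo --path /root/package.rpm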
#**************** Create products using repository discovery
It can fetch the remote repository and populate the local repository
Repo discovery
Content -> Repository Discovery
Repository Type
Yum Repositories
URL to discover
Select the required repositories
Create Repository
Content -> Sync Status
Content -> Sync Plan
Add the new content/product to the sync plan
#****************** GPG public and private keys
GPG private key (sign)
public key (verify)
sudo yum install -y rpm-sign gpg
sudo rngd -r /dev/urandom
gpg --full-generate-key # generate a private/public key pair
gpg --list-keys
gpg --fingerprint
gpg --armor --export rad@redhat.com > my_gpg_key # contains the public key
wget http://.......... .rpm
echo '%_gpg_name rad@redhat.com' > ~/.rpmmacros
rpm --resign hwinfo-1.0-1.e1....rpm
rpm --import ../my_gpg_key # import the public key
sudo !! # re-run the previous command with sudo
rpm --checksig hw.....rpm # verify the rpm signature
#************ Administering custom products and repositories **********
Content -> content credentials
Create content credential
name:
Type: gpg key
upload the key
Content -> Product
Custom Software
select the required repo, navigate to GPG Key, and select the uploaded "content credential / gpg key"
Content -> Activation Key
select the key
Subscription
Add
Repository Sets
Content -> Content views
select the required content view, and publish a new version after adding the custom repository
Then publish New version
Promote to the required environment
Content -> Activation keys
select OperationServer and
ssh to servera
subscription-manager status
subscription-manager
Back on servera, re-register servera so it picks up the GPG key
subscription-manager register --org="Operations" --activationkey="OpsServers" --force
#************** Deploy capsule server
Requirements
Same hardware and software requirements as the satellite server, and the same port numbers
Dedicated capsule server **
Use yum to install the capsule server
Generate a cert tarball from the satellite, copy it to the capsule server, and install the capsule server
Load balancing for the satellite server (similar to an SCCM DP)
1. Register the capsule server as a client of the satellite server
python bootstrap.py # shows how to use it
wget http://sate..../bootstrap.py
python bootstrap.py --login=admin --server=satell.lab.com --organization=Operations bootstrap xxx
cat /usr/lib/firewalld/services/RH-Satellite-6.xml # list all the required ports
firewall-cmd --add-service RH-Satellite-6 --permanent
firewall-cmd --add-port=8443/tcp --permanent
firewall-cmd --reload
subscription-manager repos --list
rhel-.....-capsule.rpm # capsule install rpm
mkdir -pv /root/capsule_ssl # need to generate from satellite server
satel> capsule-certs-generate --foreman-proxy-fqdn ..............
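Spelled out, the generate step looks roughly like this (the FQDN and tarball path are assumptions; the command also prints the exact satellite-installer line to run later on the capsule):
capsule-certs-generate --foreman-proxy-fqdn capsule.lab.example.com --certs-tarball /root/capsule_ssl/capsule.lab.example.com-certs.tar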
Copy the SSL tarball to the capsule server
cap> yum install -y capsule..rpm
Ctrl+Shift+C copy
Ctrl+Shift+V paste
# In the Satellite GUI
Infrastructure -> Capsules
Change to any organization and any location
Change the location / add the required location
#********* configure capsule server
Satellite GUI
Infrastructure -> Capsules
ssh root@capsule
yum install -y pulp-
grep default_password /etc/pulp/server.conf
pulp-admin -u admin -p xxxx repo list
#********** publishing content to capsule server
sat> /var/lib/pulp
cap> /var/lib/pulp (it will host the repos, synced from the satellite)
du -sh /var/lib/pulp
In the Satellite GUI
Infrastructure -> Capsules, select the capsule server
click Edit, and add the required lifecycle environments
Then, click Synchronize -> Optimized Sync
ssh root@cap...
pulp-admin -u admin -p xxxx repo list
#**************** Remote management
Managed hosts (hosts managed by the satellite server) tcp 22
satellite capsule (ssh, ansible, puppet)
3 ways to install the satellite public key on the managed hosts:
1. ssh-copy-id
2. kickstart download
3. provisioning template
Clients (accessible on tcp 22) (ensure the satellite public key is installed on the managed hosts)
root@cap# hammer job-template list | grep -i "Run Command"
hammer job-template info --id 129
root@sat> hammer job-invocation list
sat GUI
Host -> job templates
name ~ run command # search: name contains
#************** config remote execution
ssh root@cap
ssh-copy-id -i ~foreman-proxy/.ssh/id_rsa_foreman_proxy.pub root@servera.lab.com # works for a few systems
ssh root@satellite
ssh-copy-id -i ~for
sat gui
hosts -> hosts
select servera
schedule job (job invocation)
job category: Commands
Job template
Command:
Schedule: execute now
ssh to sat
hammer job-invocation create --job-template "Run Command - SSH Default" --search-query "name =..." --input command="uptime; hostname; whoami"
hammer job-invocation list
Host -> all hosts
select the required host
action -> schedule remote job
job category: Ansible commands
job template: run command - Ansible default
command:
Schedule: execute now
Note: ansible provides better output
We can use "hammer" to create Ansible jobs # Ansible playbook -> YAML file
hammer job-invocation output --id 2 --host servera.lab.com
ssh student@servera
#************** configure Ansible Remote Execution
root@sat# ansible-galaxy init <role_name>
tree -F /var/tmp/<dir>
main.yml # default variables and values
defaults/main.yml # expects the user to enter values
vars/main.yml # developer variables and values
Ansible Galaxy
ls -l /etc/ansible/roles # ensure roles are installed in this folder
yum install -y rhel-system-roles
/usr/share/ansible/roles
rhel-system-roles.timesync #
ssh root@capsule
root@capsule# ssh-copy-id -i ~foreman-proxy/.ssh/id_rsa_foreman_proxy.pub root@servera.lab.com
# Verify serverb is associated with the environment
Hosts -> All Hosts
ssh root@serverb
yum update -y
Satellite GUI
Hosts -> All Hosts
Select the host -> Action -> Schedule Remote job
Job category
Job template
Puppet
Infrastructure -> Capsules
select the capsule
#********************* Run remote execution
root@sat# tar xf apache-setup-role.tgz
cp -R apache-setup-role /etc/ansible/roles
Satellite GUI
Configure -> Roles
Import the role
#************** Provisioning host / build RHEL ***************
anaconda -> interactive installation
Kickstart -> auto installation
Kickstart files
- provisioning file (Using provisioning templates)
base rpm and appstream
- PXE
PXE kickstart template
- pxelinux
PXE client --> discovers the DHCP server
PXE -> DHCP server
- finish (templates)
- userdata (templates)
- partitioning (templates)
satellite (hosts the installation files, tftp server)
- web server (images)
- tftp server (pxe boot)
sat GUI
- Organization -> Operations
Content -> Red Hat Repositories
Select Kickstart
a. search " name ~ BaseOS"
enable this repository
b. enable AppStream repository
Content -> Sync Status
Ensure "....kickstart..." select and sync
then, look at the templates
Hosts -> Provisioning Templates
Clone and make changes to the cloned template (kind = PXELinux) -> Kickstart default PXELinux (using the embedded OOBE)
kind = provision
kickstart default
Host -> Partition tables
Kickstart default
After selecting the above 3 templates, we will then be able to build RHEL
#****** Preparing the network for provisioning
capsule server (for provisioning)
- dhcpd (subnet object)
- bind (zone object)
satellite-installer command (to add and remove features)
- add the provisioning feature
ssh root@capsule
cat satellite-installer-example.txt
satellite-installer --scenario capsule \
  --foreman-proxy-dns true --foreman-proxy-dns-interface eth0 --foreman-proxy-dns-forwarders x.x.x.x \
  --foreman-proxy-dns-zone <zone> --foreman-proxy-dhcp true --foreman-proxy-dhcp-interface eth0 \
  --foreman-proxy-dhcp-range "x.x.x.200 x.x.x.220" --foreman-proxy-dhcp-nameservers y.y.y.y \
  --foreman-proxy-dhcp-gateway y.y.y.z --foreman-proxy-tftp true
bash satellite-installer-example.txt
Infrastructure -> Subnets
-> capsule
- select the capsule, and import subnet
#*********
- define host $<MAC>
/var/lib/tftpboot/pxelinux.cfg/$<MAC>
FDI (Foreman Discovery Image)
Host Group
#*** Host provisioning
GUI -> Hosts -> Host Group (define and create a new host group) <---------------------- create host group
Hosts -> Create Host <------------------ create the host (see the hammer sketch below)
Name: serverh
Host group: select the predefined host group
Interface: edit, need to specify the MAC address (the new VM)
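The same host creation with hammer; a sketch (the host group title and MAC address are assumptions):
hammer host create --organization Operations --name serverh --hostgroup "<host group>" --mac 52:54:00:00:fa:0e --build true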
Note:
After creating the host from the satellite GUI (or using hammer), verify the pxelinux file
root@capsule# ls -l /var/lib/tftpboot/pxelinux.cfg/
Note: There are two files
a) 01-52-54-00-00-fa-0e # example of the pxelinux file with "MAC" as the file name
b) 01-52-54-00-00-fa-0e.ipxe
cat /var/lib/tftpboot/pxelinux.cfg/01-52-54-00-00-fa-0e
# This file was deployed via 'kickstart default PXELinux' template <------- it is automatically populated
ks=http://capsule.xxxx:8000/unattended/provision?token......
Then, PXE boot the system
It will build the new RHEL server
Note:
ssh to the satellite server, and install the following two packages (rpm)
yum install -y foreman-discovery-image rubygem-smart_proxy_discovery
then run satellite-installer and enable the foreman discovery plugin
satellite-installer --scenario capsule --enable-foreman-proxy-plugin-discovery
satellite-maintain service restart
Then log in to the Satellite GUI
infrastructure -> subnets
select the required subnet and verify the capsule
then clone
PXELinux global default -> Lab PXELinux global default
pxelinux_discovery -> Lab_pxelinux_discovery
modify the file
proxy.url=capsule.lab.example.com:9090 proxy.type=proxy
then navigate to
Administer -> Settings -> Provisioning tab
select "Global default PXELinux template" and change to "Lab PXELinux global default"
Settings -> Discovery tab
Select "Discovery location", select the location
Settings -> Provisioning Templates
select "Build PXE Default"
#******************************* Managing the Red Hat Satellite API - Querying the Satellite API ****************
Satellite API
- web console
- hammer
- JSON / curl (GET, POST, PUT, DELETE) / Ruby / Ansible / Python
Example
curl -s --request GET --user admin:redhat https://lab.example.com/api/v2/hosts | python -m json.tool | grep '"name".*lab.example.com'
curl -s --request GET --user admin:redhat https://lab.example.com/api/v2/hosts | python -m json.tool | grep -A2 '"id":' # grep two lines after id
| grep -m1 "id" # find the 1st match
where
python -m json.tool # pretty-print the output as JSON
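The API also accepts writes; a sketch of a POST that creates an organization (the name TestOrg is illustrative), using the same credentials as above:
curl -s --request POST --user admin:redhat --header "Content-Type: application/json" --data '{"organization": {"name": "TestOrg"}}' https://lab.example.com/api/v2/organizations | python -m json.tool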
#**************** Integrating satellite functions
mkdir -pv bin
Administer -> Settings -> Default Repository download policy
Content -> Red Hat Repositories
name ~ baseos # search
click + # enable button
name ~ appstream
click + # enable button
name ~ satellite tools
click + # enable
Content -> sync status
select the repositories, and click Sync
Note:
Better to use a sync plan
# satellite server
du -sh /var/cache/pulp
hammer repository list
# verify ID
hammer repository synchronize --id 5 # select the required repository id, it will inherit the default download policy
du -sh /var/cache/pulp
watch !! # watch the last command
du -sh /var/lib/pulp
Create sync plan
Red Hat Product sync
# Interval -> Custom Cron 15 18 * * * # syncs daily at 18:15
click Add tab
Select the products
#*********** Using Hammer as an API
output -> csv, yaml, json
hammer supports command completion (Tab)
Hammer CLI Guide (version 6.6)
Hammer CLI Cheat Sheet
cat .hammer/cli.modules.d/foreman.yml
hammer --debug # show debug output
hammer organization create --name SecOps --description 'name'
hammer host-collection create --name xx --organization xx
hammer user create --login SecOpsAdmin --password xx --mail xxx@xxx --auth-source-id 1 --organization SecOps
hammer user-group create --name SecOperators --organization SecOps
hammer user add-role --login SecOpsAdmin --role 'xx'
hammer lifecycle-environment update --name xxx --organization 'xx' --description 'xxx'
#***************** Running satellite on cloud platform
#************ Managing hosts within cloud provider
# ************* maintenance
Administer -> Roles (RBAC)
name ~ admin
#********* Backup and restore
du -sh /var/lib/mongodb # check the database size first (a trailing \ continues a command onto the next line)
satellite-maintain backup
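A fuller sketch of the backup command (the target directory is an assumption; offline stops services during the backup):
satellite-maintain backup offline /var/satellite-backup # offline backup, services stopped
satellite-maintain backup online /var/satellite-backup # online backup, services keep running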
#*********** Maintain the satellite database
cat /etc/cron.d/foreman-
hammer audit list | wc -l
foreman-rake audits:expire days=<num-days> # purge old audit records
systemctl stop goferd httpd pulp_workers pulp_celerybeat pulp_resource_manager pulp_streamer
mongo pulp_database --eval 'db.stats()'
#************** Exporting and importing content views (moving between organizations, or between satellite servers)
Content view - download policy immediate, no mirror on sync
The content view must have the same name on both satellites
/var/lib/pulp/kate
#***** commands
hammer repository list --organization Operations | grep BaseOS
Note the output:
Mirror on sync: yes
Download Policy: on_demand
hammer repository update --id <repository id> --organization Operations --mirror-on-sync no
hammer repository update --id <id> --organization Operations --download-policy immediate
hammer repository info --id <id> --organization Operations | grep -E "Mirror on Sync|Download Policy"
Mirror on sync: No
Download Policy: immediate
hammer content-view version list --organization Operations
hammer content-view version export --export-dir /var/tmp --id <id>
hammer content-view version republish-repositories --content-view-id <id> --version <number> # to fix issue
hammer content-view version export --export-dir /var/tmp --id <id-num>
ls -hl /var/tmp/export-base-1.0.tar
tar tf export-base-1.0.tar | head -10 # list the first 10 entries in the archive
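On the target Satellite, the matching import step is roughly as follows (older-style export/import; exact option names vary by Satellite version, and a content view with the same name must already exist there):
hammer content-view version import --export-tar /var/tmp/export-base-1.0.tar --organization-id <id>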
# RHEL 7.9 & Satellite 6.10 Implementation and Troubleshooting
# System Requirements
CPU: min 4
Memory: min 20GB
RHEL: The latest version of Red Hat Enterprise Linux 7 Server
Satellite: v6.10
Domain Name: Full forward and reverse DNS resolution using a fully-qualified domain name
Host Name: A unique host name, which can contain lower-case letters, numbers, dots (.) and hyphens (-)
# SELinux Mode
SELinux must be enabled, either in enforcing or permissive mode. Installation with disabled SELinux is not supported.
# Storage Requirement
Directory                               Runtime Size
-----------------------------------------------
/var/log                                10GB
/var/opt/rh/rh-postgresql12/lib/pgsql   20GB
/var/lib/pulp/                          300GB
#********** Storage Guidelines
1. If you mount the /tmp directory as a separate file system, you must use the exec mount option in the /etc/fstab file.
If /tmp is already mounted with the noexec option, you must change the option to exec and re-mount the file system.
This is a requirement for the puppetserver service to work.
Note:
If /tmp is mounted with the noexec option, the Satellite installation/configuration will fail
2. Because most Satellite Server data is stored in the /var directory,
mounting /var on LVM storage can help the system to scale.
3. The bulk of storage resides in the /var/lib/pulp/ directory.
These end points are not manually configurable.
Ensure that storage is available on the /var file system to prevent storage problems.
#**** Log File Storage
Log files are written to /var/log/messages/, /var/log/httpd/, and /var/lib/foreman-proxy/openscap/content/.
You can manage the size of these files using logrotate.
vi /etc/logrotate.conf
rotate 8
compress # un-comment compress
#**** Software Collections
Software collections are installed in the /opt/rh/ and /opt/theforeman/ directories
#***** Symbolic links
You cannot use symbolic links for /var/lib/pulp/
#***** Supported Operating Systems
1. Red Hat Enterprise Linux 7, x86_64 only
2. Install Satellite Server on a freshly provisioned system.
3. Red Hat does not support using the system for anything other than running Satellite Server.
#********** Firewall Requirements
1. Enabling Connections from a Client to Satellite Server
firewall-cmd \
--add-port="80/tcp" --add-port="443/tcp" \
--add-port="5647/tcp" --add-port="8000/tcp" \
--add-port="8140/tcp" --add-port="9090/tcp" \
--add-port="53/udp" --add-port="53/tcp" \
--add-port="67/udp" --add-port="69/udp"
2. Make the changes persistent:
firewall-cmd --runtime-to-permanent
3. Verify firewall settings
firewall-cmd --list-all
#*********** Verify DNS resolution
1. Ensure that the host name and local host resolve correctly:
ping -c1 localhost
ping -c1 `hostname -f` # my_system.domain.com
2.To avoid discrepancies with static and transient host names,
set all the host names on the system by entering the following command:
hostnamectl set-hostname name
#*********** Create logical volume for RHEL download packages
# Install/add two disks to base OS
1. 100GB
2. 1TB
# Set installation disk partition size during the RHEL installation
/ 25GiB
/tmp 10GiB
/var 25GiB
/var/log 10GiB
/var/log/audit 5GiB
/home 10GiB
# Create physical, virtual group and logical volumes for the 2nd hard disk
lsblk
# create physical volume
pvcreate /dev/sdb
pvdisplay
# create volume group
vgcreate vgData /dev/sdb
vgdisplay
# create logical volume
lvcreate -n lvPulp -L 300G vgData
lvcreate -n lvCache -L 300G vgData
lvcreate -n lvDB -L 20G vgData
lvdisplay
# verify /dev/mapper
ls -l /dev/mapper
# format logical volumes
mkfs.xfs /dev/mapper/vgData-lvPulp
mkfs.xfs /dev/mapper/vgData-lvCache
mkfs.xfs /dev/mapper/vgData-lvDB
# Update /etc/fstab
/dev/mapper/rhel_hostname-tmp /tmp xfs defaults 0 0
/dev/mapper/vgData-lvPulp /var/lib/pulp xfs noexec,nosuid,nodev 0 0
/dev/mapper/vgData-lvCache /var/cache/pulp xfs noexec,nosuid,nodev 0 0
/dev/mapper/vgData-lvDB /var/opt/rh xfs noexec,nosuid,nodev 0 0
# verify disk space
df -h
# Join the Satellite server to domain
realm join --user=username test.local --computer-ou="ou=Linux,DC=test,DC=local"
# grant group ssh access
realm permit --groups "Domain Admins"
realm permit --groups "Linux Admins"
# Edit /etc/sudoers
"%Domain Admins" ALL=(ALL) ALL
"%Linux Admins" ALL=(ALL) ALL
#*********** Configuring the Base Operating System with Offline Repositories
1. Download and copy RHEL 7.9 and Satellite 6.10, plus the following two SELinux packages to Satellite base system
/var/tmp
2. Install the SELinux packages
yum localinstall /var/tmp/selinux-policy-3.13.1-268.el7_9.2.noarch.rpm
yum localinstall /var/tmp/selinux-policy-targeted-3.13.1-268.el7_9.2.noarch.rpm
3. create mount point/directory
mkdir /media/rhel7-server
mount -o loop /var/tmp/rhel7-Server-DVD.iso /media/rhel7-server
cp /media/rhel7-server/media.repo /etc/yum.repos.d/rhel7-server.repo
# Edit rhel7-server.repo, add baseurl directive
baseurl=file:///media/rhel7-server/
yum repolist # verify the repository has been configured
yum repolist -v
mkdir /media/sat6
mount -o loop /var/tmp/sat6-DVD.iso /media/sat6
yum repolist -v # verify satellite packages accessible
yum repolist all
#*********** Installing the Satellite Packages from the Offline Repositories
1. Ensure the ISO images for Red Hat Enterprise Linux Server and Red Hat Satellite are mounted:
findmnt -t iso9660
2. Import the Red Hat GPG keys:
rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
3. Ensure the base operating system is up to date
yum update
4. Install Satellite
yum update # ensure the base OS is up to date with the binary ISO image
cd /media/sat6
./install_packages
If you have successfully installed the Satellite packages, the following message is displayed:
Install is complete. Please run satellite-installer --scenario satellite
5. Installing the SOS Package on the Base Operating System
Install the sos package on the base operating system so that
you can collect configuration and diagnostic information from a Red Hat Enterprise Linux system
yum install sos
6. Configure satellite
Use satellite-installer --scenario satellite --help to list the available options
Enter the following command with any additional options that you want to use:
# satellite-installer --scenario satellite \
--foreman-initial-organization "initial_organization_name" \
--foreman-initial-location "initial_location_name" \
--foreman-initial-admin-username admin_user_name \
--foreman-initial-admin-password admin_password
7. Verify installation logs
The script displays its progress and writes logs to /var/log/foreman-installer/satellite.log.
tail -f /var/log/messages
8. Enabling the Disconnected Mode (disconnected network installation only)
hammer settings set --name content_disconnected --value true
Note: From GUI https://satellite.test.lab
a. In the Satellite web UI, navigate to Administer -> Settings
b. Click the Content tab
c. Set the "Disconnected mode" value to "Yes"
Change the RHEL server IP address or default gateway
/etc/sysconfig/network-scripts/ifcfg-<device> # for example /etc/sysconfig/network-scripts/ifcfg-ens192
nmcli con load /etc/sysconfig/network-scripts/ifcfg-ens192 # load the updated configuration
Errors
Red Hat Satellite Installation or upgrade fails with the exception
https://access.redhat.com/solutions/3370091
# Error 1: Puppet - Error: Cannot determine basic system flavour
Starting puppetserver Service...
Failed to load feature test for posix: can't find user for 0
Puppet::Error: Cannot determine basic system flavour
Resolution
1. Assign executable permission to /tmp
2. Following modifications need to be performed to use any other mount point that has "exec" bit set,
which can then be used as a temporary mount point for Java -
# vim /etc/sysconfig/puppetserver
Add the following line in the above file and save the file:
JAVA_ARGS="-Xms2G -Xmx2G -XX:MaxPermSize=256m -Djava.io.tmpdir=/var/tmp"
Note: In this example, the mount point has been set to "/var/tmp"
3. Verify
# systemctl restart puppetserver
# systemctl status puppetserver ----> Ensure that the service is running
# ps -ef | grep java.io.tmpdir | grep puppetlabs ----> Ensure that new "tmp" partition is in effect
4. Refer below step in case the above step does not resolve the issue:
# vim /etc/foreman-installer/custom-hiera.yaml
Adding below line in the above file can also be used as a workaround for this issue:
puppet::server_jvm_extra_args: '-XX:MaxPermSize=256m -Djava.io.tmpdir=/custom/dir/location'
Note:
1) Either way, you’ll need to set the permissions of the directory to 1777.
This allows the Puppet Server JRuby process to write a file to '/tmp' and then execute it.
If permissions are set incorrectly, you’ll get a massive stack trace without much useful information in it.
2) The changes in /etc/sysconfig/puppetserver file might get overwritten with Satellite upgrade.
Hence it is recommended to make changes in /etc/foreman-installer/custom-hiera.yaml file.
Root Cause
If /tmp is mounted with the option noexec, puppetserver assumes that it is running on a Windows server,
because it cannot determine the basic system flavour.
Puppetserver service fails to start with error - Permission denied
# Permission denied - /var/log/puppetlabs/puppetserver" in Red Hat Satellite 6
# Issue
The puppetserver service does not start and the following errors are observed on the Red Hat Satellite/Capsule server.
/var/log/messages
satellite puppetserver: (RuntimeError) Got 1 failure(s) while initializing:
File[/var/log/puppetlabs/puppetserver]: change from 'absent' to 'directory' failed:
Could not set 'directory' on ensure: Permission denied - /var/log/puppetlabs/puppetserver
Failed to start puppetserver Service.
Unit puppetserver.service entered failed state.
puppetserver.service failed.
# Resolution
1. Ensure that the ownership\permission\selinux_context of the directory
/var/log/puppetlabs/puppetserver
looks like the following
# ls -ld /var/log/puppetlabs/puppetserver -Z
drwxr-x---. puppet puppet system_u:object_r:var_log_t:s0 /var/log/puppetlabs/puppetserver
2. If the issue persists, Ensure that /var/log/puppetlabs itself has the correct ownership\permission\selinux_context applied.
# ls -ldZ /var/log/puppetlabs
drwxr-xr-x. root root system_u:object_r:var_log_t:s0 /var/log/puppetlabs
# Root Cause
The permission of the /var/log/puppetlabs directory was set to 740 (drwxr-----)
whereas the expected permission is 755, i.e. drwxr-xr-x
# Diagnostic Steps
1. Verify the permission\ownership\selinux_context of the /var/log/puppetlabs and its underlying directory and files.
# namei -lom /var/log/puppetlabs/puppetserver/*.log
# ls -lRZa /var/log/puppetlabs
2. Verify if there are any SELinux denials captured inside /var/log/audit/audit.log related to puppet or puppetserver.
How to verify that all services are running fine on a Red Hat Satellite Server
https://ngelinux.com/how-to-verify-if-all-services-are-running-fine-on-redhat-satellite-server/
1. Check to ping all services parts of Satellite Server
# hammer ping
2. If anything has failed, we need to further check all services and start the corresponding services.
# katello-service status
3. Restart the stopped or pending services
Example
systemctl stop rh-mongodb34-mongod.service
systemctl start rh-mongodb34-mongod.service
systemctl status rh-mongodb34-mongod.service
4. Now check the status again
# katello-service status