Compute Requirements
Hardware | Specification
---|---
Virtual Machines |
Storage Notes |
Virtualization software | VMware vSphere ESXi 6 or higher; VMware Workstation 12 or higher
Yarnman Deployment
Setup Process
Steps | Purpose | Notes | Dependencies | |
---|---|---|---|---|
1 | Deploy OVA | Deploy all yarnman virtual machines | ||
2 | Set IP address | Set static IP address for yarnman | ||
3 | Generate Certificates | Generate service container certificates | If changes are required at this step the script can be run again https://yarnlab.atlassian.net/wiki/spaces/YMNYSP/pages/28221112332916745332/Yarnman+PH4+Deployment+and+Installation#ym+Photon+Powered+YM-PH+-+Command+Line+Interface+Guide+CLI#ym-generate-certs.sh | 
4 | Install yarnman | Install yarnman and initialise system | If changes are required at this step the script can be run again https://yarnlab.atlassian.net/wiki/spaces/YMNYSP/pages/28221112332916745332/Yarnman+PH4+Deployment+and+Installation#ym+Photon+Powered+YM-PH+-+Command+Line+Interface+Guide+CLI#ym-install.sh | 
5 | Encrypt configuration | Encrypt keys and config using clevis/tang | The other nodes must be deployed and initialised before this step can be performed | 
6 | Local node customisation | Customise local node | ||
7 | Enable CouchDB clustering | Only required for clustered yarnman deployments | Local node customisation |
Deploy Yarnman OVA
1. Deploy yarnman OVA to VMware
The Yarnman OVA can be deployed either using VMware OVF Tool (ovftool) or by uploading the OVA to vSphere/ESXi
...
Info |
---|
If using Ovftool to deploy
|
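The exact ovftool options depend on the environment; a minimal sketch for deploying directly to an ESXi host is shown below, where the OVA filename, credentials, datastore, network and target locator are all placeholders.
Code Block |
---|
ovftool \
  --name=yarnman-test \
  --datastore=datastore1 \
  --network="VM Network" \
  --diskMode=thin \
  --acceptAllEulas \
  --powerOn \
  yarnman-ph4.ova \
  vi://root@esxi-host.lab.yarnlab.io/ |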
2. Using VMware Console log into Yarnman to bootstrap configuration
1. Log in to yarnman using the VMware console
Info |
---|
Default username: root Password: yarnman |
...
Note that you will be prompted to change the root password on first login. The root account cannot be used for SSH.
2. Set the static IP and other network settings
...
Code Block |
---|
root@yarnman [ ~ ]# ym-set-static-ip.sh
Do you want to set a static IP? Y or Ny
set static
Please, select a network interface from the numberic index:
0 eth0
1 docker0
0
Selected eth0
*** Please enter the following details: ***
Hostname: yarnman-test
IP Address: 10.101.10.37
Netmask Bits: 24
Gateway: 10.101.10.1
DNS: 10.101.205.200
Domain: lab.yarnlab.io
NTP: 10.101.205.200 |
3. Confirm Network Settings
...
Code Block |
---|
Applying the following configuration:
Interface = eth0
Hostname = yarnman-test
IP Address = 10.101.10.37
Netmask = 24
Gateway = 10.101.10.1
DNS = 10.101.205.200
Domain = lab.yarnlab.io
NTP = 10.101.205.200
Is this correct? Y or N |
Console output from the previous step - no action required
...
Code Block |
---|
setting static ip - netmgr ip4_address --set --interface eth0 --mode static --addr 10.101.10.37/24 --gateway 10.101.10.1
IPv4 Address Mode: static
IPv4 Address=10.101.10.37/24
IPv4 Gateway=10.101.10.1
use --dhcp default value 0.
use --autoconf default value 0.
setting hostname - netmgr hostname --set --name yarnman-test
Hostname: yarnman-test
# Begin /etc/hosts (network card version)
::1 ipv6-localhost ipv6-loopback
127.0.0.1 localhost.localdomain
127.0.0.1 localhost
127.0.0.1 yarnman
# End /etc/hosts (network card version)
10.101.10.37 yarnman-test
setting dns servers - netmgr dns_servers --set --mode staic --servers 10.101.205.200
DNSMode=static
DNSServers=127.0.0.53
nameserver 10.101.205.200
setting dns servers - netmgr dns_domains --set --domains lab.yarnlab.io
Domains=domains.
setting dns servers - netmgr ntp_servers --set --servers 10.101.205.200
NTPServers= 10.101.205.200
Bootstrap configuration complete |
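Optionally, the applied settings can be sanity-checked with generic commands before continuing (this is an extra check, not part of the scripted flow):
Code Block |
---|
ip addr show eth0        # confirm the static address
ip route show            # confirm the default gateway
hostnamectl              # confirm the hostname
cat /etc/resolv.conf     # confirm DNS servers and search domain |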
4. Set the password for the yarnman user
This user is used for SSH access, with a userid of yarnman
...
Code Block |
---|
yarnman user not found adding now
Set yarnman password
New password:
BAD PASSWORD: The password is shorter than 8 characters
New password:
Retype new password:
passwd: password updated successfully
Adding yarnman-user to SSH allowed groups |
5. If certificates are not present the script will ask the user to automatically generate local certificates
Note |
---|
These certificates are for local services and there is no advantage in using signed certificates - these are not the browser certificates |
...
Code Block |
---|
Certificates not present
Do you want to generate certificates? Y or N |
Certificate verification
...
Code Block |
---|
Applying the following configuration:
Certificate Duration Days = 3650
Certificate Country = AU
Certificate State = NSW
Certificate Location = yarnlab
Certificate Organisation = yarnlab
Certificate Common Name = yarnman-test.lab.yarnlab.io
Certificate Alt Names = DNS:yarnman-test.local,IP:10.101.10.37
Is this correct? Y or N |
Info |
---|
If you don't accept the certificate you can use the script ym-generate-certs.sh |
Certificate generation output
...
Code Block |
---|
Cenerating Certificates
Generating yarnman rootCA
Generating Certificates for registry
Certificate request self-signature ok
subject=C = AU, ST = NSW, L = yarnlab, O = yarnlab, CN = yarnman-test.lab.yarnlab.io
writing RSA key
Generating Certificates for couchdb
Certificate request self-signature ok
subject=C = AU, ST = NSW, L = yarnlab, O = yarnlab, CN = yarnman-test.lab.yarnlab.io
writing RSA key
Certificates Generated
Yarnman local.yaml is not present |
6. If yarnman has not been installed the script will prompt to set the database password for yarnman
...
Code Block |
---|
Yarnman local.yaml is not present
Do you want to install yarnman? Y or Ny
Install Yarnman
Set Couch DB password:
Couch password (again): |
...
Code Block |
---|
1660723089554 INFO Default authentication database has been created and prepared.
1660723089564 INFO Default role default created.
1660723089622 INFO Password changed for user yarnman successfully.
1660723089632 INFO Default Yarnman User yarnman created.
1660723089647 INFO Default role default has had its permissions updated.
1660723089657 INFO Default policy Central DB-Only Policy created.
1660723089666 INFO We have successfully enrolled the node.
1660723089678 INFO We have successfully created a node registration.
1660723089693 INFO Configuration Standalone Yarnman Proxy has been successfully created.
1660723089702 INFO Configuration Standalone Yarnman Administration App has been successfully created.
1660723089711 INFO Configuration Standalone Yarnman Workflow Service has been successfully created.
1660723089740 INFO Both public and private encryption keys been located and verified.
1660723090021 INFO SSL key and cert have been generated (self-signed).
1660723090022 WARN Setting directory permissions.
1660723090022 INFO Installation process for Yarnman Standalone Core has been completed successfully.
1660723090022 INFO Go to Admin-App and then add services.
Imported 1 GPG key to remote "photon"
* photon 6271beba2e07da40ad3480af0fbba313a3c26e63f425174e9b25b14b302a1f09.0
Version: 4.0_yarnman
origin refspec: photon:photon/4.0/x86_64/yarnman
GPG: Signature made Wed 17 Aug 2022 05:50:59 AM UTC using RSA key ID 876CE99C337FE298
GPG: Good signature from "Yarnlab Photon Test Key <contact@yarnlab.io>"
GPG: Key expires Wed 29 May 2024 09:30:23 AM UTC
[+] Running 4/4
 ⠿ Container ym-yarnman Removed 10.3s
 ⠿ Container ym-couchdb Removed 1.6s
 ⠿ Container ym-redis Removed 0.2s
 ⠿ Network yarnman_yl-yarnman Removed 0.1s
removing yarnman registry
Stopping local registry containers
Removing local registry images
● yarnman.service - yarnman
 Loaded: loaded (/usr/lib/systemd/system/yarnman.service; disabled; vendor preset: enabled)
 Active: active (running) since Wed 2022-08-17 07:58:32 UTC; 6ms ago
 Process: 4211 ExecStartPre=/usr/bin/docker-compose -f docker-compose.yml down (code=exited, status=0/SUCCESS)
 Main PID: 4221 (docker-compose)
 Tasks: 5 (limit: 4694)
 Memory: 4.9M
 CGroup: /system.slice/yarnman.service
 └─4221 /usr/bin/docker-compose -f docker-compose.yml -f docker-compose-override.yml up --remove-orphans
Aug 17 07:58:32 yarnman-test systemd[1]: Starting yarnman...
Aug 17 07:58:32 yarnman-test docker-compose[4211]: yarnman Warning: No resource found to remove
Aug 17 07:58:32 yarnman-test systemd[1]: Started yarnman.
Created symlink /etc/systemd/system/multi-user.target.wants/yarnman.service → /usr/lib/systemd/system/yarnman.service.
Yarnman installation finished |
Tip |
---|
3-minute screencast of the deployment: https://youtu.be/F_JBA5B_QzI |
7. In a web browser, browse to the Yarnman IP address and set the administrator account password.
Accept the End User License Agreement by selecting the check box.
Under the Set Administrator Password option, enter the password that is used later to log in to the GUI and click "Save Acceptance and Update Administrator".
Login with the username of the administrator and password that you created.
Yarnman is installed
Key and Configuration Encryption
Info |
---|
A script will be added in the future to automate creation of the clevis.json, although it will require out-of-band collection of the thp |
...
Collect the thp (thumbprint) for each tang server individually; this is a unique identifier for each node's tang server
Code Block |
---|
yarnman@ym-ph-test [ ~ ]$ sudo ym-service-commands.sh tang-thp
E7SN3eGxrnyoGiHGJBt4GDU8MRw |
OR on older versions this command was previously:
Code Block |
---|
yarnman@ym-ph-test [ ~ ]$ sudo ym-service-commands.sh tang-adv
E7SN3eGxrnyoGiHGJBt4GDU8MRw |
Run the key and config encryption
Code Block |
---|
Node1
sudo ym-encrypt-at-rest.sh
Database key found proceeding
Number of pins required for decryption :1
Number of pins this must be equal or greater than the number of pins required for decryption :3
Enter URL for tang server 1 :http://10.101.10.13:6655
Enter THP for tang server 1 :o38piqOs5UwunlcUmqMVwulml34
Connection successful to : http://10.101.10.10:6655
Enter URL for tang server 2 :http://10.101.10.11:6655
Enter THP for tang server 2 :0Lqk7DroJ0g3patTCgTweMUAHPc
Connection successful to : http://10.101.10.11:6655
Enter URL for tang server 3 :http://10.101.10.12:6655
Enter THP for tang server 3 :GEpmSTQfz8ctVxdgQEp_rnS3za
Connection successful to : http://10.101.10.12:6655
...
Node4
sudo ym-encrypt-at-rest.sh
Database key found proceeding
Number of pins required for decryption :1
Number of pins this must be equal or greater than the number of pins required for decryption :3
Enter URL for tang server 1 :http://10.101.10.10:6655
Enter THP for tang server 1 :DwLco7FJtXWxFTprQ5M3cojJsZo
Connection successful to : http://10.101.10.10:6655
Enter URL for tang server 2 :http://10.101.10.11:6655
Enter THP for tang server 2 :0Lqk7DroJ0g3patTCgTweMUAHPc
Connection successful to : http://10.101.10.11:6655
Enter URL for tang server 3 :http://10.101.10.12:6655
Enter THP for tang server 3 :GEpmSTQfz8ctVxdgQEp_rnS3za
Connection successful to : http://10.101.10.12:6655 |
Note |
---|
Do not include the local server in the encryption at rest. If you have 4 nodes, you will enter 3 for the number of pins, and exclude the IP address of the local server
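For reference, the resulting pin layout for a 4-node deployment (threshold of 1, 3 remote tang pins, local node excluded) looks like the structure below; this mirrors the JSON printed by ym-encrypt-at-rest.sh later in this guide, with placeholder thumbprints.
Code Block |
---|
{
  "t": 1,
  "pins": {
    "tang": [
      { "url": "http://10.101.10.10:6655", "thp": "<thp-of-remote-node-1>" },
      { "url": "http://10.101.10.11:6655", "thp": "<thp-of-remote-node-2>" },
      { "url": "http://10.101.10.12:6655", "thp": "<thp-of-remote-node-3>" }
    ]
  }
} |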
Customisation
These manual customisations will be moved into scripts in a future release
When editing .yml documents, ensure the correct space indentation is used for the relevant lines
Yarnman Application Additional Ports
With yarnman Photon, additional steps are required for adding secondary local auth administration access
This step requires root access
to switch to root access run the following command “su root” and enter the root password set during installation
Manually edit the following file
nano /var/opt/yarnlab/yarnman/docker-compose-override.yml
Code Block |
---|
version: '3.7'
services:
  yarnman:
    ports:
      - "3999:3999"
    expose:
      - "3999"
Ensure that the top row shows version: '3.7'
Create the 2nd Administration application and ensure the port selected matches what is set for ports and expose in docker-compose-override.yml
Restart yarnman services
sudo ym-service-commands.sh restart
You will now be able to access the second administration application on port 3999 using https://<IP address>:3999/
NOTE that the http to https redirect will not work on this port and https:// must be entered
It is suggested to use a private browsing window or similar, as the authentication sessions will conflict with LDAP users and the older session will close
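A quick way to confirm the additional port is exposed after the restart (optional check; the IP address is a placeholder and -k is needed because the certificate is self-signed):
Code Block |
---|
curl -k https://10.101.10.37:3999/ |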
Enable database access for Replication
Info |
---|
This step must be performed to enable couchdb clustering on every node |
This step requires root access
to switch to root access run the following command “su root” and enter the root password set during installation
Manually edit the following file
nano /var/opt/yarnlab/yarnman/docker-compose-override.yml
Code Block |
---|
version: '3.7'
services:
  couchdb:
    ports:
      - "6984:6984"
Ensure that the top row shows version: '3.7'
NOTE: If you already have existing services in /var/opt/yarnlab/yarnman/docker-compose-override.yml, add another service entry under services; below shows 2 services in the file
NOTE!! Make sure the spaces are exactly as below in the yml file, else docker may not start
Code Block |
---|
version: '3.7'
services:
  yarnman:
    ports:
      - "3999:3999"
    expose:
      - "3999"
  couchdb:
    ports:
      - "6984:6984"
Restart yarnman services
Code Block |
---|
sudo ym-service-commands.sh restart
restarting yarnman.service
● yarnman.service - yarnman
 Loaded: loaded (/usr/lib/systemd/system/yarnman.service; enabled; vendor preset: enabled)
 Active: active (running) since Mon 2022-11-14 04:23:53 UTC; 9ms ago
 Process: 56653 ExecStartPre=/usr/bin/docker-compose -f docker-compose.yml down (code=exited, status=0/SUCCESS)
 Main PID: 56663 (docker-compose)
 Tasks: 5 (limit: 4694)
 Memory: 7.1M
 CGroup: /system.slice/yarnman.service
 └─56663 /usr/bin/docker-compose -f docker-compose.yml -f docker-compose-override.yml up --remove-orphans
Nov 14 04:23:53 ym-ph-test systemd[1]: Starting yarnman...
Nov 14 04:23:53 ym-ph-test docker-compose[56653]: yarnman Warning: No resource found to remove
Nov 14 04:23:53 ym-ph-test systemd[1]: Started yarnman.
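To confirm the database port is reachable after the restart, CouchDB answers a plain GET on its root URL (optional check; the IP address is a placeholder, -k because the certificate is self-signed):
Code Block |
---|
curl -k https://10.101.10.37:6984/
# A JSON welcome document such as {"couchdb":"Welcome",...} indicates the port is exposed |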
Optionally access to couchdb can be restricted to IP addresses
Changing private key default passphrase
This step requires root access
...
to switch to root access run the following command “su root” and enter the root password set during installation
If the encryption at rest process has been run previously the private key must be decrypted
...
If the key is not encrypted skip the decryption step
...
to verify run the following command. If no file is found, that means the key is encrypted
Code Block |
---|
ls -la /var/opt/yarnlab/yarnman/config/private-encryption-key.pem
ls: cannot access '/var/opt/yarnlab/yarnman/config/private-encryption-key.pem': No such file or directory |
...
to verify the key is encrypted
...
Code Block |
---|
ls -la /var/opt/yarnlab/yarnman/config/private-encryption-key.pem.jwe
-rw-r--r-- 1 ym-yarnman-app ym-yarnman-app-gp 8129 Nov 14 03:40 /var/opt/yarnlab/yarnman/config/private-encryption-key.pem.jwe |
...
Switch into the docker container by running the following command - note that the prompt changes from the root shell to the container shell
Code Block |
---|
docker exec -it ym-yarnman /bin/bash
ym-yarnman-app@yl-ym-yarnman:/opt/yarnlab/yarnman$
...
to decrypt the key run the following command
Code Block clevis decrypt < /opt/yarnlab/yarnman/config/private-encryption-key.pem.jwe > /opt/yarnlab/yarnman/config/private-encryption-key.pem
...
reset permissions
chmod 600 /opt/yarnlab/yarnman/config/private-encryption-key.pem
...
change the passphrase from the default “yarnman”
Code Block |
---|
ssh-keygen -p -f /opt/yarnlab/yarnman/config/private-encryption-key.pem
Enter old passphrase:
Enter new passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved with the new passphrase.
...
back up the old key
Code Block mv /opt/yarnlab/yarnman/config/private-encryption-key.pem.jwe /opt/yarnlab/yarnman/config/private-encryption-key.pem.jwe.bk
...
exit the container shell
Code Block |
---|
exit
exit
root@ym-ph-test [ /var/opt/yarnlab/ ]#
verify the key is decrypted and that the file is present with the expected permissions
Code Block |
---|
ls -la /var/opt/yarnlab/yarnman/config/private-encryption-key.pem
-rw-r--r-- 1 ym-yarnman-app ym-yarnman-app-gp 3326 Nov 20 20:21 /var/opt/yarnlab/yarnman/config/private-encryption-key.pem
...
add new passphrase to /var/opt/yarnlab/yarnman/config/local.yaml
Code Block |
---|
encryption:
  dbPassphrase: 'Clouduc123'
To update using yq:
yq -i '.encryption.dbPassphrase = "Clouduc123"' /var/opt/yarnlab/yarnman/config/local.yaml
...
encrypt passphrase
Code Block |
---|
docker exec ym-yarnman node ./scripts/encrypt-local-config.js -k encryption.dbPassphrase
1668977064139 INFO Starting the encryption of 1 local configuration fields through Clevis Shamir Secret Sharing
1668977064142 INFO Attempting to encrypt the following local config fields: encryption.dbPassphrase
1668977064371 INFO Local key 'encryption.dbPassphrase' encrypted successfully
1668977064371 INFO 1 local config fields encrypted, 0 fields omitted
verify
Code Block cat /var/opt/yarnlab/yarnman/config/local.yaml
...
re encrypt keys
Code Block |
---|
docker exec ym-yarnman node ./scripts/encrypt-keys.js
1668977138519 INFO Encrypting private and SSL keys using settings:
1668977138521 INFO - not overwriting existing encrypted files and not deleting any original files after encryption
1668977138522 INFO --------------------------------
1668977138522 INFO Encrypting...
1668977138768 INFO - 'private-encryption-key.pem' encrypted successfully
1668977138768 INFO - 'ssl-key.pem' already encrypted, not overwriting
1668977138768 INFO --------------------------------
1668977138768 INFO Finished encrypting the files
...
restart services
Code Block systemctl restart yarnman
verify - while the services are restarting look for the following in the logs
Code Block |
---|
docker logs ym-yarnman -f
2|administration-app-b6925c3239fc4c878ff6888ce5cb2b51 | 1668977206414 INFO Decrypting 1 encrypted configuration keys
2|administration-app-b6925c3239fc4c878ff6888ce5cb2b51 | 1668977206415 INFO Decrypting configuration key 'encryption.dbPassphrase'
2|administration-app-b6925c3239fc4c878ff6888ce5cb2b51 | 1668977206500 INFO Configuration key 'encryption.dbPassphrase' decrypted successfully
2|administration-app-b6925c3239fc4c878ff6888ce5cb2b51 | 1668977206500 INFO Finished decrypting 1 configuration keys
Setup Couchdb Replication
It is recommended to have completed the Yarngate LDAP configuration with at least 1 role configured before completing replication on additional nodes for the first-time setup. Refer to Yarngate Service Setup for more information
...
login to yarnman administration application web interface
...
Navigate to Authentication database
Rename the Default Authentication Database name from “Central DB” to “<Node Name> Central DB” or other suitably unique name
...
Navigate to Authentication Policies
Rename the Default Authentication Policy name from “Central DB-Only Policy” to “<Node Name> Central DB-Only Policy” or other suitably unique name
...
Navigate to Nodes and select the Standalone node
...
Update the yarnman node name
...
Navigate to Nodes, select the node you want to set up and click on the Replication tab
...
Click on Add Replication
Enter the source and target connection strings
Source Address Syntax https://10.222.1.4:6984 - NOTE that the default source address is 10.222.1.4
Source address: https://10.222.1.4:6984
Source username: architect
source password: yarnman
Remote address Syntax https://<TargetIP>:6984
Target address https://10.101.10.10:6984
Target username: architect
Target password: yarnman
Once replication is setup status can be reviewed by clicking on the replication address, eg https://10.101.10.10:6984 . If the replication shows blank, the Sync button can be pressed to kick off replication again.
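If database access has been enabled on the nodes, replication status can also be checked directly against CouchDB's replication scheduler API (an optional check; the IP address and the architect credentials are the examples used above):
Code Block |
---|
curl -k -u architect https://10.101.10.10:6984/_scheduler/jobs
# _scheduler/jobs lists running replication jobs; _scheduler/docs shows the state of each replication document |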
...
Repeat for each pair of nodes to achieve a full mesh
If there are 2 datacenters repeat for each primary node in each data centre -
2 node - 2 replications
n1->n2
n2->n1
3 node - 6 replications
n1->n2
n1->n3
n2->n1
n2->n3
n3->n1
n3->n2
4 node - 12 replications
n1->n2
n1->n3
n1->n4
n2->n1
n2->n3
n2->n4
n3->n1
n3->n2
n3->n4
n4->n1
n4->n2
n4->n3
...
If you have any issues with replications in a failing state, run the following command and review the log messages
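The exact command is not specified above; a reasonable starting point, based on the service commands documented later in this guide, is to follow the database logs while reproducing the issue:
Code Block |
---|
sudo ym-service-commands.sh couchdb-logs |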
Yarnman PH4 Command Reference
Note |
---|
All yarnman service commands need to be run with sudo |
ym-set-static-ip.sh
Info |
---|
This command sets the static IP address and other bootstrap network settings |
ym-generate-certs.sh
Info |
---|
This command generates the service container certificates |
ym-install.sh
Info |
---|
This command installs yarnman and initialises the system |
ym-encrypt-at-rest.sh
Info |
---|
This command encrypts the local keys and configuration using clevis/tang |
Code Block |
---|
yarnman@ym-ph-test [ ~ ]$ sudo ym-encrypt-at-rest.sh
Database key found proceeding
Number of pins required for decryption :1
Number of pins this must be equal or greater than the number of pins required for decryption :3
Enter URL for tang server 1 :http://10.101.10.10:6655
Enter THP for tang server 1 :DwLco7FJtXWxFTprQ5M3cojJsZo
Connection successful to : http://10.101.10.10:6655
Enter URL for tang server 2 :http://10.101.10.11:6655
Enter THP for tang server 2 :0Lqk7DroJ0g3patTCgTweMUAHPc
Connection successful to : http://10.101.10.11:6655
Enter URL for tang server 3 :http://10.101.10.12:6655
Enter THP for tang server 3 :GEpmSTQfz8ctVxdgQEp_rnS3za
Connection successful to : http://10.101.10.12:6655
{
"t": 1,
"pins": {
"tang": [
{
"url": "http://10.101.10.10:6655",
"thp": "DwLco7FJtXWxFTprQ5M3cojJsZo"
},
{
"url": "http://10.101.10.11:6655",
"thp": "0Lqk7DroJ0g3patTCgTweMUAHPc"
},
{
"url": "http://10.101.10.12:6655",
"thp": "GEpmSTQfz8ctVxdgQEp_rnS3za"
}
]
}
}
Do you want to encrypt configuration? Y or Ny
encrypt configuration
Encrypting keys
1668397245104 INFO Encrypting private and SSL keys using settings:
1668397245106 INFO - not overwriting existing encrypted files and not deleting any original files after encryption
1668397245106 INFO --------------------------------
1668397245106 INFO Encrypting...
1668397245308 INFO - 'private-encryption-key.pem' encrypted successfully
1668397245543 INFO - 'ssl-key.pem' encrypted successfully
1668397245543 INFO --------------------------------
1668397245543 INFO Finished encrypting the files
Encrypting config
1668397245643 INFO Starting the encryption of 1 local configuration fields through Clevis Shamir Secret Sharing
1668397245743 INFO Attempting to encrypt the following local config fields: couchdb.password
1668397245843 INFO Local key 'couchdb.password' encrypted successfully
1668397245943 INFO 1 local config fields encrypted, 0 fields omitted
Do you want to take a backup of database key this will be shown on console? Y orNy
Echo private key to console
-----BEGIN RSA PRIVATE KEY-----
REMOVED
-----END RSA PRIVATE KEY-----
Encrypted private key is 8129 bytes
restarting services
Config encryption is complete |
ym-upgrade.sh
Info |
---|
This command upgrades yarnman |
Copy the upgrade file into /var/opt/yarnlab/upgrade
e.g. wget http://xxxxxxxx or sftp/scp the file onto the server, as in the sketch below
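For example, the upgrade file can be pushed from a workstation with scp (a sketch with a placeholder filename and host; any SCP/SFTP client will do, and the destination may need adjusting if the yarnman user cannot write to the upgrade directory directly):
Code Block |
---|
scp ym-registry-package-upgrade.tar.gz yarnman@10.101.10.37:/var/opt/yarnlab/upgrade/ |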
SSH into the yarnman host and change into the directory /var/opt/yarnlab/upgrade
Run the command yarnman@host [ ~ ]$ sudo ym-upgrade.sh upgradefile.tar.gz
Code Block |
---|
Yarnman Upgrade file found /var/opt/yarnlab/upgrade/ym-registry:package-upgrade-yl-ph-8023676b.tar.gz
Do you want to upgrade yarnman to ym-registry:package-upgrade-yl-ph-8023676b.tar.gz ? Y or Ny
Upgrade yarnman
Stopping yarnman services
Stopping local registry containers
Removing local registry images
Loading local registry package tgz
Loaded image: ym-registry:package
Launching yarnman registry
f39ac12322df9a3add72c0ad135e691c6fc3ca0fc7be463a5b4534b88e8e68e6
Loading upgrade pre-req script from registry container
Starting upgrade pre-req script
TEMP upgrade script
Setting up tang
groupadd: group 'ym-tang-app-gp' already exists
Showing package container registry catalog
{"repositories":["ym-couchdb","ym-ostree-upgrade","ym-redis","ym-tang","ym-yarnman"]}
{"name":"ym-ostree-upgrade","tags":["yl-ph-8023676b"]}
{"name":"ym-yarnman","tags":["yl-ph-8023676b"]}
[+] Running 2/4
*** lots of docker pull output ***
*** lots of ostree output ***
State: idle
Deployments:
photon:photon/4.0/x86_64/yarnman
Version: 4.0_yarnman (2022-11-16T23:54:09Z)
Commit: 9941830a095f3a8630eabca846414afa03a935e95462845f7e71cc17f8437438
GPGSignature: Valid signature by 352365935446AC840528AF8703F9C95608035F3C
Diff: 15 added
● photon:photon/4.0/x86_64/yarnman
Version: 4.0_yarnman (2022-11-14T04:04:13Z)
Commit: 7fe66e8afc639d7a006b60208b5981748426ef4487581924e897d69a7b7c87cd
GPGSignature: Valid signature by 352365935446AC840528AF8703F9C95608035F3C
Do you want to remove upgrade file ? Y or N
Removing :ym-registry:package-upgrade-yl-ph-n18-a23846af.tar.gz
Removing old containers
Removing old yarnman image :localhost:5000/ym-yarnman:yl-ph-n18-475aac7a
Removing old couchdb image :localhost:5000/ym-couchdb:yl-ph-n18-475aac7a
Removing old redis image :localhost:5000/ym-redis:yl-ph-n18-475aac7a
Removing old tang image :localhost:5000/ym-tang:yl-ph-n18-475aac7a
Do you want to reboot yarnman ? Y or N
Reboot yarnman |
Note |
---|
A reboot may be required to apply OS patches if they are bundled into the update. |
ym-backup-setup.sh
Sets up the local backup service account on the yarnman node, and the passphrase used on the backup
Code Block |
---|
yarnman@node1 [ ~ ]$ sudo ym-backup-setup.sh
Starting yarnman ph4 backup
Backup password not set
Set Backup password:
Backup password (again):
Clevis not setup
using local backup password
no backup configuration file found creating
yarnman@node1 [ ~ ]$ |
Note |
---|
No login access is available to the backup service account |
ym-backup-actions.sh
All the backup commands are performed via the script above
Set up SFTP as the backup method and SSH public keys
Code Block |
---|
yarnman@node1 [ ~ ]$ sudo ym-backup-actions.sh -p sftp -a sftp-user-setup
backup config found
PROFILE_NAME_VAR = sftp
ACTION_VAR = sftp-user-setup
RESTORECOMMIT =
RESTORE_IP =
RESTORE_PATH =
settting sftp mode
profile mode :yarnman-sftp
creating keys for ym-backup-user
public key for ssh/sftp
ssh-rsa ****LongStringForPubKey****
yarnman@node1 [ ~ ]$
|
Copy ssh pub key to sftp server
If SSH access is available to the SFTP server you can copy the SSH public key for login, otherwise provide the key to your SFTP administrator.
Code Block |
---|
yarnman@node1 [ ~ ]$ su
Password:
yarnman@node1 [ /var/home/yarnman ]# sudo -u ym-backup-user ssh-copy-id -i /home/ym-backup-user/.ssh/id_rsa.pub sftpbackup@10.101.10.86
/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/ym-backup-user/.ssh/id_rsa.pub"
The authenticity of host '10.101.10.86 (10.101.10.86)' can't be established.
ED25519 key fingerprint is SHA256:****j7t+o1aQu5FoWlxS0uhKzCe414jt3****
This key is not known by any other names
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
Authorized uses only. All activity may be monitored and reported.
sftpbackup@10.101.10.86's password:
Number of key(s) added: 1
|
Setup SFTP destination for backup
The script will prompt for the backup path, IP address and userid for the SFTP server
Code Block |
---|
yarnman@node1 [ ~ ]$ sudo ym-backup-actions.sh -p sftp -a sftp-setup-connection
backup config found
PROFILE_NAME_VAR = sftp
ACTION_VAR = sftp-setup-connection
RESTORECOMMIT =
RESTORE_IP =
RESTORE_PATH =
settting sftp mode
profile mode :yarnman-sftp
SFTP config is /var/opt/yarnlab/backup/sftp
enter sftp infomation
SFTP Username: sftpbackup
SFTP Host: 10.101.10.86
SFTP backup directory Path i.e /srv/yarnman/backup: /home/sftpbackup/yarnman
sftp:yarnman@10.101.10.86:/home/sftpbackup/yarnman
yarnman@node1 [ ~ ]$ |
Info |
---|
you may be prompted for username/password if the SSH pub key hasn’t been added to the SFTP server, this is OK for the initial setup, however scheduled/automated backups will fail |
Check if backups exist at location
For first-time configuration no backups will be available, nor a backup repository, which will be set up in the next section.
Code Block |
---|
yarnman@node1 [ ~ ]$ sudo ym-backup-actions.sh -p sftp -a snapshots
backup config found
PROFILE_NAME_VAR = sftp
ACTION_VAR = snapshots
RESTORECOMMIT =
RESTORE_IP =
RESTORE_PATH =
settting sftp mode
profile mode :yarnman-sftp
Checking snapshots for profile :yarnman-sftp
2023/08/11 04:41:34 profile 'yarnman-sftp': starting 'snapshots'
2023/08/11 04:41:34 unfiltered extra flags:
subprocess ssh: Authorized uses only. All activity may be monitored and reported.
Fatal: unable to open config file: Lstat: file does not exist
Is there a repository at the following location?
sftp:sftpbackup@10.101.10.86:/home/sftpbackup/yarnman
2023/08/11 04:41:34 snapshots on profile 'yarnman-sftp': exit status 1 |
Initialise the repository
The password set during the initial ym-backup-setup.sh will automatically be used
Code Block |
---|
yarnman@node1 [ ~ ]$ sudo ym-backup-actions.sh -p sftp -a init
backup config found
PROFILE_NAME_VAR = sftp
ACTION_VAR = init
RESTORECOMMIT =
RESTORE_IP =
RESTORE_PATH =
settting sftp mode
profile mode :yarnman-sftp
Initialise backup for profile :yarnman-sftp
2023/08/11 04:43:57 profile 'yarnman-sftp': starting 'init'
2023/08/11 04:43:57 unfiltered extra flags:
created restic repository 7180598c67 at sftp:yarnman@10.101.10.86:/home/sftpbackup/yarnman
Please note that knowledge of your password is required to access
the repository. Losing your password means that your data is
irrecoverably lost.
2023/08/11 04:44:00 profile 'yarnman-sftp': finished 'init'
yarnman@node1 [ ~ ]$ |
Info |
---|
Initialising can only be performed once per repository; an error will occur if it already exists. |
List backups (snapshots)
List all backups available. On a new repository this will be blank
Code Block |
---|
yarnman@node1 [ ~ ]$ sudo ym-backup-actions.sh -p sftp -a snapshots
backup config found
PROFILE_NAME_VAR = sftp
ACTION_VAR = snapshots
RESTORECOMMIT =
RESTORE_IP =
RESTORE_PATH =
settting sftp mode
profile mode :yarnman-sftp
Checking snapshots for profile :yarnman-sftp
2023/08/11 04:44:19 profile 'yarnman-sftp': starting 'snapshots'
2023/08/11 04:44:19 unfiltered extra flags:
subprocess ssh: Authorized uses only. All activity may be monitored and reported.
repository 7180598c opened (version 2, compression level auto)
2023/08/11 04:44:20 profile 'yarnman-sftp': finished 'snapshots'
yarnman@node1 [ ~ ]$ |
Manual Backup
Perform a manual backup
Code Block |
---|
yarnman@node1 [ ~ ]$ sudo ym-backup-actions.sh -p sftp -a backup
backup config found
PROFILE_NAME_VAR = sftp
ACTION_VAR = backup
RESTORECOMMIT =
RESTORE_IP =
RESTORE_PATH =
settting sftp mode
profile mode :yarnman-sftp
Running backup for profile :yarnman-sftp
2023/08/11 04:46:11 profile 'yarnman-sftp': starting 'backup'
2023/08/11 04:46:11 unfiltered extra flags:
subprocess ssh: Authorized uses only. All activity may be monitored and reported.
repository 7180598c opened (version 2, compression level auto)
lock repository
no parent snapshot found, will read all files
load index files
start scan on [/var/opt/yarnlab/yarnman/config /var/opt/yarnlab/couchdb/config /var/opt/yarnlab/couchdb/data /var/opt/yarnlab/couchdb/certs /var/opt/yarnlab/tang/db /var/opt/yarnlab/certs /var/opt/yarnlab/registry]
start backup on [/var/opt/yarnlab/yarnman/config /var/opt/yarnlab/couchdb/config /var/opt/yarnlab/couchdb/data /var/opt/yarnlab/couchdb/certs /var/opt/yarnlab/tang/db /var/opt/yarnlab/certs /var/opt/yarnlab/registry]
scan finished in 0.233s: 564 files, 5.211 MiB
Files: 564 new, 0 changed, 0 unmodified
Dirs: 348 new, 0 changed, 0 unmodified
Data Blobs: 404 new
Tree Blobs: 349 new
Added to the repository: 5.479 MiB (736.577 KiB stored)
processed 564 files, 5.211 MiB in 0:00
snapshot fa50ff98 saved
2023/08/11 04:46:12 profile 'yarnman-sftp': finished 'backup'
2023/08/11 04:46:12 profile 'yarnman-sftp': cleaning up repository using retention information
2023/08/11 04:46:12 unfiltered extra flags:
repository 7180598c opened (version 2, compression level auto)
Applying Policy: keep 3 daily, 1 weekly, 1 monthly snapshots and all snapshots with tags [[manual]] and all snapshots within 3m of the newest
snapshots for (host [node76-restore4], paths [/var/opt/yarnlab/certs, /var/opt/yarnlab/couchdb/certs, /var/opt/yarnlab/couchdb/config, /var/opt/yarnlab/couchdb/data, /var/opt/yarnlab/registry, /var/opt/yarnlab/tang/db, /var/opt/yarnlab/yarnman/config]):
keep 1 snapshots:
ID Time Host Tags Reasons Paths
-----------------------------------------------------------------------------------------------------------------
fa50ff98 2023-08-11 04:46:11 node1 ym-backup-sftp within 3m /var/opt/yarnlab/certs
daily snapshot /var/opt/yarnlab/couchdb/certs
weekly snapshot /var/opt/yarnlab/couchdb/config
monthly snapshot /var/opt/yarnlab/couchdb/data
/var/opt/yarnlab/registry
/var/opt/yarnlab/tang/db
/var/opt/yarnlab/yarnman/config
-----------------------------------------------------------------------------------------------------------------
1 snapshots
yarnman@node1 [ ~ ]$ |
Schedule
By default the schedule is set up to back up at 1am UTC every day. This can be modified in the config file as the root user
Code Block |
---|
nano /var/opt/yarnlab/yarnman/config/ym-backup-config.yml |
Code Block |
---|
PENDING
Enable Schedule
sudo ym-backup-actions.sh -p sftp -a schedule
Disable Schedule
sudo ym-backup-actions.sh -p sftp -a unschedule
Check status of schedule
sudo ym-backup-actions.sh -p sftp -a status
|
Restore backup
To restore a snapshot to an existing node.
List the snapshots available as shown earlier to restore the required snapshot.
The restore script will create a local backup before starting the restore, in the event you need to roll back.
Code Block |
---|
yarnman@node1 [ ~ ]$ sudo ym-backup-actions.sh -p sftp -a restore -r fa50ff98
backup config found
PROFILE_NAME_VAR = sftp
ACTION_VAR = restore
RESTORECOMMIT = latest
BACKUP_IP =
BACKUP_PATH =
settting sftp mode
profile mode :yarnman-sftp
Restore backup for profile :yarnman-sftp
starting restore
Restore backup for profile :yarnman-sftp commit :latest
Are you sure you want to restore backup? Y or Ny
Restore Backup
subprocess ssh: Authorized uses only. All activity may be monitored and reported.
Backup nodeId's match
Stopping yarnman services
Removing exising configuration to prevent duplicates
Starting restic restore
2023/08/16 08:08:33 profile 'yarnman-sftp': finished 'restore'
Resetting permissions
Starting Database and Encryption services
[+] Creating 5/5
✔ Network yarnman_yl-yarnman Created 0.0s
✔ Container ym-redis Created 0.1s
✔ Container ym-couchdb Created 0.1s
✔ Container ym-tang Created 0.1s
✔ Container ym-yarnman Created 0.1s
[+] Running 1/1
✔ Container ym-couchdb Started 0.3s
[+] Running 1/1
✔ Container ym-tang Started
|
If you are restoring a node in a multi node deployment you will see an additional message of
Code Block |
---|
Checking number of admin nodes
number of admin nodes :x
Yarnman is in distributed mode
Check couchdb replication on other nodes is healthy and after 5 minutes reboot yarnman or run systemctl stop yarnman.service and systemctl start yarnman.service |
This is to allow replication to all nodes, and to prevent any scheduled jobs/reports from re-running from the last backup
Rebuild Disaster recovery
Pre-Req
Deploy a new OVA with the same version as the backup
Set up as a new install (e.g. configure with IP, user/pass, generate certificates if prompted)
Install yarnman
Confirm you can reach the admin app web page. Do not log in or accept the EULA, as we will restore over this.
Set up backup to the same repo for the node to be restored. Do not initialise the repo or perform a backup
Info |
---|
A new SFTP/SSH key will be created; this will need to be added to the backup server for future automated backups to function again. Interactive (user/pass) authentication can be used for a restore if the new SSH key can't be added to the backup server at the time of restore. |
Note |
---|
The hostname doesn't need to be the same as the restored backup, however any new backups will be taken with the new hostname. If building with a different IP address, replication will need to be adjusted to the new IP address, as will Clevis/Tang if set up. |
Run the following, Refer to previous detailed command instructions if required
Code Block |
---|
sudo ym-backup-setup.sh
sudo ym-backup-actions.sh -p sftp -a sftp-user-setup
as root user ; sudo -u ym-backup-user ssh-copy-id -i /home/ym-backup-user/.ssh/id_rsa.pub sftpbackup@10.101.10.86
sudo ym-backup-actions.sh -p sftp -a sftp-setup-connection
sudo ym-backup-actions.sh -p sftp -a snapshots
sudo ym-backup-actions.sh -p sftp -a restore -r xxxxx |
The restore script will warn that we are restoring to a different node; continue.
Code Block |
---|
yarnman@node79-restore [ ~ ]$ sudo ym-backup-actions.sh -p sftp -a restore -r 5f13f62b
backup config found
PROFILE_NAME_VAR = sftp
ACTION_VAR = restore
RESTORECOMMIT = 5f13f62b
BACKUP_IP =
BACKUP_PATH =
settting sftp mode
profile mode :yarnman-sftp
Restore backup for profile :yarnman-sftp
starting restore
Restore backup for profile :yarnman-sftp commit :5f13f62b
Are you sure you want to restore backup? Y or Ny
Restore Backup
subprocess ssh: Authorized uses only. All activity may be monitored and reported.
Current Node Id is :arm_46b194ad3d374b7397fa14b1a3136d56
Backup Node Id is :arm_3110b0b79eb84bd899291d5e0d231009
Do you want to apply this backup that has different nodeId? Y or N
|
Follow instructions after the restore completes.
Alternate Manual Method (not recommended)
*** The snapshot command doesn't work in manual mode yet; it also requires sudo ym-backup-setup.sh to be run
Code Block |
---|
sudo ym-backup-actions.sh -p manual -a manual-sftp-snapshots -i 10.101.10.86 -k /home/sftpbackup/path/ |
Code Block |
---|
sudo ym-backup-actions.sh -p manual -a manual-sftp-restore -i 10.101.10.86 -k /home/sftpbackup/path/ -r xxxxx |
ym-service-commands.sh start
Info |
---|
This command starts the yarnman services |
Code Block |
---|
yarnman@yarnman-test [ ~ ]$ sudo ym-service-commands.sh start
starting yarnman.service
● yarnman.service - yarnman
Loaded: loaded (/usr/lib/systemd/system/yarnman.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2022-08-17 08:24:21 UTC; 5ms ago
Process: 56027 ExecStartPre=/usr/bin/docker-compose -f docker-compose.yml down (code=exited, status=0/SUCCESS)
Main PID: 56037 (docker-compose)
Tasks: 5 (limit: 4694)
Memory: 5.0M
CGroup: /system.slice/yarnman.service
└─56037 /usr/bin/docker-compose -f docker-compose.yml -f docker-compose-override.yml up --remove-orphans |
ym-service-commands.sh stop
Info |
---|
This command stops the yarnman services |
Code Block |
---|
yarnman@yarnman-test [ ~ ]$ sudo ym-service-commands.sh stop
stopping yarnman.service
● yarnman.service - yarnman
Loaded: loaded (/usr/lib/systemd/system/yarnman.service; enabled; vendor preset: enabled)
Active: inactive (dead) since Wed 2022-08-17 08:24:16 UTC; 6ms ago
Process: 4221 ExecStart=/usr/bin/docker-compose -f docker-compose.yml -f docker-compose-override.yml up --remove-orphans (code=exited, status=0/SUCCESS)
Process: 55552 ExecStop=/usr/bin/docker-compose -f docker-compose.yml down (code=exited, status=0/SUCCESS)
Main PID: 4221 (code=exited, status=0/SUCCESS)
Aug 17 08:24:14 yarnman-test docker-compose[4221]: ym-redis exited with code 0
Aug 17 08:24:14 yarnman-test docker-compose[55552]: Container ym-redis Removed
Aug 17 08:24:15 yarnman-test docker-compose[55552]: Container ym-couchdb Stopped
Aug 17 08:24:15 yarnman-test docker-compose[55552]: Container ym-couchdb Removing
Aug 17 08:24:15 yarnman-test docker-compose[4221]: ym-couchdb exited with code 0
Aug 17 08:24:15 yarnman-test docker-compose[55552]: Container ym-couchdb Removed
Aug 17 08:24:15 yarnman-test docker-compose[55552]: Network yarnman_yl-yarnman Removing
Aug 17 08:24:16 yarnman-test docker-compose[55552]: Network yarnman_yl-yarnman Removed
Aug 17 08:24:16 yarnman-test systemd[1]: yarnman.service: Succeeded.
Aug 17 08:24:16 yarnman-test systemd[1]: Stopped yarnman. |
ym-service-commands.sh restart
Info |
---|
this command restarts the yarnman services |
Code Block |
---|
yarnman@yarnman-test [ ~ ]$ sudo ym-service-commands.sh restart
restarting yarnman.service
● yarnman.service - yarnman
Loaded: loaded (/usr/lib/systemd/system/yarnman.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2022-08-17 08:27:36 UTC; 6ms ago
Process: 63277 ExecStartPre=/usr/bin/docker-compose -f docker-compose.yml down (code=exited, status=0/SUCCESS)
Main PID: 63287 (docker-compose)
Tasks: 6 (limit: 4694)
Memory: 4.9M
CGroup: /system.slice/yarnman.service
└─63287 /usr/bin/docker-compose -f docker-compose.yml -f docker-compose-override.yml up --remove-orphans
Aug 17 08:27:36 yarnman-test systemd[1]: Starting yarnman...
Aug 17 08:27:36 yarnman-test docker-compose[63277]: yarnman Warning: No resource found to remove
Aug 17 08:27:36 yarnman-test systemd[1]: Started yarnman.
|
ym-service-commands.sh status
Info |
---|
this command shows the systemd service status |
Code Block |
---|
yarnman@yarnman-test [ ~ ]$ sudo ym-service-commands.sh status
● yarnman.service - yarnman
Loaded: loaded (/usr/lib/systemd/system/yarnman.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2022-08-17 08:29:13 UTC; 4s ago
Process: 67157 ExecStartPre=/usr/bin/docker-compose -f docker-compose.yml down (code=exited, status=0/SUCCESS)
Main PID: 67167 (docker-compose)
Tasks: 9 (limit: 4694)
Memory: 15.7M
CGroup: /system.slice/yarnman.service
└─67167 /usr/bin/docker-compose -f docker-compose.yml -f docker-compose-override.yml up --remove-orphans
Aug 17 08:29:14 yarnman-test docker-compose[67167]: ym-couchdb | [info] 2022-08-17T08:29:14.759420Z nonode@nohost <0.11.0> -------- Application ddoc_cache started on node nonode@nohost
Aug 17 08:29:14 yarnman-test docker-compose[67167]: ym-couchdb | [info] 2022-08-17T08:29:14.769878Z nonode@nohost <0.11.0> -------- Application global_changes started on node nonode@nohost
Aug 17 08:29:14 yarnman-test docker-compose[67167]: ym-couchdb | [info] 2022-08-17T08:29:14.769962Z nonode@nohost <0.11.0> -------- Application jiffy started on node nonode@nohost
Aug 17 08:29:14 yarnman-test docker-compose[67167]: ym-couchdb | [info] 2022-08-17T08:29:14.774590Z nonode@nohost <0.11.0> -------- Application mango started on node nonode@nohost
Aug 17 08:29:14 yarnman-test docker-compose[67167]: ym-couchdb | [info] 2022-08-17T08:29:14.779025Z nonode@nohost <0.11.0> -------- Application setup started on node nonode@nohost
Aug 17 08:29:14 yarnman-test docker-compose[67167]: ym-couchdb | [info] 2022-08-17T08:29:14.779045Z nonode@nohost <0.11.0> -------- Application snappy started on node nonode@nohost
Aug 17 08:29:15 yarnman-test docker-compose[67167]: ym-yarnman | 1660724955149 WARN Setting Default startup.
Aug 17 08:29:15 yarnman-test docker-compose[67167]: ym-couchdb | [notice] 2022-08-17T08:29:15.166800Z nonode@nohost <0.334.0> 144d89930f localhost:5984 127.0.0.1 undefined GET / 200 ok 70
Aug 17 08:29:16 yarnman-test docker-compose[67167]: ym-couchdb | [notice] 2022-08-17T08:29:16.252345Z nonode@nohost <0.335.0> 23ea8ef0ca localhost:5984 127.0.0.1 undefined GET / 200 ok 1
Aug 17 08:29:17 yarnman-test docker-compose[67167]: ym-couchdb | [notice] 2022-08-17T08:29:17.323062Z nonode@nohost <0.465.0> a377eb4c4c localhost:5984 127.0.0.1 undefined GET / 200 ok 0
|
ym-service-commands.sh status-pm2
Info |
---|
this command shows the internal processes of yarnman |
Code Block |
---|
yarnman@yarnman-test [ ~ ]$ sudo ym-service-commands.sh status-pm2
┌─────┬──────────────────────────────────────────────────────────┬─────────────┬─────────┬─────────┬──────────┬────────┬──────┬───────────┬──────────┬──────────┬──────────┬──────────┐
│ id │ name │ namespace │ version │ mode │ pid │ uptime │ ↺ │ status │ cpu │ mem │ user │ watching │
├─────┼──────────────────────────────────────────────────────────┼─────────────┼─────────┼─────────┼──────────┼────────┼──────┼───────────┼──────────┼──────────┼──────────┼──────────┤
│ 2 │ administration-app-0ca298ae6a834cf29c661930c58cb621 │ default │ 2.5.18 │ fork │ 236 │ 10s │ 0 │ online │ 0% │ 137.8mb │ ym-… │ enabled │
│ 0 │ arm_fc30b4f5d59f4275829ff8b65d02914b │ default │ 2.5.18 │ fork │ 121 │ 19s │ 5 │ online │ 0% │ 65.1mb │ ym-… │ enabled │
│ 3 │ interconnect-service-49ab91419f064823b8ab85806b3b4ce1 │ default │ 2.5.18 │ fork │ 260 │ 8s │ 0 │ online │ 0% │ 138.8mb │ ym-… │ enabled │
│ 1 │ jadeberlin_arm_fc30b4f5d59f4275829ff8b65d02914b │ default │ N/A │ fork │ 0 │ 0 │ 4 │ errored │ 0% │ 0b │ ym-… │ disabled │
│ 4 │ proxy-service-a4500ec67fcc491399dc395e12c1bbe1 │ default │ 2.5.18 │ fork │ 271 │ 6s │ 0 │ online │ 0% │ 105.3mb │ ym-… │ enabled │
│ 5 │ workflow-service-8b4edbbb287c468cae0f023dd7e0cf44 │ default │ 2.5.18 │ fork │ 282 │ 5s │ 0 │ online │ 0% │ 175.4mb │ ym-… │ enabled │
└─────┴──────────────────────────────────────────────────────────┴─────────────┴─────────┴─────────┴──────────┴────────┴──────┴───────────┴──────────┴──────────┴──────────┴──────────┘
[PM2][WARN] Current process list is not synchronized with saved list. Type 'pm2 save' to synchronize.
|
Note that the jadeberlin service will be in an errored state until it is set up
Note that the status-pm2 output will change based on the terminal/console width/resolution
ym-service-commands.sh yarnman-logs
Info |
---|
This command shows the scrolling output of yarnman services press CTRL+c to exit |
ym-service-commands.sh couchdb-logs
Info |
---|
This command shows the scrolling output of database logs; press CTRL+c to exit |
ym-service-commands.sh redis-logs
Info |
---|
This command shows the scrolling output of message bus logs press CTRL+c to exit |
ym-service-commands.sh tang-logs
Info |
---|
This command shows the scrolling output of NBE logs press CTRL+c to exit |
ym-service-commands.sh tang-thp
Note |
---|
Note that this command was previously ym-service-commands.sh tang-adv |
Info |
---|
This command shows the tang thp used for setting up configuration encryption |
Code Block |
---|
yarnman@ym-ph-test [ ~ ]$ sudo ym-service-commands.sh tang-adv
9_CZiwV9PKBlQfehPKZO7cd5ZpM |
ym-service-commands.sh update-jtapi
Info |
---|
This command updates jtapi for test_mate |
Code Block |
---|
PENDING |
ym-edit-config.sh enable-local-admin-access
Info |
---|
This command enables local admin access on port 3999 |
Code Block |
---|
PENDING |
ym-edit-config.sh disable-local-admin-access
Info |
---|
This command disables local admin access on port 3999 |
Code Block |
---|
PENDING |
ym-edit-config.sh enable-local-couchdb-access
Info |
---|
This command enables couchdb access |
Code Block |
---|
PENDING |
ym-edit-config.sh disable-local-couchdb-access
Info |
---|
This command disables couchdb access |
Code Block |
---|
PENDING |
ym-edit-config.sh set-local-yarnman-container-name
Info |
---|
This command sets the container hostname for clustered systems |
Code Block |
---|
PENDING |
ym-edit-config.sh unset-local-yarnman-container-name
Info |
---|
This command unsets the container hostname for clustered systems |
Code Block |
---|
PENDING |
ym-edit-config.sh enable-yarnman-logs
Info |
---|
This command enables yarnman trace logs |
Code Block |
---|
PENDING |
ym-edit-config.sh disable-yarnman-logs
Info |
---|
This command enables yarnman debug logs (default) |
Code Block |
---|
PENDING |
Yarnman HTTP Certificate Notes
This is a manual process; automation is tracked in a linked Jira issue.
Generate CSR
Switch user to root
Code Block su root
Run the following command to create the CSR request config file
Code Block nano /var/opt/yarnlab/yarnman/config/yarnman-ssl.cnf
Copy the following contents and replace <FQDN> with the Fully Qualified Domain Name of the server
Code Block |
---|
[req]
distinguished_name = req_distinguished_name
req_extensions = v3_req
[ req_distinguished_name ]
emailAddress = Email Address
emailAddress_max = 64
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = <FQDN>
Run the following command to generate the CSR
Command Syntax
Code Block |
---|
openssl req -config /var/opt/yarnlab/yarnman/config/yarnman-ssl.cnf -new \
  -subj "/C=${COUNTRY}/ST=${STATE}/L=${LOCATION}/O=${ORGANIZATION}/OU=${FUNCTION}/CN=${FQDN}" \
  -out /var/opt/yarnlab/yarnman/config/yarnman-ssl.csr -key /var/opt/yarnlab/yarnman/config/ssl-key.pem \
  -passin pass:yarnman -sha512 -newkey rsa:4096
All of the following need to be replaced
${COUNTRY}
${STATE}
${LOCATION}
${ORGANIZATION}
${FUNCTION}
${FQDN}
Example
Code Block |
---|
openssl req -config /var/opt/yarnlab/yarnman/config/yarnman-ssl.cnf -new \
  -subj "/C=AU/ST=NSW/L=SYDNEY/O=yarnlab/OU=lab/CN=yarnman.test.yarnlab.io" \
  -out /var/opt/yarnlab/yarnman/config/yarnman-ssl.csr -key /var/opt/yarnlab/yarnman/config/ssl-key.pem \
  -passin pass:yarnman -sha512 -newkey rsa:4096
Collect CSR for signing
Option 1- SFTP download from /var/opt/yarnlab/upgrade/
cp /var/opt/yarnlab/yarnman/config/yarnman-ssl.csr /var/opt/yarnlab/yarnman/upgrade/yarnman-ssl.csr
Option 2 - copy the displayed content into a new file yarnman-ssl.csr
cat /var/opt/yarnlab/yarnman/config/yarnman-ssl.csr
Once signed certificate has been received from CA
Review if the certificate has intermediate CA signing and follow the process below
Backup existing SSL public certificate
Code Block cp /var/opt/yarnlab/yarnman/config/ssl-cert.cert /var/opt/yarnlab/yarnman/config/ssl-cert.cert.bk
Code Block cat /var/opt/yarnlab/yarnman/config/ssl-cert.cert
Update public certificate
Option 1
upload to /var/opt/yarnlab/yarnman/upgrade/ssl-cert.cert
Code Block |
---|
rm /var/opt/yarnlab/yarnman/config/ssl-cert.cert
mv /var/opt/yarnlab/yarnman/upgrade/ssl-cert.cert /var/opt/yarnlab/yarnman/config/ssl-cert.cert
systemctl restart yarnman
Option 2 - paste the signed certificate content directly into the file
nano /var/opt/yarnlab/yarnman/config/ssl-cert.cert
Code Block systemctl restart yarnman
Verification
Code Block |
---|
PENDING openssl verification commands |
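While the verification commands above are pending, a few standard openssl checks can be used; this sketch assumes the default key passphrase "yarnman" is still in place (adjust -passin if it was changed).
Code Block |
---|
# Inspect the CSR before sending it to the CA
openssl req -in /var/opt/yarnlab/yarnman/config/yarnman-ssl.csr -noout -text
# Inspect the signed certificate returned by the CA
openssl x509 -in /var/opt/yarnlab/yarnman/config/ssl-cert.cert -noout -text
# Confirm the certificate matches the private key (the two digests must be identical)
openssl x509 -noout -modulus -in /var/opt/yarnlab/yarnman/config/ssl-cert.cert | openssl md5
openssl rsa -noout -modulus -in /var/opt/yarnlab/yarnman/config/ssl-key.pem -passin pass:yarnman | openssl md5 |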
Configuring Intermediate CA Certificates
Typical format for standard SSL.
/var/opt/yarnlab/yarnman/config/
ssl-cert.cert - Standard certificate sent to clients
ssl-key.pem - Private key file for checking response
In order to enable intermediate certificates we must create new folder in /var/opt/yarnlab/yarnman/config/
Code Block |
---|
/var/opt/yarnlab/yarnman/config/
/ca
1-name.crt
2-name.crt
3-name.crt
|
The /ca folder contains the intermediate certificates that will be loaded in order. The easiest way to achieve this is to use the naming conventions 1-, 2- etc. Each certificate must end in .crt in order to be loaded.
Once the folder is created and at least one certificate is added in the format indicated the services on the node must be restarted.
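A minimal sketch of creating the folder and loading one intermediate certificate, following the naming convention above (run as root; the certificate filename is a placeholder):
Code Block |
---|
mkdir -p /var/opt/yarnlab/yarnman/config/ca
# Paste each intermediate certificate into its own numbered .crt file
nano /var/opt/yarnlab/yarnman/config/ca/1-name.crt
# Restart services so the chain is loaded
systemctl restart yarnman |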
File permissions should be as follow
Code Block |
---|
root@yarnman-2 [ /var/home/yarnman ]# ls -lh /var/opt/yarnlab/yarnman/config
total 60K
drwxr-xr-x 2 ym-yarnman-app ym-yarnman-app-gp 4.0K Jan 10 05:31 ca
root@yarnman-2 [ /var/home/yarnman ]# ls -lh /var/opt/yarnlab/yarnman/config/ca/1-name.crt
-rw-r--r-- 1 ym-yarnman-app ym-yarnman-app-gp 1.3K Jan 10 05:31 /var/opt/yarnlab/yarnman/config/ca/1-name.crt |
If required to change the owner and group
Code Block |
---|
chown ym-yarnman-app /var/opt/yarnlab/yarnman/config/ca
chown ym-yarnman-app /var/opt/yarnlab/yarnman/config/ca/*.crt |
If required to change the permissions
Code Block |
---|
chmod 755 /var/opt/yarnlab/yarnman/config/ca
chmod 644 /var/opt/yarnlab/yarnman/config/ca/*.crt |
Early Release Patch Notes
This is only required for patching from yarnman-ph4-2.5.18-460fada2.ova 16 Aug 2022
Info |
---|
before running patch apply the following change |
Switch user to root
Code Block su root
Run the following commands
Code Block |
---|
ostree remote delete photon
ostree remote add photon http://127.0.0.1:8080/rpm-ostree/base/4.0/x86_64/repo
Photon iptables
In order to have persistent firewall rules for docker containers, we need to populate the DOCKER-USER chain; this is processed by iptables before traffic hits the host, hence we can't apply the firewall rules directly to the INPUT chain (used by eth0)
In this example we will allow traffic to couchdb from the IP addresses
10.202.30.10
10.202.30.11
10.101.10.36
You will need to su as root to modify this file.
Modify the Existing ruleset applied on startup /etc/systemd/scripts/ip4save
We need to add the chain definition :DOCKER-USER - [0:0]
under the existing chain definitions, and the required firewall rules at the bottom before the COMMIT
Code Block |
---|
:DOCKER-USER - [0:0]
-A DOCKER-USER -i eth0 -p tcp -s 10.202.30.10,10.202.30.11,10.101.10.36 --dport 6984 -m comment --comment "Allow CouchDB Traffic - " -j RETURN
-A DOCKER-USER -i eth0 -p tcp -s 0.0.0.0/0 --dport 6984 -m comment --comment "block non replication Coucdb - " -j DROP |
The File will look similar to
Code Block |
---|
# init
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT DROP [0:0]
:DOCKER-USER - [0:0]
# Allow local-only connections
-A INPUT -i lo -j ACCEPT
-A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
#keep commented till upgrade issues are sorted
#-A INPUT -j LOG --log-prefix "FIREWALL:INPUT "
-A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
-A OUTPUT -j ACCEPT
-A DOCKER-USER -i eth0 -p tcp -s 10.202.30.10,10.202.30.11,10.101.10.36 --dport 6984 -m comment --comment "Allow CouchDB Traffic - " -j RETURN
-A DOCKER-USER -i eth0 -p tcp -s 0.0.0.0/0 --dport 6984 -m comment --comment "block non replication Coucdb nodes - " -j DROP
COMMIT |
Reboot the server for the firewall rules to take effect
Note |
---|
If you get the syntax wrong, or don't place the correct filters in the right place, you may lock yourself out of the server as it will block all traffic in and out; you would then need to use the VMware console to connect to the server and remove the firewall rules. |
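If that happens, one recovery approach from the VMware console as root is sketched below, assuming the lockout was caused by the DOCKER-USER rules added above; flushing the chain removes only those rules until the saved file is fixed and reloaded.
Code Block |
---|
# Temporarily remove the custom DOCKER-USER rules
iptables -F DOCKER-USER
# Correct /etc/systemd/scripts/ip4save, then reboot to reapply the saved ruleset
nano /etc/systemd/scripts/ip4save
reboot |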
You can verify the ruleset with the command in a root prompt
Code Block |
---|
iptables -t filter -vnL --line-numbers |
Some output has been removed from the other tables and docker internals, but this shows that 10.202.30.10, 10.202.30.11 and 10.101.10.36 can communicate with TCP/6984 and everything else is dropped. You can see 55 packets have been blocked, and 9 packets have been allowed from 10.202.30.11
Code Block |
---|
[ /var/home/yarnman ]# iptables -t filter -vL --line-numbers
Chain INPUT (policy DROP 161 packets, 9805 bytes)
num pkts bytes target prot opt in out source destination
1 70 5662 ACCEPT all -- lo any anywhere anywhere
2 321 24962 ACCEPT all -- any any anywhere anywhere ctstate RELATED,ESTABLISHED
3 1 64 ACCEPT tcp -- any any anywhere anywhere tcp dpt:22
Chain DOCKER-USER (1 references)
num pkts bytes target prot opt in out source destination
1 0 0 RETURN tcp -- eth0 any 10.202.30.10 anywhere tcp dpt:6984 /* Allow CouchDB Traffic - */
2 9 732 RETURN tcp -- eth0 any 10.202.30.11 anywhere tcp dpt:6984 /* Allow CouchDB Traffic - */
3 0 0 RETURN tcp -- eth0 any 10.101.10.36 anywhere tcp dpt:6984 /* Allow CouchDB Traffic - */
4 55 3300 DROP tcp -- eth0 any anywhere anywhere tcp dpt:6984 /* block non replication Coucdb nodes - */
5 3591 264K RETURN all -- any any anywhere anywhere |
default ip4save
This is the default of /etc/systemd/scripts/ip4save
on the off-chance you need to rollback to it
Code Block |
---|
# init
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT DROP [0:0]
# Allow local-only connections
-A INPUT -i lo -j ACCEPT
-A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
#keep commented till upgrade issues are sorted
#-A INPUT -j LOG --log-prefix "FIREWALL:INPUT "
-A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
-A OUTPUT -j ACCEPT
COMMIT |
To add other firewall rules, e.g. allow traffic only between 3 IPs to Clevis-Tang, insert them above the COMMIT line, e.g.
Code Block |
---|
-A DOCKER-USER -i eth0 -p tcp -s 10.202.30.10,10.202.30.11,10.101.10.36 --dport 6984 -m comment --comment "Allow CouchDB Traffic - " -j RETURN
-A DOCKER-USER -i eth0 -p tcp -s 0.0.0.0/0 --dport 6984 -m comment --comment "block non replication Coucdb nodes - " -j DROP
-A DOCKER-USER -i eth0 -p tcp -s 10.202.30.10,10.202.30.11,10.101.10.36 --dport 6655 -m comment --comment "Allow ClevisTang Traffic - " -j RETURN
-A DOCKER-USER -i eth0 -p tcp -s 0.0.0.0/0 --dport 6655 -m comment --comment "block non ClevisTang nodes - " -j DROP
COMMIT |
Final file will look like /etc/systemd/scripts/ip4save
Code Block |
---|
# init
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT DROP [0:0]
:DOCKER-USER - [0:0]
# Allow local-only connections
-A INPUT -i lo -j ACCEPT
-A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
#keep commented till upgrade issues are sorted
#-A INPUT -j LOG --log-prefix "FIREWALL:INPUT "
-A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
-A OUTPUT -j ACCEPT
-A DOCKER-USER -i eth0 -p tcp -s 10.202.30.10,10.202.30.11,10.101.10.36 --dport 6984 -m comment --comment "Allow CouchDB Traffic - " -j RETURN
-A DOCKER-USER -i eth0 -p tcp -s 0.0.0.0/0 --dport 6984 -m comment --comment "block non replication Coucdb nodes - " -j DROP
-A DOCKER-USER -i eth0 -p tcp -s 10.202.30.10,10.202.30.11,10.101.10.36 --dport 6655 -m comment --comment "Allow ClevisTang Traffic - " -j RETURN
-A DOCKER-USER -i eth0 -p tcp -s 0.0.0.0/0 --dport 6655 -m comment --comment "block non ClevisTang nodes - " -j DROP
COMMIT |
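Before rebooting, the edited file can be sanity-checked; assuming the node's iptables-restore supports test mode, the following parses the ruleset without applying it, which helps avoid the lockout described in the note above:
Code Block |
---|
# parse the ruleset without committing it; exit code 0 means the syntax is valid
iptables-restore --test < /etc/systemd/scripts/ip4save
echo $? |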
Logging
Info |
---|
Work in progress; some of the logging comments will be slightly different. |
With iptables you need to LOG before dropping the packet; the simplest way is to duplicate the DROP rule with a LOG jump target:
Code Block |
---|
-A DOCKER-USER -i eth0 -p tcp -s 0.0.0.0/0 --dport 6984 -m comment --comment "block non replication Coucdb nodes - " -m limit --limit 5/min -j LOG --log-prefix "couchdb drop -"
-A DOCKER-USER -i eth0 -p tcp -s 0.0.0.0/0 --dport 6984 -m comment --comment "block non replication Coucdb nodes - " -j DROP
You can view these in dmesg as root:
root@yarnman-1 [ /var/home/yarnman ]# dmesg
[ 34.799506] couchdb drop -IN=eth0 OUT=br-7cee03840940 MAC=00:50:56:9f:04:4f:02:50:56:56:44:52:08:00 SRC=10.101.10.86 DST=10.222.1.4 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61886 DF PROTO=TCP SPT=59210 DPT=6984 WINDOW=42340 RES=0x00 SYN URGP=0 |
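As an alternative to dmesg, the same kernel log entries can be followed live with journalctl; this is just a convenience, and the grep pattern must match whatever --log-prefix was configured in the LOG rule:
Code Block |
---|
# follow new kernel messages and filter on the LOG prefix
journalctl -k -f | grep "couchdb drop -" |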
Monitoring
watch can be used to repeat the same command so you can see the counters increase:
Code Block |
---|
root@yarnman-1 [ /var/home/yarnman ]# watch 'iptables -t filter -v -L DOCKER-USER --line-numbers -n'
Every 2.0s: iptables -t filter -v -L DOCKER-USER --line-numbers -n yarnman-1: Mon Jan 16 05:04:39 2023
Chain DOCKER-USER (1 references)
num pkts bytes target prot opt in out source destination
1 0 0 RETURN tcp -- eth0 * 10.202.30.10 0.0.0.0/0 tcp dpt:6984 /* Allow CouchDB Traffic - */
2 0 0 RETURN tcp -- eth0 * 10.101.12.81 0.0.0.0/0 tcp dpt:6984 /* Allow CouchDB Traffic - */
3 8670 1270K RETURN tcp -- eth0 * 10.101.12.82 0.0.0.0/0 tcp dpt:6984 /* Allow CouchDB Traffic - */
4 788 115K RETURN tcp -- eth0 * 10.101.12.83 0.0.0.0/0 tcp dpt:6984 /* Allow CouchDB Traffic - */
5 0 0 LOG tcp -- eth0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:6984 /* block non replication Coucdb nodes - */ limit: avg 2/min burst 5 LOG flags 0 level 4 pr
efix "COUCHDB DROP-"
6 0 0 DROP tcp -- eth0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:6984 /* block non replication Coucdb nodes - */
7 0 0 RETURN tcp -- eth0 * 10.202.30.10 0.0.0.0/0 tcp dpt:6655 /* Allow ClevisTang Traffic - */
8 0 0 RETURN tcp -- eth0 * 10.101.12.81 0.0.0.0/0 tcp dpt:6655 /* Allow ClevisTang Traffic - */
9 0 0 RETURN tcp -- eth0 * 10.101.12.82 0.0.0.0/0 tcp dpt:6655 /* Allow ClevisTang Traffic - */
10 0 0 RETURN tcp -- eth0 * 10.101.12.83 0.0.0.0/0 tcp dpt:6655 /* Allow ClevisTang Traffic - */
11 0 0 LOG tcp -- eth0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:6655 /* block non ClevisTang nodes - */ limit: avg 2/min burst 5 LOG flags 0 level 4 prefix "CL
EVISTANG DROP-"
12 0 0 DROP tcp -- eth0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:6655 /* block non ClevisTang nodes - */
13 865K 96M RETURN all -- * * 0.0.0.0/0 0.0.0.0/0
|
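If you want the counters to reflect only new traffic while watching, the packet and byte counters for the chain can be zeroed first; this is optional and clears the historical counts shown above:
Code Block |
---|
# zero the counters in the DOCKER-USER chain, then watch them climb
iptables -Z DOCKER-USER
watch 'iptables -t filter -v -L DOCKER-USER --line-numbers -n' |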
Highlight keywords from the log and show the current packet counts:
Code Block |
---|
root@yarnman-2 [ /var/home/yarnman ]# journalctl _TRANSPORT=kernel | grep "DROP-" ; echo -e "\n#time now $(date)#\n"; iptables -t filter -v -L DOCKER-USER --line-numbers -n | grep -e "DROP " -e "pkts"
### lots of text scrolling ###
Jan 16 01:06:33 yarnman-2 kernel: COUCHDB DROP-IN=eth0 OUT=br-7adbc3fd84ef MAC=00:50:56:9f:cb:2d:02:50:56:56:44:52:08:00 SRC=10.101.10.100 DST=10.222.1.4 LEN=52 TOS=0x02 PREC=0x00 TTL=126 ID=9481 DF PROTO=TCP SPT=50173 DPT=6984 WINDOW=64240 RES=0x00 CWR ECE SYN URGP=0
Jan 16 01:06:33 yarnman-2 kernel: COUCHDB DROP-IN=eth0 OUT=br-7adbc3fd84ef MAC=00:50:56:9f:cb:2d:02:50:56:56:44:52:08:00 SRC=10.101.10.100 DST=10.222.1.4 LEN=52 TOS=0x02 PREC=0x00 TTL=126 ID=9482 DF PROTO=TCP SPT=50174 DPT=6984 WINDOW=64240 RES=0x00 CWR ECE SYN URGP=0
#time now Mon Jan 16 01:31:55 AM UTC 2023#
num pkts bytes target prot opt in out source destination
6 18 936 DROP tcp -- eth0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:6984 /* block non replication Coucdb nodes - */
12 0 0 DROP tcp -- eth0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:6655 /* block non ClevisTang nodes - */ |
...
Hints and Tips
Upgrade fails - Error response from daemon
Code Block |
---|
Error response from daemon: Conflict. The container name "/ym-yarnman-default-yaml" is already in use by container "af729483fc7f094f47532bf5afeb6e975295c5438863f6347447492e730159b0". You have to remove (or rename) that container to be able to reuse that name. |
Run the following to remove the container, then run the upgrade again:
Code Block |
---|
su -
docker rm -f ym-yarnman-default-yaml
### if the above fails, copy the container hash and remove it as shown below; there appears to be a typo (ym-yarnman-defaul-yaml, missing a "t") in the ym-upgrade.sh script ###
docker rm -f af729483fc7f094f47532bf5afeb6e975295c5438863f6347447492e730159b0
rerun upgrade |
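Before re-running the upgrade you can confirm the conflicting container is actually gone; this is a quick optional check:
Code Block |
---|
docker ps -a --filter name=ym-yarnman-default-yaml
# headers with no rows underneath means the container has been removed |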
Upgrade fails - Error processing tar file no space left on device
Code Block |
---|
Error processing tar file(exit status 1): write /var/lib/registry-package/docker/registry/v2/blobs/sha256/cb/cb0483711606d81fd2300800ccdc07fbb92540275e5a90cc91ec7107c07f6df5/data: no space left on device |
Run the following to remove unused image files, then run the upgrade again:
Code Block |
---|
su -
docker image prune -a
WARNING! This will remove all images without at least one container associated to them.
Are you sure you want to continue? [y/N] y
Deleted Images:
untagged: localhost:5000/ym-ostree-upgrade:yl-ph-1bf1e2bb
untagged: localhost:5000/ym-ostree-upgrade@sha256:f0335aa31906d2c72347c5cdae2fa51ced6944cd2f3dbac9aecd2fcc6c676153
...
deleted: sha256:061a02788e65fcd3f28253482d8899c44e3acf8c1597c11454c576f999874aed
deleted: sha256:45bbe3d22998589317c7f6c4dd591475423bb37ca9b922529c5878653483b18d
rerun upgrade |
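It can also be worth confirming how much space the prune freed before re-running the upgrade; a quick optional check:
Code Block |
---|
df -h /var/lib
docker system df |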
Temp HotFix
Browse to the raw commit in Bitbucket, e.g.
https://bitbucket.org/yarnlab/secundus/commits/40f5cc2cca96a84a3490ab3d0abea30f8ea66030/raw
Copy the raw text file to the clipboard.
SSH to the node, su to root and connect to the yarnman container:
Code Block |
---|
su -
docker exec -it --user ym-yarnman-app ym-yarnman /bin/bash |
Paste the buffer into /tmp/patch.txt within the container and save the file:
Code Block |
---|
nano /tmp/patch.txt
ctrl + X, then confirm to save the file |
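If pasting into nano over the console is awkward, a heredoc is an alternative way to create the patch file inside the container; this is just a sketch, and the patch content is still the raw text copied from the commit:
Code Block |
---|
cat > /tmp/patch.txt <<'EOF'
(paste the raw patch text here in place of this line)
EOF
|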
Run the patch command and restart pm2 inside the container:
Code Block |
---|
patch -p1 < /tmp/patch.txt ; pm2 restart all
sample output
patching file lib/couch/common/cucm-access-control-group/add.js
patching file lib/couch/common/cucm-access-control-group/update.js
patching file lib/couch/common/cucm-role/add.js
patching file lib/couch/common/cucm-role/update.js
patching file lib/ipc/clients/interfaces/cucm/index.js
patching file lib/ipc/clients/interfaces/cucm/parse-axl-sql-response.js
Use --update-env to update environment variables
[PM2] Applying action restartProcessId on app [all](ids: [
0, 1, 2, 3, 4,
5, 6, 7, 8 |
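Before applying, the patch can be checked with a dry run so that a bad or already-applied patch is caught without modifying any files; GNU patch supports this via --dry-run:
Code Block |
---|
patch -p1 --dry-run < /tmp/patch.txt
# if the dry run reports no errors, apply for real and restart pm2
patch -p1 < /tmp/patch.txt ; pm2 restart all |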
To reverse the patch:
Code Block |
---|
patch -R -p1 < /tmp/patch.txt ; pm2 restart all
You may get an exception such as
patching file lib/couch/common/cucm-access-control-group/add.js
Unreversed patch detected! Ignore -R? [n]
or
Reversed (or previously applied) patch detected! Skipping patch.
depending on whether you are trying to roll the patch forward or backwards.
|
...
Changes will be lost if the container or node reboots.
...