Key and Configuration Encryption
A script will be added in a future release to automate creation of the clevis.json, although it will still require out-of-band collection of the THP.
Collect the THP (thumbprint) from each Tang server individually; this is a unique identifier for each node's Tang server.
yarnman@ym-ph-test [ ~ ]$ sudo ym-service-commands.sh tang-thp
E7SN3eGxrnyoGiHGJBt4GDU8MRw

On older versions this command was previously tang-adv:

yarnman@ym-ph-test [ ~ ]$ sudo ym-service-commands.sh tang-adv
E7SN3eGxrnyoGiHGJBt4GDU8MRw
Run the key and config encryption
Node1

sudo ym-encrypt-at-rest.sh
Database key found proceeding
Number of pins required for decryption :1
Number of pins this must be equal or greater than the number of pins required for decryption :3
Enter URL for tang server 1 :http://10.101.10.13:6655
Enter THP for tang server 1 :o38piqOs5UwunlcUmqMVwulml34
Connection successful to : http://10.101.10.10:6655
Enter URL for tang server 2 :http://10.101.10.11:6655
Enter THP for tang server 2 :0Lqk7DroJ0g3patTCgTweMUAHPc
Connection successful to : http://10.101.10.11:6655
Enter URL for tang server 3 :http://10.101.10.12:6655
Enter THP for tang server 3 :GEpmSTQfz8ctVxdgQEp_rnS3za
Connection successful to : http://10.101.10.12:6655

...

Node4

sudo ym-encrypt-at-rest.sh
Database key found proceeding
Number of pins required for decryption :1
Number of pins this must be equal or greater than the number of pins required for decryption :3
Enter URL for tang server 1 :http://10.101.10.10:6655
Enter THP for tang server 1 :DwLco7FJtXWxFTprQ5M3cojJsZo
Connection successful to : http://10.101.10.10:6655
Enter URL for tang server 2 :http://10.101.10.11:6655
Enter THP for tang server 2 :0Lqk7DroJ0g3patTCgTweMUAHPc
Connection successful to : http://10.101.10.11:6655
Enter URL for tang server 3 :http://10.101.10.12:6655
Enter THP for tang server 3 :GEpmSTQfz8ctVxdgQEp_rnS3za
Connection successful to : http://10.101.10.12:6655
Do not include the local server in the encryption at rest. For example, if you have 4 nodes, enter 3 for the number of pins and exclude the IP address of the local server.
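Each Tang server can optionally be pre-validated out of band before running the script. A minimal sketch, assuming the clevis CLI is available (for example inside the ym-yarnman container, as used later in this document) and reusing a URL/THP pair from the transcript above:

echo test | clevis encrypt tang '{"url":"http://10.101.10.11:6655","thp":"0Lqk7DroJ0g3patTCgTweMUAHPc"}' > /dev/null && echo "tang server OK"

If the THP does not match the server's advertised keys, clevis refuses to encrypt, which catches copy/paste errors before they reach the encryption-at-rest configuration.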
Customisation
These manual customisations will be moved into scripts in a future release
When editing a .yml document, ensure the correct space indentation is used on the relevant lines.
Yarnman Application Additional Ports
With Yarnman on Photon, additional steps are required for adding secondary local auth administration access.
Enable the additional listening port with the ym-edit-config command
yarnman@yarnman-1 [ ~ ]$ sudo ym-edit-config.sh enable-local-admin-access
You will be prompted to restart the yarnman service
Port 3999 is the default alternative port
Manually edit the following file
cat /var/opt/yarnlab/yarnman/docker-compose-override.yml
version: '3.7'
services:
  yarnman:
    ports:
      - "3999:3999"
Ensure that the top row shows version: '3.7'
Create the second administration application and ensure the port selected matches what is set for ports and expose in docker-compose-override.yml
Restart yarnman services
sudo ym-service-commands.sh restart
You will now be able to access the second administration application on port 3999 using https://<IP address>:3999/
NOTE that the http to https redirect will not work on this port; https:// must be entered explicitly
It is suggested to use a private browsing window or similar, as the authentication sessions will conflict with LDAP users and the older session will be closed
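To confirm the alternative port is answering before logging in, a quick check can be run from another machine (a sketch; the IP address is an example and -k skips certificate validation):

curl -k -s -o /dev/null -w '%{http_code}\n' https://10.101.10.10:3999/

A 2xx or 3xx response code indicates the second administration application is listening.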
Enable database access for Replication
This step must be performed on every node to enable CouchDB clustering
This step requires root access
To switch to root, run the command "su root" and enter the root password set during installation
Manually edit the following file
nano /var/opt/yarnlab/yarnman/docker-compose-override.yml
version: '3.7'
services:
  couchdb:
    ports:
      - "6984:6984"
Ensure that the top row shows version: '3.7'
NOTE: If you already have an existing services section in /var/opt/yarnlab/yarnman/docker-compose-override.yml, add the new service under it; the example below shows two services in the file
NOTE!! Make sure the spaces are exactly as below in the .yml file, or Docker may not start
version: '3.7'
services:
  yarnman:
    ports:
      - "3999:3999"
    expose:
      - "3999"
  couchdb:
    ports:
      - "6984:6984"
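Before restarting, the merged configuration can be validated so an indentation mistake is caught early. A sketch, assuming docker-compose.yml sits alongside the override file in /var/opt/yarnlab/yarnman:

cd /var/opt/yarnlab/yarnman
# 'config' merges and parses both files; any YAML/indentation error is reported here
docker-compose -f docker-compose.yml -f docker-compose-override.yml config > /dev/null && echo "YAML OK"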
Restart yarnman services
sudo ym-service-commands.sh restart
restarting yarnman.service
● yarnman.service - yarnman
     Loaded: loaded (/usr/lib/systemd/system/yarnman.service; enabled; vendor preset: enabled)
     Active: active (running) since Mon 2022-11-14 04:23:53 UTC; 9ms ago
    Process: 56653 ExecStartPre=/usr/bin/docker-compose -f docker-compose.yml down (code=exited, status=0/SUCCESS)
   Main PID: 56663 (docker-compose)
      Tasks: 5 (limit: 4694)
     Memory: 7.1M
     CGroup: /system.slice/yarnman.service
             └─56663 /usr/bin/docker-compose -f docker-compose.yml -f docker-compose-override.yml up --remove-orphans

Nov 14 04:23:53 ym-ph-test systemd[1]: Starting yarnman...
Nov 14 04:23:53 ym-ph-test docker-compose[56653]: yarnman Warning: No resource found to remove
Nov 14 04:23:53 ym-ph-test systemd[1]: Started yarnman.
Optionally, access to CouchDB can be restricted to specific IP addresses (see the Photon iptables section below)
Changing private key default passphrase
This step requires root access
To switch to root, run the command "su root" and enter the root password set during installation
If the encryption at rest process has been run previously, the private key must first be decrypted
If the key is not encrypted skip the decryption step
To verify, run the following command. If no file is found, the key is encrypted:
ls -la /var/opt/yarnlab/yarnman/config/private-encryption-key.pem
ls: cannot access '/var/opt/yarnlab/yarnman/config/private-encryption-key.pem': No such file or directory
To verify that the key is encrypted:
ls -la /var/opt/yarnlab/yarnman/config/private-encryption-key.pem.jwe
-rw-r--r-- 1 ym-yarnman-app ym-yarnman-app-gp 8129 Nov 14 03:40 /var/opt/yarnlab/yarnman/config/private-encryption-key.pem.jwe
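The two checks above can be combined into one sketch that reports the key state directly:

if [ -f /var/opt/yarnlab/yarnman/config/private-encryption-key.pem.jwe ]; then
  echo "key is encrypted (.jwe present) - run the decryption step"
else
  echo "key is not encrypted - skip the decryption step"
fi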
Switch into the Docker container by running the following command. Note that the prompt changes from the root shell to the container shell.
docker exec -it ym-yarnman /bin/bash
ym-yarnman-app@yl-ym-yarnman:/opt/yarnlab/yarnman$
To decrypt the key, run the following command:
clevis decrypt < /opt/yarnlab/yarnman/config/private-encryption-key.pem.jwe > /opt/yarnlab/yarnman/config/private-encryption-key.pem
Reset the permissions:
chmod 600 /opt/yarnlab/yarnman/config/private-encryption-key.pem
Change the passphrase from the default "yarnman":
ssh-keygen -p -f /opt/yarnlab/yarnman/config/private-encryption-key.pem
Enter old passphrase:
Enter new passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved with the new passphrase.
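To confirm the new passphrase was applied, ssh-keygen can re-derive the public key from the private key (still inside the container; it will prompt for the new passphrase):

ssh-keygen -y -f /opt/yarnlab/yarnman/config/private-encryption-key.pem > /dev/null && echo "passphrase accepted"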
Back up the old key:
mv /opt/yarnlab/yarnman/config/private-encryption-key.pem.jwe /opt/yarnlab/yarnman/config/private-encryption-key.pem.jwe.bk
Exit the container shell:
exit
exit
root@ym-ph-test [ /var/opt/yarnlab/ ]#
Verify that the key is decrypted and that the file ownership is correct:
ls -la /var/opt/yarnlab/yarnman/config/private-encryption-key.pem
-rw-r--r-- 1 ym-yarnman-app ym-yarnman-app-gp 3326 Nov 20 20:21 /var/opt/yarnlab/yarnman/config/private-encryption-key.pem
Add the new passphrase to /var/opt/yarnlab/yarnman/config/local.yaml:
encryption:
  dbPassphrase: 'Clouduc123'
To update:
yq -i '.encryption.dbPassphrase = "Clouduc123"' /var/opt/yarnlab/yarnman/config/local.yaml
Encrypt the passphrase:
docker exec ym-yarnman node ./scripts/encrypt-local-config.js -k encryption.dbPassphrase
1668977064139 INFO Starting the encryption of 1 local configuration fields through Clevis Shamir Secret Sharing
1668977064142 INFO Attempting to encrypt the following local config fields: encryption.dbPassphrase
1668977064371 INFO Local key 'encryption.dbPassphrase' encrypted successfully
1668977064371 INFO 1 local config fields encrypted, 0 fields omitted
Verify:
cat /var/opt/yarnlab/yarnman/config/local.yaml
Re-encrypt the keys:
docker exec ym-yarnman node ./scripts/encrypt-keys.js
1668977138519 INFO Encrypting private and SSL keys using settings:
1668977138521 INFO - not overwriting existing encrypted files and not deleting any original files after encryption
1668977138522 INFO --------------------------------
1668977138522 INFO Encrypting...
1668977138768 INFO - 'private-encryption-key.pem' encrypted successfully
1668977138768 INFO - 'ssl-key.pem' already encrypted, not overwriting
1668977138768 INFO --------------------------------
1668977138768 INFO Finished encrypting the files
Restart services:
systemctl restart yarnman
To verify, watch the logs while the services are restarting and look for:
docker logs ym-yarnman -f
2|administration-app-b6925c3239fc4c878ff6888ce5cb2b51 | 1668977206414 INFO Decrypting 1 encrypted configuration keys
2|administration-app-b6925c3239fc4c878ff6888ce5cb2b51 | 1668977206415 INFO Decrypting configuration key 'encryption.dbPassphrase'
2|administration-app-b6925c3239fc4c878ff6888ce5cb2b51 | 1668977206500 INFO Configuration key 'encryption.dbPassphrase' decrypted successfully
2|administration-app-b6925c3239fc4c878ff6888ce5cb2b51 | 1668977206500 INFO Finished decrypting 1 configuration keys
Set up CouchDB Replication
It is recommended to have completed the Yarngate LDAP configuration with at least one role configured before completing replication on additional nodes for the first-time setup. Refer to Yarngate Service Setup for more information.
Log in to the Yarnman administration application web interface
Navigate to Authentication database
Rename the Default Authentication Database name from "Central DB" to "<Node Name> Central DB" or another suitably unique name
Navigate to Authentication Policies
Rename the Default Authentication Policy name from "Central DB-Only Policy" to "<Node Name> Central DB-Only Policy" or another suitably unique name
Navigate to Nodes and select the Standalone node
Update the yarnman node name
Navigate to Nodes, select the node you want to set up, and click on the Replication tab
Click on Add Replication
Enter the source and target connection strings
Source address syntax: https://10.222.1.4:6984 - NOTE that the default source address is 10.222.1.4
Source address: https://10.222.1.4:6984
Source username: architect
Source password: yarnman
Target address syntax: https://<TargetIP>:6984
Target address: https://10.101.10.10:6984
Target username: architect
Target password: yarnman
Once replication is set up, the status can be reviewed by clicking on the replication address, e.g. https://10.101.10.10:6984. If the replication shows blank, the Sync button can be pressed to kick off replication again.
Repeat for each pair of nodes to achieve a full mesh
If there are 2 data centres, repeat for each primary node in each data centre:
2 node - 2 replications
n1->n2
n2->n1
3 node - 6 replications
n1->n2
n1->n3
n2->n1
n2->n3
n3->n1
n3->n2
4 node - 12 replications
n1->n2
n1->n3
n1->n4
n2->n1
n2->n3
n2->n4
n3->n1
n3->n2
n3->n4
n4->n1
n4->n2
n4->n3
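In general a full mesh of n nodes requires n x (n - 1) one-way replications (each node replicates to every other node), which is where the counts above come from: 2 x 1 = 2, 3 x 2 = 6 and 4 x 3 = 12.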
If you have any issues with replications in the failing state, review the CouchDB log messages.
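The exact command was not captured here; as a hedged sketch, the CouchDB container logs can usually be inspected directly. The container name below is a placeholder - list the running containers first and substitute the real name:

docker ps --format '{{.Names}}'
docker logs <couchdb-container-name> 2>&1 | grep -i replicat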
Yarnman HTTP Certificate Notes
This is a manual process until YMN-4962 is resolved.
Generate CSR
Switch user to root
su root
Run the following command to create the CSR request config file
nano /var/opt/yarnlab/yarnman/config/yarnman-ssl.cnf
Copy in the following contents, replacing <FQDN> with the Fully Qualified Domain Name of the server:
[req]
distinguished_name = req_distinguished_name
req_extensions = v3_req

[ req_distinguished_name ]
emailAddress = Email Address
emailAddress_max = 64

[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names

[alt_names]
DNS.1 = <FQDN>
Run the following command to generate the CSR
Command Syntax
openssl req -config /var/opt/yarnlab/yarnman/config/yarnman-ssl.cnf -new -subj "/C=${COUNTRY}/ST=${STATE}/L=${LOCATION}/O=${ORGANIZATION}/OU=${FUNCTION}/CN=${FQDN}" \
  -out /var/opt/yarnlab/yarnman/config/yarnman-ssl.csr -key /var/opt/yarnlab/yarnman/config/ssl-key.pem -passin pass:yarnman -sha512 -newkey rsa:4096
All of the following need to be replaced
${COUNTRY}
${STATE}
${LOCATION}
${ORGANIZATION}
${FUNCTION}
${FQDN}
Example
openssl req -config /var/opt/yarnlab/yarnman/config/yarnman-ssl.cnf -new -subj "/C=AU/ST=NSW/L=SYDNEY/O=yarnlab/OU=lab/CN=yarnman.test.yarnlab.io" \
  -out /var/opt/yarnlab/yarnman/config/yarnman-ssl.csr -key /var/opt/yarnlab/yarnman/config/ssl-key.pem -passin pass:yarnman -sha512 -newkey rsa:4096
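Before sending the CSR to the CA it can be inspected to confirm the subject and the SAN entry were picked up from the config file (standard openssl; the grep is just a convenience):

openssl req -in /var/opt/yarnlab/yarnman/config/yarnman-ssl.csr -noout -text | grep -A1 "Subject Alternative Name"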
Collect CSR for signing
Option 1 - SFTP download from /var/opt/yarnlab/yarnman/upgrade/
cp /var/opt/yarnlab/yarnman/config/yarnman-ssl.csr /var/opt/yarnlab/yarnman/upgrade/yarnman-ssl.csr
Option 2 - copy the content output by the following command into a new file yarnman-ssl.csr
cat /var/opt/yarnlab/yarnman/config/yarnman-ssl.csr
Once the signed certificate has been received from the CA:
Review whether the certificate has intermediate CA signing and, if so, follow the Configuring Intermediate CA Certificates process below
Back up the existing SSL public certificate:
cp /var/opt/yarnlab/yarnman/config/ssl-cert.cert /var/opt/yarnlab/yarnman/config/ssl-cert.cert.bk
cat /var/opt/yarnlab/yarnman/config/ssl-cert.cert
Update the public certificate
Option 1
upload to /var/opt/yarnlab/yarnman/upgrade/ssl-cert.cert
rm /var/opt/yarnlab/yarnman/config/ssl-cert.cert
mv /var/opt/yarnlab/yarnman/upgrade/ssl-cert.cert /var/opt/yarnlab/yarnman/config/ssl-cert.cert
systemctl restart yarnman
Option 2 - paste the certificate content directly into the file
nano /var/opt/yarnlab/yarnman/config/ssl-cert.cert
systemctl restart yarnman
Verification
PENDING openssl verification commands
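Until the official commands are published, a typical verification sketch (assuming an RSA key and that the key passphrase is still the default yarnman) is:

# print the certificate subject and validity dates
openssl x509 -in /var/opt/yarnlab/yarnman/config/ssl-cert.cert -noout -subject -dates

# confirm the certificate matches the private key - the two digests must be identical
openssl x509 -noout -modulus -in /var/opt/yarnlab/yarnman/config/ssl-cert.cert | openssl md5
openssl rsa -noout -modulus -in /var/opt/yarnlab/yarnman/config/ssl-key.pem -passin pass:yarnman | openssl md5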
Configuring Intermediate CA Certificates
Typical file layout for standard SSL:
/var/opt/yarnlab/yarnman/config/
ssl-cert.cert - Standard certificate sent to clients
ssl-key.pem - Private key file for checking response
In order to enable intermediate certificates, create a new ca folder in /var/opt/yarnlab/yarnman/config/:
/var/opt/yarnlab/yarnman/config/
  ca/
    1-name.crt
    2-name.crt
    3-name.crt
The /ca folder contains the intermediate certificates that will be loaded in order. The easiest way to achieve this is to use the naming conventions 1-, 2- etc. Each certificate must end in .crt in order to be loaded.
Once the folder is created and at least one certificate is added in the format indicated, the services on the node must be restarted.
File permissions should be as follows:
root@yarnman-2 [ /var/home/yarnman ]# ls -lh /var/opt/yarnlab/yarnman/config
total 60K
drwxr-xr-x 2 ym-yarnman-app ym-yarnman-app-gp 4.0K Jan 10 05:31 ca
root@yarnman-2 [ /var/home/yarnman ]# ls -lh /var/opt/yarnlab/yarnman/config/ca/1-name.crt
-rw-r--r-- 1 ym-yarnman-app ym-yarnman-app-gp 1.3K Jan 10 05:31 /var/opt/yarnlab/yarnman/config/ca/1-name.crt
If required, change the owner and group:
chown ym-yarnman-app /var/opt/yarnlab/yarnman/config/ca
chown ym-yarnman-app /var/opt/yarnlab/yarnman/config/ca/*.crt
If required, change the permissions:
chmod 755 /var/opt/yarnlab/yarnman/config/ca
chmod 644 /var/opt/yarnlab/yarnman/config/ca/*.crt
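To sanity-check the chain order, each certificate's subject and issuer can be printed; in a correctly ordered chain, each certificate's issuer is the subject of the next file (standard openssl, using the example file names above):

for f in /var/opt/yarnlab/yarnman/config/ca/*.crt; do
  echo "== $f"
  openssl x509 -in "$f" -noout -subject -issuer
done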
Photon iptables
In order to have persistent firewall rules for Docker containers, we need to populate the DOCKER-USER chain. This chain is processed by iptables before traffic reaches the host, hence we can't apply the firewall rules directly on the INPUT chain (used by eth0).
In this example we will allow traffic to CouchDB from the following IP addresses:
10.202.30.10
10.202.30.11
10.101.10.36
You will need to su as root to modify this file.
Modify the existing ruleset applied on startup, /etc/systemd/scripts/ip4save.
We need to add the chain declaration :DOCKER-USER - [0:0] under the existing chain declarations, and add the required firewall rules at the bottom, before the COMMIT:
:DOCKER-USER - [0:0]

-A DOCKER-USER -i eth0 -p tcp -s 10.202.30.10,10.202.30.11,10.101.10.36 --dport 6984 -m comment --comment "Allow CouchDB Traffic - " -j RETURN
-A DOCKER-USER -i eth0 -p tcp -s 0.0.0.0/0 --dport 6984 -m comment --comment "block non replication Coucdb - " -j DROP
The file will look similar to:
# init
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT DROP [0:0]
:DOCKER-USER - [0:0]
# Allow local-only connections
-A INPUT -i lo -j ACCEPT
-A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
#keep commented till upgrade issues are sorted
#-A INPUT -j LOG --log-prefix "FIREWALL:INPUT "
-A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
-A OUTPUT -j ACCEPT
-A DOCKER-USER -i eth0 -p tcp -s 10.202.30.10,10.202.30.11,10.101.10.36 --dport 6984 -m comment --comment "Allow CouchDB Traffic - " -j RETURN
-A DOCKER-USER -i eth0 -p tcp -s 0.0.0.0/0 --dport 6984 -m comment --comment "block non replication Coucdb nodes - " -j DROP
COMMIT
Reboot the server for the firewall rules to take effect.
If you get the syntax wrong, or don't place the correct filters in the right place, you may lock yourself out of the server as it will block all traffic in and out. You would then need to use the VMware console to access the server and remove the firewall rules.
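Given that lockout risk, the file can be syntax-checked before rebooting; iptables-restore's --test flag parses the ruleset without applying it:

iptables-restore --test < /etc/systemd/scripts/ip4save && echo "syntax OK"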
You can verify the ruleset with the following command at a root prompt:
iptables -t filter -vnL --line-numbers
Some output from the other tables and Docker-internal chains has been removed, but this shows that 10.202.30.10, 10.202.30.11 and 10.101.10.36 can communicate with TCP/6984 and everything else is dropped. You can see 55 packets have been blocked, and 9 packets have been allowed from 10.202.30.11.
[ /var/home/yarnman ]# iptables -t filter -vL --line-numbers
Chain INPUT (policy DROP 161 packets, 9805 bytes)
num   pkts bytes target  prot opt in   out  source        destination
1       70  5662 ACCEPT  all  --  lo   any  anywhere      anywhere
2      321 24962 ACCEPT  all  --  any  any  anywhere      anywhere     ctstate RELATED,ESTABLISHED
3        1    64 ACCEPT  tcp  --  any  any  anywhere      anywhere     tcp dpt:22

Chain DOCKER-USER (1 references)
num   pkts bytes target  prot opt in   out  source        destination
1        0     0 RETURN  tcp  --  eth0 any  10.202.30.10  anywhere     tcp dpt:6984 /* Allow CouchDB Traffic - */
2        9   732 RETURN  tcp  --  eth0 any  10.202.30.11  anywhere     tcp dpt:6984 /* Allow CouchDB Traffic - */
3        0     0 RETURN  tcp  --  eth0 any  10.101.10.36  anywhere     tcp dpt:6984 /* Allow CouchDB Traffic - */
4       55  3300 DROP    tcp  --  eth0 any  anywhere      anywhere     tcp dpt:6984 /* block non replication Coucdb nodes - */
5     3591  264K RETURN  all  --  any  any  anywhere      anywhere
default ip4save
This is the default /etc/systemd/scripts/ip4save, on the off-chance you need to roll back to it:
# init
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT DROP [0:0]
# Allow local-only connections
-A INPUT -i lo -j ACCEPT
-A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
#keep commented till upgrade issues are sorted
#-A INPUT -j LOG --log-prefix "FIREWALL:INPUT "
-A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
-A OUTPUT -j ACCEPT
COMMIT
To add other firewall rules, e.g. to allow traffic only from the same 3 IPs to Clevis/Tang (TCP/6655), insert them above the COMMIT line, e.g.:
-A DOCKER-USER -i eth0 -p tcp -s 10.202.30.10,10.202.30.11,10.101.10.36 --dport 6984 -m comment --comment "Allow CouchDB Traffic - " -j RETURN
-A DOCKER-USER -i eth0 -p tcp -s 0.0.0.0/0 --dport 6984 -m comment --comment "block non replication Coucdb nodes - " -j DROP
-A DOCKER-USER -i eth0 -p tcp -s 10.202.30.10,10.202.30.11,10.101.10.36 --dport 6655 -m comment --comment "Allow ClevisTang Traffic - " -j RETURN
-A DOCKER-USER -i eth0 -p tcp -s 0.0.0.0/0 --dport 6655 -m comment --comment "block non ClevisTang nodes - " -j DROP
COMMIT
The final /etc/systemd/scripts/ip4save will look like:
# init
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT DROP [0:0]
:DOCKER-USER - [0:0]
# Allow local-only connections
-A INPUT -i lo -j ACCEPT
-A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
#keep commented till upgrade issues are sorted
#-A INPUT -j LOG --log-prefix "FIREWALL:INPUT "
-A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
-A OUTPUT -j ACCEPT
-A DOCKER-USER -i eth0 -p tcp -s 10.202.30.10,10.202.30.11,10.101.10.36 --dport 6984 -m comment --comment "Allow CouchDB Traffic - " -j RETURN
-A DOCKER-USER -i eth0 -p tcp -s 0.0.0.0/0 --dport 6984 -m comment --comment "block non replication Coucdb nodes - " -j DROP
-A DOCKER-USER -i eth0 -p tcp -s 10.202.30.10,10.202.30.11,10.101.10.36 --dport 6655 -m comment --comment "Allow ClevisTang Traffic - " -j RETURN
-A DOCKER-USER -i eth0 -p tcp -s 0.0.0.0/0 --dport 6655 -m comment --comment "block non ClevisTang nodes - " -j DROP
COMMIT
Logging
Work in progress; some of the logging comments will be slightly different.
With iptables you need to LOG before dropping the packet; the simplest way is to duplicate the rule with a LOG jump:
-A DOCKER-USER -i eth0 -p tcp -s 0.0.0.0/0 --dport 6984 -m comment --comment "block non replication Coucdb nodes - " -m limit --limit 5/min -j LOG --log-prefix "couchdb drop -"
-A DOCKER-USER -i eth0 -p tcp -s 0.0.0.0/0 --dport 6984 -m comment --comment "block non replication Coucdb nodes - " -j DROP

You can view these in dmesg as root:

root@yarnman-1 [ /var/home/yarnman ]# dmesg
[   34.799506] couchdb drop -IN=eth0 OUT=br-7cee03840940 MAC=00:50:56:9f:04:4f:02:50:56:56:44:52:08:00 SRC=10.101.10.86 DST=10.222.1.4 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61886 DF PROTO=TCP SPT=59210 DPT=6984 WINDOW=42340 RES=0x00 SYN URGP=0
Monitoring
watch can be used to repeat the same command, to see the counters increase:
root@yarnman-1 [ /var/home/yarnman ]# watch 'iptables -t filter -v -L DOCKER-USER --line-numbers -n'

Every 2.0s: iptables -t filter -v -L DOCKER-USER --line-numbers -n    yarnman-1: Mon Jan 16 05:04:39 2023

Chain DOCKER-USER (1 references)
num   pkts bytes target prot opt in   out source        destination
1        0     0 RETURN tcp  --  eth0 *   10.202.30.10  0.0.0.0/0   tcp dpt:6984 /* Allow CouchDB Traffic - */
2        0     0 RETURN tcp  --  eth0 *   10.101.12.81  0.0.0.0/0   tcp dpt:6984 /* Allow CouchDB Traffic - */
3     8670 1270K RETURN tcp  --  eth0 *   10.101.12.82  0.0.0.0/0   tcp dpt:6984 /* Allow CouchDB Traffic - */
4      788  115K RETURN tcp  --  eth0 *   10.101.12.83  0.0.0.0/0   tcp dpt:6984 /* Allow CouchDB Traffic - */
5        0     0 LOG    tcp  --  eth0 *   0.0.0.0/0     0.0.0.0/0   tcp dpt:6984 /* block non replication Coucdb nodes - */ limit: avg 2/min burst 5 LOG flags 0 level 4 prefix "COUCHDB DROP-"
6        0     0 DROP   tcp  --  eth0 *   0.0.0.0/0     0.0.0.0/0   tcp dpt:6984 /* block non replication Coucdb nodes - */
7        0     0 RETURN tcp  --  eth0 *   10.202.30.10  0.0.0.0/0   tcp dpt:6655 /* Allow ClevisTang Traffic - */
8        0     0 RETURN tcp  --  eth0 *   10.101.12.81  0.0.0.0/0   tcp dpt:6655 /* Allow ClevisTang Traffic - */
9        0     0 RETURN tcp  --  eth0 *   10.101.12.82  0.0.0.0/0   tcp dpt:6655 /* Allow ClevisTang Traffic - */
10       0     0 RETURN tcp  --  eth0 *   10.101.12.83  0.0.0.0/0   tcp dpt:6655 /* Allow ClevisTang Traffic - */
11       0     0 LOG    tcp  --  eth0 *   0.0.0.0/0     0.0.0.0/0   tcp dpt:6655 /* block non ClevisTang nodes - */ limit: avg 2/min burst 5 LOG flags 0 level 4 prefix "CLEVISTANG DROP-"
12       0     0 DROP   tcp  --  eth0 *   0.0.0.0/0     0.0.0.0/0   tcp dpt:6655 /* block non ClevisTang nodes - */
13    865K   96M RETURN all  --  *    *   0.0.0.0/0     0.0.0.0/0
Highlight keywords from the log and show the current packet counts:
root@yarnman-2 [ /var/home/yarnman ]# journalctl _TRANSPORT=kernel | grep "DROP-" ; echo -e "\n#time now $(date)#\n"; iptables -t filter -v -L DOCKER-USER --line-numbers -n | grep -e "DROP " -e "pkts"
### lots of text scrolling ###
Jan 16 01:06:33 yarnman-2 kernel: COUCHDB DROP-IN=eth0 OUT=br-7adbc3fd84ef MAC=00:50:56:9f:cb:2d:02:50:56:56:44:52:08:00 SRC=10.101.10.100 DST=10.222.1.4 LEN=52 TOS=0x02 PREC=0x00 TTL=126 ID=9481 DF PROTO=TCP SPT=50173 DPT=6984 WINDOW=64240 RES=0x00 CWR ECE SYN URGP=0
Jan 16 01:06:33 yarnman-2 kernel: COUCHDB DROP-IN=eth0 OUT=br-7adbc3fd84ef MAC=00:50:56:9f:cb:2d:02:50:56:56:44:52:08:00 SRC=10.101.10.100 DST=10.222.1.4 LEN=52 TOS=0x02 PREC=0x00 TTL=126 ID=9482 DF PROTO=TCP SPT=50174 DPT=6984 WINDOW=64240 RES=0x00 CWR ECE SYN URGP=0

#time now Mon Jan 16 01:31:55 AM UTC 2023#

num   pkts bytes target prot opt in   out source     destination
6       18   936 DROP   tcp  --  eth0 *   0.0.0.0/0  0.0.0.0/0   tcp dpt:6984 /* block non replication Coucdb nodes - */
12       0     0 DROP   tcp  --  eth0 *   0.0.0.0/0  0.0.0.0/0   tcp dpt:6655 /* block non ClevisTang nodes - */