Yarngate - L1/L2/L3 Troubleshooting Guide

 

 

Can’t log in to Yarn_Gate Web Interface

  • An LDAP/AD user account is required to access Yarn_Gate; you cannot log in to Yarn_Gate using a local Yarnman user account.

  • Ensure the user is mapped to a Security Group in LDAP/AD that has a role assigned with access to Yarn_Gate in the authentication policy.

 

 

Missing Customer/Cluster/Interface when preparing new sessions

Example showing no matches found

Example showing limited visibility

  • Ensure the entitlement group has the expected Customer/Cluster/Interface assigned.

  • Ensure the entitlement group is associated with the Access Rule.

  • Ensure the AD Matching Group is associated with the Access Rule.

Unexpected level of access granted

Yarn_Gate allows for granular access by matching multiple AD/LDAP groups associated with different App Profiles that control the Read/Write permissions for applications, e.g., UCM.

Multiple AD/LDAP security groups may be assigned to a user; in this scenario, the App Profile weighting is applied, with the higher weighting determining the permission granted.

The App Profile weighting is manually set when creating the App Profile in Yarn_Gate.

In this example, we are testing the access granted when both the L1 and L3 Security Groups in AD/LDAP are assigned to a user. Below shows that both ReadOnly and ReadWrite are matched; as ReadWrite has a weighting of 2000, ReadWrite is the access granted.
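
As an illustration of the weighting logic only (not the actual Yarn_Gate implementation; the ReadOnly weighting of 1000 is an assumed value for this example), the matched App Profile with the highest weighting determines the access granted:

# Illustrative only: pick the matched App Profile with the highest weighting
printf '%s\n' "ReadOnly 1000" "ReadWrite 2000" | sort -k2,2n | tail -n 1
# Output: ReadWrite 2000 -> ReadWrite is the access granted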

 

How to view logs

GUI

Log in to each node via AppAdmin

Navigate from the left-hand menu to Administration > Yarnman Logs

Select the Node and Duration, then request the logs.

The job will be submitted and show a waiting status, followed by completed, with the logs available to download.

Logs can only be collected onto the node you are logged in to; however, logs from all nodes can be viewed

CLI

SSH into the Yarnman server

Issue sudo /usr/bin/ym-service-commands.sh yarnman-logs

This will display logs continuously (which could be very chatty on a busy system) for the Yarnman service (including Yarn_Gate)

The following command can be issued to export the logs to /tmp as hostname-yarnman.logs:

sudo ym-service-commands.sh yarnman-logs &> /tmp/"$(uname -n)"-yarnman.logs 

Press Ctrl+C to stop exporting logs to /tmp

These logs can then be collected via SCP

gzip hostname-yarnman.logs can be used to compress the file (producing a .gz archive) to reduce its size
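
For example (a sketch; the node hostname is a placeholder), the exported file can be compressed on the node and then copied off via SCP from a workstation:

# On the Yarnman node: compress the exported log file (gzip replaces it with a .gz file)
gzip /tmp/"$(uname -n)"-yarnman.logs

# From a workstation: pull the compressed file off the node via SCP
scp yarnman@<yarnman-node>:/tmp/*-yarnman.logs.gz .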

Can’t Access Yarn_Gate or Yarnman web interface

Check Yarnman is running

SSH into the Yarnman node with the issue

Check Yarnman is running with sudo /usr/bin/ym-service-commands.sh status

Below shows Yarnman is inactive and not running: Active: inactive (dead) since Wed 2023-03-22 04:44:19 UTC; 10s ago

You can also see in the last part of the logs that the containers have stopped

yarnman@yarnman-1 [ ~ ]$ sudo /usr/bin/ym-service-commands.sh status
● yarnman.service - yarnman
     Loaded: loaded (/usr/lib/systemd/system/yarnman.service; enabled; vendor preset: enabled)
     Active: inactive (dead) since Wed 2023-03-22 04:44:19 UTC; 10s ago
    Process: 1057 ExecStartPre=/usr/bin/docker-compose -f docker-compose.yml down (code=exited, status=0/SUCCESS)
    Process: 1118 ExecStart=/usr/bin/docker-compose -f docker-compose.yml -f docker-compose-override.yml up --remove-orphans (code=exited, status=0/SUCCESS)
    Process: 3303552 ExecStop=/usr/bin/docker-compose -f docker-compose.yml down (code=exited, status=0/SUCCESS)
   Main PID: 1118 (code=exited, status=0/SUCCESS)

Mar 22 04:44:11 yarnman-1 docker-compose[1118]: ym-couchdb exited with code 0
Mar 22 04:44:11 yarnman-1 docker-compose[3303552]: Container ym-couchdb Removed
Mar 22 04:44:19 yarnman-1 docker-compose[3303552]: Container ym-tang Stopped
Mar 22 04:44:19 yarnman-1 docker-compose[3303552]: Container ym-tang Removing
Mar 22 04:44:19 yarnman-1 docker-compose[1118]: ym-tang exited with code 137
Mar 22 04:44:19 yarnman-1 docker-compose[3303552]: Container ym-tang Removed
Mar 22 04:44:19 yarnman-1 docker-compose[3303552]: Network yarnman_yl-yarnman Removing
Mar 22 04:44:19 yarnman-1 docker-compose[3303552]: Network yarnman_yl-yarnman Removed
Mar 22 04:44:19 yarnman-1 systemd[1]: yarnman.service: Succeeded.
Mar 22 04:44:19 yarnman-1 systemd[1]: Stopped yarnman.

Try restarting the Yarnman service to restart the docker containers

sudo /usr/bin/ym-service-commands.sh restart

yarnman@yarnman-1 [ ~ ]$ sudo /usr/bin/ym-service-commands.sh restart
restarting yarnman.service
● yarnman.service - yarnman
     Loaded: loaded (/usr/lib/systemd/system/yarnman.service; enabled; vendor preset: enabled)
     Active: active (running) since Wed 2023-03-22 04:47:18 UTC; 6ms ago
    Process: 3303994 ExecStartPre=/usr/bin/docker-compose -f docker-compose.yml down (code=exited, status=0/SUCCESS)
   Main PID: 3304004 (docker-compose)
      Tasks: 4 (limit: 4694)
     Memory: 4.9M
     CGroup: /system.slice/yarnman.service
             └─3304004 /usr/bin/docker-compose -f docker-compose.yml -f docker-compose-override.yml up --remove-orphans

Recheck the status with sudo /usr/bin/ym-service-commands.sh status

Which is showing Active: active (running) since Wed 2023-03-22 04:47:18 UTC; 12s ago

and Yarnman running

yarnman@yarnman-1 [ ~ ]$ sudo /usr/bin/ym-service-commands.sh status
● yarnman.service - yarnman
     Loaded: loaded (/usr/lib/systemd/system/yarnman.service; enabled; vendor preset: enabled)
     Active: active (running) since Wed 2023-03-22 04:47:18 UTC; 12s ago
    Process: 3303994 ExecStartPre=/usr/bin/docker-compose -f docker-compose.yml down (code=exited, status=0/SUCCESS)
   Main PID: 3304004 (docker-compose)
      Tasks: 10 (limit: 4694)
     Memory: 11.5M
     CGroup: /system.slice/yarnman.service
             └─3304004 /usr/bin/docker-compose -f docker-compose.yml -f docker-compose-override.yml up --remove-orphans

Mar 22 04:47:31 yarnman-1 docker-compose[3304004]: ym-couchdb | [notice] 2023-03-22T04:47:31.051226Z nonode@nohost <0.701.0> -------- Starting replication bb32861690d0bd9795787bfe24566304+continuous (https://10.222.1.4:6984/yarnman-wrangler-migration-changes/ -> https://10.101.12.83:6984/yarnman-wrangler-migration-changes/) from doc _replicator:df82314b5c5f9e50578144a98d0775ec worker_procesess:4 worker_batch_size:500 session_id:095a96de4eac5987ae30f45a89762561
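
To confirm the individual containers after a restart, a standard Docker listing can be used (a sketch; assumes the logged-in user can run docker via sudo):

# List running containers and their status; the ym-* containers
# (for example ym-couchdb and ym-tang) should show as Up once Yarnman has started
sudo docker ps --format 'table {{.Names}}\t{{.Status}}'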

Yarnman Starts then stops with Clevis Tang/Encryption at rest enabled

The Clevis Tang encryption method requires at least two nodes to be online to unlock the encryption keys; Yarnman will continue to try to obtain the keys from Clevis Tang.

Issue sudo /usr/bin/ym-service-commands.sh status

This shows the last few lines of the active log. The key message is: 1679460919710 FATAL Could not decrypt configuration key 'couchdb.password': Failed to decrypt

 

  • Check connectivity to the other nodes (see the reachability check below).

  • Check firewall rules

  • Try restarting Yarnman services on local or remote nodes
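
For the connectivity check, a quick reachability test from the affected node can help (a sketch only; the node IP is a placeholder and the port used by the ym-tang container may differ in your deployment):

# Basic reachability to another Yarnman node
ping -c 3 <other-node-ip>

# Tang servers publish their key advertisement at /adv; treat the port as a
# placeholder for whatever the ym-tang container listens on in your deployment
curl -sf http://<other-node-ip>:<tang-port>/adv && echo "tang advertisement reachable"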

Below shows successful decryption with Yarnman starting up

 

Error creating an account on the target system

HTTP 400 error from the unity system
Session Exists
ACG/Role can’t be added/updated
Connection Error

 

Error trying to close session (use tombstone)

The tombstone feature will force the session to be removed and the state to be reset. This may be required if a target system is offline, the interface address changes, or the target node cannot be reached.

This is limited to Admin users

Error with missing version number for target interface

This requires an Admin user with access to AppAdmin to run Test Interface Connection. Test Interface Connection is normally run when adding a new interface or bulk loading.

Cannot log into target system CUCM/UCXN

Check that the role provisioned in the target system by Yarngate permits login

Cannot create sessions in target system

  • Check the error when creating sessions to ensure that the CUCM or UCXN application user password has not expired and the account is not locked (see the credential check sketch below)

  • Check that there are interconnects available on each Yarngate node

  • Adding or removing an interface requires the target credentials to be set again, followed by a test connection
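
As a quick way to check the CUCM application user credentials outside Yarn_Gate (a sketch only; assumes the AXL service is enabled on the cluster and the placeholders are replaced with real values):

# Returns a page confirming the AXL Web Service is working if the application
# user credentials are accepted; an authentication failure suggests the
# password has expired or the account is locked
curl -k -u '<axl-app-user>:<password>' https://<cucm-publisher>:8443/axl/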

No Matching Groups when testing access

  • Ensure that the full MemberOf group strings/keys are used in AD Matching Groups and via the Test Access Tool

  • MemberOf is case-sensitive

  • Use ADSI Edit or PowerShell in Windows to obtain the correct syntax/formatting/case (an ldapsearch alternative is sketched below the example)

Below shows that the user L13-yarngate has the MemberOf values

CN=yarngate-L3,OU=yarngate,DC=lab,DC=yarnlab,DC=io

and

CN=yarngate-L1,OU=yarngate,DC=lab,DC=yarnlab,DC=io
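
If a Linux host with the OpenLDAP client tools is available, ldapsearch can also return the exact memberOf strings (a sketch; the server, bind account, base DN and username are placeholders for your environment):

# Query AD/LDAP for the user's memberOf attribute; the values returned are the
# exact DN strings (including case) to use in AD Matching Groups
ldapsearch -LLL -H ldaps://<ad-server> \
  -D '<bind-user>@lab.yarnlab.io' -W \
  -b 'DC=lab,DC=yarnlab,DC=io' \
  '(sAMAccountName=<username>)' memberOf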

 

Database Replication Status

Navigate to Nodes > Local Node > Replication

Below shows the status of the replication to one of the servers in a bad ‘crashing’ state

  • Check connectivity to the other nodes (see the reachability check below)

  • Check firewall rules

  • Press Sync to force a re-sync

  • CouchDB has an exponential backoff algorithm to handle connection timeouts
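
To confirm the remote CouchDB replication endpoint is reachable from the node reporting the crashing state (a sketch; the IP is a placeholder and 6984 is the port shown in the replication URLs above, which may differ in your deployment):

# A {"status":"ok"} response indicates the remote CouchDB is up and reachable;
# a timeout or connection refused points to a network or firewall issue
curl -ks https://<other-node-ip>:6984/_up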

 

RBAC overview flow

  • Entitlement Groups contain Customers, Clusters, and Interfaces; these are associated to Access Rule(s)

  • Matching Groups match defined Active Directory Security Groups; these are associated to Access Rule(s)

  • App Profiles contain permission and authentication rules for CUCM, Unity, Expressway, and SSH

  • Access Rules tie all of the above together so that the correct permission/access is enforced

How Yarn_Gate determines access

 

  • AD/LDAP Userid contains Security Groups >

  • Yarn_Gate compares to AD Matching Groups >

  • Matches Access Rule(s) >

  • Matches Entitlements >

  • Matches App Profile with Level/Weighting to select read/readwrite

How to Configure Yarn_Gate

 

  • Configure AD Matching Group(s) >

  • Configure App Profile(s) >

  • Configure Access Rule(s)