Yarngate Configuration

This section covers the setup of the Yarngate application.

 

This section assumes that Yarngate access has been configured and that the user can log in to Yarngate using LDAP credentials.

This section assumes that Customers, Clusters, and Interfaces have been added, either manually via the administration application or via bulk import.

 

Yarngate Access Profiles and Access Rules

 

Access to the Yarngate application and its various roles is controlled by the authentication policy, which maps LDAP groups to Yarnman roles.

 

Once a user has access to Yarngate, their system access is determined by the following:

 

Entitlement Group

The entitlement group defines which systems can be accessed. To set up an Entitlement Group, from the Yarngate menu select Access Rules → Entitlements Group, then select New Entitlement Group. Access can be granted by Customer, Cluster, or specific interface.

Entitlement Group Bulk Import

There is an option to bulk import using an Excel spreadsheet.

Download the template, populate it as required, and upload it.

Matching Group

The matching group defines which LDAP groups or users are matched for access.

Access Profile

The access profile defines the configuration of the access. Note that the screenshot below shows the screen after General Settings has first been saved.

  • CUCM - additional template configuration is required

  • UCXN - additional template configuration is required

CUCM Template

CUCM Roles (Custom)
  1. If custom CUCM roles are required, they can be created. Go to Access Profiles → CUCM Templates → Roles, then select New Role Template

  2. The roles need to be imported from a test CUCM. Use the latest version of CUCM in scope to import the latest roles; Yarngate handles backward compatibility for roles

  3. Select the interface from which to import the roles

  4. Select the required Roles

     

 

 

CUCM Credential Policy
  1. Create the credential policy with the required settings. Go to System Templates → CUCM Templates → Credential Policies, then select New Credential Policy

CUCM Access Control Group
  1. Create the Access Control Group template. Go to System Templates → CUCM Templates → Access Control Groups, then select New Access Control Group Template

CUCM Default Roles
Custom Role Template

UCXN Templates

Authentication Rule
  1. Create the required authentication rule. Go to System Templates → UCXN Templates → Authentication Rule, then select New Authentication Rule Template

User Template

Access Rule

The access rule links all of the configuration together. Go to Access Rules → Rules, then select New Access Rule.

Test Access

The Test Access tool can be used to check what access a user has.

Configure policies

Policies are a set of checks or parameters that are applied to caches when they run.

Node Age

This policy is applied to node discovery, allowing reports to be run on nodes removed from Yarngate (e.g. the interface) or on nodes removed from a cluster.

It defines the number of days before a node is marked as Inactive and when the node is deleted. The default is shown below.

A custom policy can be added by navigating to Administration > Caching > Policies > Create Policy.

Account Age

A custom policy can be added by navigating to Administration > Caching > Policies > Create Policy.

 

Audit Config

An Audit policy is used with the Node Cache and the System Audit Check report. Yarngate collects the audit configuration settings via AXL/SOAP/REST and validates them against the policy.

CUCM

  • Audit enabled

  • Detailed audit enabled

  • Audit level at least 6 (informational)

  • Audit logs sent to one of the defined syslog servers (the CUCM audit config can only send to one syslog server)
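The checks above can be expressed as a simple validation routine. This is a minimal sketch; the configuration field names and policy structure here are illustrative assumptions, not Yarngate's actual schema:

```python
# Minimal sketch of the CUCM audit-policy checks described above.
# Field and policy key names are illustrative assumptions, not Yarngate's schema.
def check_cucm_audit(config: dict, policy: dict) -> list[str]:
    """Return a list of failed checks for a CUCM audit configuration."""
    failures = []
    if not config.get("audit_enabled"):
        failures.append("audit not enabled")
    if not config.get("detailed_audit_enabled"):
        failures.append("detailed audit not enabled")
    if config.get("audit_level", 0) < 6:
        failures.append("audit level below 6 (informational)")
    # CUCM audit config can only send to one syslog server, so that
    # single destination must be one of the servers defined in the policy.
    if config.get("syslog_server") not in policy.get("syslog_servers", []):
        failures.append("syslog destination not in defined servers")
    return failures
```

A node passing all four checks returns an empty list; any failures would be flagged in the System Audit Check report.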

Expressway

At least one of the four syslog destinations must match the following:

  • Audit level at least Informational

  • Sent to one of the defined syslog servers

  • Format set to IETF

  • Port number 601

  • Filter set

Unity

Not supported at this time due to CSCwi88877; only syslog activity can be used.

An Audit policy can be added by navigating to Administration > Caching > Policies > Create Policy.

 

 

Configure Caches

Nodes

The Node Cache speeds up reporting by removing the need to poll the systems each time a report is run. The Node Cache is populated via entitlement group(s).

Policies are applied to the Node cache to check for Audit configuration settings and age out deleted or unreachable Nodes per the Policy.

A schedule is recommended to keep the cache up to date; run it just before a maintenance window ends to capture any nodes that have been added or removed.

Include syslog activity is optional; when enabled, it displays a tally of the number of syslog messages received in the last 3 days from Elasticsearch.

Navigate to Administration > Caching > Caches > Create Cache

Apply the Node policy, Audit policy, and Entitlement group(s), then press Save.

Schedule

Once saved, navigate back to the new cache, and the Add a Schedule button will be visible.

 

Add the required details for the schedule, with Enable toggled on at the bottom.

Navigate back to the cache at any time to see the schedule status.

To modify or disable the schedule, press Modify, make the required changes, and press Save.

 

Subnet Groups

Subnet groups contain individual hosts or subnets; these are then linked to subnet lists.

Subnet Lists

A subnet list contains one or more subnet groups; these are applied to robot accounts to validate that robot accounts are being used from known hosts/systems.
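This host/subnet matching can be sketched with Python's standard ipaddress module (the subnet-list structure here is an assumption for illustration):

```python
import ipaddress

# Sketch of checking a robot-account login IP against a subnet list.
# A subnet list may contain individual hosts ("192.168.5.10") or
# subnets ("10.1.0.0/16"); a bare host parses as a /32 network.
def ip_in_subnet_list(ip: str, subnet_list: list[str]) -> bool:
    """Return True if ip falls within any host or subnet in the list."""
    addr = ipaddress.ip_address(ip)
    return any(addr in ipaddress.ip_network(net) for net in subnet_list)
```

A login from an address that matches no entry would be counted as unmatched in the Robot report.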

Robot Accounts

Robot accounts are manually entered and assigned to a Subnet List. These will feed into the Robot report.

Admin Users

Provides a cache and history of all admin user accounts.

 

Reporting

Templates

Templates can be used to run ad hoc reports with pre-filled details, or assigned to a schedule to run recurring reports.

Navigate to Administration > Reporting > Templates

Select the template type and populate the presets.

 

Select a Template and press Create Report From Template to run an ad hoc report.

 

 

 

Schedules

Report templates are assigned to a schedule to be run at regular intervals:

  • Hourly

  • Daily

  • Weekly

  • Monthly

PRTG Sensor Push

Reports are created based on a schedule, with the option to send element details to winprtg via an HTTP(S) push.

Sensor Prefix specifies a name for each winprtg sensor, depending on the state of the two toggles.

Detailed Node Message Text includes the hostname of each node with an issue in the alert text.

Per Customer Sensor PRTG suffixes each customer name to the prefix, creating a separate sensor sent to winprtg. If Detailed Node Message Text is also selected, each customer sensor lists only the nodes for that customer.
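The sensor naming implied by the two toggles can be sketched as follows (illustrative only; the per-customer sensor name is inferred from the samples later in this section):

```python
# Sketch of the sensor IDs produced for one schedule run.
# Naming is inferred from the sample pushes (e.g. "sac01-status");
# the exact per-customer sensor name is an assumption.
def sensor_ids(prefix: str, per_customer: bool, customers: list[str]) -> list[str]:
    """Per-customer sensors first (if enabled), then the status sensor."""
    ids = [f"{prefix}-{c}" for c in customers] if per_customer else []
    ids.append(f"{prefix}-status")
    return ids
```

With Per Customer Sensor PRTG off, only the single prefix-status sensor is pushed.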

See the report for details of the toggle outputs.

Currently, the Metrics pushed to winprtg include

Field

Value

Field

Value

sensorId

Text

prefix-status

prefix-customer

Text

Text

ScheduleName xyz Ran Ok <report>uuid</report><error>nodename</error<warn>nodename</warn>

ReportStatus

0 for normal

1 for warning flag

PassedElements

Numeric

PassedElements

Numeric

ErrorElements

Numeric

WarningElements

Numeric

SkippedElements

Numeric

TotalElements

Numeric

PassedElementsPct

Percent

ErrorElementsPct

Percent

WarningElementsPct

Percent
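The percentage channels are derived from the element counts. In the sample pushes later in this section, 8 passed elements of 19 total is reported as 43%, which suggests rounding up; the sketch below assumes that behaviour:

```python
import math

# Sketch of building the PRTG "Result" channel list from element counts.
# Channel names follow the metrics table above; the samples suggest
# percentages are rounded up (8/19 -> 43%), so ceil is assumed here.
def build_channels(passed: int, errors: int, warnings: int, skipped: int,
                   report_status: int = 0) -> list[dict]:
    total = passed + errors + warnings + skipped
    def pct(n: int) -> int:
        return math.ceil(100 * n / total)
    return [
        {"Channel": "ReportStatus", "Warning": 0, "Value": report_status},
        {"Channel": "PassedElements", "Value": passed},
        {"Channel": "ErrorElements", "Value": errors},
        {"Channel": "WarningElements", "Value": warnings},
        {"Channel": "SkippedElements", "Value": skipped},
        {"Channel": "TotalElements", "Value": total},
        {"Channel": "PassedElementsPct", "Value": pct(passed), "Unit": "Percent"},
        {"Channel": "ErrorElementsPct", "Value": pct(errors), "Unit": "Percent"},
        {"Channel": "WarningElementsPct", "Value": pct(warnings), "Unit": "Percent"},
    ]
```

With the counts from the samples (8 passed, 6 errors, 1 warning, 4 skipped), this reproduces the channel values shown in the JSON pushes.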

Robot Accounts

This report cross-checks robot accounts logging in from unknown IP addresses against the defined subnets.

The report within the GUI will show logins from unknown (unmatched subnets) and the tally.

The xlsx contains all the Matched and Unmatched Subnets for the Robot accounts.

Due to the unstructured nature of the syslog messages, we need to exclude any activities from an end user as well as some internal system actions; below are the default exclusions.

 

 

System Audit Check

This report uses the Node Cache to validate that the configured settings comply with the Audit Policy; the check is performed on Node Cache data.

The xlsx export contains all the configured details and the policy being tested against for CUCM, Expressway, and Unity*, plus the syslog tally if enabled.

Note: Unity only supports the syslog tally at this time.

 

Sample PRTG Rest Push

System Audit Check

The prefix is applied to each sensor push; prefix-status is always sent for each type.

Yarngate records the response from PRTG, which only returns an error if the sensor doesn't match (e.g. not configured). This is recorded as Success True/False.

Setting Detailed Node Message Text to Off and Per Customer Sensor PRTG to Off

The sensor prefix is sac00; there is only a single push, with numeric data only.

{"success":true,"data":{"sensorId":"sac00-status","inputs":{"Prtg":{"Text":"ScheduleName xyz00 Ran Ok <report>902aa63985d91e9a275c4965a7d73b36</report>","Result":[{"Channel":"ReportStatus","Warning":0,"Value":0},{"Channel":"PassedElements","Value":8},{"Channel":"ErrorElements","Value":6},{"Channel":"WarningElements","Value":1},{"Channel":"SkippedElements","Value":4},{"Channel":"TotalElements","Value":19},{"Channel":"PassedElementsPct","Value":43,"Unit":"Percent"},{"Channel":"ErrorElementsPct","Value":32,"Unit":"Percent"},{"Channel":"WarningElementsPct","Value":6,"Unit":"Percent"}]}}}}
Setting Detailed Node Message Text to On and Per Customer Sensor PRTG to Off

The sensor prefix is sac10; there is only a single push, with numeric data and all node names in the text.

{"success":true,"data":{"sensorId":"sac10-status","inputs":{"Prtg":{"Text":"ScheduleName xyz10 Ran Ok <report>2b30510b4640128bec616f2cbcbc41fb</report><error>ucmc7-cuc ucmc7 ucmc6 ucmc5 labimp115-pub labcucm115-sub</error><warn>tm999cms01</warn>","Result":[{"Channel":"ReportStatus","Warning":0,"Value":0},{"Channel":"PassedElements","Value":8},{"Channel":"ErrorElements","Value":6},{"Channel":"WarningElements","Value":1},{"Channel":"SkippedElements","Value":4},{"Channel":"TotalElements","Value":19},{"Channel":"PassedElementsPct","Value":43,"Unit":"Percent"},{"Channel":"ErrorElementsPct","Value":32,"Unit":"Percent"},{"Channel":"WarningElementsPct","Value":6,"Unit":"Percent"}]}}}}
Setting Detailed Node Message Text to Off and Per Customer Sensor PRTG to On

The sensor prefix is sac01; for each customer there is a separate sensor push with numeric counters, followed by a -status push as the last.

{"success":true,"data":{"sensorId":"sac01-status","inputs":{"Prtg":{"Text":"ScheduleName xyz01 Ran Ok <report>902aa63985d91e9a275c4965a7d69290</report>","Result":[{"Channel":"ReportStatus","Warning":0,"Value":0},{"Channel":"PassedElements","Value":8},{"Channel":"ErrorElements","Value":6},{"Channel":"WarningElements","Value":1},{"Channel":"SkippedElements","Value":4},{"Channel":"TotalElements","Value":19},{"Channel":"PassedElementsPct","Value":43,"Unit":"Percent"},{"Channel":"ErrorElementsPct","Value":32,"Unit":"Percent"},{"Channel":"WarningElementsPct","Value":6,"Unit":"Percent"}]}}}}

Note: success is false for the second element, indicating it is not configured in PRTG.

Setting Detailed Node Message Text to On and Per Customer Sensor PRTG to On

The sensor prefix is sac11; for each customer there is a separate sensor push with numeric data and node names in the text, followed by a -status push as the last.

For the prefix-customer sensor, only the nodes that belong to the customer tm999-cust are populated in the text.