One of the most common questions that comes up during CA Single Sign-On Professional Services engagements is: “What ports do I need to open for CA Single Sign-On?” This is generally followed by: “What does each port do?” These are great questions and we wanted to consolidate the answers in one place. And so, without further ado, CoreBlox proudly presents our first chapter in our Unofficial CA Single Sign-On Guide: Ports!
When CA Single Sign-On is configured correctly, it just works and it works well! Sometimes getting through that initial configuration can be a bit like playing a game of Tetris, especially in an organization that relies on firewalls to control access to specific ports.
Below is a list of the default ports that are commonly associated with CA Single Sign-On implementations. By no means is this definitive, as configurations will vary between organizations based upon requirements and standards. However, this is a good starting point when working with security and network teams during the installation and configuration of CA Single Sign-On.
Port # | Use | Open Between | Comment
44441 | Web Agent Accounting Port | Web Agent / Policy Server | Accounting Port
44442 | Web Agent Authentication Port | Web Agent / Policy Server | * Required - Performs Authentication Requests to Policy Server
44443 | Web Agent Authorization Port | Web Agent / Policy Server | * Required - Performs Authorization Requests to Policy Server
44444 | Web Agent Administration Port | Policy Server | Not used by the Web Agent; used by the Policy Server for the AdminUI
8080 | AdminUI HTTP | Browser / AdminUI Service | Used for non-secure connection to the WAMUI console
8443 | AdminUI HTTPS | Browser / AdminUI Service | Used for secure connection to the WAMUI console
8180 | JBoss Service Ports | Browser / JBoss | Not used in normal operation
389 | LDAP | Policy Server / User-Policy Store | Used for non-secure connection to an LDAP server
636 | LDAP (Secure) | Policy Server / User-Policy Store | Used for secure connection to an LDAP server
1433 | SQL | Policy Server / User-Policy Store | Used for communication with an SQL data source
44449 | OneView Agent | OneView Agent / OneView Monitor | Used for communication between the OneView Agent and Monitor
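If you need to confirm that the firewall rules are actually in place, a quick reachability test from the Web Agent host toward the Policy Server can save a call to the network team. Below is a minimal sketch using netcat; the Policy Server hostname is a placeholder, and the port list matches the required defaults from the table above.
# Check the default Web Agent -> Policy Server ports (hostname is an example)
for port in 44441 44442 44443; do
  nc -zv -w 5 policyserver.example.com "$port" && echo "port $port reachable" || echo "port $port blocked or closed"
done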
Two decades in the Identity & Access Management space has exposed us to our fair share of “where did we go wrong?” scenarios - organizations that thought they were following best practices and ended up creating problems for themselves over time. One especially problematic area has to do with role management and traditional RBAC (role-based access control). Often, organizations start off with the best intentions and establish just a few roles:
Admin
Employee
Customer
Partner
The roles become more granular over time:
Admin | Employee | Customer | Partner
SuperAdmin | Employee - HR | Customer - Platinum Support | Partner - Support
RegularAdmin | Employee - IT | Customer - Gold Support | Partner - Implementation
LightAdmin | Employee - Sales | Customer - Trial | Partner - Temp
AdminTemp | Employee - Support | Customer - Temp | Partner - Marketing
Before you know it, that “handful” of roles you started with has expanded into a tangled web, creating an administrative burden and taxing the systems whose rules rely upon them. CoreBlox has seen environments that have over 15,000 roles! In the IAM industry this is generally referred to as the dreaded “role proliferation” (cue the Darth Vader theme).
Fortunately, there is a great alternative to RBAC. Our partner, Axiomatics, has pioneered the concept of Attribute-Based Access Control, also known as “ABAC”. The thought process behind ABAC is easy to understand: why create new data attributes to manage (e.g. Roles) when you can let the user data speak for itself?
Organizations that already use CA Single Sign-On for web access control have a distinct advantage when it comes to implementing an ABAC approach. The Axiomatics Extension for CA Single Sign-On allows policy decisions to be made by Axiomatics’ ABAC-based engine. A simple yes/no response is returned to CA SSO based upon the user’s attributes. It just works, no coding necessary!
(This is the second chapter in our new series, the Unofficial CA Single Sign-On Guide. You can find Chapter 1 here.)
I’m sure you’ve seen it! Whether it was on one of those tacky motivation posters or during a 3 a.m. Tony Robbins infomercial… the concept of "trust". It is usually demonstrated by somebody blindly falling backwards and trusting their partner or team to catch them. It looks convincing when you see it on television, but if you are like me you start wondering how many takes it took to make it look that easy. I believe it is part of human nature to want to ‘Trust’ but in the end we usually go with ‘Trust, but verify!’. That verification piece is especially important when it comes to your SSO solution!
If you have installed a CA security product in the past, you have no doubt seen one of the following conclusion messages: ‘Installation Successful’, ‘Installation Successful but with errors’ or ‘Installation Failed’. Unfortunately, these messages are not always accurate. I have seen successful completions that were…. well…not successful. Other times it was successful with errors, but when you review the installation log there is little to no information in it. So, what is one to do?
This brings us to the installation debugger. It is not in the manual, and often when I am on-site with a client they have no idea this function even exists. But yes, Virginia: there is a debugger!
Below are the methods for starting the debugger during Windows and Linux installations of CA Single Sign-On:
Windows
Running the debugger in Windows is very simple. Once you start the installer, just hold down the [Ctrl] button during the initialization screen (see below) until you see a DOS box pop up in the background. Once the DOS box has opened, you can release the [Ctrl] button and continue with your install. One important thing to note for Windows is that the DOS window will close once you exit the installer, so before you hit that final button to exit, be sure to select all the content of the DOS window and copy and paste it into a text editor so that it can be saved for reference.
Initialization Screen - Hold down the [Ctrl] button until you see the screen below then release the control button.
You know the debugger has started once you see this DOS window pop-up in the background.
Linux
Unlike Windows, running the debugger in Linux will automatically write the content to a log file.
Before running the installation script, enter the following command (note this command could vary slightly depending on the shell in use)
export LAX_DEBUG=true
Then start the installer script as you normally would.
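Putting it together, the full Linux sequence looks something like the following. The installer filename here is only an example, and the tee simply keeps a copy of anything printed to the console.
export LAX_DEBUG=true
./ca-sso-policy-server-12.8-linux-x86-64.bin 2>&1 | tee install-debug.log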
Running the debugger during the installation will not ‘fix’ a potential problem, but it may provide some specific information (or errors if you are lucky) to assist you with finding the source of the problem so that you can resolve it.
“…where you’ll also have a chance to compete for the 2014 PingCup and get your offering in front of our customers.”
These were the words that arrived in my in-box back at the end of May. As a Ping Identity Platinum Services Partner, we look forward to these types of events as it gives us insight into Ping’s roadmap as well as a great opportunity to share our expertise with Ping’s customers we might not otherwise have.
We knew we weren’t going to miss PingCon, so that left one question, “What do we present for PingCup?”
With CoreBlox’s history and expertise in Identity and Access Management consulting, we’ve been very excited about Ping’s Federated Access Control product, PingAccess. PingAccess brings a standards-based approach (like all things Ping Identity does) to the SSO/Access Control space. With support for OpenID Connect (OIDC) for user Authentication as well as a Reverse Proxy and/or “Agent” based deployment option, it’s a powerful piece of technology. Ping has made it as simple to deploy and upgrade as their PingFederate product, which means organizations of all sizes will find it an attractive entrant to the IAM space.
In the process of exploring PingAccess, we realized that for customers that choose to utilize both PingAccess and SiteMinder (or are migrating from one to the other), we had an opportunity to simplify the integration between the two by leveraging our CTS solution in the form of a custom Site Authenticator.
PingAccess Site Authenticator – Some Background
PingAccess has a feature called the “Site Authenticator”. Site Authenticator’s are used when PingAccess is deployed in “Gateway” mode (a.k.a. Reverse Proxy) and the web resources being protected by PingAccess have their own token/session requirements for authenticated users that must be kept in place.
PingAccess ships with the Token Mediator Site Authenticator, which allows PingAccess to leverage the Security Token Service (STS) built into PingFederate and exchange the PingAccess session token (called PA Token, which is a signed or encrypted JSON Web Token (JWT)) for the required back-end token type utilizing an available PingFederate Token Translator. For customers using the CoreBlox Token Service, PingAccess, PingFederate and CA SiteMinder, the token exchange looks something like this:
PingAccess Token Mediator Site Authenticator
The notable piece here is that PingAccess calls PingFederate in order to utilize the CoreBlox Token Translator. The CoreBlox Token Translator then calls CTS in order to get the necessary SMSESSION information from SiteMinder. All this info is returned to the Token Mediator Site Authenticator, which injects the SMSESSION cookie into the backend request without the user ever being prompted by SiteMinder to authenticate.
While this works today to give customers seamless SSO between PingAccess and SiteMinder, it does have some drawbacks:
1. Limited to WAM systems where a PingFederate Token Translator exists
2. Extra protocol translation from WS-Trust to JSON REST
3. Extra traffic/hops that must pass through PingFederate
4. Another service that needs to be configured, monitored and troubleshot
5. Potentially new license file to enable STS in PingFederate that must be applied
We think there’s a simpler way to achieve the same thing: The PingAccess CTS Site Authenticator
PingAccess CTS Site Authenticator
We thought to ourselves, “What if there was a way to directly integrate PingAccess to the CoreBlox Token Service without having to pass through PingFederate?” This would eliminate issues #1-#5 listed above AND be simpler/quicker for customers to configure. So we went ahead and wrote the custom integration using the brand new PingAccess 3.0 SDK that Ping Identity just released at the Cloud Identity Summit and presented it to all the attendees at PingCon.
Now, when customers need to provide seamless access between PingAccess and SiteMinder, they can utilize our CTS Site Authenticator for PingAccess. With the custom Site Authenticator in place, the flow now looks like:
PingAccess CTS Site Authenticator
In the flow above, you’ll notice that PingFederate is no longer used as an intermediary to CTS. By using the CTS Site Authenticator, PingAccess has the ability to interface directly with the CoreBlox Token Service.
This setup has the following benefits:
Support for CA SiteMinder today via CTS. Additional WAM support is being planned.
No protocol translation required. Simple JSON REST call from PingAccess to CTS
Reduced traffic load on PingFederate
2 fewer configuration points. No need for Token Processor & Token Generator in PingFederate STS
Uses the existing PingFederate license. No additional features required.
If you’d like to find out more about our CTS Site Authenticator and/or CoreBlox Token Service, email sales@coreblox.com or dial 1-877-TRY-BLOX.
If you’ve recently visited ca.com then you’re probably aware of CA Technologies' focus on the evolving needs of the enterprise as it builds the “Modern Software Factory”. At CA World 2016, CEO Michael Gregoire used his keynote to discuss companies that are built to change. Otto Berkes' keynote described what a Modern Software Factory is and why enterprises need to streamline innovation so that ideas can turn into new customer experiences quickly and efficiently.
He identified 5 key principles of a Modern Software Factory:
Agility
Experience
Automation
Security
Insight
It was a fresh perspective on the challenges our customers face and how to meet them. I recently found myself reflecting on how CoreBlox, a CA Focus Partner, is already aligned with the vision for the Modern Software Factory. Many IAM industry people know of our architecture and services delivery capabilities, but we are also a software company. Our CoreBlox Token Service allows CA Single Sign-On to securely exchange tokens with PingFederate, an increasingly common need within large organizations that have security solutions from multiple vendors. Our ToolBox for CA Single Sign-On automates and streamlines common CA SSO administrative tasks while increasing overall security and easing regulatory compliance. Developing, refining and supporting these products has given us a taste of what it's like to run our own Modern Software Factory. But how do they contribute to our clients' own ability to adapt to an ever changing market?
ToolBox allows you to be Agile in your daily security management practices. It enables you to easily promote SSO policies across environments and seamlessly onboard new applications.
ToolBox helps to drive ever-evolving user Experiences. Companies that are releasing new applications and onboarding new users daily need to be able to control access by defining new policies and updating existing ones. ToolBox centralizes the management of these policies across environments so that the user experience is consistent and predictable.
ToolBox is the Automation engine for CA Single Sign-On. Its intuitive user interface makes most of your common administrative tasks as simple as pushing a button. ToolBox's template-based approach makes it easy to re-use configurations that have already been created.
ToolBox was designed to bring Security to your CA Single Sign-On operations. With ToolBox, you'll be able to delegate administrative functions and precisely control user access across environments. Simplified policy testing allows you to eliminate errors that cause unintended vulnerabilities. With all of your environment changes audited, compliance requirements are easy to fulfill.
ToolBox delivers Insights into how your security policies are being configured and the subtle differences between your environments that could impact user experiences. Its optimization functions highlight subtle configuration tweaks that can improve performance and allow CA Single Sign-On to grow and change along with your business.
CoreBlox is committed to building products and solutions for the Modern Software Factory while incorporating its key principles into our own day to day experiences as a software company. We're excited to be aligned with CA Technologies on this quest!
CoreBlox Senior Architect Anthony Hammonds recently participated in our partner Radiant Logic's webinar focused on how to virtualize SailPoint IdentityIQ's database with RadiantOne such that it can be easily extended for use with LDAP applications, WAM systems, and Federation. The webinar playback and presentation can be found on Radiant Logic's web site:
We ran into a problem during a recent installation of CA Access Gateway 12.6 (formerly known as CA Secure Proxy Server) on Red Hat Linux, and would like to share the solution.
Upon launching the installer, the following error was displayed: "JRE libraries are missing or not compatible..."
This may have to do with insufficient permissions in the /tmp directory. In environments where obtaining the required permissions may not be straightforward due to how the server is locked down, security policies, etc., there is a simple workaround.
You need to create a new "temp" directory in a location where you do have the proper permissions (for example, /opt/myapplication/tmp), and then set an environment variable called "IATEMPDIR". Example:
mkdir /opt/myapplication/tmp
export IATEMPDIR=/opt/myapplication/tmp
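Before relaunching the installer, it is worth confirming that the variable is set and that the new directory is writable. A quick check might look like this:
echo "$IATEMPDIR"
touch "$IATEMPDIR/.write_test" && echo "temp dir is writable" && rm "$IATEMPDIR/.write_test"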
You should now be able to successfully launch the installer without encountering the "JRE libraries are missing or not compatible" error.
Microservices allow applications to be created using a collection of loosely coupled services. The services are fine-grained and lightweight. This improves modularity and enables flexibility during the development phase of the application, making the application easier to understand. When designing applications, identity becomes a key factor in building out a personalized user experience. Identity also enables other microservices for tasks like authorization (with products such as Axiomatics), single sign-on, identity management and compliance.
However, access to profile data presents a challenge since it is spread across multiple repositories, contained within other applications, or must even be consumed from other microservices. The Identity Microservice must be able to not only respond to requests through a standard protocol for identity information, but must also have the means to reach out to these identity repositories in an efficient and responsive manner. The Identity Microservice must also allow for both user-driven and server-to-server access to identity data.
The following diagram breaks down the components of the Identity Microservice:
The Identity Microservice at its core is made up of four layers:
The server and web application clients of the Identity Microservice
The OAuth Authorization Server
The UserInfo Endpoint
The Federated Identity Service
Each of these layers performs a crucial role in securing access to identity data and also allows the microservice to obtain identity data from the required repositories. Breaking this down further:
The OAuth Authorization Server provides secure access to the Identity Microservice
The UserInfo Endpoint handles the requests for identity data and returns the requested profile information (claims)
The Federated Identity Service provides a centralized hub for obtaining application-specific profile data from directories, applications, databases and other microservices
Additionally, the Federated Identity Service needs to be able to aggregate and correlate profile data and leverage a real-time cache to ensure that access to profile data performs quickly and within the required application service levels
Today, the Identity Microservice’s components are based upon open standards and are both lightweight and highly leveraged by web applications and servers.
There are two main client flows supported by the microservice:
User-driven Web Application flow
Server-driven flow
Each of these flows require a different means of interacting with the Identity Microservice.
User-Driven Web Application Flow
Identity is at the core of nearly all web applications - everything from the initial authentication and authorization through to personalization with profile data. When you log into your banking application, it not only needs to securely identify you as the user, but must also authorize access to your accounts and personalize the site based on your profile. Would you trust a banking application that listed your identity as “User”?
The following diagram breaks down the user-driven Web Application flow:
User accesses the Web Application
The Web Application redirects user to the Identity Microservice’s Authorization Server with a client ID and application scope
User authenticates and authorizes request
Authorization Server redirects user back to the Web Application with an authorization code
The Web Application sends the authorization code to the Authorization Server with its client secret
The Authorization Server returns an access token and ID token
The Web Application sends the access token to the Identity Microservice’s UserInfo endpoint
The Identity Microservice’s Federated Identity Service matches the application scope to the defined view and returns requested attributes
The Authorization Server returns the requested user information (claims) from the UserInfo endpoint to the Web Application
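To make the flow concrete, the token exchange and UserInfo calls (steps 5 through 9) map directly onto standard OAuth 2.0 / OpenID Connect requests. The sketch below uses curl; the hostnames, endpoint paths, client credentials and codes are placeholders following common OIDC conventions, and your actual endpoints may differ.
# Exchange the authorization code for an access token and ID token (steps 5-6)
curl -s -u my_client_id:my_client_secret \
  -d "grant_type=authorization_code&code=AUTH_CODE_FROM_REDIRECT&redirect_uri=https://app.example.com/callback" \
  https://idm.example.com/oauth2/token
# Present the access token to the UserInfo endpoint to retrieve the claims (steps 7-9)
curl -s -H "Authorization: Bearer ACCESS_TOKEN_FROM_ABOVE" \
  https://idm.example.com/oauth2/userinfo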
There are several key factors in this flow:
The scope sent to the Identity Microservice is the application, or view, for the requested profile data
The view defined in the Federated Identity Service is application-specific and can be limited to just the profile data needed for the authorized application
Multiple application-specific views can be supported by the Identity Microservice
Authentication can be easily mapped back to the user’s profile repository by the Federated Identity Service allowing client web applications to completely delegate authentication to the microservice
Server-Driven Flow
The Server-driven flow is similar to the user-driven Web Application flow, but no user interaction is present for this transaction. It allows for backend access to profile data. In this case, the server is being authenticated rather than the user.
The following diagram breaks down the Server-driven flow:
Server sends client credentials and application scope to the Authorization Server
Authorization Server returns an access token and ID token
Server sends the access token to the UserInfo endpoint
Federated Identity Service matches the application scope to the defined view and returns requested attributes
Authorization Server returns the requested user information (claims) from the UserInfo endpoint to the Server
This allows the server to access the same profile data as defined for a Web Application. Additionally, the same views in the Federated Identity Service can be leveraged, if desired, for both Servers and Web Applications.
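A minimal sketch of the Server-driven flow, again with placeholder hostnames, credentials and scope, is the standard OAuth 2.0 client credentials grant followed by the same UserInfo call:
# The server authenticates with its own credentials and scope (steps 1-2)
curl -s -u server_client_id:server_client_secret \
  -d "grant_type=client_credentials&scope=hr_portal" \
  https://idm.example.com/oauth2/token
# The returned access token is sent to the UserInfo endpoint for the requested view (steps 3-5)
curl -s -H "Authorization: Bearer ACCESS_TOKEN_FROM_ABOVE" \
  https://idm.example.com/oauth2/userinfo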
The Identity Microservice allows for powerful, yet lightweight access to all the needed profile data in an efficient manner. This microservice can provide what is needed at the core of all applications, and for the Server-driven flow can even be used for transaction-specific data unrelated to users. As the world moves toward the model of easily consumable services, the Identity Microservice must be one of the main considerations when designing an application.
After registering the SaaS CA API Developer Portal, the applications created on the developer portal cannot be synced with the CA API Gateway OTK database.
Solution:
1- Check the JDBC connection configured for OTK. When you installed OTK onto the CA API Gateway, you were asked to configure a JDBC connection for the OTK persistence layer if you chose to use a SQL database. By default this JDBC connection should be named “OAuth”, but in many cases it is set to something else. A different name does not cause problems for OTK itself, but when you register with the SaaS Developer Portal, the auto-registration creates services and encapsulated assertions that contain JDBC Query assertions, and those assertions are mapped to the JDBC connection by name (“OAuth”). If your OTK JDBC connection has a different name, those JDBC queries will fail.
To fix the issue, you need to update the JDBC query assertions in the following services:
Portal Application Sync Fragment
Change default connection OAuth to the connection you configured for your OTK.
2- If you are using a dual API Gateway configuration and you installed OTK onto both the DMZ and INT gateways, then after you register your DMZ API Gateway with the Developer Portal, the Application will show that it is out of sync. This is because the DMZ Gateway should not have the OTK Database configured. For most of the steps in deploying an application from the SaaS Portal to the API Gateway, the request can be handled by built-in OTK assertions, which route the DB query requests to the INT gateway. The INT Gateway then queries the OTK DB. Unfortunately, a simple error in the portal sync service breaks the flow.
In the Portal Application Sync Fragment, the direct JDBC query returns the API key count in a context variable called ${apiKeyCount.count}, whereas the OTK assertion returns the API key count in ${apiKeyCount}. The policy that follows refers to ${apiKeyCount.count} for the API key count. Therefore, when trying to sync an Application from the DMZ Gateway, the OTK Assertion is used and returns the value in the wrong context variable.
To fix this issue, simply add a context variable after the OTK Assertion to assign the value of ${apiKeyCount} to ${apiKeyCount.count}.
Adding a Context Variable After the OTK Assertion
3- If you are using the Cassandra database for the OTK token store, you need to upgrade your CA API Gateway to v9.4 or later and OTK to v4.3 or later. Otherwise, integration with the SaaS CA API Developer Portal is not supported. Only OTK v4.3 or later has the updated database schema needed to store the API Key information and API Access information required when creating an application from the SaaS CA API Developer Portal.
If your current OTK version is 3.x, then you need to manually uninstall and re-install your OTK to be upgraded to OTK v4.3. If your current OTK version is 4.x, then you can use the upgrade button to upgrade your OTK to v4.3. Unfortunately, due to some defects with the OTK, some manual configuration after the auto upgrade is required. Please check my other blog post entitled “Layer 7 Gateway OTK Upgrade” for details.
After the upgrade, the SaaS CA API Developer Portal will still not work properly due to a defect in the current Portal Application Sync Policy Fragment. The fragment tries a JDBC query first and, if that fails, relies on the OTK Assertion to make the NoSQL query to the Cassandra DB. Unfortunately, the OTK Assertion returns the result in the wrong context variable, which breaks the workflow.
The fix is simple - add a context variable to assign the value of ${apiKeyCount} to variable ${apiKeyCount.count}
Add a Context Variable
4- To avoid having the SaaS CA API Developer Portal push data to the CA API Gateway and break the API Gateway runtime traffic, the communication between the CA API Gateway and SaaS CA API Developer Portal occurs by having the CA API Gateway pull information from the CA API Developer Portal. Therefore, to sync any configuration or modifications from the Portal to the Gateway, it requires the API Gateway to make outbound calls.
In most enterprise environments, outbound calls usually require a secure proxy; otherwise they will be blocked by the firewall. Here are a few things we need to know about the proxy configuration:
a- You need to configure a global proxy for the registration URL to work; you can disable/delete that global proxy after registration. However, you need to enable/add the global proxy again when you run Portal Upgrade Tasks.
CA API Gateway - Policy Manager
b- The outbound proxy will only work with “Automatic” and “Scripted” deployment. The “On-Demand” deployment type is NOT supported for proxy settings, because for “On-Demand” a portal deployer module runs in the background to sync APIs from the SaaS Portal to the API Gateway. That module is not configurable by API Gateway admins, and it makes a websocket call to the SaaS Portal to which proxy settings cannot be added.
Add API Proxy
c- You need to update every routing assertion inside the Portal services to manually add a proxy configuration. Here is the list of services you need to change and add a proxy to:
Move Metrics Data Off Box
Portal Application Sync Fragment
Portal Bulk Sync Application
Portal Check Bundle Version
Portal Delete Entities
Portal Sync Account Plan (two routing assertions need to be edited)
Portal Sync API (two routing assertions need to be edited)
Portal Sync API Plan (two routing assertions need to be edited)
Some of the API Gateway components, like Identity Providers, Encapsulated Assertions, Policy Fragments, cluster-wide properties, stored passwords and private keys, are consumed by many services configured on the gateway. When you try to edit any of these components, it is difficult to tell which APIs the change will affect, which makes modifying these shared components risky. Knowing which APIs depend on the target component helps you come up with a deployment plan and even design the changes to accommodate all affected APIs.
Solution:
The restman service on the API Gateway provides a search for the dependencies of an API. This returns a list of shared components, like Encapsulated Assertions and Identity Providers, that a single API depends on. However, it does not offer a search in the other direction: finding out which APIs depend on a given shared component. The attached XML file is a dependency check service that lets you identify which APIs depend on certain shared components. The service utilizes the restman service, calling it in an efficient way to get the result you need. To avoid consuming too many gateway resources through restman calls, you can choose to cache the result and avoid unnecessary duplicate searches. Optionally, you can protect the service, apply rate and quota limits, and restrict the times it is available.
How to Deploy Gateway Dependency Check Service:
1. Publish Restman service on gateway
Publish Internal Service
Choose Gateway REST Management Service and publish
4. If your gateway restman service is available via different Hostname and port configuration, update the restmanHost and restmanPort context variables in the service. These variables are defined in “Init” folder.
5. If you have certain folders that you want to bypass in the search by default, you can add a cluster-wide property with Folder IDs separated by spaces.
This completes your deployment of Dependency Check service
1. Provide API Gateway Admin credentials to access the service.
2. Optional Query Parameters:
Parameter Name | Parameter Value | Description
targetName | Name of target components | This is a required parameter. Put the names of the components whose dependencies you wish to check in a comma-separated list. This is not case sensitive.
refresh | true/false (Default: false) | By default, the search result is cached for 5 minutes. You can force a refresh by setting this to true.
overwriteQuota | true/false (Default: false) | By default, the service allows 10 calls per day to protect the gateway itself. You can disable the quota check by setting this to true.
overwriteAvailability | true/false (Default: false) | By default, the service can only be called during off hours (9 pm – 6 am local time) to avoid affecting production traffic. This can be disabled by setting this to true.
addToBlacklist | FolderIDs | By default, the service picks up the bypass folders from the cluster-wide property, but you can also add folders to the default blacklist dynamically with this parameter.
RemoveFromBlacklist | FolderIDs | By default, the service picks up the bypass folders from the cluster-wide property, but you can also remove folders from the default blacklist dynamically with this parameter.
overwriteBlacklist | FolderIDs | By default, the service picks up the bypass folders from the cluster-wide property, but you can also replace that list dynamically with this parameter.
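As a usage illustration, a call to the dependency check service might look like the following. The resolution URI, hostname, credentials and target names are placeholders and depend on how you published the service on your gateway.
curl -k -u admin:password \
  "https://gateway.example.com:8443/dependencycheck?targetName=OAuth,MyEncapsulatedAssertion&refresh=true&overwriteAvailability=true"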
Within the CoreBlox Token Adapter (provided by Ping Identity):
“Secure Cookies” must be ENABLED
“HTTPonly” must be ENABLED
If you intend to pass SameSite cookies to SiteMinder, you must ensure that you have patched your SiteMinder Web Agents so that they will respect the new ACO parameter that applies a SameSite fix. A description of the solution can be found here.
The need for social distancing due to the COVID-19 Coronavirus is critical in stopping its spread. Many businesses have heeded that advice by allowing employees to work from home. While some companies may already have policies allowing work from home, many only support limited access for key personnel. There are several areas where your Identity and Access Management services may be impacted by this location change to your workforce.
The following seven items should be considered for your IAM infrastructure:
1. External Access Increases the Need for Federation
Many organizations rely on “being on the network” (internal) for access to many resources. While things like VPN can provide remote access to on-premise applications, cloud-based services may require extra hand-holding. Users that cannot access the VPN, leverage shared workstations, or have other limited access can use cloud-based applications to continue work functions. These workers may also not exist in your core directory and need to be authenticated against other sources. Leveraging federation protocols, like SAML, can ease moving these workers to externally hosted applications by eliminating the need to manage and remember additional IDs and passwords.
2. Scaling Up Your MFA
Multi-Factor Authentication (or MFA) becomes more critical for external off-network users while working from home. While an ID and password may be sufficient internally, moving your workforce remote requires more sophisticated and secure authentication mechanisms. Push notifications, emails, SMS, voice or other channels can be leveraged for the additional credential. This infrastructure needs to scale quickly as more of the workforce needs to access applications off the network. Having thought-through MFA policies and infrastructure ensures that you are ready for this transition. MFA should be leveraged both for network access through VPN and for access to cloud-based or externally-facing applications.
3. Authentication Policies Become Key
Being able to define authentication policies based upon risk analysis ensures that the user is challenged for appropriate credentials. A simple ID and password may be all that is required when on-network or for low-risk applications, but when the user is accessing a server or an application with sensitive information, step-up authentication is required. Using risk-based analysis, other data points can also be used to determine if an additional credential is required. If the user is accessing the application from a new network or at an unusual time, that user should be prompted for an additional means of validating their identity. This works well with the MFA solution identified above.
4. Flexible Authorization Services Reduce Time For Needed Access Changes
Temporary changes to application authorization policies may be required. Integrated authorization solutions or services allow for centralized changes to access policies, which limits the need to make application changes. Applications that need to allow different user constituencies or allow access from new locations may require changes to policies to allow access. For example, contingent workers may normally not require access to the HR portal and access is restricted. However, that portal becomes the mechanism to distribute information about corporate status. These workers need to be granted access which requires changing the portal authorization policies.
5. Role Management Simplifies Changes to User Privileges
Roles can be used to drive authorization decisions and support the changes identified above. These can be VPN or application access roles and can also drive decisions on provisioning user objects and role membership. Having a solution for role automation and the processes defined and documented for what changes are required allows for flexible automation of needed changes. This ensures that you can rapidly adjust to business demands. Since many roles are defined by directory groups, membership in those groups can be quickly assigned when needed and then revoked once the emergency has ended. This also ties into compliance systems and processes which supports future attestation for resource access.
6. Self-Service Saves the Day
On-boarding additional users requires both the processes and the tools that allow users to register, reset passwords, update account information, unlock accounts and perform other self-service functions. Self-service not only minimizes help desk load, but also ensures that users can activate the appropriate credentials and register for access. Undertaking a large effort to distribute MFA tokens during an emergency is not a tenable solution. Self-service can then be integrated into your provisioning systems to handle assigning roles to registered users, distributing metadata to applications, and potentially hosting the forms used for self-service.
7. Mobile Application Support Might Be Required
A remote workforce requires access to applications that support the needed job functions to be productive. These applications may require additional protocol support for user authentication, authorization and profile information. Protocols like oAuth, OIDC, DSML and others allow for mobile applications to access these services. Modern IAM solutions provide support for these and other protocols and can be leveraged as a gateway for access to identity services. This also allows for both service and user authentication, authorization, and consent.
Our lives have been changed by the challenges we face due to the COVID-19 coronavirus. Long-standing corporate procedures must adapt to these new rules through enhanced technical solutions and changes to processes and policy. Now that new groups of employees are working remotely, the approaches that historically enabled these formerly on-premise employees must change. Companies no longer have an easy way to provide centralized services for provisioning personal computers, standardized images, or account registration. Small to midsize companies are looking at ways to allow employees to self-provision without requiring IT involvement to deploy a standard image, set up the machine, and then either ship it to the end user or require personal pick-up of the device.
Self-provisioning has multiple meanings. From self-service identity registration to provisioning of development infrastructure, the key is that you are putting power and control in the hands of the users. Historically, terms like provisioning have been tied to management of user identities, but this now needs to be extended to all the tools used by employees based upon job function. The processes for enabling users to perform these tasks need to be put in place to not only automate self-service provisioning, but also to securely expose these services to the public internet.
Companies like Microsoft and VMware have solutions that allow companies to remotely deliver standardized installation and configuration of remote devices. Depending on the tool, different capabilities are available for off-network provisioning of computers and laptops, but the key is to allow employees to acquire their own devices and to automate the process of configuring those devices with the corporate standardized software and configuration. Cloud-based management allows organizations to configure devices remotely with things like remote updates, installation of corporate applications, and configuration of security policies for employee-procured devices.
However, Identity and Access management services need to be in place to support these self-provisioning processes. This includes handling the initial identification of the user, creation and provisioning of user accounts, and securing access to the provisioning systems. These systems can be integrated with solutions that secure authentication with technologies like multi-factor authentication (MFA) and single sign-on (SSO).
The following diagram highlights a sample workflow:
The user acquires a device from a local store (e.g. the Apple Store or Best Buy).
The user enrolls through a public-facing site based upon a secure set of factors known only to the user. Enrollment includes things like creation of a password, setting profile data, and management of other security data.
The user is associated with the defined network identity configured for that user (typically in the HR system). That identity is then provisioned into the corporate user repositories. Roles automatically assigned to the user control provisioning targets and infrastructure access. Federated Identity solutions can be leveraged to create a centralized global profile for the users based upon multiple backend repositories.
Once the identity is created the device is enrolled for MFA, prompting the user to create MFA credentials if they do not already exist. MFA is critical for ensuring that access to the provisioning solution is secure.
After the user is fully enrolled and their accounts have been created, the device is configured by the provisioning solution. This includes any required software and updates defined by the corporate standards.
A self-provisioning solution minimizes the need for IT involvement and speeds onboarding of new users. The solution also simplifies distribution of configuration, corporate applications, and updates without requiring users to come to a central location. Although such a solution has some inherent complexity to implement, once deployed it allows users working remotely to be easily managed without the overhead of legacy processes. Keep in mind that an internet connection is required for this solution.
The last couple of blog articles have focused on some of the remote workforce challenges and recommendations for responding to COVID-19. CoreBlox has partnered with Ping Identity to deliver a cloud-based single sign-on (SSO) and multi-factor authentication (MFA) solution to allow your remote workers to continue being productive. Details on this offering can be found at https://www.coreblox.com/offers.
Here are a few things to think about to best take advantage of the offer:
1. Minimize Complexity
Your workforce may not have experience working from home, and adding more complexity or passwords for accessing needed resources only compounds the challenges. Plan a strategy that continues to move toward your security objectives, while ensuring incremental benefits. Technologies like SSO and MFA can go a long way in simplifying access and better securing the off-network experience. Approach the effort by securing a combination of high value applications and quick hits. This shows progress and also helps to balance helpdesk load.
2. Add Security Not a Headache
Processes like intelligent risk-based authentication ensure that users are authenticated at the appropriate level for the resource being accessed. Prompts for step-up authentication should be based upon risk evaluation. Your long term goal should be to deliver risk-based MFA for as many systems as possible, but don’t wait for a “big bang.” Deploy MFA to critical systems like VPN connections first. Pair the delivery with self-service tools to simplify the enrollment process. Also, don’t risk authentication burnout with complex authentication processes that do not take risk into account. Companies like Preempt offer a Ping-integrated solution for risk analysis and authentication policy definition.
3. Single Sign-On Makes Workers More Productive
SSO technologies have grown significantly since their initial introduction. What started as simple on-premise web-based SSO now extends integration to cloud providers and may include securing applications that include both on-premise and cloud-based components. It takes time to enter a password every time an application is accessed. Marrying SSO with technologies that eliminate passwords and evaluate risk delivers increased security while reducing the number of passwords needed and centralizes the management of credentials.
4. Provide a Centralized Jumping Off Point for Corporate Resources
Working from home can not only be isolating, but can also complicate locating the resources needed to do your job. Look to provide a central portal that links job function to needed applications and tools. This can include things like HR or CRM access, links to the internal corporate wiki, or even access to collaboration tools. Centralizing access ensures employees have a single location for all needed information. SSO into linked applications improves productivity and reduces support calls.
5. Over Communicate
Security projects can be perceived as providing limited value to those outside of the security field. You are making people learn new processes, authenticate in different ways, and access resources with which the users may not be familiar. It is better to communicate more often than to only send notifications for something that has already been implemented. Set the stage for what is coming, tout the benefits of improving your SSO and MFA infrastructure, and celebrate small victories. Security projects may be behind the scenes, but implementation of these initiatives can have very visible implications. Try to get as many users on-board as possible as early as possible. People are willing to change if they understand the benefits. Making the process easy to use is also never a bad thing.
Keeping these factors in mind will help to ensure that you are making working from home as secure and productive as possible. Remote access delivered with forethought and the right tools minimizes risk, improves access, and reduces IT overhead.
This blog post describes how to integrate SiteMinder and ForgeRock. Bi-directional single-sign-on between SiteMinder and ForgeRock is achieved, so that both environments can co-exist during migration. Medium to large size businesses will find the ability for these two solutions to co-exist very useful. It reduces burden on application and operation teams, therefore providing flexibility during the application migration timeline. It also brings the least impact to end users.
Solution Description
A request with a valid SiteMinder session to the ForgeRock environment will result in an automatic creation of a ForgeRock session. Conversely, if the request comes to the ForgeRock environment first, a post authentication plugin will create a SiteMinder session using a custom Authentication Scheme provided by ForgeRock. This Authentication Scheme uses the standard interfaces provided by SiteMinder. Hence, the ForgeRock-provided plugins ensure seamless single sign-on between the two environments. As a matter of fact, the end user doesn't really know which environment they are in.
Solution Components
ForgeRock Access Management 6.5.2
ForgeRock Identity Gateway 6.5.1
CA Single Sign-On / SiteMinder Policy Server 12.80
CA Single Sign-On SDK 12.80
Solution Overview
In the SiteMinder environment:
• ForgeRock Authentication Scheme: used by SiteMinder to validate ForgeRock OpenAM token
• Sync App: a SiteMinder protected resource used to receive ForgeRock SSO token
In the ForgeRock environment:
• SiteMinder Authentication Module: used by OpenAM to verify SiteMinder session
• Post Authentication Plugin: sends OpenAM SSO token to SiteMinder upon successful authentication
User requests to access FR protected application first
IG intercepts the request and redirects the browser to AM for authentication
AM authenticates the user, creates a FR SSO token
Post authentication, AM sends FR SSO token to SiteMinder
SiteMinder creates a SMSESSION cookie if FR SSO token is valid
SiteMinder sends back the SMSESSION cookie to AM
AM sends back both of the FR and SM cookies to the user
User requests to access SM protected application first
SM creates a SM SSO token, and sends back to the user
User requests to access FR protected application
SM Auth Module configured in the AM authentication chain detects the existence of a SMSESSION cookie
SM Auth Module validates SMSESSION cookie with SiteMinder using standard SM API
If the SMSESSION cookie is valid, authentication completes and AM creates the FR SSO token
AM sends back both of the FR and SM cookies to the user
Conclusion
This blog post describes the technical details of co-existence between SiteMinder and ForgeRock. This type of solution can help make your IAM modernization journey seamless. It supports the latest ForgeRock AM version 6.5. Let CoreBlox help catapult your business to the next generation of IAM platforms.
The release of the Raspberry Pi 4 with a quad core processor and 8GB of memory opens up new possibilities for enterprise level applications on a small form factor. At $75, multiple boards can be purchased and incorporated into an appliance form factor. By clustering the boards you can achieve enhanced performance and improved availability.
One use for such an appliance is what I call a Federated Data Caching Appliance. This drop-in appliance allows you to link information from various data sources together, build views into the data based upon a schema you define, cache the information for quick retrieval and surface the views through a variety of different protocols. I have based this on technology from Radiant Logic, but other technologies can be substituted.
Imagine taking data from your HR, CRM and inventory systems and joining the information into a common view. What insights could you gain from that information? How could your applications leverage that data? How about building a view that linked a salesperson, his or her manager, his or her vacation schedule, what the salesperson has sold and the inventory available of those items. With that information, alerts could easily be generated for a manager when a client is running low on a product and inventory is available and the salesperson covering that client is out on vacation. It's a complicated scenario, but any data that can be pulled together and correlated can then be made available for consumption by applications. By separating the view from the physical representation, you have complete control over how the data is represented and made available through multiple protocols.
Radiant Logic's solutions provide the following capabilities:
With its sophisticated methods allowing you to quickly link to underlying data sources, define a schema for the data, join it, and deliver the data through multiple protocols, the technology provides a good engine for the Federated Data Caching Appliance. Additionally, the solution supports clustering for high availability and scalability. The solution requires three servers, but more can be used.
This appliance could be designed to house three (or more) Raspberry Pi's into a single highly-available device at a low cost point. By adding a second appliance you gain external high-availability as well. With the web-based administration and dashboards available in FID, a UI for managing the appliance could be quickly created:
The appliance could be designed as follows:
The appliance has three Raspberry Pi's for the FID cluster which are powered over ethernet. The box also has redundant power. Two of the units would be deployed for high-availability.
Granted, there are some challenges to this approach. A build of FID that runs on ARM Java would have to be made available. Additionally, the default microSD-based storage would have to be replaced with something more scalable. However, this is an interesting experiment.
A wise colleague once told me, "If there is something that can take any data, build a schema, and lets you mount it somehow, it's going to have many use cases." That sent me down the path of looking at ways to easily surface important details and to make querying that data responsive. By dynamically generating the needed information instead of building static representations, you can quickly integrate this data into other systems and can modify it on the fly without needing to change the underlying systems. I described this type of solution in my previous blog article, "Building a Federated Data Caching Appliance."
With the release of the 8GB version of the Raspberry Pi 4, there seemed to be an opportunity to build a low cost solution based around those principles.
Table of Contents
Part 1: Overview of the Components and Their Assembly
There are of course many options for building such a device. Keep in mind that this is 100% unsupported by Radiant Logic as it is not a supported platform.
I am breaking this out into four articles:
Overview of the Components and Their Assembly
Base Install and Configuration
Radiant Logic Install Instructions
Implementing the Use Case
There are many ways to do this. I have chosen these steps to make things easier for me. Please keep that in mind as you review these instructions.
Before getting too far into this, I wanted to list out the various components I used in this proof-of-concept. I decided to simplify things by using Power Over Ethernet (PoE) instead of plugging in the Raspberry Pi’s. This made it easier for me to manage the jumble of cords I needed. If you do not have a switch with PoE capabilities, be sure to use a power supply instead.
A great way to get started with all of the needed components is by leveraging CanaKit’s Raspberry Pi 4 Starter Kits. I highly recommend them. I have no affiliation with the company. This gets you going with everything you need.
Screwdriver (I didn’t use it, but it can come in handy)
I used 11, 12 and 13 in order to still have access to the GPIO pins and also to raise the PoE Hat enough to make space for the heat sinks.
Assembly
The following steps outline how I assembled the Raspberry Pi’s. There are probably a million ways to do this. This works for me. Do it your way if you want.
b. Peel off the protective film from the heat sink
c. Press the heat sink onto the Raspberry Pi using the CanaKit diagram above
3. Attach M2.5 x 15mm Standoffs, 2X2 Pins, and GPIO Stacking Headers
a. Attach the standoffs using the supplied screws to the board at the four hole points – I personally do this finger tight
b. Push the 2x2 Pins onto the PoE Header (see diagram in step 2) – Be careful not to bend the pins
c. Push the GPIO Stacking Header onto GPIO Header (see diagram in step 2) – Be careful not to bend the pins
4. Attach PoE hat
a. Remove the PoE hat from the box and static bag – you can discard the screws and standoffs that came with the PoE Hat
b. Carefully press the hat onto the GPIO and PoE Header pins
c. Screw the 4 nuts onto the standoffs
5. Put the Raspberry Pi in its case
a. Attach the feet to the bottom of the case
b. Flip over the case and follow the instructions printed in the case
c. First insert the front of the board and then snap it into place
d. Snap the lid into place
6. Set up a monitor and keyboard if desired (we will use ssh in this set of instructions)
Attach the Raspberry Pi to a USB keyboard and HDMI display if desired. An HDMI television will work as well. The HDMI port on a computer will not work since it is an output, not an input. If you want to capture this onto the computer, use a capture card. Keep in mind the USB-C port on the Raspberry Pi is used to power the unit if you are not using PoE. I have a different keyboard in the image below, but it wasn't used in the configuration.
Refer to the diagram in section 2 for the connection ports.
This post is a continuation of Part 1 - Overview of the Components and Their Assembly. In this article we will install the base operating system, Ubuntu, and get the Raspberry Pi’s (RPI) ready to install Radiant Logic. We will start by flashing the Micro SD card, assigning the RPI a static IP address, and then update Ubuntu.
b. Put the Micro SD card in an adapter and mount it on your computer
c. Open the Raspberry Pi Imager
d. Click the [CHOOSE OS] button
e. Select: Ubuntu
f. Select: Ubuntu 20.04.01 LTS (Raspberry Pi 3/4) - 64-bit server OS for arm64 architectures
g. Click the [CHOOSE SD CARD] button
h. Select the inserted Micro SD card (NOTE: Be careful to select the correct drive or you can permanently lose data)
i. Click the [WRITE] button
j. Click the [YES] button (enter admin credentials if needed)
k. Click the [CONTINUE] button once the imager finishes
2. Remove the micro SD card and put the card into Raspberry Pi
3. Plug in the ethernet connection (or power if not using PoE)
4. Load a terminal window or ssh client and ssh to the Raspberry Pi (the use of ssh and a ssh client is beyond the scope of this article)
a. To locate the IP address of the Raspberry Pi, consult your router’s instructions or use the following method:
On Ubuntu and Mac OS use the command:
arp -na | grep -i "b8:27:eb"
If this doesn't work and you are using the latest Raspberry Pi 4, instead run:
arp -na | grep -i "dc:a6:32"
On Windows:
arp -a | findstr b8-27-eb
If this doesn't work and you are using the latest Raspberry Pi 4, instead run:
arp -a | findstr dc-a6-32
b. This returns output similar to the following:
(xx.xx.xx.x) at b8:27:eb:yy:yy:yy [ether] on xxxxxx
5. Use the following credentials:
ID: ubuntu
Password: ubuntu
6. The login screen loads
7. Enter the ubuntu user's password (note that it will not be displayed)
8. Enter and confirm a secure password (note that it will not be displayed)
9. The connection will close
10. ssh back to the Raspberry Pi with the ubuntu user's new password
11. Next update the apt repository: sudo apt update
12. The packages are updated
13. Upgrade the software packages: sudo apt upgrade
14. Select 'Y' to upgrade the Raspberry Pi
15. The upgrade process completes
16. Set the hostname for the Raspberry Pi (replace <HOSTNAME> with your desired name): sudo hostnamectl set-hostname <HOSTNAME>
17. Validate that the /etc/hosts file does not contain any other names (remove them leaving the localhost entry): more /etc/hosts
18. Assign a static IP to the Raspberry Pi based upon your router's configuration. The router-specific configuration is beyond the scope of this article; a host-side alternative is sketched after this list.
19. Reboot the Raspberry Pi: sudo reboot
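If you would rather pin the address on the Pi itself than create a reservation on the router, a minimal netplan sketch is shown below. The interface name, addresses and file name are assumptions for a typical Ubuntu 20.04 server image on the Raspberry Pi; adjust them for your network, then reboot as in step 19.
# Write a netplan override (interface, addresses and filename are examples)
sudo tee /etc/netplan/99-static-ip.yaml > /dev/null <<'EOF'
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: false
      addresses: [192.168.1.51/24]
      gateway4: 192.168.1.1
      nameservers:
        addresses: [192.168.1.1]
EOF
sudo netplan apply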
The base setup is complete. Repeat this process on the other two Raspberry Pi’s.
This post is a continuation of Part 2 - Base Install and Configuration. In this article we will install the Radiant Logic FID and complete any initial configuration steps.
NOTE: These are non-standard and unsupported configurations. Refer to screenshots from previous articles for steps in this section without screenshots.
p. sftp the slim package and template file to the /home/ubuntu/Installers/radiantlogic directory on the Raspberry Pi (the use of sftp is beyond the scope of this article)
q. Edit the /etc/environment file: sudo nano /etc/environment