Monday, 19 January 2015

WebCenter Content Replication

I currently have a WebCenter Portal installation that uses the Content Presenter task flow extensively to display links to documents stored in WebCenter Content. This content was previously being manually uploaded/migrated between environments, which resulted in the same folders/documents existing in each environment but with different internal IDs.

Since the Content Presenter references WebCenter Content using these internal IDs, any Content Presenters needed to be manually reconfigured to use the different IDs in each environment and WebCenter Portal could not be automatically deployed/migrated.

After some reading, I found that automated replication of WebCenter Content via the Archiver tool was the way to go. This was way more complicated than I expected (as UCM tends to be) so I compiled the below overview of what is involved in setting this up.

Server Configuration

Before content replication can be configured, the source and target WebCenter Content servers both need to be configured as per below:

Enable Trash

So that deleted folders/content can be replicated, you need to enable Trash on both the source and target servers. If Trash is not enabled, then any deleted folders/content are simply not included in an export, and therefore the target server is never notified of their removal.

To configure Trash:
  1. Login to the content server as an admin
  2. Expand Administration and select Folder Configuration
  3. Click the System Folder Configuration button
  4. Click the disabled (grey) circle icon beside Trash
  5. Click Allow <user> to Mark Modified Folders

Configure the FolderStructureArchive Component

By default, the Archiver tool will only export content, and not the folders/collections storing that content. The FolderStructureArchive component handles folders, but it is not enabled by default.

This component needs to be enabled on the source (pushing) WebCenter Content server, but I like to enable the same components across all of my WebCenter installs, so I completed the below on both the source and target servers:
  1. Login to the content server as an admin
  2. Expand Administration and select Admin Server -> Component Manager
  3. Select the advanced component manager link at the top of the page
  4. Under the Disabled Components list select FolderStructureArchive
  5. Click the Enable button
  6. Expand Administration and select Admin Server -> General Configuration
  7. In the Additional Configuration Variables text box, add the following line:
    • AllowArchiveNoneFolderItem=false
  8. Click Save
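For reference, step 7 corresponds to an entry in the server's config.cfg. A minimal sketch of that fragment (the comment reflects my understanding of the setting):

```
# FolderStructureArchive configuration
# false = only archive content that resides within the folder structure
AllowArchiveNoneFolderItem=false
```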

Define the Outgoing Provider(s)

To allow your source server to be able to push to a target server, the target server must be defined as an outgoing provider:
  1. Login to the Source content server as an admin
  2. Expand Administration and select Providers
  3. Under the Create a New Provider heading, click the Add link beside the outgoing provider type
  4. Provide the relevant values for the following properties (using Test as an example):
    •  Provider Name: <EnvironmentName> (e.g. Test)
    • Provider Description: The WebCenter Content Test Environment
    • Server Host Name: server.host.name
    • Server Port: 4444
    • Instance Name: serverinstancename
    • Relative Web Root: cs
  5. Leave everything else as default, and click Add
NOTE: I also defined the source server as an Outgoing Provider of the target server using the same instructions as above (using the source server details where appropriate). Please see below for an explanation from Oracle as to why this is a good idea:

"For performance monitoring of a push transfer, you also should set up an outgoing provider from the target (proxied) Content Server instance back to the source (local) Content Server instance. This 'talkback' provider can then notify the source Content Server instance when each transfer is complete. A push transfer will work without the talkback provider, but the source Content Server instance would not be aware of transfer completion or problems."

Restart UCM Managed Server

For the above changes to take effect, you need to restart the UCM managed server on both the source and target.

Archive Creation

Once your content servers are configured, you can then define the export and import archives for replication.

Create the Import Archive

An import archive needs to be defined on the target server to accept the export from the source:
  1. Login to the Target content server as an admin
  2. Expand Administration and select Admin Applets
  3. Select Archiver to open the Java applet
  4. Select Options -> Open Archive Collection...
  5. Double click the default collection (should be the same as the instance name)
  6. Click Edit -> Add...
  7. Enter an Archive Name such as "<Env>FoldersAndContentImport" then click OK
Make the target archive 'targetable' so that the source server can send archives to it:
  1. Select the new Archive
  2. Click the Transfer To tab
  3. Under Transfer Options click the Edit button
  4. Select the Is Targetable checkbox
  5. Click OK
  6. Close Archiver

Create the Export Archive

Instead of using Archiver on the source content server to define the archive, we create it using the FolderStructureArchive component we enabled earlier:
  1. Login to the Source content server as an admin
  2. Expand Administration and select Folder Archiver Configuration
  3. Select the default Collection Name (should be the same as the instance name)
  4. Enter a descriptive name e.g. "<Env>FoldersAndContentExport"
  5. Expand and select the folders you wish to export:
    • Contribution Folders - This is where my WebCenter Portal content is stored
    • Trash - Required to allow folder/content deletes
  6. Click Add
Now that we have our source archive created, we need to configure its target archive:
  1. Expand Administration and select Admin Applets
  2. Select Archiver to open the Java applet
  3. Select Options -> Open Archive Collection...
  4. If the Target collection you created above is not visible:
    • Click Browse Proxied...
    • Select the target server from the Proxied Server list
    • Select the default collection (should be the same as the instance name)
    • Click OK
  5. Double click the default Source collection from the Open Archive Collection... menu
  6. Select the new export archive
  7. Click the Transfer To tab
  8. Under Transfer Destination click the Edit button
  9. Select the default Collection on the target server (should be the same as the target's instance name)
  10. Select the targetable archive which you created in the previous steps ("<Env>FoldersAndContentImport")
  11. Click OK

Manual Replication

A manual run of the export, transfer and import process must be completed before automation is configured. This ensures that any import mappings are in place and that everything is hunky dory.

Remove Content From Target

If the same folders and content were already created on the target server manually, then any imports will fail due to duplicate content exceptions. If this is the case, then you first need to remove any duplicate folders/content:
  1. Log on to the Target content server as an admin
  2. Expand Browse Content and select Contribution Folders
  3. Select all child folders and content and select Item Actions -> Delete
  4. Click OK to confirm
You should also remove everything from the Trash:
  1. Log on to the Target content server as an admin
  2. Expand Browse Content and select Trash
  3.  Select all child folders and content and select Item Actions -> Delete
  4. Click OK to confirm

Run the Export/Transfer

Now that you have a fresh target server you can run the export. The export/transfer is completed on the Source server:
  1. Select the Export archive in Archiver
  2. Select Export... from the Actions menu
  3. Accept the defaults and click OK
  4. Wait for the message at the bottom of Archiver to read “Exporting <archive_name> in <collection_name>: Finished”
  5. Select Transfer... from the Actions menu
  6. Wait for the message at the bottom of Archiver to read “Transferring <archive_name> in <collection_name>: Finished”
  7. Close Archiver

Configure Import Maps

In some cases, the internal ID of the root 'Contribution Folders' and 'Trash' folders on the target server is different to the source server. This means that any folders imported would not be placed anywhere viewable since the ID of the parent folder does not exist in the target.

Only after you have an export transferred to the target server (see previous step) can you configure the import mapping:
  1. Open Archiver on the target server
  2. Select the new import Archive you created above
  3. Click the Import Maps tab
  4. Select the Table tab
  5. Expand the Collection Name folder and you should see a dated subfolder that lines up with the export you just performed
  6. Expand this folder and select the child Collections entry
  7. Beside Value Maps, click the Edit button
  8. Enter the following details:
    •  Input Value: <id_of_the_source_contribution_folder>
    • Field: dParentCollectionID
    • Output Value: <id_of_the_target_contribution_folder>
  9. Click Add
  10. Enter the following details:
    • Input Value: <id_of_the_source_Trash_folder>
    • Field: dParentCollectionID
    • Output Value: <id_of_the_target_Trash_folder>
  11. Click Add
  12. Click OK

Run the Import

The import is completed on the Target server:
  1. Select the Import archive in Archiver
  2. Select Import... from the Actions menu
  3. Accept the defaults and click OK
  4. Wait for the message at the bottom of Archiver to read “Importing <archive_name> in <collection_name>: Finished”
  5. Close Archiver
You should now verify that everything exists on the target server.

Automatic Replication

NOTE: Automation can only be configured after the initial manual export, transfer and import is completed (see above).

In our environment we want exports from the source server to be manual, but transfers to and imports from the target server to be automatic.

Configure Automatic Transfer

Automatic transfer is configured on the Source server:
  1. Open Archiver
  2. Select the Export archive you created earlier
  3. Select the Transfer To tab
  4. Click the Edit button under the Transfer Options heading
  5. Select the Is Transfer Automated radio button
  6. Click OK
  7. Close Archiver

Configure Automatic Import

Automatic import is configured on the Target server:
  1. Open Archiver
  2. Select the Import archive you created earlier
  3. Select the Replication tab
  4. Click the Register Self button
  5. Close Archiver

Run Replication

  1. Login to the Source content server as an admin
  2. Expand Administration and select Admin Applets
  3. Select Archiver to open the Java applet
  4. Ensure the export archive ("<Env>FoldersAndContentExport") is selected
  5. Select Actions -> Export...
  6. Make sure Export Tables is the only option checked and click OK
  7. Wait for the status at the bottom of the Archiver window to say "Exporting <archive_name> in <collection_name>: Finished"
  8. Wait for the status at the bottom of the Archiver window to say "Transferring <archive_name> in <collection_name>: Finished" - This will automatically occur since we selected "Is Transfer Automated" earlier
  9. Close Archiver
As per our above configuration, the export from the source server should automatically transfer to the target, and the target server should automatically import any transferred exports. This process can take some time, but once complete, you can verify by:
  1. Login to the Target content server as an admin
  2. Select Browse Content -> Contribution Folders
  3. Check that the new folder and child content exists

Troubleshooting


Checking the Archive Logs

If anything does not appear to work as expected, you can view the archiver logs on either the source or target server:
  1. Login to the content server as an admin
  2. Expand Administration -> Log Files
  3. Select Archiver Logs
  4. Select the relevant date

Unable to Run Java Applets

Initially, I was unable to run any of the WebCenter Content Java applets, and kept receiving an "Application Blocked by Java Security" exception. To rectify this:
  1. Open Java Control Panel (via Control Panel -> Java)
  2. Select Security tab
  3. Click Edit Site List...
  4. Click Add
  5. Enter the exception URL as "http://<host_name>"
  6. Click OK twice
  7. Restart browser and clear cache
NOTE: Even after clearing caches and restarting browsers, this didn't take effect for me and I needed to do a full machine restart.

Folders not Appearing

A few times in my trial and error to determine the above process I would encounter the following messages in the Archiver logs:

Skip importing Row: [<content_id>(10EA6634-6FEE-1A1F-E22B-BA6427095FAB,<parent_content_id>,public,0,,,1,1,0,0,1,{ts '2014-09-23 16:52:17.992'},,,,{ts '2014-09-23 16:52:17.992'},,,,,,,,weblogic,weblogic,weblogic,/Contribution Folders/<folder_name>/,/9CF69ED3-8A7C-C75B-E3CB-D101CDAEF577/10EA6634-6FEE-1A1F-E22B-BA6427095FAB,/<parent_content_id>/<content_id>] in Table Collections.)

This generally meant one of two things:
  • That the folders actually had been imported, but into a different location, OR
  • The import was attempting to import folders with the same name to the same location
The steps to fix were almost always:
  1. Delete the suspected duplicate folders
  2. Rebuild indexes
  3. Rerun the export/import

Thursday, 27 September 2012

Propagating Faults with an OSB Proxy

Summary

I recently have been using Oracle Service Bus (11g) to virtualise a series of web service endpoints. I was using the standard approach of:

  1. Importing resources from a WSDL
  2. Creating a Business Service based on that WSDL
  3. Creating a Proxy Service based on the Business Service
This all works well, except when it comes to the handling of faults. 

When I used the Business Service to test my web service through the OSB console, I would get the correct fault as a response:

<env:Fault xmlns:ns0="http://custom.com/Test/">
    <faultcode>ns0:DuplicatePrimaryKeyException</faultcode>
    <faultstring>
    <![CDATA[
        CAUSE. . . . . The Primary Key already exists
        RESOLUTION . . Please enter a valid Primary Key
        ]]>
    </faultstring>
    <faultactor/>
    <detail>
        <exception>&lt;exception/></exception>
    </detail>
</env:Fault>

However, making the same call via the Proxy Service, I would get the following fault:

<soapenv:Fault>
    <faultcode>soapenv:Server</faultcode>
    <faultstring>BEA-380001: Internal Server Error</faultstring>
    <detail>
        <con:fault xmlns:con="http://www.bea.com/wli/sb/context">
            <con:errorCode>BEA-380001</con:errorCode>
            <con:reason>Internal Server Error</con:reason>
            <con:location>
                <con:node>RouteTo_JDE_CustomItemMasterManager_BS</con:node>
                <con:path>response-pipeline</con:path>
            </con:location>
        </con:fault>
    </detail>
</soapenv:Fault>

I googled around for more information on OSB fault handling and found an excellent article on the topic. The only problem was that it didn't exactly address what I was after, which was to simply propagate any faults from a Business Service to a Proxy Service.

Solution

I was able to use the information from the link above to solve my problem. The instructions below use the OSB console to modify the Proxy Service object, but you should be able to adapt them if using the Eclipse IDE:
  • Start a new change session by clicking Create in the Change Center
  • Locate your Proxy Service and select View Message Flow to start editing
  • Click on the RouteTo_<BusinessServiceName> and select Add Route Error Handler
  • Click on the new Error Handler and select Add Stage
  • Click on the new Stage and select Edit Name and Comments 
  • Enter a useful name (e.g. "Reply with Failure") and click Save
  • Click on the same Stage and select Edit Stage
  • Click the Add an Action link and select Flow Control -> Reply
  • Select the With Failure radio button and click Save All
  • Click Activate in the Change Center to save your changes.
The Proxy Service will now propagate the fault from the Business Service, instead of masking it and throwing a generic fault instead.

Sunday, 2 September 2012

Creating a JD Edwards Business Service (BSSV)

I recently was given a requirement to develop a web service that would simply insert a record into one or more JD Edwards tables. This service required no validation, and would simply trust that the input to the web service would be complete and valid.

My knee-jerk reaction was to write a simple BPEL process with a database adapter to the relevant table(s). However, I discovered that JD Edwards actually provides a series of web services out-of-the-box which they refer to as "Business Services" or "BSSVs". A list and description of the standard business services are outlined in the Business Services Reference Guide.

The problem was, there was no BSSV out-of-the-box to insert into the table(s) I required. However, during my investigation of JDE BSSVs, I discovered that it was relatively straightforward to write a custom Business Service, and that the development was even done using Java (JDeveloper) and not the JDE tools.

With my interest piqued, I decided to implement the required web service as a JDE BSSV. The tutorial below uses the F0101 (Address Book Master) as a working example, since the table I actually needed to insert into was a custom one.

Prerequisites

This guide assumes that you have a Windows PC with JDE Tools 9.10 installed, along with JDeveloper 11.1.1.5.0.

Creating the BSSV Object in JDE

Before you can develop the BSSV using JDeveloper, you first need to create the corresponding JDE object within the JDE tools:
  1. Open JD Edwards Solutions Explorer
  2. Type in "OMW" into the Fast Path
  3. Select an OMW Project, and click Add
  4. Under Object Librarian Objects, select the Business Function radio button and click OK
  5. Enter a Name of JP550100. This name basically indicates that this is a custom BSSV (55) and that it relates to the Address Book (01). This fits in with the recommendation outlined in the Business Services Development Methodology Guide. This guide also recommends creating only one business service for each category, so any future Business Service relating to the Address Book will be developed as additional methods under this same BSSV.
  6. Enter values for the remaining details and click OK:

Configuring and Opening JDeveloper

Now that the BSSV JDE object has been created, you need to tell the JDE tools where your JDeveloper is installed and open your BSSV for implementation:
  1. You should already be in Design for your BSSV, but if not, locate the BSSV object from OMW and click Design
  2. Select the Design Tools tab
  3. Click JDeveloper Install Path button.
  4. Enter the path you chose when installing JDeveloper. For me this was C:\Oracle\Middleware
  5. Click OK
  6. Back on the Design Tools tab, click Invoke JDeveloper
JDeveloper should open and contain your new BSSV project, click OK to "Save Files" if you are prompted. Expand your project, and you should see the basic package structure. 


NOTE: JDeveloper will also display any previous BSSV projects that have been opened locally, which for me includes the standard Address Book Manager BSSV (JP010000).

By opening JDeveloper from JDE, a lot of new features are available, including several wizards for creating new objects:


These wizards are heavily utilised for the implementation of BSSVs, as well as several other JDE-specific menu items throughout JDeveloper.


Creating Internal Value Objects

Internal Value Objects are objects that can be used as inputs/outputs for business functions or when making database calls. The internal variables of these objects are "JDE friendly" types like MathNumeric, String and Date, and have names which represent the internal datatype in JDE (e.g. AN8).

Since we will be calling an insert on the F0101, we will need an Internal Value Object to represent the input to this database call:
  1. Select your BSSV JDeveloper project and select File -> New
  2. On the Current Project Technologies tab select EnterpriseOne -> Classes.
  3. Select Database Value Object and click OK
  4. Click Next  to skip the Welcome screen
  5. Enter the details to search for your table and click Find. I entered F0101 in the Object Name field
  6. Select your table and click Next
  7. Select the relevant columns (all in my case) and click Next
  8. Enter InternalAddressBookRecord for the Value Object Name and make sure Internal is selected
  9. Click Finish
A Java class will be created with a private variable for every column in the F0101. The problem is that there is no way to access these variables: we need getters and setters. Luckily, JDeveloper can generate these for us by right clicking the java source and selecting Generate Accessors. Simply select all of the available fields, and click OK.

Creating Published Value Objects

Published Value Objects are used as the input and outputs to the Business Service, which are available to external systems. The internal variables of these objects are "XML/Web Service friendly" types like Integer, BigDecimal, String and Calendar, and have user friendly names like addressBookNumber (instead of AN8).

We want to create an input variable to this BSSV with all of the fields of the Address Book Master, but this time, in XML and user friendly format:
  1. Select your BSSV JDeveloper project and select File -> New
  2. On the Current Project Technologies tab select EnterpriseOne -> Classes.
  3. Select Database Value Object and click OK
  4. Click Next  to skip the Welcome screen
  5. Enter the details to search for your table and click Find. I entered F0101 in the Object Name field
  6. Select your table and click Next
  7. Select the relevant columns (all in my case) and click Next
  8. Enter AddressBookRecord for the Value Object Name and make sure Published is selected
  9. Click Finish
  10. Generate Accessors the same way you did previously
Since I do not require any specific output from my BSSV, I am not creating a Value Object for it. Instead, I will simply use the MessageValueObject as the return type. Normally your return type will extend this class, but there is nothing stopping you from using the base class itself.

Creating the Published Business Service Class

With our value objects sorted, we can now create the actual BSSV implementation:
  1. Select your BSSV JDeveloper project and select File -> New
  2. Click Classes under the EnterpriseOne category
  3. Select Published Business Service Class and click OK
  4. Enter the following details:
    • Name - CustomAddressBookManager
    • Method Name - insert
    • Input Class - oracle.e1.bssv.JP550100.valueobject.F0101
    • Return Class - oracle.e1.bssvfoundation.base.MessageValueObject
  5. Click OK
The Java class for the implementation of your BSSV is now created. You will notice that there are a few errors and "TODO" references that will be addressed by your implementation.

Transforming Between Published and Internal Value Objects

Looking at the method that was generated for our Business Service, you can see that the input is of the Published Value Object type F0101. When we call the insert into the Address Book Master table, we will need an object of the Internal Value Object type InternalF0101, meaning we will need to write code which converts from the Published object to the Internal version.

I have chosen to implement this as a new constructor for the Internal class which accepts the Published type as an input parameter:
public InternalF0101 (F0101 vo) {
    this.F0101_AN8 = new MathNumeric(vo.getAddressNumber());
    this.F0101_ALKY = vo.getAlternateAddressKey();
    this.F0101_EFTB = vo.getDateBeginningEffective().getTime();
    ...
}
The code above needs to set every internal variable of InternalF0101 to the corresponding variable within the F0101 input object. 

NOTE: The javadoc in the Published Object contains an EnterpriseOne Alias above every internal variable declaration which can be used to help when mapping to/from the Internal Object internal variables e.g. addressBookNumber -> F0101_AN8.
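To make the mapping pattern concrete, here is a self-contained sketch using plain Java stand-ins. The class names and types below are hypothetical simplifications: the real code uses the generated published/internal value object classes and JDE's MathNumeric type.

```java
import java.math.BigDecimal;
import java.util.Calendar;
import java.util.Date;

// Hypothetical stand-in for the generated published value object
// (web service friendly names and types)
class PublishedRecord {
    private Integer addressNumber;            // EnterpriseOne alias AN8
    private Calendar dateBeginningEffective;  // EnterpriseOne alias EFTB

    public Integer getAddressNumber() { return addressNumber; }
    public void setAddressNumber(Integer n) { addressNumber = n; }
    public Calendar getDateBeginningEffective() { return dateBeginningEffective; }
    public void setDateBeginningEffective(Calendar c) { dateBeginningEffective = c; }
}

// Hypothetical stand-in for the internal value object
// (JDE friendly names; BigDecimal stands in for MathNumeric)
class InternalRecord {
    BigDecimal F0101_AN8;
    Date F0101_EFTB;

    // Constructor that converts the published object into the internal
    // representation, one field at a time
    InternalRecord(PublishedRecord vo) {
        this.F0101_AN8 = new BigDecimal(vo.getAddressNumber());
        this.F0101_EFTB = vo.getDateBeginningEffective().getTime();
    }
}
```

The real constructor simply repeats this pattern for every column in the table, using the EnterpriseOne Alias javadoc to match each published field to its internal counterpart.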

Implementing the BSSV

Within the BSSV class, there are three "TODO" statements to guide you in your implementation:
//TODO: Create a new internal value object.
//TODO: Call BusinessService passing context, connection and internal VO
//TODO: Add messages returned from BusinessService to message list for PublishedBusinessService.
Under the first TODO, we want to construct the Internal Value Object to be used when inserting into the table. We can use the new constructor we created above, passing in the Published Value Object:
// Create a new internal value object.
InternalF0101 internalVO = new InternalF0101(vo);
The second TODO requires us to use a new context menu item in JDeveloper added by JDE:
  1. Right-click the BSSV source file under the second TODO
  2. Select Create Database Call
  3. Click Next on the Welcome screen
  4. Select Insert and click Next
  5. Search for and select your table (F0101), then click Next
  6. Select the relevant columns (all in my case) and click Next
  7. Review the generated database operation and click Finish
This wizard will create a new private method that performs the database insert as well as add the call to this method to wherever you initiated the wizard (for us, under the second TODO):
// Call BusinessService passing context, connection and internal VO
insertToF0101(context, connection, internalVO);
The generated private method will have many errors because the wizard defaults the input value object to be of the type InputVOType. We need to change this to our Internal object type, InternalF0101, as below:
private static E1MessageList insertToF0101(IContext context, IConnection connection, InternalF0101 internalVO) {
To address the final TODO we need to use the return value from the private method which performs the database insert. Modify the line under the second TODO to capture the return value:
// Call BusinessService passing context, connection and internal VO
messages = insertToF0101(context, connection, internalVO);
At this point, there should only be one error left in the implementation, which occurs on the following line:
MessageValueObject confirmVO = new MessageValueObject(internalVO);
Simply remove the internalVO argument from this constructor call, and your BSSV implementation is now complete.

Testing the BSSV Logic

To test the code that you have written, you can run the BSSV class locally with a main method like that below:
public static void main(String args[]) {
    CustomAddressBookManager a = new CustomAddressBookManager();
    F0101 f = new F0101();
    f.setAddressNumber(new Integer(123));
    f.setAlternateAddressKey("abc");
    f.setDateBeginningEffective(Calendar.getInstance());
    // TODO set other fields
 
    try {
        a.insert(f);
    } catch (Exception e) {
        e.printStackTrace();
    }
}
Running this method will insert the relevant record into the database with the correct values. The values in the table will be in the correct JDE format for dates, numbers etc.

Running the exact same code again, correctly results in the following error:
oracle.e1.bssvfoundation.exception.BusinessServiceException:
Description: Table/View - Error during database operation: [DUPLICATE_KEY_ERROR] Duplicate key error obtained for table F0101.,
Resolution: See logs for detail of database operation failure

Wednesday, 29 August 2012

Using Identity Switching from SOA Suite


My requirement for Identity Switching came when I was implementing a BPM process. We had a process where a user would initiate a simple process and attach a relevant document. This document then needed to be checked in to UCM using the provided web service.

The original setup had the BPM process authenticating as a system user when calling the UCM web service, regardless of which user actually approved the task. This worked, but meant that every document appeared to be checked in as that system user, and not as the user who actually uploaded the document.

The solution to this problem was to use Identity Switching to effectively authenticate as one user, but then switch the SAML token that is actually used when calling the UCM Web Service.

This blog post uses the UCM Web Service as an example, but can be applied to any web service that has the oracle/wss11_saml_token_with_message_protection_service_policy applied to it. For instructions on how to apply this policy to the UCM (or another) web service, see Securing a Web Service on WebLogic.

Setting Up the JDeveloper Project

The first step is to create an empty JDeveloper SOA Application/Project:
  1. Open JDeveloper and select File->New->SOA Application
  2. Enter an Application Name and click Next
  3. Enter a Project Name and click Next
  4. Select Empty Composite and click Finish

Adding the External Web Service

Now that you have your empty composite you want to add the Web Service. To do this, first open the composite.xml file, then drag a Web Service adapter from the Component Palette onto the External References right-hand column. You will be prompted to enter your web service details. I have used the UCM Web Service:

Option 1 - Identity Switching Using Mediator

I find the Mediator simpler to implement, and it makes more sense if all you are doing is setting up Identity Switching.

For a Mediator, drag the Mediator Service Component from the Component Palette onto the Components central column of the composite.xml. I have set up my Mediator to be based off of the same WSDL as the Web Service to keep things simple. See below for details:


Connect the Mediator to the Web Service on the composite.xml.

Because the composite input is the same type as the web service input, the Mediator implementation is complete. If you used a different input schema for your Mediator, you will most likely need to transform your Mediator and Web Service inputs and outputs.

The only step left is to set up the actual Identity Switching in the Mediator:
  1. Open the Mediator
  2. Under Static Routing, click the button beside the first Assign Values field for the request
  3. Click the Add (plus) button
  4. In the From column, select constant from the drop down
  5. Enter the username of the user whose identity you want to switch to e.g. usera
  6. In the To column, select property from the drop down
  7. From the list of available property values, select javax.xml.ws.security.auth.username
  8. Click OK
  9. Save all files
Below are my Routing Rules after implementing the Mediator:


Option 2 - Identity Switching Using BPEL

Identity Switching can also be implemented using a BPEL Process, which can be useful if you are trying to achieve more than Identity Switching in your Composite.

For a BPEL Process, drag the BPEL Process Service Component from the Component Palette onto the Components central column of the composite.xml. I have set up my BPEL Process to be based off of the same WSDL as the Web Service to keep things simple. See below for details:


Connect the BPEL Process to the Web Service on the composite.xml.

The BPEL Process does not automatically map between the process and Web Service input/output. As such you will need to add:
  1. An Invoke which connects to your Web Service
  2. A Transform (or Assign) to map the BPEL process input to the Web Service input
  3. A Transform (or Assign) to map the Web Service output to the BPEL process output
The BPEL process should resemble that below:



The final steps are to perform the actual Identity Switching on the BPEL Invoke:
  1. Open the BPEL Process
  2. Double click the Invoke and select the Properties tab
  3. Locate the property named javax.xml.ws.security.auth.username
  4. In the Value column, click the "..." lookup button
  5. Select the Expression radio button and enter the username string of the user whose identity you want to switch to e.g. 'usera'
  6. Click OK twice
  7. Save all files

    Securing the Endpoints

    At this point, all of the relevant service endpoints have been added to the composite.xml, so we are ready to set their security. For this example, the composite will require a simple username/password token, and will use this token to authenticate against the UCM web service, but switch the Identity.

    The first step is to setup the UCM web service for Identity Switching:
    1. Right click on the Web Service under External References and select Configure WS Policies...
    2. Click the Add button beside the Security heading
    3. Select oracle/wss11_saml_token_identity_switch_with_message_protection_client_policy
    4. Click OK to both popups
    5. Save All files
    Next, set the security for the endpoint of our BPEL/Mediator:
    1. Right click on the Exposed Service and click Configure WS Policies...
    2. Click the Add button beside the Security heading
    3. Select oracle/wss_username_token_service_policy
    4. Click OK to both popups
    5. Save All files

    Adding System Policies

    Before a SOA Composite can use Identity Switching, it must be given explicit permission. The easiest way to do this is to log on to Enterprise Manager, right click on your domain and select Security -> System Policies:


    From the System Policies screen:
    1. Click the Create... button.
    2. Enter a Code Base of file:${common.components.home}/modules/oracle.wsm.agent.common_11.1.1/wsm-agent-core.jar
    3. Click the Add button under Permissions
    4. Check the Select here to enter details for a new permission checkbox
    5. Enter the following details:
      • Permission Class: oracle.wsm.security.WSIdentityPermission
      • Resource Name: *
      • Permission Actions: assert
    6. Click OK twice to save your changes
    You can also use the following WLST command to create the same System Policy as above:

    grantPermission(codeBaseURL='file:${common.components.home}/modules/oracle.wsm.agent.common_11.1.1/wsm-agent-core.jar', permClass="oracle.wsm.security.WSIdentityPermission", permTarget="resource=*", permActions="assert")

    NOTE: The value for Resource Name above should ideally be the name (and version) of your particular composite e.g. IdentitySwitchingComposite[1.0]. However, I could not seem to get this working correctly, so I took the easy way out and entered "*", so that all composites would be granted access to perform Identity Switching.

    Testing Identity Switching

    Now that your Composite has been developed and the necessary System Policies have been created, you can finally deploy your Composite for testing. To test, I called my Composite passing in the credentials of userb, and the update of metadata in UCM was recorded against usera.
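    A quick way to exercise the composite is to post a raw SOAP request carrying the WS-Security UsernameToken that oracle/wss_username_token_service_policy expects. Below is a minimal sketch of building such a request; the payload, credentials and the commented-out endpoint URL are placeholders from my description above, not values your environment will necessarily use:

```python
def build_request(username, password, body_xml):
    """Wrap a payload in a SOAP envelope carrying a WS-Security
    UsernameToken header (plain-text password, for illustration only)."""
    return (
        '<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">'
        '<soapenv:Header>'
        '<wsse:Security xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/'
        'oasis-200401-wss-wssecurity-secext-1.0.xsd">'
        '<wsse:UsernameToken>'
        f'<wsse:Username>{username}</wsse:Username>'
        f'<wsse:Password>{password}</wsse:Password>'
        '</wsse:UsernameToken>'
        '</wsse:Security>'
        '</soapenv:Header>'
        f'<soapenv:Body>{body_xml}</soapenv:Body>'
        '</soapenv:Envelope>'
    )

# Call the composite as userb; the composite then switches identity to
# usera before invoking the UCM web service.
envelope = build_request("userb", "welcome1", "<upd:updateMetadata/>")

# To actually send it (endpoint URL is a hypothetical placeholder):
# import urllib.request
# req = urllib.request.Request(
#     "http://soahost:8001/soa-infra/services/default/IdentitySwitchingComposite/...",
#     data=envelope.encode(), headers={"Content-Type": "text/xml"})
# urllib.request.urlopen(req)
```

    If the System Policy from the previous section is missing, the identity-switch client policy should reject the request at the agent rather than reach UCM, which makes this a useful first smoke test.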

    Securing a Web Service on WebLogic

    This post outlines how to apply WS-Policies to Web Service endpoints on a WebLogic server using the UCM Web Service as an example. It also details how to fix potential issues when securing web services in a multi-machine domain.

    Securing the Endpoint

    • Log into the Admin Console (http://<server>:7001/console)
    • Select Deployments from the left menu
    • Locate and select the relevant web service deployment. I am using GenericSoapService under the Oracle UCM Web Services deployment
    • Select the Configuration tab, then the WS-Policy sub tab
    • Click the relevant Service Endpoint. I am using GenericSoapPort
    • If prompted, select OWSM and click Next
    • Add the desired policy to the Chosen Endpoint Policies. For the UCM Web Service, only the following two policies are supported:
    oracle/wss11_saml_token_with_message_protection_service_policy
    oracle/wss11_username_token_with_message_protection_service_policy
    • Click Finish. IMPORTANT: Take note of the deployment plan path. For me, it was:

    <MW_HOME>/Oracle_ECM1/ucm/idc/components/ServletPlugin

    • If prompted, click OK to confirm on the Save Deployment Plan screen.
    • Activate Changes
    If you are running a domain with multiple machines, you may see an error message at the top of the screen indicating that the Plan.xml file was not found. If this occurs, follow the instructions below.

    Configuring a Multi-Machine Domain

    When you update the WS-Policy for the web service deployment, a Plan.xml file (along with other supporting files) is created on the machine where the Admin Server is running. If your web service is deployed to any other Managed Servers running on separate machines, these supporting files will not exist on those machines. This is why you receive the Plan.xml not found error.
    This problem is easily fixed by copying the relevant files from the Admin Server machine to the same location on every machine with a Managed Server which has the relevant Web Service deployed to it. 
    For my UCM Web Service example, I had to copy the Plan.xml:
    <MW_HOME>/Oracle_ECM1/ucm/idc/components/Plan.xml
    And the following directory:
    <MW_HOME>/Oracle_ECM1/ucm/idc/components/ServletPlugin
    From the Admin Server's machine to the same locations on the two machines that were running the UCM Managed servers.
    NOTE: After copying the above files/directories, check that the permissions match those set on the Admin Server copies.
    Now that all of the supporting files exist in the relevant location, you need to update your Web Service deployment:
    1. Log into the Admin Console (http://<server>:7001/console)
    2. Select Deployments from the left menu
    3. Check the box beside the relevant deployment. For me, this is Oracle UCM Web Services
    4. Click the Update button at the bottom or top of the page
    5. Make sure that the Redeploy radio button is selected and click Finish
    6. No restart is required
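    If you prefer the command line, the same redeployment can be done from WLST; this is a sketch only, and the admin URL, credentials and plan path are placeholders for the values from your own domain:

```python
# WLST script (run via $MW_HOME/oracle_common/common/bin/wlst.sh),
# not standalone Python. Adjust host, credentials and paths to suit.
connect('weblogic', 'welcome1', 't3://adminhost:7001')

# Redeploy the existing application using the updated deployment plan
# that was copied out to every machine in the previous step.
redeploy('Oracle UCM Web Services',
         planPath='<MW_HOME>/Oracle_ECM1/ucm/idc/components/Plan.xml')

disconnect()
```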

    Missing Users from Active Directory Provider in WebLogic

    I recently encountered an issue when configuring an Active Directory Authentication Provider within WebLogic. Although the provider was created successfully, only a dozen or so users were displayed under the Users and Groups tab when there should have been several hundred.

    I verified the following:
    1. The Active Directory provider was the default Authentication Provider
    2. The Control Flag of each Provider was set to "Sufficient"
    3. I could connect to the Active Directory using JXplorer, meaning I had the correct values for:
      • Host
      • Port
      • Principal
      • Credential (Password)
      • User Base DN
    The issue had to be related to some other property for Users in WebLogic. I reviewed the values I was using:

      • All Users Filter:  <blank>
      • User From Name Filter: (&(cn=%u)(objectclass=user))
      • User Search Scope: subtree
      • User Name Attribute: sAMAccountName
      • User Object Class: user
      • Use Retrieved User Name as Principal: <checked>

    The first thing that jumped out at me was the User From Name Filter. I had simply left the value WebLogic uses by default, which assumes that the User Name attribute is "cn". However, in my case, the username attribute in Active Directory was "sAMAccountName".

    Using JXplorer, I confirmed that the only reason that some Users were being displayed in WebLogic was that their "cn" and "sAMAccountName" attributes were identical. 

    I changed the value of the User From Name Filter to "(&(sAMAccountName=%u)(objectclass=user))", and restarted WebLogic. Lo and behold, all users were now being displayed in WebLogic. Success!
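    To see why the default filter only matched a handful of users, here is a small sketch of the substitution WebLogic performs: %u in the User From Name Filter is replaced with the login name, and the resulting LDAP filter must match the user's directory entry. The directory entries below are invented for illustration:

```python
# Simplified in-memory stand-in for an Active Directory search: an entry
# matches (&(<attribute>=%u)(objectclass=user)) when the chosen attribute
# equals the login name.
directory = [
    {"cn": "Jane Smith", "sAMAccountName": "jsmith", "objectclass": "user"},
    {"cn": "bob",        "sAMAccountName": "bob",    "objectclass": "user"},
]

def lookup(login, attribute):
    """Return entries whose <attribute> equals the login name."""
    return [e for e in directory
            if e[attribute] == login and e["objectclass"] == "user"]

# Default filter (&(cn=%u)(objectclass=user)): only matches users whose
# cn happens to equal their login name, like "bob".
print(lookup("jsmith", "cn"))               # no match
print(lookup("bob", "cn"))                  # matches

# Corrected filter (&(sAMAccountName=%u)(objectclass=user)): matches all.
print(lookup("jsmith", "sAMAccountName"))   # matches
```

    This mirrors the behaviour I observed with JXplorer: the users that did appear were exactly those whose cn and sAMAccountName were identical.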