Dealing with errors when creating an adhoc role – part three – WF_DIRECTORY error codes

In this last part in the series, I’m going to look at some other common errors which you might hit when creating an adhoc role using the WF_DIRECTORY APIs.  The first part of the series can be read here, and the second part is here.

The table below lists the error numbers and messages which can be returned by the WF_DIRECTORY package – the SQLCODE returned by the error will always be -20002, with the error number included in the text of the SQLERRM.

Error Name               Number   Error Message
WF_DUP_ROLE                 395   Duplicate role &DISPNAME (‘&ROLENAME’)
WF_INVALID_ROLE             396   Invalid role name ‘&ROLENAME’
WF_DUP_USER                 397   Duplicate user &DISPNAME (‘&USERNAME’)
WF_INVALID_USER             398   Invalid user name ‘&USERNAME’
WF_NO_LANG_TERR             399   Could not resolve NLS language and territory
WF_INVALID_NTF_PREF        4004   ‘&NTF_PREF’ is not an allowed notification preference.
WF_INVAL_CONCAT_NAME       4001   ‘&NAME’ is an invalid internal name because it contains a colon. For this particular role, if the name contains a colon, it has to be ‘&ORIG_SYSTEM:&ORIG_SYS_ID’.
WF_ROLENAME_TOO_LONG       4002   ‘&NAME’ exceeds the maximum length of &LENGTH characters.
WF_INVALID_ROLE_STATUS     4003   ‘&STATUS’ is not an allowed status.
WF_DUP_USER_ROLE           4016   User/Role relationship for user ‘&UNAME’ and role ‘&RNAME’ already exists.

This is not a definitive list of every error which the API may throw – it covers the errors which are most likely to occur when using adhoc roles.

If you need to look for any of these errors in your code, then include the error codes within the exception handler.  One thing to be careful of (and it’s caught me out in the past!) is that if you are working in multiple languages, you should only look for the error number rather than the error message as well.  For example, check for “%4016%” within SQLERRM rather than looking for “%4016%already exists”, since the message text will vary depending on the Workflow language!
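As a minimal sketch of that kind of handler (the role and user names here are made up for illustration), test only the Workflow error number:

DECLARE
  e_workflow_error  EXCEPTION;
  PRAGMA EXCEPTION_INIT (e_workflow_error , -20002);
BEGIN
  WF_DIRECTORY.AddUsersToAdHocRole ( role_name  => 'XX_SOME_ADHOC_ROLE'
                                   , role_users => 'SEARLEM' );
EXCEPTION
  WHEN e_workflow_error THEN
    -- check the error number only - the message text is language dependent
    IF SQLERRM LIKE 'ORA-20002:%4016%' THEN
      NULL;  -- user/role relationship already exists, so nothing to do
    ELSE
      RAISE;
    END IF;
END;
/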

Dealing with errors when creating an adhoc role – part two – duplicate roles

In this post, I wrote about how you can use custom code to create an adhoc role which suppresses errors when the role is invalid.

The next logical extension to the code is another common problem – what happens if the list of users contains duplicate records?  If you attempt to call the standard WF_DIRECTORY APIs, and the list of users contains duplicates, then the API will return an error:

  1  DECLARE
  2    v_rn           VARCHAR2(100);
  3    v_dn           VARCHAR2(100);
  4    v_role_users   WF_DIRECTORY.UserTable;
  5
  6  BEGIN
  7    v_rn := 'MATT'||TO_CHAR(SYSTIMESTAMP,'YYYYMMDDHH24MISSFF');
  8    v_dn := v_rn;
  9
 10    v_role_users(1) := 'SEARLEM';
 11    v_role_users(2) := 'SEARLEM';
 12    v_role_users(3) := 'SEARLEM';
 13
 14    WF_DIRECTORY.CREATEADHOCROLE2 ( ROLE_NAME         => v_rn
 15                                  , ROLE_DISPLAY_NAME => v_dn
 16                                  , ROLE_USERS        => v_role_users );
 17
 18* END;
APPS@R12_DEV > /
DECLARE
*
ERROR at line 1:
ORA-20002: 4016: User/Role relationship for user 'SEARLEM' and
          role 'MATT20120123115841745021000' already exists.
ORA-06512: at "APPS.WF_DIRECTORY", line 2005
ORA-06512: at line 14

In this example, the API raises the same ORA-20002 error, this time with a Workflow-specific error number of 4016.  We can modify the code from the earlier post to catch and deal with this error as well:

CREATE OR REPLACE PROCEDURE xx_create_adhocrole
       ( role_name               IN VARCHAR2
       , role_display_name       IN VARCHAR2
       , language                IN VARCHAR2 DEFAULT NULL
       , territory               IN VARCHAR2 DEFAULT NULL
       , role_description        IN VARCHAR2 DEFAULT NULL
       , notification_preference IN VARCHAR2 DEFAULT 'MAILHTML'
       , role_users              IN WF_DIRECTORY.UserTable
       , email_address           IN VARCHAR2 DEFAULT NULL
       , fax                     IN VARCHAR2 DEFAULT NULL
       , status                  IN VARCHAR2 DEFAULT 'ACTIVE'
       , expiration_date         IN DATE     DEFAULT NULL
       , parent_orig_system      IN VARCHAR2 DEFAULT NULL
       , parent_orig_system_id   IN NUMBER   DEFAULT NULL
       , owner_tag               IN VARCHAR2 DEFAULT NULL ) IS

  e_workflow_error  EXCEPTION;
  PRAGMA EXCEPTION_INIT (e_workflow_error , -20002);

BEGIN

  WF_DIRECTORY.CreateRole ( role_name               => role_name
                          , role_display_name       => role_display_name
                          , orig_system             => 'WF_LOCAL_ROLES'
                          , orig_system_id          => 0
                          , language                => language
                          , territory               => territory
                          , role_description        => role_description
                          , notification_preference => notification_preference
                          , email_address           => email_address
                          , fax                     => fax
                          , status                  => status
                          , expiration_date         => expiration_date
                          , parent_orig_system      => parent_orig_system
                          , parent_orig_system_id   => parent_orig_system_id
                          , owner_tag               => owner_tag );

  IF role_users.COUNT > 0 THEN
    FOR i IN 1 .. role_users.COUNT LOOP
      BEGIN

        WF_DIRECTORY.AddUsersToAdHocRole ( role_name  => role_name
                                         , role_users => role_users(i));

      EXCEPTION
        WHEN e_workflow_error THEN
          -- 3205 = invalid role or user name, 4016 = user/role relationship already exists
          IF   (SQLERRM LIKE 'ORA-20002:%3205%')
            OR (SQLERRM LIKE 'ORA-20002:%4016%') THEN
            NULL;
          ELSE
            RAISE;
          END IF;

        WHEN OTHERS THEN
          RAISE;

      END;
    END LOOP;
  END IF;

END xx_create_adhocrole;
/
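As a quick check that the wrapper behaves as expected, the duplicate user list from the failing example above can be pushed through it instead (a sketch – the user and display names are just the made-up ones used earlier):

DECLARE
  v_role_users   WF_DIRECTORY.UserTable;
BEGIN
  v_role_users(1) := 'SEARLEM';
  v_role_users(2) := 'SEARLEM';
  v_role_users(3) := 'SEARLEM';

  -- the duplicate entries are silently skipped rather than raising ORA-20002
  xx_create_adhocrole ( role_name         => 'MATT'||TO_CHAR(SYSTIMESTAMP,'YYYYMMDDHH24MISSFF')
                      , role_display_name => 'Demo adhoc role'
                      , role_users        => v_role_users );
END;
/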

This version of the code can be downloaded here.

In the next post in this series, I’ll be looking at the different errors which the WF_DIRECTORY API can throw.

Dealing with errors when creating an adhoc role – part one – invalid roles

I was recently asked about a problem with trying to create an adhoc role, and some of the errors which might occur.

I am sending a notification to multiple users.  While doing that, if the roles are valid, then there is no problem.

The problem occurs when one of the roles isn’t valid, for example:
v_role_users => ‘ROLE1 ROLE2 ROLE3’;

In this example, if ROLE2 is not valid then the notification is not sent to anyone because the role cannot be built. Instead, we get this error: 3205: ‘ADHOC_ROLE’ is not a valid role or user name.

If one of the roles is invalid, I still want the notification to be sent to the valid roles.

How to achieve this?

The way round the problem is to modify the code which builds the role so that, rather than passing all the users into the API at the same time, they are passed in one at a time.  As each user is added to the role, include additional error handling to catch and deal with any error.

Unfortunately, the Workflow development team use the same custom error code (-20002) for all errors which occur in the product, which means that as well as looking for the error to occur, we then need to parse the SQLERRM error message to check for the Workflow specific error number (3205 in this case).

The following code creates a new procedure which you can call to create the role, without any particularly invasive changes – instead of calling the WF_DIRECTORY APIs directly to create a new role, call this procedure, which will suppress the expected errors but still raise anything unexpected:

CREATE OR REPLACE PROCEDURE xx_create_adhocrole
       ( role_name               IN VARCHAR2
       , role_display_name       IN VARCHAR2
       , language                IN VARCHAR2 DEFAULT NULL
       , territory               IN VARCHAR2 DEFAULT NULL
       , role_description        IN VARCHAR2 DEFAULT NULL
       , notification_preference IN VARCHAR2 DEFAULT 'MAILHTML'
       , role_users              IN WF_DIRECTORY.UserTable
       , email_address           IN VARCHAR2 DEFAULT NULL
       , fax                     IN VARCHAR2 DEFAULT NULL
       , status                  IN VARCHAR2 DEFAULT 'ACTIVE'
       , expiration_date         IN DATE     DEFAULT NULL
       , parent_orig_system      IN VARCHAR2 DEFAULT NULL
       , parent_orig_system_id   IN NUMBER   DEFAULT NULL
       , owner_tag               IN VARCHAR2 DEFAULT NULL ) IS

  e_workflow_error  EXCEPTION;
  PRAGMA EXCEPTION_INIT (e_workflow_error , -20002);

BEGIN

  WF_DIRECTORY.CreateRole ( role_name               => role_name
                          , role_display_name       => role_display_name
                          , orig_system             => 'WF_LOCAL_ROLES'
                          , orig_system_id          => 0
                          , language                => language
                          , territory               => territory
                          , role_description        => role_description
                          , notification_preference => notification_preference
                          , email_address           => email_address
                          , fax                     => fax
                          , status                  => status
                          , expiration_date         => expiration_date
                          , parent_orig_system      => parent_orig_system
                          , parent_orig_system_id   => parent_orig_system_id
                          , owner_tag               => owner_tag );

  IF role_users.COUNT > 0 THEN
    FOR i IN 1 .. role_users.COUNT LOOP
      BEGIN

        WF_DIRECTORY.AddUsersToAdHocRole ( role_name  => role_name
                                         , role_users => role_users(i));

      EXCEPTION
        WHEN e_workflow_error THEN
          -- 3205 = invalid role or user name - skip this user and carry on
          IF SQLERRM LIKE 'ORA-20002:%3205%' THEN
            NULL;
          ELSE
            RAISE;
          END IF;

        WHEN OTHERS THEN
          RAISE;

      END;
    END LOOP;
  END IF;

END xx_create_adhocrole;
/
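As a quick sketch of calling the procedure (the user names are made up – one valid, one invalid – to show the behaviour described in the question):

DECLARE
  v_role_users   WF_DIRECTORY.UserTable;
BEGIN
  v_role_users(1) := 'SYSADMIN';         -- assumed to be a valid user
  v_role_users(2) := 'NOT_A_REAL_USER';  -- invalid - the 3205 error is suppressed

  xx_create_adhocrole ( role_name         => 'XX_DEMO_ADHOC_ROLE'
                      , role_display_name => 'Demo adhoc role'
                      , role_users        => v_role_users );
END;
/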

The SQL file can also be found here.

In later posts, I’ll be looking at extending this to catch different, common errors.

Can I change the display name for an attachment link?

Some time last year, I was asked about adding different attachments to notifications – something that has been discussed in the forum a number of times. This time, I was asked about whether it was possible to modify the display name, so that each attachment link contained the name of the file that had been attached:

Hello

I have a requirement to send a notification with an attachment, but the link should show the name of the attached file. Is there any way to change the display name of the attachment?

This is something that was also asked in the forum in June last year – where there are multiple attachments (or even if there is a single attachment), users like to see the details of the attachment rather than a generic “attached file” link, for example.

When the notification is built, any attached documents are included as attributes to the message.  The link to the attachment is created using the display name for the attribute in the text.  It is possible to modify the display name of an attribute programmatically (although this is NOT supported by Oracle since it would be a direct table update), BUT attributes aren’t versioned.  There is only one definition of an attribute within the process definition, so if we changed the display name, this would impact EVERY notification that uses the same message.

So, the quick and simple answer is “No”, and hopefully this quick post explains why you can’t.

How to permanently mount a SAMBA drive under Linux

A while back, I bought a QNAP NAS drive for backing up my server and other machines which are attached to the network.  Having created a number of shared folders on the NAS, I had shortcuts to them available to me via SAMBA, but I wanted to configure my desktop machine so that the folders were permanently mounted.

So, the first step was to make sure that SMBFS was installed and up to date:

sudo apt-get install smbfs

Once the installation was complete, rather than store the connection details in plain sight, I created a text file to hold them:

sudo gedit .nascredentials

The text file includes the user name and password to connect to the NAS (which are different from my Linux username and password):

username=mynasuser
password=mynaspassword

To make sure that the file was then restricted, I changed the access permissions for the credentials file so that only the root user could read it:

sudo chmod 600 .nascredentials

The next step was to create a mount point folder, which was accessible to any user on the system

sudo mkdir /media/nas

sudo chmod 777 /media/nas

After this, I edited the fstab file to include the mounting commands (note that the text should all be on one line, but I’ve had to include line breaks here for formatting reasons!)

sudo gedit /etc/fstab

//<NAS IP Address>/backups /media/nas cifs
  credentials=/home/myuser/.nascredentials,dir_mode=0777,
  file_mode=0777,umask=000,uid=myuser 0 0

This does rely on the NAS drive having a fixed IP address, and “backups” is the name of the Samba share.

Finally, rather than reboot, I only had to execute the following command to mount the new share

sudo mount -a

If the NAS is not switched on when the computer starts, then the drive won’t be mounted – if that happens, then I just need to run the last command to mount the folder again.

Why do messages remain visible after being dequeued?

A while back, I had an email asking why a message which has been removed from a Workflow-related advanced queue still remains visible:

We have a requirement to delete the future planned event from the WF_DEFERRED table. I am executing the below code to delete / remove a record from WF_DEFERRED table, however it is not deleting.

BEGIN
  WF_QUEUE.purgeevent (wf_queue.deferredqueue,'<Msg ID>', TRUE);
  COMMIT;
END;

Looking at the example code that they gave me, there is nothing particularly wrong with what they were doing – personally, I’m wary of using the WF_QUEUE APIs to manipulate messages on queues because I prefer to use DBMS_AQ and DBMS_AQADM when working with queues.  The reason I’m so reluctant to rely on the Workflow package stems from when Workflow 2.6 first shipped and introduced BES.  Working for a large banking client in London, we were the first implementation of Workflow which really pushed what BES was designed to do – however, dealing with Workflow product development regularly made me shudder with some of their suggestions on how to work with queues.  At the time, the team leader that I was working with was one of the leading lights of the Oracle integration space, and between us we concluded that we were better steering clear of the Workflow manipulation of queues (anything that relies on polling to check something which could / should have been written as a blocking dequeue was not a good sign, nor was the idea that the product was only shipping ready to work with single consumer queues…)

Anyway, that’s my personal preference – you can use the WF_QUEUE package if you want, but you won’t find anything that I work with that uses that package in preference to the DBMS_AQ package.

The first thing that I said to check was what the retention period for the queue was.  When a queue is defined in Oracle, you can define how long a message can remain visible once it has been dequeued (i.e. the retention period), and Oracle Workflow defines a retention period for the seeded queues as standard.  The retention period can be checked by running the following SQL:

SELECT owner, name, retention FROM all_queues WHERE name LIKE 'WF%';

If the retention time is not zero, then the message will remain visible for that length of time after it has been processed.  Assuming that the message is still visible, we can then check the status of the message by running the following SQL:

SELECT msg_state, count(*) FROM applsys.aq$wf_deferred GROUP BY msg_state;

SELECT msg_state FROM applsys.aq$wf_deferred WHERE msg_id = <your message ID>;

If the message has been removed then the status would show as PROCESSED.  All that the purgeevent API does is to dequeue and discard the message.  If the message has not been processed, then it may well be in a READY state, which indicates that the API has not worked.

As it turned out, the message had been dequeued correctly (the message state was PROCESSED), but the user was still seeing the message because the retention period had not yet elapsed when they checked the queue for the message.
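If you genuinely need dequeued messages to disappear from view sooner, the retention period itself can be changed – this is a sketch using DBMS_AQADM rather than the WF_QUEUE package, and it is not something I would do casually on a seeded Workflow queue without checking the support implications first:

BEGIN
  -- retention_time is in seconds; 0 means messages disappear as soon as they are processed
  DBMS_AQADM.ALTER_QUEUE ( queue_name     => 'APPLSYS.WF_DEFERRED'
                         , retention_time => 0 );
END;
/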

So – that’s why the messages were still visible, despite them being removed from the queue.

How can I stop one notification going by email to one particular person?

I was asked recently whether there was any way to stop one particular notification going to one particular user via email – the user wanted to receive all their other notifications via email, but not this one.

Since the user wanted to receive all their other notifications via email, it wasn’t a case of being able to change their notification preference, since that would stop all the others going out as well.  We could consider flicking the profile before sending the notification and then changing it back immediately afterwards, but I’m not convinced that this would work.  Also, it’s a pretty intrusive change to make to the process, since you would need new logic before and after the notification activity.

So, there are two other ways that the client could do this.

Firstly, they could write a trigger on WF_NOTIFICATIONS which fires when a new notification is created.  The trigger would check whether the recipient is the user we are interested in and whether this is the notification type in question, and if so, immediately set the MAIL_STATUS to SENT.  This would fool the mailer into believing that the notification had already been emailed, so it would never go out via email.
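A rough sketch of that trigger (the user name, item type and message name are made up, and you should check the exact columns on WF_NOTIFICATIONS in your own instance before relying on this):

CREATE OR REPLACE TRIGGER xx_suppress_ntf_email
BEFORE INSERT ON applsys.wf_notifications
FOR EACH ROW
BEGIN
  IF     :NEW.recipient_role = 'SEARLEM'
     AND :NEW.message_type   = 'XXDEMO'
     AND :NEW.message_name   = 'XX_APPROVAL_NTF' THEN
    -- fool the mailer into thinking the email has already gone out
    :NEW.mail_status := 'SENT';
  END IF;
END;
/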

The second approach would be to change the Workflow definition and include a post notification function for the one notification which performs the same logic – if the user is the one person who doesn’t want to receive the notification via email, then update the MAIL_STATUS to SENT again to stop the email being sent.

Both of these approaches have pros and cons.  Creating a trigger on the table may introduce a performance overhead – every notification which is created would need to be checked to determine whether to suppress the email being sent.  Creating a post-notification function means that changes need to be made to the Workflow definition, and the support guidelines would need to be checked before making changes to a seeded notification.  Also, if the notification already has a post notification function, then you would need to either change that PNF, or create a wrapper which calls the original PNF and then does the new logic.

There were three questions that I put to the client before recommending an approach to adopt:

  1. Is it possible / likely that more than one user will want this functionality?
  2. Is it possible / likely that the users might change their minds about whether they want this enabled or disabled?
  3. Is it possible / likely that the user(s) might want to include different notifications?

If the answer to the last question is “No”, then I would change the existing Workflow definition and include a post notification function.  If the answers to the other questions are “Yes”, but it still only applies to one notification, then I would still create a PNF to perform the logic, but build extra flexibility into the logic.  For example, I would also have a profile option set at the right level which indicates whether emails should be suppressed or not, which provides flexibility and ease of use to enable / disable the suppression of emails via the front-end screens rather than needing a coding change.

If, however, the answer to the last question is “Yes”, then I would define a trigger on the table which can then fire under multiple scenarios.  I would define a value set which dictates which notifications should be suppressed (either for a whole item type or on an individual notification basis), and have the trigger check the value set first to see whether this notification is one where email might not be needed.  If the notification is one which shouldn’t be emailed, then again I would have a profile option for the user / responsibility / org / site which determines whether this particular notification should not be emailed.  This may be more processor intensive than a quick and easy check of one thing, but it provides the most flexibility to the solution – no coding changes are required to add / remove notifications or people to the list, which means that changes can be implemented quickly and easily without the need for further testing to be performed.

In the end, the client adopted the post notification function approach and decided not to make it widely known that this was a possibility.  It may be that in future years, more notifications / users are required, at which point they should revisit the recommendations and switch to the trigger approach.  There is nothing wrong with the way they have gone – it meets their current requirements, but if those requirements change significantly in the future then a review will be required.

Reusing a process in Workflow

I was recently asked a question which took me back to my very first Workflow project, where we needed to re-use a sub-process repeatedly:

When I try to save the workflow, I am getting the error below:
WFPACTV_FAIL OPERATION=SAVE NAME=XXOM_P_VLVS_LN_FLW_GEN_SCH_DEF TYPE=OEOL
WFPACRV_REUSE_PROCESS NAME=XXOM_P_ASSIGN_VS_SO_LINE

What’s the problem?

The error is caused because Workflow will not allow the developer to re-use a sub-process within the same hierarchy, i.e. Start -> SUB1 -> SUB2 -> SUB1 -> End.  When I did my first Workflow project, we had a very complex process flow which managed sending messages via AQ from one system to a transformation hub, which then passed them onto a back-office system.  What was a simple message from the front-end became a series of simple messages in the back-end, and Workflow was used to manage the co-ordination between the systems.

What we found, however, was that in the front-end, we could build a message which allowed the user to enter a mailing address for the customer, for the product that the customer was buying (so different products could be posted to different addresses), for a proxy for a product and so forth – essentially we had four or five different levels where we wanted to do the same thing.  Unfortunately, because the “create an address” process could be called a number of times within the same hierarchy, we hit the limitation in Workflow that you cannot reuse the process.  As a workaround, we ended up creating copies of the sub-process and naming them differently (we added a different letter to the name), which became pretty annoying when it came to maintenance later on – if we had to make a change to one sub-process, we had to make sure that we were making the same change to each copy as well….

Why can’t you do it?

Here’s the background about WHY you can’t do it – feel free to skip ahead to my thoughts on bypassing the problem now.

Because of the way that the Workflow definition is stored in the database, if the same process is re-used, there is no way that the Workflow Engine can determine which time through the activities are being run.  For a single activity, when it is revisited, the previous visit is archived in the Workflow history tables.  For a series of activities (i.e. a sub-process), the same holds true when it is revisited – the whole process is recorded in the history tables.  However, there is a big difference between revisiting a process and reusing it.  If you are reusing the process, then there is nothing to archive – the previous use should remain in the current run.  When the activity then completes, the WF_ITEM_ACTIVITY_STATUSES table would have two records for the same activity – the first run and then the re-used process.  Sadly, the code which manages the update cannot take the two into account, so wouldn’t know which record to update.

If Workflow were being written today, and the activity statuses used a TIMESTAMP rather than a DATE to store the execution time (or even included a sequence number), then the Workflow Engine could more easily determine which record to update, which would help bypass this limitation.  However, if you had a parallel branch which routed to two identical re-used processes (I can’t see WHY you would do that, but it could be possible), then you hit the same limitation again – how do you identify within the Workflow tables which activity in the process is running?  Overall, it makes sense to include the limitation, IMO.

Ways Round The Problem

If there are only going to be a limited number of times that the process is going to be re-used, then I would be inclined to make a copy of the process and rename it slightly and use that.  That process is significantly easier if you know that the process is finalized, so that you won’t be needing to make the same change to multiple copies.  If you are going this route, then you need to ensure that the different process copies are sufficiently clearly labelled / documented / commented to allow you (or someone else) to maintain the definition going forwards.

If there are a large number of processes that need to be copied (I remember having copies A through to K of one sub-process!), then I would define the process as a separate, runnable process which is invoked as a child process with its own item type and item key.  There are no limitations on the number of times that the process can be re-used, and if the process is still in the same item type, then when it is created, the item attribute definitions will be the same – you just need to include something in your code which copies them from one instance to another.

One downside of implementing a parent-child relationship, however, is that when you look at the entire history of the process, it isn’t going to be all in one place – there will be some parts recorded in one process and others in other bits.  Also, I would be wary of using the standard WAITFORFLOW / CONTINUEFLOW activities to manage the flow, if there is a way that there could be more than one waiting process.  Instead, I would define some custom business events – have the main flow wait to receive the event, and the child flows raise the event.  This way, when you raise the event, you can pass attribute values easily back into the parent without having to directly call the WF_ENGINE APIs.
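As a sketch of what I mean (the event name, event key and parameter name are all made up for illustration), the child flow can raise a custom event and pass values back to the parent as event parameters:

DECLARE
  l_parameters   WF_PARAMETER_LIST_T;
BEGIN
  -- add any values that the parent flow needs as event parameters
  WF_EVENT.AddParameterToList ( p_name          => 'XX_ADDRESS_ID'
                              , p_value         => '12345'
                              , p_parameterlist => l_parameters );

  -- raise the custom event; the parent flow waits to receive it
  WF_EVENT.Raise ( p_event_name => 'xx.custom.address.created'
                 , p_event_key  => 'PARENT_ITEM_KEY'
                 , p_parameters => l_parameters );
END;
/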

 

If I were building that first project from scratch, there are many things that I would change.  This would definitely be one of those.

Why isn’t the approval history available to all members of a role?

I recently had an email asking for help with trying to work out why a notification wasn’t showing the approval history:

I have a custom Workflow which has a notification where I am using an adhoc role to send the notification to multiple users. The Expand Roles checkbox is checked.
Now the issue is that the Action History table which comes in the approval notifications does not appear in the notification sent to each user.  The history is visible to the user who was added first to the adhoc role (where the Notification Group ID = Notification ID), but for all the other users the history table itself is not shown in the notification.

I have tried using WF_NOTIFICATION(HISTORY) in the workflow message but this is unsuccessful.

This intrigued me for a while – why would the history appear for one user but not for any others, when the content should obviously be the same?  Then I considered the issue a bit more, and it turns out to be standard functionality.

Here’s a more detailed scenario.  A notification is sent to a role, which has four users assigned to the role.  The notification requires a response, and the “Expand Roles” checkbox is ticked so that each user receives their own copy of the notification.  The notification is accepted as approved when either 75% of the recipients have approved it or 50% have rejected it.  Within the notification, there is a history table.  Over the following few days, the following activities take place:

  • Day 1 – Within five minutes of receiving the notification, the first approver responds with an approval.
  • Day 1 – Within five minutes of receiving the notification, the fourth approver rejects the notification.
  • Day 2 – The second approver asks for more information from the person who submitted the request.
  • Day 2 – The third approver asks for more information from the first approver.
  • Day 3 – The first approver answers the question from the third approver.
  • Day 5 – The person who submitted the request responds to the question from the second approver.
  • Day 5 – The second approver approves the request.
  • Approval / rejection from the third approver is still outstanding.

At this stage, what should the approval history be showing?  As far as the Workflow process is concerned, the notification (although they all have different notification IDs) can only have one history.  So, what is the history?  Is the history of the notification which was sent to the first approver the same as that sent to the second?

At some stage, the Workflow product development team decided that it made sense that a notification could only have one history, therefore even when the role was expanded, only the first person in the role would have the history.  So, what was happening was expected functionality – only that one person will generate history in the notification.   (It turns out that there is a brief mention of this being standard functionality in Metalink note 431219.1)

So, what can be done about it?

There isn’t much that can really be done to circumvent the standard functionality.  I discussed the issue with another Certified Workflow Expert that I know and bounced a few ideas back and forth.

The first suggestion he made was to use the special attribute #RELATED_HISTORY on the notification, which allows you to include the history from one notification in another notification.  His thoughts were that if a post notification function were included, it might be possible to set the attribute on the second and third copies so that they include the history of the first approval notification.  However, when we looked at it a bit more, it became obvious that this idea won’t work.  The #RELATED_HISTORY attribute can only be used to associate different notifications together, since it needs to be set to the label for the notification to relate, and the attribute only works on an FYI notification.

The next suggestion was that a single notification could be sent (i.e. without expanding the roles) to an email distribution list.  This would allow the user to receive what appears to be their own copy of the notification and act on it, but means that there is no way that the voting requirements could be met, and would only allow a response via email if the flag was set to allow forwarded responses, since any response would come from a different email address to the one that was notified.  Depending on the exact business requirements, this may or may not meet the bill.

So, it comes to the final suggestion.  Rather than including the standard history within the notification, we could suppress it from the first notification and replace it with a custom table.  Including a PL/SQL document attribute whose code generates a table covering the history of all the notifications in the group would enable every copy of the notification to include an up-to-date history.  Setting the document ID to equal the notification ID means that you can programmatically determine the group ID for the notification and build a complete history of the process.
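To give an idea of what that PL/SQL document might look like, here is a sketch (the procedure name and the columns selected are illustrative, and it is simplified to produce HTML only – a real version would honour the requested display type properly):

CREATE OR REPLACE PROCEDURE xx_adhoc_ntf_history
       ( document_id   IN            VARCHAR2
       , display_type  IN            VARCHAR2
       , document      IN OUT NOCOPY VARCHAR2
       , document_type IN OUT NOCOPY VARCHAR2 ) IS
BEGIN
  document_type := display_type;
  document      := '<table border="1"><tr><th>Recipient</th><th>Status</th><th>Closed</th></tr>';

  -- the document ID is set to the notification ID, so derive the group
  -- and list every copy of the notification with its current status
  FOR r IN ( SELECT wn.recipient_role, wn.status, wn.end_date
             FROM   wf_notifications wn
             WHERE  wn.group_id = ( SELECT group_id
                                    FROM   wf_notifications
                                    WHERE  notification_id = TO_NUMBER(document_id) )
             ORDER BY wn.notification_id ) LOOP
    document := document||'<tr><td>'||r.recipient_role||'</td><td>'||
                r.status||'</td><td>'||TO_CHAR(r.end_date)||'</td></tr>';
  END LOOP;

  document := document||'</table>';
END xx_adhoc_ntf_history;
/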

This turned out to be the solution that the client wanted, needed and implemented in the end, but for some clients it would be overkill.  I’ll ask for some screenshots to include, but am not hopeful given their penchant for security – failing that, I’ll try to come up with some examples of how we did it.

Finding the Response to a Notification

A while back, I was asked for advice on finding the result of a notification within the process:

Can you please help me, how to capture the result of notification approval?  My notification has result type approval.

My first response was to question why the client needed to know the result within the process.  Merely by having a result for the notification, the knowledge is automatically communicated back to the process, which can then have conditional branching based on the response provided.  If the result is needed further down the line as an attribute, then a standard ASSIGN activity could be used to store the result in an attribute.

The problem with this approach is that if the process models multiple results on the same transition branch (i.e. by using a <Default> or <Any> transition), then there is no easy way to determine which of the multiple responses was actually provided.  For example, imagine that the notification has a lookup type as a response which has lookup codes defined which mean “Approved”, “Finally Approved”, “Rejected” and “Resubmit”.  In the process flow, the notification has a transition defined for “Rejected”, one for “Resubmit” and a “default” transition which manages the two approval routes.  If the business requirement is that the only thing that we need to capture is the “type” of response (i.e. approval or rejection) then we can still take the same approach.  If, however, the business requirement is that the exact response is required rather than the type of response, then the response needs to be captured in a different method.

There are two approaches that can be adopted to find the exact response to the notification, both requiring custom PL/SQL.  Firstly, you can define an activity in the process which accesses the Workflow tables directly to find the previous notification and then find the response to the notification.  In this case, you will need to hard-code in the PL/SQL the type of message that you are looking for, and if there is any change to the Workflow process at a later stage, the PL/SQL will need to be modified to handle the change of notification type.

The second approach, which is the one that I would recommend, is to write a custom post-notification function (PNF).  Within the PNF, you can find the details of the response directly and perform your processing logic.  Additionally, because the code can be written in a generic manner, the PNF can be re-used for any number of scenarios where the result is required.

The developer for the client followed this advice, but still had problems trying to find the result:

I attached the function to the notification. In the function I’m trying to capture the result using GetItemAttrText

lc_result := wf_engine.getitemattrtext( itemtype => itemtype,
itemkey => itemkey,
aname => 'RESULT'
);

I always get null

The problem here, which is an easy fix, is that he was using the wrong API to fetch the result.  When the notification is being responded to, the result won’t yet be stored in item attributes – that only happens once the processing of the post notification function is complete.  The way to find the result is to first find the notification ID using WF_ENGINE.context_nid.

Once you know the notification ID, call WF_NOTIFICATION.getAttrText to find the attribute from the notification. You’ll need to do that for the RESULT attribute and any other attributes on the notification that the business requires.
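As a sketch of the inside of such a PNF (using the standard Workflow activity signature; any attribute name other than RESULT would be specific to your own message):

CREATE OR REPLACE PROCEDURE xx_capture_response
       ( itemtype  IN            VARCHAR2
       , itemkey   IN            VARCHAR2
       , actid     IN            NUMBER
       , funcmode  IN            VARCHAR2
       , resultout IN OUT NOCOPY VARCHAR2 ) IS

  l_nid      NUMBER;
  l_result   VARCHAR2(100);

BEGIN
  IF funcmode = 'RESPOND' THEN
    -- context_nid holds the ID of the notification being responded to
    l_nid    := WF_ENGINE.context_nid;

    -- read the response from the notification itself, not from item attributes
    l_result := WF_NOTIFICATION.GetAttrText ( nid   => l_nid
                                            , aname => 'RESULT' );

    -- any other attribute on the notification can be fetched the same way;
    -- raising an exception here would reject the response
  END IF;
END xx_capture_response;
/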

Why would you need the result?

After some discussion, we finally got to the business requirements, which then meant that we could design an effective solution.  One of my bugbears about providing support and advice is that I would much prefer someone to ask “what is the best way to solve the problem?” rather than “how do I do XYZ?”, which means that sometimes you go off down the wrong path (see this post featuring a video by Tom Kyte on the same issue!).

The business requirements were that the notification has a “comments” field which, if the notification is being rejected, must be completed to conform to the business processing requirements.  Therefore, the check needs to be that when the notification is responded to, and it is being rejected, and the comments field has not been completed, then an error should be raised.  This can only be done effectively using a post notification function – if I had known the business requirements from the start, then I would not have suggested my first approach.

Anyway, that’s how to find the response in different ways – depending on the business requirements, either one may be the right approach to adopt.
