Subsequent Lookup Operators cause OWB to generate undeployable mappings

Hi,
I am using OWB 11gR2.
I am trying to create a fact-loading mapping based on a Data Vault model.
It gives me an error during deployment.
Validating and generating don't give me an error.
So, to load the fact, I tie the various Data Vault tables together with a joiner operator.
All tables except the driving table are set to the outer join role.
The output fields are tied to various lookup operators.
The output from those is tied to the target fact table.
All of this goes well: this mapping is deployable, and upon generate one can see the statements.
The problem arises when I try to insert another lookup operator between the output of one lookup operator and the fact.
That mapping does not give a validation error, and generating the intermediate code doesn't error either.
Deploying doesn't work, however: it fails with an "invalid identifier" error.
Inspecting the generated intermediate code does reveal the problem:
OWB appends all of the join clauses from the first joiner, as a WHERE clause, to the final statement used for loading the fact.
When you look at the first joiner, though, it just displays all of the left outer join clauses nicely.
There is no WHERE clause to be found on this first joiner.
It is only added at the fact stage, in exactly the same place where the left outer joins from the first joiner appear.
Questions:
Is there a limit to the number of subsequent lookup operators one can use? Two cannot be it, I hope...
Is there a patch for this?
Another remark: I have noticed that when I use more than 8 lookup operators on my canvas, the lookup conditions get corrupted.
They become something like lookup.fieldname = null instead of lookup.fieldname = input.fieldname.
When this happens I have to correct every lookup operator on the mapping.
Is this a known error?
I hope somebody has an answer for my first problem.
rgrds Mike
Edited by: MichaelR64 on 16-jan-2011 23:39

Hi,
I did some further testing:
This happens when there is an unequal number of lookups "attached" to the driving table.
What I mean is that if there is one lookup attached to a port of the driving table, then the next port that has a lookup cannot have two (serially connected) lookups.
Or, put the other way around: if a port has two lookups (serially connected), then the error disappears when all the other ports with lookups also have two lookups (serially connected, that is).
At first I thought it had something to do with the joiner used in the first stage.
Replacing that with a view didn't solve it.
In fact, using a lookup where multiple-row output is specified causes OWB to implement it with an outer join.
It is this outer join part that is being mangled by OWB, as described above.
If anyone can comment on this...
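For readers not familiar with the generated code: a key lookup that allows multiple rows is rendered not as a scalar subquery but as a left outer join against the lookup table. A minimal, self-contained sketch of that shape (all names invented for illustration, runnable as-is on Oracle):

WITH src AS (SELECT 1 AS business_key FROM dual),
     lkp AS (SELECT 1 AS business_key, 100 AS surrogate_key FROM dual)
SELECT src.business_key,
       lkp.surrogate_key      -- NULL when no lookup row matches
FROM src
LEFT OUTER JOIN lkp
  ON (lkp.business_key = src.business_key);

The bug described above is that the ON predicates of such joins from the first joiner reappear as a trailing WHERE clause in the final fact INSERT, where those aliases are no longer in scope, hence the invalid-identifier error.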

Similar Messages

  • Profile mappings - generated pluggable mappings

When you create a correction mapping using Create Correction in the Data Profile Editor, the mapping contains generated pluggable mappings. Where can you find these mappings?

I'm not sure where you find the pluggable mapping, but if you select the pluggable in your map and use the 'Visit Child Graph' button on the 'General' toolbar, you should be able to see the mapping detail.
    Cheers
    Si

  • OWB Code generator - does not generate consistent code in every deployment

    Hi,
I have a mapping to load the dimension WM_USERS_ADVISORS_DIM.
Every time I generate the code for this mapping, the generated code is different from the previous run, so the query optimization applied at database level does not work.
Following are the two queries generated for the same mapping; since they are identical apart from one generated name, the first is shown in full and the difference is noted below it.
Also, in the target load order for this dimension, values like USERS_ADVISORS_STG are populated automatically, whereas I have not declared this table/dimension.
I am unable to remove this from the load order. Every time the code gets generated, this _STG is added to each level in the dimension.
Please suggest how to have consistent ETL code generated every single time the ETL deploys.
INSERT /*+ APPEND PARALLEL ("USERS_ADVISORS_STG") */ INTO "OWB$USERS_ADVISORS__1FFE2A" "USERS_ADVISORS_STG"
  ("PERSON_ID", "ADVISOR_PERSON_ID", "INVITE_CLIENT_ID", "IS_PRIMARY_ADVISOR", "SCHEMA_NAME", "DUMMY_LEVEL_KEY", "DESCRIPTION", "DUMMY_VALUE")
(SELECT "INGRP1"."PERSON_ID" "PERSON_ID$10", "INGRP1"."ADVISOR_PERSON_ID" "ADVISOR_PERSON_ID$10", "INGRP1"."INVITE_CLIENT_ID" "INVITE_CLIENT_ID$8", "INGRP1"."IS_PRIMARY_ADVISOR" "IS_PRIMARY_ADVISOR$10", "INGRP1"."SCHEMA_NAME" "SCHEMA_NAME$9", "INGRP2"."DUMMY_LEVEL_KEY" "DUMMY_LEVEL_KEY$8", "INGRP2"."DESCRIPTION" "DESCRIPTION$12", "INGRP1"."LOOKUP$$$_1_DUMMY_VALUE" "LOOKUP$$$_1_DUMMY_VALUE$6"
 FROM (SELECT "LOOKUP_INPUT_SUBQUERY$5"."USERS_ADVISORS_KEY$6" "USERS_ADVISORS_KEY", "LOOKUP_INPUT_SUBQUERY$5"."PERSON_ID$11" "PERSON_ID", "LOOKUP_INPUT_SUBQUERY$5"."ADVISOR_PERSON_ID$11" "ADVISOR_PERSON_ID", "LOOKUP_INPUT_SUBQUERY$5"."INVITE_CLIENT_ID$9" "INVITE_CLIENT_ID", "LOOKUP_INPUT_SUBQUERY$5"."IS_PRIMARY_ADVISOR$11" "IS_PRIMARY_ADVISOR", "LOOKUP_INPUT_SUBQUERY$5"."SCHEMA_NAME$10" "SCHEMA_NAME", "LOOKUP_INPUT_SUBQUERY$5"."LOOKUP$$$_1_DUMMY_VALUE$7" "LOOKUP$$$_1_DUMMY_VALUE"
   FROM (SELECT "DEDUP_SRC_0$2"."USERS_ADVISORS_KEY$7" "USERS_ADVISORS_KEY$6", "DEDUP_SRC_0$2"."PERSON_ID$12" "PERSON_ID$11", "DEDUP_SRC_0$2"."ADVISOR_PERSON_ID$12" "ADVISOR_PERSON_ID$11", "DEDUP_SRC_0$2"."INVITE_CLIENT_ID$10" "INVITE_CLIENT_ID$9", "DEDUP_SRC_0$2"."IS_PRIMARY_ADVISOR$12" "IS_PRIMARY_ADVISOR$11", "DEDUP_SRC_0$2"."SCHEMA_NAME$11" "SCHEMA_NAME$10", "DEDUP_SRC_0$2"."LOOKUP$$$_1_DUMMY_VALUE$8" "LOOKUP$$$_1_DUMMY_VALUE$7"
     FROM (SELECT CAST (NULL AS NUMERIC) "USERS_ADVISORS_KEY$7", "AGG_INPUT$2"."PERSON_ID$13" "PERSON_ID$12", "AGG_INPUT$2"."ADVISOR_PERSON_ID$13" "ADVISOR_PERSON_ID$12", MIN("AGG_INPUT$2"."INVITE_CLIENT_ID$11") "INVITE_CLIENT_ID$10", MIN("AGG_INPUT$2"."IS_PRIMARY_ADVISOR$13") "IS_PRIMARY_ADVISOR$12", "AGG_INPUT$2"."SCHEMA_NAME$12" "SCHEMA_NAME$11", MIN("AGG_INPUT$2"."LOOKUP$$$_1_DUMMY_VALUE$9") "LOOKUP$$$_1_DUMMY_VALUE$8"
       FROM (SELECT ( :B4 ) "USERS_ADVISORS_KEY$8", "USER_ADVISOR"."PERSON_ID" "PERSON_ID$13", "USER_ADVISOR"."ADVISOR_PERSON_ID" "ADVISOR_PERSON_ID$13", "INVITE_CLIENTS"."INVITE_CLIENT_ID" "INVITE_CLIENT_ID$11", "USER_ADVISOR"."IS_PRIMARY_ADVISOR" "IS_PRIMARY_ADVISOR$13", ( :B3 ) "SCHEMA_NAME$12", 0 "LOOKUP$$$_1_DUMMY_VALUE$9"
         FROM (SELECT "SET_OPERATION$2"."PERSON_ID$14" "PERSON_ID", "SET_OPERATION$2"."ADVISOR_PERSON_ID$14" "ADVISOR_PERSON_ID", "SET_OPERATION$2"."CREATED_ON$2" "CREATED_ON", "SET_OPERATION$2"."IS_PRIMARY_ADVISOR$14" "IS_PRIMARY_ADVISOR", "SET_OPERATION$2"."MOD_TAG$2" "MOD_TAG", "SET_OPERATION$2"."MODIFIED_ON$2" "MODIFIED_ON"
           FROM (SELECT "PERSON_ID" "PERSON_ID$14", "ADVISOR_PERSON_ID" "ADVISOR_PERSON_ID$14", "CREATED_ON" "CREATED_ON$2", "IS_PRIMARY_ADVISOR" "IS_PRIMARY_ADVISOR$14", "MOD_TAG" "MOD_TAG$2", "MODIFIED_ON" "MODIFIED_ON$2"
             FROM (SELECT "USERS_ADVISORS"."PERSON_ID" "PERSON_ID", "USERS_ADVISORS"."ADVISOR_PERSON_ID" "ADVISOR_PERSON_ID", "USERS_ADVISORS"."CREATED_ON" "CREATED_ON", "USERS_ADVISORS"."IS_PRIMARY_ADVISOR" "IS_PRIMARY_ADVISOR", "USERS_ADVISORS"."MOD_TAG" "MOD_TAG", "USERS_ADVISORS"."MODIFIED_ON" "MODIFIED_ON" FROM "ADVIEWPROD"."USERS_ADVISORS" "USERS_ADVISORS"
              UNION
              SELECT "USERS_ADVISORS_DLOG"."PERSON_ID" "PERSON_ID", "USERS_ADVISORS_DLOG"."ADVISOR_PERSON_ID" "ADVISOR_PERSON_ID", "USERS_ADVISORS_DLOG"."CREATED_ON" "CREATED_ON", "USERS_ADVISORS_DLOG"."IS_PRIMARY_ADVISOR" "IS_PRIMARY_ADVISOR", "USERS_ADVISORS_DLOG"."MOD_TAG" "MOD_TAG", "USERS_ADVISORS_DLOG"."MODIFIED_ON" "MODIFIED_ON" FROM "ADVIEWPROD"."USERS_ADVISORS_DLOG" "USERS_ADVISORS_DLOG")) "SET_OPERATION$2") "USER_ADVISOR"
         LEFT OUTER JOIN (SELECT "INVITE_CLIENTS"."ADVISOR_PERSON_ID" "ADVISOR_PERSON_ID", "INVITE_CLIENTS"."PERSON_ID" "PERSON_ID", "INVITE_CLIENTS"."INVITE_CLIENT_ID" "INVITE_CLIENT_ID" FROM "ADVIEWPROD"."INVITE_CLIENTS" "INVITE_CLIENTS") "INVITE_CLIENTS"
           ON ((("USER_ADVISOR"."PERSON_ID" = "INVITE_CLIENTS"."PERSON_ID")) AND (("USER_ADVISOR"."ADVISOR_PERSON_ID" = "INVITE_CLIENTS"."ADVISOR_PERSON_ID")))
         WHERE (("USER_ADVISOR"."CREATED_ON" BETWEEN (TO_DATE(( :B2 ), 'mm/dd/yyyy hh24:mi:ss')) AND (TO_DATE(( :B1 ), 'mm/dd/yyyy hh24:mi:ss')) OR "USER_ADVISOR"."MODIFIED_ON" BETWEEN (TO_DATE(( :B2 ), 'mm/dd/yyyy hh24:mi:ss')) AND (TO_DATE(( :B1 ), 'mm/dd/yyyy hh24:mi:ss'))))) "AGG_INPUT$2"
       GROUP BY "AGG_INPUT$2"."PERSON_ID$13", "AGG_INPUT$2"."ADVISOR_PERSON_ID$13", "AGG_INPUT$2"."SCHEMA_NAME$12") "DEDUP_SRC_0$2") "LOOKUP_INPUT_SUBQUERY$5"
   WHERE ((NOT ("LOOKUP_INPUT_SUBQUERY$5"."PERSON_ID$11" IS NULL AND "LOOKUP_INPUT_SUBQUERY$5"."ADVISOR_PERSON_ID$11" IS NULL AND "LOOKUP_INPUT_SUBQUERY$5"."SCHEMA_NAME$10" IS NULL)))) "INGRP1"
 LEFT OUTER JOIN (SELECT "DUMMY_LEVEL_0"."DUMMY_LEVEL_KEY" "DUMMY_LEVEL_KEY", "DUMMY_LEVEL_0"."DUMMY_VALUE" "DUMMY_VALUE", "DUMMY_LEVEL_0"."DESCRIPTION" "DESCRIPTION" FROM "WM_USERS_ADVISORS_DIM" "DUMMY_LEVEL_0" WHERE ("DUMMY_LEVEL_0"."DIMENSION_KEY" = "DUMMY_LEVEL_0"."DUMMY_LEVEL_KEY") AND ("DUMMY_LEVEL_0"."DUMMY_LEVEL_KEY" IS NOT NULL)) "INGRP2"
   ON (("INGRP1"."LOOKUP$$$_1_DUMMY_VALUE" = "INGRP2"."DUMMY_VALUE")))
(The second generated statement is identical except for the name of the generated intermediate table: "OWB$USERS_ADVISORS__1FEA66" instead of "OWB$USERS_ADVISORS__1FFE2A". It is this changing generated name that defeats query optimization applied at the database level.)
    Thanks in advance
    Meg
    Edited by: Meg on Jan 4, 2012 5:33 PM


  • Custom Purchase Order template causes Error while generating PDF

    The standard XSLFO works, my custom one errors:
    History of the world:
    1) I downloaded the XML Publisher thing for Word, installed it no problems
    2) Downloaded the XML data definition for the Standard Purchase Order from XML Publisher Administrator
3) Created a blank Word document and created the purchase order layout from scratch using the XML Publisher plug-in
4) Previewed it as a PDF in Word - it looked fine (well, it was a start)
    5) Exported the XSLFO
    6) In XML Publisher created a new template and uploaded the XSLFO
    7) Assigned the new template to the document in Purchasing
All good... the new template is definitely the one being used by the PO Output for Communication program. The problem of course is that it throws a useless error message :) - namely:
    PoPrintingUtil.getBlobPDF(input,input) - After initializing the FOProcessor
    PoPrintingUtil.getBlobPDF(input,input) - After setting the i/o stream and output format
PoPrintingUtil.getBlobPDF(input,input) - Error while generating the PDF
oracle.apps.xdo.XDOException
genDoc() : Exception java.lang.Exception: Error while generating PDF :null
    java.lang.Exception: Error while generating PDF :null
    java.lang.Exception: Error while generating PDF :null
         at oracle.apps.po.communicate.PoGenerateDocument.genDoc(PoGenerateDocument.java:2011)
         at oracle.apps.po.communicate.PoGenerateDocumentCP.runProgram(PoGenerateDocumentCP.java:421)
         at oracle.apps.fnd.cp.request.Run.main(Run.java:148)
    When I run POXPOPDF in Debug I get:
    getArchiveOn(): APPROVE
    After calling genDocThu May 18 12:50:05 EST 2006
    Adding the blob to vector
    java.lang.NullPointerException
    java.lang.NullPointerException
         at java.io.ByteArrayInputStream.<init>(ByteArrayInputStream.java:89)
         at oracle.apps.po.communicate.PoGenerateDocumentCP.runProgram(PoGenerateDocumentCP.java:304)
         at oracle.apps.fnd.cp.request.Run.main(Run.java:148)
    I know no one can magically fix this for me (I wish!) but does anyone have any suggestions on what to do next? I have no conditional formatting or any other more complex functionality, just a really boring PO layout with a logo.
    Any suggestions welcome, in the meantime I will keep trawling through Metalink in search of a clue ;)
    Thanks
    Jo

    Hi Jo,
    The first version for which the Template Builder was released is 5.0
    Well, I guess I am one of the few who has a backported 4.5 version of the template builder. I did that for testing exactly your case. I just replaced our xdocore.jar file with the 4.5 version and it worked. The core.jar is not easily available. The files are part of the 4.5 patch, but I think it is too much work to get them out.
However, I would strongly recommend upgrading to a later version of XML Publisher. We have made huge improvements since 4.5: performance, translation, RTF template capabilities...
    I just checked the process of converting an RTF template to FO and uploading it to EBS with 5.6.2 and it still worked. So it seems you can go straight to the latest version.
    Hope that helps,
    Klaus

  • Error in Fork Step causing issue in generating next work items

    All,
    We have ECC 6.0 with the following SP:
SAP_BASIS   700   0012   SAPKB70012   SAP Basis Component
SAP_ABA     700   0012   SAPKA70012   Cross-Application Component
I have this strange issue. In one of my custom workflows, I am using a fork step with 02/02 necessary outcomes. In branch 1, I am sending a mail to a user's e-mail ID. In branch 2, I have 3 activity steps for another user. Step 1 is to display an invoice and step 2 is a user decision to approve/deny this invoice.
I know that during runtime, at the fork step, the work item for branch 1 and the first work item for activity step 1 in branch 2 get generated almost simultaneously (though with some small time difference). This is OK. But the issue is, when the SendMail in branch 1 errors out, and step 1 is then finished successfully by the other user, the work item for the 2nd activity step in branch 2 does not get created.
As I understand it, execution of the two branches in a fork should be independent of each other; that means work item creation and execution in branch 1 and for the 3 steps in branch 2 should happen independently. But this doesn't seem to be the case.
I understand that when the SendMail in branch 1 failed, the workflow was set to 'Error' status. But there was already an open work item for step 1 of branch 2. When this work item is completed successfully, I expect the work item for the 2nd step to get generated, but this is not happening. However, when I restart the whole workflow using SWPR, it generates the work item for the 2nd step in branch 2 - and the workflow is still in 'Error' status.
When the user finishes the 2nd work item of branch 2, the next work item for the 3rd step is again not generated. Instead I needed to restart the whole workflow again, and this time it generated the 3rd work item in branch 2.
To my understanding, when there is a fork with 02/02 outcomes necessary, these 2 branches should run in parallel - both creation and execution of work items - until they are joined. And at that point, if the whole fork step is successful, the workflow will proceed further with the steps below the fork. Shouldn't this be the case?
Has anybody had a similar issue? Please share your thoughts on this. I'd really appreciate it if somebody could clarify what's going on in the above case and how to fix it.
    Thank you in advance
    Regards,
    venu
    Edited by: Venugopal Jogi on Jun 10, 2009 5:58 PM

    Hi Arghadip,
    Thanks for your reply. You said
    "When a Workflow goes into error the processing should stop whether it is in same branch or in different.".
If this is correct, then as I explained earlier, when I restart the workflow without fixing the errored branch, the next work item should also not be created, right? But it is. I didn't fix the errored branch but simply restarted the workflow, and it then created the work item for the 2nd step in branch 2, the branch that is not in error.
How can this be, when branch 1 is still in ERROR status? So, can you please clarify whether something else might be going on here.
Also, fixing the issue is fine. But the SendMail step in branch 1 is informational only, and per the business requirement it should not hold up the approval process. If we don't process the invoice in time just because some informational mail errored out, that will not be a feasible solution for the client, right? That's the whole reason I am using a fork step, so that both branches proceed in parallel (independently).
If step 1 were a prerequisite for step 2, then it would make sense to fix the error in step 1 before proceeding with step 2.
Any other thoughts on this?
    Regards,
    venu

  • Qualified Lookup Table in Webdynpro Content Generator

I am trying to build a user interface with the Web Dynpro component generators from the business content deployed on the portal. The problem I face is with qualified lookup tables. From the main table, there is an embedded table which contains the qualified lookup table. I can press the "edit" button to open a pop-up window which shows the details of the qualified link. Unfortunately, on MDM 7.1 SP05, the pop-up window contains a table at the top of the screen that forces the entire window to span the width of the table, which goes on quite far because I have about 20 fields in my qualified lookup table. Below it I can enter the values for the individual fields, but it is difficult to use because you have to scroll all the way to the right to select drop-down values. Is there a way to remove the summary table from the pop-up window for qualified lookup tables? I haven't seen an option in the UI config.

    Apparently you can configure the fields that are shown in the pop-up window, but it also limits the fields that are displayed on the iView of the main table.

  • Getting Error in OWB while generating

Hi everyone,
I am getting the following error while trying to compile my mapping in OWB. Is there an experienced person who can guide me on how to get past this issue?
    Thanks in advance
    RB
    Cannot invoke method handleSelectionChanged in class oracle.wh.ui.tsmapping.GenerateEventHandler

    I'm getting a very similar error when attempting to delete an object from my mapping in OWB 10.2.0.3:
    Cannot invoke method handleObjectBeforeDeleted in class oracle.wh.ui.tsmapping.MappingGraph
I'm assuming this is some type of error in the OWB Java client itself, but I'm with the OP in hoping someone out there has an idea of where to start looking for answers regarding these types of errors. Thanks.

  • List of operators in OWB mapping using OMB Plus

    Hi,
I have a situation where I need to find out the list of operator names, and their types, used in a mapping. I tried to search but could not find it. Can anybody help me with this?
    Thanks in advance
    Yeswanth

Try this; in the same way you can retrieve all the other operator types.
    OMBRETRIEVE MAPPING 'MAP_NAME' GET TABLE OPERATORS
    OMBRETRIEVE MAPPING 'MAP_NAME' GET VIEW OPERATORS
    OMBRETRIEVE MAPPING 'MAP_NAME' GET TRANSFORMATION OPERATORS
    OMBRETRIEVE MAPPING 'MAP_NAME' GET TABLE_FUNCTION OPERATORS
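Since OMB Plus scripts are Tcl, you can also loop over several operator types in one pass. A minimal sketch, assuming an OMB Plus session already connected to the repository and a mapping named MAP_NAME (extend the type list as needed):
foreach optype {TABLE VIEW TRANSFORMATION TABLE_FUNCTION} {
    # OMB commands compose with normal Tcl substitution
    set ops [OMBRETRIEVE MAPPING 'MAP_NAME' GET $optype OPERATORS]
    if {[llength $ops] > 0} {
        puts "$optype operators: $ops"
    }
}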
    Cheers
    Nawneet
    Edited by: Nawneet_Aswal on Aug 30, 2010 8:43 AM

  • Signal Express 2009 causing NIMAX to generate an Error Exception message

I have a system that gives an error message when I try to open an instrument in NIMAX. The popup error message says "Unexpected Error", with a code of "MAXKnownException". This system is running Windows XP, and we had Signal Express 3.0 with NIMAX 4.3 running perfectly. We upgraded to Signal Express 2009 with NIMAX 4.6 and this problem occurs.
The error log message that is generated is
    "The niVISAui.mxx plug-in caused an exception in the CmxAggregateItemUI::GetToolbar function in the NIMax process."
    Thanks

    Hello,
    I am getting the same message stating "Measurement and Automation Explorer encountered an unexpected error since it was last run. For more information, visit ni.com/info and enter the Info Code MAXKnownException." I did so and matched all the known exceptions but could not really find anything useful.
I am using MAX version 4.7 and have LabVIEW 2010 and the latest version of NI-DAQ installed as well. I am using this on the Windows XP OS.
    I have also attached the error message screenshot in this post (Error1.jpg).
    In addition I have also added the error messages which I received on continuing to open the MAX nonetheless and then opening Devices and Interfaces and the error on trying to work through using LabVIEW Signal Express for DAQ. (Error.jpg) 
    Could you please let me know what this might be about and how can I resolve the errors?
    Thanks,
    Rohit Parakh
Attachments:
Error.JPG (31 KB)
Error1.JPG (42 KB)

  • Writing expression in OWB to generate SEQ no.?

I want to generate a sequence number for the records within each unique (Cust_no, R_visit_num) pair.
    For example
Cust_no   R_visit_num   Seq_vis_num
11        8             8.01
11        8             8.02
11        8             8.03
22        77            77.01
33        55            55.01
33        55            55.01
Let me know if you can help me generate Seq_vis_num for all my records.
    Thanks,
    Maddy

Cust_no   R_visit_num   Seq_vis_num
11        8             8.01
11        8             8.02
11        8             8.03
22        77            77.01
33        55            55.01
33        55            55.01
Can you explain in more detail how the Seq_vis_num is generated?
For the (11, 8) pair it increases as 8.01, 8.02, 8.03, but for the (33, 55) pair it stays at 55.01 for both rows.
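If the repeated 55.01 in the example is a typo and each row within a (Cust_no, R_visit_num) pair should get the next .01 increment, an analytic function would do it. A hedged sketch (the table name VISITS is an assumption, ROWID stands in for whatever column defines the visit order, and it assumes fewer than 100 rows per pair):

SELECT cust_no,
       r_visit_num,
       r_visit_num
         + ROW_NUMBER() OVER (PARTITION BY cust_no, r_visit_num
                              ORDER BY ROWID) / 100 AS seq_vis_num
FROM   visits;

In OWB this expression could live in an Expression operator between the source and the target.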

  • Need some inputs to work on OWB 9.0.4 mappings

The client added a new column to an existing source database table, and I have to propagate this change through to the target table.
How do I update the source and target tables? Which transformation do I need to use between source and target? We are using Oracle OWB 9.0.4.

    Hi,
1. Re-import the source table (it will then be in the OWB repository with the new column).
2. Modify the target table by adding the new column, and deploy it.
3. In the mapping, synchronize both objects (i.e. the source and the target table).
With regard to transformations, it depends what exactly you want to do: if you do not want to change anything, there is no need for any transformation - directly map the new source column to the new column of the target table.
How about migrating to OWB 10gR2? It is the easier and better option.
    Cheers,
    - Mohammed

  • OWB 10.2 - Pluggable Mappings Implementation: Black-Box, White-Box, ...

    Hi,
    We recently migrated from OWB 10.1 to version 10.2, and for the moment I'm investigating the new feature 'Pluggable Mappings'.
    When looking at the OWB documentation I can find 3 different data transformation logics: black-box, white-box and system-managed.
But when I go to the OWB 10.2 Design Client, I can't find this property anywhere on the pluggable mappings.
    Can anybody help?
    Thanks a lot in advance!
    Bart

    I haven't seen that in the OWB manual but perhaps it just refers to how you can treat a pluggable mapping.
    As a pluggable mapping is in effect a reusable component it can be tested independently.
    Black box testing usually refers to testing with no knowledge of the inner workings of the component i.e. you test by passing specific inputs and ensuring you get desired outputs, what the component did in the middle is irrelevant to your test.
    White box testing means that you test every path through the component so it requires a detailed understanding of the inner workings.
    Not sure about system managed, perhaps that just means a system test where you test the whole application.
Pluggable mappings are useful for splitting large or complex mappings into smaller, more manageable parts, or for creating a reusable component that you include in several mappings but only need to build and test once.
    Si
    PS. prior to 10.2.0.3 I encountered many bugs with pluggable mappings.

  • Re-generate toplink mappings used by db adapter

When changes are performed in the underlying relational model, these need to be reflected in the XSD and TopLink mapping files used in my BPEL process. How can I propagate (reverse-engineer) the changes from my database into the TopLink mapping files?
    For example new releases are being scheduled for our project and in these new release, changes are made to the db-model, which are non-intrusive because only attributes are added, existing attributes aren't deleted/removed.
    How can I propagate the relational db changes inside my bpel process using db adapters?
    Kind regards,
    Nathalie

You can update the defined TopLink mapping files by re-importing the tables in the database adapter used in the BPEL process.
However, this only updates the associated mapping file; it won't update the XSD that was created the first time you defined the database adapter.
    Has anyone found a solution for this?
    Kind regards,
    Nathalie

  • Dynamic Lookup in OWB 10.1g

Can we implement a dynamic lookup in OWB 10.1g?
I want to update the columns of the target table based on the previous values of those columns.
Suppose there is a record in the target table with previous-status and current-status columns.
The source table consists of 10 records which need to be processed one at a time, in a single batch. We need to compare the status of each source record with the current status of the target record. If the source contains the next higher status, then the current status of the target record needs to move to the previous-status column, and the new status coming from the source needs to overwrite the current status of the target record.
We have tried using the row-based option as well as setting the commit frequency to 1, but we are not able to get the required result.
How can we implement this in OWB 10.1g?

    OK, now what I would do in an odd case like this is to look at the desired FINAL result of a run rather than worry so much about the intermediate steps.
    Based on your statement of the status incrementing upward, and only upward, your logic can actually be distilled down to the following:
    At the end of the load, the current status for a given primary key is the maximum status, and the previous status will be the second highest status. All the intermediate status values are transitional status values that have no real bearing on the desired final result.
    So, let's try a simple prototype:
    --drop table mb_tmp_src; /* SOURCE TABLE */
    --drop table mb_tmp_tgt; /*TARGET TABLE */
    create table mb_tmp_src (pk number, val number);
    insert into mb_tmp_src (pk, val) values (1,1);
    insert into mb_tmp_src (pk, val) values (1,2);
    insert into mb_tmp_src (pk, val) values (1,3);
    insert into mb_tmp_src (pk, val) values (2,2);
    insert into mb_tmp_src (pk, val) values (2,3);
    insert into mb_tmp_src (pk, val) values (3,1);
    insert into mb_tmp_src (pk, val) values (4,1);
    insert into mb_tmp_src (pk, val) values (4,3);
    insert into mb_tmp_src (pk, val) values (4,4);
    insert into mb_tmp_src (pk, val) values (4,5);
    insert into mb_tmp_src (pk, val) values (4,6);
    insert into mb_tmp_src (pk, val) values (5,5);
    commit;
    create table mb_tmp_tgt (pk number, val number, prv_val number);
    insert into mb_tmp_tgt (pk, val, prv_val) values (2,1,null);
    insert into mb_tmp_tgt (pk, val, prv_val) values (5,4,2);
    commit;
    -- for PK=1 we will want a current status of 3, prev =2
    -- for PK=2 we will want a current status of 3, prev =2
    -- for PK=3 we will want a current status of 1, prev = null
    -- for PK=4 we will want a current status of 6, prev = 5
    -- for PK=5 we will want a current status of 5, prev = 4
Now, let's create a pure SQL query that gives us this result:
select pk, val, lastval
from (
  select pk,
         val,
         max(val) over (partition by pk) maxval,
         lag(val) over (partition by pk order by val) lastval
  from (
    select pk, val
    from mb_tmp_src mts
    union
    select pk, val
    from mb_tmp_tgt mtt
  )
)
where val = maxval
(NOTE: UNION, not UNION ALL, to avoid duplicates where target = source; you would also want a DISTINCT in the union if multiple instances of the same value can occur in the source table.)
    OK, now I'm not at my work right now, but you can see how unioning (SET operator) the target with the source, passing the union through an expression to get the analytics, and then through a filter to get the final rows before updating the target table will get you what you want. And the bonus is that you don't have to commit per row. If you can get OWB to generate this sort of statement, then it can go set-based.
EDIT: And if you can't figure out how to get OWB to generate this entirely within the mapping editor, then use it to create a view from the main subquery with the analytics, and then use that as the source in your mapping.
    If your problem was time-based where the code values could go up or down, then you would do pretty much the same thing except you want to grab the last change and have that become the current value in your dimension. The only time you would care about the intermediate values is if you were coding for a type 2 SCD, in which case you would need to track all the changes.
    Hope this helps.
    Mike
    Edited by: zeppo on Oct 25, 2008 10:46 AM

  • Can't generate Merge statement in OWB

    Hi All,
I'm facing a very strange problem in OWB. The version is 9.0.3. I have a target table which has its loading properties set to INSERT/UPDATE, and I have set one of its fields to be the one used for matching. However, when generating the code, OWB generates only an INSERT statement and not a MERGE as I'd expect. The table is very simple; it has 4 columns, 1 of which I will be using to match on. Their properties are:
      Load on Insert   Load on Update   Use for Matching
F1    Y                Y                N
F2    Y                Y                N
F3    Y                Y                N
F4    N                N                Y
The module in which this mapping was created is pointing to an Oracle 8i/9i DB. All other mappings, when created in a separate module within this project, CAN generate the MERGE... strange!
I have used the MERGE statement successfully on many other mappings, but at this client site I'm having problems.
    Any help would me most appreciated.
    Take care
    Mitesh
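For reference, this is the shape of statement one would expect OWB to generate for that configuration - a hedged sketch, not actual OWB output; the target and source names are invented, and the column handling follows the properties table above:

MERGE INTO target_table t                    -- hypothetical target name
USING (SELECT f1, f2, f3, f4
       FROM   source_table) s                -- hypothetical source
ON (t.f4 = s.f4)                             -- F4: Use for Matching = Y
WHEN MATCHED THEN
  UPDATE SET t.f1 = s.f1, t.f2 = s.f2, t.f3 = s.f3   -- Load on Update = Y
WHEN NOT MATCHED THEN
  INSERT (t.f1, t.f2, t.f3)                  -- F4: Load on Insert = N
  VALUES (s.f1, s.f2, s.f3);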

Thanks for your reply. My target table does not have any constraints on it, and that is why in the OWB properties I have selected the value "no constraints" for "Match by constraints". I have then set one of the 4 fields to:
    Insert:Use for loading = No
    Update:Use for loading = No
    Update:Use for matching = Yes
The other three columns are set to:
    Insert:Use for loading = Yes
    Update:Use for loading = Yes
    Update:Use for matching = No
The table's Loading Type is INSERT/UPDATE, but I'm still not getting the MERGE to work. I really am not sure if this is a Java bug or not. I have deleted the mapping and created it from scratch, but I get the same problem. The next port of call, I think, would be to create a new module within my project - but it's just strange how other modules in this same project work with MERGE.
    Thanks
    Mitesh
