ODI Error Handling: IKM for Essbase (Data), Check Rejects at Commit Intervals

All,
I am trying to see if there is a way to handle errors in the ODI IKM SQL to Hyperion Essbase (Data) so that I can switch to a load-rule interface if there are rejects.
My thought is to check for rejects after every commit interval (we are currently using the default of 1000 records) and continue to the next set of 1000 records only if there are no rejects.
If there is even a way to abort the interface run, i.e. prevent it from switching to loading records line by line when a reject occurs, I can check a log and kick off an interface that uses an Essbase load rule to continue loading.
I don't know if it all sounds too hypothetical, but I want to see if anyone has ideas around this approach.
Please share any thoughts.
Thanks,
Anindyo

Thanks John, I was thinking along those lines.
But it would help if there were a way of collecting information on what was rejected without having to set up a new physical object to pull from the work repository or from the file.
We are trying to get away from any KM customization.
Do you know what we can check for here? Is there a way of refreshing a variable in case of a failure, which we can check in the next step?
Thanks,
Anindyo
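For reference, here is a minimal sketch of the reject check being discussed, written as a Jython step of the kind an ODI procedure or package step can run. The error-file path, the reject threshold, and the idea of branching on the step's OK/KO lines are assumptions for illustration, not IKM defaults, so adjust them to whatever your KM options actually produce.

import os

# Hypothetical location of the reject/error log produced by the Essbase data load;
# substitute the file (or error table) your IKM options are configured to write.
reject_file = "/odi/logs/essbase_data_load.err"
max_rejects = 0   # tolerate no rejects at all

reject_count = 0
if os.path.exists(reject_file):
    f = open(reject_file, "r")
    try:
        for line in f.readlines():
            if line.strip():
                reject_count = reject_count + 1
    finally:
        f.close()

if reject_count > max_rejects:
    # Failing this step lets the package branch on its KO line to the interface
    # that loads the remaining records through an Essbase load rule instead.
    raise Exception("%d rejected records found in %s" % (reject_count, reject_file))

Branching on the step's OK/KO lines keeps both the KM and the topology untouched, which seems closest to the stated goal of avoiding KM customization and new physical objects.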

Similar Messages

  • Time series functions are not working in OBIEE for ESSBASE data source

    Hi All,
    I am facing a problem in OBIEE: I am getting error messages for measure columns with time series functions (Ago, ToDate and PeriodRolling) in both the RPD and Answers.
    The error is "Target database does not support Ago operation".
    But I am aware that OBIEE supports time series functions for Essbase data sources.
    We are using Hyperion 9.3.1 as the data source and OBIEE 11.1.1.5.0 as the reporting tool.
    Appreciate your help.
    Thanks,
    Aravind

    Hi,
    This is because time series functions are not supported for fragmented content; see this from Oracle Support:
    The error occurs due to the fact the fragmented data sources are used on some Time series measures. Time series measures (i.e. AGO) are not supported on fragmented data sources.
    Confirmation is documented in the following guide - Creating and Administering the Business Model and Mapping Layer in an Oracle BI Repository > Process of Creating and Administering Dimensions
    Ago or ToDate functionality is not supported on fragmented logical table sources. For more information, refer to “About Time Series Conversion Functions” on page 197.
    Regards,
    Gianluca

  • Exit for delivery Date check in Sales order [Via EDI]

    Hi Friends,
    We have a requirement in which, if a sales order is created via EDI with an invalid delivery date, I have to set a flag in a Z table. I have tried searching for exits and debugging the code, but was unable to find the place where SAP checks the delivery date's validity. Please help me with possible exits/BAdIs/enhancement points where I can set this flag.
    Thanks & Regards
    Gaurav Deep

    @Vinod Thanks for your reply, the delivery date is there in EXIT_SAPLVEDA_009.
    But how will I check whether the delivery date is valid or not? If you go to VA01 and enter an invalid delivery date, a warning message is issued; if that warning message is issued here, I want to set my flag. This might save me from coding redundant delivery date check logic. Can someone please help me with how to track whether this warning message is issued?
    Thanks & Regards
    Gaurav Deep

  • Site Wide Error Handling works for Hidden iframe?

    I have implemented Site Wide Error Handling within my application. It is working great, except that if there is an error within a hidden iframe, the Error Page is not visible to the end user. How can I bring the Error Page up to the parent so that it can be seen by the end user?
    Does anyone have a solution for this?
    Thanks

    How about this: place the following in the <head> tags of your error page:
    <script>
    // If the error page has been loaded inside an iframe, redirect the parent window to it
    function makeParent(){
        if (window.location.href != parent.window.location.href) {
            window.parent.location.href = window.location.href;
        }
    }
    </script>
    Then add this to your body tag:
    <body onLoad="makeParent();">
    Hope this helps

  • Userexit or Badi for vl01n date check

    Hi experts,
    While creating a delivery in VL01N, the BLDATE, WADAT and WADAT_IST dates should be the current date;
    otherwise I won't allow that delivery to be goods-issued (PGI). Kindly suggest a user exit or BAdI for this requirement.
    Thanks and Regards
    G.Vendhan

    Hi,
    Use these steps to find Badi easily.
    1. Go to the TCode SE24 and enter CL_EXITHANDLER as object type.
    2. In 'Display' mode, go to 'Methods' tab.
    3. Double click the method 'Get Instance' to display its source code.
    4. Set a breakpoint on 'CALL METHOD cl_exithandler => get_class_name_by_interface'.
    5. Then run your transaction.
    6. The screen will stop at this method.
    7. Check the value of parameter 'EXIT_NAME'. It will show you the BADI for that transaction.
    Hope this helps u.
    thanks.

  • Error Handling for DSO

    Hi all,
    Is it not possible to give error handling options for DSO in Delta loads?
    I have created a delta infopackage to load data from Order Line Item (2LIS_11_VAITM) into a DSO.
    But error handling gives only the option 'No update, no reporting'.
    I want to give the option 'Valid Records update, reporting possible (request green)' option.
    Please let me know whether any setting needs to be done to make this option available.
    Thanks in advance.
    Meera

    Check this help text; it is fairly self-explanatory (pressing F1 is very informative most of the time):
    Switched off
    If an error occurs, the error is reported as a package error in the DTP monitor. The error is not assigned to the data record. The cross-reference tables for determining the data record numbers are not built; this results in faster processing.
    The incorrect records are not written to the error stack since the request is terminated and has to be updated again in its entirety.
    No update, no reporting (default)
    If errors occur, the system terminates the update of the entire data package. The request is not released for reporting. The incorrect record is highlighted so that the error can be assigned to the data record.
    The incorrect records are not written to the error stack since the request is terminated and has to be updated again in its entirety.
    Update valid records, no reporting (request red)
    This option allows you to update valid data. This data is only released for reporting after the administrator checks the incorrect records that are not updated and manually releases the request (by a QM action, that is, setting the overall status on the Status tab page in the monitor).
    The incorrect records are written to a separate error stack in which the records are edited and can be updated manually using an error DTP.
    Update valid records, reporting possible
    Valid records can be reported immediately. Automatic follow-up actions, such as adjusting the aggregates, are also carried out.
    The incorrect records are written to a separate error stack in which the records are edited and can be updated manually using an error DTP.

  • Job for alert data reorganisation is not release or running -ERROR BPM

    Hi
    I am trying to set up BPM in Solution Manager EHP1.
    I have created a business process and also added the required job. When I try to generate and activate the BPM I get the error
    "Job for alert data reorganisation is not release or running"
    Kindly suggest what I need to do to resolve this issue, which job needs to be scheduled, all the details, and any relevant document for the same.
    I have a project deadline to meet, so kindly help me as soon as possible.
    Thanks
    Michael

    Hi Michael
    Did it show a red alert?
    Do you still get the same alert if you generate and activate
    the BPMon session?
    If yes, there seems to be some problem with the job BPM_ALERT_REORG.
    Maybe it is scheduled but not released
    (do you see the scheduled job in SM37?).
    First, according to your description, you are on EHP1.
    In that case, as described in note 521820, do you already have the
    required notes, especially those related to ST-SER, like the following?
    1273127
    1275225
    1298310
    1319473
    1332197
    1355132
    1390111
    I quickly checked the description of each note but could not find a
    correction related to the alert reorganization job. Still, it is better to
    implement the above notes if you have not done so yet.
    And if you still have the problem (try generation and activation again)
    even with the above notes, I recommend that you create an SAP customer message
    for component "SV-SMG-MON-BPM".
    Best Regards
    Keiji

  • ESS error for Personal data and Family members

    Hi,
    I am having an error in ESS for Personal Data and Family Members.
    The error message is:
    "A critical error has occurred. Processing of the service had to be terminated. Unsaved data has been lost.
    Please contact your system administrator.
    An exception occurred that was not caught; error key RFC_ERROR_SYSTEM_FAILURE."
    Thanks
    Sasikanth

    Check the table V_T7XSSPERSUBTYP.
    Also check via transaction HRUSER whether the user has an employee assigned.
    Check the log using transaction ST22.
    You should also run a trace using transaction ST01.
    Hope this helps you.
    Regards

  • Data error handling documents

    Dear Friends,
    Can anyone send error handling documents for data loads?
    Regards,
    Pavan

    Pavan,
    "Load failed: full load -- master data or transactional data -- data in PSA -- directly update from PSA by deleting the request in the data target."
    Yes, you can update the data from the PSA if the data is correct.
    "Full load -- master data or transactional data -- no PSA -- schedule the InfoPackage again."
    Yes, when you have no data backup in BW (PSA) you need to extract the data from the source system again. In the case of delta, if the load fails you can repeat the delta load, which extracts the data from the source system.
    "Delta load -- master data or transactional data -- data in PSA -- directly update from PSA by deleting the request in the data target."
    "Delta load -- master data or transactional data -- no PSA -- schedule the InfoPackage again."
    If the delta load fails you can repeat the delta, whether or not the data is present in the PSA.
    "Also, when do we go for a repair full request and a repeat delta?"
    You can go for a repair full request only when you have data in the PSA.

  • QUESTION:  Essbase data extraction and Installing ODI Agent??

    For extracting data from Essbase cubes, ODI has "LKM Hyperion Essbase DATA to SQL".
    We can use (1) a report script, (2) an MDX query, or (3) a calc script.
    For data extraction using a calc script, the ODI Agent must be running on the same server as the Essbase server.
    Does anyone know whether the ODI Agent is needed on the Essbase machine if we use the MDX query method for data extraction?
    We would like to avoid installing the ODI Agent for Essbase data extraction.

    Thanks John.
    One related question: to move data from one Essbase cube to another Essbase cube using an ODI interface, can we do it efficiently through an MDX query?
    We want to avoid replicated partitioning or calc scripts, if possible.
    BTW... Your ODI/Hyperion blog is a bible for us.

  • Error handling for distributed cache synchronization

    Hello,
    Can somebody explain to me how the error handling works for the distributed cache synchronization ?
    Say I have four nodes of a weblogic cluster and 4 different sessions on each one of those nodes.
    On node A an update happens on object B. This update is going to be propagated to all the other nodes B, C, D. But for some reason the connection between node A and node B is lost.
    In the following XML:
    <cache-synchronization-manager>
        <clustering-service>...</clustering-service>
        <should-remove-connection-on-error>true</should-remove-connection-on-error>
    </cache-synchronization-manager>
    If I set this to true, does this mean that TopLink will stop sending updates from node A to node B? I presume all of this is transparent and that, in order to handle any errors, I do not have to write any code to capture this kind of error.
    Is that correct?
    Aswin.

    This "should-remove-connection-on-error" option mainly applies to RMI or RMI_IIOP cache synchronization. If you use JMS for cache synchronization, then connectivity and error handling is provided by the JMS service.
    For RMI, when this is set to true (which is the default) if a communication exception occurs in sending the cache synchronization to a server, that server will be removed and no longer synchronized with. The assumption is that the server has gone down, and when it comes back up it will rejoin the cluster and reconnect to this server and resume synchronization. Since it will have an empty cache when it starts back up, it will not have missed anything.
    You do not have to perform any error handling, however if you wish to handle cache synchronization errors you can use a TopLink Session ExceptionHandler. Any cache synchronization errors will be sent to the session's exception handler and allow it to handle the error or be notified of the error. Any errors will also be logged to the TopLink session's log.

  • Error Handling for ORA-02291

    Dear all,
    please help me: what exception name must I use
    (like NO_DATA_FOUND or DUP_VAL_ON_INDEX) for error handling
    (exception) for ORA-02291: integrity constraint (....) violated -
    parent key not found?
    Thank you.
    Regards
    Teguh Santoso

    There is no predefined exception name for ORA-02291. Find out the error
    number Oracle returns for it (-2291), declare your own exception and
    associate it with that number via PRAGMA EXCEPTION_INIT, and in the
    front end (e.g. Forms) raise your own user-defined error message when the
    user encounters it....
    Hope this suffices.
    Santhosh

  • Frequent "Searching for Movie Data in File ..." errors

    OK, I searched the forums for this but didn't find anything, so don't nobody jump down my throat please ...
    I very often find myself, when trying to export a file through QuickTime conversion, confronted with the error message "Searching for Movie Data in File [XXX - 0000000008]" which FCP invariably cannot find. I go to look in my capture drive myself, and sure enough, no file exists of that name. Without the file in question, FCP won't run the conversion. I get around it by exporting directly to QT, opening that file in QT Pro and re-exporting, but what the he11 is FCP doing in the first place? Why is it making all these files then losing/destroying them?
    Help very much appreciated. Running FCP 4.5 HD; rest of specs in profile.
    Thanks!

    Without being able to see your project's history, I can only surmise you have created some copies of some clips and then tossed the originals. Or used reference movies that point to renamed clips. Or you've renamed some clips but used a differently-named copy of it. You may have copied a clip from one project and placed into another and then later discarded the source media.
    "I get around it by exporting directly to QT, opening that file in QT Pro and re-exporting, but what the he11 is FCP doing in the first place? Why is it making all these files then losing/destroying them?"
    That's because the original source media is always required for Compressor's operations. You see copies and renders in the timeline, but Compressor will always recreate fresh media, so it looks for the sources.
    This problem is almost always a user error, though it is not easily debugged from where I'm sitting. It is caused by a few subtle misperceptions about nesting and copying clips from one project to another without taking the source media with them.
    Sorry not to be more definitive, I'd need to have watched you work for several hours to know what's going on with this.
    bogiesan

  • Idoc error handling for error status 51

    hi all,
    I am trying to set up inbound idoc error handling. For that I have activated task TS00008068.
    This task gets triggered for all error statuses except error status 51.
    Could you please help me understand why it doesn't trigger the task for error status 51?
    KR,
    Vithal

    Standard task 8068 is triggered for IDOC syntax, partner profile issues etc. The error message 51 is probably coming in from an inbound function module.
    Firstly there isn't a single standard task that will be triggered for all errors. Each IDOC error, depending on the events triggered, will fire a predefined standard task. For example, you're trying to process an order inbound and the IDOC is successfully created and passed on to application. Then the inbound FM will attempt to create an order and probably fail at determining a sales order or a ship to etc due to a missing setup. Then a different event is triggered which is attached to standard task 8046. That is different from 8068.
    Here is a small subset of message types vs. workflow tasks:
    Message      Workflow Task
    SHPCON       TS20000051
    INVOIC       TS00008057
    ORDERS       TS00008046
    SDPICK       TS00008031
    WMMBXY       TS00008009
    So you have to choose the IDoc for which you want to activate a workflow task, then look at the event that is associated with it, and make sure the event is linked to the message type and the routing is active.

  • Error handling in simulatenous loops

    I am trying to design a good error handling system for a project I am working on, but I have run into a "design" problem. I thought it would be good to ask for some guidance here before I sit down and start create the error handling system.
    I have more than one subVI started from one mainVI, each subVI with an individual while loop running (they all stop when I press the same stop button on the mainVI). Each while loop continuously retrieves information from various serial devices. Each VISA call etc. can thus of course generate errors. I only want one error dialog box on my mainVI front panel displaying any error that happens.
    How would I design this in a good way? As I see it, I would have to use the error dialog box in the mainVI as a global/functional global. Each subVI would then write to this global error dialog box. This could however cause race conditions where only the latest error gets displayed even if earlier errors happened. Appreciate some good advice here.

    First and foremost I would avoid using the sequence structure. LabVIEW is a data flow language and you should take advantage of that rather than forcing it to be a sequential language. Take a look at the various examples that ship with LabVIEW. You will definitely want to check out the examples for state machines and the producer/consumer architectures. Your current code will not meet your need of continually monitoring for errors, since your "Error" queue is not in a parallel loop task.
    I have attached a very quick example of a producer/consumer architecture with an error processing loop. There are no real code details, but this is a simple example of an approach to take for an application. This, along with the above examples, should give you a decent starting point.
    Mark Yedinak
    "Does anyone know where the love of God goes when the waves turn the minutes to hours?"
    Wreck of the Edmund Fitzgerald - Gordon Lightfoot
    Attachments:
    Simple Application Architecture (8-6).vi (13 KB)
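    If it helps to see the structure outside LabVIEW, here is a rough Python sketch of the producer/consumer error-handling pattern described above: every acquisition loop is a producer that pushes errors onto one shared queue, and a single handler loop is the only consumer that reports them, so an earlier error can never be overwritten by a later one. The device names and the simulated error are placeholders.

    import queue
    import threading
    import time

    error_queue = queue.Queue()      # shared by every acquisition loop
    stop_event = threading.Event()   # one stop control, like the main VI's stop button

    def acquisition_loop(device_name):
        # Producer: one loop per serial device; errors are queued, never displayed here.
        while not stop_event.is_set():
            try:
                raise IOError("simulated VISA error on %s" % device_name)
            except Exception as exc:
                error_queue.put((device_name, exc))
            time.sleep(0.5)

    def error_handler_loop():
        # Consumer: the single place errors are reported, so none are lost to a race.
        while not stop_event.is_set() or not error_queue.empty():
            try:
                device, exc = error_queue.get(timeout=0.2)
            except queue.Empty:
                continue
            print("Error from %s: %s" % (device, exc))

    workers = [threading.Thread(target=acquisition_loop, args=(name,))
               for name in ("COM1", "COM2", "COM3")]
    handler = threading.Thread(target=error_handler_loop)
    for t in workers + [handler]:
        t.start()
    time.sleep(2)        # let the loops run briefly
    stop_event.set()     # single stop signal for every loop
    for t in workers + [handler]:
        t.join()

    A queued message handler is also how LabVIEW's own producer/consumer templates keep error reporting in one place, which is exactly what the functional-global approach above struggles with.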
