How to trigger or run an iBot automatically once a data load completes?

Hi Experts,
I have a requirement:
1) Every day we run one workflow (a data load into the data warehouse).
2) Afterwards, an iBot should run and deliver its content to users.
3) The workflows are scheduled in DAC to run every morning.
Requirement:
Once the data load has finished, the iBot should run and be sent to users dynamically (without a fixed schedule). If the workflow fails, the iBot should not be delivered.
How can I detect that the data load is complete and then trigger or run the iBot?
I am using OBIEE 10g, Informatica 8, and Windows XP.
Thanks in advance,
Raja

Hi,
Below are the details for automating the OBIEE Scheduler.
Create a batch file (or shell script) with the following command:
D:\OracleBI\server\Bin\saschinvoke -u Administrator/udrbiee007 -j 8
-u is the Scheduler username/password (the credentials you entered while configuring the Scheduler).
-j is the job ID; when you create an iBot it is assigned a job number, which you can look up in Job Manager.
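For illustration, here is a minimal batch sketch (the paths, credentials, job id, and log file are example values only) that can be called as the very last step of the ETL, e.g. from a DAC post-load task or an Informatica post-session success command if your setup allows it, so the iBot fires only after a successful load:
@echo off
rem Minimal sketch -- path, credentials, job id, and log file are example values.
rem Call this as the final step of the ETL (e.g. a DAC post-load task or an
rem Informatica post-session success command) so the iBot runs only after a good load.
D:\OracleBI\server\Bin\saschinvoke -u Administrator/udrbiee007 -j 8
if errorlevel 1 echo saschinvoke failed on %date% %time% >> D:\OracleBI\logs\ibot_trigger.log
Because the ETL only reaches this step when the workflow succeeds, the iBot is never delivered after a failed load and no fixed delivery time is needed.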
Refer to the thread below for more information:
iBot scheduling after ETL load
Alternatively, you can keep the iBot on a fixed schedule (for example, every day at 6:30 AM) with a conditional request: if the condition report returns data, the report is delivered at the scheduled time; if the condition is false, the report is not triggered. I have implemented this as well, although it works slightly differently.
Hope this helps.
Thanks,
Satya

Similar Messages

  • [Forum FAQ] How to configure a Data Driven Subscription that gets multi-value parameters from one column of a database table?

    Introduction
    In SQL Server Reporting Services, a data-driven subscription lets us map the fields returned by a query to specific delivery options and to report parameters.
    For a report with a parameter (such as YEAR) that allows multiple values, how can we pass a record like the one below when creating a data-driven subscription, so that the correct data (for years 2012, 2013, and 2014) is shown?
    EmailAddress              Parameter          Comment
    [email protected]       2012,2013,2014     NULL
    In this article, I will demonstrate how to configure a data-driven subscription that gets multi-value parameters from one column of a database table.
    Workaround
    Generally, if we pass the “Parameter” column directly to the report in step 5 when creating the data-driven subscription, the value “2012,2013,2014” is treated as a single value and Reporting Services filters on “2012,2013,2014”. Since no record has a YEAR field equal to “2012,2013,2014”, the subscription fails when it executes and the following error appears in the log (C:\Program Files\Microsoft SQL Server\MSRS10_50.MSSQLSERVER\Reporting Services\LogFiles):
    Microsoft.ReportingServices.Diagnostics.Utilities.InvalidReportParameterException: Default value or value provided for the report parameter 'Name' is not a valid value.
    This means the value is not in the parameter's available values list, so it is an invalid parameter value. Suppose we change the parameter records as below:
    EmailAddress                        Parameter             Comment
    [email protected]         2012                     NULL
    [email protected]         2013                     NULL
    [email protected]         2014                     NULL
    In this case, Reporting Services generates three reports for one data-driven subscription, each covering only one year, which obviously does not meet the requirement.
    Currently there is no direct solution for this issue. The workaround is to create two reports: one used by end users to view the report, and another used only for the data-driven subscription.
    On the report used for the data-driven subscription, uncheck the “Allow multiple values” option for the parameter and do not specify available values or default values for it. Then change the filter
    From
    Expression:[ParameterName]
    Operator   :In
    Value         :[@ParameterName]
    To
    Expression:[ParameterName]
    Operator   :In
    Value         :Split(Parameters!ParameterName.Value,",")
    With this change, we can pass a value like "2012,2013,2014" from the database to the data-driven subscription.
    Applies to
    Microsoft SQL Server 2005
    Microsoft SQL Server 2008
    Microsoft SQL Server 2008 R2
    Microsoft SQL Server 2012

    For every Auftrag there are multiple Position entries; the rest of the blocks do not seem to have any relation to each other.
    The code below shows how the internal table lt_str is built: its first 3 fields hold the Auftrag data and the next 3 fields hold the Position data. The structure is flat, assuming that every Position record belongs to the preceding Auftrag.
    Try out this snippet.
    " Upload the raw file into an internal table of strings
    DATA lt_data TYPE TABLE OF string.
    DATA lv_data TYPE string.
    CALL METHOD cl_gui_frontend_services=>gui_upload
      EXPORTING
        filename = 'C:\temp\test.txt'
      CHANGING
        data_tab = lt_data
      EXCEPTIONS
        OTHERS   = 19.
    CHECK sy-subrc EQ 0.
    " Flat target structure: a1-a3 hold the [Auftrag] fields, p1-p3 the [Position] fields
    TYPES:
    BEGIN OF ty_str,
      a1 TYPE string,
      a2 TYPE string,
      a3 TYPE string,
      p1 TYPE string,
      p2 TYPE string,
      p3 TYPE string,
    END OF ty_str.
    DATA: lt_str   TYPE TABLE OF ty_str,
          ls_str   TYPE ty_str,
          lv_block TYPE string,
          lv_flag  TYPE abap_bool.
    LOOP AT lt_data INTO lv_data.
      " A bracketed line starts a new block: remember it and skip the header line itself
      CASE lv_data.
        WHEN '[Version]' OR '[StdSatz]' OR '[Arbeitstag]' OR '[Pecunia]'
             OR '[Mita]' OR '[Kunde]' OR '[Auftrag]' OR '[Position]'.
          lv_block = lv_data.
          lv_flag = abap_false.
        WHEN OTHERS.
          lv_flag = abap_true.
      ENDCASE.
      CHECK lv_flag EQ abap_true.
      " Data lines: fill the Auftrag or Position fields of ls_str; every [Position]
      " line produces one row that carries the preceding [Auftrag] values along
      CASE lv_block.
        WHEN '[Auftrag]'.
          SPLIT lv_data AT ';' INTO ls_str-a1 ls_str-a2 ls_str-a3.
        WHEN '[Position]'.
          SPLIT lv_data AT ';' INTO ls_str-p1 ls_str-p2 ls_str-p3.
          APPEND ls_str TO lt_str.
      ENDCASE.
    ENDLOOP.

  • How to check whether the data loaded from R/3 to BW is correct

    Hi,
    How can I verify that the data loaded from R/3 to BW is correct? I am not able to find which field in the query is connected to which field in R/3, i.e. where the data is coming from in R/3. Is there any way to find out which field and table the data comes from? Please help.
    Thanks in advance to you all.

    Hi Veda ... the mapping between R/3 fields and BW InfoObjects takes place in the Transfer Rules; other transformations can take place in the Update Rules.
    So you could proceed this way: look at the InfoProvider data model and check whether the query performs any calculations (including virtual key figures / characteristics). Then go back to the Update Rules and look for further calculations / transformations. Finally there are the Transfer Rules and possibly DataSource / extraction enhancements.
    As you can see, there are many places you have to look at ... it is quite a complex job, but very useful.
    Once you have identified all the mappings / transformations, check whether the BW data matches R/3 (taking the calculations into account).
    Good luck,
    GFV

  • How to automate the data load process using data load file & task Scheduler

    Hi,
    I am automating the data load into a Hyperion Planning application using a Data_Load.bat file and Task Scheduler.
    I have created the Data_Load.bat file, but I am unable to complete the rest of the process.
    Could you help me automate the data load process using the Data_Load.bat file and Task Scheduler, or tell me what other files are required to achieve this?
    Thanks

    To follow up on your question: are you using MaxL scripts for the data load?
    If so, I have seen an issue with the batch file (e.g. load_data.bat): if you do not use the full path to the MaxL script inside the batch file, then when it runs through Task Scheduler the task appears to run but the log and/or error file is not created. In other words, the batch file claims it ran from Task Scheduler although it did not actually do what you needed.
    If you are using MaxL, use this in the batch file:
    essmsh C:\data\DataLoad.mxl
    You can also use the full path to essmsh; either way works. The only reason the MaxL step might still fail is if the batch file does not pick up the required PATH changes, or if you need to update your environment variables so that the essmsh command works from a command prompt.

  • How to configure an Essbase data source in OBIEE 11g on a Unix system?

    Hi,
    I am looking for documentation or a link on how to configure an Essbase data source in OBIEE 11g on a UNIX system.
    Thanks in advance

    Hi Fayaz,
    First you need the BI Administrator Client Tool, because you have to make changes in the RPD (repository), but the BI Administrator Client Tool is not available for UNIX.
    So you have to download the OBIEE 11.1.1.5 BI Administrator Client Tool and install it on a Windows platform.
    Regarding your original issue, I am also looking for this information myself.
    Regards,
    Sher

  • How to best reduce data load on a Mac due to duplicate Adobe files?

    I just got hired at a small business. I don't have a lot of experience with Macs, so I need to know some best practices here.
    I am working with CS3, Ai, Ps, Id, and later, Dw.
    It's a magazine publishing company. I am organizing it so each magazine has its own folder, and I want to have an "old editions" folder and a "working edition" folder. Within each, I want to break it down into "Ads this issue", "Links", and "stories".
    The Ads and Links are where I'm concerned. I want to have a copy of each ad's file within that folder, and a copy of all the other files it's linked to, so that if the original ads/images get moved, the links won't be disturbed.
    I'm wondering if there is a way to do this without bogging down the machine's HD with duplicates of really large files. The machine moves slow enough as it is.
    I've theorized that I could:
    A) keep the Main "Ads" folder along with the subfolders compressed, and the "old editions" compressed, and have a regular copy in the working folder only. This also works because the ads get edited for different editions sometimes.
    or
    B) Is there a way to do this with aliases? Being unfamiliar with aliases, or even shortcuts, because I haven't worked in an actual production environment yet, I don't know the functionality of linking an alias into an ID file. I read a couple of previous posts and the outlook isn't very good for it.
    or
    C) Just place a PDF (or whatever you guys think is the best quality preserving filetype) in with the magazine itself? Then each company could have its own ad folder with all the rest of the files...
    What do you all think? If you can even link me to a post that goes into further detail on which option you think is best, or if  you have a different solution, that would be wonderful. I am open to answers.
    I want to be sure to leave a cleaner computer/work environment than the last few punks who were here... That's my "best practice". Documentation and file organization got drilled into me at Uni.

    Sorry, I am overcaffeinated today, so this response is kind of long.
    "Data load?" Do you mean that:
    a) handling lots of large files is too much for your computer to handle, or
    b) simply having lots of large files on your hard drive (even if they are not currently in use) slows your computer down?
    Because b) is pretty much impossible, unless you are almost out of space on your system drive. Which can be ameliorated by... buying another drive.
    I once set up an install of InDesign on a Mac for a friend of mine who is chipping away at a big-data math PhD and who is sick to death of LaTeX. (Can't blame her, really.) Because we are both BSD nerds from way back, she wanted to do what you are suggesting - but instead of thinking about aliases, which you are correct to regard with dubiousness, she wanted to do it with hardlinks. Which worked, more or less. She liked it. Seemed like overkill to me.
    I suspect that this is because she is a highfalutin' academic whereas I am a production wonk in a business. I have to compare the cost of my time resolving a broken-link issue due to a complicated archiving scheme versus Just Buying Another Drive. Having clocked myself on solving problems induced by complicated archival schemes (or failure of overworked project managers to correctly follow the rules for same) I know that it doesn't take many hours of my work invested in combing through archives or rebuilding lost image files to equal Another Drive that I can go out and Just Buy.
    If you set up a reasonable method of file organization, and document it clearly, then you have already saved your organization (and your successors!) significant amounts of time and cash. Hard drive space is cheap. Don't spend your time figuring out a way to save a few terabytes here and there. In fact, what I'd suggest is to figure out how much you've already spent on this question: work out today's ratio of easily purchasable, reliable external hard drives to your unit of preferred currency, then figure out how many hours you've already put into the question.
    The only reason I can make this argument is that the price per unit of magnetic data storage has, with remarkably few exceptions, been constantly plummeting for decades, while the space requirements for documentation have been going up comparatively slowly. If you need a faster computer to do your job more efficiently, then price out an SSD for your OS, applications, and jobs-on-deck, and then show your higher-ups the math that proves the SSD pays for itself in your saved time within n weeks. My gut feeling these days is that, unless you are seriously underpaid, n is between two and six.
    Finally: I didn't really address your suggested possibilities. Procedure C (placing PDFs) usually works, but you do need to figure out how to make PDFs in such a way as to ensure they play nicely with your print method. Procedure A (compress stuff you don't need anymore) probably works okay, but I hope that you have some sort of command-line scripting ability to be able to quickly route stuff into and out of archives.

  • How to stop the data loads through process chains

    Hi,
    I want to stop all the periodic data loads into BI that run through process chains.
    Kindly suggest how I can proceed.

    Hi,
    Go to RSPC, find your process chain, and double-click on the START variant. Then change the timing, e.g. set the start date to 01.01.9999, then save and activate the process chain; it will not start until 01.01.9999.
    Thanks
    Reddy

  • How to improve large data loads?

    Hello Gurus,
    Large data loads at my client take long hours. I have tried the recommendations from various blogs and SAP sites for the control parameters of DTPs and InfoPackages. I need some viewpoints on which parameters can be checked on the Oracle and UNIX side. I would also like some insight on:
    1) How to clear log files
    2) How to clear any cached memory in SAP BW
    3) Control parameters in Oracle and UNIX that could bring improvements
    Thanks in advance.

    Hi,
    I think that work should be performed by the BASIS team.
    2) You can delete the cache memory using transaction RSRT: choose the cache monitor and then delete.
    Thanks & Regards,
    RaviChandra

  • How to delete the data loaded into a MySQL target table using scripts

    Hi Experts,
    I created a job with a validation transform. The records that pass validation are loaded into a Pass table and the records that fail are loaded into a Failed table.
    My requirement is that if data was loaded into the Failed table, I have to delete the data loaded into the Pass table using a script.
    In the script I have written the code as
    sql('database','delete from <tablename>');
    but the SQL query execution raises an exception.
    How can I delete the data loaded into the MySQL target table using scripts?
    Please guide me on this error.
    Thanks in advance,
    PrasannaKumar

    Hi Dirk Venken,
    I found the solution; my mistake was that the query was not valid MySQL syntax.
    Working query:
    sql('MySQL', 'truncate world.customer_salesfact_details')
    Query that raised the error:
    sql('MySQL', 'delete table world.customer_salesfact_details')
    Thanks for your concern,
    PrasannaKumar

  • How to configure Dynamic Data Connection for Business View

    Hi,
    How can we configure a Dynamic Data Connection so that the connection profile is saved somewhere and we do not have to enter it every time we refresh the report?
    Thanks and regards,
    nora

    Hi James,
    Thanks for the reply. It is solved now. For anybody interested: you can set the dynamic email address either i) by including it in the payload, in which case you use XPath to query the payload variable, or ii) by using the identity service, following these steps:
    1. Create a user in the application server's Enterprise Manager (or use an existing account); if you create a new one, assign the correct role.
    2. In either case, edit the user-properties.xml file (bpel/system/services/config) for the corresponding user and add an attribute called email.
    3. Bounce the server for the changes to take effect.
    4. In the notification properties, in the To address, use ids:getUserProperty and pass the attribute name.

  • How to configure HR Data mapping to LDAP

    Hi everyone,
    I configured the LDAP connection and it works, but I cannot map the attributes to LDAP. Is there any document describing which fields to map and what they mean?
    Thanks
    Haldun

    Dear Abhishek,
    Before you proceed to create jobs and positions, first check whether Personnel Actions (PA40) is configured. Then, after uploading the employee master data, you can proceed with the other processes.
    If PA40 is not configured, configure it in SPRO using PA / PM / Customizing Procedures / Infotype Menus etc. You can also maintain the user profile using transaction SU3.
    All the best.
    Rgds,
    Vikrant

  • How to use incremental data loads in OWB? Can CDC be used?

    Hi,
    I am using Oracle 10g Release 2 and OWB 10g Release 1.
    I want to know how I can implement incremental data loads in OWB.
    Does OWB have such a built-in feature, like Informatica does?
    Can I use the CDC concept for this? Is it viable and compatible with my environment?
    What other possible approaches are there?

    Hi,
    The current version of OWB does not provide functionality to use the CDC feature directly. You have to come up with your own strategy for incremental loading: for example, use the update dates if they are available on your source systems, or use CDC packages to pick up the changed data from your source systems.
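    As an illustration of the update-date approach, here is a minimal SQL sketch; the tables and columns (src_orders, etl_load_control, last_update_date, last_load_date) are hypothetical names, not OWB objects:
    -- Extract only rows changed since the last successful load
    SELECT s.*
      FROM src_orders s
     WHERE s.last_update_date >
           (SELECT c.last_load_date
              FROM etl_load_control c
             WHERE c.target_name = 'STG_ORDERS');

    -- After a successful load, advance the high-water mark to the newest
    -- timestamp actually extracted
    UPDATE etl_load_control c
       SET c.last_load_date = (SELECT MAX(s.last_update_date) FROM src_orders s)
     WHERE c.target_name = 'STG_ORDERS';
    A filter like this can be applied to the source in an OWB mapping, with the control table maintained in pre- and post-mapping steps.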
    rgds
    mahesh

  • Master data loads vs attr change runs

    Hi experts,
    I need to load master data daily and want to know the difference between attribute change runs and process chains for master data. Please explain with steps. I know how to create process chains.

    An attribute change run is nothing but adjusting the master data after it has been loaded, so that the SIDs are generated or adjusted and you do not run into problems when loading transaction data into the data targets.
    Whenever master data attributes or a hierarchy are loaded, an attribute change run needs to be performed, for the following reasons:
    1. When master data is changed/updated, it is loaded into the master data table as version "M" (modified). Only after an attribute change run is the master data activated, i.e. set to version "A".
    2. Master data attributes and hierarchies are also used in cube aggregates. To update the aggregates with the latest master data attribute values, an attribute change run needs to be performed.
    Re: Attribute Change Run
    Re: Attribute Change Run for Hierarchy

  • How to show a wait dialog while data loads, then hide it once the data has loaded

    Description: This code makes the SAP default wait dialog visible when the page loads and hides it after the data for the table is returned.
    1. Place this code within the head tags.
    <head>
    <SCRIPT Language="JavaScript">
    function show_wait_dialog () {
    document.body.style.cursor = "wait";
    document.getElementById("SAPBWProcessBoxSpan").style.setAttribute("display", "block", false);
    document.getElementById("SAPBWProcessBoxSpan").style.setAttribute("visibility", "visible", false);
    document.getElementById("SAPBWProcessBoxSpan").style.setAttribute("overflow", "auto", false);
    document.getElementById("SAPBWProcessBoxSpan").style.setAttribute("width", "255", false);
    document.getElementById("SAPBWProcessBoxSpan").style.setAttribute("left", document.body.offsetWidth/2-125, false); document.getElementById("SAPBWProcessBoxSpan").style.setAttribute("top",
    document.body.offsetHeight/2-38, false);
    function hide_wait_dialog () {
    document.body.style.cursor = "auto";
    document.getElementById("SAPBWProcessBoxSpan").style.setAttribute("display", "block", false);
    document.getElementById("SAPBWProcessBoxSpan").style.setAttribute("visibility", "hidden", false);
    document.getElementById("SAPBWProcessBoxSpan").style.setAttribute("overflow", "auto", false);
    document.getElementById("SAPBWProcessBoxSpan").style.setAttribute("width", "255", false);
    document.getElementById("SAPBWProcessBoxSpan").style.setAttribute("left", document.body.offsetWidth/2-125, false); document.getElementById("SAPBWProcessBoxSpan").style.setAttribute("top", document.body.offsetHeight/2-38, false);
    </script>
    </head>
    2. Place this code right after the <body> tag.
    <body>
    <!-- New Code -->
    <script>
    show_wait_dialog()
    </script>
    3. Lastly, place this code below right after the Table Item and before the </body> tag. (in this case GR1Table)
    <object>
             <param name="OWNER" value="SAP_BW"/>
             <param name="CMD" value="GET_ITEM"/>
             <param name="NAME" value="GR1Table"/>
             <param name="ITEM_CLASS" value="CL_RSR_WWW_ITEM_GRID"/>
             <param name="DATA_PROVIDER" value="DP"/>
             <param name="WIDTH" value="700"/>
             <param name="BLOCK_SIZE" value="3000"/>
             <param name="SHOW_PAGING_AREA_TOP" value="X"/>
             <param name="HELP_SERVICE" value="ZPRINTING"/>
             <param name="HELP_SERVICE_CLASS" value="Z_PRINTING_HELP_SERVICE"/>
             ITEM:            GR1Table
    </object></P>
    <P>
    <!—New code --&#61664;
    <script>
    hide_wait_dialog ()
    </script>
    </body>

    This is very helpful. Thanks.

  • How to create a data load file from Excel !!!

    Hi All,
    I'm new to HFM and would like to load data into an HFM application. I have an Excel file with all the data, but when I load it directly it throws an error saying "No section has been specified to determine if this is data, description or line item detail". How can I convert this Excel file into a proper format (.dat) file that HFM understands?

    There are several ways to get this data into HFM.
    1) FDM - best option if you have it
    2) Webforms/Data Grids
    3) HsSetValue formulas in Excel
    4) DAT file loads
    5) JVs, etc
    If you wish to use DAT files created via Excel, you will likely want to use Excel VBA macros to create your DAT file to load. We do this on occasion for special projects and it works quite well. What you can do is set up an Excel file with your data inputs to look however you want, then link your POV members and amounts to another tab (we commonly call this the Export tab and it is set up in an HFM-friendly format).
    Create a macro to write a DAT file to a specified location using data from the Export tab. The DAT file will need to be formatted as below. For a specific sample, you can extract data from your HFM app and see the format.
    !Data
    Scenario;Year;Period;View;Entity;Value;Account;ICP;Custom1;Custom2;Custom3;Custom4;Amount
    (repeat one such line for every data record)
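    For example, a single data line might look like the following (all member names here are invented for illustration only, not values from your application):
    Actual;2014;January;Periodic;EastSales;<Entity Currency>;NetSales;[ICP None];Product1;Region1;[None];[None];12345.67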
    Brush up on Replace, Merge, or Accumulate load options in the HFM Admin and User Guides, then upload your new DAT file.
