MAXL SCRIPT TO EXECUTE THE DATA LOAD IN THE BACKGROUND

Hi,
I have a problem with a MaxL script: I don't know the command to execute a data load in the background. Does anyone know it? I would be very grateful for your help, because right now I have to load the data manually and then tick 'execute in background'.
Thanks for your help
Regards,

If the two processes are in no way dependent on each other, why not just use two separate MaxL scripts and run / schedule them separately?
If you really need to launch multiple MaxL operations against different cubes to run in the background from a single script you can only do this with a shell / command script, not 'natively' in MaxL. If you're on Windows and using CMD, for example, see the 'START' command.
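For example, a minimal sketch of such a wrapper script (the script names and paths are illustrative, and essmsh is assumed to be on the PATH):

@echo off
REM Kick off each MaxL script as its own background process.
REM "" is the (empty) window title that START expects; /b runs the
REM process without opening a new console window.
start "" /b essmsh C:\scripts\load_cube1.mxl
start "" /b essmsh C:\scripts\load_cube2.mxl

Both loads then run concurrently and the wrapper returns immediately.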
--EDIT: Crossed over with Sunil, think he pretty much covers it!

Similar Messages

  • How to delete the data loaded into MySQL target table using Scripts

    Hi Experts
    I created a job with a Validation transform. Records that pass the validation are loaded into a Pass table and records that fail are loaded into a Failed table.
    My requirement is: if any data was loaded into the Failed table, then I have to delete the data that was loaded into the Pass table, using a script.
    In the script I have written the code as
    sql('database','delete from <tablename>');
    but the SQL query execution is raising an exception.
    How can I delete the data loaded into the MySQL target table using a script?
    Please guide me on this error.
    Thanks in Advance
    PrasannaKumar

    Hi Dirk Venken
    I got the solution. My mistake was that the query was not valid MySQL syntax.
    Working query:
    sql('MySQL', 'truncate world.customer_salesfact_details')
    Erroneous query:
    sql('MySQL', 'delete table world.customer_salesfact_details')
    Thanks for your concern
    PrasannaKumar

  • Is it possible to execute the Data Load Utility from a button?

    I want to have a button on a form to load spreadsheet data each day.
    Can the XE Utility to load data be called from a button?
    How? It is not in any of the documentation I looked at.
    Jean

    No. The "data load button" is not something you can embed in your applications. However, there are solutions to your problem that have been developed by some very talented folks. Take a look at Vikas' examples here: http://htmldb.oracle.com/pls/otn/f?p=38131:1
    Although it wasn't designed with XE, it should work just fine.
    Earl

  • How to automate the data load process using data load file & task Scheduler

    Hi,
    I am automating the process of loading data into a Hyperion Planning application with the help of a Data_Load.bat file & the Task Scheduler.
    I have created the Data_Load.bat file, but I am unable to complete the rest of the process.
    Could you help me automate the data load process using the Data_load.bat file & Task Scheduler, and tell me what other files are required to achieve this?
    Thanks

    To follow up on your question: are you using MaxL scripts for the data load?
    If so, I have seen an issue with the batch file (ex: load_data.bat): if you do not put the full path to the MaxL script in the batch, then when it runs through the Task Scheduler the task will "work", but the log and/or error file will not be created. Meaning the batch claims it ran from the Task Scheduler although it didn't do what you needed it to.
    If you are using MaxL, use this as the batch:
    "essmsh C:\data\DataLoad.mxl"
    Or you can use the full path for the MaxL shell; either way works. The only reasons I can think of for the MaxL then not working are that the batch has not been updated to reflect any PATH changes, or that you need to update your environment variables so that the essmsh command works in a command prompt.

  • Spread the Data Loads in an SAP BW System

    Gurus,
    I want to spread out the data loads in our BW system. As a Basis person, how do I identify whether jobs are full loads or delta loads? Our goal is to distribute the load on the system evenly, as we currently see too many data loads starting and running around the same time. Can you suggest the right approach to achieve our goal?
    Thanks in advance
    Siva

    Hello Siva,
    As already mentioned, the solution is to include the different steps of the data flow (extraction, ODS activation, rollup, etc.)
    in process chains and schedule these chains to run at different times so that they do not place too much load on the system.
    If the problem is specific to the extraction step of the data loads, then I guess you may be seeing the resource problem on the
    source system side. If you don't have load distribution switched on in the RFC connection to the source system, it is
    possible to specify that the source system extraction jobs are executed on a particular application server;
    please see the information in the 'Solution' part of note 147104 and read it carefully.
    Best Regards,
    Des

  • How to find whether the data loaded from R/3 to BW is correct

    hi
    How can I verify that the data loaded from R/3 to BW is correct? I am not able to find which field in the query is connected to which field in R/3, i.e. where I am getting the data from in R/3. Is there any process to find which field and table the data is coming from? Please help.
    thanks in advance to u all

    Hi Veda ... the mapping between R/3 fields and BW InfoObjects takes place in the Transfer Rules. Other transformations could take place in the Update Rules.
    So you could proceed this way: look at the InfoProvider data model and see if the query performs any calculation (even with virtual key figures / characteristics). Then go back to the Update Rules and search for other calculations / transformations. Finally there are the Transfer Rules and, possibly, DataSource / extraction enhancements.
    As you can easily see, there are many points where you have to look ... it's quite complex work, but very useful.
    Once you have identified all the mappings / transformations, check whether the BW data matches R/3 (taking the calculations into account ...).
    Good job
    GFV

  • How we can automate the data loading from BI-BPC

    Dear Gurus,
    Thanks for watching this thread. My question is:
    How can we load the data from BI 7.0 to BPC? My environment is SAP BI 7.0 and BPC 7.5 MS version on SQL Server 2008.
    How can we automate the data loading from BI to BPC in the MS version? Is a manual flat-file load mandatory in the MS version?
    Thanks in Advance,
    Srinivasan.

    Here are some options:
    1) Use standard packages and schedule them:
        A) Open Hub the master data into a flat file / the BPC application server, and schedule the package 'Import Master Data from a Data File' plus the other relevant packages.
    2) Use custom tasks in custom packages (SSIS):
    From Microsoft SQL Server Business Intelligence Development Studio, open the Microsoft SSIS folder.
    Create a new package, or select an existing package to modify.
    Choose Task -> Register Custom Task.
    In the Task Location field, browse for the target .dll file.
    (Note: by default, the .dll files are stored in BPC/Websrvr/bin.)
    Enter a task description, select an appropriate icon, then click OK.
    Drag the icon to the designer window. Enter data as required.
    Save the package.

  • Is there any setting in ODS to accelerate the data loads to ODS?

    Someone mentioned that there is a setting somewhere in the ODS that makes the data load to this ODS much faster. I don't know if there is such a setting.
    Any idea?
    Thanks

    hi Kevin,
    I think you are looking for transaction RSCUSTA2, and
    Note 565725 - Optimizing the performance of ODS objects in BW 3.0B
    also check Note 670208 - Package size with delta extraction of ODS data, and Note 567747 - Composite note BW 3.x performance: Extraction & loading
    hope this helps.
    565725 - Optimizing the performance of ODS objects in BW 3.0B
    Solution
    To obtain a good load performance for ODS objects, we recommend that you note the following:
    1. Activating data in the ODS object
                   In the Implementation Guide in BW Customizing, you can make various settings under Business Information Warehouse -> General BW settings -> Settings for the ODS object that improve performance when you activate data in the ODS object.
    2. Creating SIDs
                   The creation of SIDs is time-consuming and may be avoided in the following cases:
    a) You should not set the BEx Reporting indicator if you are only using the ODS object as a data store. Otherwise, SIDs are created for all new characteristic values when this indicator is set.
    b) If you are using line items (for example, document number, time stamp and so on) as characteristics in the ODS object, you should mark these as 'Attribute only' in the characteristics maintenance.
                   SIDs are created at the same time if parallel activation is activated (see above). They are then created using the same number of parallel processes as those set for the activation. However: if you specify a server group or a special server in the Customizing, these specifications only apply to the activation and not to the creation of SIDs. The creation of SIDs runs on the application server on which the batch job is also running.
    3. DB partitioning on the table for active data (technical name:
                   The process of deleting data from the ODS object may be accelerated by partitioning at the database level. Select the characteristic by which you want deletion to occur as the partitioning criterion. For more details on partitioning database tables, see the database documentation (DBMS CD). Partitioning is supported by the following databases: Oracle, DB2/390, Informix.
    4. Indexing
                   Selection criteria should be used for queries on ODS objects. The existing primary index is used if the key fields are specified. As a result, the characteristic that is accessed most frequently should be left-justified. If the key fields are only partially specified in the selection criteria (recognizable in the SQL trace), the query runtime may be optimized by creating additional indexes. You can create these secondary indexes in the ODS object maintenance.
    5. Loading unique data records
                   If you only load unique data records (that is, data records with a unique key combination) into the ODS object, the load performance will improve if you set the 'Unique data record' indicator in the ODS object maintenance.

  • Schema name is not showing in the Data load

    Hi All,
    I am trying to load a CSV file using the Oracle APEX data load option, using a new table and file upload (.csv). On the Load Data page, the schema list does not show my current schema, because of which I cannot upload the CSV file.
    Can anyone please help on this ?
    I am using oracle apex 4.1.1
    Regards
    Rajendrakumar.P

    Raj,
    Did you export this application from another workspace perhaps? I have seen in the past that if you create a data load page based on schema A and then import that application and set it to parse as schema B, it will not work.
    The solution - although unsupported - is to simply alter the parse-as schema reference in the APEX export file. A more supported version would be to re-create the data load pages in the target application, so that it picks up on the proper parse-as schema.
    Thanks,
    - Scott -
    http://spendolini.blogspot.com
    http://www.enkitec.com

  • Automate the data load process using task scheduler

    Hi,
    I am automating the process of loading data into a Hyperion Planning application with the help of the Task Scheduler.
    I have created the Data_Load.bat file, but I am unable to complete the rest of the process.
    Could you help me automate the data load process using the Data_load.bat file & Task Scheduler?
    Thanks

    Thanks for your help.
    I have done the data loading using the Data_Load.bat file. The .bat file calls a .msh file; following your steps, the automatic data load now works.
    If you need this then please let me know.
    Thanks again.
    Edited by: 949936 on Oct 26, 2012 5:46 AM
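    For anyone finding this thread later, here is a rough sketch of the two pieces described above; the file names, paths, and schedule are made up:
    REM -- Data_Load.bat: run the MaxL script and keep a log --
    @echo off
    essmsh "C:\scripts\DataLoad.msh" > "C:\scripts\DataLoad.log" 2>&1
    REM -- run once to register the batch with the Windows Task Scheduler --
    schtasks /create /tn "HyperionDataLoad" /tr "C:\scripts\Data_Load.bat" /sc daily /st 02:00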

  • How to stop the data loads through process chains

    hi,
    I want to stop all the data loads to BI that run through process chains, where the loads happen periodically.
    Kindly suggest how I can proceed.

    Hi,
    Go to RSPC, find your process chain, and double-click on START. Then change the timings, i.e. set the start date to 01.01.9999. Save and activate the PC; it won't start till 01.01.9999.
    Thanks
    Reddy

  • How to identify the status of the data load

    Hi All,
    Here is my requirement:
    I have a process called etl_pf2 which loads data into staging tables. I have another process that does a partition exchange and moves the data from the staging tables to the online tables.
    In one procedure, I need to verify whether any data load into the staging tables is happening. If yes, the partition exchange should not occur; if no, the movement should happen.
    Any idea of a parameter which can be used in the procedure to check the status of the data load in the staging tables?
    Please help me.

    Thanks for the reply.
    But I think that the problem is with NQSServer, which crashes but does not disappear from the process list.
    I tried to search for obiee+USER_MEM_ARGS and nqsserver+USER_MEM_ARGS but found only this thread.
    How can JVM settings help nqsserver.exe to work properly?

  • Need to skip the data load for one of the sub-chains.

    Hi All,
    In our project we have one meta chain with many sub-chains inside it. Today we don't want to run one of the sub-chains. As we are on BW 7.0, the Skip option is not available. Can you please help with how to stop the data load for that particular sub-chain? Please suggest.
    Thanks.

    Hi Jalina,
    If this is a frequent request then you can create a custom ABAP program and use the simple logic below to skip/run the meta chain. You will also have to change the link between the sub-chains to "Always" instead of "Successful", so that the next chain proceeds regardless of whether the dependent chain was successful. If you are comfortable with events, you can also use an event to achieve this.
    Program logic: create a new table in which you maintain meaningful values (indicator/description). The program reads the data from this table and, based on the values it finds, finishes successfully or fails.
    Thanks
    Abhishek Shanbhogue

  • Is there any limit (size of files or rows) to the data loading definition?

    Hi ,
    I have a query regarding the data load pages.
    I have a file of 35KB (102 rows) which is not uploading ... it's not even creating collections ...
    But I uploaded files with more rows and they got uploaded fine.
    I split the data into two files and uploaded them, and they worked fine.
    My questions are:
    1. Can't I upload a file of more than 32K in size?
    2. Is there any workaround in APEX to accept a file whose size is more than 32K? It is difficult to split the data into small files and upload them.
    Apex version: 4.1.1.00.23
    Thanks in advance.
    Thanks and Regards,
    K.Phani

    Hi PhaniKavuri,
    I'm not sure what your problem is, but I have uploaded data files with many more than 102 rows and much bigger than 32K.
    What's in the file? Did you try to upload it on apex.oracle.com, to see if it's a local problem?
    Regards,
    Joni

  • Last Data packet for the data load.

    Hi All,
    How do I find the last data packet loaded to the target for a data load in SAP BI? In which table can I see it?
    Thank you,
    Adhvi

    Hi Adhvirao,
    When you are loading data from ECC, go to the monitor ---> Details ---> and check the data packets which failed. Double-click on a failed data packet; at the bottom you will be able to see the IDoc numbers for the ECC side that are pending in ECC and contain data. You can execute them manually to the BI side through BD87.
    Thanks,
    Deepak
