Data load approach

Consider me an XML novice who has an Oracle database to load with data...
I would like to define XML documents containing the data to load into tables within the database. The documents would contain tags describing the particular table and columns to load.
What is the best approach? Can anyone refer me to white papers that describe this scenario?

OK... so I have a file somewhere in an OS filesystem containing XML that describes the data I want to load. Once it's in the database, sure, I can use a table with an XMLType column to handle it, but what alternatives do I have to get that document's contents from the filesystem into the DB?
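For what it's worth, one client-side alternative (just a hedged sketch, not from this thread) is to read the file in a small script and bind it into the XMLType staging column; server-side routes such as XMLType over BFILENAME with a directory object, or SQL*Loader, are also commonly used. The table name, credentials, DSN and path below are hypothetical:

```python
# Minimal sketch: read an XML file and insert it into an XMLType column.
# Assumptions: cx_Oracle is installed and a staging table XML_STAGE(DOC XMLTYPE)
# already exists; the connection details and path are hypothetical.
import cx_Oracle

with open("/data/load/customers.xml", "r", encoding="utf-8") as f:
    xml_text = f.read()

conn = cx_Oracle.connect("scott", "tiger", "localhost/orclpdb")
cur = conn.cursor()
# Bind the document as a CLOB so large files are not limited by VARCHAR2 size.
cur.setinputsizes(doc=cx_Oracle.CLOB)
cur.execute("INSERT INTO xml_stage (doc) VALUES (XMLType(:doc))", doc=xml_text)
conn.commit()
conn.close()
```

Once the document sits in the XMLType column, it can be shredded into the target tables with XMLTABLE or similar queries.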

Similar Messages

  • Statistic on throughput of data loader utility

    Hi All
Can you share some statistics on the throughput of the data loader utility? As a reference point, consider 1 million records: how long would it take to import them?
I need these numbers to decide between using web services and the data loader utility. Any suggestion is appreciated.
    Thank you.

It really depends on the object and the amount of data already in there (both the number of fields you are mapping and how much data is in the table).
    For example…
One of my clients has over 1.2M accounts. It takes about 3 hours (multi-tenant) to INSERT 28k new customers, but when we were first doing it, it was under an hour. Because the bulk loader is limited on record count (most objects are limited to 30k records in the input file), you will need to break up your file accordingly.
But strangely, for the "Financial Account" object (not normally exposed in standard CRMOD), we can insert 30k records in about 30 minutes (and there are over 1M rows in that table). Part of this is probably due to the number of fields on the account and the address itself (remember, the address is a separate table in the underlying DB, even though it looks like there are two sets of address fields on the account).
The bulk loader and the wizard are roughly the same. However, the command line approach doesn't allow simultaneous INSERT/UPDATE (there are little tricks around this; it depends how you might prepare the extract files from your other system, e.g. an UPDATE file and an INSERT file, though some systems aren't able to extract this due to the way they are built).
Be very careful with some objects because of the way the indexes are built. For example, ASSET and CONTACT will both create duplicates even when you have an "External Unique Id". For those, we use web services; you aren't limited by file size there. I think (same client) we have over 800k ASSETS and 1.5M CONTACTS.
The ASSET load (via web service, which does both INSERT and UPDATE) can typically insert about 40k records in about 6 hours.
The CONTACT load (via web service, which does both INSERT and UPDATE) can typically insert about 40k records in about 10 hours.
Your best bet is to do some timings via the import wizard and extrapolate roughly linearly (see the quick calculation below) as the amount of data sitting in the tables grows.
My company (Hitachi Consulting) can help build these things (both automated bulk loads and web services) if you are interested, whether due to limited resource bandwidth or other factors.
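To make that linear extrapolation concrete, here is a rough back-of-the-envelope calculation using the figures quoted above (28k records in roughly 3 hours, scaled to the 1 million records asked about); the numbers are illustrative only and real throughput varies by object, field count and table size:

```python
# Back-of-the-envelope linear extrapolation from the figures quoted above.
# Treat the result only as a rough planning number.
sample_records = 28_000
sample_hours = 3.0
target_records = 1_000_000

rate_per_hour = sample_records / sample_hours      # ~9,333 records per hour
estimated_hours = target_records / rate_per_hour   # ~107 hours at that rate
print(f"Estimated import time: {estimated_hours:.0f} hours")
```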

  • Announcing 3 new Data Loader resources

    There are three new Data Loader resources available to customers and partners.
•     Command Line Basics for Oracle Data Loader On Demand (for Windows) - This two-page guide (PDF) shows command line functions specific to Data Loader.
    •     Writing a Properties File to Import Accounts - This 6-minute Webinar shows you how to write a properties file to import accounts using the Data Loader client. You'll also learn how to use the properties file to store parameters, and to use the command line to reference the properties file, thereby creating a reusable library of files to import or overwrite numerous record types.
    •     Writing a Batch File to Schedule a Contact Import - This 7-minute Webinar shows you how to write a batch file to schedule a contact import using the Data Loader client. You'll also learn how to reference the properties file.
    You can find these on the Data Import Resources page, on the Training and Support Center.
•     Click the Learn More tab > Popular Resources > What's New > Data Import Resources
    or
    •     Simply search for "data import resources".
    You can also find the Data Import Resources page on My Oracle Support (ID 1085694.1).

    Unfortunately, I don't believe that approach will work.
We use a similar mechanism for some loads (using the bulk loader instead of web services) for the objects that have a large quantity of daily records.
There is a technique (though messy) that works fine. Since Oracle does not allow the "queueing up" of objects of the same type (you have to wait for one "account" file to finish before you load the next "account" file), you can monitor the .LOG file for the SBL 0363 error, which means you can't submit another file yet (typically because one is already being processed).
By monitoring for this error code in the log, you can put your process to sleep and then try again after a preset amount of time (see the sketch at the end of this message).
We use this to allow for an UPDATE followed by an INSERT on the account, and then a similar technique so that "dependent" objects wait for the prime object to finish processing.
PS: Normal Windows .BAT scripts aren't sophisticated enough to handle this. I would recommend either Windows PowerShell or C/Korn/Bourne shell scripts on Unix.
    I hope that helps some.
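For illustration, here is a rough sketch of that poll-and-retry loop in Python (the post itself suggests PowerShell or a Unix shell). The loader command, properties file and log path are hypothetical; only the SBL 0363 error code comes from the post:

```python
# Rough sketch of the poll-and-retry idea described above.
# LOADER_CMD, LOG_FILE and the retry interval are hypothetical placeholders;
# verify the exact error string (e.g. "SBL-0363") in your own log files.
import subprocess
import time

LOADER_CMD = ["dataloader.bat", "-prop", "account_insert.properties"]  # hypothetical
LOG_FILE = r"C:\DataLoader\log\account_insert.log"                     # hypothetical
RETRY_SECONDS = 300

def log_shows_busy(path):
    """Return True if the last run logged the 'another load in progress' error."""
    try:
        with open(path, "r", errors="ignore") as f:
            return "SBL-0363" in f.read()
    except FileNotFoundError:
        return False

while True:
    subprocess.run(LOADER_CMD, check=False)   # attempt the submission
    if not log_shows_busy(LOG_FILE):
        break                                 # accepted; no "busy" error logged
    time.sleep(RETRY_SECONDS)                 # a load of this object type is still running
```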

  • Auto-kick off MaxL script after Oracle GL data load?

    Hi guys, this question will involve 2 different modules: Hyperion and Oracle GL.
My client's accounting department updates Oracle GL on a daily basis. My end-user client would like a script that automatically kicks off the existing MaxL script for our daily data load into Hyperion. Currently, the MaxL script is executed manually.
What's the best approach to build a connection so the two modules can communicate with each other? Can we use a timer to trigger the run? If so, how?

#1 External scheduler.
I've worked with AppWorx, and it can build a chain of dependent tasks. There are many other external schedulers, such as Tivoli.
#2 As Daniel pointed out, you can use the Windows scheduler.
For every successful GL load, add a file to a folder that is accessible to your Essbase task:
COPY Nul C:\Hyperion\Scripts\Trigger\GL_Load_Finished.txt
Then create another bat file that is scheduled to run every 5 or 10 minutes (it should start just after your GL load scheduled task).
Here is an example I have for a triggered Essbase job:
IF EXIST %BASE_DIR%\Trigger\Full_Build_Started.txt (
    Echo "Full Build started"
) else (
    IF EXIST %BASE_DIR%\Trigger\Custom_Build_Started.txt (
        Echo "Custom Build started"
    ) else (
        IF EXIST %BASE_DIR%\Trigger\Post_Build_Batch_Started.txt (
            Echo "Post Build started"
        ) else (
            IF EXIST %BASE_DIR%\Trigger\Start_Full_Build.txt (
                Echo "Trigger found, starting batch"
                REM Rename the trigger so a second scheduler run does not start the batch again
                MOVE %BASE_DIR%\Trigger\Start_Full_Build.txt %BASE_DIR%\Trigger\Full_Build_Started.txt
                call %BASE_DIR%\Scripts\Batch_Files\Monthly_Build_All_Cubes.bat
            ) else (
                IF EXIST %BASE_DIR%\Trigger\Start_Custom_Build.txt (
                    Echo "Trigger found, starting Custom batch"
                    MOVE %BASE_DIR%\Trigger\Start_Custom_Build.txt %BASE_DIR%\Trigger\Custom_Build_Started.txt
                    call %BASE_DIR%\Scripts\Batch_Files\Monthly_Build_All_Cubes_Custom.bat
                ) else (
                    IF EXIST %BASE_DIR%\Trigger\Start_Post_Build_Batch.txt (
                        Echo "Trigger found, starting Post Build batch"
                        MOVE %BASE_DIR%\Trigger\Start_Post_Build_Batch.txt %BASE_DIR%\Trigger\Post_Build_Batch_Started.txt
                        call %BASE_DIR%\Scripts\Batch_Files\Monthly_Post_Build_All_Cubes.bat
                    )
                )
            )
        )
    )
)
So if this bat file finds Start_Full_Build.txt in the trigger location, it renames it to Full_Build_Started.txt and calls the Full Build (likewise for the Custom and Post Build batches).
    Regards
    Celvin
    http://www.orahyplabs.com

  • Data loading on master data

    Hello Guys,
I am wondering what the starting point should be for loading master data into SAP from a legacy system. I have the latest dump from the legacy system, which contains plenty of information, and I also have the SAP data sheet from the migration team. Should I just send the SAP data sheet to the business to fill in? They don't know anything about SAP fields, so I thought I would let the users first identify the equipment that needs to be treated as a functional location (floc) in SAP and put it in a separate file, confirming the floc level as well. Is this the right way, or are there other standard procedures available for data loading? I know this varies from business to business, but if someone could describe the general approach, that would be great.
    Mahee

A simple (but standardizable) method would be the Excel upload: http://help.sap.com/erp2005_ehp_04/helpdata/En/70/93a417ecf411d296400000e82debf7/frameset.htm
    A.
    Edited by: Andreas Mann on Apr 9, 2010 10:08 AM

  • Spread the Data Loads in a SAP BW System

    Gurus,
I want to spread out the data loads in our BW system. As a Basis person, how do I identify whether jobs are full loads or delta loads? Our goal is to distribute the load on the system evenly, as we see too many data loads starting and running around the same time. Can you suggest the right approach to achieve this?
    Thanks in advance
    Siva

    Hello Siva,
As already mentioned, the solution is to include the different steps of the data flow (extraction, ODS activation, rollup, etc.)
in process chains and schedule these chains to run at different times so that they do not place too much load on the system.
If the problem is specific to the extraction step of the data loads, then I guess you may be seeing the resource problem on the
source system side. If you don't have load distribution switched on in the RFC connection to the source system, it is
possible to specify that the source system extraction jobs are executed on a particular application server;
please see the information in the 'Solution' part of note 147104 and read it carefully.
    Best Regards,
    Des

  • Master Data/transactional Data Loading Sequence

I am having trouble understanding the need to load master data prior to transactional data. If you load transactional data when there is no supporting master data, are the SIDs established when you subsequently load the master data, or will they not sync up?
I feel that in order to do a complete reload of new master data, I need to delete the data from the cubes, reload the master data, then reload the transactional data. However, I can't explain why I think this.
    Thanks,  Keith

A different approach is required for different data target scenarios. Below are just two scenarios out of many possibilities.
    Scenario A:
The data target is a DataStore object, with the indicator 'SIDs Generation upon Activation' set in the DSO maintenance.
    Using DTP for data loading.
    The following applies depending on the indicator 'No Update without Master Data' in DTP:
    - If the indicator is set, the system terminates activation if master data is missing and produces an error message.
    - If the indicator is not set, the system generates any missing SID values during activation.
    Scenario B:
The data target has a characteristic that is determined by transformation rules/update rules reading master data attributes.
If the attribute is not available during the data load to the data target, the system writes an initial value to the characteristic.
When you reload the master data with attributes later, you need to delete the previous transaction data load and reload it, so that the transformation can re-determine the attribute values written to the characteristics in the data target.
Hope this helps you understand.

  • Roll back data load

    Hi All,
I am using Essbase 7.1.5. Is there any way to roll back the database if it finds an error? For example, if the source file (flat file) contains 1000 records and Essbase finds an error at record 501, rejects that record, and aborts the load, is there a way to roll back the database to the state before the 500 records were loaded? If so, where should I make those settings? Please advise.
    Thanks in advance.
    Hari

A 6-hour data load? I've never heard of such a thing. It sounds to me like the data should be sorted for a more optimal data load.
As for the two-stage approach, this assumes that you are loading variances (deltas) against the existing values, rather than replacing the data itself. It further assumes that you can create an input-level cube to handle the conversion. You have your base data in one scenario, and your variances/deltas in another. You can reload your deltas (as absolutes) any time, and derive the absolutes from the sum of the base and variance values.
- Scenario
-- Base (+) <--- this gets recalculated when the deltas are considered "good"
-- Delta (+) <--- this gets loaded for changes only, and reset to zero when the base is recalculated
You export the modified data from this cube to your existing/consolidation cube. If you can "pre-load" the changes (outside of your calc window for the main cube), you can optimize the calculation window, although if it takes 6 hours to load the database, your calc window is probably shot no matter what you do.
However, if you mean that the load AND calc take 6 hours, and it takes a relatively short time to load the data alone, this can be a performance enhancement, because you can recalc this "input cube" in seconds from a new/complete load, relative to the "in place" reset-and-reload-changes approach in your existing cube.
You are simply redirecting your data into a staging table, essentially, and the staging table handles the conversion of variances to absolutes, so you can make the process more efficient overall (it is often more efficient to break the process up into smaller pieces).

  • Data load into SAP ECC from Non SAP system

    Hi Experts,
I am very new to BODS and I want to load historical data from a non-SAP source system into SAP R/3 tables like VBAK and VBAP using BODS. Can you please provide steps/documents or guidelines on how to achieve this?
    Regards,
    Monil

    Hi
In order to load into SAP, you have the following options:
1. Use IDocs. There are several standard IDocs in ECC for specific objects (MATMAS for materials, DEBMAS for customers, etc.). You can generate and send IDocs as messages to the SAP target using BODS.
2. Use LSMW programs to load into the SAP target. These programs require input files generated in specific layouts, which can be produced with BODS.
3. Direct input - The direct input method is to write ABAP programs targeting specific tables. This approach is very complex, so a lot of thought needs to go into it.
The OSS Notes supplied in previous messages are all excellent guidance to steer you in the right direction on the choice of load, etc.
However, the data load into SAP needs to be object-specific. Targeting merely the sales tables will not help, as the sales document data held in the VBAK and VBAP tables you mentioned relates to articles: these tables hold sales document data for already-created articles. So if you specifically want to target these tables, you may need to prepare an LSMW program for the purpose.
To answer your question on whether it is possible to load objects like materials, customers, vendors, etc. using BODS: yes, you can.
    Below is a standard list of IDocs that you can use for this purpose to load into SAP ECC system from a non SAP system.
    Customer Master - DEBMAS
    Article Master - ARTMAS
    Material Master - MATMAS
    Vendor Master - CREMAS
    Purchase Info Records (PIR) - INFREC
The list is endless.
In order to achieve this, you will need the functional design consultants to provide ETL mapping from the legacy data to the IDoc target schema and fields (better to have the technical table names and fields too). You should then prepare the data after putting it through the standard check-table validations for each object, along with any business-specific conversion rules and validations. Having prepared this data, you can either generate flat-file output for loading into SAP via LSMW programs, or generate IDoc messages to the target SAP system.
If you are going to post IDocs directly into the SAP target using BODS, you will need to create a partner profile for BODS to send IDocs and define the IDocs you need as inbound IDocs. There are a few more settings, such as RFC connectivity and authorizations, needed for BODS to successfully send IDocs into the SAP target.
Do let me know if you need more info on any specific queries or issues you encounter.
    kind regards
    Raghu

  • Flat File Data Loads to BI 7.0

    Hi Experts,
Please advise on the best approach to follow for the scenario below involving flat file data loads.
I will get the data from the user in Excel, with two worksheets.
My requirement is to place the file in a central location, available to the user and to BW, update it if any changes are necessary, and load the data (full load) into BW from the file whenever there are changes.
Please advise how to deal with this scenario of two worksheets in a central location.
    Thanks

The easiest thing would be to use a DSO with a change log to handle the changes to pass on to any cubes, and load a full every night.
Then let the change log worry about any changes to the workbook.
You have to be careful about the DSO keys, though, for this to work properly.
Now, to automate the loads: how are you planning to create the InfoPackage, as it will only read a CSV and not the binary XLS?
Well, it will read the binary XLS if you use DB Connect with a JDBC driver to read the XLS (that's next on my list of things to do, but if you are, as your user ID suggests, a "BW learner", that may be a bit complicated).
The only other thing to do is to write a macro that automatically creates the CSV file on the application server when the user quits the XLS (a scripted alternative is sketched below).
Or of course you can just dump the CSV each night, but then that is a manual task, and in systems I design I hate manual tasks, as staff go on holiday, people change jobs, and it's not really very SOX-compliant.
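As an alternative to an Excel macro, a small script on the central share can dump each worksheet to CSV before the InfoPackage runs. A rough sketch in Python follows; pandas plus an Excel engine, the paths and the workbook layout are assumptions, not from this thread:

```python
# Rough sketch: dump every worksheet of the user's workbook to a CSV that the
# InfoPackage can read. Assumes pandas with an Excel engine installed
# (xlrd for .xls, openpyxl for .xlsx); the paths are hypothetical.
import pandas as pd

workbook = r"\\central_share\uploads\plan_data.xls"    # hypothetical source location
sheets = pd.read_excel(workbook, sheet_name=None)      # dict of {sheet name: DataFrame}

for name, frame in sheets.items():
    # One CSV per worksheet, named after the sheet, written where BW can pick it up.
    frame.to_csv(rf"\\app_server\bw_load\{name}.csv", index=False)
```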

  • Data Load Speed

    Hi all.
We are starting the implementation of SAP at the company I work for, and I have been designated to prepare the data load from the legacy systems. I have already asked our consultants about data load speed, but they didn't really answer what I need.
Does anyone have statistics on data load speed per hour using tools like LSMW, CATT, eCATT, etc.?
I know the speed depends on what data I'm loading and also on the CPU speed, but any information is helpful to me.
    Thank you and best regards.

Hi Friedel,
Here are the complete details regarding the data transfer techniques.
Call Transaction:
1. Synchronous processing.
2. Synchronous and asynchronous database updates.
3. Data is transferred for an individual transaction each time the CALL TRANSACTION statement is executed.
4. No batch input log is generated.
5. No automatic error handling.
Session Method:
1. Asynchronous processing.
2. Synchronous database updates.
3. Data is transferred for multiple transactions.
4. A batch input log is generated.
5. Automatic error handling.
6. SAP's standard approach.
Direct Input Method:
1. Best suited for transferring large amounts of data.
2. No screens are processed.
3. The database is updated directly using standard function modules, e.g. see the program RFBIBL00.
LSMW:
1. A code-free tool that helps you transfer data into SAP.
2. Suited for one-time transfers only.
CALL DIALOG:
This approach is outdated; you should choose one of the techniques above.
    Also check the knowledge pool for more reference
    http://help.sap.com
    Cheers,
    Abdul Hakim

  • OIM Initial Data Load - Suggestion Please

    Hi,
    I have the following scenario :
1. My client currently has two systems: AD and an HR application.
2. The HR application has only employee information (first name, last name, employee number, etc.).
3. AD has both employee and contractor information.
4. The client wants the HR application to be the trusted source, but the IDM login ID of existing users should be the same as their AD sAMAccountName.
I am using OIM 9.0 and the 9041 connector pack. What would be the best way to do the initial data load in this case? Thanks in advance.

    Hi,
Can you tell me how you relate an employee in HR to the corresponding record in AD? Then I will be in a better position to explain how you can do it.
But even without this information, the following approach will meet your requirement:
1. Do the trusted recon from AD.
2. samAccountName will be mapped to the User ID field of the OIM profile.
3. Do the trusted recon with HR. The matching key should be the answer to my question above.
4. For the trusted recon with HR, remove the action "Create User" on the No Match Found event.
    Hope this will help.
    Regards
    Nitesh

  • Need to generate multiple error files with rule file names during parallel data load

    Hi,
Is there a way that MaxL could generate multiple error files during a parallel data load?
    import database AsoSamp.Sample data
      connect as TBC identified by 'password'
      using multiple rules_file 'rule1' , 'rule2'
      to load_buffer_block starting with buffer_id 100
      on error write to "error.txt";
I want to get error files like this: rule1.err, rule2.err (error files with the rule file name included). Is this possible in MaxL?
Also, when I hard-code the error file name as above, it gives me error file names error1.err and error2.err. Is there any solution for this?
    Thanks,
    DS

Are you saying that if you specify the error file as "error.txt", Essbase actually produces multiple error files and appends a number?
Tim.
Yes, it appends the number exactly as I said.
Out of interest, though, why do you want to do this? The load rules must be set up to select different 'chunks' of input data; is it impossible to tell which rule an error record came from if they are all in the same file?
I have about 6-7 rule files, with which the data is pulled from SQL and loaded into Essbase. I don't say it's impossible to track the error record.
Regardless, the only way I can think of to have total control over the error file name is to use the 'manual' parallel load approach. Set up a script to call multiple instances of MaxL, each performing a single load to a different buffer, then commit them all together. This gives you most of the parallel load benefit, albeit with more complex scripting (a sketch follows at the end of this message).
I had the same thought of calling multiple instances of MaxL from a shell script. Could you please elaborate on this process? What sort of complexity is involved in this approach? Has anyone tried it before?
    Thanks,
    DS
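For what it's worth, a rough sketch of that 'manual' parallel load orchestration, with Python used purely for illustration. The MaxL script names are hypothetical: each per-rule script would log in, load its own buffer and write its own error file (e.g. rule1.err), and the commit script would commit all buffers in a single statement.

```python
# Rough sketch of the 'manual' parallel load idea above: run one MaxL script
# per rule file in parallel, then commit all buffers together.
# Script names are hypothetical; credentials are assumed to be handled
# inside each MaxL script.
import subprocess

RULE_SCRIPTS = ["load_rule1.msh", "load_rule2.msh"]  # each uses its own buffer_id and .err file
COMMIT_SCRIPT = "commit_buffers.msh"                 # commits all buffers in one statement

# Launch the per-rule loads in parallel via the MaxL shell (essmsh).
procs = [subprocess.Popen(["essmsh", script]) for script in RULE_SCRIPTS]

# Wait for every load to finish before committing the buffers.
if all(p.wait() == 0 for p in procs):
    subprocess.run(["essmsh", COMMIT_SCRIPT], check=True)
else:
    print("One or more buffer loads failed; buffers not committed.")
```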

  • Master data loading in BPC NW 7.5

    Hi all,
I am trying to load master data from BW into BPC using the packages provided in BPC 7.5.
The master data InfoObject in BW is compounded with 3 other objects. Two of them don't have any master data tables; all they have is a text table (T table).
When I run the package, I get a warning saying compounded object1 is empty.
I think I am getting the warning because it doesn't have any master data tables (P table, M table and Z table).
In a different case I had a compounded InfoObject but didn't have any issue, because it had a master data table. This case is different since no master data tables exist.
Can someone help me on how to approach master data loading in a scenario where the compounded objects don't have any master data tables?
    Thanks,
    Any help is appreciated.
    KK.

    Hi,
The document is from before BPC had the new process chains to load master data. Now there are 2 new process chains: one to load master data attributes and texts, and the other to load hierarchies. These 2 packages support compounded InfoObjects too.
I was able to load profit center attributes/texts/hierarchy into BPC although it was compounded to Controlling Area. That wasn't an issue, because Controlling Area had master data defined, meaning it had a P table and an M table.
But for this new master data load I am trying, the compounded InfoObject doesn't have master data tables, and that's causing an issue. It says the master data table for 0DIVISION is empty.
I want to know if there is any workaround in BPC for this issue.
    Any help is appreciated.
    Thanks,
    KK

  • Master Data Loading in Process Chain

    Hello All
I want to design a process chain for master data loading. We have 8 modules in our project (SD, MM, FI, CO, PP, QM, PM and LO). How should I design the process chain? One start variant per module, with all texts, attributes and hierarchies loading in parallel, followed by one attribute change run. Is this approach right? Please guide me.
Also, are there module dependencies?
For example, should SD be loaded first and then MM? Please guide me.
    Regards,
    Ravi

Hi,
Welcome to SDN.
It is also possible to load like this:
load attributes
change run, with texts loaded in parallel
because the change run and texts don't disturb each other.
If you also want to load hierarchies, the following order is recommended:
load attributes
load hierarchies
change run
texts last, or in parallel with the change run.
As for loading the different areas: first tell me which architecture you are using, split or LSA. From your update you didn't mention the type of architecture, whether it is split, LSA or something else.
If you use a split architecture, load GD, DP and FC; COPA will start from the completion of the FC master data, so it is compulsory to load FC first. You can run GD, DP and FC simultaneously with no problem, but also cross-check the configuration of your server; later you can start PP and TP. HR is totally independent, so decide when you want to load it.
In the case of LSA (Layered Scalable Architecture), first load the master data, which is the same for all the process areas (like 0COSTCENTER, 0EMPLOYEE), then start the transaction data you feel is most important. To my knowledge, go for FC and then COPA, because reports are usually based mostly on FC and COPA.
Hope this helps.
Thanks = points, as per SDN. ;)
    Regards,
    Debjani..........
    Edited by: Debjani  Mukherjee on Oct 22, 2008 4:03 PM
