Commit in a large load

Hi:
I have many control files that load about 17 million records in total. How can I estimate the load time?
When should I commit? Should I do it every "x" number of records, or every "x" number of gigabytes/megabytes, and how can I do it?
What do you recommend? Where can I find information about this?
Thanks to all for your help.
Alex

Do you have more information about this?
The documentation says:
You can use the ROWS parameter to specify the frequency of the commit points. If the ROWS parameter is not specified, the entire load is rolled back.
It says it's possible, but it doesn't tell me how... can you give an example?
I also don't understand whether I need the Oracle Partitioning option for partitioned tables. What happens if I don't have partitioning?
Thanks for your help
Alex
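
For reference, a minimal sketch of how the ROWS parameter is typically used (the file, table, and column names here are hypothetical; this only illustrates the parameter, not your actual load):

    OPTIONS (ROWS=50000)
    LOAD DATA
    INFILE 'customers.dat'
    APPEND
    INTO TABLE customers
    FIELDS TERMINATED BY ','
    (cust_id, cust_name, cust_city)

    sqlldr userid=scott/tiger control=customers.ctl log=customers.log

The same value can also be given on the command line as rows=50000. With a conventional-path load this commits roughly every 50,000 rows (the real interval is also capped by the bind array size, BINDSIZE); with DIRECT=TRUE, ROWS instead controls how often a data save is performed. To estimate the total load time, one common approach is to load a representative sample (say 1% of the 17 million records) with the same control file and extrapolate from the elapsed time reported in the SQL*Loader log.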

Similar Messages

  • Dynamically loading a class that is part of a larger loaded package

    I am dynamically loading a class that is part of a large package, much of which is loaded at startup. The code directly references protected variables in the parts of the package that are loaded by the default class loader. Attempting to access these protected variables gives an access error when the class is dynamically loaded that doesn't occur when the class is loaded at startup.
    Is there a way to make this work?

    To answer my own question -- no
    A reference from http://access1.sun.com/techarticles/DR-article.html says:
    The use of Dynamic Class Reloading can introduce problems when classes that are being dynamically reloaded make calls to non-public methods of helper classes that are in the same package. Such calls are likely to cause a java.lang.IllegalAccessError to be thrown. This is because a class that is dynamically reloaded is loaded by a different classloader than the one used to load the helper classes. Classes that appear to be in the same package are effectively in different packages when loaded by different classloaders. If class com.myapp.MyServlet is loaded by one classloader and an instance of it tries to call a non-public method of an instance of class com.myapp.Helper loaded by a different classloader, the call will fail because the two classes are in different packages.
    So not being able to access these protected variables in this scenario is simply the way it works.

  • Using large loaded jpg to heighten clarity/crispness?

    Hi There,
    I have a gallery on my flash site.  The way I'm working this is as follows:
    layer 1 that runs through all frames:  var loader1:Loader = new Loader();
    when a button is clicked, the playhead goes to a certain frame and does this:
    loader1.unloadAndStop();
    holder2.addChild(loader1);
    loader1.x = 0;
    loader1.y = 0;
    loader1.alpha = 0;
    loader1.load(new URLRequest("damoimages/med.jpg"));
    As a side note, I am also using Greensock's liquid stage to scale this holder up to a certain set size.
    Now I've noticed that when I blow up the screen, my loaded jpg loses some of its crispness, which is to be expected.  I've also noticed this is remedied if I load a larger image than I need: I get a nice crisp image.  For example, I have been loading an image that is 960 x 640 into my 'holder' which is 960 x 640.  But when I load a larger image (say 1200 x 800 or larger), the image is nice and crisp.  The only problem is it fills up the whole screen until I begin to resize my browser; then it snaps into place and is scaled correctly while staying crisp.  Is there any way to tell this loaded image not to come in at its big size, but to scale down appropriately without me scaling down my browser window?
    Phew!
    Thanks, as always.
    R

    You need to wait until the image has loaded before you can try to resize it in any way, though you could scale the holder ahead of time if you know the dimensions you will be working with and are consistent with them.

  • InfoSpoke Delta - Avoiding First Large Load

    Morning all,
    In BW 3.5 we are about to implement a new InfoSpoke in the production client, running off a large existing ODS. The extraction method is Delta, but we don't care about the existing data, just the daily deltas going forward. Is there any way to avoid processing the tens of millions of records in the first delta run in order to get up to date?
    Cheers,
    Rob.

    Hi Rob,
    It is tough to set the delta pointer based on the change log PSA request ID. You would need to maintain the entries in the RSBSPOKEDELTA table; if you can write a program that inserts all the change log PSA request IDs with the proper status set for every entry, then it will work for you.
    But this requires a lot of effort.
    When you use a delta InfoSpoke it reads the change log requests directly, not the normal (active) requests, so the amount of data you extract depends entirely on the number of change log requests present in your ODS.
    My previous reply should answer your case.
    Assigning points is the way to say thanks.
    Thanks, and hope this helps.
    Let us know if you need any more help.

  • Can you specify movieclips as cache as bitmap, inside one large loaded movieclip?

    At this point I'm not sure if I have to individually set cacheAsBitmap/cacheAsBitmapMatrix to true on each display object that needs it and load them separately,
    or if I can load the movie clip that they are in first, specify their properties later, and then add it to the stage when it is all loaded.
    Can anyone tell me the best way to do this, or maybe point me to some tutorials to check out?
    The AS for the loading is:
    var PLoading:Bitmap = new Bitmap(new PlaneLoading());
    addChild(PLoading); // loading screen
    var PLevel:PlaneLevel = new PlaneLevel();
    this.addEventListener(Event.ENTER_FRAME, loading); // recurring listener, fires once per frame as set by the FPS
    function loading(e:Event):void
    {
        var total:Number = this.stage.loaderInfo.bytesTotal;
        var loaded:Number = this.stage.loaderInfo.bytesLoaded;
        var percent:Number = loaded / total * 100;
        trace(percent);
        if (total == loaded) // checks whether all bytes have been loaded
        {
            this.removeEventListener(Event.ENTER_FRAME, loading);
            onComplete();
        }
    }
    function onComplete():void // removes the loading picture and adds the level movieclip
    {
        removeChild(PLoading);
        addChild(PLevel);
        PLevel.x = 475.15;
        PLevel.y = 315.35;
    }

    It is the same whether you do it individually for each MovieClip or for one MovieClip that contains them all. What you should know is that a movie clip with cacheAsBitmap = true consumes resources whenever it is drawn, and if a movie clip is an animation, each frame will be redrawn.
    Look at this:
    http://help.adobe.com/en_US/as3/mobile/WS4bebcd66a74275c36c11f3d612431904db9-7ffc.html

  • Loading large number of coordinates

    Hello,
    We are trying to load the SDO_ORDINATES attribute of the SDO_GEOMETRY column in a spatial feature table with a large number of coordinates (4,844), and this is not the largest linestring we will need to build. We have defined a procedure as:
    CREATE OR REPLACE PROCEDURE INSERT_GPS(GEOM MDSYS.SDO_GEOMETRY) IS
    BEGIN
      INSERT INTO GPS(long_lat_elv) VALUES (GEOM);
      COMMIT;
    END;
    and are loading the SDO_GEOMETRY column as follows:
    DECLARE
      myGEOM MDSYS.SDO_GEOMETRY := MDSYS.SDO_GEOMETRY(3002, NULL, NULL,
        MDSYS.SDO_ELEM_INFO_ARRAY(1,2,1),
        MDSYS.SDO_ORDINATE_ARRAY(
          -86.929501,35.751834,32.907,
          -86.929434,35.751906,32.913,
          -- ... (4,844 coordinate triplets in total) ...
          87.270367,35.447903,.854,
          87.270273,35.447956,.86));
    BEGIN
      INSERT_GPS(myGEOM);
    END;
    This generates the following error:
    PLS-00123: Program too large.
    How can we load this many coordinates, using this workflow? We are using 8.1.6.
    Thanks
    Dave

    Originally posted by Tilen Skraba:
    "Why don't you use SQL*Loader to load the data? We have geometries which have more data than yours and SQL*Loader didn't have any problems.
    I believe PLS-00123 means that your package is too big.
    What else do you have in your package?
    Try to break the package into smaller ones.
    Tilen"
    Tilen,
    We are trying to use the LRS package, and the measure value, which is part of the coordinate (x,y,m), has to be generated by a stored procedure for each x,y pair.
    Thanks
    Dave
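    A sketch of one way around PLS-00123 that combines the two ideas above: load the raw points into a staging table first (for example with SQL*Loader, as Tilen suggests), then build the ordinate array in a loop instead of one huge literal. The gps_points_stage table, its seq/x/y columns, and the calc_measure function standing in for the measure-generating stored procedure are all hypothetical:

        DECLARE
          ords  MDSYS.SDO_ORDINATE_ARRAY := MDSYS.SDO_ORDINATE_ARRAY();
          geom  MDSYS.SDO_GEOMETRY;
        BEGIN
          FOR p IN (SELECT x, y FROM gps_points_stage ORDER BY seq) LOOP
            ords.EXTEND(3);
            ords(ords.COUNT - 2) := p.x;
            ords(ords.COUNT - 1) := p.y;
            ords(ords.COUNT)     := calc_measure(p.x, p.y);  -- hypothetical measure function
          END LOOP;
          geom := MDSYS.SDO_GEOMETRY(3002, NULL, NULL,       -- same SDO_GTYPE as in the post
                                     MDSYS.SDO_ELEM_INFO_ARRAY(1, 2, 1), ords);
          INSERT_GPS(geom);
        END;
        /

    Because the coordinates come from a table rather than source code, the PL/SQL block stays small no matter how long the linestring is.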

  • Unexpected query results during large data loads from BCS into BI Cube

    Gurus
    We have had an issue occur twice in the last few months, and it's causing our business partners real pain. When they send a large load of data from BCS to the real-time BI cube, the queries show unexpected results. We have the queries enabled to report on yellow requests and that works fine; it seems the issue occurs while the system is closing one request and opening the next. Has anyone encountered this issue, and if so, how did you fix it?
    Alex

    Hi Alex,
    There is not enough information to judge. BI queries in BCS may use different structures of real-time, basic, and virtual cubes and MultiProviders:
    http://help.sap.com/erp2005_ehp_02/helpdata/en/0d/eac080c1ce4476974c6bb75eddc8d2/frameset.htm
    In your case, most likely, you have a bad design of the reporting structures.

  • SQL*Loader: a column value in the data file contains a comma (,)

    Hi Friends,
    I am getting an issue while loading a CSV file into the database.
    A column in the data file contains a comma. How do I load such data?
    For example, a record in the data file is:
    453,1,452,N,5/18/2006,1,"FOREIGN, NON US$ CORPORATE",,,310
    Here "FOREIGN, NON US$ CORPORATE" is a column and contains a , in the value.
    I have specified optionally enclosed with " also.. but still not working
    Here is my control file:
    options (errors=100)
    load data
    infile 'TAX_LOT_DIM_1.csv'
    badfile 'TAX_LOT_DIM_1.bad'
    replace
    into table TAX_LOT_DIM
    fields terminated by ',' optionally enclosed by '"'
    trailing nullcols
    (
    TAX_LOT_DIM_ID,
    TAX_LOT_NBR,
    TAX_LOT_ODS_ID,
    RESTRICTION_IND,
    LAST_UPDATE_DTM,
    TRAN_LOT_NBR integer,
    MGR_GRP_CD optionally enclosed by '"',
    RESTRICTION_AMT "TO_NUMBER(:RESTRICTION_AMT,'99999999999999999999.999999999999')",
    RESTRICTION_INFO,
    SRC_MGR_GRP_CD
    )
    The problem is with the MGR_GRP_CD column in the control file.
    Please reply ASAP.
    Regards,
    Kishore

    Thanks for the response.
    Actually my control file is like this, with some conversion functions:
    replace
    into table TAX_LOT_DIM
    fields terminated by ',' optionally enclosed by '"'
    trailing nullcols
    (
    TAX_LOT_DIM_ID "TO_NUMBER(:TAX_LOT_DIM_ID,'999999999999999.99999999')",
    TAX_LOT_NBR,
    TAX_LOT_ODS_ID "to_number(:TAX_LOT_ODS_ID,'999999999999999.999999')",
    RESTRICTION_IND,
    LAST_UPDATE_DTM "to_date(:LAST_UPDATE_DTM,'mm/dd/yyyy')",
    TRAN_LOT_NBR integer, --"TO_NUMBER(:TRAN_LOT_NBR,'999999999999999.99999999999999999')",
    MGR_GRP_CD char optionally enclosed by '"',
    RESTRICTION_AMT "TO_NUMBER(:RESTRICTION_AMT,'99999999999999999999.999999999999')",
    RESTRICTION_INFO,
    SRC_MGR_GRP_CD
    )
    For CHAR columns, even if I don't give any datatype, I think it will work.
    And hopefully the problem is not with this.
    Thanks,
    Kishore
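    One thing worth checking, offered as a suggestion rather than something confirmed in this thread: in SQL*Loader a bare INTEGER datatype is a binary (native) type that reads a fixed number of bytes instead of a comma-terminated character field, which can shift the parsing of every field that follows it, including MGR_GRP_CD. For character data in a CSV, the numeric column would normally be declared as INTEGER EXTERNAL (or simply left to default to CHAR), roughly like this:

        TRAN_LOT_NBR integer external,
        MGR_GRP_CD char optionally enclosed by '"',

    With the fields parsed as character data, the global FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' clause should handle the embedded comma in "FOREIGN, NON US$ CORPORATE" on its own.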

  • SQL LOADER "LOGICAL COMMIT"

    Hello ,
    After each successful execution of SQL*Loader, it implicitly commits the underlying transactions.
    Why does this implicit commit happen, and can I disable this auto-commit feature of SQL*Loader?
    Thanks

    Hi,
    We cannot stop SQL*Loader from committing altogether.
    But by using the ROWS parameter we can control how often it commits.
    For example, with rows=200 the records will be committed after every 200 records.
    Thanks
    USR0072
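    For reference, a minimal sketch of that usage (the user and file names are hypothetical). Note that for a conventional-path load the effective commit interval is also limited by the bind array size, and with direct=true the ROWS parameter controls data saves rather than commits:

        sqlldr userid=scott/tiger control=load_data.ctl log=load_data.log rows=200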

  • Bulk loading BLOBs using PL/SQL - is it possible?

    Hi -
    Does anyone have a good reference article or example of how I can bulk load BLOBs (videos, images, audio, office docs/pdf) into the database using PL/SQL?
    Every example I've ever seen in PL/SQL for loading BLOBs does a commit after each file is loaded... which doesn't seem very scalable.
    Can we pass an array of BLOBs from the application into PL/SQL, loop through that array, and then issue a commit after the loop terminates?
    Any advice or help is appreciated. Thanks
    LJ

    It is easy enough to modify the example to commit every N files. If you are loading large amounts of media, I think that you will find that the time to load the media is far greater than the time spent in SQL statements doing inserts or retrieves. Thus, I would not expect to see any significant benefit to changing the example to use PL/SQL collection types in order to do bulk row operations.
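    A minimal sketch of that "commit every N files" approach (everything named here is an assumption for illustration: the media_tab and media_staging tables, the media_seq sequence, and the MEDIA_DIR directory object are hypothetical):

        DECLARE
          src_bfile     BFILE;
          dest_blob     BLOB;
          commit_every  CONSTANT PLS_INTEGER := 100;  -- commit interval (assumption)
          n             PLS_INTEGER := 0;
        BEGIN
          FOR r IN (SELECT file_name FROM media_staging) LOOP
            src_bfile := BFILENAME('MEDIA_DIR', r.file_name);
            INSERT INTO media_tab (id, content)
              VALUES (media_seq.NEXTVAL, EMPTY_BLOB())
              RETURNING content INTO dest_blob;
            DBMS_LOB.FILEOPEN(src_bfile, DBMS_LOB.FILE_READONLY);
            DBMS_LOB.LOADFROMFILE(dest_blob, src_bfile, DBMS_LOB.GETLENGTH(src_bfile));
            DBMS_LOB.FILECLOSE(src_bfile);
            n := n + 1;
            IF MOD(n, commit_every) = 0 THEN
              COMMIT;  -- commit every N files rather than after each one
            END IF;
          END LOOP;
          COMMIT;  -- pick up the remainder
        END;
        /

    As noted above, this only reduces commit overhead; most of the elapsed time will still be spent moving the media bytes themselves.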
    If your goal is high performance bulk load of binary content then I would suggest that you look to use Sqlldr. A PL/SQL program loading from BFILEs is limited to loading files that are accessible from the database server file system. Sqlldr can do this but it can also load data from a remote client. Sqlldr has parameters to control batching of operations.
    See section 7.3 of the Oracle Multimedia DICOM Developer's Guide for the example Loading DICOM Content Using the SQL*Loader Utility. You will need to adapt this example to the other Multimedia objects (ORDImage, ORDAudio .. etc) but the basic concepts are the same.
    Once the binary content is loaded into the database, you will need to write a program to loop over the new content and initialize the Multimedia objects (extract attributes). The example in 7.3 contains a sample program that does this for the ORDDicom object.

  • 'Cannot begin data load. Analytic Server Error(1042006): Network Error

    Hi...
    I got an error message when I uploaded data from a source file into Planning via IKM SQL to Essbase (Data).
    Some records failed with the following error:
    'Cannot begin data load. Analytic Server Error(1042006): Network Error [10061]: Unable To Connect To [localhost:32774]. The client timed out waiting to connect to the Essbase Agent using TCP/IP. Check your network connections. Also please make sure that Server and Port values are correct'
    What is this error about? Is the commit interval too large? The current value is 1000.

    Hi,
    You could try the following
    1. From the Start menu, click Run.
    2. Type regedit and then click OK.
    3. In the Registry Editor window, navigate to the following key:
    HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Tcpip\Parameters
    4. From the Edit menu, click New, DWORD Value.
    The new value appears in the list of parameters.
    5. Type MaxUserPort and then press Enter.
    Double-click MaxUserPort.
    6. In the Edit DWORD Value window, do the following:
    * Click Decimal.
    * Enter 65534.
    * Click OK.
    7. From the Edit menu, click New, DWORD Value.
    The new value appears in the list of parameters.
    8. Type TcpTimedWaitDelay and then press Enter.
    9. Double-click TcpTimedWaitDelay.
    10. In the Edit DWORD Value window, do the following:
    * Click Decimal.
    * Type 300
    * Click OK.
    11. Close the Registry Editor window.
    12. Reboot the Essbase server.
    Let us know how it goes.
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • OWB 10gR2: Commit everything or nothing

    Hello.
    We have a mapping which must run in row-based mode because it deletes records. There is one source and multiple targets. One flow goes into a log table (LOG_LT). The other flow goes into 4 targets (MOD_LT for all targets), depending on the mode given in the records (Update/Insert/Merge/Delete). The log table receives part of the primary key of the incoming rows, which is used by a replication application.
    It is important that there is nothing in the 4 targets which is not stored (the part of the PK described above) in the log table; otherwise information will be lost for the replication. Therefore we set the target load order so that LOG_LT is loaded first.
    a) We started with autocommit and a commit frequency higher than the number of rows in the source. If the data flow had an error (a PK violation, for example), only that flow and not the log flow was rolled back (table-based rollback).
    b) We used correlated autocommit. The log flow and the data flow are OK, but the mapping was committing too early (not at the value set as the commit frequency).
    Why?
    c) The next idea was to set the mapping to manual commit and to commit in a post-mapping procedure if everything was OK. The mapping can be deployed, but when we start it we get the warning:
    RTC-5380: Manual commit control configured on maping xy is ignored.
    Why?
    Does anybody have a good practice for a row-based mapping (with multiple targets and one source) to commit everything or nothing?
    Thanks for your help
    Stephan

    Hello Stephan,
    For row-based mappings with a single source and multiple targets, these are the different ways of loading the targets.
    1. Correlated Commit = true and row-based: suppose there are 50 rows in the source and 4 target tables. If the mapping runs in this mode and encounters an error while loading row 10 into, say, target table TGT_3, that row is rolled back from all four targets and processing continues, finally loading 49 records into all 4 tables.
    When you run in row-based mode, please also check the bulk processing mode. If it is set to true, the bulk size overrides the commit frequency you set. It is therefore advisable to make the bulk size and the commit frequency equal, or to set bulk processing mode to false and then set the commit frequency.
    HTH
    -AP

  • Locking tables when loading data

    Is there any way to lock tables when I insert data with SQL*Loader, or does Oracle do it for me automatically?
    How can I do this?
    Thanks a lot for your help

    Is there any problem if, in the middle of my load (and commits), a user updates or queries data?
    The only problem that I see is that you may run short of undo space (rollback segment space) if your undo space is limited and a user is running a long SELECT query, for example; but this would only trigger ORA-1555 for the SELECT query, or (less likely, since you have several COMMITs) an ORA-16XX error because the load transaction would not find enough undo space.
    Data is not visible to other sessions unless the session which is loading it commits. That's the way Oracle handles the read committed isolation level for transactions.
    See http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96524/c21cnsis.htm#2689
    Or what happens if, when I want to insert data, someone has the table busy?
    You will get blocked if you try to insert data that has the same primary key as a row being inserted by a concurrent transaction.
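    A small illustration of both points (the load_demo table is hypothetical):

        -- hypothetical demo table
        CREATE TABLE load_demo (id NUMBER PRIMARY KEY, val VARCHAR2(30));

        -- Session 1 (the "loader"): inserts but has not committed yet
        INSERT INTO load_demo VALUES (42, 'row from the load');

        -- Session 2: read committed isolation, so only committed data is visible
        SELECT COUNT(*) FROM load_demo;          -- returns 0 until session 1 commits

        -- Session 2 blocks only on a conflicting key
        INSERT INTO load_demo VALUES (42, 'x');  -- waits until session 1 commits or rolls back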

  • Can only load a limited number of songs at a time

    This is a brand new issue on my iPod video that has me worried the hard drive is going. But... I've never had ANY problems with playback, watching videos, photos, anything. I load all music and videos manually, with the only sync being for certain photo albums. The problem is that if I load a small amount of stuff - anything less than 100 songs is fine - there are no problems. But if I try to load a lot of stuff at once, it starts loading it in order, then inevitably fails at some point. I get the following error window:
    "attempting to copy to the disk "iPod" failed. the disk could not be read from or written to."
    In the past, I've loaded 1,000 files or more at one time with no problems.
    Because it only happens on large loads, I'm wondering if it's an overheating issue. But I've reset the iPod, reformatted the hard drive, gone into disk mode and run Disk Utility (which found no problems at all), and run through diagnostic mode on the iPod (which found no issues). I'm at a loss here. It's a big pain in the backside to have to load over 3,000 songs a hundred at a time.
    whaddya think?
    thanks.

    I found the problem. The iPod doesn't like my new combo USB/FireWire hub. Plugged directly into the USB card on the computer, the problem went away. Yay.

  • Data Load PSA to IO (DTP) Dump: EXPORT_TOO_MUCH_DATA

    Hi Gurus,
    I'm loading data from the PSA to InfoObject 0BPARTNER. I have around 5 million entries.
    During the load, the control job fails with the following dump:
    EXPORT_TOO_MUCH_DATA
    1. Data must be distributed into portions of 2GB
    2. 3 possible solutions:
        - Either increase the sequence counter (field SRTF2) to include INT4
        or
        -export less data
        or
        -distribute the data across several IDs
    If the error occurs in a non-modified SAP program, you may be able to
    find an interim solution in an SAP Note.
    If you have access to SAP Notes, carry out a search with the following
    keywords:
    "EXPORT_TOO_MUCH_DATA" " "
    "CL_RSBK_DATA_SEGMENT==========CP" or "CL_RSBK_DATA_SEGMENT==========CM00V"
    "GET_DATA_FOR_EXPORT"
    This is not the first time I have done such a large load.
    The field SRTF2 is already of type INT4.
    BI version: 701, SP06.
    I found a lot of OSS Notes about monitoring jobs, industry solutions, the BI change run… but nothing related to the data loading process.
    Has anyone encountered this problem please?
    Thanks in advance
    Martin

    Hi Martin,
    There is a series of notes which may be applicable here.
    However, if you have semantic grouping enabled, it may be that this is a data-driven issue.
    The system will try to put all records into one package in accordance with the semantic key.
    If the key is too generic, many records could end up in one data package.
    Please choose more (or different) fields for semantic grouping, or unselect all fields if the grouping is not necessary at all.
    1409851  - Problems with package size in the case of DTPs with grouping.
    Hope this helps.
    Regards,
    Mani
