OWB execution bulk size parameter

We are using OWB 9.2.0.2.8 with database server 9.2.0.4. We use two external tables as sources and two Oracle tables as targets; one external table has 186 records and the other 94. We use a Set Operation operator (Union) to merge the two external tables, and then a Match-Merge operator to load the two target tables.
When we run the mapping there is a runtime parameter called BULK_SIZE. When we set it to 100, the mapping inserts without any error: 63 records into one table and 98 into the other.
If we use 50 (the default BULK_SIZE), we get the error ORA-01403 NO DATA FOUND, and the inserted counts are 50 and 103.
If we use more than 150 as BULK_SIZE, 0 records are inserted into either table and there is no error at all.
Can anybody shed some light on the above case and why it behaves this way? I would appreciate any help in this regard.
Regards
Senthil

This sounds like a bug - can you send me the mapping MDL file and the two source files by e-mail, so that I can try to reproduce it? [email protected]
Regards,
Igor
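For context on what BULK_SIZE controls: in row-based mode with bulk processing, OWB generates PL/SQL that fetches and inserts rows in chunks of BULK_SIZE (a BULK COLLECT ... LIMIT pattern). A minimal Python simulation of that chunking (hypothetical names, not OWB's generated code) shows why the totals should be identical for any bulk size - which is why counts that change with BULK_SIZE, as above, suggest a code-generation bug:

```python
# Hypothetical simulation of OWB's bulk processing: fetch the source in
# chunks of bulk_size and append each chunk to the target. With correct
# chunking, the inserted total must not depend on bulk_size.
def load_in_chunks(source_rows, bulk_size):
    target = []
    for start in range(0, len(source_rows), bulk_size):
        chunk = source_rows[start:start + bulk_size]  # like BULK COLLECT ... LIMIT bulk_size
        target.extend(chunk)                          # like a FORALL insert of the chunk
    return target

rows = list(range(280))  # 186 + 94 records after the union
assert len(load_in_chunks(rows, 50)) == 280
assert len(load_in_chunks(rows, 100)) == 280
assert len(load_in_chunks(rows, 150)) == 280
```

Any chunk size, including one larger than the row count, should yield the same 280 rows; varying results per BULK_SIZE point at the generated code, not the data.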

Similar Messages

  • Bulk size seems to have no effect

    Hi folks,
    Any idea why the bulk size setting of a mapping seems not to have any effect?
    My settings are as advised in the documentation:
    Bulk size: 1000 (default)
    Default Operating Mode: Row based
    Bulk processing code: selected
    Source database (remote) Oracle 8.1
    Target database & OWB database 10gR2
    Nevertheless, when I execute a mapping, TOAD doesn't show me any row count until the whole table has been loaded. As I understand it, the load should be done in chunks of 1000 rows, right? Could it be that a database setting prevents the bulk size parameter from working as it should?
    Thanks,
    Ilmari

    Hi there,
    the script generated contained the elements you mentioned, David - thanks.
    I was trying to commit every 1000 rows while processing approximately 10M rows.
    I wasn't able to solve it; it still doesn't commit along the way. However, some database parameters were probably changed, and it no longer ends in an error. So not solved, but I somehow got past it. Not ideal, but it works.
    BR,
    Ilmari

  • COMMIT FREQUENCY / BULK SIZE ignored?

    OWB Client 9.2.0.2.8
    OWB Repository: 9.2.0.2.0
    I have a simple SRC to TRG toy map which does an update (row based)
    I was hoping to achieve the following: if any error is encountered, then 1) abort and 2) roll back any changes.
    I could achieve 1) by setting MAX_ERRORS=0.
    For 2) I tried COMMIT_FREQUENCY = BULK_SIZE = some very high number.
    But the map always insists on updating at least some rows before encountering the first error (and then aborting).
    I tried changing bulk processing to true and got the same results; I changed UPDATE to INSERT => similar behavior.
    Why is the COMMIT_FREQUENCY = BULK_SIZE setting being ignored? Or is this just a bug in my version?

    Hi,
    Good question... it should behave as you describe (I think). I'd need to look at the generated code for this, but you may want to contact Support on this one so they can reproduce it...
    Jean-Pierre
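    A sketch of why a very high COMMIT_FREQUENCY is expected to give all-or-nothing behaviour (a hypothetical in-memory model, not OWB's generated code): rows become permanent only at a commit, so if no commit fires before the error, an abort should leave nothing behind. If rows survive anyway, something is committing earlier than the setting implies:

```python
# Hypothetical model: "committed" rows are permanent, "pending" rows are
# rolled back when an error aborts the load.
def load(rows, commit_frequency, fail_at=None):
    committed, pending = [], []
    for i, row in enumerate(rows):
        if i == fail_at:
            pending.clear()          # ROLLBACK of uncommitted work
            return committed         # abort: only committed rows survive
        pending.append(row)
        if len(pending) >= commit_frequency:
            committed += pending     # COMMIT makes pending rows permanent
            pending = []
    committed += pending             # final commit at end of load
    return committed

# Error at row 35: with frequency 10, rows 0-29 survive the abort...
assert load(range(100), commit_frequency=10, fail_at=35) == list(range(30))
# ...but with a huge frequency nothing should survive - all-or-nothing.
assert load(range(100), commit_frequency=10 ** 9, fail_at=35) == []
```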

  • Package Size parameter in Partner Profile

    Hello guys,
    I'd like your advice. We have two systems (SAP and non-SAP) and we need to connect them by means of the HR-PDC interface (IDoc communication is supported). We can create a new Partner Profile in SAP and add a few IDoc types as Inbound Parameters of the partner profile. It is then possible to define a maximum package size for each IDoc type. I suppose that the non-SAP system reads the Partner Profile and the "Package Size" parameter for the particular IDoc type before it sends an IDoc to the SAP system; the IDoc's size is limited by the parameter.
    And now our problem: we need to insert XI between these two systems. XI should only forward IDocs and nothing else. How can I set up a limitation on the IDoc's size? Do you think the following scenario would work?
    I would define a Partner Profile in XI the same way as in SAP in the previous case. XI is configured to process IDocs in the Integration Server by means of the IDoc adapter. My expectation is that the non-SAP system would read the partner profile in XI (including the "Package Size" parameter) and then send the IDoc to XI. XI would then process the received IDocs in the IDoc adapter.
    How can the "Package Size" parameter be useful for inbound IDocs?
    Best Regards,
    Zbynek

    Hello,
    Check the blog below:
    /people/michal.krawczyk2/blog/2007/12/02/xipi-sender-idoc-adapter-packaging
    Rajesh

  • RESOURCE_MANAGER_PLAN and reserved pool size parameters changing every time

    Hello All,
    In my production database (Oracle 11g RAC), the RESOURCE_MANAGER_PLAN and reserved pool size parameters change all the time.
    My questions:
    Is this parameter changed automatically, or does it require manual intervention?
    If it changes automatically, in what cases does it change?
    I have checked the dba_hist_parameter and dba_hist_snapshot tables for the parameter change history.
    Is this parameter linked to process and SQL performance?
    Please help me. Thanks.
    Regards
    Ranjeet

    When a scheduler window opens, its resource plan becomes active. For example, MONDAY_WINDOW begins on Monday at 22:00. At that time the current plan is changed to DEFAULT_MAINTENANCE_PLAN. At 00:00 (Tuesday), the plan that was active before Monday 22:00 becomes active again. DEFAULT_MAINTENANCE_PLAN is used for the Autotask clients:
    select client_name,WINDOW_GROUP from DBA_AUTOTASK_CLIENT ;
    CLIENT_NAME                     WINDOW_GROUP
    auto optimizer stats collection ORA$AT_WGRP_OS
    auto space advisor              ORA$AT_WGRP_SA
    sql tuning advisor              ORA$AT_WGRP_SQ
    select * from DBA_SCHEDULER_WINGROUP_MEMBERS where WINDOW_GROUP_NAME in (select WINDOW_GROUP from DBA_AUTOTASK_CLIENT);
    WINDOW_GROUP_NAME WINDOW_NAME
    ORA$AT_WGRP_OS    MONDAY_WINDOW
    ORA$AT_WGRP_OS    TUESDAY_WINDOW
    ORA$AT_WGRP_OS    WEDNESDAY_WINDOW
    ORA$AT_WGRP_OS    THURSDAY_WINDOW
    ORA$AT_WGRP_OS    FRIDAY_WINDOW
    ORA$AT_WGRP_OS    SATURDAY_WINDOW
    ORA$AT_WGRP_OS    SUNDAY_WINDOW
    ORA$AT_WGRP_SA    MONDAY_WINDOW
    ORA$AT_WGRP_SA    TUESDAY_WINDOW
    ORA$AT_WGRP_SA    WEDNESDAY_WINDOW
    ORA$AT_WGRP_SA    THURSDAY_WINDOW
    ORA$AT_WGRP_SA    FRIDAY_WINDOW
    ORA$AT_WGRP_SA    SATURDAY_WINDOW
    ORA$AT_WGRP_SA    SUNDAY_WINDOW
    ORA$AT_WGRP_SQ    MONDAY_WINDOW
    ORA$AT_WGRP_SQ    TUESDAY_WINDOW
    ORA$AT_WGRP_SQ    WEDNESDAY_WINDOW
    ORA$AT_WGRP_SQ    THURSDAY_WINDOW
    ORA$AT_WGRP_SQ    FRIDAY_WINDOW
    ORA$AT_WGRP_SQ    SATURDAY_WINDOW
    ORA$AT_WGRP_SQ    SUNDAY_WINDOW
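    The window behaviour the reply describes can be sketched as a tiny lookup. This assumes, per the reply, that the maintenance window simply swaps DEFAULT_MAINTENANCE_PLAN in at 22:00 and restores the previous plan at midnight; the real windows are defined in DBA_SCHEDULER_WINDOWS and may be configured differently:

```python
# Sketch of the automatic plan switch described above: while a maintenance
# window is open (22:00 up to midnight per the reply), the scheduler makes
# DEFAULT_MAINTENANCE_PLAN active; outside it, the user's own plan applies.
def active_plan(hour, user_plan="MY_PLAN"):
    return "DEFAULT_MAINTENANCE_PLAN" if 22 <= hour < 24 else user_plan

assert active_plan(21) == "MY_PLAN"                    # before the window
assert active_plan(22) == "DEFAULT_MAINTENANCE_PLAN"   # window opens
assert active_plan(0) == "MY_PLAN"                     # restored at midnight
```

This is why the parameter appears to "change by itself" in dba_hist_parameter: no manual intervention is involved.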

  • V$backup_set_details and SECTION SIZE parameter

    Hi,
    I was testing the new SECTION SIZE parameter, but when I use it, the v$backup_set_details view returns several rows for the same backup set. I don't understand why.
    As the documentation for v$backup_set_details says, it "provides detailed information about the backup set." So I understand that each row should show information about one backup set, except that "This view will contain an extra row for each backup session that invokes BACKUP BACKUPSET (that is, creates new copies for the same backup set or copies backup set information from disk to tape)", which is not my situation.
    Here is my example: my RMAN script, run against an 11.2.0.1, 2 TB database with several big datafiles, setting SECTION SIZE to 12000M:
    RUN {
    CONFIGURE DEVICE TYPE DISK PARALLELISM 8;
    CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/backup/ora_w%T_df%t_s%s_s%p';
    CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/backup/%F';
    CONFIGURE CONTROLFILE AUTOBACKUP ON;
    CONFIGURE BACKUP OPTIMIZATION ON;
    BACKUP AS COMPRESSED BACKUPSET INCREMENTAL LEVEL 0 SECTION SIZE 12000 M DATABASE PLUS ARCHIVELOG;
    }
    When it's finished, let's check what RMAN says about the backups of datafile #187 (which is 15 GB) and #57 (3.1 GB):
    RMAN> list backup of datafile 187;
    List of Backup Sets
    ===================
    BS Key  Type LV Size       Device Type Elapsed Time Completion Time
    8352    Incr 0  11.40G     DISK        01:07:58     28-APR-2013 22:41:37
      List of Datafiles in backup set 8352
      File LV Type Ckp SCN    Ckp Time                 Name
      187  0  Incr 1645285921 28-APR-2013 21:33:39     /myoracle/data/FICHEROS_DB134.dbf
      Backup Set Copy #1 of backup set 8352
      Device Type Elapsed Time Completion Time      Compressed Tag
      DISK        01:07:58     28-APR-2013 22:41:36 YES        TAG20130427T170811
        List of Backup Pieces for backup set 8352 Copy #1
        BP Key  Piece# Status      Piece Name
        9694    1   AVAILABLE   /backup/ora_w20130428_df813965619_s13146_s1
        9687    2   AVAILABLE   /backup/ora_w20130428_df813965619_s13146_s2
    RMAN> list backup of datafile 57;
    List of Backup Sets
    ===================
    BS Key  Type LV Size       Device Type Elapsed Time Completion Time
    8212    Incr 0  17.29G     DISK        01:50:19     27-APR-2013 18:58:32
            BP Key: 9417   Status: AVAILABLE  Compressed: YES  Tag: TAG20130427T170811
            Piece Name: /backup/ora_w20130427_df813863293_s12872_s1
      List of Datafiles in backup set 8212
      File LV Type Ckp SCN    Ckp Time                 Name
      57   0  Incr 1644254693 27-APR-2013 17:08:13     /myoracle/data/DATOS_DB05.dbf
    Everything went as expected.
    Datafile #187, which is 15 GB, goes into one backup set with two backup pieces, because SECTION SIZE is 12000M.
    RMAN> list backupset 8352;
    using target database control file instead of recovery catalog
    List of Backup Sets
    ===================
    BS Key  Type LV Size       Device Type Elapsed Time Completion Time
    8352    Incr 0  11.40G     DISK        01:07:58     28-APR-2013 22:41:37
      List of Datafiles in backup set 8352
      File LV Type Ckp SCN    Ckp Time                 Name
      187  0  Incr 1645285921 28-APR-2013 21:33:39     /myoracle/data/FICHEROS_DB134.dbf
      Backup Set Copy #1 of backup set 8352
      Device Type Elapsed Time Completion Time      Compressed Tag
      DISK        01:07:58     28-APR-2013 22:41:36 YES        TAG20130427T170811
        List of Backup Pieces for backup set 8352 Copy #1
        BP Key  Piece# Status      Piece Name
        9694    1   AVAILABLE   /backup/ora_w20130428_df813965619_s13146_s1
        9687    2   AVAILABLE   /backup/ora_w20130428_df813965619_s13146_s2
    Datafile #57 goes into one backup set with one backup piece, along with other datafiles.
    RMAN> list backupset 8212
    2> ;
    List of Backup Sets
    ===================
    BS Key  Type LV Size       Device Type Elapsed Time Completion Time
    8212    Incr 0  17.29G     DISK        01:50:19     27-APR-2013 18:58:32
            BP Key: 9417   Status: AVAILABLE  Compressed: YES  Tag: TAG20130427T170811
            Piece Name: /backup/ora_w20130427_df813863293_s12872_s1
      List of Datafiles in backup set 8212
      File LV Type Ckp SCN    Ckp Time                 Name
      33   0  Incr 1644254693 27-APR-2013 17:08:13     /myoracle/data/indx01.dbf
      57   0  Incr 1644254693 27-APR-2013 17:08:13     /myoracle/data/DATOS_DB05.dbf
      202  0  Incr 1644254693 27-APR-2013 17:08:13     /myoracle/data/FICHEROS_DB149.dbf
      210  0  Incr 1644254693 27-APR-2013 17:08:13     /myoracle/data/FICHEROS_DB158.dbf
      218  0  Incr 1644254693 27-APR-2013 17:08:13     /myoracle/data/FICHEROS_DB166.dbf
    Now, let's view those backup sets in the v$backup_set_details view.
    Viewing backup set 8352 gives what seems to be the expected result:
    SQL> select BS_KEY, RECID, START_TIME, COMPLETION_TIME, ROUND(OUTPUT_BYTES/1024/1024,2) MB, STATUS
    from v$backup_set_details
    where bs_key = 8352
    order by start_time;  2    3    4
        BS_KEY      RECID START_TIME           COMPLETION_TIME              MB S
          8352       8352 28-APR-2013 21:33:39 28-APR-2013 22:41:37   11678,22 A
    Now let's query backup set 8212.
    If I select that backup set in v$backup_set_details, why are there so many rows for the same backup set?
    And why is the output_bytes value different in each row?
    SQL> select BS_KEY, RECID, START_TIME, COMPLETION_TIME, ROUND(OUTPUT_BYTES/1024/1024,2) MB, STATUS
    from v$backup_set_details
    where bs_key = 8212
    order by start_time;  2    3    4
        BS_KEY      RECID START_TIME           COMPLETION_TIME              MB S
          8212       8212 27-APR-2013 17:08:13 27-APR-2013 18:58:32   18753,49 A
          8212       8212 27-APR-2013 17:08:13 27-APR-2013 18:58:32   20143,88 A
          8212       8212 27-APR-2013 17:08:13 27-APR-2013 18:58:32   20366,69 A
          8212       8212 27-APR-2013 17:08:13 27-APR-2013 18:58:32   20143,88 A
          8212       8212 27-APR-2013 17:08:13 27-APR-2013 18:58:32   17701,73 A
          8212       8212 27-APR-2013 17:08:13 27-APR-2013 18:58:32   18753,49 A
          8212       8212 27-APR-2013 17:08:13 27-APR-2013 18:58:32    20090,2 A
          8212       8212 27-APR-2013 17:08:13 27-APR-2013 18:58:32   20143,88 A
          8212       8212 27-APR-2013 17:08:13 27-APR-2013 18:58:32   20143,88 A
          8212       8212 27-APR-2013 17:08:13 27-APR-2013 18:58:32   19986,97 A
          8212       8212 27-APR-2013 17:08:13 27-APR-2013 18:58:32   18753,49 A
          8212       8212 27-APR-2013 17:08:13 27-APR-2013 18:58:32   20442,96 A
          8212       8212 27-APR-2013 17:08:13 27-APR-2013 18:58:32   17038,69 A
          8212       8212 27-APR-2013 17:08:13 27-APR-2013 18:58:32   20442,96 A
          8212       8212 27-APR-2013 17:08:13 27-APR-2013 18:58:32   19986,97 A
          8212       8212 27-APR-2013 17:08:13 27-APR-2013 18:58:32   20442,96 A
          8212       8212 27-APR-2013 17:08:13 27-APR-2013 18:58:32   20442,96 A
          8212       8212 27-APR-2013 17:08:13 27-APR-2013 18:58:32   17701,73 A
          8212       8212 27-APR-2013 17:08:13 27-APR-2013 18:58:32   19986,97 A
          8212       8212 27-APR-2013 17:08:13 27-APR-2013 18:58:32    20090,2 A
          8212       8212 27-APR-2013 17:08:13 27-APR-2013 18:58:32   17701,73 A
          8212       8212 27-APR-2013 17:08:13 27-APR-2013 18:58:32   19986,97 A
          8212       8212 27-APR-2013 17:08:13 27-APR-2013 18:58:32   20366,69 A
          8212       8212 27-APR-2013 17:08:13 27-APR-2013 18:58:32   19986,97 A
          8212       8212 27-APR-2013 17:08:13 27-APR-2013 18:58:32    20090,2 A
          8212       8212 27-APR-2013 17:08:13 27-APR-2013 18:58:32   17038,69 A
          8212       8212 27-APR-2013 17:08:13 27-APR-2013 18:58:32    20090,2 A
          8212       8212 27-APR-2013 17:08:13 27-APR-2013 18:58:32   17701,73 A
          8212       8212 27-APR-2013 17:08:13 27-APR-2013 18:58:32   20442,96 A
          8212       8212 27-APR-2013 17:08:13 27-APR-2013 18:58:32   17701,73 A
          8212       8212 27-APR-2013 17:08:13 27-APR-2013 18:58:32    20090,2 A
          8212       8212 27-APR-2013 17:08:13 27-APR-2013 18:58:32    20090,2 A
          8212       8212 27-APR-2013 17:08:13 27-APR-2013 18:58:32    20090,2 A
          8212       8212 27-APR-2013 17:08:13 27-APR-2013 18:58:32   19986,97 A
          8212       8212 27-APR-2013 17:08:13 27-APR-2013 18:58:32   18753,49 A
          8212       8212 27-APR-2013 17:08:13 27-APR-2013 18:58:32   18753,49 A
          8212       8212 27-APR-2013 17:08:13 27-APR-2013 18:58:32   18753,49 A
          8212       8212 27-APR-2013 17:08:13 27-APR-2013 18:58:32   18753,49 A
          8212       8212 27-APR-2013 17:08:13 27-APR-2013 18:58:32   19986,97 A
          8212       8212 27-APR-2013 17:08:13 27-APR-2013 18:58:32   20366,69 A
          8212       8212 27-APR-2013 17:08:13 27-APR-2013 18:58:32   20442,96 A
          8212       8212 27-APR-2013 17:08:13 27-APR-2013 18:58:32   20366,69 A
          8212       8212 27-APR-2013 17:08:13 27-APR-2013 18:58:32   17038,69 A
          8212       8212 27-APR-2013 17:08:13 27-APR-2013 18:58:32   20366,69 A
          8212       8212 27-APR-2013 17:08:13 27-APR-2013 18:58:32   17038,69 A
          8212       8212 27-APR-2013 17:08:13 27-APR-2013 18:58:32   20143,88 A
          8212       8212 27-APR-2013 17:08:13 27-APR-2013 18:58:32   17038,69 A
          8212       8212 27-APR-2013 17:08:13 27-APR-2013 18:58:32   20442,96 A
          8212       8212 27-APR-2013 17:08:13 27-APR-2013 18:58:32   20143,88 A
          8212       8212 27-APR-2013 17:08:13 27-APR-2013 18:58:32   20366,69 A
          8212       8212 27-APR-2013 17:08:13 27-APR-2013 18:58:32   20143,88 A
          8212       8212 27-APR-2013 17:08:13 27-APR-2013 18:58:32   17701,73 A
          8212       8212 27-APR-2013 17:08:13 27-APR-2013 18:58:32   20366,69 A
          8212       8212 27-APR-2013 17:08:13 27-APR-2013 18:58:32   17701,73 A
          8212       8212 27-APR-2013 17:08:13 27-APR-2013 18:58:32   17038,69 A
          8212       8212 27-APR-2013 17:08:13 27-APR-2013 18:58:32   17038,69 A
    56 rows selected.
    Can someone explain this to me? What am I missing?
    Regards

    Hi,
    Sorry, I don't understand the meaning of your message. What do you mean by historic records?
    I can't find a reasonably detailed description of the view v$backup_set. I don't see any information in there about deleting a backup, nor do I find info about the size of the backup taken.
    Am I right that the view RC_BACKUP_SET in the catalog contains exactly the same info as V$BACKUP_SET in the target control file?
    Where can I find info about deleted backups?
    How do I query the total size of one backup of one database?
    Where does "list backup" get its info from? I don't see any description of that source!
    I want to track backup times and capacity over time, and I think all that info is in the catalog - but where, and how do I query it?
    The standard views do not provide size info!
    Why does Oracle make it so difficult to query backup statistics?
    Thanks for any tip on getting to understand the catalog's resources.
    Regards, LaoLaoDe
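    As a sanity check on the listings above: a multisection backup splits each datafile into ceil(file_size / SECTION SIZE) backup pieces, which matches the two pieces seen for the 15 GB datafile #187 and the single piece for the 3.1 GB datafile #57 (sizes in MB, assuming 1 GB = 1024 MB):

```python
import math

# Number of backup pieces a multisection backup produces for one datafile:
# one section (piece) per SECTION SIZE chunk; files smaller than the
# section size are backed up as a single piece.
def backup_pieces(file_mb, section_mb):
    return max(1, math.ceil(file_mb / section_mb))

assert backup_pieces(15 * 1024, 12000) == 2   # datafile #187: two pieces
assert backup_pieces(3.1 * 1024, 12000) == 1  # datafile #57: one piece
```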

  • Tuxedo queue size parameter in PeopleSoft

    Hi,
    Can you please help me with this?
    Where is the Tuxedo queue size parameter set in PeopleSoft?
    Advance wishes

    The Tuxedo queue size parameter is set in:
       Select one:
    a) the Tuxedo config file
    b) a Service Parameter for Tuxedo
    c) psappsrv.cfg
    d) the installation of BEA Tuxedo

  • Why is the OWB execution status report showing the wrong status?

    Hi,
    I created a process flow, and it executed successfully.
    In that process flow I used 4 mappings, and the data is updated in 4 tables. In all the tables the data was updated successfully.
    But the execution status report shows that in one table the number of records inserted is 0.
    I verified in the Runtime Assistant; there it shows correctly.
    Why is it showing like that?
    Any suggestions are welcome.
    Thanks and regards
    Gowtham Sen.

    I've seen this often here. Check the log files on the OWB server; you may see that the Control Center Service was forcibly restarted. I have an SR open with Oracle on this.
    The symptoms I've seen are:
    1. The mapping reports success
    2. Some or all of the levels of the mapping show "*" for the statistics
    3. The mapping completion time shows the mapping start time.

  • OWB embedded process flow - parameter binding problem

    I installed an Oracle 10g database, and after the database installation I added the Oracle Workflow server software. To complete the installation I ran the wfca.
    Later I defined a process flow in OWB, and the process flow seems to be working well (I've also done all the necessary registrations), but the parameter binding is not working. I do get the window prompt for the parameter and I enter the date parameter (I tried different formats like 'YYYY/MM/DD' and 'dd-mon-yyyy'), but when I check the runtime in the web browser the parameter is not passed to the next process. I have tried many things over the last week, but I cannot get it to pass the parameter from one process to another.
    Any ideas,
    HELP !!!!!

    Did you manage to resolve this issue? We are facing the same problem.

  • Tape size parameter in initsid.ora file for 200/400 GB

    Hi Experts,
    I have a ECC 5 system running on Windows 2003 Enterprise Server with Oracle as Database.
    I have installed an HP Ultrium Storageworks 460 tape drive on the server. It accepts a 200/400 GB tape cartridge.
    What value should I set for the tape_size parameter in the init<sid>.ora file in the F:\oracle\ora92\database folder?
    The present value is the default, 1200M.
    Is there some formula to calculate this value depending upon the tape cartridge size?
    Thanks in advance,

    You can check Note 8707, "Explanation about init<SID>.sap parameter tape_size".
    It is a bit old, but you can figure out the size from it. For example:
    Tape type                    tape_size (w/o     tape_size (with
                                 hardw. compr.)     hardw. compr.)
    DLT 4000    20/40 GB     :    19000M             18000M
    You can probably multiply it by 10, as you have a 200/400 GB drive.
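    The suggested scaling is purely linear: take the Note 8707 figures for the DLT 4000 (a drive one tenth the capacity) and multiply by 10. A quick check of the arithmetic (whether the with- or without-compression column applies depends on your drive settings):

```python
# Linear scaling of the tape_size figures from Note 8707: the DLT 4000
# (20/40 GB) values multiplied by 10 for a 200/400 GB Ultrium drive.
dlt_4000_m = {"no_hw_compression": 19000, "hw_compression": 18000}
ultrium_m = {k: v * 10 for k, v in dlt_4000_m.items()}

assert ultrium_m["no_hw_compression"] == 190000  # i.e. tape_size = 190000M
assert ultrium_m["hw_compression"] == 180000     # i.e. tape_size = 180000M
```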

  • Can't create a float array with a variable as the size parameter?

    Hi,
    When trying to compile code that uses a variable in the array subscript to set the size, CC gives the following error:
    Error: An integer constant expression is required within the array subscript operator.
    1 Error(s) detected.
    The code is as follows:
    int main()
    {
        // ...
        const int arrSize = numberVariables;  // numberVariables is a runtime value, not a compile-time constant
        float tempArr[arrSize];               // error: standard C++ requires a constant array size here
        // ...
    }
    Output from CC -V:
    unknown% CC -V
    CC: Sun C++ 5.9 SunOS_i386 Patch 124864-01 2007/07/25
    Output from uname -a:
    unknown% uname -a
    SunOS unknown 5.10 Generic_137138-09 i86pc i386 i86pc
    Any ideas on why CC is giving that error?
    ~Slow
    Edited by: SlowToady on Nov 15, 2008 8:28 PM
    Edited by: SlowToady on Nov 15, 2008 8:36 PM

    Marc_Glisse wrote:
    Last time I checked, I did not see VLAs in the draft of the next C++ standard, which I believe is now feature complete. So either I missed it (quite possible) or VLAs were considered a bad idea (for C as well many people consider alloca and VLA bad ideas). It is however a very reasonable extension to have (care to file a RFE?).The following code performs like this on an IBM x366 with 4 single-core 3.16 GHz Xeons, running OpenSolaris build 96:
    -bash-3.2$ ./thrtest
    malloc() took 440895924 ns
    alloca() took 2122522 ns
    -bash-3.2$
    Using alloca is two hundred times faster. Two hundred times faster.
    Think about that next time some pedant says alloca "sucks".
    Try for yourself. Here's the code:
    #include <stdlib.h>
    #include <string.h>
    #include <pthread.h>
    #include <alloca.h>
    #include <stdio.h>
    #include <sys/time.h>   /* gethrtime() and hrtime_t on Solaris */
    #define NUM_THREADS 8
    #define ITERS 10000
    #define BYTES 1024
    typedef void ( *mem_func_t )( void );
    void alloca_iter( void )
    {
        char *ptr = ( char * ) alloca( BYTES );
        memset( ptr, 0, BYTES );
    }
    void malloc_iter( void )
    {
        char *ptr = ( char * ) malloc( BYTES );
        memset( ptr, 0, BYTES );
        free( ptr );
    }
    void *thread_proc( void *arg )
    {
        mem_func_t mem_iter = ( mem_func_t ) arg;
        for ( int ii = 0; ii < ITERS; ii++ )
            mem_iter();
        return NULL;
    }
    int main( int argc, char **argv )
    {
        pthread_t tids[ NUM_THREADS ];
        void *results[ NUM_THREADS ];
        hrtime_t start;
        hrtime_t end;
        int ii;
        start = gethrtime();
        for ( ii = 0; ii < NUM_THREADS; ii++ )
            pthread_create( &tids[ ii ], NULL, thread_proc, ( void * ) malloc_iter );
        for ( ii = 0; ii < NUM_THREADS; ii++ )
            pthread_join( tids[ ii ], &results[ ii ] );
        end = gethrtime();
        printf( "malloc() took %lld ns\n", ( long long )( end - start ) );
        start = gethrtime();
        for ( ii = 0; ii < NUM_THREADS; ii++ )
            pthread_create( &tids[ ii ], NULL, thread_proc, ( void * ) alloca_iter );
        for ( ii = 0; ii < NUM_THREADS; ii++ )
            pthread_join( tids[ ii ], &results[ ii ] );
        end = gethrtime();
        printf( "alloca() took %lld ns\n", ( long long )( end - start ) );
        return 0;
    }

  • How to deactivate OWB Execution?

    Hi,
    I'm not able to execute my mappings. They have been successfully validated and deployed, but I couldn't execute them. In the audit browser the status shows as busy. How can I solve this problem? Where can I find the audit ID in OWB?
    Regards
    Kishan

    100% Agreed. Use of rule priority as any form of substitute for fully expressing the logic within the policy/rule would be considered poor practice. There are very limited uses for rule priority that are documented, but use of them to force evaluation order in lieu of fully expressing the conditions is not one of them. The recommended approach is to fully express all conditions and rely on the logical dependence between rules to "prioritize" the evaluation of them.

  • OWB parameter Settings

    Hi ,
    I have an issue when running the ETL in OWB 9.0.3.
    When I run the ETL for 50,000 records, it completes in 4 minutes.
    When I run it for 100,000 records, it takes more than 1 hour.
    I have set the operating mode to "Set based fail over to row based", the bulk size to 50 and the commit frequency to 50.
    Please tell me: what are the right values to set for bulk size and commit frequency?
    Vimal

    Hi Vimal.
    Well, you know there isn't a "magic number" to fix things for you. You'll have to trace this process and see whether the performance problem appears when reading or writing data. I usually use Statspack for that.
    In "set based fail over to row based" operating mode, OWB first tries to load the data set based. If that fails, OWB tries again using row-based code. Maybe that's why it takes longer when you select more rows.
    For more info about operating mode, see:
    http://download-east.oracle.com/docs/html/B10996_01/11config.htm#1112964
    Increasing the commit frequency would help performance if you were using row-based mode. If you want to increase the value of the "Commit Frequency" parameter, you'll have to change the operating mode to row based.
    Hope I've helped.
    Regards,
    Marcos

  • OWB mapping execution: ORA-20213: Unable to create standalone job record

    Hi,
    When I run an OWB mapping from SQL*Plus, I get the following error:
    SQL> DECLARE
    2 RetVal NUMBER;
    3 P_ENV WB_RT_MAPAUDIT.WB_RT_NAME_VALUES;
    4 BEGIN
    5 RetVal := UII_D_MAP_SPC_WIPBIN.MAIN ( P_ENV );
    6 dbms_output.put_line('RetVal is '||RetVal);
    7 END;
    8 /
    DECLARE
    ERROR at line 1:
    ORA-20213: Unable to create standalone job record - there may be no task
    defined for this map
    ORA-06512: at "UII_OWB_REP.WB_RT_MAPAUDIT", line 1266
    ORA-06512: at "UII_OWB_REP.WB_RT_MAPAUDIT", line 2098
    ORA-06512: at "UII_ODS_OWNER.UII_D_MAP_SPC_WIPBIN", line 3851
    ORA-06512: at "UII_ODS_OWNER.UII_D_MAP_SPC_WIPBIN", line 3993
    ORA-06512: at line 5
    Previously, I had unregistered the target schema from the OWB Runtime Audit Browser, logged in as a QA user, and registered a new schema from OWB Deployment Manager.
    I am able to deploy the mapping from OWB Deployment Manager, and the deployment goes OK.
    I have also gone through the links:
    OWB execution error : ORA-20213: Unable to create standalone job record
    Re: ORA-20213 during execution
    But, did not find the solution.
    When I execute the mapping from OWB Deployment Manager by right-clicking on it, it works fine, but the execution method above does not.
    We want to run the mappings this way because all the mappings in our other projects run fine with the above SQL procedure.
    Can you please help me figure out how to fix these errors?
    Thanks & Regards,
    lenin

    Good morning Lenin,
    Since the other threads do not apply, you've gotten yourself into a strange situation.
    I'm not familiar with the method you use to run the mapping. Have you tried running the mapping using the SQL template that OWB provides or - like I do - the run_my_owb_stuff script provided by the OWB team? If not, could you try that, please?
    Apart from that, I don't have any experience with your exact error message, so it's hard to share anything concrete; I can only suggest things. Maybe there are other users on this forum who do, who knows...
    If you want concrete support, log a TAR on Metalink; they are obligated to help you if you have a CSI :-)
    Good luck, Patrick

  • OWB 9i Features.

    Hi All,
    I am planning to prepare a document comparing the features provided by Oracle Warehouse Builder and Ab Initio 1.11.15 as ETL tools.
    OWB config details
    Oracle Warehouse Builder Client 9i Ver 9.2.0.2.8
    Oracle Warehouse Builder Repository Ver 9.2.0.2.0.
    Windows XP OS.
    I believe the following features are not available in the OWB 9i release:
    1> OWB doesn't support array, union or vector handling.
    2> OWB can read only ASCII serial files, not binary files or multifiles.
    3> A global library feature is not available in OWB. To make it clear: in Ab Initio there is the concept of a Project Parameter file which can hold global variables that can be referred to in an Ab Initio graph; these variables' values can also be assigned by executing a Unix shell script.
    4> OWB doesn't support dynamic execution in a mapping. E.g. if a mapping consists of 5 flow operators, at run time certain flow operators can be disabled based on certain criteria and will not be executed. This feature is available in Ab Initio.
    5> The Change Manager in OWB provides only limited features for configuration management.
    6> Optimization can be achieved by setting the Set Based or Row Based option in the configuration of an OWB mapping.
    7> No flow operator is available for cumulative summary records.
    8> A plug-in component is required for the Name & Address flow operator.
    I am aware that Oracle is planning to launch a new release, OWB 10g, and it may be that some of the above features will be available in it.
    Can someone please confirm whether my understanding is correct?
    Thanks in Advance.
    Regards,
    Vidyanand

    Hi Vidyanand,
    I will try to address some of your concerns/questions, but may need some more information on some of them:
    1> OWB doesn't support Array, Union, Vector handling.
    JPD: Arrays are supported in the next release (end of the year). I'm not sure what you mean by union (I'm guessing NOT a SQL union) and vector handling - can you elaborate a bit on those terms?
    2> OWB can read only ASCII Serial files & not Binary, Multifiles.
    JPD: OWB supports the capabilities of SQL Loader and External Tables. SQL Loader should be able to handle multiple files...
    3> The Global Library feature is not available in OWB. To make it clear: in Ab Initio there is the concept of a Project Parameter file, which can hold global variables that can be referred to in an Ab Initio graph; these variables' values can also be assigned by executing a Unix shell script.
    JPD: We are adding both global and local variable support in OWB (end of the year). You will be able to store the variables on the platform, which of course is the database (secure and easy to access).
    4> OWB doesn't support dynamic execution in a mapping. E.g., if a mapping consists of 5 flow operators, at run time, based on certain criteria, certain flow operators can be disabled and will not get executed. This feature is available in Ab Initio.
    JPD: While this sounds very interesting, I'm struggling a bit with when you would want to use this, and even more with how the flow would be linked together if I randomly switch off operators...
    One of the things to keep in mind is that OWB generates code (I believe Ab Initio does not, and instead interprets metadata at runtime), so this feature is harder to implement in a code generator. However, I'm not convinced this is a crucial feature for ETL... Any thoughts?
    5> Change Manager in OWB provides only limited features for configuration management.
    JPD: Change Manager is intended for version management. We are adding full multi-configuration support in the next release, where you can attach physical characteristics to objects. That, combined with Change Manager, will give you a much better configuration management tool.
    6> Optimization can be achieved by setting the Set Based or Row Based option in the configuration of an OWB mapping.
    JPD: This is actually not correct. Set-based vs. row-based will influence performance, but that is not all it is intended for! It gives you different ways of interpreting a graph: in row-based mode you can do if-then code and row-by-row evaluations, while set-based has only the SQL language as its implementation.
    You can influence performance with various parameters:
    - Oracle DB hints on both extract and load operators (in the mapping, that is)
    - Set the parallel degree on the objects (invokes database parallelism)
    - Influence the various code flavors with:
    > Bulk Processing and Bulk Size for processing multiple rows in one go (for row-based settings)
    > Parallel row code (for row-based settings)
    The latter allows you to run PL/SQL row-based sections in parallel within the DB (transparently).
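    As a rough illustration of what Bulk Processing and Bulk Size translate to: row-based generated PL/SQL typically fetches in batches with BULK COLLECT ... LIMIT, where the LIMIT corresponds to the Bulk Size setting. A simplified sketch, not actual OWB-generated code (the table and cursor names are made up):

    ```sql
    DECLARE
      CURSOR c_src IS
        SELECT id, amount FROM src_table;         -- hypothetical source
      TYPE t_rows IS TABLE OF c_src%ROWTYPE;
      l_rows t_rows;
    BEGIN
      OPEN c_src;
      LOOP
        -- Bulk Size corresponds to this LIMIT
        FETCH c_src BULK COLLECT INTO l_rows LIMIT 50;
        EXIT WHEN l_rows.COUNT = 0;
        FORALL i IN 1 .. l_rows.COUNT
          INSERT INTO tgt_table VALUES l_rows(i); -- hypothetical target
      END LOOP;
      CLOSE c_src;
      COMMIT;
    END;
    /
    ```

    Raising the LIMIT means fewer, larger fetches at the cost of more PGA memory per batch.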
    7> No flow operator is available for cumulative summary records.
    JPD: I'm not sure what this means; is this covered by the aggregator?
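    For what it's worth, cumulative (running) summaries can usually be expressed with SQL analytic functions rather than a dedicated operator, e.g. inside an expression operator or a view. A hypothetical example (the table and column names are made up):

    ```sql
    -- Running total of sales per customer, ordered by sale date.
    SELECT customer_id,
           sale_date,
           amount,
           SUM(amount) OVER (PARTITION BY customer_id
                             ORDER BY sale_date) AS running_total
    FROM   sales;
    ```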
    8> A plug-in component is required for the Name & Address flow operator.
    JPD: OWB supports this natively. What you do need is to purchase the library files that the operator runs on; you can do that from First Logic, Trillium, or Data Flux, allowing you to use OWB with the market-leading data libraries (or with localized ones if desired). We think this is a much stronger story than a closed-box, vendor-specific solution. OWB is intended to work globally, and we open up this interface because some vendors have better support in some regions than others.
    But just to state this again: you do NOT need any plug-ins in the client tool, and it is all there natively. You just have a choice of data libraries.
    Match/Merge is of course also supported, and this is completely native (and free of charge).
    I sincerely think that OWB has the strongest data quality story in the ETL business!
