DBMS_PARALLEL_EXECUTE

We want to use DBMS_PARALLEL_EXECUTE to create faster transactions over database links. As it stands, DBMS_PARALLEL_EXECUTE does what we need it to do initially; however, one problem we have encountered is that the interval before the next execute appears to take 3.5 seconds. We need one second or better. Can anyone explain how we can lower the 3.5 seconds?

864611 wrote:
We want to use DBMS_PARALLEL_EXECUTE to create faster transactions over database links. As it stands, DBMS_PARALLEL_EXECUTE does what we need it to do initially; however, one problem we have encountered is that the interval before the next execute appears to take 3.5 seconds. We need one second or better. Can anyone explain how we can lower the 3.5 seconds?

SELECT SYSDATE FROM DUAL@REMOTE;
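The bare query in the reply is presumably meant to time a round trip over the database link. From SQL*Plus, a quick check could look like this (sketch):

SET TIMING ON
SELECT SYSDATE FROM DUAL@REMOTE;
-- run it a few times; the steady-state elapsed time approximates the link's round-trip cost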

Similar Messages

  • Dbms_Parallel_Execute run as a Dbms_Scheduler job

    Hi,
I have tried to use Dbms_Parallel_Execute to update a column in different tables.
This works fine when I run it from SQL*Plus or similar.
But if I try to run the code as a background job using Dbms_Scheduler, it hangs on the call to Dbms_Parallel_Execute.Run_Task.
The session seems to hang forever.
If I kill the session of the background job, the task ends up in state FINISHED and the update has been completed.
If I look at the session, it seems to be waiting on the event "pl/sql lock timer".
Does anyone know what can go wrong when running this code as a background job using Dbms_Scheduler?
    Code example:
CREATE OR REPLACE PROCEDURE Execute_Task___ (
   table_name_     IN VARCHAR2,
   stmt_           IN VARCHAR2,
   chunk_size_     IN NUMBER DEFAULT 10000,
   parallel_level_ IN NUMBER DEFAULT 10 )
IS
   task_          VARCHAR2(30) := Dbms_Parallel_Execute.Generate_Task_Name;
   status_        NUMBER;
   error_occurred EXCEPTION;
BEGIN
   Dbms_Parallel_Execute.Create_Task(task_name => task_);
   Dbms_Parallel_Execute.Create_Chunks_By_Rowid(task_name   => task_,
                                                table_owner => Fnd_Session_API.Get_App_Owner,
                                                table_name  => table_name_,
                                                by_row      => TRUE,
                                                chunk_size  => chunk_size_);
   -- Example statement
   -- stmt_ := 'UPDATE Test_TAB SET rowkey = sys_guid() WHERE rowkey IS NULL AND rowid BETWEEN :start_id AND :end_id';
   Dbms_Parallel_Execute.Run_Task(task_name      => task_,
                                  sql_stmt       => stmt_,
                                  language_flag  => Dbms_Sql.NATIVE,
                                  parallel_level => parallel_level_);
   status_ := Dbms_Parallel_Execute.Task_Status(task_);
   IF (status_ IN (Dbms_Parallel_Execute.FINISHED_WITH_ERROR, Dbms_Parallel_Execute.CRASHED)) THEN
      Dbms_Parallel_Execute.Resume_Task(task_);
      status_ := Dbms_Parallel_Execute.Task_Status(task_);
   END IF;
   Dbms_Parallel_Execute.Drop_Task(task_);
EXCEPTION
   WHEN OTHERS THEN
      Dbms_Parallel_Execute.Drop_Task(task_);
      RAISE;
END Execute_Task___;

Hi,
Check the job_queue_processes parameter; it must be greater than 0. DBMS_PARALLEL_EXECUTE runs its chunk workers as scheduler jobs, so with job_queue_processes = 0 no worker can ever start and Run_Task just waits.
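A quick way to check the parameter, and to raise it if it is zero (sketch; 10 is just an example value):

SELECT value FROM v$parameter WHERE name = 'job_queue_processes';

ALTER SYSTEM SET job_queue_processes = 10;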

  • Call to a procedure in DBMS_PARALLEL_EXECUTE package

    Hi All,
I have a procedure that takes an input parameter, and I need to call this procedure in parallel. I am going to use the DBMS_PARALLEL_EXECUTE package to parallelize the process. Here is an example of what I want to do; please note that there is a third parameter, newParameter.
l_sql_stmt := 'BEGIN process_update(:start_id, :end_id, ' || newParameter || ' ); END;';
DBMS_PARALLEL_EXECUTE.run_task(task_name => l_task,
                               sql_stmt => l_sql_stmt,
                               language_flag => DBMS_SQL.NATIVE,
                               parallel_level => 10);
I create the task and the chunks using SQL, but run_task does not start processing. Can we do this?
Thanks in advance!

    >
I have a procedure that takes an input parameter, and I need to call this procedure in parallel. I am going to use the DBMS_PARALLEL_EXECUTE package to parallelize the process. Here is an example of what I want to do; please note that there is a third parameter, newParameter.
l_sql_stmt := 'BEGIN process_update(:start_id, :end_id, ' || newParameter || ' ); END;';
DBMS_PARALLEL_EXECUTE.run_task(task_name => l_task,
                               sql_stmt => l_sql_stmt,
                               language_flag => DBMS_SQL.NATIVE,
                               parallel_level => 10);
I create the task and the chunks using SQL, but run_task does not start processing. Can we do this?
    >
    We have no way of knowing if you can do what you are trying to do since you didn't post the code you are using to do it.
    You can use a stored procedure to process the workload if that is what you are asking.
    See this Oracle-base article for an example of using a stored procedure for the workload.
    http://www.oracle-base.com/articles/11g/dbms_parallel_execute_11gR2.php#create_chunks_by_sql
    >
    The following example shows the processing of a workload chunked by a number column. Notice that the workload is actually a stored procedure in this case.
    >
    You did NOT provide any code that shows how you plan to provide that 'third' parameter so maybe that is where your problem is.
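For what it's worth, a common pattern is to concatenate the extra value into the PL/SQL block when the statement is built, so that only :start_id and :end_id remain as placeholders for RUN_TASK. A minimal sketch, assuming a procedure process_update(p_start, p_end, p_extra) and a numeric extra value (the names here are hypothetical):

DECLARE
  l_task      VARCHAR2(30) := 'process_update_task';  -- hypothetical task name
  l_new_param NUMBER       := 42;                     -- the 'third' parameter
  l_sql_stmt  VARCHAR2(4000);
BEGIN
  -- the extra value becomes a literal in the block; only :start_id and
  -- :end_id are left as binds, as RUN_TASK requires
  l_sql_stmt := 'BEGIN process_update(:start_id, :end_id, '
                || TO_CHAR(l_new_param) || '); END;';
  DBMS_PARALLEL_EXECUTE.run_task(task_name      => l_task,
                                 sql_stmt       => l_sql_stmt,
                                 language_flag  => DBMS_SQL.NATIVE,
                                 parallel_level => 10);
END;
/

For a VARCHAR2 value the concatenated literal would additionally need to be quoted, e.g. ''' || l_new_param || '''.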

  • DBMS_PARALLEL_EXECUTE multiple threads taking more time than single thread

    I am trying to insert 10 million records from source table to target table.
    Number of chunks = 100
    There are two scenarios:
    dbms_parallel_execute(..... parallel_level => 1) -- for single thread
    dbms_parallel_execute(..... parallel_level => 10) -- for 10 threads
I observe that the average time taken by 10 threads to process each chunk is 10 times the average time taken in the single-thread case.
Ideally it should be the same, which would reduce the total time by a factor of 10 (due to the 10 threads).
Because of this behavior, the total time taken is the same in both cases.
It would be great if anybody could explain the reason behind this behavior.
Thanks in advance

Source Table = TEST_REGION_SOURCE
Target Table = TEST_REGION_TARGET
    Both tables have 100 columns
    Below is the code:
DECLARE
  l_task     VARCHAR2(30) := 'test_task_F';
  l_sql_stmt VARCHAR2(32767);
  l_try      NUMBER;
  l_stmt     VARCHAR2(32767);
  l_status   NUMBER;
BEGIN
  l_stmt := 'select dbms_rowid.rowid_create( 1, data_object_id, lo_fno, lo_block, 0 ) min_rid,
                    dbms_rowid.rowid_create( 1, data_object_id, hi_fno, hi_block, 10000 ) max_rid
             from (
                    select distinct grp,
                           first_value(relative_fno)
                             over (partition by grp order by relative_fno, block_id
                                   rows between unbounded preceding and unbounded following) lo_fno,
                           first_value(block_id)
                             over (partition by grp order by relative_fno, block_id
                                   rows between unbounded preceding and unbounded following) lo_block,
                           last_value(relative_fno)
                             over (partition by grp order by relative_fno, block_id
                                   rows between unbounded preceding and unbounded following) hi_fno,
                           last_value(block_id+blocks-1)
                             over (partition by grp order by relative_fno, block_id
                                   rows between unbounded preceding and unbounded following) hi_block,
                           sum(blocks) over (partition by grp) sum_blocks
                    from (
                           select relative_fno,
                                  block_id,
                                  blocks,
                                  trunc( (sum(blocks) over (order by relative_fno, block_id)-0.01) /
                                         (sum(blocks) over ()/100) ) grp
                           from dba_extents
                           where segment_name = upper(''TEST_REGION_SOURCE'')
                           and owner = ''FUSION''
                           order by block_id
                         )
                  ),
                  (select data_object_id from user_objects where object_name = upper(''TEST_REGION_SOURCE''))';
  DBMS_PARALLEL_EXECUTE.create_task (task_name => l_task);
  DBMS_PARALLEL_EXECUTE.create_chunks_by_sql(task_name => l_task,
                                             sql_stmt  => l_stmt,
                                             by_rowid  => true);
  l_sql_stmt := 'insert into FUSION.TEST_REGION_TARGET(REGION_ID,REGION1,REGION2,REGION3,REGION4,
                 ...., REGION99)
                 SELECT REGION_ID,REGION1,REGION2,REGION3,REGION4,
                 .....,REGION99
                 from FUSION.TEST_REGION_SOURCE WHERE (1=1) AND rowid BETWEEN :start_id AND :end_id ';
  DBMS_PARALLEL_EXECUTE.run_task(task_name      => l_task,
                                 sql_stmt       => l_sql_stmt,
                                 language_flag  => DBMS_SQL.NATIVE,
                                 parallel_level => 10);
  -- If there is an error, resume it at most 2 times.
  l_try := 0;
  l_status := DBMS_PARALLEL_EXECUTE.task_status(l_task);
  WHILE (l_try < 2 and l_status != DBMS_PARALLEL_EXECUTE.FINISHED)
  LOOP
    l_try := l_try + 1;
    DBMS_PARALLEL_EXECUTE.resume_task(l_task);
    l_status := DBMS_PARALLEL_EXECUTE.task_status(l_task);
  END LOOP;
  DBMS_PARALLEL_EXECUTE.drop_task(l_task);
END;

  • DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_SQL

    Is there a way to create chunks using DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_SQL and assign range values of a specific column?

The common way is to chunk by rowid.
For a better understanding: why would you want to chunk by a specific column?
It appears you can do it anyway:
    http://docs.oracle.com/cd/E11882_01/appdev.112/e40758/d_parallel_ex.htm#ARPLS67341
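For the record, the package also has CREATE_CHUNKS_BY_NUMBER_COL, which chunks directly on a NUMBER column. A minimal sketch (table and column names are made up):

BEGIN
  DBMS_PARALLEL_EXECUTE.create_task(task_name => 'chunk_by_col_demo');
  -- split MY_TABLE into ranges of 10000 on its numeric ID column
  DBMS_PARALLEL_EXECUTE.create_chunks_by_number_col(
      task_name    => 'chunk_by_col_demo',
      table_owner  => USER,
      table_name   => 'MY_TABLE',
      table_column => 'ID',
      chunk_size   => 10000);
END;
/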

  • DBMS_PARALLEL_EXECUTE package of oracle

    Gurus,
Can anyone guide me on how to use this Oracle-supplied package (DBMS_PARALLEL_EXECUTE) in a generic way?
What I am looking for is a generic process which uses this package and triggers off by_rowid, by_col, or by_sql processes.
Also, can this package execute arbitrary PL/SQL blocks in parallel?
Please help me understand this with some examples....
    Thanks in advance!
    Regards,
    Manik

Manik wrote:
Can anyone guide me on how to use this Oracle-supplied package (DBMS_PARALLEL_EXECUTE) in a generic way? What I am looking for is a generic process which uses this package and triggers off by_rowid, by_col, or by_sql processes.

The basic concept behind this package (and the Oracle PQ feature) is to take loads and loads of I/O, break it up into distinct (rowid) ranges, and process these ranges using separate and parallel processes.
So I'm not sure what "generic feature" you see in this. The data is read (I/O'ed) for a reason. That reason is specific. Not generic. E.g. scan all rows in a large table and find the rows that match a certain filter condition. That means a very specific SQL statement that is run in parallel. Such as selecting all blue widgets with a foo attachment, that are 2mm in diameter and are between 10 - 20mm in length, and were manufactured during the last month, and shipped from factory to shop in the truck with registration ca 12345.
This is specific. Not generic.
So perhaps you need to explain what you imply with generic.

Also, can this package execute arbitrary PL/SQL blocks in parallel?

Not really. The SQL engine does parallel processing. Not the PL/SQL engine. Different languages. Different concepts.
You can call SQL from PL/SQL. You can call PL/SQL from SQL.
So the latter, if run in parallel, can call and use PL/SQL, in parallel. The PL/SQL code unit that supports being called like that is known as a pipelined table function.
It seems to me that you are approaching parallel processing in Oracle with some client threading preconceptions. PL/SQL and SQL are server-side languages, and the environment is different from that of a client language and environment. Maybe you should empty your cup of these preconceptions first, to understand what Tubs said quite rightly, and to understand what server-side parallel processing is about in Oracle.
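A minimal sketch of such a pipelined table function, declared PARALLEL_ENABLE so that the SQL engine can spread it across parallel slaves (the type, function, and query here are made up for illustration):

CREATE TYPE num_tab IS TABLE OF NUMBER;
/

CREATE OR REPLACE FUNCTION double_values(p_cur SYS_REFCURSOR)
  RETURN num_tab PIPELINED
  PARALLEL_ENABLE (PARTITION p_cur BY ANY)
IS
  l_val NUMBER;
BEGIN
  LOOP
    FETCH p_cur INTO l_val;
    EXIT WHEN p_cur%NOTFOUND;
    PIPE ROW (l_val * 2);  -- arbitrary PL/SQL work, row by row
  END LOOP;
  RETURN;
END;
/

-- the SQL engine, not PL/SQL, drives the parallelism:
SELECT * FROM TABLE(double_values(CURSOR(SELECT /*+ parallel(e 4) */ empno FROM emp e)));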

  • What is DBMS_PARALLEL_EXECUTE doing in the background

What is the best way to see all the actual SQL statements that are being executed in the background when a package is executed?
For example, I am interested in knowing what the DBMS_PARALLEL_EXECUTE package is doing in the background. I've read what the procedures do and I understand the functionality. But I'd like to know how it does it. I wanted to know what create_chunks_by_number_col is doing in the background.

    970021 wrote:
What is the best way to see all the actual SQL statements that are being executed in the background when a package is executed?
For example, I am interested in knowing what the DBMS_PARALLEL_EXECUTE package is doing in the background. I've read what the procedures do and I understand the functionality. But I'd like to know how it does it. I wanted to know what create_chunks_by_number_col is doing in the background.
    OK - I'm confused.
    You said you 'read what the procedures do' but the doc explains pretty clearly (IMHO) exactly how it creates the chunks.
    http://docs.oracle.com/cd/E11882_01/appdev.112/e16760/d_parallel_ex.htm#CHDHFCDJ
CREATE_CHUNKS_BY_NUMBER_COL Procedure
This procedure chunks the table (associated with the specified task) by the specified column. The specified column must be a NUMBER column. This procedure takes the MIN and MAX values of the column, and then divides the range evenly according to chunk_size. The chunks are:
START_ID                              END_ID
min_id_val                            min_id_val+1*chunk_size-1
min_id_val+1*chunk_size               min_id_val+2*chunk_size-1
...
min_id_val+i*chunk_size               max_id_val
    So I am at a loss to know how that particular example is of any value to you.
    That package creates a list of START_ID and END_ID values, one pair of values for each 'chunk'. It then starts a parallel process for each chunk that queries the table using a where clause that is basically just this:
WHERE userColumn BETWEEN :START_ID AND :END_ID
    The RUN_TASK Procedure explains part of that
    RUN_TASK Procedure
    This procedure executes the specified statement (sql_stmt) on the chunks in parallel. It commits after processing each chunk. The specified statement must have two placeholders called start_id, and end_id respectively, which represent the range of the chunk to be processed. The types of the placeholder must be rowid where ROWID based chunking was used, or NUMBER where number based chunking was used. The specified statement should not commit unless it is idempotent.
    The SQL statement is executed as the current user.
    Examples
    Suppose the chunk table contains the following chunk ranges:
    START_ID                              END_ID
    1                                     10
    11                                    20
    21                                    30
And the specified SQL statement is:
UPDATE employees
      SET salary = salary + 10
      WHERE employee_id BETWEEN :start_id AND :end_id
This procedure executes the following statements in parallel:
UPDATE employees
      SET salary = salary + 10  WHERE employee_id BETWEEN 1 AND 10;
      COMMIT;
UPDATE employees
      SET salary = salary + 10  WHERE employee_id BETWEEN 11 AND 20;
      COMMIT;
UPDATE employees
      SET salary = salary + 10  WHERE employee_id BETWEEN 21 AND 30;
      COMMIT;
    You could just as easily write those queries yourself for chunking by number. But you couldn't execute them in parallel unless you created a scheduler job.
    So like the doc says Oracle is just:
    1. getting the MIN/MAX of the column
    2. creating a process for each entry in the 'chunk table'
    3. executing those processes in parallel
4. committing each process individually
5. maintaining status for you.
    I'm not sure what you would expect to see on the backend for an example like that.
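If you do want to watch the package at work, its chunk bookkeeping is exposed in the USER_PARALLEL_EXECUTE_TASKS and USER_PARALLEL_EXECUTE_CHUNKS views. For example (the task name is whatever you passed to CREATE_TASK):

SELECT chunk_id, status, start_id, end_id
FROM   user_parallel_execute_chunks
WHERE  task_name = 'MY_TASK'
ORDER  BY chunk_id;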

  • Update statement not working inside DBMS_PARALLEL_EXECUTE

CREATE OR REPLACE PROCEDURE proc_table_mask_par2 (
   p_tblName   IN VARCHAR2,
   p_strmCount IN NUMBER)
IS
/*
| Name:     proc_table_mask
| Version:  1.0
| Function: The procedure generates the block which will mask the table based on inputs from
|           the MSK_INVENTORY_COLS table
| Modification History:
| Ver:  Date:        Who:  What:
| 1.0   2012-11-26   HB    Created
*/
   vtbl_Name       varchar2(100);
   vtbl_Count      number(19,0);
   vStrm_row_count number(19,0);
   vCurs_count     number(19,0) := 1;
   vsql            varchar2(30000);
   vsql1           varchar2(30000);
   vstragg         varchar2(4000);
   v_try           number;
   v_status        number;
   v_status1       number;
   vtaskName       varchar2(100) := 'Mask job for '||p_tblName;
   pstartnum       number(19,0);
   pendnum         number(19,0);
   v_prim_key      varchar2(100);
   --retries_in  PLS_INTEGER DEFAULT 2;
   --l_attempts  PLS_INTEGER := 1;
begin
   -- Use function Stragg to get the update statement from MSK_INVENTORY_COLS
   select stragg(MIC.COLUMN_NAME || ' = ' || MIC.FUNCTION_NAME || '(' || MIC.COLUMN_NAME || ')')
     into vstragg
     from MSK_INVENTORY_COLS mic
    WHERE MIC.TABLE_NAME = p_tblName;
   EXECUTE IMMEDIATE 'select count(1) from '||p_tblName into vtbl_Count;
   --DBMS_OUTPUT.PUT_LINE ( 'vtbl_Count : ' ||vtbl_Count);
   vStrm_row_count := round(vtbl_Count/p_strmCount);
   dbms_output.put_line(' vStrm_row_count : ' || vStrm_row_count);
   -- Update statement
   vsql := vsql ||chr(10) || ' UPDATE '|| p_tblName || ' /*+ parallel ( '||p_tblName ||', '||p_strmCount||') */ ';
   vsql := vsql ||chr(10) || ' SET '|| vstragg;
   vsql := vsql ||chr(10) || ' , lock_id = -1 ';
   vsql := vsql ||chr(10) || 'WHERE ROWID BETWEEN :starting_rowid AND :ending_rowid';
   vsql := vsql ||chr(10) || ' and lock_id <> -1 ; ';
   dbms_output.put_line (' vsql : ' || vsql);
   DBMS_PARALLEL_EXECUTE.CREATE_TASK (vtaskName);
   DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_ROWID (task_name   => vtaskName
                                               , table_owner => SYS_CONTEXT ('USERENV', 'SESSION_USER') --USER
                                               , table_name  => p_tblName
                                               , by_row      => TRUE
                                               , chunk_size  => vStrm_row_count);
   DBMS_PARALLEL_EXECUTE.RUN_TASK (task_name      => vtaskName
                                 , sql_stmt       => vsql
                                 , language_flag  => DBMS_SQL.native
                                 , parallel_level => p_strmCount);
   -- Only resume for the following
   -- INVALID_STATE_FOR_RESUME ORA-29495
   -- Attempts to resume execution, but the task is not in FINISHED_WITH_ERROR or CRASHED state
   -- Constant value for CRASHED = 8
   -- Constant value for FINISHED_WITH_ERROR = 7
   v_try := 0;
   v_status := DBMS_PARALLEL_EXECUTE.TASK_STATUS(vtaskName);
   dbms_output.put_line (' v_status : ' || v_status);
   dbms_output.put_line (' v_try : ' || v_try);
   WHILE (v_try < 2 and v_status != DBMS_PARALLEL_EXECUTE.FINISHED and ((v_status = 7) or (v_status = 8)))
   LOOP
      v_try := v_try + 1;
      DBMS_OUTPUT.PUT_LINE (' Why am I getting into this loop : ');
      DBMS_PARALLEL_EXECUTE.RESUME_TASK(vtaskName);
      v_status := DBMS_PARALLEL_EXECUTE.TASK_STATUS(vtaskName);
   END LOOP;
   DBMS_PARALLEL_EXECUTE.DROP_TASK(vtaskName);
exception
   when others then
      dbms_output.put_line(sqlerrm);
      raise;
end;
    Gurus
    I am executing the procedure above using the following anonymous block.
    DECLARE
    P_TBLNAME VARCHAR2(32767);
    P_STRMCOUNT NUMBER;
    BEGIN
    P_TBLNAME := 'EMPLOYEE_DIM';
    P_STRMCOUNT := 10;
    A516907.PROC_TABLE_MASK_PAR2 ( P_TBLNAME, P_STRMCOUNT );
    COMMIT;
    END;
I have used dbms_output to print the values of the following variables. When I check the table, the update does not seem to have worked.
    vStrm_row_count : 60143
    vsql :
    UPDATE EMPLOYEE_DIM /*+ parallel ( EMPLOYEE_DIM, 10) */
    SET
    BUSINESS_TITLE_NM = FN_TITLE_DRM_ENCRYPTNSUM(BUSINESS_TITLE_NM),COST_CENTER_CD =
    FN_COSTCTR_DRM_ENCRYPTNSUM(COST_CENTER_CD),DIRECT_MGR_NM =
    FN_DRM_REG_ADDR_TEXT(DIRECT_MGR_NM),FIRST_NM =
    FN_FNM_DRM_ENCRYPTNSUM(FIRST_NM),LAST_FIRST_FULL_NM =
    FN_DRM_REG_ADDR_TEXT(LAST_FIRST_FULL_NM),LAST_FIRST_MIDDLE_FULL_NM =
    FN_DRM_REG_ADDR_TEXT(LAST_FIRST_MIDDLE_FULL_NM),LAST_NM =
    FN_LNM_DRM_ENCRYPTNSUM(LAST_NM),PHONE_NO =
    FN_PHONE_DRM_ENCRYPTNSUM(PHONE_NO),PRIMARY_EMAIL_ADDRESS_NM =
    FN_EMAIL_DRM_ENCRYPTNSUM(PRIMARY_EMAIL_ADDRESS_NM)
    , lock_id = -1
    WHERE ROWID
    BETWEEN :starting_rowid AND :ending_rowid
    and lock_id <> -1 ;
    v_status : 4
    v_try : 0

I tried to do the update using chunks by SQL and ran the procedure again. No updates are made.
CREATE OR REPLACE PROCEDURE proc_table_mask_chunk_sql (
   p_tblName   IN VARCHAR2,
   p_strmCount IN NUMBER)
IS
/*
| Name:     A516907.proc_table_mask
| Version:  1.0
| Function: The procedure generates the block which will mask the table based on inputs from
|           the MSK_INVENTORY_COLS table
| Modification History:
| Ver:  Date:        Who:  What:
| 1.0   2012-11-26   HB    Created
*/
   vtbl_Name       varchar2(100);
   vtbl_Count      number(19,0);
   vStrm_row_count number(19,0);
   vCurs_count     number(19,0) := 1;
   vsql            varchar2(1000);
   vsql_pk         varchar2(1000);
   vstragg         varchar2(4000);
   vtaskName       varchar2(100) := 'Mask Data in table '||p_tblName;
   pstartnum       number(19,0);
   pendnum         number(19,0);
   upd_st          number(19,0) := 1;
   v_prim_key      varchar2(100);
   l_try           NUMBER;
   l_status        NUMBER;
begin
   DBMS_PARALLEL_EXECUTE.CREATE_TASK (vtaskName);
   -- Use function Stragg to get the update statement from MSK_INVENTORY_COLS
   select stragg(MIC.COLUMN_NAME || ' = ' || MIC.FUNCTION_NAME || '(' || MIC.COLUMN_NAME || ')')
     into vstragg
     from MSK_INVENTORY_COLS mic
    WHERE MIC.TABLE_NAME = p_tblName;
   select stragg(UCC.COLUMN_NAME) COLUMN_NAME
     into v_prim_key
     from user_constraints uc, user_cons_columns ucc
    where UC.CONSTRAINT_TYPE = 'P'
      and UC.CONSTRAINT_NAME = UCC.CONSTRAINT_NAME
      and UCC.TABLE_NAME = p_tblName;
   vsql_pk := 'SELECT distinct ' || v_prim_key || ',' || v_prim_key || ' FROM ' || p_tblName;
   DBMS_OUTPUT.PUT_LINE ('vsql_pk : ' || vsql_pk);
   --EXECUTE IMMEDIATE ' select stragg(COLUMN_NAME ||''=''||FUNCTION_NAME||''(''||COLUMN_NAME||'')'') from MSK_INVENTORY_COLS WHERE TABLE_NAME = ' ||p_tblName  INTO vstragg ;
   --EXECUTE IMMEDIATE 'select count(1)   from vtbl_Name' into vtbl_Count;
   EXECUTE IMMEDIATE 'select count(1) from '||p_tblName into vtbl_Count;
   --DBMS_OUTPUT.PUT_LINE ( 'vtbl_Count : ' ||vtbl_Count);
   vStrm_row_count := round(vtbl_Count/p_strmCount);
   DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_SQL(vtaskName, vsql_pk, false);
   --DBMS_OUTPUT.PUT_LINE ( 'vStrm_row_count : ' ||vStrm_row_count);
   --EXECUTE IMMEDIATE 'SELECT MIN( '||v_prim_key||') from ' ||p_tblName into pstartnum;
   --pendnum :=  vStrm_row_count;
   -- Update statement
   vsql := vsql ||chr(10) || ' UPDATE '|| p_tblName || ' /*+ parallel ( '||p_tblName ||', '||p_strmCount||') */ ';
   vsql := vsql ||chr(10) || ' SET '|| vstragg;
   vsql := vsql ||chr(10) || ' , lock_id = -1 WHERE ';
   vsql := vsql ||chr(10) || v_prim_key|| ' BETWEEN :start_id and :end_id ';
   vsql := vsql ||chr(10) || ' and lock_id <> -1 ; ';
   --DBMS_PARALLEL_EXECUTE.CREATE_TASK (vtaskName||'_'||upd_st);
   DBMS_PARALLEL_EXECUTE.RUN_TASK (vtaskName
                                 , vsql
                                 , DBMS_SQL.native
                                 , parallel_level => p_strmCount);
   l_try := 0;
   l_status := DBMS_PARALLEL_EXECUTE.TASK_STATUS(vtaskName);
   WHILE (l_try < 2 and l_status != DBMS_PARALLEL_EXECUTE.FINISHED and ((l_status = 7) or (l_status = 8)))
   LOOP
      l_try := l_try + 1;
      DBMS_PARALLEL_EXECUTE.RESUME_TASK(vtaskName);
      l_status := DBMS_PARALLEL_EXECUTE.TASK_STATUS(vtaskName);
   END LOOP;
end;
    Block run :
    DECLARE
    P_TBLNAME VARCHAR2(32767);
    P_STRMCOUNT NUMBER;
    BEGIN
    P_TBLNAME := 'EMPLOYEE_DIM';
    P_STRMCOUNT := 10;
    A516907.PROC_TABLE_MASK_CHUNK_SQL ( P_TBLNAME, P_STRMCOUNT );
    COMMIT;
    END;
    /
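Two things stand out against the RUN_TASK documentation quoted elsewhere on this page: the placeholders must be named :start_id and :end_id (the first procedure uses :starting_rowid/:ending_rowid), and a statement executed this way must not carry a trailing semicolon inside the string (both procedures append ' ; '). A sketch of the expected statement shape for rowid chunking, using the poster's table as an example:

vsql := 'UPDATE employee_dim
            SET lock_id = -1
          WHERE rowid BETWEEN :start_id AND :end_id
            AND lock_id <> -1';  -- note: no trailing semicolon inside the string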

Are there any database parameters we need to set when using DBMS_PARALLEL_EXECUTE?

    Hi,
I am using the dbms_parallel_execute package to do some processing: I call a procedure that does some tasks and inserts data, using chunks by SQL. Here is my code; I am calling the P_process procedure. I am using 11g 11.2.0.3.0 - 64 bit on Windows Server.
DBMS_PARALLEL_EXECUTE.CREATE_TASK ('process a');
DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_SQL (
    task_name => 'process a',
    sql_stmt  => 'SELECT startid, endid FROM chng_chunks',
    by_rowid  => FALSE);
DBMS_PARALLEL_EXECUTE.RUN_TASK (
    task_name      => 'process a',
    sql_stmt       => 'begin P_process( :start_id, :end_id ); end;',
    language_flag  => DBMS_SQL.NATIVE,
    parallel_level => 24);
This code runs very fast on one database and I can see that it uses a lot of CPUs, but it runs very slowly on a copy of the same database on another server which has more CPUs and memory. I compared v$parameter values and they are pretty much identical between the databases. I checked the disk space and both servers have plenty of free space on disk.
Now my question is: are there any other parameters that we need to set/check when using the dbms_parallel_execute package?
    Thanks in advance.
    gg

    I don't get this. Ever. Why developers insist on comparing server1 with server2, simply because their code is running on both.
It is like comparing the athletic ability of two persons, arguing that h/w wise they are the same (i.e. human), and both have green eyes (your same software). And because these are all the same, both persons should be able to run the 100m in the same time.
Yes, the analogy is silly... as is the warped concept amongst many developers that server1 and server2 should exhibit the same performance when running the same code.
    It. Does. Not. Work. Like. That.
    Want to know why server2 is exhibiting the performance it does when running your code?
    Do that by ignoring server1 as it is NOT RELEVANT.
    Do that by examining the workloads and resource usage of server2, and the run and wait states of your code on server2.
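A starting point for that on server2, once you have the SID of a session running one of the chunk jobs (sketch):

-- where has this session spent its wait time?
SELECT event, total_waits, time_waited
FROM   v$session_event
WHERE  sid = :sid
ORDER  BY time_waited DESC;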

  • DML Error logging with delete restrict

    Hi,
I am trying to log all DML errors while performing an ETL process. We encountered a problem in the process when ON DELETE CASCADE was missing on one of the child tables, but I was curious to know why we got that exception in the calling environment, because we are logging all DML errors in err$_ tables. Our expectation is that when we get a child-record-found violation, the error will be logged into the ERR$_ tables and the process will carry on without interruption, but instead it was interrupted in the middle and terminated. I can illustrate with the example below.
    T1 -> T2 -> T3
T1 is the parent and it is the root.
Create table t1 (id number primary key, id2 number);
Create table t2 (id number primary key references t1(id) on delete cascade, id2 number);
create table t3 (id number references t2(id)); -- Missing on delete cascade
insert into t1 select level, level from dual connect by level < 20;
insert into t2 select level, level from dual connect by level < 20;
insert into t3 select level from dual connect by level < 20;
exec dbms_errlog.create_error_log('T1');
exec dbms_errlog.create_error_log('T2');
exec dbms_errlog.create_error_log('T3');
delete from t1 where id = 1 log errors into err$_t1 reject limit unlimited;   -- Child record found violation due to t3 is raised, but I am expecting this error to be trapped in the log tables.
delete from t2 where id = 1 log errors into err$_t2 reject limit unlimited; -- Got the same child record violation. My expectation is that the error will be logged into the log tables.
    I am using Oracle 11gR2.
Also, please let me know if there are any restrictions on using DML error logging with DBMS_PARALLEL_EXECUTE.
    Please advise
    Thanks,
    Umakanth

What is the error you want me to fix? The missing ON DELETE CASCADE?
The code you posted had multiple syntax errors and couldn't possibly run as posted. You should post code that actually works.
    My expectation is all the DML errors will be logged into error logging tables even if it is child record found violation.
    delete from t1 where id = 1 log errors into err$_t1 reject limit unlimited;  -- Child record found violation due to t3 raised but I am expecting this error will be trapped in log tables.
    delete from t2 where id =1 log errors into err$_t2 reject limit unlimited; -- Got the same error child record violation. My expectation error will be logged into log tables.
    DML error logging logs DATA. When you delete from T1 there is an error because the T2 child record can NOT be deleted. So the T1 row that was being deleted is logged into the T1 error log table. The request was to delete a T1 row so that is the only request that failed; the child rows in T2 and T3 will not be put into log tables.
    Same when you try to delete from T2. The T3 child record can NOT be deleted so the T2 row that was being deleted is logged into the T2 error log table.
    The exceptions that occur are NOT logged, only the data that the DML could not be performed on.
    After I fixed your code your example worked fine for me and logged into the DML error tables as expected. But I wasn't doing it from a client.

DBMS_PARALLEL_EXECUTE is not working for insert query... Need help!!

    Hi All..
I am trying to use the dbms_parallel_execute package to insert into my target table.
But at the end of the execution, the rows are not getting inserted into the target table.
Could anyone please help with this?
Below are the statements....
    create table target_table as select * from source_table where 1=0;
    --source_table has 100000 rows.
    BEGIN
      DBMS_PARALLEL_EXECUTE.create_task (task_name => 'test1');
    END;
    BEGIN
      DBMS_PARALLEL_EXECUTE.create_chunks_by_rowid(task_name   => 'test1',
                                                   table_owner => 'SYSTEMS',
                                                   table_name  => 'TARGET_TABLE',
                                                   by_row      => TRUE,
                                                   chunk_size  => 10000);
    END;
    DECLARE
      l_sql_stmt VARCHAR2(32767);
    BEGIN
      l_sql_stmt := 'insert into PRD_TAB
         select * from dbmntr_prd_tab';
      DBMS_PARALLEL_EXECUTE.run_task(task_name      => 'test1',
                                     sql_stmt       => l_sql_stmt,
                                     language_flag  => DBMS_SQL.NATIVE,
                                     parallel_level => 10);
    END;
After executing the above statements, I find that the target table has zero rows. Could anyone please correct me if I am wrong with any of the above statements?

Could anyone please correct me if I am wrong with any of the above statements?
As Hoek said, you haven't created the SQL statement properly. See the 'RUN_TASK Procedure' section of the DBMS_PARALLEL_EXECUTE chapter of the doc:
    http://docs.oracle.com/cd/E11882_01/appdev.112/e16760/d_parallel_ex.htm#CHDIBHHB
    sql_stmt
    SQL statement; must have :start_id and :end_id placeholder
    That doc has an example in it.
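A sketch of what that would look like here, assuming the intent is to copy SOURCE_TABLE into TARGET_TABLE (note also that the chunks should be created on the table whose rowids the statement scans, i.e. the source):

DECLARE
  l_sql_stmt VARCHAR2(32767);
BEGIN
  -- :start_id and :end_id are the mandatory placeholders
  l_sql_stmt := 'insert into target_table
                 select * from source_table
                 where rowid between :start_id and :end_id';
  DBMS_PARALLEL_EXECUTE.run_task(task_name      => 'test1',
                                 sql_stmt       => l_sql_stmt,
                                 language_flag  => DBMS_SQL.NATIVE,
                                 parallel_level => 10);
END;
/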

PL/SQL error while using collections (database: 10g)

    Hi,
I am getting the error below while compiling the following code:
Error: DML statement without BULK In-BIND cannot be used inside FORALL
Could you suggest a fix?
create or replace PROCEDURE V_ACCT_MTH (P_COMMIT_INTERVAL NUMBER DEFAULT 10000)
is
   CURSOR CUR_D_CR_ACCT_MTH
   IS
      SELECT * FROM D_ACCT_MTH;
   TYPE l_rec_type IS TABLE OF CUR_D_CR_ACCT_MTH%ROWTYPE
      INDEX BY PLS_INTEGER;
   v_var_tab   l_rec_type;
   v_empty_tab l_rec_type;
   v_error_msg VARCHAR2(80);
   v_err_code  VARCHAR2(30);
   V_ROW_CNT   NUMBER := 0;
   --R_DATA    NUMBER := 1;
BEGIN
   OPEN CUR_D_CR_ACCT_MTH;
   v_var_tab := v_empty_tab;
   LOOP
      FETCH CUR_D_CR_ACCT_MTH BULK COLLECT INTO v_var_tab LIMIT P_COMMIT_INTERVAL;
      EXIT WHEN v_var_tab.COUNT = 0;
      -- Note: the references below (DATE_KEY(R_DATA) etc.) are what trigger the
      -- reported error; inside a FORALL the DML must reference the bulk collection
      -- with the loop index, e.g. VALUES v_var_tab(R_DATA) for a record insert.
      FORALL R_DATA IN 1..v_var_tab.COUNT
         INSERT INTO ACCT_F_ACCT_MTH
            (DATE_KEY, ACCT_KEY, P_ID, ORG_KEY, FDIC_KEY,
             BAL, BAL1, BAL2, BAL3, BAL4, BAL5, BAL6, BAL7,
             BAL8, BAL9, BAL10, BAL11, BAL12, BAL13, BAL14, BAL15)
         VALUES
            (DATE_KEY(R_DATA), ACCT_KEY(R_DATA), P_ID(R_DATA), ORG_KEY(R_DATA), FDIC_KEY(R_DATA),
             BAL(R_DATA), BAL(R_DATA), BAL(R_DATA), BAL(R_DATA), BAL(R_DATA), BAL(R_DATA), BAL(R_DATA),
             BAL(R_DATA), BAL(R_DATA), BAL(R_DATA), BAL(R_DATA), BAL(R_DATA), BAL(R_DATA), BAL(R_DATA),
             BAL(R_DATA), BAL(R_DATA));
      COMMIT;
   END LOOP;
   CLOSE CUR_D_CR_ACCT_MTH;
EXCEPTION
   WHEN OTHERS THEN
      v_error_msg := substr(sqlerrm, 1, 50);
      v_err_code  := sqlcode;
      DBMS_OUTPUT.PUT_LINE(v_error_msg || ' ' || v_err_code);
END V_ACCT_MTH;

931832 wrote:
Here I am using the above method with FORALL because of the large volume of data.

Which is a FLAWED approach. Always.
FORALL is not suited to "move/copy" large amounts of data from one table to another.

Any suggestion?

Use only SQL. It is faster. It has less overhead. It can execute in parallel.
So execute it in parallel to move/copy that data. You can roll this manually via the DBMS_PARALLEL_EXECUTE interface. Simplistic example:
declare
        taskName        varchar2(30) default 'PQ-task-1';
        parallelSql     varchar2(1000);
begin
        --// create task
        DBMS_PARALLEL_EXECUTE.create_task( taskName );

        --// chunk the table by rowid ranges
        DBMS_PARALLEL_EXECUTE.create_chunks_by_rowid(
                task_name => taskName,
                table_owner => user,
                table_name => 'D_ACCT_MNTH',
                by_row => true,
                chunk_size => 100000
        );

        --// create insert..select statement to copy a chunk of rows
        parallelSql := 'insert into acct_f_acct_mth select * from d_acct_mnth
                        where rowid between :start_id and :end_id';

        --// run the task using 5 parallel processes
        DBMS_PARALLEL_EXECUTE.Run_Task(
                task_name => taskName,
                sql_stmt => parallelSql,
                language_flag => DBMS_SQL.NATIVE,
                parallel_level => 5
        );

        --// wait for it to complete
        while DBMS_PARALLEL_EXECUTE.task_status( taskName ) != DBMS_PARALLEL_EXECUTE.Finished loop
                DBMS_LOCK.Sleep(10);
        end loop;

        --// remove task
        DBMS_PARALLEL_EXECUTE.drop_task( taskName );
end;
/

Details in the Oracle® Database PL/SQL Packages and Types Reference guide.
For 10g, the EXACT SAME approach can be used - by determining the rowid chunks/ranges via SQL and then manually running parallel processes as DBMS_JOB. See {message:id=1108593} for details.

  • How to create special column which represents result of a query

    Hi all,
    I need your help once more.
    The situation is the following:
I have a table MESSAGE which has several billion entries. The columns are msg_id, vehicle_id, timestamp, data, etc.
    I have another table VEHICLE which holds static vehicle data (about 20k rows) such as vehicle_id, licenceplate, etc.
    My first target was to partition the table via timestamp (by range) and subpartition by vehicle_id (by hash).
    So I could easily drop old data by dropping old partitions and tablespaces.
Now comes the new, difficult second target: the messages of some vehicles must be kept forever.
My idea is to add a column KEEP_DATA to the MESSAGE table. I could try to partition by timestamp AND KEEP_DATA, and subpartition by vehicle_id.
    The problem of this idea is that i have to update billions of rows.
    It would be perfect if there is a possibility to add this KEEP_DATA-flag to the table vehicle.
    Is there any way to "link" this information to a column in MESSAGE table?
    I mean something like this:
    alter table MESSAGE
    add column (select keep_data from vehicle where VEHICLE.vehicle_id = MESSAGE.vehicle_id as keep_message) ;
    Is there some possibility like that?
    Would the partitioning on this column / statement work?
Would the value of keep_message be calculated at runtime?
If so, will the performance impact be noticeable?
And will performance also suffer if the application queries all rows except keep_message?
    Kind regards,
    Andreas

What is your DB version?

The problem of this idea is that i have to update billions of rows.

If this is your underlying problem then, if you are on 11g or above, you can use [url http://docs.oracle.com/cd/E14072_01/appdev.112/e10577/d_parallel_ex.htm]DBMS_PARALLEL_EXECUTE and split your update into multiple chunks executed in parallel.

I mean something like this:
alter table MESSAGE
add column (select keep_data from vehicle where VEHICLE.vehicle_id = MESSAGE.vehicle_id as keep_message) ;

As far as I know such a thing is not possible.
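A sketch of that chunked backfill, assuming a nullable KEEP_DATA column has already been added to MESSAGE (the task name and chunk size are arbitrary):

BEGIN
  DBMS_PARALLEL_EXECUTE.create_task('backfill_keep_data');
  -- split MESSAGE into rowid ranges of roughly 100k rows each
  DBMS_PARALLEL_EXECUTE.create_chunks_by_rowid(
      task_name   => 'backfill_keep_data',
      table_owner => USER,
      table_name  => 'MESSAGE',
      by_row      => TRUE,
      chunk_size  => 100000);
  -- each worker copies the flag from VEHICLE for its own rowid range
  DBMS_PARALLEL_EXECUTE.run_task(
      task_name      => 'backfill_keep_data',
      sql_stmt       => 'update message m
                            set m.keep_data = (select v.keep_data
                                                 from vehicle v
                                                where v.vehicle_id = m.vehicle_id)
                          where m.rowid between :start_id and :end_id',
      language_flag  => DBMS_SQL.NATIVE,
      parallel_level => 8);
END;
/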

  • *Dynamic* Table Name in From Clause with input from cursor

    Hello
    I have a cursor say...
    select table_name from dba_tables where <blah>
    The result is
    row1:: emp
row2:: emp_1 ---> Both tables have the same structure and entirely different data... please don't ask why; that's the way it is and we can't change it.
    Now we need to run an Insert...
    insert into tableX (col1,col2,...) select a,b,... from <o/p of the cursor> where <blah> ...
Note: the table name changes; the cursor can output emp, emp_a and emp_b.
I am looking to do it in parallel instead of serially, with the best performance and no SQL injection issues.
By parallel I mean
insert into tableX (col1,col2,...) select a,b,... from emp where <blah>
and insert into tableX (col1,col2,...) select a,b,... from emp_1 where <blah> statements firing in parallel/at the same time against the database. If you can share a procedure you already have, that would be really appreciated.
    Thanks a lot for your time....

Hello, thanks for your time.
I tried to implement the chunk-by-SQL parallel execution approach and it took 3.1 seconds to complete, while the SP took around 0.042 seconds. The parallel process didn't throw any errors, but it didn't insert any data either. I am not sure what I am doing wrong; can you please let me know your thoughts?
Sample Data Creation:
    drop table table_ASERCARE purge;
    drop table table_MEDCARE purge;
    DROP TABLE TABLE_XYCARE PURGE;
    DROP TABLE TABLE_TIME PURGE;
    DROP TABLE TABLE_LOCATION PURGE;
    drop table table_group purge;
    drop table tablex purge;
    -- select distinct TABLE_NAME from ALL_TAB_COLS where TABLE_NAME like 'EMP%';
    create table table_asercare (time number(30), location_number number(5), value number(5),catg_id number(5));
    insert into table_asercare values  (20110111, 01, 55, 1200);
    insert into table_asercare values  (20110131, 01, 31, 1223);
    insert into table_asercare values  (20120131, 15, 24,1224);
    insert into table_ASERCARE values  (20130131, 03, 555,1200);
    -- Truncate table table_MEDCARE
    create table table_medcare (time number(30), location_number number(5), value number(5),catg_id number(5));
    insert into table_medcare values  (20110113, 01, 23, 1200);
    insert into table_medcare values  (20110128, 02, 78, 1223);
    insert into table_medcare values  (20110130, 03, 100, 1224);
    insert into table_medcare values  (20120111, 04, 57, 1200);
    insert into table_medcare values  (20120221, 05, 64, 1223);
    insert into table_MEDCARE values  (20130321, 15, 48, 1224);
    create table table_xycare (time number(30), location_number number(5), value number(5),catg_id number(5));
    insert into table_xycare values  (20100113, 01, 99, 1200);
    insert into table_xycare values  (20110128, 02, 90, 1223);
    insert into table_XYCARE values  (20130128, 03, 24, 1224);
    create table table_LOCATION ( LOCATION_NUMBER number(5), LOCATION_NAME varchar2(50));
    insert into table_LOCATION values  (01, 'atlanta1');
    insert into table_LOCATION values  (02, 'atlanta2');
    insert into table_LOCATION values  (03, 'atlanta3');
    insert into table_LOCATION values  (04, 'atlanta4');
    insert into table_LOCATION values  (05, 'atlanta5');
    insert into table_location values  (15, 'atlanta15');
    create table table_category (catg_id number(5), catg_name varchar2(30));
    insert into table_category values (1200, 'EMS');
    insert into table_category values (1223, 'LJM');
    insert into table_category values (1224, 'LIO');
    create table table_TIME (YEAR_MONTH_DATE number(30), YEAR_VAL number(4), MONTH_VAL number(2),DATE_VAL number(2));
    insert into table_TIME values  (20110111, 2011, 01,11 );
    insert into table_TIME values  (20110131, 2011, 01,31);
    insert into table_TIME values  (20120131, 2012, 01,31);
    insert into table_TIME values  (20130131, 2013, 01,31);
    insert into table_TIME values  (20110128, 2011, 01,28 );
    insert into table_TIME values  (20110130, 2011, 01,30 );
    insert into table_TIME values  (20120111, 2012, 01,11 );
    insert into table_TIME values  (20120221, 2012, 02,21 );
    insert into table_TIME values  (20130321, 2013, 03,21 );
    insert into table_TIME values  (20100113, 2010, 01,13 );
    insert into table_TIME values  (20130128, 2013, 01,28 );
    --Truncate table table_group
    CREATE TABLE table_group (group_key number,table_name VARCHAR2(30), group_name VARCHAR2(30), catg_name varchar2(30));
    insert into table_group values (1,'table_ASERCARE', 'GROUP_ONE','EMS');
    insert into table_group values (2,'table_MEDCARE', 'GROUP_ONE','LJM');
    INSERT INTO TABLE_GROUP VALUES (3,'table_XYCARE', 'GROUP_TWO','LIO');
    create table TABLEX (YEAR_VAL number(4) ,LOCATION_NAME varchar2(50),tablename VARCHAR2(30), cnt number ); --> Proc data will be inserted into this...
Stored Procedure:
    CREATE OR REPLACE
    PROCEDURE ABC(
        GROUP_NAME_IN IN VARCHAR2 )
    is
    type c1 is ref cursor;
        sql_stmt VARCHAR2(200);
        v_sql    VARCHAR2(30000);
        c1_cv c1;
        table_name_f VARCHAR2(30);
        c1_rec TABLE_GROUP%rowtype;
      BEGIN
        SQL_STMT := 'SELECT * FROM TABLE_GROUP WHERE GROUP_NAME = :i';
        OPEN c1_cv FOR SQL_STMT USING GROUP_NAME_IN ;
        loop
          fetch c1_cv  INTO c1_rec;
        exit when c1_cv%notfound;
    --    forall i in c1_rec.FIRST ..c1_rec.last loop
        table_name_f := c1_rec.table_name;
    --      END LOOP;
       EXECUTE immediate
       'INSERT  INTO tablex (YEAR_VAL,LOCATION_NAME, tablename, cnt)
    SELECT
    t.YEAR_VAL,l.location_name, :table_name, count(*) as cnt
    FROM '
        ||table_name_f||
        ' variable_table
    ,table_time t
    , table_location l
    ,table_group g
    ,table_category ctg
    WHERE t.year_month_date = variable_table.TIME
    and variable_table.location_number = l.location_number
    and ctg.catg_id = variable_table.catg_id
    --and ctg.catg_name = g.catg_name
    GROUP BY t.YEAR_VAL,l.location_name,g.catg_name' USING table_name_f;
        --dbms_output.put_line ( 'The SQL is'|| v_sql);
        COMMIT;
        --dbms_output.put_line ( c1_rec.table_name||','||c1_rec.group_name );
        --dbms_output.put_line ( 'The table name is '|| c1_rec.table_name );
      end loop;
        CLOSE c1_cv;
      --null;
    END ABC;
Parallel Execution Code:
    begin
    begin
    DBMS_PARALLEL_EXECUTE.DROP_TASK(task_name => 'TASK_NAME');
    exception when others then null;
    end;
    DBMS_PARALLEL_EXECUTE.CREATE_TASK(task_name => 'TASK_NAME');
    DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_SQL(task_name => 'TASK_NAME', sql_stmt =>'select distinct group_key, group_key from table_group', by_rowid => false);
    end;
    begin
    DBMS_PARALLEL_EXECUTE.RUN_TASK (task_name => 'TASK_NAME',
    sql_stmt =>'declare
    s varchar2(16000); vstart_id number := :start_id; vend_id number:= :end_id;
    table_name varchar2(30);
    begin
    select table_name into table_name from group_table where group_key=vstart_id;
    s:=''INSERT  INTO tablex (YEAR_VAL,LOCATION_NAME, tablename, cnt)
    SELECT
    t.YEAR_VAL,l.location_name, :table_name, count(*) as cnt
    FROM ''||table_name||'' variable_table
    ,table_time t
    , table_location l
    ,table_group g
    ,table_category ctg
    WHERE t.year_month_date = variable_table.TIME
    and variable_table.location_number = l.location_number
    and ctg.catg_id = variable_table.catg_id
    and ctg.catg_name = g.catg_name
    and g.group_key =:vstart_id
    GROUP BY t.YEAR_VAL,l.location_name,g.catg_name'';
    execute immediate s using vstart_id;
    commit;
    end;',
    language_flag => DBMS_SQL.NATIVE, parallel_level => 2 );
    end;
/

Thanks in advance for your time.

  • Commit for every 1000 records in  Insert into select statment

Hi, I've the following INSERT INTO ... SELECT statement.
The SELECT statement (which has joins) returns around 6 crores (60 million) rows of data. I need to insert that data into another table.
Please suggest the best way to do that.
I'm using the INSERT INTO ... SELECT statement, but I want to use a commit statement for every 1000 records.
How can I achieve this?
insert into emp_dept_master
select e.ename ,d.dname ,e.empno ,e.empno ,e.sal
   from emp e , dept d
  where e.deptno = d.deptno       ------ how to use commit for every 1000 records
Thanks

Smile wrote:
Hi, I've the following INSERT INTO ... SELECT statement.
The SELECT statement (which has joins) returns around 6 crores (60 million) rows of data. I need to insert that data into another table.

Does the other table already have records, or is it empty?
If it is empty then you can drop it and create it as:
create table your_another_table
as
<your select statement that returns 60000000 records>

Please suggest me the best way to do that.
I'm using the INSERT INTO ... SELECT statement, but I want to use a commit statement for every 1000 records.

That is not the best way. Frequent commits may lead to ORA-01555 errors.
[url http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:275215756923]A nice article from ASKTOM on this one

How can I achieve this?
insert into emp_dept_master
select e.ename ,d.dname ,e.empno ,e.empno ,e.sal
from emp e , dept d
where e.deptno = d.deptno       ------ how to use commit for every 1000 records

It depends on the reason behind you wanting to split your transaction into small chunks. Most of the time there is no good reason for that.
If you are trying to improve performance by doing so then you are wrong; it will only degrade performance.
To improve the performance you can use the APPEND hint in the insert, you can try PARALLEL DML, and if you are on 11g or above you can use [url http://docs.oracle.com/cd/E11882_01/appdev.112/e25788/d_parallel_ex.htm#CHDIJACH]DBMS_PARALLEL_EXECUTE to break your insert into chunks and run it in parallel.
So if you can tell us the actual objective we could offer some help.
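A sketch of the direct-path route suggested above (the degree of 4 is arbitrary):

ALTER SESSION ENABLE PARALLEL DML;

INSERT /*+ APPEND PARALLEL(m 4) */ INTO emp_dept_master m
SELECT e.ename, d.dname, e.empno, e.empno, e.sal
FROM   emp e, dept d
WHERE  e.deptno = d.deptno;

COMMIT;  -- one commit at the end, not every 1000 rows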
