Scroll too slow in table in WD ABAP

Hi experts,
in my Web Dynpro I have a table whose Layout Data is defined as GridData, and scrolling is very, very slow even with only 20 entries in the table.
In the same portal I have a Web Dynpro with a table whose Layout Data is defined as RowHeadData, and there scrolling is no problem.
Can scroll performance depend on the Layout Data of the table?
What should I check, and how can I improve scroll performance?
Help me please
Thanks

Are you checking the scroll functionality in Web Dynpro and in the portal on the same system? If they are different systems, scrolling may also depend on the resolution settings.
If it is a resolution-settings problem, it will take time to fetch and display the new row values.
Regards
Srinivas

Similar Messages

  • Adobe reader too slow and "not responding," why?

    I downloaded Adobe Reader XI (11.0.10) for my touchscreen laptop with Windows 8.1, but when reading PDFs it scrolls too slowly, gets stuck when highlighting or performing other commands, and then I usually get "not responding."
    It's really slowing down my productivity!
    Would appreciate your help please. Thanks

    Try disabling Protected Mode [Edit | Preferences | Security (Enhanced)].
    If that does not resolve the problem, please tell us your operating system.

  • Select Data from aufm too slow

    Hi experts,
    I have a query in my report, but its performance is too slow.
    Is there a proper way to improve the performance of the query below?
    SELECT * FROM aufm INTO TABLE i_aufm     " collecting input material document numbers
             WHERE mblnr GE '4900000000'
               AND mjahr GE '2008'
               AND zeile GT '0'
               AND matnr EQ matnr_101-matnr
               AND werks IN p_werks
               AND charg EQ matnr_101-charg
               AND bwart = '101'.

    Hi,
    1. Don't use SELECT * on any table.
    2. Create an internal table containing only the fields you want to retrieve from the table (a structure).
    Eg:
    types : begin of str_ekko,
              ebeln type ekko-ebeln,
              bsart type ekko-bsart,
              aedat type ekko-aedat,
            end of str_ekko.
    data : itab type table of str_ekko,
           wa_itab type str_ekko.
    While creating the structure, take care that the fields appear in the same sequence as in the database table.
    3. The fields in the WHERE clause should also follow the field sequence of the table.
    eg :
    select ebeln bsart aedat from ekko into table itab
      where ebeln in so_ebeln
        and bsart in so_bsart
        and aedat in so_aedat.
    4. If the above does not help, look at the indexes of that table and, if needed, create a secondary index matching your selection.

  • Action &OBJECT_ID& does not exist - when Scrolling the Scrollbar in Table

    Hi All,
    When executing the ABAP Web Dynpro application at runtime, we get the following dump when we scroll the table's scrollbar. Please let me know how to fix the issue.
    Thanks in advance.
    Dump
    The following error text was processed in the system BRD : Action &OBJECT_ID& does not exist
    The error occurred on the application server c700u043_BRD_10 and in the work process 1
    The termination type was: RABAX_STATE
    The ABAP call stack was:
    method: IF_WDR_RR_CONTROLLER~GET_ACTION of program SAPLWDR_RUNTIME_REPOSITORY
    method: GET_ACTION_INTERNAL of program CL_WDR_CONTROLLER=====CP
    Thanks,
    PortalUser100

    Hi..
    I can't tell much from the error text; it looks like a standard error somewhere. Are you scrolling horizontally or vertically?
    Anyway, you can switch to scrollbar-based scrolling through the application parameter WDTABLENAVIGATION with the value SCROLLBAR;
    then you will get a scrollbar for all tables in your component.
    Double-click the application and you will find the Parameters tab; enter the parameter and value there.
    Regards
    Srinivas

  • Query is too slow from bseg selection

    select belnr bldat budat xblnr gjahr tcode waers awkey
      from bkpf into table itbkpf
      where bukrs eq p_bukrs
        and belnr in s_belnr
        and budat in p_budat
        and stblg = ''
        and ( tcode = 'MIRO'    or
              tcode = 'MR8M'    or
              tcode = 'MB11'    or
              tcode = 'MB1B'    or
              tcode = 'MIGO_GI' or
              tcode = 'MIGO_TR' or
              tcode = 'MB1A' ).
    if sy-subrc eq 0.
      sort itbkpf.
    else.
      message 'No data for the relevant date' type 'A'.
      leave list-processing.
    endif.
    select a1~lifnr a1~name1 a1~ort01 a1~stras b1~j_1icstno
      into table it_werks
      from ( lfa1 as a1 inner join j_1imocomp as b1 on a1~werks = b1~werks ).
    **********************************************this is too slow*************
    SELECT BUKRS BELNR GJAHR BUZEI BUZID BSCHL SHKZG GSBER MWSKZ
            DMBTR HKONT LIFNR LANDL Matnr werks MENGE EBELP xref3
            INTO CORRESPONDING FIELDS OF TABLE ITABBSEG
            FROM BSEG
            FOR ALL ENTRIES IN ITBKPF
            WHERE BELNR = ITBKPF-BELNR
            AND GJAHR = ITBKPF-GJAHR
            AND ( BSCHL = '86' OR BSCHL = '96' or BSCHL = '89' OR BSCHL = '99'  )
            AND WERKS IN S_WERKS
            AND BUZID <> 'F' .
    ****************************************this is too slow
    Moderator message: Please Read before Posting in the Performance and Tuning Forum
    locked by: Thomas Zloch on Aug 5, 2010 2:08 PM

    You should have provided the full key of the cluster table behind BSEG (RFBLG); every key field is in BKPF, so add BUKRS:
    SELECT bukrs belnr gjahr buzei buzid bschl shkzg gsber mwskz
           dmbtr hkont lifnr landl matnr werks menge ebelp xref3
      INTO CORRESPONDING FIELDS OF TABLE itabbseg
      FROM bseg
      FOR ALL ENTRIES IN itbkpf
      WHERE bukrs = itbkpf-bukrs
        AND belnr = itbkpf-belnr
        AND gjahr = itbkpf-gjahr
        AND ( bschl EQ '86' OR bschl EQ '96' OR bschl EQ '89' OR bschl EQ '99' )
        AND werks IN s_werks
        AND buzid NE 'F' .
    You could also extract the whole accounting document in the internal table, and then delete record using the not-database-key selections.
    SELECT bukrs belnr gjahr buzei buzid bschl shkzg gsber mwskz
           dmbtr hkont lifnr landl matnr werks menge ebelp xref3
      INTO CORRESPONDING FIELDS OF TABLE itabbseg
      FROM bseg
      FOR ALL ENTRIES IN itbkpf
      WHERE bukrs = itbkpf-bukrs
        AND belnr = itbkpf-belnr
        AND gjahr = itbkpf-gjahr.
    DELETE itabbseg WHERE
      ( bschl NE '86' AND bschl NE '96' AND bschl NE '89' AND bschl NE '99' )
      OR NOT ( werks IN s_werks )
      OR buzid EQ 'F' .
    In both cases, run some tests with tools like SE30 or ST05.
    Regards,
    Raymond

PL/SQL block is too slow; would a procedure be a better option?

    Hi all,
    How do I tune a PL/SQL block that traverses cursors, fetches millions of records, and then executes inserts into different tables
    using EXECUTE IMMEDIATE?
    It is too slow: it takes 10 hours to populate 40 tables holding millions of records.
    Since I have to do some modifications to the data, I cannot do it with CTAS,
    i.e. a single SQL statement.
    Should I make it a procedure? Would that help?
    Please help or suggest, as I am new to PL/SQL.
    My code looks like:
    declare
    cursor     cur_table1 is
         select field1, field2, field3, field4 from table1;
    begin
    for i in cur_table1
    loop
         execute immediate 'insert into table2 (field1,field2,field3,field4) '||
    'select :1, field2, field3, field4 '||
    ' from table1 where field3 = :2'
    using i.field1||'_'||to_char(sysdate,'ddmmyyyy hh12:mi:ss'), i.field3;
    commit;
    end loop;
    end;
    Thanks and Regards,

    declare
    cursor cur_projects is
         select PROJECTID, PROJECTNAME, DESCRIPTION, DELETED, DELETINGDATE, ACTIVE, ADMINONLY, READONLY, SECURITYCLASS, PROJECTCONTACT, DEFAULTVERSION, DEFAULTSTARTPAGE, IMAGEPATH, MAXEXAMINEERRORS, LOCKTIMEOUT, MEMORYSAVINGLEVEL, PRELOADOBJECTS, PUBLICATIONSRCPROJNAME, CREATOR, CREATED, MODIFIER, MODIFIED from projects ;
    cursor cur_projectversion(p_projectid projects.projectid%TYPE) is
         select PROJECTID, PROJECTVERSIONID, PROJECTVERSIONNAME, DESCRIPTION, DELETED , DELETINGDATE, ACTIVE , ADMINONLY, READONLY, decode(EFFECTIVEDATE,null,trunc(sysdate),EFFECTIVEDATE) EFFECTIVEDATE, EXPIRATIONDATE, SECURITYCLASS, PROJECTCONTACT, DEFAULTVERSION, DEFAULTSTARTPAGE, IMAGEPATH, MAXEXAMINEERRORS, LOCKTIMEOUT, MEMORYSAVINGLEVEL, PRELOADOBJECTS, PUBLICATIONSRCPROJNAME, PUBLICATIONSRCPROJVERNAME, CREATOR, CREATED, MODIFIER, MODIFIED, PROFILELOADERCLASS /*, TRACKCHANGES */
         from projectversions where PROJECTID=p_projectid ;
    cursor cur_objects(p_projectid projects.projectid%TYPE,p_projectversionid projectversions.projectversionid%TYPE) is
         select PROJECTID , PROJECTVERSIONID, OBJECTID , OBJECTKEY , PARENTID, KIND , NAME , TITLE , OWNER , CREATED, MODIFIER , MODIFIED , READY_TO_PUBLISH, LAST_PUBLISHED_DATE , LAST_PUBLISHER , EFFECTIVE_PUBLISHING_DATE , PUBLISHER , PUBLISHING_DATE /*, to_lob(scripttext) */ from OBJECTS where PROJECTID=p_projectid and PROJECTVERSIONID=p_projectversionid /*order by objectid */;
    begin
    for i in cur_projects
    loop
    dbms_output.put_line('PROJECTID => '||i.projectid);
    dbms_output.put_line('_________________________________');
    execute immediate 'insert into &TARGET_USER\.projects(locktimeout, memorysavinglevel , preloadobjects, projectid, projectname, description, deleted, deletingdate, active, adminonly, readonly, securityclass, projectcontact, defaultversion, defaultstartpage, imagepath, maxexamineerrors ) values (:1,:2,:3,:4,:5,:6,:7,:8,:9,:10,:11,:12,:13,:14,:15,:16,:17) '
    using i.locktimeout, i.memorysavinglevel, i.preloadobjects,i.projectid ,i.projectname , i.description , i.deleted , i.deletingdate , i.active , i.adminonly , i.readonly, i.securityclass, i.projectcontact , i.defaultversion, i.defaultstartpage , i.imagepath, i.maxexamineerrors;
    for k in cur_projectversion(i.projectid)
         loop
    for l in cur_objects(k.projectid,k.projectversionid)
              loop
                   cnt:=cnt+1;
    select count(1) into object_exists from &TARGET_USER\.objects where objectid=l.objectid and projectversionid=1 and projectid=l.projectid;
              if object_exists = 0
              then
              if l.objectid = 1 ------Book Object , objectid = 1 and parentid = 0
              then
              execute immediate 'INSERT INTO &TARGET_USER\.objects(PROJECTID,PROJECTVERSIONID,OBJECTID, OBJECTKEY,PARENTID,NAME, KIND,LAST_PUBLISHED_DATE,LAST_PUBLISHER,REVISIONID,DISPLAYORDER,READONLY,DELETED) values( :1,:2,:3,:4,:5,:6,:7,:8,:9,:10,:11,:12,:13)'
                        using l.PROJECTID, 1, l.OBJECTID,l.OBJECTKEY, 0 , l.NAME,l.KIND, '' , '' , '', 0, 'N', 'N';
                   else
                        select count(1) into object_parentid_exists from objects where objectid=l.parentid and projectversionid=1 and projectid=l.projectid;
                        if object_parentid_exists = 0 ---Set Parentid as 1
                        then
                                  cnt_parentid_1:=cnt_parentid_1+1;
                                  execute immediate 'INSERT INTO &TARGET_USER\.objects(PROJECTID,PROJECTVERSIONID,OBJECTID, OBJECTKEY,PARENTID,NAME, KIND,LAST_PUBLISHED_DATE,LAST_PUBLISHER,REVISIONID,DISPLAYORDER,READONLY,DELETED) values( :1,:2,:3,:4,:5,:6,:7,:8,:9,:10,:11,:12,:13)'
                                  using l.PROJECTID, 1, l.OBJECTID,l.OBJECTKEY, 1 , l.NAME,l.KIND, '' , '' , '', 0, 'N', 'N';
                        else
                                  execute immediate 'INSERT INTO &TARGET_USER\.objects(PROJECTID,PROJECTVERSIONID,OBJECTID, OBJECTKEY, PARENTID, NAME, KIND,LAST_PUBLISHED_DATE,LAST_PUBLISHER,REVISIONID,DISPLAYORDER,READONLY,DELETED) values( :1,:2,:3,:4,:5,:6,:7,:8,:9,:10,:11,:12,:13)'
                                  using l.PROJECTID, 1, l.OBJECTID,l.OBJECTKEY,l.PARENTID,l.NAME,l.KIND, '' , '' , '', 0, 'N', 'N';
                        end if;
                   end if ;
         end if;
                   execute immediate 'INSERT INTO &TARGET_USER\.objectversions( PROJECTID, OBJECTID, PROJECTVERSIONID ,VERSIONNAME,OBJECTVERSIONID, REVISIONID,DESCRIPTION, TITLE , OWNER, CREATED, MODIFIER, MODIFIED, READY_TO_PUBLISH , LAST_PUBLISHED_DATE, LAST_PUBLISHER, EFFECTIVEDATE, SCRIPTTEXT, REVIEWSTATUS, READONLY, PUBLISHED, DELETED ) '||
                             'SELECT PROJECTID, OBJECTID, 1, owner||:1, PROJECTVERSIONID , '''', '''', TITLE, OWNER, CREATED, MODIFIER, MODIFIED, ''N'', '''' , '''', :2 , to_lob(SCRIPTTEXT), '''', ''N'', ''N'', '''' '||
                             'FROM OBJECTS '||
                             'WHERE PROJECTID= :3 and PROJECTVERSIONID= :4 and OBJECTID= :5'
                             using '_'||TO_CHAR(k.EFFECTIVEDATE,'DDMMYYHHMISS'),k.EFFECTIVEDATE,l.projectid,l.projectversionid,l.objectid;
         end loop;
         dbms_output.put_line(cnt||' OBJECTS, OBJECTVERIONS POPULATED');
         dbms_output.put_line(cnt_parentid_1||' DUMPED UNDER BOOK FOLDER ');
         cnt_parentid_1:=0;
         cnt:=0;
    ............
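    If the transformation can be expressed in SQL, a single set-based INSERT ... SELECT usually beats a row-by-row loop with EXECUTE IMMEDIATE by a wide margin. A rough sketch using the table and column names from the first snippet above (the self-join mirrors the loop's "where field3 = :2" logic; illustrative only, not code from this thread):
    -- one pass instead of a cursor loop: driving row a supplies field1,
    -- matching rows b supply the remaining columns
    INSERT /*+ APPEND */ INTO table2 (field1, field2, field3, field4)
    SELECT a.field1 || '_' || TO_CHAR(SYSDATE, 'ddmmyyyy hh12:mi:ss'),
           b.field2, b.field3, b.field4
      FROM table1 a
      JOIN table1 b ON b.field3 = a.field3;
    COMMIT;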

  • Insert too slow

    Hi DB Gurus,
    Our application inserts 60-70K records into a table in each transaction. When multiple sessions are open on this table, users face performance issues; application response is too slow.
    Regarding this table:
    1.Size = 56424 Mbytes!
    2.Count = 188,858,094 rows!
    3.Years of data stored = 4 years
    4.Average growth = 10 million records per month, 120 million each year! (has grown 60 million since end of June 2007)
    5.Storage params = 110 extents, Initial=40960, Next=524288000, Min Extents=1, Max Extents=505
    6.There are 14 indexes on this table all of which are in use.
    7. Data is inserted through bulk insert
    8. DB: Oracle 10g
    The sheer size of this table (56 GB) and its rate of growth may be the culprits behind the performance issue. But to ascertain that, we need to dig out more facts so that we can decide conclusively how to nail this issue.
    So my questions are:
    1. What other facts can be collected to find out the root cause of bad performance?
    2. Looking at given statistics, is there a way to resolve the performance issue - by using table partition or archiving or some other better way is there?
    We've already thought of dropping some indexes, but that looks difficult since they are used in reports based on this table (along with other tables).
    3. Any guess what else can be causing this issue?
    4. How many records per session can be inserted in a table? Is there any limitation?
    Thanks in advance!!

    You didn't like the responses from your same post - DB Performance issue
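    On question 2: range partitioning by the date column is the usual first step for a table growing ~10 million rows a month, since old months can then be archived by dropping or exchanging partitions. A minimal sketch with illustrative names (not the poster's actual DDL):
    -- illustrative only: partition by the date column so old data can be
    -- archived per partition instead of deleted row by row
    CREATE TABLE big_txn_table (
      txn_id   NUMBER        NOT NULL,
      txn_date DATE          NOT NULL,
      payload  VARCHAR2(200)
    )
    PARTITION BY RANGE (txn_date) (
      PARTITION p_2006 VALUES LESS THAN (TO_DATE('01-01-2007','DD-MM-YYYY')),
      PARTITION p_2007 VALUES LESS THAN (TO_DATE('01-01-2008','DD-MM-YYYY')),
      PARTITION p_max  VALUES LESS THAN (MAXVALUE)
    );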

  • Mail server is too slow to deliver the mail to internal domain

    Hi,
    My mail server is fast enough sending mail to other domains, but when I try to send mail to my own domain it is too slow; sometimes it takes 30 to 40 minutes to deliver the mail.
    Please help
    Thanks,
    Gulab Pasha

    You should use Statspack to check what the main waits are.
    Some indicators to check:
    - too many full table scans / excessive IO => check SQL statements (missing index, wrong WHERE clause)
    - explain plan for the most important queries: using CBO or RBO? If CBO, statistics should be up to date; if RBO, check the access path.
    - excessive logfile switches (> 5 per hour) => increase the logfiles or disable logging
    - undo waits => not enough rollback segments (if you have not set AUM)
    - data waits => alter INITRANS, PCTFREE, PCTUSED
    - too many chained rows => rebuild the affected data or rebuild the table
    - too many levels in indexes => rebuild the index
    - excessive parsing => use bind variables or alter the CURSOR_SHARING parameter
    - too many sorts on disk => increase SORT_AREA_SIZE and create other temporary tablespaces on separate disks
    - too many block reads per row => DB_BLOCK_SIZE too small or too many chained rows
    - too much LRU contention => increase latches
    - OS swapping/paging?
    To improve performance:
    - alter and tune some parameters: OPTIMIZER_MODE, SORT_AREA_SIZE, SHARED_POOL_SIZE, OPTIMIZER_INDEX_COST_ADJ, DB_FILE_MULTIBLOCK_READ_COUNT...
    - keep the most useful packages in memory
    - gather statistics regularly (if using the CBO); a quick first look at the top waits is sketched below
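    A minimal sketch of that first look (V$SYSTEM_EVENT is a standard dynamic performance view; the idle-event filter below is illustrative, not exhaustive):
    -- list the waits with the most accumulated time, skipping common idle events
    SELECT event, total_waits, time_waited
      FROM v$system_event
     WHERE event NOT LIKE 'SQL*Net%'
       AND event NOT IN ('pmon timer', 'smon timer', 'rdbms ipc message')
     ORDER BY time_waited DESC;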
    How do your users access the db?
    Jean-François Léguillier
    Consultant DBA

  • Oracle 10g direct path write too slow

    Hi All,
    We have Oracle 10g on a Solaris virtual server, with VMware ESXi as the host. Data files are on RAID1 internal storage on an HP DL585, with a VMFS partition at the ESXi level. The problem is that DB writes for a CREATE TABLE AS SELECT statement are way too slow: to create a table of 0.5 GB, the DB takes 9 minutes, which amounts to about 1 MB/s. When we check FTP or file copy at the Solaris level with a file of the same size (0.5 GB), it flies through in less than a minute. This is Oracle 10.2.0.4, 8K data block, 2 vCPUs assigned to the Solaris VM. We have checked with VMware support for any known issues and also have an SR open with Oracle for any parameter changes that could help speed things up. Any clues or pointers will be of great help.
    Thanks,
    Nikhil

    Here's the output from tkprof for waits
    Elapsed times include waiting on following events:
    Event waited on                      Times Waited   Max. Wait   Total Waited
    ----------------------------------   ------------   ---------   ------------
    single-task message                             1        0.17           0.17
    SQL*Net message to dblink                     150        0.00           0.00
    SQL*Net message from dblink                   150        0.04           0.32
    SQL*Net message to client                       1        0.00           0.00
    direct path write temp                       4003        1.16         804.93
    direct path read temp                        2563        0.14          35.86
    SQL*Net more data from dblink              126967        0.17          11.81
    SQL*Net message from client                     1       17.73          17.73
    "direct path write temp" has a total wait time of 804.93 seconds. Also, I am NOT looking to tune a particular SQL statement; the database is slow overall on VMware, and I am looking for any gotchas for running Oracle 10g within a Solaris VM.
    Thanks,
    Nikhil
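    Given that nearly all the elapsed time is in direct path write temp, one hedged first check (a standard 10g view, not something suggested in this thread) is whether sort and hash work areas are spilling to temp:
    -- work-area executions by size band: optimal = in memory,
    -- one-pass / multi-pass = spilled to temp
    SELECT low_optimal_size/1024        AS low_kb,
           (high_optimal_size + 1)/1024 AS high_kb,
           optimal_executions, onepass_executions, multipasses_executions
      FROM v$sql_workarea_histogram
     ORDER BY low_optimal_size;
    A large share of one-pass or multi-pass executions would point at PGA_AGGREGATE_TARGET sizing rather than at VMware.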

  • EXPDP is too slow even though the value of cursor_sharing changed to EXACT.

    Hi
    We have a 10g Standard Edition database (10.2.0.4) on Solaris 5, which is RAC with ASM. In fact, we are planning to migrate it to Linux x86-64 and to 11.2.0.3. The database size is around 1.3 TB. We plan to take an expdp backup and impdp it into the new server and new version database.
    SQL> select * from v$version;
    BANNER
    Oracle Database 10g Release 10.2.0.4.0 - Production
    PL/SQL Release 10.2.0.4.0 - Production
    CORE 10.2.0.4.0 Production
    TNS for Solaris: Version 10.2.0.4.0 - Production
    NLSRTL Version 10.2.0.4.0 - Production
    SQL> !uname -a
    SunOS ibmxn920 5.10 Generic_127128-11 i86pc i386 i86pc
    As per the plan I started the expdp, but unfortunately the processing of tables continued for one and a half days and the actual export never started. After going through a few docs I found that CURSOR_SHARING should be EXACT to make expdp faster (previously it was SIMILAR). So I changed the parameter to EXACT on one of the nodes and started the backup again last night on the same node where I changed the parameter. When I came back today, the table processing was still going on. I checked the job status: it is not hung at all, but it is too slow.
    What could be the reason? Here are the memory details and kernel parameters.
    Mem
    Memory: 24G phys mem, 6914M free mem, 31G swap, 31G free swap
    Kernel Parameters
    forceload: sys/msgsys
    forceload: sys/semsys
    forceload: sys/shmsys
    set noexec_user_stack=1
    set msgsys:msginfo_msgmax=65535
    set msgsys:msginfo_msgmnb=65535
    set msgsys:msginfo_msgmni=2560
    set msgsys:msginfo_msgtql=2560
    set semsys:seminfo_semmni=3072
    set semsys:seminfo_semmns=6452
    set semsys:seminfo_semmnu=3072
    set semsys:seminfo_semume=240
    set semsys:seminfo_semopm=100
    set semsys:seminfo_semmsl=1500
    set semsys:seminfo_semvmx=327670
    set shmsys:shminfo_shmmax=4294967295
    set shmsys:shminfo_shmmin=268435456
    set shmsys:shminfo_shmmni=4096
    set shmsys:shminfo_shmseg=1024
    set noexec_user_stack = 1
    set noexec_user_stack_log = 1
    #Non-administrative users cannot change file ownership.
    rstchown=1
    Do I need to change any of the above? The dump is being written to a local file system.

    Hi,
    I'd look at doing this in parallel over a database link and skip writing anything to NFS completely - it will make the whole process quicker (you effectively skip the export part, and everything is an import into the new instance).
    I ran a 600 GB impdp this way over a db link and it took maybe 12 hours (can't remember exactly) - a lot of that time is index building in the new database, so make sure your PGA etc. is set up correctly for that.
    LOB data massively slows down Data Pump, so that could also be the issue here. You should be able to achieve the whole process in less than a day (if you have no LOBs...); a sketch of the network-mode import follows.
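    A minimal sketch of such a network-mode import via the DBMS_DATAPUMP API (the DB link SOURCE_LINK and directory object DPUMP_DIR are illustrative assumptions, not names from this thread):
    DECLARE
      h NUMBER;
    BEGIN
      -- network import: data flows over the link, no dump file is written
      h := DBMS_DATAPUMP.OPEN(operation   => 'IMPORT',
                              job_mode    => 'FULL',
                              remote_link => 'SOURCE_LINK');
      -- a log file is still required; only the log goes to DPUMP_DIR
      DBMS_DATAPUMP.ADD_FILE(handle    => h,
                             filename  => 'net_import.log',
                             directory => 'DPUMP_DIR',
                             filetype  => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);
      DBMS_DATAPUMP.SET_PARALLEL(handle => h, degree => 4);
      DBMS_DATAPUMP.START_JOB(h);
      DBMS_DATAPUMP.DETACH(h);
    END;
    /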
    Cheers,
    Harry

  • Propagation is too slow

    I prepared replication between two databases, 11g Release 1. Everything works fine, but propagation is too slow: propagating a single row takes from 2 to 40 seconds. Why are there such differences?
    I tried a few different values for job_queue_interval (1, 5), job_queue_processes (5, 10, 20, 1000), and latency in the propagation scheduler (3, 1, 0), but there was no improvement.
    The databases are not under load.
    Thanks for any suggestions

    When I commit a transaction, the data appears in the source queue almost immediately, so I think capture works fine. I then have to wait 2 to 40 seconds for the data to appear in the destination queue. Once it appears there, the apply process inserts it into the table immediately.
    Regards.
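    For what it's worth, a propagation latency of 0 makes the propagation job run continuously instead of waiting between polls. A minimal sketch of setting it via DBMS_AQADM (queue and destination names are illustrative, not from this thread):
    BEGIN
      -- latency is in seconds; 0 = propagate as soon as messages arrive
      DBMS_AQADM.ALTER_PROPAGATION_SCHEDULE(
        queue_name  => 'strmadmin.capture_queue',
        destination => 'dest_db_link',
        latency     => 0);
    END;
    /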

FORALL bulk delete is too slow; seeking advice

    I used a PL/SQL stored procedure to do some ETL work. It picks up refreshed records from a staging table, checks whether the same record exists in the target table, then does a FORALL bulk delete first, and finally a FORALL insert of all refreshed records into the target table. The insert part works fine; only the deletion part is too slow to get the job done. My code is listed below. Please advise where the problem is. Thanks.
    Declare
    TYPE t_distid IS TABLE OF VARCHAR2(15) INDEX BY BINARY_INTEGER;
    v_distid t_distid;
    CURSOR dist_delete IS
    select distinct distid FROM DIST_STG where data_type = 'H';
    BEGIN
    OPEN dist_delete;
    LOOP
    FETCH dist_delete BULK COLLECT INTO v_distid LIMIT 1000;
    EXIT WHEN v_distid.COUNT = 0;  -- stop once nothing more is fetched
    FORALL i IN v_distid.FIRST..v_distid.LAST
    DELETE DIST_TARGET WHERE distid = v_distid(i);
    END LOOP;
    CLOSE dist_delete;
    COMMIT;
    end;
    /

    citicbj wrote:
    Justin:
    The answers to your questions are:
    1. Why would I not use a single DELETE statement? Because this PL/SQL procedure is part of an ETL process that is scheduled by the Oracle scheduler and runs automatically to refresh the data; putting the DELETE in a stored procedure makes it easier for the scheduler to execute.
    You can compile SQL inside a PL/SQL procedure / function just as easily as coding it the way you have, so that's really not an excuse. As Justin pointed out, the straight SQL approach is what you want to use.
    2. The records in dist_stg with data_type = 'H' vary each month, ranging from 120 to 5,000. These records were inserted into the target table before, but they have since been updated in the transactional database. We need to delete the old records in the target and insert the updated ones in their place. The distid is the same and unique, so I use distid to delete the old rows and insert the updated records with the same distid; when users run a report, the updated records show up. A plain SQL DELETE of 5,000 records takes seconds; my code above takes forever, and the database runs without any error message. There are no triggers or foreign keys involved.
    3. Merge: I haven't tried that yet. I may give it a try.
    Quite likely a good idea based on what you've outlined above, but at the very least replace the procedural delete code as suggested by Justin.
    Thanks.
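    For reference, the straight SQL approach suggested above would be a single statement along these lines (table and column names from the original post; a sketch, not tested here):
    DELETE FROM dist_target
     WHERE distid IN (SELECT distid
                        FROM dist_stg
                       WHERE data_type = 'H');
    COMMIT;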

  • Logical Standby Apply became TOO SLOW !!

    Hi,
    I recently set up a logical standby database. Everything was fine until the apply process on the standby became too slow:
    SQL> alter session set nls_date_format = 'HH24:MI:SS (MM/DD)';
    Session altered.
    SQL> SELECT APPLIED_SCN, APPLIED_TIME, READ_SCN, READ_TIME, NEWEST_SCN, NEWEST_TIME FROM DBA_LOGSTDBY_PROGRESS;
    APPLIED_SCN  APPLIED_TIME      READ_SCN    READ_TIME         NEWEST_SCN  NEWEST_TIME
    3036960310   18:33:28 (05/31)  3035938077  18:12:43 (05/31)  3060387972  16:30:16 (06/02)
    SQL>
    The applied time has advanced only about 20 minutes during the last 46 hours. v$logstdby says:
    COORDINATOR: ORA-16116: no work available
    READER, BUILDER and PREPARER: ORA-16127: stalled waiting for additional transactions to be applied
    All APPLIERs: ORA-16116: no work available
    I really don't have any idea about this issue. Any help will be appreciated.
    regards

    One reason could be that the SQL Apply engine performs too many slow full table scans; check Metalink note:
    Determining if SQL Apply Engine is Performing Full Table Scans
    Doc ID: Note:255958.1
    If this is the reason for the slowness you have to tune your DML statements.
    Werner

Inserts are slow if the table has lots of records (400K) vs. if it's empty

    It takes 1 minute to insert 100,000 records into a table. But if the table already contains some records (400K), then it takes 4 minutes and 12 seconds; CPU wait also jumps up and "free buffer waits" become really high (from dbconsole).
    Do you know what's happening here? Is this because of frequent table extents? The extent size for these tables is 1,048,576 bytes. I have a feeling the DB is trying to extend the table storage.
    I am really confused about this. So any help would be great!

    Your DB_CACHE_SIZE is likely too small (or DBWR writing to disk is too slow).
    Since you are doing regular INSERTs (not direct-path with APPEND), Oracle has to find a free block for the next row and load it into the database cache to insert the row. However, as you insert more records, the "dirty" blocks still present in the cache have to be written out to disk, and DBWR is unable to write out the dirty blocks quickly enough.
    What is the size of the table in USER_SEGMENTS, and what do
    NUM_ROWS, SAMPLE_SIZE, AVG_ROW_LEN from USER_TABLES show?
    What is your DB_CACHE_SIZE ?
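    A minimal sketch of those checks, run from SQL*Plus ('YOUR_TABLE' is a placeholder for the actual table name):
    -- segment size in MB
    SELECT segment_name, bytes/1024/1024 AS size_mb
      FROM user_segments
     WHERE segment_name = 'YOUR_TABLE';
    -- optimizer statistics for the table
    SELECT num_rows, sample_size, avg_row_len
      FROM user_tables
     WHERE table_name = 'YOUR_TABLE';
    SHOW PARAMETER db_cache_size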

  • 4 to 4.1 - too slow to use.

    Since upgrading to 4.1, LR has become too slow to use.
    The problem lies primarily, but is not restricted to, the noise reduction sliders - the luminance slider specifically.
    Unless I can fix something I will have to return to an older version; I'm actually thinking of going back to 3.6, which worked fine on my machine - AMD E2, dual-core graphics, 4 GB RAM.
    Please help.

    The queries are the same across all three database platforms and have been examined by our DBAs and run through the SQL Server optimizer to get the access path. All queries are very simple; the system was originally written in Btrieve and is "record at a time" in nature. The program that does the screen refresh runs three separate queries, all something like select * from tablename where key = 'abc', then combines the data into a single record and returns the results. Like I said: SQL Server, 1 second. Each of the three tables has 30,000 records, all with unique keys. The data returned to the application is probably 50 records of 200 bytes each, all character data. All tests were done with the same workstation on the same LAN; I don't think LAN latency is the problem here. Using other query tools, the response comes back quickly, as expected.
    Another example...at application startup, we load all of the metadata (columns, primary key segments and index segments) into memory using the standard odbc api calls...SQLColumns, SQLPrimaryKeys and SQLStatistics. SQL Server loads this in maybe 8-10 seconds while Oracle loads in 15 minutes at best. These queries we don't control and it is still unbelievably slow. Again, using the ODBCTEST utility, the results for these api's come back pretty quick.
    If it is our application causing the problem in some way then it should be equally slow across all platforms, right?
    By the way, we had one of your Oracle Consultants in house trying to solve the metadata problem and he was unable to find any problem in our application and had no answers for us.
    Thanks for the quick response.
    Lon Diehl
