Saving data in a table with a large number of records

Hi,
I'm working with Forms 6i and Database 10g.
I have two tables, stock_head and stock_detail. The stock_detail table has millions of records and three database triggers on it.
Saving data into these tables is very slow, even after disabling the triggers.
Can anyone please help me with this? How can I improve the performance?

As always, the same things apply to this type of question:
- No exact version numbers are provided
- The problem description is way too vague to resolve the issue
- The requestor doesn't read documentation
- The requestor didn't use online resources, and didn't search this forum
The central question always is: what is it waiting for?
So you need to run ADDM and/or AWR reports, provided you are properly licensed, or Statspack when you don't have a license for AWR/ADDM.
Apart from that, no help is possible, as the post doesn't contain a problem description other than 'It doesn't work, help'.
Sybrand Bakker
Senior Oracle DBA
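
For reference, these reports can be generated from SQL*Plus with the standard scripts shipped under ORACLE_HOME; a minimal sketch (AWR and ADDM require the Diagnostics Pack license mentioned above):

    -- AWR report (pick the begin/end snapshots when prompted)
    @?/rdbms/admin/awrrpt.sql
    -- ADDM report
    @?/rdbms/admin/addmrpt.sql

    -- Statspack, when AWR/ADDM are not licensed (run as the PERFSTAT user):
    @?/rdbms/admin/spcreate.sql   -- one-time install
    EXECUTE statspack.snap;       -- snapshot before and after the slow save
    @?/rdbms/admin/spreport.sql   -- report between two snapshots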

Similar Messages

  • How to show data from a table having a large number of columns

    Hi ,
    I have a report with a single row having a large number of columns, and I have to use a scroll bar to see all of them.
    Is it possible to design the report in the format below (half of the columns on one side of the page, half on the other side):
    Column1   Data      Column11   Data
    Column2   Data      Column12   Data
    Column3   Data      Column13   Data
    Column4   Data      Column14   Data
    Column5   Data      Column15   Data
    Column6   Data      Column16   Data
    Column7   Data      Column17   Data
    Column8   Data      Column18   Data
    Column9   Data      Column19   Data
    Column10  Data      Column20   Data
    I am using Apex 4.2.3 version on oracle 11g xe.

    Please update your forum profile with a real handle instead of "user2602680".
    user2602680 wrote:
    I have a report with a single row having a large number of columns ... I am using Apex 4.2.3 version on oracle 11g xe.
    Yes, this can be achieved using a custom named column report template.

  • Performance issues when querying a table having a large number of records

    Hi all,
    I have performance issues with queries on the mtl_transaction_accounts table, which has around 48,000,000 rows. One of the queries is below:
    SQL ID: 98pqcjwuhf0y6 Plan Hash: 3227911261
    SELECT SUM (B.BASE_TRANSACTION_VALUE)
    FROM
    MTL_TRANSACTION_ACCOUNTS B , MTL_PARAMETERS A  
    WHERE A.ORGANIZATION_ID =    B.ORGANIZATION_ID 
    AND A.ORGANIZATION_ID =  :b1 
    AND B.REFERENCE_ACCOUNT =    A.MATERIAL_ACCOUNT 
    AND B.TRANSACTION_DATE <=  LAST_DAY (TO_DATE (:b2 ,   'MON-YY' )  )  
    AND B.ACCOUNTING_LINE_TYPE !=  15  
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      3      0.02       0.05          0          0          0           0
    Fetch        3    134.74     722.82     847951    1003824          0           2
    total        7    134.76     722.87     847951    1003824          0           2
    Misses in library cache during parse: 1
    Misses in library cache during execute: 2
    Optimizer mode: ALL_ROWS
    Parsing user id: 193  (APPS)
    Number of plan statistics captured: 1
    Rows (1st) Rows (avg) Rows (max)  Row Source Operation
             1          1          1  SORT AGGREGATE (cr=469496 pr=397503 pw=0 time=237575841 us)
        788242     788242     788242   NESTED LOOPS  (cr=469496 pr=397503 pw=0 time=337519154 us cost=644 size=5920 card=160)
             1          1          1    TABLE ACCESS BY INDEX ROWID MTL_PARAMETERS (cr=2 pr=0 pw=0 time=59 us cost=1 size=10 card=1)
             1          1          1     INDEX UNIQUE SCAN MTL_PARAMETERS_U1 (cr=1 pr=0 pw=0 time=40 us cost=0 size=0 card=1)(object id 181399)
        788242     788242     788242    TABLE ACCESS BY INDEX ROWID MTL_TRANSACTION_ACCOUNTS (cr=469494 pr=397503 pw=0 time=336447304 us cost=643 size=4320 card=160)
       8704356    8704356    8704356     INDEX RANGE SCAN MTL_TRANSACTION_ACCOUNTS_N3 (cr=28826 pr=28826 pw=0 time=27109752 us cost=28 size=0 card=7316)(object id 181802)
    Rows     Execution Plan
          0  SELECT STATEMENT   MODE: ALL_ROWS
          1   SORT (AGGREGATE)
    788242    NESTED LOOPS
          1     TABLE ACCESS   MODE: ANALYZED (BY INDEX ROWID) OF
                    'MTL_PARAMETERS' (TABLE)
          1      INDEX   MODE: ANALYZED (UNIQUE SCAN) OF
                     'MTL_PARAMETERS_U1' (INDEX (UNIQUE))
    788242     TABLE ACCESS   MODE: ANALYZED (BY INDEX ROWID) OF
                    'MTL_TRANSACTION_ACCOUNTS' (TABLE)
    8704356      INDEX   MODE: ANALYZED (RANGE SCAN) OF
                     'MTL_TRANSACTION_ACCOUNTS_N3' (INDEX)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      row cache lock                                 29        0.00          0.02
      SQL*Net message to client                       2        0.00          0.00
      db file sequential read                    847951        0.40        581.90
      latch: object queue header operation            3        0.00          0.00
      latch: gc element                              14        0.00          0.00
      gc cr grant 2-way                               3        0.00          0.00
      latch: gcs resource hash                        1        0.00          0.00
      SQL*Net message from client                     2        0.00          0.00
      gc current block 3-way                          1        0.00          0.00
    ********************************************************************************
    On a 5-node RAC environment the program completes in 15 hours, whereas on a single-node environment it completes in 2 hours.
    Is there any way I can improve the performance of this query?
    Regards

    CREATE INDEX mtl_transaction_accounts_n0
      ON mtl_transaction_accounts (
                                   transaction_date
                                 , organization_id
                                 , reference_account
                                 , accounting_line_type
                                 )
    /
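
    A quick way to check whether the optimizer actually picks the new index up, as a sketch (it reuses the query from the trace above; DBMS_XPLAN is standard in 10g+):

        EXPLAIN PLAN FOR
        SELECT SUM (b.base_transaction_value)
        FROM   mtl_transaction_accounts b, mtl_parameters a
        WHERE  a.organization_id   = b.organization_id
        AND    a.organization_id   = :b1
        AND    b.reference_account = a.material_account
        AND    b.transaction_date <= LAST_DAY (TO_DATE (:b2, 'MON-YY'))
        AND    b.accounting_line_type != 15;

        SELECT * FROM TABLE (DBMS_XPLAN.DISPLAY);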

  • Problem fetching a large number of records

    Hi
    I want to fetch a large number of records from the database, and I use a secondary index database to improve performance. For example, my database has 100,000 records and a query fetches 10,000 of them. I use the secondary database as an index and iterate over it until I have fetched all of the records that match my condition, but when I run this loop the performance is terrible.
    I know that when I use DB_MULTIPLE it fetches all of the information and performance improves, but I have read that I cannot use this flag when I use a secondary database as an index.
    Please help me: is there a flag, or an implementation, that fetches all of the information together so that I can manage the data in my language?
    Thanks a lot
    Regards
    Saeed

    Hi Saeed,
    Could you post your source code here, compiled and ready to be executed, so we can take a look at the loop section?
    You won't be able to do a bulk fetch, that is, retrieval with DB_MULTIPLE, given the fact that the records in the primary are unordered by master (you don't have 40K consecutive records with master='master1'). So the only way to do things in this situation is to position with a cursor in the secondary on the first record with the secondary key 'master1', retrieve all the duplicate data (primary keys in the primary db) one by one, and do the corresponding gets in the primary database based on the retrieved keys.
    Though, there may be another option that should be taken into consideration, if you are willing to handle more work in your source code: have a database that acts as a secondary, in which you update the records manually to mirror the modifications performed in the primary db, without ever associating it with the primary database. This "secondary" would have <master> as key, and <std_id>, <name> (and other fields if you want) as data. Note that for every modification you perform on the std_info database, you will have to perform the corresponding modification on this database as well. You will then be able to do DBC->c_get() calls on this database with the DB_MULTIPLE flag specified.
    I have another question: is there any way to fetch information by record number? For example, fetch the information located at the third record of my database.
    I guess you're referring to logical record numbers, like a relational database's ROWID. Since your databases are organized as BTrees (without the DB_RECNUM flag specified), this is not possible directly. You could achieve it by using a cursor and iterating through the records, stopping on the record whose number is the one you want (using an incrementing counter to keep track of the position). If your database had been configured to operate with logical record numbers (BTree with DB_RECNUM, Queue or Recno), this would have been possible directly:
    http://www.oracle.com/technology/documentation/berkeley-db/db/ref/am_conf/logrec.html
    http://www.oracle.com/technology/documentation/berkeley-db/db/ref/am_conf/renumber.html
    Regards,
    Andrei

  • Best way to delete a large number of records without interfering with scheduled tlog backups

    I've inherited a system with multiple databases, and there are db and tlog backups that run on schedules. There is a list of tables that need a lot of records purged from them. What would be a good approach for deleting the old records?
    I've been digging through old posts, reading best practices, etc., but I'm still not sure of the best way to attack it.
    Approach #1
    A one-time delete that does everything: delete all the old records, in batches of say 50,000 at a time. After each run through all the tables for that DB, execute a tlog backup.
    Approach #2
    Create a job that does a similar process as above, except don't loop; only do the batch once. Have the job scheduled to start, say, on the half hour, assuming the tlog backups run every hour.
    Note:
    Some of these tables (well, most) are going to have relations on them.

    Hi shiftbit,
    Based on your description, I have changed the type of this question to a discussion; that way more experts will focus on this issue and assist you. When deleting a large number of records from tables, you can use bulk (batched) deletions so that the transaction log does not grow and run out of disk space. If you can take the table offline for maintenance, a complete reorganization is always best, because it does the delete and places the table back into a pristine state.
    For more information about deleting a large number of records without affecting the transaction log:
    http://www.virtualobjectives.com.au/sqlserver/deleting_records_from_a_large_table.htm
    Hope it can help.
    Regards,
    Sofiya Li
    TechNet Community Support
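
    As a rough illustration of the batched delete that both approaches describe, a minimal T-SQL sketch (the table name, date column, and retention rule are made-up placeholders; tune the batch size to your system):

        -- Delete old rows in small batches so each transaction stays short
        -- and the scheduled tlog backups can truncate the log between runs.
        DECLARE @batch INT = 50000;
        WHILE 1 = 1
        BEGIN
            DELETE TOP (@batch) FROM dbo.SomeBigTable
            WHERE  CreatedDate < DATEADD(YEAR, -2, GETDATE());

            IF @@ROWCOUNT < @batch BREAK;   -- nothing (or little) left to purge
        END;

    With related tables, delete from the child tables first, or rely on ON DELETE CASCADE where it is already defined.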

  • Lookups with a large number of records do not return the page

    Hi,
    I am developing an application using Oracle JHeadstart 10.1.3, preview version 10.1.3.0.78.
    In my application I created a lookup under Domains and used that lookup for an attribute (the Display Type for this attribute is dropDownList) in a group, to get the translation for this attribute. The group has around 14,800 records and the lookup around 7,400 records.
    When I try to open this group (tab), the progress indicator shows that it is working, but the page does not open even after a long time.
    If I change the Display Type for the attribute from dropDownList to textInput, then it works fine.
    I have other lookups with lower numbers of records; those lookups work fine with the dropDownList Display Type. I only have this kind of problem with a lookup that has a large number of records.
    Is there any limitation on the number of records for lookups under Domains? How can I solve this?
    I need to translate the attribute (get the description from another table using the code).
    Your help would be appreciated.
    Thanks
    Syed

    We also faced a similar issue; for us, it happened when we used the dropDownList in a table layout, while the same dropDownList worked in form layout. In our case the JVM used to crash, and after Googling it here in the forums, we found that it might be related to a JVM issue on Windows XP machines without Service Pack 2.
    Anyway... the workaround we took to get around the issue is to use an LOV instead of a dropDownList in your JHeadstart application.
    Hope this helps...
    - rutwik

  • Find tables with the largest number of records?

    Hi,
    I need to find the tables with the largest number of records. Does any transaction show these details?
    aRs

    Go to transaction DB02, then click on the button that reads "Space Statistics". In the dialog box, click OK and leave the "*" for all tables. In the next screen, put your cursor in the column labeled Rows and click the sort button. Now you will see your biggest tables at the top of the list.
    Regards,
    Rich Heilman
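
    If you also have direct database access, the same answer can be read from the data dictionary; a sketch assuming Oracle underneath (NUM_ROWS is only as fresh as the last statistics gathering):

        SELECT *
        FROM  (SELECT owner, table_name, num_rows
               FROM   dba_tables
               ORDER  BY num_rows DESC NULLS LAST)
        WHERE  ROWNUM <= 20;   -- top 20 tables by (estimated) row count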

  • Not saving the data in two tables

    Hello,
    This is a production problem. I have an update form where you can update records, and these records sit in temp tables until final approval from the supervisor.
    In this update form I save data into two tables: dup_emp, for the officer data, and dup_address, for the data about where the officer worked. The address form is a pop-up screen where you can update the address and then return to the original form, where you can see all the other fields.
    My problem is this: if a user hits the Cancel button on the address form (for example, because the user doesn't want to update any information on that screen), then comes back to the other screen, makes changes to the appropriate fields, and hits the SAVE button, the data is saved only to the dup_emp table; the address data from the address form is not saved to dup_address for the same record.
    If the user hits Cancel on both screens, the record should be deleted from both tables; but a cancel in one form and a save in the other should save the record in both tables.
    Here is the code from the Cancel buttons of both forms.
    This code is from the address form's Cancel button:
    delete from dup_address
     where address_id = :address_id
       and parent_table_name = 'emp';
    commit;
    CLEAR_BLOCK;
    go_block('DUP_EMP');
    This code is from the dup_emp form's Cancel button:
    declare
      temp_address_id varchar2 (12);
    begin
      delete from dup_emp
       where secondemp_id = :dup_emp.secondemp_id;
      delete from dup_address
       where parent_t_id = :global.secondemp
         and parent_table_name = 'emp';
      commit;
      clear_block;
      go_block('secondaryemp');
    end;

    Hi,
    As Aravind mentioned, it's nothing related to workflow. You have to find a BADI in tcode PA30 that could be used after the infotype is updated. So, you can use FM SAVE_TEXT.
    Regards,

  • Data from 3 tables with the latest dates

    Hi,
    I need some help with PL/SQL code. I need to write a query that will get data from 3 tables, all with the latest date.
    For a particular ACT_CODE, the output of the SQL query should show the data having the latest dates from the 3 tables; if there is no date in a table, it should still show the remaining data (I think a left join will do the trick here).
    Table names:
    Institution_UPDT aiu
    ASQ_CONTACT ac
    GR_AUTHORIZE gr
    All 3 tables have ACT_CODE in common.
    Column names:
    INSTITUTION_UPDT aiu -- aiu.ACT_CODE, aiu.project_id AS proj, aiu.UPDT_TYPE_ID, aiu.USER_ID, aiu.UPDT_DATE
    ASQ_CONTACT ac -- ac.ACT_CODE AS contact_code, ac.project_id, ac.first_name, ac.middle_initial, ac.last_name, ac.title, ac.status, ac.status_date
    GR_AUTHORIZE gr -- gr.ACT_CODE AS grad_code, gr.name, gr.title AS grad_title, gr.submit_date
    The date column names are
    ac.status_date,
    aiu.UPDT_DATE and
    gr.submit_date
    Thank you everyone
    appreciate your help
    Jesh

    Hi, Jesh,
    user11095252 wrote:
    That is correct, I want to include all the columns from ASQ_CONTACT, INSTITUTION_UPDT and GR_AUTHORIZE
    Oh! You want all columns from all three tables, not just ASQ_CONTACT. That changes the problem considerably!
    UNION requires that all prongs have the same number of columns, and that the datatypes of the columns match. That's no problem if we just need act_code and a date from each one. If we just need additional columns from one table, it's easy to add literal NULLs to the other prongs to serve as the additional columns. But if we need all (or even several) columns from all three tables, that's no good. So let's revert to your original idea: outer joins.
    I want to display only one row which has the latest date with the most recently updated time (example: mm/dd/yyyy hr:min:sec am/pm)
    Yes, but what if there is a tie for the most recently updated time?
    In case of a tie, the query below will pick one of the contenders arbitrarily. That may be fine with you (e.g., you may have UNIQUE constraints, making ties impossible). If you need a tie-breaker, you can add more columns to the analytic ORDER BY clauses.
    WITH aiu AS
    (
        SELECT  institution_updt.*     -- or list columns wanted
        ,       ROW_NUMBER () OVER ( PARTITION BY  act_code
                                     ORDER BY      updt_date  DESC
                                   ) AS r_num
        FROM    institution_updt
        WHERE   act_code   = :p1_act_code
        AND     project_id = :p2_project_id
    )
    , ac AS
    (
        SELECT  asq_contact.*          -- or list columns wanted
        ,       ROW_NUMBER () OVER ( PARTITION BY  act_code
                                     ORDER BY      status_date  DESC
                                   ) AS r_num
        FROM    asq_contact
        WHERE   act_code   = :p1_act_code
        AND     project_id = :p2_project_id
    )
    , gr AS
    (
        SELECT  gr_authorize.*         -- or list columns wanted
        ,       ROW_NUMBER () OVER ( PARTITION BY  act_code
                                     ORDER BY      submit_date  DESC
                                   ) AS r_num
        FROM    gr_authorize
        WHERE   act_code   = :p1_act_code
    )
    SELECT  *     -- or list columns wanted
    FROM            aiu
    FULL OUTER JOIN ac  ON  ac.act_code = aiu.act_code
                        AND ac.r_num    = 1
                        AND aiu.r_num   = 1
    FULL OUTER JOIN gr  ON  gr.act_code = NVL (ac.act_code, aiu.act_code)
                        AND gr.r_num    = 1
    ;
    That's a lot of code, so there may be typos. If you'd post CREATE TABLE and INSERT statements for a few rows of sample data, I could test it.
    In all places where I said "SELECT *" above, you may want to list the individual columns you want.
    If you do that in the sub-queries, then you don't have to qualify the names with the table name: that's only required when saying "SELECT *" with another column (r_num, in this case).
    It's more likely that you won't want to say "SELECT *" in the main query. The three r_num columns, while essential to the query, are completely useless to your readers, and you might prefer to have just one act_code column, since it will be the same for all tables that have it. But since it may be NULL in any of the tables, you'll have to SELECT it like this:
    SELECT  COALESCE ( aiu.act_code
                     , ac.act_code
                     , gr.act_code
                     )  AS act_code
    The query above will actually work for multiple act_codes. You can change the condition to something like:
    WHERE   act_code IN (&act_code_list)
    If so, remember to change it in all three sub-queries.

  • Trigger to insert unique data into another table (having more than 40 million records) - MySQL

    Hi All,
    I am facing a trigger performance problem in MySQL. The scenario is this:
    I have one table with duplicate records consisting of (eid, tin, status; there are other columns too, but I need only these three). There is another table having the same three columns (eid, tin, status).
    eid and tin will be the same for a given combination; only the status will differ, i.e.:
    1245 23 0
    1245 23 1
    1245 23 5
    1233 33 3
    1211 24 2
    1211 24 5
    As per the example above, I have to feed the other table as:
    1245 23 0
    1233 33 3
    1211 24 5
    The priority of the statuses: 0 is inserted if it is present in the records; otherwise the highest status, from 5 down to 1, wins.
    I designed a trigger for this which inserts the data after reading each row, but it takes around 6.5 minutes to insert 300,000 records. Is there any other way to improve the performance of this MySQL program?
    DELIMITER $$
    CREATE
        /*[DEFINER = { user | CURRENT_USER }]*/
        TRIGGER `kyr_log`.`upd_status` AFTER INSERT
        ON `kyr_log`.`kyrlog_bup`
        FOR EACH ROW
    BEGIN
        -- NEW.* already holds the row that fired the trigger; re-reading it with
        -- SELECT ... ORDER BY kyrl_id DESC LIMIT 1 sorts the whole table for every
        -- inserted row and can even pick up another session's newer row.
        IF NEW.kyrl_eid NOT IN (SELECT kyrl_eid FROM update_status.new_status) THEN
            INSERT INTO update_status.new_status (kyrl_eid, kyrl_tin, kyrl_status)
            VALUES (NEW.kyrl_eid, NEW.kyrl_tin, NEW.kyrl_status);
        ELSEIF NEW.kyrl_status = 0 THEN
            -- status 0 always wins
            UPDATE update_status.new_status
               SET kyrl_status = NEW.kyrl_status
             WHERE kyrl_eid = NEW.kyrl_eid;
        ELSEIF NEW.kyrl_status BETWEEN 2 AND 5
           AND NEW.kyrl_status > ANY (SELECT kyrl_status FROM kyrlog_bup
                                       WHERE kyrl_eid = NEW.kyrl_eid
                                         AND kyrl_status <> 0) THEN
            -- statuses 2..5 (the four identical branches collapsed into one):
            -- keep the highest non-zero status seen so far
            UPDATE update_status.new_status
               SET kyrl_status = NEW.kyrl_status
             WHERE kyrl_eid = NEW.kyrl_eid;
        END IF;
    END$$
    DELIMITER ;
    Please suggest whether there is any other possible solution.
    Thanks

    Actually, you haven't looked at the discussions on this link; there are many discussions related to MySQL, and MySQL is owned by Oracle, so I posted it here.
    Thanks for the suggestion.
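
    For what it's worth, a set-based load usually beats a per-row trigger by orders of magnitude here. A minimal sketch of the same priority rule, assuming update_status.new_status has a unique key on kyrl_eid (as the NOT IN check above implies); verify that assumption before using it:

        -- One pass over the source table:
        -- status 0 wins if present for an eid/tin pair; otherwise the highest status wins.
        INSERT INTO update_status.new_status (kyrl_eid, kyrl_tin, kyrl_status)
        SELECT   kyrl_eid
        ,        kyrl_tin
        ,        IF(SUM(kyrl_status = 0) > 0, 0, MAX(kyrl_status))
        FROM     kyr_log.kyrlog_bup
        GROUP BY kyrl_eid, kyrl_tin
        ON DUPLICATE KEY UPDATE kyrl_status = VALUES(kyrl_status);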

  • Issue in data extraction: source tables having columns with length greater than 60

    Hi BI Experts,
    I have an issue while extracting data from Oracle tables. I encountered some columns for which the length of the character stream is more than 60, somewhere around 200 to 300, for example: reason for some action, comments, description.
    I am not able to treat them as master data text, since these fields come with the transaction data. In SAP BI we can have the data type CHAR with a maximum length of 60. Now, how can I deal with this situation in a better way?
    Could you please come up with your ideas?
    Expecting interesting solutions,
    Anurag

    Hello Charan,
    First check this blog:
    http://www.sdn.sap.com/irj/scn/weblogs;jsessionid=(J2EE3417800)ID0294722750DB10878770002327649734End?blog=/pub/wlg/3705
    It may already help.
    Another method is to report from the PSA tables, but no how-to is available for that.
    Br.
    Joerg

  • Inserting a large number of records into a table

    I was trying to find a fast way to optimize a script that inserts a large number of records into a table. The initial script was like:
    insert into table_xxxx
    select a.camp1, a.camp2, a.camp3, a.camp4, b.camp1, b.camp2, b.camp3
    from table_a a, table_b b
    where a.camp0 = b.camp0
    The commit statement was at the end of the insert script, so I came up with this solution:
    Declare
        TYPE Tab_Hist IS TABLE OF table_xxxx%ROWTYPE INDEX BY BINARY_INTEGER;
        g_tHist    Tab_Hist;
        dml_errors EXCEPTION;
        PRAGMA EXCEPTION_INIT (dml_errors, -24381);   -- raised by SAVE EXCEPTIONS
        CURSOR c_Base IS
            select a.camp1, a.camp2, a.camp3, a.camp4, b.camp1, b.camp2, b.camp3
            from table_a a, table_b b
            where a.camp0 = b.camp0;
    BEGIN
        OPEN c_Base;
        LOOP
            FETCH c_Base BULK COLLECT INTO g_tHist LIMIT 1000;
            EXIT WHEN g_tHist.COUNT = 0;
            BEGIN
                FORALL i IN 1 .. g_tHist.COUNT SAVE EXCEPTIONS
                    INSERT INTO prov_cobr_dud VALUES g_tHist(i);
                COMMIT;
            EXCEPTION
                WHEN dml_errors THEN
                    NULL;   -- inspect SQL%BULK_EXCEPTIONS here if needed
            END;
            g_tHist.DELETE;
        END LOOP;
        CLOSE c_Base;
        commit;
    END;
    If anyone could tell me another way to do the same thing, I'd appreciate it a lot; I'm keen to learn more efficient ways to optimize scripts.
    PS: The initial insert loaded about 120,000 records.

    Hello,
    Wrong forum. This is the Oracle Forms forum. You should post in the SQL-PL/SQL forum.
    Francois
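
    For reference all the same: when intermediate commits aren't actually required, a single direct-path insert is usually the fastest option. A minimal sketch using the table names from the post (check that constraints and triggers on the target allow it):

        -- One statement, one commit; APPEND is only a hint and may be ignored
        INSERT /*+ APPEND */ INTO table_xxxx
        SELECT a.camp1, a.camp2, a.camp3, a.camp4, b.camp1, b.camp2, b.camp3
        FROM   table_a a, table_b b
        WHERE  a.camp0 = b.camp0;
        COMMIT;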

  • Analyze table after inserting a large number of records?

    For performance purposes, is it good practice to execute an 'analyze table' command after inserting a large number of records into a table in Oracle 10g, if a complex query follows the insert?
    For example:
    Insert into foo ......                  //Insert one million records into table foo.
    analyze table foo COMPUTE STATISTICS;   //analyze table foo
    select * from foo, bar, car......       //Execute a complex query without hints
                                            //after 1 million records inserted into foo
    Does this strategy help to improve the overall performance?
    Thanks.
    Thanks.

    Different execution plans will most frequently occur when the ratio of the number of records in the various tables involved in the select has changed tremendously. This happens above all if 'fact' tables are growing while 'lookup' tables stay constant.
    This is why you shouldn't test an application with a small number of 'fact' records.
    This can happen both with analyze table and dbms_stats.
    The advantage of dbms_stats is that it will export the current statistics to a stats table, so you can always revert to them using dbms_stats.import_stats.
    You can even overrule individual table and column statistics with artificial values.
    Hth
    Sybrand Bakker
    Senior Oracle DBA
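
    A minimal sketch of the dbms_stats approach described above (USER and 'FOO' stand in for the owner and the table from the example):

        BEGIN
            -- keep a copy of the current statistics so they can be restored later
            DBMS_STATS.CREATE_STAT_TABLE (ownname => USER, stattab => 'MY_STATS');
            DBMS_STATS.EXPORT_TABLE_STATS (ownname => USER, tabname => 'FOO',
                                           stattab => 'MY_STATS');
            -- gather fresh statistics after the large insert
            DBMS_STATS.GATHER_TABLE_STATS (ownname => USER, tabname => 'FOO',
                                           cascade => TRUE);
        END;
        /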

  • How to capture a table with a large number of rows in a Web UI Test?

    Hi,
    Is there any way to capture a DOM table with a large number of rows (say, more than 100) in a Web UI Test?
    Or is there a bug?

    Hi,
    You can try the following code to capture the table values.
    To store the table values in a CSV file:
    web.table( xpath_of_table ).exportToCSVFile("D:\\exporttable.csv", true);
    To store the table values in a string:
    String tblValues = web.table( xpath_of_table ).exportToCSVString();
    info(tblValues);
    Thanks
    -POPS

  • Not receiving email when sending a large number of records using an FM?

    Hi,
    I am using the function module SO_DOCUMENT_SEND_API1 to send email.
    When there is a single record, or around 5-6 records, the email arrives successfully.
    But when there are more records, say around 100, the email does not arrive. I checked transaction SOST and the status there is red, with the error message "Internal error: SO_OBJECT_MIME_GET Exception: 2".
    What could be the reason behind this problem?
    I have another problem: my output has over 60 fields, but the email which I am receiving has only around 10 fields. How do I solve this?
    Please help.

    Well... right now I am trying to get only the first 2 fields, but even in this case I am not getting the email if around 15 records are there.
    I am using the code given below, which I found on SDN. In this code, data is selected from EKPO. I tried changing the number of rows selected, and in that case the attachment comes out as desired; but when I use the same code in my program I do not get the mail, even if there are only 10 records or so.
    *& Report  ZT062108   ALV Header                                    *
    REPORT  zt062108.
    TABLES: ekko.
    PARAMETERS: p_email   TYPE somlreci1-receiver
                                      DEFAULT '<give email here>'.
    TYPES: BEGIN OF t_ekpo,
      ebeln TYPE ekpo-ebeln,
      ebelp TYPE ekpo-ebelp,
      aedat TYPE ekpo-aedat,
      matnr TYPE ekpo-matnr,
    END OF t_ekpo.
    DATA: it_ekpo TYPE STANDARD TABLE OF t_ekpo INITIAL SIZE 0,
          wa_ekpo TYPE t_ekpo.
    TYPES: BEGIN OF t_charekpo,
      ebeln(10) TYPE c,
      ebelp(5)  TYPE c,
      aedat(8)  TYPE c,
      matnr(18) TYPE c,
    END OF t_charekpo.
    DATA: wa_charekpo TYPE t_charekpo.
    DATA:   it_message TYPE STANDARD TABLE OF solisti1 INITIAL SIZE 0
                    WITH HEADER LINE.
    DATA:   it_attach TYPE STANDARD TABLE OF solisti1 INITIAL SIZE 0
                    WITH HEADER LINE.
    DATA:   t_packing_list LIKE sopcklsti1 OCCURS 0 WITH HEADER LINE,
            t_contents LIKE solisti1 OCCURS 0 WITH HEADER LINE,
            t_receivers LIKE somlreci1 OCCURS 0 WITH HEADER LINE,
            t_attachment LIKE solisti1 OCCURS 0 WITH HEADER LINE,
            t_object_header LIKE solisti1 OCCURS 0 WITH HEADER LINE,
            w_cnt TYPE i,
            w_sent_all(1) TYPE c,
            w_doc_data LIKE sodocchgi1,
            gd_error    TYPE sy-subrc,
            gd_reciever TYPE sy-subrc.
    *START_OF_SELECTION
    START-OF-SELECTION.
    *   Retrieve sample data from table ekpo
      PERFORM data_retrieval.
    *   Populate table with details to be entered into .xls file
      PERFORM build_xls_data_table.
    *END-OF-SELECTION
    END-OF-SELECTION.
    * Populate message body text
      perform populate_email_message_body.
    * Send file by email as .xls speadsheet
      PERFORM send_file_as_email_attachment
                                   tables it_message
                                          it_attach
                                    using p_email
                                          'Example .xls document attachment'
                                          'TXT'
                                          'filename'
                                          'Example attachment'  " attachment description
                                          space                 " sender address (blank = current user)
                                          space                 " sender address type
                                 changing gd_error
                                          gd_reciever.
    *   Instructs mail send program for SAPCONNECT to send email(rsconn01)
      PERFORM initiate_mail_execute_program.
    *&      Form  DATA_RETRIEVAL
    *       Retrieve data from EKPO table and populate itab it_ekpo
    FORM data_retrieval.
      SELECT ebeln ebelp aedat matnr
       UP TO 1000 ROWS
        FROM ekpo
        INTO TABLE it_ekpo.
    ENDFORM.                    " DATA_RETRIEVAL
    *&      Form  BUILD_XLS_DATA_TABLE
    *       Build data table for .xls document
    FORM build_xls_data_table.
    *  CONSTANTS: con_cret TYPE x VALUE '0D'.  "OK for non Unicode
    *             con_tab TYPE x VALUE '09'.   "OK for non Unicode
    *If you have the Unicode check active in program attributes then you will
    *need to declare the constants as follows
    *class cl_abap_char_utilities definition load.
    constants:
        con_tab  type c value cl_abap_char_utilities=>HORIZONTAL_TAB,
        con_cret type c value cl_abap_char_utilities=>CR_LF.
      CONCATENATE 'EBELN' 'EBELP' 'AEDAT' 'MATNR'
             INTO it_attach SEPARATED BY con_tab.
      CONCATENATE con_cret it_attach  INTO it_attach.
      APPEND  it_attach.
      LOOP AT it_ekpo INTO wa_charekpo.
        CONCATENATE wa_charekpo-ebeln wa_charekpo-ebelp
                    wa_charekpo-aedat wa_charekpo-matnr
               INTO it_attach SEPARATED BY con_tab.
        CONCATENATE con_cret it_attach  INTO it_attach.
        APPEND  it_attach.
      ENDLOOP.
    ENDFORM.                    " BUILD_XLS_DATA_TABLE
    *&      Form  SEND_FILE_AS_EMAIL_ATTACHMENT
    *       Send email
    FORM send_file_as_email_attachment tables pit_message
                                              pit_attach
                                        using p_email
                                              p_mtitle
                                              p_format
                                              p_filename
                                              p_attdescription
                                              p_sender_address
                                              p_sender_addres_type
                                     changing p_error
                                              p_reciever.
      DATA: ld_error    TYPE sy-subrc,
            ld_reciever TYPE sy-subrc,
            ld_mtitle LIKE sodocchgi1-obj_descr,
            ld_email LIKE  somlreci1-receiver,
            ld_format TYPE  so_obj_tp ,
            ld_attdescription TYPE  so_obj_nam ,
            ld_attfilename TYPE  so_obj_des ,
            ld_sender_address LIKE  soextreci1-receiver,
            ld_sender_address_type LIKE  soextreci1-adr_typ,
            ld_receiver LIKE  sy-subrc.
      ld_email   = p_email.
      ld_mtitle = p_mtitle.
      ld_format              = p_format.
      ld_attdescription      = p_attdescription.
      ld_attfilename         = p_filename.
      ld_sender_address      = p_sender_address.
      ld_sender_address_type = p_sender_addres_type.
    * Fill the document data.
      w_doc_data-doc_size = 1.
    * Populate the subject/generic message attributes
      w_doc_data-obj_langu = sy-langu.
      w_doc_data-obj_name  = 'SAPRPT'.
      w_doc_data-obj_descr = ld_mtitle .
      w_doc_data-sensitivty = 'F'.
    * Fill the document data and get size of attachment
      CLEAR w_doc_data.
      DESCRIBE TABLE it_attach LINES w_cnt.   " number of attachment lines
      READ TABLE it_attach INDEX w_cnt.
      w_doc_data-doc_size =
         ( w_cnt - 1 ) * 255 + STRLEN( it_attach ).
      w_doc_data-obj_langu  = sy-langu.
      w_doc_data-obj_name   = 'SAPRPT'.
      w_doc_data-obj_descr  = ld_mtitle.
      w_doc_data-sensitivty = 'F'.
      CLEAR t_attachment.
      REFRESH t_attachment.
      t_attachment[] = pit_attach[].
    * Describe the body of the message
      CLEAR t_packing_list.
      REFRESH t_packing_list.
      t_packing_list-transf_bin = space.
      t_packing_list-head_start = 1.
      t_packing_list-head_num = 0.
      t_packing_list-body_start = 1.
      DESCRIBE TABLE it_message LINES t_packing_list-body_num.
      t_packing_list-doc_type = 'RAW'.
      APPEND t_packing_list.
    * Create attachment notification
      t_packing_list-transf_bin = 'X'.
      t_packing_list-head_start = 1.
      t_packing_list-head_num   = 1.
      t_packing_list-body_start = 1.
      DESCRIBE TABLE t_attachment LINES t_packing_list-body_num.
      t_packing_list-doc_type   =  ld_format.
      t_packing_list-obj_descr  =  ld_attdescription.
      t_packing_list-obj_name   =  ld_attfilename.
      t_packing_list-doc_size   =  t_packing_list-body_num * 255.
      APPEND t_packing_list.
    * Add the recipients email address
      CLEAR t_receivers.
      REFRESH t_receivers.
      t_receivers-receiver = ld_email.
      t_receivers-rec_type = 'U'.
      t_receivers-com_type = 'INT'.
      t_receivers-notif_del = 'X'.
      t_receivers-notif_ndel = 'X'.
      APPEND t_receivers.
      CALL FUNCTION 'SO_DOCUMENT_SEND_API1'
           EXPORTING
                document_data              = w_doc_data
                put_in_outbox              = 'X'
                sender_address             = ld_sender_address
                sender_address_type        = ld_sender_address_type
                commit_work                = 'X'
           IMPORTING
                sent_to_all                = w_sent_all
           TABLES
                packing_list               = t_packing_list
                contents_bin               = t_attachment
                contents_txt               = it_message
                receivers                  = t_receivers
           EXCEPTIONS
                too_many_receivers         = 1
                document_not_sent          = 2
                document_type_not_exist    = 3
                operation_no_authorization = 4
                parameter_error            = 5
                x_error                    = 6
                enqueue_error              = 7
                OTHERS                     = 8.
    * Populate zerror return code
      ld_error = sy-subrc.
    * Populate zreceiver return code
      LOOP AT t_receivers.
        ld_receiver = t_receivers-retrn_code.
      ENDLOOP.
    ENDFORM.
    *&      Form  INITIATE_MAIL_EXECUTE_PROGRAM
    *       Instructs mail send program for SAPCONNECT to send email.
    FORM initiate_mail_execute_program.
      WAIT UP TO 2 SECONDS.
      SUBMIT rsconn01 WITH mode = 'INT'
                    WITH output = 'X'
                    AND RETURN.
    ENDFORM.                    " INITIATE_MAIL_EXECUTE_PROGRAM
    *&      Form  POPULATE_EMAIL_MESSAGE_BODY
    *        Populate message body text
    form populate_email_message_body.
      REFRESH it_message.
      it_message = 'Please find attached a list of test ekpo records'.
      APPEND it_message.
    endform.                    " POPULATE_EMAIL_MESSAGE_BODY
