Number of rows differs across 9i logical Data Guard

Hi,
I have a weird situation and need help investigating it:
We have a 9i primary database and a logical standby database used for reporting. In the logical standby, apart from the main APPDBM schema from the primary, we also have an RPTDBM schema containing reporting objects. RPTDBM does not exist on the primary.
We keep GUARD_STATUS at the "STANDBY" level in the logical standby database to allow modifications to the RPTDBM schema.
But today we observed differences in the number of rows for many tables (131 out of 2000) between the APPDBM schema on the primary and on the standby database.
How could this have happened? What safeguards should we implement to prevent it?
Any help is greatly appreciated.

You have a customized solution. Without knowing all about the process of creating, populating, and maintaining the reporting tables, it's impossible to say.
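As a starting point, it can help to check whether SQL Apply has reported errors or skipped anything, and to compare counts against the primary directly. A minimal sketch (APPDBM.SOME_TABLE is a placeholder, and PRIM_LINK is a hypothetical database link to the primary):
-- On the logical standby: look for apply errors or skipped transactions.
SELECT event_time, status, event
  FROM dba_logstdby_events
 ORDER BY event_time DESC;
-- Compare a suspect table's row count against the primary.
SELECT (SELECT COUNT(*) FROM appdbm.some_table)           AS standby_cnt,
       (SELECT COUNT(*) FROM appdbm.some_table@prim_link) AS primary_cnt
  FROM dual;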

Similar Messages

  • Difference of number of rows in VC form

    Hello,
    I have imported the query into VC, done the required mappings, and deployed it. I am expecting to see about 200 rows in my form instance, but I am seeing just one row. I have double-checked in BEx; the form instance in BEx is able to pull all 200 rows. Can anyone tell me what the problem could be here?
    I am out of ideas.
    Thanks,
    Kiran

    The table should show a scrollbar or more than one page if the rows don't fit. You can adjust the number of rows at design time in the layout: just resize the table until it shows as many rows as you like. It does not extend automatically.
    Anja

  • How to get number of rows

    Hi!
    Can someone tell me how to obtain the number of rows returned by a query?
    I tried to use getFetchSize() but it seems to always return 0 for my query.
    THANKS in advance!
    ResultSet testRS = Stmt.executeQuery(testStmt);
    int size = testRS.getFetchSize();
    System.out.println("size = " + size);
    Wo Lay

    To get the number of rows returned, you have to loop through the result set, or do a "select count" (which is more efficient since it doesn't have to pass the rows across the network).
    Looping through the entire result set is not very efficient, and if you use count(*) you actually execute an extra query. The best thing you can do is:
    // assume res is your result set, created as a scrollable ResultSet
    res.last();                // jump to the last row
    int rows = res.getRow();   // its row number is the total row count
    res.beforeFirst();         // rewind if you still need to iterate over the rows
    This way you only have to execute one query and you don't need to loop through the entire result set. Of course you will have to have a scrollable result set (e.g. created with ResultSet.TYPE_SCROLL_INSENSITIVE), if this is supported by your driver and DB.
    Hope that helped.
    afotoglidis

  • How to restrict number of row/lines in a table in RTF

    Hi,
    I'm working on a PO report. It has header information in the header part of the RTF, but the lines I put under a different group in a table in the body part of the RTF. I use a FOR loop for printing PO lines because there can be multiple lines. Now I have to restrict the number of rows in that table per page irrespective of the number of PO lines: after printing 20 rows in the table, it should go to the next page. I'm going through the other threads for the solution.
    Thanks,
    venkat

    There are many threads regarding this;
    please go through them patiently.
    The logic is something like:
    use an if condition with position() and issue a page break;
    use for-each@section instead of for-each, so that the header info is maintained with regard to the PO lines.
    Happy searching.
    If you still can't get it, as a last option send me a mail with the details.

  • How to determine the number of rows to be displayed in a report

    hello experts,
    Has anyone ever come across this requirement before? Before a report finishes executing and displays its results, is there a way to determine the number of rows that will be displayed in the result?
    Many thanks in advance.
    Regards,
    Inma

    Hello Arun,
    Thanks for your reply, but do you know which method I should use for this purpose if I use the table interface?
    Thanks,
    Inma

  • New table without statistics returns invalid number of rows

    Hi,
    I've been searching for a while now for an explanation of the following "problem".
    We have an Oracle 11.1.0.7 database on AIX 5.3.
    In this database we have two tables, called KRT_PRODUCTS_INFO and KRT_STRUCTURES_INFO (the table names don't really matter).
    The scenario is as following:
    If we recreate these tables like:
    CREATE TABLE KRT_PRODUCT_INFO_BUP AS SELECT * FROM KRT_PRODUCT_INFO;
    DROP TABLE KRT_PRODUCT_INFO CASCADE CONSTRAINTS;
    CREATE TABLE KRT_PRODUCT_INFO (...) TABLESPACE PIM_DATA NOLOGGING NOCOMPRESS NOCACHE NOPARALLEL MONITORING;
    CREATE INDEX KRT_PRODUCT_INFO_X1 ON KRT_PRODUCT_INFO (PRODUCT_NUMBER) NOLOGGING TABLESPACE PIM_DATA NOPARALLEL;
    CREATE INDEX KRT_PRODUCT_INFO_X2 ON KRT_PRODUCT_INFO (PIM_ARTICLEREVISIONID) NOLOGGING TABLESPACE PIM_DATA NOPARALLEL;
    INSERT INTO KRT_PRODUCT_INFO (SELECT * FROM KRT_PRODUCT_INFO_BUP);
    COMMIT;
    CREATE TABLE KRT_STRUCTURE_INFO_BUP AS SELECT * FROM KRT_STRUCTURE_INFO;
    DROP TABLE KRT_STRUCTURE_INFO CASCADE CONSTRAINTS;
    CREATE TABLE KRT_STRUCTURE_INFO (...) TABLESPACE PIM_DATA NOLOGGING NOCOMPRESS NOCACHE NOPARALLEL MONITORING;
    CREATE INDEX KRT_STRUCTURES_X1 ON KRT_STRUCTURE_INFO (STRUCTURE_GRP_REV_ID) NOLOGGING TABLESPACE PIM_DATA NOPARALLEL;
    CREATE INDEX KRT_STRUCTURES_X2 ON KRT_STRUCTURE_INFO (STRUCTURE_GRP_IDENTIFIER) NOLOGGING TABLESPACE PIM_DATA NOPARALLEL;
    CREATE INDEX KRT_STRUCTURES_X3 ON KRT_STRUCTURE_INFO (STRUCTURE_GRP_ID) NOLOGGING TABLESPACE PIM_DATA NOPARALLEL;
    INSERT INTO KRT_STRUCTURE_INFO (SELECT * FROM KRT_STRUCTURE_INFO_BUP);
    COMMIT;
    and we then run a complex query against these two tables, the query returns only a couple of rows (exactly 24!).
    If, however, we gather statistics on these tables after creation, the correct number of rows is returned: 1,167,991 rows.
    The statistics are gathered using:
    BEGIN
    SYS.DBMS_STATS.GATHER_TABLE_STATS (
    OwnName => 'PIM_KRG'
    ,TabName => 'KRT_PRODUCT_INFO'
    ,Estimate_Percent => NULL
    ,Method_Opt => 'FOR ALL COLUMNS SIZE REPEAT '
    ,Degree => NULL
    ,Cascade => TRUE
    ,No_Invalidate => FALSE);
    END;
    /
    BEGIN
    SYS.DBMS_STATS.GATHER_TABLE_STATS (
    OwnName => 'PIM_KRG'
    ,TabName => 'KRT_STRUCTURE_INFO'
    ,Estimate_Percent => NULL
    ,Method_Opt => 'FOR ALL COLUMNS SIZE REPEAT '
    ,Degree => NULL
    ,Cascade => TRUE
    ,No_Invalidate => FALSE);
    END;
    /
    I can imagine that the plan for the query is wrong because of the missing statistics.
    But I can't imagine that it would actually return an incorrect number of rows.
    I tested this behaviour in Toad and SQL*Plus (I first thought it was Toad), and both behave the same.
    Another fact is that the "problem" is NOT reproducible on our TEST environment, which runs Oracle 11.1.0.7 on Windows 2008.
    Just to be sure, this is the "complex" query used. It was not developed by me, and I think it looks somewhat strange, but that shouldn't matter:
    SELECT sr."Identifier" STRUCTURE_IDENTIFIER
    , ar_i."Identifier" ITEM_NUMBER
    , SUM (REPLACE (NVL (s.HIDE_LE10, 0) + NVL (p.HIDE_LE10, 0), 2, 1))
    hide_le10
    , SUM (REPLACE (NVL (s.HIDE_LE30, 0) + NVL (p.HIDE_LE30, 0), 2, 1))
    hide_le30
    , SUM (REPLACE (NVL (s.HIDE_LE40, 0) + NVL (p.HIDE_LE40, 0), 2, 1))
    hide_le40
    , SUM (REPLACE (NVL (s.HIDE_LE50, 0) + NVL (p.HIDE_LE50, 0), 2, 1))
    hide_le50
    , SUM (REPLACE (NVL (s.HIDE_LE55, 0) + NVL (p.HIDE_LE55, 0), 2, 1))
    hide_le55
    , SUM (REPLACE (NVL (s.HIDE_LE60, 0) + NVL (p.HIDE_LE60, 0), 2, 1))
    hide_le60
    , SUM (REPLACE (NVL (s.HIDE_LE70, 0) + NVL (p.HIDE_LE70, 0), 2, 1))
    hide_le70
    , SUM (REPLACE (NVL (s.HIDE_LE75, 0) + NVL (p.HIDE_LE75, 0), 2, 1))
    hide_le75
    , SUM (REPLACE (NVL (s.HIDE_LE58, 0) + NVL (p.HIDE_LE58, 0), 2, 1))
    hide_le58
    , SUM (REPLACE (NVL (s.HIDE_LE80, 0) + NVL (p.HIDE_LE80, 0), 2, 1))
    hide_le80
    , SUM (REPLACE (NVL (s.HIDE_LE90, 0) + NVL (p.HIDE_LE90, 0), 2, 1))
    hide_le90
    , SUM (REPLACE (NVL (s.HIDE_LE92, 0) + NVL (p.HIDE_LE92, 0), 2, 1))
    hide_le92
    , SUM (REPLACE (NVL (s.HIDE_LE94, 0) + NVL (p.HIDE_LE94, 0), 2, 1))
    hide_le94
    , SUM (REPLACE (NVL (s.HIDE_LE96, 0) + NVL (p.HIDE_LE96, 0), 2, 1))
    hide_le96
    , COUNT (*) cnt
    FROM KRAMP_HPM_MAIN."StructureRevision" sr
    , KRAMP_HPM_MAIN."StructureGroupRevision" sgr
    , KRAMP_HPM_MASTER."ArticleStructureMap" asm
    , KRAMP_HPM_MASTER."ArticleRevision" ar_p
    , KRAMP_HPM_MASTER."ArticleDetail" ad_p
    , KRAMP_HPM_MASTER."ArticleRevision" ar_i
    , KRAMP_HPM_MASTER."ArticleDetail" ad_i
    , KRAMP_HPM_MASTER."ArticleReference" ar
    , KRT_STRUCTURE_INFO s
    , KRT_PRODUCT_INFO p
    WHERE sr."StructureID" = sgr."StructureID"
    AND sgr."StructureGroupID" = asm."StructureGroupID"
    AND ar_p."ID" = asm."ArticleRevisionID"
    AND ar_p."ID" = ad_p."ArticleRevisionID"
    AND ad_p."Res_Text100_02" = 'PRODUCT'
    AND ar_i."ID" = ad_i."ArticleRevisionID"
    AND ad_i."Res_Text100_02" = 'ARTICLE'
    AND ar."ArticleRevisionID" = ar_p."ID"
    AND ar."ReferencedSupplierAID" = ar_i."Identifier"
    AND s.STRUCTURE_GRP_REV_ID = sgr."ID"
    AND p.PIM_ARTICLEREVISIONID = ar_p."ID"
    GROUP BY sr."Identifier", ar_i."Identifier";
    Any ideas are welcome.
    Thanks
    FJFranken

    Hemant K Chitale wrote:
    These two tables are in the PIM_KRG schema while the other tables in the query are distributed across two other schemas "KRAMP_HPM_MAIN" and "KRAMP_HPM_MASTER" ?
    Do you happen to have the same table names occurring in multiple schemas - the query is then referencing the data in the wrong schema ?
    Hemant K Chitale
    Hi,
    This is not the case. The KRAMP_HPM schemas are application-dedicated schemas.
    And that also does not explain why the results are correct after gathering statistics.
    Anyway thanks for the tip.
    FJFranken
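    For what it's worth, one way to check whether the missing statistics are at least changing the plan is to capture the execution plan for the same query before and after gathering statistics. A minimal sketch (run in the same session, right after executing the complex query):
    -- show the plan of the last statement executed in this session
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR);
    -- repeat after DBMS_STATS.GATHER_TABLE_STATS and compare the two plans;
    -- a different join order or access path confirms the plan is stats-dependent,
    -- though by itself that still doesn't explain a wrong result.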

  • How to Efficiently Sample a Fixed Number of Rows

    Good afternoon. I need to select a specific number of random rows from a table, and while I believe my logic is right, it's taking too long: 30 minutes for a routine data size. Hopefully someone here can show me a more efficient query. I've seen the SAMPLE clause, but it just randomly selects rows on a row-by-row basis, without a guaranteed total count.
    This is the idea:
    INSERT INTO Tmp_Table (Value, Sequence) SELECT Value, DBMS_RANDOM.VALUE FROM Perm_Table;
    SELECT Value FROM Tmp_Table WHERE ROWNUM <= 1234 ORDER BY Sequence;
    I'd need to put the ORDER BY in a subselect for ROWNUM to work correctly, but anyway that's just an illustration. My actual need is a little more complicated. I have many sets of data; each set has many rows; and for each set I need to return a specific (different) number of rows. Perhaps project A has three rows in this table and I want to keep two of them; project B has two rows and I want to keep one of them. So I need to identify, for each row, whether it's valid for that project. This is what my data looks like:
    Project Person  Sequence Position Keeper
    A       Bill    1234     1        Yes
    A       Fred    5678     3        No
    A       George  1927     2        Yes
    B       April   5784     2        No
    B       Janice  2691     1        Yes
    I populate Sequence with random values, then calculate the position of each person within their project, and finally discard people whose Position is greater than Max_Targets for the Project. Fred and April have the highest random numbers, so they're cut. It's not the case that I'm just trimming one person from each project; the actual percentage kept will range from zero to 100.
    Populating the list with random values is not time-consuming, but calculating Position is. This is my code:
    UPDATE Tmp_Targets T1
    SET Position = (
      SELECT
       COUNT(*)
      FROM
       Perm_Targets PT1
       INNER JOIN Perm_Targets PT2 ON PT1.Project = PT2.Project
       INNER JOIN Tmp_Targets T2 ON PT2.Target = T2.Target
      WHERE
           T1.Target = PT1.Target
       AND T2.Sequence <= T1.Sequence
    );
    The Target fields are PKs, and the Project and Sequence fields are indexed. Is there a better way to approach this? I could write a cursor that pulls out project codes and performs the above operations for each project in turn; that would be logically simpler and possibly faster. Has anyone here addressed a similar problem before? I'd appreciate any ideas.
    This is on 9.2, in case it matters. Thank you,
    Jonathan

    You've not given any indication of how max targets for a given project is determined, so for my example I'm using the ceiling of 1/2 of the number of records in each project which gives the same number of yes and no responses per project as you had:
    with dta as (
      select 'A' project, 'Bill' person from dual union all
      select 'A', 'Fred' from dual union all
      select 'A', 'George' from dual union all
      select 'B', 'April' from dual union all
      select 'B', 'Janice' from dual
    ), t1 as (
      select project
           , person
           , row_number() over (partition by project order by dbms_random.value) ord
           , count(*) over (partition by project) cnt
           , rownum rn
        from dta
    )
    select project
         , person
         , ord
         , cnt
         , case when ord <= ceil(cnt/2) then 'Yes' else 'No' end keep
      from t1
      order by rn
    PROJECT PERSON ORD                    CNT                    KEEP
    A       Bill   2                      3                      Yes 
    A       Fred   3                      3                      No  
    A       George 1                      3                      Yes 
    B       April  1                      2                      Yes 
    B       Janice 2                      2                      No  
    5 rows selected
    In this example I use an analytic function to assign a random ordering to each record within a project in the middle query; in the final output query I determine the yes/no status based on the order within a project and the count of records in the project. If you had a table of projects indicating the threshold, you could join to that and use the threshold in place of the ceil(cnt/2) portion of the inequality in the case statement.

  • Number of rows in Excel sheet

    In ABAP reports, is there any way to find the number of rows in an Excel sheet?
    Regards,
    Naseer.

    Hi,
    Try this code...
    REPORT zreport.
    PARAMETER p_infile LIKE rlgrap-filename DEFAULT 'C:\TEMP\ZMPR.xls'.
    DATA : lin TYPE i.
    DATA: itab LIKE alsmex_tabline OCCURS 0 WITH HEADER LINE.
    START-OF-SELECTION.
      CALL FUNCTION 'ALSM_EXCEL_TO_INTERNAL_TABLE'
        EXPORTING
          filename                = p_infile
          i_begin_col             = '1'
          i_begin_row             = '1'
          i_end_col               = '1'
          i_end_row               = '28000'
        TABLES
          intern                  = itab
        EXCEPTIONS
          inconsistent_parameters = 1
          upload_ole              = 2
          OTHERS                  = 3.
      IF sy-subrc <> 0.
        MESSAGE text-009 TYPE 'S'. "Problem uploading Excel Spreadsheet
        EXIT.
      ENDIF.
      DESCRIBE TABLE itab LINES lin.
      WRITE :/ 'Number of lines: ', lin.

  • Limiting number of rows returned by SQVI

    I've created a query in SQVI and I need to limit the number of rows returned by the query.  (I'm using the query as an exploratory tool, and it's not easy to predict how many records will be returned based on the selection fields I'm using.)
    Is there any way to add a 'Maximum No. of Hits' field to the SQVI selection screen similar to what is found when using SE16?
    Thanks,
    Bob

    It's not surprising that you are confused because the documentation doesn't bother to explain what the "fetch size" actually is, it just says that setFetchSize sets it and getFetchSize gets it. As I understand it from some other documents I read about JDBC, the fetch size is a number that may be used internally by the JDBC driver. Here's an example of how I understand it (others, I know you will feel free to correct me if you disagree):
    When the driver produces a result set with a very large number of records, it has to generate those records and deliver them to the system that requested them. If the database is not on the same system, then those records must all travel over the network. It could be a performance problem if you had to wait for (say) 80,000 records to travel across the network. Enter the fetch size. If you set the fetch size to 100, then the driver will bring the records across in batches of 100, as the program calls for them. Now, this buffering is transparent to your program; the driver doesn't tell you that it's getting another batch and you can't tell it to get another batch. So it is not a solution to the problem that everybody has here, namely how to display your records 10 per page and allow the user to go back and forth among those pages, like search engines do.

  • Find number of rows in  a cursor

    How can I find how many rows are in a cursor? I have a situation where, if my cursor returns only one row, I return the data; if it returns more than one row, I will need to do an additional select.

    If you have logic like this
    SELECT COUNT(*)
      INTO l_cnt
      FROM emp
    WHERE deptno = p_deptno;
    IF( l_cnt > 1 )
    THEN
      FOR x IN (SELECT * FROM emp WHERE deptno = p_deptno)
      LOOP
        ...
    ...where you are counting the rows and then later fetching them, getting the count and then getting the rows forces Oracle to do the work of the query twice. It will also potentially cause logic errors, because the number of rows returned by the COUNT(*) won't necessarily match the number of rows fetched from the cursor: some other session may have committed changes between your session executing those two statements.
    Can you describe the logic you are trying to implement in a bit more detail? There are potentially a number of different approaches depending on exactly what you are trying to do. Sometimes it may make sense to do a COUNT(*), sometimes it may make sense to fetch the data into a collection, count the collection elements, and then operate on the collection, sometimes it may make sense to do a SELECT INTO and handle the exception if the wrong number of rows are returned, etc.
    Justin
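    As an illustration of the SELECT INTO approach mentioned above, a minimal sketch (meant to sit inside the same procedure where p_deptno is defined, reusing the EMP example; how you handle the exceptional cases is up to you):
    DECLARE
      l_emp emp%ROWTYPE;
    BEGIN
      SELECT *
        INTO l_emp
        FROM emp
       WHERE deptno = p_deptno;   -- same predicate as in the example above
      -- exactly one row came back: return/process l_emp here
    EXCEPTION
      WHEN NO_DATA_FOUND THEN
        NULL;   -- no rows: handle as appropriate
      WHEN TOO_MANY_ROWS THEN
        NULL;   -- more than one row: do the additional select instead
    END;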

  • How to display a fixed number of rows in a page when using CL_GUI_ALV_GRID

    Hi experts,
    How can I display a fixed number of rows per page when using CL_GUI_ALV_GRID, let's say 500? My display table may in some cases contain 10,000 rows, and evidently I can't see all of them at once.
    I have a button in my toolbar which triggers this event
    (display 500 records), but I don't have the logic to do this using only methods of CL_GUI_ALV_GRID.
    Can you tell me a standard method of CL_GUI_ALV_GRID which can help me do this? Any hint will be good.
    Until now I have added a column to my structure which represents a flag: a number corresponding to every 500 records (a batch containing 500 records),
    first   500 - flag -> 1
    second  500 - flag -> 2
    etc., but I'm convinced there is an easier way of doing this without damaging my structure.
    Thanks in advance; don't be shy, reply if you have any hints.

    Hi,
    if method SET_FILTER_CRITERIA doesn't help, I think that you must work with 2 internal tables, a counter and a loop for filtering the records to be displayed:
    refresh int_table2.                        "<-- clear the display table first
    case counter.
      when 1.
        loop at int_table1 from 1 to 500.      "<-- your table with all records
          append int_table1 to int_table2.
        endloop.
      when 2.
        loop at int_table1 from 501 to 1000.
          append int_table1 to int_table2.
        endloop.
    "etc, etc.
    endcase.
    call method grid->set_table_for_first_display
      changing
        it_outtab = int_table2.                "<-- instead of your current table int_table1

  • Searching for a pattern in a database table (number of rows in table: more than 3 crore)

    Actually I have a db table having 2 columns (columnA, time). The db table has 70 lakh rows. I have to retrieve all those values which are present at least 2 times within an interval of 10 minutes for a particular value of columnA. E.g. the columnA, time values are
    {a,June-01-2011 10:13:12},{b,June-01-2011 10:14:12},{b,June-01-2011 10:15:12},{c,June-01-2011 10:16:12},{b,June-01-2011 10:17:12},{d,June-01-2011 10:18:12},{d,June-01-2011 10:25:12},{e,June-01-2011 10:26:12},{e,June-01-2011 11:38:12},{f,June-01-2011 10:39:12},{f,June-01-2011 10:43:12},{a,June-01-2011 10:44:12},{f,June-01-2011 10:51:12},{b,June-01-2011 10:51:12},{b,June-01-2011 10:53:12},{c,June-01-2011 10:54:12},{g,June-01-2011 10:55:12},{b,June-01-2011 10:56:12},{b,June-01-2011 10:57:12},{b,June-01-2011 10:58:12}
    Then I have to retrieve following rows in output : {b,June-01-2011 10:14:12},{b,June-01-2011 10:15:12},{b,June-01-2011 10:17:12},{d,June-01-2011 10:18:12},{d,June-01-2011 10:25:12},{f,June-01-2011 10:39:12},{f,June-01-2011 10:43:12},{b,June-01-2011 10:51:12},{b,June-01-2011 10:53:12},{b,June-01-2011 10:56:12},{b,June-01-2011 10:57:12},{b,June-01-2011 10:58:12}
    Is this related to data mining? I have 3 crore rows in the database table, and I have to search for many such patterns in my application. I have spent hours looking for tutorials on Google, but I cannot seem to find anything that holds my hand. To be clear, I am short of ideas on this problem, even though it sounds like a classic. Can Oracle/Java data mining solve the problem?

    First of all, thanks for the reply; I will take care of your suggestion.
    Number of rows in table: more than 3×10^7 (30 million rows), but it can be more than 120 million.
    Actually I have to display all those rows which satisfy the following criterion: a specific value of column A should appear at least 2 times within an interval of 10 minutes.
    For example:
    Eg 1. {b,June-01-2011 10:14:12},{b,June-01-2011 10:15:12},{b,June-01-2011 10:17:12} (this set appears 3 times within an interval of 10 minutes).
    Eg 2. {a,June-01-2011 10:13:12} is present only one time, so I don't want this row in the output.
    Eg 3. {e,June-01-2011 10:26:12},{e,June-01-2011 11:38:12} is present 2 times in the db, but not within an interval of 10 minutes, so I don't want these rows in the output (t1=10:26:12, t2=11:38:12; the difference is more than 10 minutes).
    In my example I specified only 2 columns, but in the actual scenario there will be 8-10 columns in the table. This is only one pattern that I specified; there are many more patterns which I have to search for in the db. After fulfilling this pattern the data has to pass many other patterns, and I want the result to be retrieved very fast.
    Please don't go into the answer of the above example. What I actually want to know is: what should the approach to this type of problem be?
    Is this problem related to Oracle data mining, and how?
    Can data mining algorithms (Minimum Description Length, Naive Bayes, Apriori, Decision Tree, Non-Negative Matrix Factorization, Support Vector Machine, etc.) solve my problem?
    Different types of queries will be fired against this table.
    If I am still not clear then let me know.
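    For what it's worth, this particular "value appears at least twice within 10 minutes" check can usually be expressed with an analytic COUNT over a time-based window rather than data mining. A minimal sketch, assuming a table T with columns A and TS (hypothetical names standing in for columnA and time, with TS a DATE or TIMESTAMP):
    SELECT a, ts
      FROM (SELECT a,
                   ts,
                   COUNT(*) OVER (PARTITION BY a
                                  ORDER BY ts
                                  RANGE BETWEEN INTERVAL '10' MINUTE PRECEDING
                                            AND INTERVAL '10' MINUTE FOLLOWING) AS cnt_in_window
              FROM t)
     WHERE cnt_in_window >= 2   -- the row itself plus at least one more within +/- 10 minutes
     ORDER BY a, ts;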

  • How to count the number of Rows to be Updated before Update takes place..

    Hi all,
    I have a requirement where I have to count the number of rows to be updated before updating them. SQL%ROWCOUNT gives the number of rows updated (after the update takes place). How do I get the count of rows to be updated/inserted/deleted beforehand? I was looking for a simple solution, like SQL%ROWCOUNT above, but I couldn't find any. I can use a function and return the value, which will give me the number of rows to be updated, but is there any simpler logic or count function? Your help is appreciated. Thanks!

    If you really want to do this (I have no clue why you would need it), then you can piggyback on any existing pessimistic locking you may already have in place.
    However, it requires two passes through the records: one to get the count before you update, and a second to actually update them.
    I would really re-think the need for this, though.
    SQL> create table t0304(c number);
    Table created.
    SQL> insert into t0304 select rownum from all_objects where rownum <= 10;
    10 rows created.
    SQL> commit;
    Commit complete.
    SQL> select * from t0304;
             C
             1
             2
             3
             4
             5
             6
             7
             8
             9
            10
    10 rows selected.
    SQL> declare
      2    cursor mycursor is select * from t0304 where mod(c,2) = 0 for update;
      3    i number := 0;
      4  begin
      5    for r in mycursor loop
      6      i := i + 1;
      7    end loop;
      8    dbms_output.put_line(i);
      9    for r in mycursor loop
    10      update t0304 set c = c + 20 where current of mycursor;
    11    end loop;
    12  end;
    13  /
    5
    PL/SQL procedure successfully completed.
    SQL> commit;
    Commit complete.
    SQL> select * from t0304;
             C
             1
            22
             3
            24
             5
            26
             7
            28
             9
            30
    10 rows selected.
    SQL>
    Edited by: Steve Howard on Mar 4, 2011 5:57 PM
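    Another sketch of the same idea: collect the ROWIDs first, so the count is known before the update runs (this reuses the T0304 example above; whether the FOR UPDATE lock is acceptable depends on your situation):
    DECLARE
      TYPE t_rowids IS TABLE OF ROWID;
      l_rowids t_rowids;
    BEGIN
      -- lock and remember the rows that will be updated
      SELECT ROWID BULK COLLECT INTO l_rowids
        FROM t0304
       WHERE MOD(c, 2) = 0
         FOR UPDATE;
      DBMS_OUTPUT.PUT_LINE('Rows to be updated: ' || l_rowids.COUNT);
      -- update exactly those rows
      FORALL i IN 1 .. l_rowids.COUNT
        UPDATE t0304 SET c = c + 20 WHERE ROWID = l_rowids(i);
    END;
    /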

  • Total  number of rows fetched

    hi
    Is there any way to get the total number of rows fetched along with the
    result set in a single query?
    jos

    Hi,
    You are referring to "analytic functions":
    COUNT(*) OVER ()
    With the above you are counting the number of records with respect to the logical partition window you specify in the OVER clause; note that depending on the window you specify, it may not provide exactly the information asked for in your first post.
    - Pavan Kumar N
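    A minimal sketch of that idea, using the standard EMP demo table:
    SELECT ename,
           deptno,
           COUNT(*) OVER () AS total_rows   -- the same overall total repeated on every row
      FROM emp;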

  • More Table Blocks in 11g with same number of rows as in 9i

    I was using SQL Performance Analyzer to compare the performance difference between 9i and 11g databases.
    For this purpose I generated a trace file in the 9i database and used it to build an STS in the 11g database.
    After running two trials and the comparison, the report showed a performance regression (comparing "buffer gets") for the following SQL statement:
    SELECT * FROM EMP;
    There were 14 rows in total in the EMP table in both databases (i.e. 9i and 11g).
    There was no plan change for the above SQL statement in the 11g database, but there was still a performance regression in 11g.
    After querying the dba_tables view for the number of blocks in the EMP table on both sides, I found that the EMP table in the 9i database had 1 block, whereas in 11g the EMP table had 5 blocks (even after using ALTER TABLE emp MOVE;).
    I am unable to understand why the EMP table has more blocks in 11g with the same number of rows as the 9i EMP table.

    user8916506 wrote:
    Below query was executed in 9i database.
    SQL> select extent_management,initial_extent,allocation_type from dba_tablespaces where tablespace_name='SYSTEM';
    EXTENT_MAN INITIAL_EXTENT ALLOCATIO
    LOCAL 65536 SYSTEM
    The results show that the SYSTEM tablespace in the 9i database is locally managed.
    Whereas the results of the query below from the 11g database indicate that the USERS tablespace in 11g is also locally managed.
    SQL> select extent_management,initial_extent,allocation_type from dba_tablespaces where tablespace_name='USERS';
    EXTENT_MAN INITIAL_EXTENT ALLOCATIO
    LOCAL 65536 SYSTEM
    Good to see that you also picked up the allocation_type at the same time.
    So you have shown that the discrepancy between 9i and 11g isn't down to the difference in extent management.
    Are there any other differences between the tablespaces when you compare 9i system with non-system, and 9i system with 11g non-system? (Hint - we have an anomaly with space allocation.)
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    Author: <b><em>Oracle Core</em></b>
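    As a side note, a quick way to see what the data dictionary reports for the segment is to compare the optimizer's block count with the allocated blocks; a minimal sketch (assuming the EMP table is in the current schema and its statistics are current):
    SELECT t.blocks AS blocks_below_hwm,    -- from optimizer statistics
           s.blocks AS allocated_blocks,    -- from the segment's extents
           s.extents,
           s.initial_extent
      FROM user_tables   t
      JOIN user_segments s ON s.segment_name = t.table_name
     WHERE t.table_name = 'EMP';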
