Issue with Multidimensional Nested Table

Hi All,
I am studying multidimensional nested tables and have the following code:
DECLARE
  TYPE table_type1 IS TABLE OF INTEGER;
  TYPE table_type2 IS TABLE OF TABLE_TYPE1;
  table_tab1 table_type1 := table_type1();
  table_tab2 table_type2 := table_type2(table_tab1);
BEGIN
  FOR i IN 1 .. 2
  LOOP
    table_tab2.extend;
    table_tab2(i) := table_type1();
    FOR j IN 1 .. 3
    LOOP
      IF i = 1
      THEN
        table_tab1.extend;
        table_tab1(j) := j;
      ELSE
        table_tab1.extend;
        table_tab1(j) := 4 - j;
      END IF;
      table_tab2.extend;
      table_tab2(i)(j) := table_tab1(j);
      DBMS_OUTPUT.PUT_LINE('table_tab2(' || i || ')(' || j || '): ' ||
                           table_tab2(i)(j));
    END LOOP;
  END LOOP;
exception
  when others then
    dbms_output.put_line(sqlerrm);
END;
This code is working fine as of now. But if I comment out the lines below (table_tab2 is also extended later):
table_tab2.extend; 
    table_tab2(i) := table_type1();
then it gives me the error 'Subscript beyond count'.
I would like to know why I need to extend table_tab2 twice.
Thanks!

This code is working fine as of now
No, it isn't. Make sure you have serveroutput on. If you do, you get this as script output (I am using SQL Developer):
anonymous block completed
...meaning no exception was raised to the client, because you trapped it, but the DBMS Output window gives:
ORA-06533: Subscript beyond count
...with no indication of which line, because you suppressed it (why?). If you drop the worse-than-useless exception handler, you get:
ORA-06533: Subscript beyond count
ORA-06512: at line 22
06533. 00000 -  "Subscript beyond count"
*Cause:    An in-limit subscript was greater than the count of a varray
           or too large for a nested table.
*Action:   Check the program logic and explicitly extend if necessary.
A bit more helpful, yes? And with less code. Here is a revised block that writes the product of the loop variables to the nested table, initialising and extending correctly, with its output:
DECLARE
  TYPE table_type1 IS   TABLE OF INTEGER;
  TYPE table_type2 IS   TABLE OF table_type1;
  table_tab1            table_type1;
  table_tab2            table_type2 := table_type2();
BEGIN
  FOR i IN 1..2 LOOP
    table_tab1 := table_type1();
    FOR j IN 1..3 LOOP
      table_tab1.extend;
      table_tab1(j) := i * j;
    END LOOP;
    table_tab2.extend;
    table_tab2(i) := table_tab1;
  END LOOP;
  FOR i IN 1..table_tab2.COUNT LOOP
    FOR j IN 1..table_tab2(i).COUNT LOOP
      DBMS_OUTPUT.PUT_LINE('Row ' || i || ' element ' || j || ' = ' || table_tab2(i)(j));
    END LOOP;
  END LOOP;
END;
Row 1 element 1 = 1
Row 1 element 2 = 2
Row 1 element 3 = 3
Row 2 element 1 = 2
Row 2 element 2 = 4
Row 2 element 3 = 6
You could alternatively dispense with table_tab1 entirely if you prefer, and write each element directly, as in the sketch below.
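For example, here is a minimal sketch of that variant, reusing the same types. The point is that each EXTEND targets the collection it grows: the outer table once per row, the inner table once per element.
DECLARE
  TYPE table_type1 IS   TABLE OF INTEGER;
  TYPE table_type2 IS   TABLE OF table_type1;
  table_tab2            table_type2 := table_type2();
BEGIN
  FOR i IN 1..2 LOOP
    table_tab2.extend;                 -- one extend per row of the outer table
    table_tab2(i) := table_type1();    -- initialise the inner table for row i
    FOR j IN 1..3 LOOP
      table_tab2(i).extend;            -- extend the inner table, not the outer one
      table_tab2(i)(j) := i * j;
    END LOOP;
  END LOOP;
  FOR i IN 1..table_tab2.COUNT LOOP
    FOR j IN 1..table_tab2(i).COUNT LOOP
      DBMS_OUTPUT.PUT_LINE('Row ' || i || ' element ' || j || ' = ' || table_tab2(i)(j));
    END LOOP;
  END LOOP;
END;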

Similar Messages

  • Issue with Update of Table VARINUM

    Hi,
    I am getting lock-wait issues with updates of table VARINUM. Has anybody faced such an issue?
    I have a lot of jobs running in the background, submitted through a report. What could be the issue?
    Regards,
    Abhishek jolly

    This is quite old, but not answered properly yet, so there you go:
    SAP generates a new job and temporary variant on report RSDBSPJS for each HTTP call, which creates database locks on table VARINUM.
    This causes any heavyweight BSP application to hang and give timeout errors.
    The problem is fixed by applying OSS note 1791958, which is not included in any service pack.

  • Issue with data dictionary - table maintenance generator

    Hi all,
    I have an issue with the data dictionary / table maintenance generator. I entered some records in a custom table (ZBCSECROLETOGRP) and changed the delivery class from C to A. When I create the table maintenance generator, I encounter the following errors:
    1)Field ZBCSECROLETOGRP-PORTALGROUP shortened (new visible length: 000032)
    2)0012 could not be generated
    3)In TCTRL_ZBCSECROLETOGRP field LENGTH has the invalid value 01
    My main goal is to create the table maintenance generator and transport it to the downstream systems.
    Please help.
    ThnX in advance,
    Vishal..

    Hi,
    Regenerate the table maintenance by selecting the "Modified field structure" checkbox => new entry, and then save.
    Also ensure that the new changes do not affect old data because of data type changes. If they do, delete the old records, regenerate the table maintenance, and re-enter the records you deleted.
    Thanks,
    Best regards,
    Prashant

  • CVC creation - Strange issue with Master data table of 9AMATNR

    Hi Experts,
    We have encountered a strange issue with the master data table (/BI0/9APMATNR) of InfoObject 9AMATNR.
    We have a BADI implemented for checking the valid characteristic before creation of the CVC using transaction /SAPAPO/MC62. This BADI runs a SELECT on the material master data table /BI0/9APMATNR and returns no rows, but the material actually exists in the table (checked through SE16).
    Now we go inside the InfoObject 9AMATNR, open the master data tab, go inside the master table /BI0/9APMATNR, and activate it. After activating the table, it is read by the SELECT statement inside the BADI (strange) and the CVC can be created.
    Ideally it should not allow us to activate the SAP standard table /BI0/9APMATNR. I observed that in the technical settings of this table single-record buffering is switched on. (But as far as I know, the buffer gets refreshed every 2 to 4 minutes, not every 2 days or so.)
    Your expert comment is valuable to us. Thanks.
    Best Regards,
    Chandan Dubey

    Hi Chandan,
    Try using a WAIT statement of 5 seconds before your SELECT statement.
    I'm not sure whether this will work. Anyway, check it and let me know the result.
    Regards,
    Siva.

  • Is the Time Capsule wi-fi router compatible with the Nest thermostat? I've read where there are issues with the Nest and Airport Extreme (AE)? Is AE used in the Time Capsule?

    I've read where there are issues with the Nest and Airport Extreme (AE)? Is AE used in the Time Capsule?
    If so, i already have a wi-fi router, so can the Time Capsule simply be used for wireless backup instead of as a router if there is conflict?

    A TC is fundamentally an AE with a SATA chip included to drive the hard disk, and slightly different firmware to include the disk functions, although most of those are also on the AE. So as a router it is 99.999% identical.
    And yes, if you have another router you can easily bridge the TC and use wireless for backups and internet connection. Once you no longer route through the TC, most NAT problems, including port forwarding, should disappear. Mostly this is due to all Apple routers using NAT-PMP instead of UPnP, which is the near-universal standard for opening ports; without UPnP, Apple keeps its routers more exclusive.

  • How to insert into a table with a nested table which refers to another table

    Hello everybody,
    As the title of this thread might not be very understandable, I'm going to explain it :
    In a context of a library, I have an object table about Book, and an object table about Subscriber.
    In the table Subscriber, I have a nested table modeling the Loan made by the subscriber.
    And finally, this nested table refers to the Book table.
    Here the code concerning the creation of theses tables :
    Book :
    create or replace type TBook as object
    (
      number int,
      title varchar2(50)
    );
    Loan :
    create or replace type TLoan as object
    (
      book ref TBook,
      loaning_date date
    );
    create or replace type NTLoan as table of TLoan;
    Subscriber :
    create or replace type TSubscriber as object
    (
      sub_id int,
      name varchar2(25),
      loans NTLoan
    );
    Now, my problem is how to insert into a table of TSubscriber... I tried this query, without any success...
    insert into OSubscriber values
    (1, 'LEVEQUE', NTLoan(
    select TLoan(ref(b), '10/03/85') from OBook b where b.number = 1))
    Of course, there is an occurrence of book in the table OBook with the number attribute 1.
    Oracle returned me this error :
    SQL error : ORA-00936: missing expression
    00936. 00000 - "missing expression"
    Thank you for your help

    1) NUMBER is a reserved word - you can't use it as an identifier:
    SQL> create or replace type TBook as object
      2  (
      3  number int,
      4  title varchar2(50)
      5  );
      6  /
    Warning: Type created with compilation errors.
    SQL> show err
    Errors for TYPE TBOOK:
    LINE/COL ERROR
    0/0      PL/SQL: Compilation unit analysis terminated
    3/1      PLS-00330: invalid use of type name or subtype name
    2) Subquery must be enclosed in parentheses:
    SQL> create table OSubscriber of TSubscriber
      2  nested table loans store as loans
      3  /
    Table created.
    SQL> create table OBook of TBook
      2  /
    Table created.
    SQL> insert
      2    into OBook
      3    values(
      4           1,
      5           'No Title'
      6          )
      7  /
    1 row created.
    SQL> commit
      2  /
    Commit complete.
    SQL> insert into OSubscriber
      2    values(
      3           1,
      4           'LEVEQUE',
      5           NTLoan(
      6                  (select TLoan(ref(b),DATE '1985-10-03') from OBook b where b.num = 1)
      7                 )
      8          )
      9  /
    1 row created.
    SQL> select  *
      2    from  OSubscriber
      3  /
        SUB_ID NAME
    LOANS(BOOK, LOANING_DATE)
             1 LEVEQUE
    NTLOAN(TLOAN(000022020863025C8D48614D708DB5CD98524013DC88599E34C3D34E9B9DBA1418E49F1EB2, '03-OCT-85'))
    SY.

  • Oracle 10g - issue with "DELETE from TABLE WHERE ID in (1,2,3)" (cfqueryparam used)

    Hello, everyone.
    I am having issues with running a DELETE statement on an Oracle 10g database.
    DELETE
    FROM tableA
    WHERE ID in (1,2,3)
    If there is only one ID for the IN clause, it works.  But if more than one ID is supplied, I get an "SQL command not properly ended" error message.  Here is the query as CF:
    DELETE
    FROM TRAINING
    WHERE userID = <cfqueryparam cfsqltype="CF_SQL_VARCHAR" value="#trim(form.userID)#">
         AND TRAINING_ID in <cfqueryparam value="#form.trainingIDs#" cfsqltype="CF_SQL_INTEGER" list="yes">
    Anyone work with Oracle that can help me with this?  I'm an experienced MS-SQL developer; Oracle is new to me.
    Thanks,
    ^_^

    Never mind: a co-worker just told me that I still have to use parentheses around the values for the IN clause, as in the sketch below.
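    For anyone searching later, a sketch of the corrected clause (the same cfqueryparam as above, now wrapped in parentheses like a literal IN list would be):
         AND TRAINING_ID in (<cfqueryparam value="#form.trainingIDs#" cfsqltype="CF_SQL_INTEGER" list="yes">)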

  • When selecting a row from a view with a nested table I want just ONE entry returned

    Does a nested table in a view "EXPLODE" all values ALWAYS no matter the where clause for the nested table?
    I want to select ONE row from a view that has columns defined as TYPE which are PL/SQL TABLES OF other tables.
    When I specify a WHERE clause for my query, it gives me the column "EXPLODED" with the values that match my WHERE clause at the end of the select.
    I don't want the "EXPLODED" nested table to show, just the entry that matches my WHERE clause. Here is some more info:
    My select statement:
    SQL> select * from si_a31_per_vw v, TABLE(v.current_allergies) a where a.alg_seq
    =75;
    AAAHQPAAMAAAAfxAAA N00000 771 223774444 20 GREGG
    CADILLAC 12-MAY-69 M R3
    NON DENOMINATIONAL N STAFF USMC N
    U
    E06 11-JUN-02 H N
    05-JAN-00 Y Y
    USS SPAWAR
    353535 USS SPAWAR
    SI_ADDRESS_TYPE(NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL)
    SI_ADDRESS_TAB()
    SI_ALLERGY_TAB(SI_ALLERGY_TYPE(69, 'PENICILLIN', '11-JUN-02', NULL), SI_ALLERGY_TYPE(74, 'SHELLFISH', '12-JUN-02', NULL), SI_ALLERGY_TYPE(68, 'PEANUTS', '13-JUN-02', NULL), SI_ALLERGY_TYPE(75, 'STRAWBERRIES', '13-JUN-02', NULL))
    SI_ALLERGY_TAB()
    75 STRAWBERRIES 13-JUN-02
    ******* Notice the allergy entry of 75, STRAWBERRIES, 13-JUN-02 at the end. This is what I want, not all the other exploded data.
    SQL> desc si_a31_per_vw
    Name Null? Type
    ........ omitted unneeded previous column descriptions because of the MetaLink character limit, but the view is bigger than this .......
    DEPT_NAME VARCHAR2(20)
    DIV_NAME VARCHAR2(20)
    ADDRESSES SI_ADDRESS_TAB
    CURRENT_ALLERGIES SI_ALLERGY_TAB
    DELETED_ALLERGIES SI_ALLERGY_TAB
    SQL> desc si_allergy_tab
    si_allergy_tab TABLE OF SI_ALLERGY_TYPE
    Name Null? Type
    ALG_SEQ NUMBER
    ALG_NAME VARCHAR2(50)
    START_DATE DATE
    STOP_DATE DATE
    SQL> desc si_allergy_type
    Name Null? Type
    ALG_SEQ NUMBER
    ALG_NAME VARCHAR2(50)
    START_DATE DATE
    STOP_DATE DATE

    Can you explain what you mean by the following?
    "PL/SQL tables (a.k.a. index-by tables) cannot be used as the basis for columns and/or attributes"
    There are three kinds of collections:
    (NTB) Nested Tables
    (VAR) Varrying Arrays
    (IBT) Index-by Tables (the collection formerly known as "PL/SQL tables")
    NTB (and VAR) can be defined as persistent user defined data types, and can be used in table DDL (columns) and other user defined type specifications (attributes).
    SQL> CREATE TYPE my_ntb AS TABLE OF INTEGER;
    SQL> CREATE TABLE my_table ( id INTEGER PRIMARY KEY, ints my_ntb );
    SQL> CREATE TYPE my_object AS OBJECT ( id INTEGER, ints my_ntb );
    /
    IBT are declared inside stored procedures only and have slightly different syntax from NTB. Only variables in stored procedures can be based on IBT declarations.
    CREATE PROCEDURE my_proc IS
       TYPE my_ibt IS TABLE OF INTEGER INDEX BY BINARY_INTEGER;  -- now you see why they are called Index-by Tables
       my_ibt_var my_ibt;
    BEGIN
       NULL;
    END;
    That sums up the significant differences in how they are declared and where they can be referenced.
    How are they the same?
    NTB and VAR can also be (non-persistently) declared in stored procedures like IBTs.
    Why would you then ever use IBTs?
    IBTs are significantly easier to work with, since you don't have to instantiate or extend them as you do with NTB and VAR (see the sketch below), or
    Many other highly valuable PL/SQL programs make use of them, so you have to keep your code integrated/consistent.
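    A minimal sketch of that first point (hypothetical names): an IBT element can be assigned directly, even sparsely, while a nested table must be initialised and extended first.
    DECLARE
       TYPE my_ibt IS TABLE OF INTEGER INDEX BY BINARY_INTEGER;
       TYPE my_ntb IS TABLE OF INTEGER;
       ibt_var my_ibt;
       ntb_var my_ntb := my_ntb();  -- NTB must be initialised
    BEGIN
       ibt_var(100) := 1;           -- IBT: direct, sparse assignment, no EXTEND
       ntb_var.EXTEND;              -- NTB: make room first
       ntb_var(1) := 1;
    END;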
    There's a lot more to be said, but I think this answers the question posed by Sri.
    Michael
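    As for the original question: the nested table column "explodes" because SELECT * returns every column of the view, including the collection-typed columns, alongside the unnested rows. Listing explicit columns avoids that; a sketch using the view and attribute names from the question:
    select v.dept_name, v.div_name, a.alg_seq, a.alg_name, a.start_date
    from   si_a31_per_vw v, TABLE(v.current_allergies) a
    where  a.alg_seq = 75;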

  • Issue with Period Control Table after copying Essbase adapter

    Hi Experts,
    I'm working on version 11.1.1.3 and have copied the adapters (Essbase, Pull + EPRi) in the workbench so I can add an additional target for the FDM application. However, I have an issue with the import process; it returns an error with the Time & Periods (I guess it's something to do with the Periods category).
    I have reimported the Period Control Table and updated the new application's Target Period & Year (whilst changing the system code in the Application settings to the new adapter) and still receive the same error message.
    Any direction or thoughts would be welcome.
    Thanks
    Mark

    The time periods do not copy. You need to maintain them through the UI or upload them from Excel. There is a KM article on this if you need additional detail.

  • Issue with updating partitioned table

    Hi,
    Has anyone seen this bug with updating partitioned tables?
    It's very esoteric: it occurs when we update a partitioned table using a join to a temp table (not a non-temp table), the join has multiple join conditions, we're updating the partition column, that column isn't the first column in the primary key, and the table contains a bit field. If we change just one of these features, the bug disappears.
    We've tested this on 15.5 and 15.7 SP122 and the error occurs in both.
    Here's the test case - it performs the same operation on a partitioned table and a non-partitioned table, but the partitioned table shows an error of "Attempt to insert duplicate key row in object 'partitioned' with unique index 'pk'".
    I'd be interested to hear if anyone has seen this and has a version of Sybase without the issue.
    Unfortunately, when it happens on a replicated table, it takes down the rep server.
    CREATE TABLE #table1
        (   PK          char(8) null,
            FileDate        date,
            changed         bit
        )
    CREATE TABLE partitioned  (
      PK         char(8) NOT NULL,
      ValidFrom     date DEFAULT current_date() NOT NULL,
      ValidTo       date DEFAULT '31-Dec-9999' NOT NULL
      )
    LOCK DATAROWS
      PARTITION BY RANGE (ValidTo)
      ( p2014 VALUES <= ('20141231') ON [default],
      p2015 VALUES <= ('20151231') ON [default],
      pMAX VALUES <= (MAX) ON [default]
      )
    CREATE UNIQUE CLUSTERED INDEX pk
      ON partitioned(PK, ValidFrom, ValidTo)
      LOCAL INDEX
    CREATE TABLE unpartitioned  (
      PK         char(8) NOT NULL,
      ValidFrom     date DEFAULT current_date() NOT NULL,
      ValidTo       date DEFAULT '31-Dec-9999' NOT NULL,
      )
    LOCK DATAROWS
    CREATE UNIQUE CLUSTERED INDEX pk
      ON unpartitioned(PK, ValidFrom, ValidTo)
    insert partitioned
    select "ET00jPzh", "Jan  7 2015", "Dec 31 9999"
    insert unpartitioned
    select "ET00jPzh", "Jan  7 2015", "Dec 31 9999"
    insert #table1
    select "ET00jPzh", "Jan 15 2015", 1
    union all
    select "ET00jPzh", "Jan 15 2015", 1
    go
    update partitioned
    set    ValidTo = dateadd(dd,-1,FileDate)
    from   #table1 t
    inner  join partitioned p on (p.PK = t.PK)
    where  p.ValidTo = '99991231'
    and    t.changed = 1
    go
    update unpartitioned
    set    ValidTo = dateadd(dd,-1,FileDate)
    from   #table1 t
    inner  join unpartitioned u on (u.PK = t.PK)
    where  u.ValidTo = '99991231'
    and    t.changed = 1
    go
    drop table #table1
    go
    drop table partitioned
    drop table unpartitioned
    go

    wrt to replication - it is a bit unclear, as not enough information has been stated to point out what happened. I am also not sure that your DBAs are accurately telling you what happened - they may have made the problem worse by not knowing themselves what to do; e.g. 'losing' the log points to the fact that someone doesn't know what they should do. You can *always* disable the replication secondary truncation point and resync a standby system, so claims about 'losing' the log are a bit strange to be making.
    wrt to ASE versions, I suspect that if there are any differences, they may have to do with endianness and not the version of ASE itself. There may be other factors, but I would suggest the best thing would be to open a separate message/case on it.
    Adaptive Server Enterprise/15.7/EBF 23010 SMP SP130 /P/X64/Windows Server/ase157sp13x/3819/64-bit/OPT/Fri Aug 22 22:28:21 2014:
    -- testing with tinyint
    1> use demo_db
    1>
    2> CREATE TABLE #table1
    3>     (   PK          char(8) null,
    4>         FileDate        date,
    5> --        changed         bit
    6>  changed tinyint
    7>     )
    8>
    9> CREATE TABLE partitioned  (
    10>   PK         char(8) NOT NULL,
    11>   ValidFrom     date DEFAULT current_date() NOT NULL,
    12>   ValidTo       date DEFAULT '31-Dec-9999' NOT NULL
    13>   )
    14>
    15> LOCK DATAROWS
    16>   PARTITION BY RANGE (ValidTo)
    17>   ( p2014 VALUES <= ('20141231') ON [default],
    18>   p2015 VALUES <= ('20151231') ON [default],
    19>   pMAX VALUES <= (MAX) ON [default]
    20>         )
    21>
    22> CREATE UNIQUE CLUSTERED INDEX pk
    23>   ON partitioned(PK, ValidFrom, ValidTo)
    24>   LOCAL INDEX
    25>
    26> CREATE TABLE unpartitioned  (
    27>   PK         char(8) NOT NULL,
    28>   ValidFrom     date DEFAULT current_date() NOT NULL,
    29>   ValidTo       date DEFAULT '31-Dec-9999' NOT NULL,
    30>   )
    31> LOCK DATAROWS
    32>
    33> CREATE UNIQUE CLUSTERED INDEX pk
    34>   ON unpartitioned(PK, ValidFrom, ValidTo)
    35>
    36> insert partitioned
    37> select "ET00jPzh", "Jan  7 2015", "Dec 31 9999"
    38>
    39> insert unpartitioned
    40> select "ET00jPzh", "Jan  7 2015", "Dec 31 9999"
    41>
    42> insert #table1
    43> select "ET00jPzh", "Jan 15 2015", 1
    44> union all
    45> select "ET00jPzh", "Jan 15 2015", 1
    (1 row affected)
    (1 row affected)
    (2 rows affected)
    1>
    2> update partitioned
    3> set    ValidTo = dateadd(dd,-1,FileDate)
    4> from   #table1 t
    5> inner  join partitioned p on (p.PK = t.PK)
    6> where  p.ValidTo = '99991231'
    7> and    t.changed = 1
    Msg 2601, Level 14, State 6:
    Server 'PHILLY_ASE', Line 2:
    Attempt to insert duplicate key row in object 'partitioned' with unique index 'pk'
    Command has been aborted.
    (0 rows affected)
    1>
    2> update unpartitioned
    3> set    ValidTo = dateadd(dd,-1,FileDate)
    4> from   #table1 t
    5> inner  join unpartitioned u on (u.PK = t.PK)
    6> where  u.ValidTo = '99991231'
    7> and    t.changed = 1
    (1 row affected)
    1>
    2> drop table #table1
    1>
    2> drop table partitioned
    3> drop table unpartitioned
    -- duplicating with 'int'
    1> use demo_db
    1>
    2> CREATE TABLE #table1
    3>     (   PK          char(8) null,
    4>         FileDate        date,
    5> --        changed         bit
    6>  changed int
    7>     )
    8>
    9> CREATE TABLE partitioned  (
    10>   PK         char(8) NOT NULL,
    11>   ValidFrom     date DEFAULT current_date() NOT NULL,
    12>   ValidTo       date DEFAULT '31-Dec-9999' NOT NULL
    13>   )
    14>
    15> LOCK DATAROWS
    16>   PARTITION BY RANGE (ValidTo)
    17>   ( p2014 VALUES <= ('20141231') ON [default],
    18>   p2015 VALUES <= ('20151231') ON [default],
    19>   pMAX VALUES <= (MAX) ON [default]
    20>         )
    21>
    22> CREATE UNIQUE CLUSTERED INDEX pk
    23>   ON partitioned(PK, ValidFrom, ValidTo)
    24>   LOCAL INDEX
    25>
    26> CREATE TABLE unpartitioned  (
    27>   PK         char(8) NOT NULL,
    28>   ValidFrom     date DEFAULT current_date() NOT NULL,
    29>   ValidTo       date DEFAULT '31-Dec-9999' NOT NULL,
    30>   )
    31> LOCK DATAROWS
    32>
    33> CREATE UNIQUE CLUSTERED INDEX pk
    34>   ON unpartitioned(PK, ValidFrom, ValidTo)
    35>
    36> insert partitioned
    37> select "ET00jPzh", "Jan  7 2015", "Dec 31 9999"
    38>
    39> insert unpartitioned
    40> select "ET00jPzh", "Jan  7 2015", "Dec 31 9999"
    41>
    42> insert #table1
    43> select "ET00jPzh", "Jan 15 2015", 1
    44> union all
    45> select "ET00jPzh", "Jan 15 2015", 1
    (1 row affected)
    (1 row affected)
    (2 rows affected)
    1>
    2> update partitioned
    3> set    ValidTo = dateadd(dd,-1,FileDate)
    4> from   #table1 t
    5> inner  join partitioned p on (p.PK = t.PK)
    6> where  p.ValidTo = '99991231'
    7> and    t.changed = 1
    Msg 2601, Level 14, State 6:
    Server 'PHILLY_ASE', Line 2:
    Attempt to insert duplicate key row in object 'partitioned' with unique index 'pk'
    Command has been aborted.
    (0 rows affected)
    1>
    2> update unpartitioned
    3> set    ValidTo = dateadd(dd,-1,FileDate)
    4> from   #table1 t
    5> inner  join unpartitioned u on (u.PK = t.PK)
    6> where  u.ValidTo = '99991231'
    7> and    t.changed = 1
    (1 row affected)
    1>
    2> drop table #table1
    1>
    2> drop table partitioned
    3> drop table unpartitioned

  • Issue with Data Load Table

    Hi All,
    I am facing an issue with APEX 4.2.4, using the Data Load Table feature: in a lookup I used the Where Clause option, but the where clause does not seem to be working. Please help me with this.

    Hi all,
    It looks like this where clause does not filter out the 'N' data. Please help me solve this.

  • Scalability issue with global temporary table.

    Hi All,
    Does CREATE GLOBAL TEMPORARY TABLE lock the data dictionary like CREATE TABLE does? If yes, wouldn't that be a scalability issue in a multi-user environment?
    Thanks and Regards,
    Rudra

    Billy Verreynne wrote:
    acadet wrote:
    am I correct in interpreting your response that we should be using GTT's in favour of bulk operations and collections and in memory operations?
    No. I said collections cannot scale. This means that, because collections reside in expensive PGA memory, you cannot stuff large data volumes into them. Thus they do not make an ideal storage bin for temporary data (e.g. data loaded from file or a web service). GTTs otoh do not suffer from the same restrictions, can be indexed, and offer vastly better scalability, and so on.
    Multiple passes are often needed using such a data structure. Or filtering to find specific data. As a GTT is a SQL native, it offers a lot more flexibility and performance in this regard.
    And this makes sense - as where do we put our persistent data? Also in tables, but ones of a persistent and not temporary kind like a GTT.
    Collections are pretty useful - but limited in size and capability.
    Rudra states:
    I want to pull out a few metrics from different tables and process it
    If this can't be achieved in a SQL statement, unless Rudra is a master of understatement, then I would see GTTs as a waste of IO and programming effort. I agree.
    My comments however were about choices for a temporary data storage bin in PL/SQL.
    I agree with your general comments regarding temporary storage bins in Oracle, but to say that collections don't scale is putting too narrow a definition on scaling. True, collections can be resource-intensive in terms of memory and CPU requirements, but their persistence will generally be much shorter than that of other types of temporary storage. Given the right characteristics collections will scale, and given the wrong characteristics GTTs won't scale.
    As you say, it is all about choice. Getting back to the theme of this thread though, the original poster should be made aware that well-designed and well-coded applications are most likely to scale. Creating tables on the fly is generally considered bad practice, and letting the database do what it does best - join tables in queries at the SQL level - is considered good practice. The rest lies somewhere in between, and knowing when to do which is why we get paid the big bucks (not). ;-)
    Regards
    Andre
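    For reference, a minimal sketch of the kind of GTT discussed above (hypothetical names). Note the GTT is created once, at install time, not on the fly per session; each session then sees only its own rows:
    CREATE GLOBAL TEMPORARY TABLE temp_metrics (
      metric_id   INTEGER,
      metric_val  NUMBER
    ) ON COMMIT DELETE ROWS;
    CREATE INDEX temp_metrics_ix ON temp_metrics (metric_id);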

  • Export Issues with Compressed Partition Tables?

    We recently partitioned and compressed some large tables. It appears, though I'm not sure yet, that this is causing the export to run extremely slowly. The database is at 10.2.0.2 and we are using the exp utility, not Data Pump. Does anyone know of any known issues with using exp to export compressed, partitioned tables?

    Can you give more details of the table structure (with dbms_metadata if possible), and how you are taking the export, please?
    Did you try taking a SQL trace of the export process to see what is going on behind the scenes? This is an introduction if you need one:
    http://tonguc.wordpress.com/2006/12/30/introduction-to-oracle-trace-utulity-and-understanding-the-fundamental-performance-equation/

  • Issue with the shadow table

    I am in the process of understanding the shadow table and the error log table.
    I have a table created with a shadow table in place.
    1. Ex: table emp(empno, ename), emp_err(...., empno, ename)
    It contains the values (1, 'A').
    Now I place a data rule on empno, unique not null ..., and configure the operator as MOVE TO ERROR.
    When I try to insert a row with (1, 'A'), it moves not only the new row to be inserted but also the existing row already in the table emp, i.e. two rows get populated in the error table emp_err.
    2. I have a scenario where I want to update a row in the fact table from an incoming row.
    If the incoming row has no match in the fact table, how do I put it into the error table?
    Any ideas or tricks appreciated.
    Thanks
    Narayana.

    Hi,
      Release the internal tables' memory by using the FREE statement after processing the internal table.
       Also, you can ask your Basis person to increase the paging area.
    Reward if helpful.
    Regards,
    Umasankar.

  • Issue with ADF Tree Table

    Hi,
    I have a requirement where I need to display a tree table. Here is the initial implementation:
    I have created read-only views: ManagersVO > PoolsVO > MachinesVO, where 'MachinesVO' is the destination view, and created view links between ManagersVO & PoolsVO using ManagerId and between PoolsVO & MachinesVO using PoolId.
    Using this implementation, I successfully created the tree table on the UI. Now we have an enhancement:
    i.e., MachinesVO should return the list of machines according to the user who logs in. We have 4 different roles: 'Super Admin', 'Sys Admin', 'App Admin', and 'End User'. The default query for MachinesVO is for 'Super Admin'; the query for the other user roles differs in everything except the SELECT list.
    The requirement is to dynamically change the query of MachinesVO based on the user who logs in and display the tree table accordingly. To implement this, I tried the setQuery() operation on 'MachinesVO', which results in the following error:
    JBO-26016: InvalidOperException
    Cause: You cannot set customer query (calling setQuery()) on a view object if it is the detail view object in a master detail view link.
    Action: Do not call setQuery() if the view object is a detail.
    Can anyone suggest the best way to implement this?
    Thanks & Regards,
    Kiran

    Hi Navaneetha Krishnan,
    Here is how I implemented it, based on your comments. As my tree table is based on 3 different VOs, I created the following method on the middle view (i.e., PoolsVO).
    1. Tree model hierarchy:
    ManagersVO > PoolsVO > MachinesVO
    I actually want to filter the data at the Machines level. Hence I wrote a method in PoolsVOImpl and exposed it in the PoolsVO client interface. Here is the code I placed in PoolsVOImpl:
    public class PoolsVOImpl extends ViewObjectImpl implements PoolsVO {
      /**
       * This is the default constructor (do not remove).
       */
      public PoolsVOImpl() {
      }

      public void filterMachinesDataByUserRole(String userRole, String vzId) {
        Row row = getCurrentRow();
        String query = "";
        if (row != null) {
          RowSet rowSet = (RowSet) row.getAttribute("MachinesVO");
          if (rowSet != null) {
            MachinesVOImpl machinesVOImpl = (MachinesVOImpl) rowSet.getViewObject();
            if (userRole.equalsIgnoreCase("SYS ADMIN")) {
              machinesVOImpl.setWhereClause(query);  // query related to SYS ADMIN
            }
            // Similarly for other user roles.
            executeQuery();
          }
        }
      }
    }
    And this piece of code needs to be executed before the jsff (which has the tree table) renders. Hence I created this methodAction as a default activity in the respective task flow where the jsff is placed. Once this method executes, the page should render the machines specific to the user.
    Here is the issue: the getCurrentRow() call always returns NULL.
    Please correct me if I'm doing something wrong. I also tried the above approach by creating the method at the ManagersVOImpl level. Still the same issue.
    Thanks & Regards,
    Kiran
