A way to set table type in MySQL?

Is there a way to set the table type that Kodo uses in MySQL? I use MySQL 4.0
and I want to use InnoDB tables for my tables because they implement and
enforce true foreign keys.
TIA
Robert
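
As a point of reference (not from the thread): in MySQL 4.0 the storage engine is chosen per table in the DDL, so whatever Kodo generates would need to end with a TYPE clause roughly like the sketch below. Table and column names here are made up for illustration; newer MySQL versions use ENGINE instead of TYPE, and only InnoDB enforces the foreign key:

    CREATE TABLE parent (
        id INT NOT NULL PRIMARY KEY
    ) TYPE=InnoDB;

    CREATE TABLE child (
        id        INT NOT NULL PRIMARY KEY,
        parent_id INT,
        INDEX (parent_id),
        FOREIGN KEY (parent_id) REFERENCES parent(id)
    ) TYPE=InnoDB;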

Hopefully we'll get it into 2.5 final.
Anyone else with some other options applicable?
On Tue, 27 May 2003 09:08:57 +0200, Robert Simmons wrote:
Hmm, would be nice to get on the features request list.
-- Robert
"Stephen kim" <[email protected]> wrote in message
news:[email protected]..
Unfortunately, that is not one of the exposed configuration options... One
way to work around the problem is to stop Eclipse, edit the
.metadata/.plugins/com.sol.../pref_store.ini and treat it as a .properties
file.
On Tue, 27 May 2003 07:13:33 +0200, Robert Simmons wrote:
How would this work with the Eclipse plugin?
-- Robert
"Patrick Linskey" <[email protected]> wrote in message
news:[email protected]..
Yes; whitespace.
-Patrick
On Sun, 09 Feb 2003 20:10:02 -0500, Robert Simmons wrote:
Would I just put TableType=innodb in the dictionary properties of the
Kodo NetBeans module? If so, then what would I use as a separator if I
wanted to specify other properties?
-- Robert
--
Patrick Linskey
SolarMetric Inc.
Steve Kim
[email protected]
SolarMetric Inc.
http://www.solarmetric.com

Similar Messages

  • Create Value Set of Table Type

    Hi,
    Could you provide me with a sample program for creating a value set of table type?
    Thanks
    Tim.

    Hi,
    I was able to create the value set (table type) from the System Administrator responsibility and attached the value set to the Inventory Kanban DFF form.
    I'm having an issue here. The value set is created on a custom table which has data as below:
    EX: the custom table has Item_Num: 101, Lot_Num: Lot_1 and Item_Num: 102, Lot_Num: Lot_2.
    When I go into the DFF form in Inventory for Item_Num 101, it should only allow Lot_1 to be entered, but it is accepting Lot_2 as well.
    I must be missing some WHERE clause in the value set setup.
    Any suggestions?
    R12 version.
    Thanks
    Tim.
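
    Not from the thread, but a rough sketch of the kind of WHERE clause that is usually missing in this setup: in an EBS table value set you can reference the value chosen in a prior segment through the :$FLEX$ bind, using the value set name of that prior segment. The names below (XX_ITEM_VS, the lot table and its columns) are hypothetical:

        WHERE lot.item_num = :$FLEX$.XX_ITEM_VS

    With a clause like that in the Lot value set, the DFF would only list Lot_1 for Item_Num 101 and Lot_2 for Item_Num 102.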

  • BEx Web - Open/Save dialog - Set Default Type

    Hi All,
    In the BEx Open/Save dialog (accessed through the New Analysis button), "Views" is shown as the default in the Type drop-down. Is there a way to set this type to Query by default?
    Thanks & Regards,
    Sree

    The option is available in the XML code, where the type of data provider can be selected as "QUERY", "VIEW" or "INFOPROVIDER".

  • How to attach table type value set to LOV of oaf page

    Hi everyone,
    There is a value set created on the front end, say xx_ap_valueset, which is of table type and is created on a table xx_ap_table with a WHERE clause and an ORDER BY clause.
    Is there a way we can attach this value set to an LOV on an OAF page? Or do I just need to create a VO and replicate the SELECT statement used in the value set?
    Please help!
    Thanks
    Sunny

    Sunny,
    You cannot attach the value set to the LOV directly. You need to create a VO for that (see the sketch below).
    Regards,
    Gyan
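
    A minimal sketch of Gyan's suggestion: the VO query simply replicates the SELECT behind the value set, including its WHERE and ORDER BY clauses. The column names and VO name below are assumptions, since the actual value set definition is not shown:

        SELECT lookup_code,
               description
        FROM   xx_ap_table
        WHERE  enabled_flag = 'Y'      -- whatever WHERE clause the value set uses
        ORDER  BY description;         -- and the same ORDER BY

    The query goes into a VO (e.g. a hypothetical XxApLovVO), which is then used as the data source of the LOV region on the OAF page.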

  • Creation of a Table Type value set with 'ALL' as one of the  value

    Gurus,
    My requirement is to create a table type value set which would show the LOV values in a parameter of a concurrent program.
    So far we have three such values to choose from: 'Frozen', 'Pending' and 'Testing'. I achieved that.
    My question is:
    if the user wants to choose ALL three values, how shall I accommodate that in this table type value set?
    I.e. give a fourth option, ALL, which would effectively select all three values 'Frozen', 'Pending' and 'Testing'.
    thanks in advance.
    -sDJ

    You can't have a UNION in the value set.
    Try creating a view that does the UNION with the 'ALL' row, and base the value set on the view.
    Check the following links.
    Table Value Set.
    ORA-00907 Missing Right Parenthesis in Value Set
    By
    Vamsi
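
    A sketch of the view approach Vamsi describes, assuming a hypothetical source table xx_status_tbl with a status_code column; the value set is then defined on the view instead of the base table:

        CREATE OR REPLACE VIEW xx_status_vs_v AS
        SELECT status_code FROM xx_status_tbl      -- 'Frozen', 'Pending', 'Testing'
        UNION ALL
        SELECT 'ALL' FROM dual;

    The concurrent program then treats the parameter value 'ALL' as meaning all three statuses (for example, WHERE status = :p_status OR :p_status = 'ALL').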

  • Set Aggregation type of Count Distinct to use correct table aggregation in

    Hi there,
    Currently I use OBIEE 10.1.3.4.1, and there is a case where a fact table consists of 2 logical table sources, a detail and an aggregate table, with some measures using Count Distinct as the aggregation type. The problem is that every time I browse the measure with no dimension at all, it always uses the detail table, not the aggregate one.
    I would really appreciate any suggestion.
    thanks a lot

    Hi,
    I don't think it's the same case as mine. Let's say I have 2 tables: detail and aggregate.
    The detail table consists of 4 fields:
    *) Period
    *) Market
    *) Region
    *) Measures: Customer ID, Sales
    The aggregate table consists of 3 fields:
    *) Period
    *) Region
    *) Measures: Customer ID, Sales
    In the measures I set the aggregation type for each field:
    *) Sales >> set as Sum
    *) Customer ID >> copied as "Number of Customer" and set as Count Distinct
    In each LTS's content I set the level of aggregation using the "Get Levels" feature.
    Then I try to browse via Presentation and run the queries below:
    a) choosing only the single measure field Sales: the session shows that the value is taken from the aggregate table, just as I expected.
    b) choosing Period and Sales: the session shows that the values are taken from the aggregate table, still just as I expected.
    c) choosing Period, Sales and Market: the session shows that the values are taken from the detail table, just as I expected.
    d) choosing only the single measure field "Number of Customer": the session shows that the value is taken from the detail table; this is NOT as I expected. It is supposed to take the value from the aggregate table.
    e) choosing Period and "Number of Customer": the session shows that the value is taken from the detail table; this is also NOT as I expected. It is supposed to take the value from the aggregate table.
    I've tried to override the aggregation, but I am still confused about how to apply it to the measure "Number of Customer", and it did not work at all.
    Any idea?
    thanks a lot

  • Sporadically getting error "string or binary data would be truncated" in SQL server 2008 while inserting in a Table Type object

    I am facing a strange SQL exception.
    The code flow is like this:
    .NET 4.0 --> Entity Framework --> SQL 2008 (StoredProc --> Function {Exception})
    In the SQL table-valued function, I am selecting a column (nvarchar(50)) from an existing table and (after some filtration using inner joins and where clauses) inserting the values into a table type object having a column (nvarchar(50)).
    This flow was working fine in SQL 2008, but now all of a sudden the insert into @TableType is throwing a "string or binary data would be truncated" exception.
    Insert Into @ObjTableType
    Select * From dbo.Table
    The max length of data in the source column is 24, but even then the insert statement into the nvarchar temp column is failing.
    Moreover, the same issue came up a few weeks back and I was unable to find the root cause, but back then it started working properly again after a few hours
    (issue reported at 10 AM EST and automatically resolved post 8 PM EST). No refresh activity was performed on the database.
    This time, however, the issue is still occurring (even after 2 days), but it does not occur in every scenario. The data set for which the error is thrown is valid, and every value in the function is fetched from existing tables.
    Due to its sporadic nature I am unable to recreate it now :( , and I am still unable to determine why it started occurring or how I can prevent such things from happening again.
    It is difficult to even explain the weirdness of this bug, but any help or guidance in finding the root cause will be very helpful.
    I also tried using nvarchar(max) in the table type object, but it didn't work.
    Here is code similar to the function which I am using:
    BEGIN TRAN
    DECLARE @PID int = 483
    DECLARE @retExcludables TABLE
    (
        PID     int              NOT NULL,
        ENumber nvarchar(50)     NOT NULL,
        CNumber nvarchar(50)     NOT NULL,
        AId     uniqueidentifier NOT NULL
    )
    declare @PSCount int;
    select @PSCount = count('x')
    from tblProjSur ps
    where ps.PID = @PID;
    if (@PSCount = 0)
    begin
        return;
    end;
    declare @ExcludableTempValue table (
        PID      int,
        ENumber  nvarchar(max),
        CNumber  nvarchar(max),
        AId      uniqueidentifier,
        SIds     int,
        SCSymb   nvarchar(10),
        SurCSymb nvarchar(10)
    );
    with SurCSymbs as (
        select ps.PID,
               ps.SIds,
               csl.CSymb
        from tblProjSur ps
            right outer join tblProjSurCSymb pscs on pscs.tblProjSurId = ps.tblProjSurId
            inner join CSymbLookup csl on csl.CSymbId = pscs.CSymbId
        where ps.PID = @PID
    ),
    AssignedValues as (
        select psr.PID,
               psr.ENumber,
               psr.CNumber,
               psmd.MetaDataValue as ClaimSymbol,
               psau.UserId as AId,
               psus.SIds
        from PSRow psr
            inner join PSMetadata psmd on psmd.PSRowId = psr.SampleRowId
            inner join MetaDataLookup mdl on mdl.MetaDataId = psmd.MetaDataId
            inner join PSAUser psau on psau.PSRowId = psr.SampleRowId
            inner join PSUserSur psus on psus.SampleAssignedUserId = psau.ProjectSampleUserId
        where psr.PID = @PID
          and mdl.MetaDataCommonName = 'CorrectValue'
          and psus.SIds in (select distinct SIds from SurCSymbs)
    ),
    FullDetails as (
        select asurv.PID,
               Convert(NVarchar(50), asurv.ENumber) as ENumber,
               Convert(NVarchar(50), asurv.CNumber) as CNumber,
               asurv.AId,
               asurv.SIds,
               asurv.CSymb as SCSymb,
               scs.CSymb as SurCSymb
        from AssignedValues asurv
            left outer join SurCSymbs scs
                on  scs.PID   = asurv.PID
                and scs.SIds  = asurv.SIds
                and scs.CSymb = asurv.CSymb
    )
    --Error is thrown at this statement
    insert into @ExcludableTempValue
    select *
    from FullDetails;
    with SurHavingSym as (
        select distinct est.PID,
                        est.ENumber,
                        est.CNumber,
                        est.AId
        from @ExcludableTempValue est
        where est.SurCSymb is not null
    )
    delete @ExcludableTempValue
    from @ExcludableTempValue est
        inner join SurHavingSym shs
            on  shs.PID     = est.PID
            and shs.ENumber = est.ENumber
            and shs.CNumber = est.CNumber
            and shs.AId     = est.AId;
    insert @retExcludables(PID, ENumber, CNumber, AId)
    select distinct est.PID,
           Convert(nvarchar(50), est.ENumber) ENumber,
           Convert(nvarchar(50), est.CNumber) CNumber,
           est.AId
    from @ExcludableTempValue est
    RETURN
    ROLLBACK TRAN
    I have tried converting the columns and have also validated the input data set for any white spaces or special characters.
    For the same input data it was working fine till yesterday, but suddenly it started throwing the exception.
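
    Not part of the original post, but a quick diagnostic that is often useful before digging into the function itself: check the real character and byte lengths in the source column (the table and column names below are stand-ins, matching the placeholders in the post):

        -- LEN ignores trailing spaces, DATALENGTH counts bytes (2 per character for nvarchar)
        SELECT MAX(LEN(SourceColumn))        AS max_char_len,
               MAX(DATALENGTH(SourceColumn)) AS max_byte_len
        FROM   dbo.[Table];

    If max_char_len really is 24, the truncation is more likely coming from one of the joins mismatching or widening rows than from the source data itself, which is what the reply below suggests.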

    Remember, the CTE isn't executing the SQL exactly in the order you read it as a human (don't get too picky about that statement, it's at least partly true enough to say it's partly true), nor are the line numbers or error messages easy to read: a mismatch
    in any of the joins along the way leading up to your insert could be the cause too. I would suggest posting the table definition/DDL for:
    - PSMetadata, in particular PSRowID, but just post it all
    - tblProjectSur, in particular columns CSymbID and TblProjSurSurID
    - cSymbLookup, in particular column CSymbID
    - PSRow, in particular columns SampleRowID, PID
    - PSAuser and PSUserSur, in particular all the UserID and RowID columns
    - SurCSymbs, in particular column SIDs
    Also, run a diagnostic query along these lines, repeated for each of your tables and each of the columns used in joins leading up to your insert:
    Select count(asurv.sid) as countAll
    , count(case when asurv.sid between 0 and 9999999999 then 1 else null end) as ctIsaNumber
    from SurvCsymb
    The sporadic nature would imply that the optimizer usually chooses one path to the data, but sometimes others, and the fact that it occurs during the insert could be irrelevant: any of the preceding joins could be the cause, not the data targeted to be inserted.

  • What is the maximum rows allowed in PLSQL array table type?

    Hi,
    I have a procedure and it contains a cursor which will fetch more than 500 records. And I have 5 output parameters to store the values coming from the cursor. I don't want to store them in a custom table; I want to save them in a table type array or something like that. Now I want to know: what is the maximum storage of an array table type? If I store more than 500 rows, how will the performance be? Is there any other way to achieve this? It should not decrease the performance. Let me know your thoughts.
    Thanks

    It really depends on what you are planning to do with the records once you return them from your stored procedure, and what client is on the receiving end of the results.
    One option would be to just return a ref cursor and let the client deal with retrieving the rows itself, whether one by one or by a bulk collect. Another option would be to declare a table of records matching the result set and do a bulk collect into that table type in your procedure, then return the table type to the caller. You could also declare a table type for each field in the result set, bulk collect the records into those types, and return one for each field.
    Personally, I would likely go with returning a ref cursor. Both of the "collect the result set in your procedure and then return collections to the caller" methods require memory on the database server to hold the entire result set and memory on the client to hold the entire result set. While 500 records is probably not going to be too bad on memory, if the result set grows you will run into memory issues at some point.
    John
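
    To add a hedged sketch to John's point about memory: if you do decide to collect inside the procedure, a BULK COLLECT with a LIMIT keeps PGA usage bounded no matter how far beyond 500 rows the cursor grows (emp is just a stand-in for the real cursor):

        DECLARE
          TYPE emp_tab_t IS TABLE OF emp%ROWTYPE;
          l_rows emp_tab_t;
          CURSOR c IS SELECT * FROM emp;
        BEGIN
          OPEN c;
          LOOP
            FETCH c BULK COLLECT INTO l_rows LIMIT 100;   -- fetch in batches of 100 rows
            EXIT WHEN l_rows.COUNT = 0;
            FOR i IN 1 .. l_rows.COUNT LOOP
              NULL;                                       -- process each row here
            END LOOP;
          END LOOP;
          CLOSE c;
        END;
        /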

  • How to Populate a table type variable from a cursor

    Hi
    I have a stored procedure (P1) that returns a ref cursor as the output.
    Another procedure (P2) receives this ref cursor (C).
    In this procedure (P2), I want to do a Bulk Collect from this ref cursor (C) into
    a table type variable that has been declared locally in procedure P2. I have created appropriate object types and table types at the database level.
    Please advise how to do it. I have tried to do it in different ways, but was not able to; each time I faced incompatible data-type related issues.
    Regards
    Madhup

    What I wrote was unclear. Syntactically it is valid and does something. But consider the advantage of a decent design.
    SQL> create or replace procedure p1 (o out sys_refcursor) as
      2  begin
      3   open o for select * from emp;
      4  end p1;
      5  /
    Procedure created.
    SQL> create or replace procedure p2(i sys_refcursor) as
      2   type emp_tab is table of emp%rowtype;
      3   l_emp_tab emp_tab;
      4  begin
      5   fetch i bulk collect into l_emp_tab;
      6   close i;
      7  
      8   for i in 1..l_emp_tab.count loop
      9     NULL;
    10   end loop;
    11  end p2;
    12  /
    Procedure created.
    SQL> CREATE OR REPLACE PROCEDURE p3 IS
      2 
      3  TYPE myarray IS TABLE OF emp%ROWTYPE;
      4  l_data myarray;
      5 
      6  CURSOR r IS
      7  SELECT * FROM emp;
      8 
      9  BEGIN
    10    OPEN r;
    11    LOOP
    12      FETCH r BULK COLLECT INTO l_data;
    13 
    14      FOR j IN 1 .. l_data.COUNT
    15      LOOP
    16        NULL;
    17      END LOOP;
    18 
    19      EXIT WHEN r%NOTFOUND;
    20    END LOOP;
    21    CLOSE r;
    22  END p3;
    23  /
    Procedure created.
    SQL> set serverout on
    SQL> set timing on
    SQL> declare
      2   r sys_refcursor;
      3  begin
      4    FOR i IN 1 .. 10000 LOOP
      5      p1(r);
      6      p2(r);
      7    END LOOP;
      8  end;
      9  /
    PL/SQL procedure successfully completed.
    Elapsed: 00:00:01.71
    SQL> begin
      2    FOR i IN 1 .. 10000 LOOP
      3      p3;
      4    END LOOP;
      5  end;
      6  /
    PL/SQL procedure successfully completed.
    Elapsed: 00:00:01.21
    SQL>
    Again, sorry for being less than clear.

  • Multiline attribute vs. table type based attribute in workflow container

    Hi,
    When we are talking about the definition of a workflow container attribute, you have to choose the attribute's type from the dictionary and decide whether it is a multiline attribute or a single-line (flat) one. If we want to define a multiline string-based attribute, we can do it in the following ways:
    — define an attribute of type string and set the multiline checkbox;
    — define an attribute of a table type of string and do not set the multiline checkbox.
    My question is:
    Is there any difference between these two approaches described above (flat type + multiline vs. table type + single line)?
    Thanks.

    I don't think that there is any difference. If you set a table type as the container element data type, the multiline checkbox is checked automatically (and you cannot change that). So eventually the table type container element is the same as a structure type container element with the multiline checkbox checked.
    EDIT: Or does it behave differently for you?
    Regards,
    Karri

  • Managing statistics for object collections used as table types in SQL

    Hi All,
    Is there a way to manage statistics for collections used as table types in SQL?
    Below is my test case
    Oracle Version :
    SQL> select * from v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    PL/SQL Release 11.2.0.3.0 - Production
    CORE    11.2.0.3.0      Production
    TNS for IBM/AIX RISC System/6000: Version 11.2.0.3.0 - Production
    NLSRTL Version 11.2.0.3.0 - Production
    SQL>
    Original Query:
    SELECT
         9999,
         tbl_typ.FILE_ID,
         tf.FILE_NM,
         tf.MIME_TYPE,
         dbms_lob.getlength(tfd.FILE_DATA)
    FROM
         TG_FILE tf,
         TG_FILE_DATA tfd,
         (
              SELECT *
              FROM
                   TABLE(
                        SELECT
                             CAST(TABLE_ESC_ATTACH(OBJ_ESC_ATTACH( 9999, 99991, 'file1.png', NULL, NULL, NULL),
                                  OBJ_ESC_ATTACH( 9999, 99992, 'file2.png', NULL, NULL, NULL)) AS TABLE_ESC_ATTACH)
                        FROM
                             dual
                   )
         ) tbl_typ
    WHERE
         tf.FILE_ID  = tfd.FILE_ID
    AND  tf.FILE_ID  = tbl_typ.FILE_ID
    AND  tfd.FILE_ID = tbl_typ.FILE_ID;
    Elapsed: 00:00:02.90
    Execution Plan
    Plan hash value: 3970072279
    | Id  | Operation                                | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                         |              |     1 |   194 |  4567   (2)| 00:00:55 |
    |*  1 |  HASH JOIN                               |              |     1 |   194 |  4567   (2)| 00:00:55 |
    |*  2 |   HASH JOIN                              |              |  8168 |   287K|   695   (3)| 00:00:09 |
    |   3 |    VIEW                                  |              |  8168 |   103K|    29   (0)| 00:00:01 |
    |   4 |     COLLECTION ITERATOR CONSTRUCTOR FETCH|              |  8168 | 16336 |    29   (0)| 00:00:01 |
    |   5 |      FAST DUAL                           |              |     1 |       |     2   (0)| 00:00:01 |
    |   6 |    TABLE ACCESS FULL                     | TG_FILE      |   565K|    12M|   659   (2)| 00:00:08 |
    |   7 |   TABLE ACCESS FULL                      | TG_FILE_DATA |   852K|   128M|  3863   (1)| 00:00:47 |
    Predicate Information (identified by operation id):
       1 - access("TF"."FILE_ID"="TFD"."FILE_ID" AND "TFD"."FILE_ID"="TBL_TYP"."FILE_ID")
       2 - access("TF"."FILE_ID"="TBL_TYP"."FILE_ID")
    Statistics
              7  recursive calls
              0  db block gets
          16783  consistent gets
          16779  physical reads
              0  redo size
            916  bytes sent via SQL*Net to client
            524  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              2  rows processed
    Indexes are present in both tables (TG_FILE, TG_FILE_DATA) on column FILE_ID.
    select
         index_name,blevel,leaf_blocks,DISTINCT_KEYS,clustering_factor,num_rows,sample_size
    from
         all_indexes
    where table_name in ('TG_FILE','TG_FILE_DATA');
    INDEX_NAME                     BLEVEL LEAF_BLOCKS DISTINCT_KEYS CLUSTERING_FACTOR     NUM_ROWS SAMPLE_SIZE
    TG_FILE_PK                          2        2160        552842             21401       552842      285428
    TG_FILE_DATA_PK                     2        3544        852297             61437       852297      852297
    Ideally the view should have used a NESTED LOOP, to make use of the indexes, since the number of rows coming from the object collection is only 2.
    But it is taking the default of 8168, leading to a HASH join between the tables and therefore FULL TABLE access.
    So my question is: is there any way by which I can change the statistics while using collections in SQL?
    I could use hints to force the indexes, but I am planning to avoid that for now. Currently the time shown in the explain plan is not accurate.
    Modified query with hints :
    SELECT
        /*+ index(tf TG_FILE_PK) index(tfd TG_FILE_DATA_PK) */
        9999,
        tbl_typ.FILE_ID,
        tf.FILE_NM,
        tf.MIME_TYPE,
        dbms_lob.getlength(tfd.FILE_DATA)
    FROM
        TG_FILE tf,
        TG_FILE_DATA tfd,
        (
             SELECT *
             FROM
                  TABLE(
                       SELECT
                            CAST(TABLE_ESC_ATTACH(OBJ_ESC_ATTACH( 9999, 99991, 'file1.png', NULL, NULL, NULL),
                                 OBJ_ESC_ATTACH( 9999, 99992, 'file2.png', NULL, NULL, NULL)) AS TABLE_ESC_ATTACH)
                       FROM
                            dual
                  )
        ) tbl_typ
    WHERE
        tf.FILE_ID  = tfd.FILE_ID
    AND tf.FILE_ID  = tbl_typ.FILE_ID
    AND tfd.FILE_ID = tbl_typ.FILE_ID;
    Elapsed: 00:00:00.01
    Execution Plan
    Plan hash value: 1670128954
    | Id  | Operation                                 | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                          |                 |     1 |   194 | 29978   (1)| 00:06:00 |
    |   1 |  NESTED LOOPS                             |                 |       |       |            |          |
    |   2 |   NESTED LOOPS                            |                 |     1 |   194 | 29978   (1)| 00:06:00 |
    |   3 |    NESTED LOOPS                           |                 |  8168 |  1363K| 16379   (1)| 00:03:17 |
    |   4 |     VIEW                                  |                 |  8168 |   103K|    29   (0)| 00:00:01 |
    |   5 |      COLLECTION ITERATOR CONSTRUCTOR FETCH|                 |  8168 | 16336 |    29   (0)| 00:00:01 |
    |   6 |       FAST DUAL                           |                 |     1 |       |     2   (0)| 00:00:01 |
    |   7 |     TABLE ACCESS BY INDEX ROWID           | TG_FILE_DATA    |     1 |   158 |     2   (0)| 00:00:01 |
    |*  8 |      INDEX UNIQUE SCAN                    | TG_FILE_DATA_PK |     1 |       |     1   (0)| 00:00:01 |
    |*  9 |    INDEX UNIQUE SCAN                      | TG_FILE_PK      |     1 |       |     1   (0)| 00:00:01 |
    |  10 |   TABLE ACCESS BY INDEX ROWID             | TG_FILE         |     1 |    23 |     2   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       8 - access("TFD"."FILE_ID"="TBL_TYP"."FILE_ID")
       9 - access("TF"."FILE_ID"="TBL_TYP"."FILE_ID")
           filter("TF"."FILE_ID"="TFD"."FILE_ID")
    Statistics
              0  recursive calls
              0  db block gets
             16  consistent gets
              8  physical reads
              0  redo size
            916  bytes sent via SQL*Net to client
            524  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              2  rows processed
    Thanks,
    B

    Thanks Tubby,
    While searching I had found that we can use the CARDINALITY hint to set statistics for a TABLE function.
    But I preferred not to mention it, as it is currently an undocumented hint. I now think I should have mentioned it when posting for the first time.
    http://www.oracle-developer.net/display.php?id=427
    Going through that document, it mentions the following options for setting statistics:
    1) CARDINALITY (undocumented)
    2) OPT_ESTIMATE (undocumented)
    3) DYNAMIC_SAMPLING (documented)
    4) Extensible Optimiser
    I tried it out with the different hints and it is working as expected,
    i.e. cardinality and opt_estimate take the value set in the hint.
    But using the dynamic_sampling hint provides the most correct estimate of the rows (which is 2 in this particular case).
    With CARDINALITY hint
    SELECT
        /*+ cardinality( e, 5) */ *
    FROM
         TABLE(
              SELECT
                   CAST(TABLE_ESC_ATTACH(OBJ_ESC_ATTACH( 9999, 99991, 'file1.png', NULL, NULL, NULL),
                        OBJ_ESC_ATTACH( 9999, 99992, 'file2.png', NULL, NULL, NULL)) AS TABLE_ESC_ATTACH)
              FROM
                   dual
         ) e;
    Elapsed: 00:00:00.00
    Execution Plan
    Plan hash value: 1467416936
    | Id  | Operation                             | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                      |      |     5 |    10 |    29   (0)| 00:00:01 |
    |   1 |  COLLECTION ITERATOR CONSTRUCTOR FETCH|      |     5 |    10 |    29   (0)| 00:00:01 |
    |   2 |   FAST DUAL                           |      |     1 |       |     2   (0)| 00:00:01 |
    With OPT_ESTIMATE hint
    SELECT
         /*+ opt_estimate(table, e, scale_rows=0.0006) */ *
    FROM
         TABLE(
              SELECT
                   CAST(TABLE_ESC_ATTACH(OBJ_ESC_ATTACH( 9999, 99991, 'file1.png', NULL, NULL, NULL),
                        OBJ_ESC_ATTACH( 9999, 99992, 'file2.png', NULL, NULL, NULL)) AS TABLE_ESC_ATTACH)
              FROM
                   dual
         ) e;
    Execution Plan
    Plan hash value: 4043204977
    | Id  | Operation                              | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                       |      |     5 |   485 |    29   (0)| 00:00:01 |
    |   1 |  VIEW                                  |      |     5 |   485 |    29   (0)| 00:00:01 |
    |   2 |   COLLECTION ITERATOR CONSTRUCTOR FETCH|      |     5 |    10 |    29   (0)| 00:00:01 |
    |   3 |    FAST DUAL                           |      |     1 |       |     2   (0)| 00:00:01 |
    With DYNAMIC_SAMPLING hint
    SELECT
        /*+ dynamic_sampling( e, 5) */ *
    FROM
         TABLE(
              SELECT
                   CAST(TABLE_ESC_ATTACH(OBJ_ESC_ATTACH( 9999, 99991, 'file1.png', NULL, NULL, NULL),
                        OBJ_ESC_ATTACH( 9999, 99992, 'file2.png', NULL, NULL, NULL)) AS TABLE_ESC_ATTACH)
              FROM
                   dual
         ) e;
    Elapsed: 00:00:00.00
    Execution Plan
    Plan hash value: 1467416936
    | Id  | Operation                             | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                      |      |     2 |     4 |    11   (0)| 00:00:01 |
    |   1 |  COLLECTION ITERATOR CONSTRUCTOR FETCH|      |     2 |     4 |    11   (0)| 00:00:01 |
    |   2 |   FAST DUAL                           |      |     1 |       |     2   (0)| 00:00:01 |
    Note
       - dynamic sampling used for this statement (level=2)
    I will be testing the last option, "Extensible Optimizer", and will put my findings here.
    I hope Oracle improves the statistics gathering for collections used in DML in future releases, instead of just assuming the default based on block size.
    By the way, are you aware why it uses the default block size? Is it because it is the smallest granular unit which Oracle provides?
    Regards,
    B
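
    For anyone else following this thread, here is a rough sketch of the "Extensible Optimiser" option mentioned above (based on the same oracle-developer.net article). It only applies when the collection is produced by a table function, so a hypothetical wrapper function get_attachments returning TABLE_ESC_ATTACH is assumed here; the statistics type simply reports the real row count to the optimizer. Check the Data Cartridge documentation for the exact signatures on your release:

        CREATE OR REPLACE TYPE attach_stats_ot AS OBJECT (
          dummy NUMBER,
          STATIC FUNCTION ODCIGetInterfaces(p_interfaces OUT SYS.ODCIObjectList)
            RETURN NUMBER,
          STATIC FUNCTION ODCIStatsTableFunction(p_function   IN  SYS.ODCIFuncInfo,
                                                 p_stats      OUT SYS.ODCITabFuncStats,
                                                 p_args       IN  SYS.ODCIArgDescList,
                                                 p_collection IN  TABLE_ESC_ATTACH)
            RETURN NUMBER
        );
        /
        CREATE OR REPLACE TYPE BODY attach_stats_ot AS
          STATIC FUNCTION ODCIGetInterfaces(p_interfaces OUT SYS.ODCIObjectList)
            RETURN NUMBER IS
          BEGIN
            p_interfaces := SYS.ODCIObjectList(SYS.ODCIObject('SYS', 'ODCISTATS2'));
            RETURN ODCIConst.success;
          END;
          STATIC FUNCTION ODCIStatsTableFunction(p_function   IN  SYS.ODCIFuncInfo,
                                                 p_stats      OUT SYS.ODCITabFuncStats,
                                                 p_args       IN  SYS.ODCIArgDescList,
                                                 p_collection IN  TABLE_ESC_ATTACH)
            RETURN NUMBER IS
          BEGIN
            p_stats := SYS.ODCITabFuncStats(p_collection.COUNT);  -- report the actual cardinality
            RETURN ODCIConst.success;
          END;
        END;
        /
        -- Tie the statistics type to the (hypothetical) table function:
        -- ASSOCIATE STATISTICS WITH FUNCTIONS get_attachments USING attach_stats_ot;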

  • Is there table type object in PL/SQL?

    Hi.
    What I'm trying to do is, according to some conditions, aggregate data as an object type (I am not sure whether there is such a thing).
    There are 5-6 conditions; at the end I join those table type objects.
    FYI, Here is an example
    The company will give a bonus to a person who meets these conditions:
    1. Salary is less than 1000$ - salary table
    2. More than 5 persons in his family - staff table
    3. Gross sale is over 10000$
    In a PL/SQL package, I'll get those data as object types and join those objects just like an in-line query.
    select emp_no
    from sal_tbl , staff_tbl, sale_tbl
    where sal_tbl = staff_tbl
    and sal_tbl = sale_tbl
    In my opinion I can get the list of employees who get the bonus, because they meet all conditions.
    The reason: the bonus conditions will be updated continuously, and I want to make them easy to maintain. What if I need to add new conditions? Just add a new procedure that calculates the new result and passes it as a table type object.
    select emp_no
    from sal_tbl , staff_tbl, sale_tbl, new_tbl*
    where sal_tbl = staff_tbl
    and sal_tbl = sale_tbl
    and sal_tbl = new_tbl*
    Is there anything just like what I think?
    Thanks in advance
    I have read the Oracle user guide sections on collections, records, and cursors, but I can't find anything that does what I want.
    Give me some clues.

    > In PL/SQL Package, I'll get those data as an object type and join those object just like
    in-line query.
    Not the best of ideas. If I'm getting what you're saying, you want to use the following approach:
    SQL> create or replace type TStrings as table of varchar2(4000);
    2 /
    Type created.
    SQL>
    SQL>
    SQL> create or replace procedure BadIdea( collection TStrings ) is
    2 minVal string(4000);
    3 maxVal string(4000);
    4 begin
    5 -- in the procedure we now run SQLs against the collection
    6 select
    7 MIN( column_value ) into minVal
    8 from TABLE( collection );
    9
    10 select
    11 MAX( column_value ) into maxVal
    12 from TABLE( collection );
    13 end;
    14 /
    Procedure created.
    SQL> show errors
    No errors.
    SQL>
    SQL>
    SQL> -- and then we call this procedure to do our "SQL" processing for us
    SQL> declare
    2 list TStrings;
    3 begin
    4 select
    5 object_name bulk collect into list
    6 from user_objects;
    7
    8 BadIdea( list );
    9 end;
    10 /
    PL/SQL procedure successfully completed.
    A collection of objects resides in (non-sharable and expensive) PL/SQL memory. Running SQL against it requires the PL/SQL engine to copy the data to the SQL engine, into a structure and format that the SQL engine will understand.
    This is slow. This is expensive. This does not scale.
    It makes little sense to copy data from the very fast and hot and good and scalable db buffer cache into an expensive memory structure in the PGA and then run SQLs against that.
    > Reason why bonus condition will be updated continuously. I want to make them easy
    to maintanence. what if I need to add new conditions. Just add a new procedure that
    calculate new result and pass as an table type object.
    The way I read your SQL, your new_tbl* requires dynamic SQL. Which means you also need to dynamically cater for the correct column names to join on, to filter on, and to select from.
    Also not the best of ideas. Dynamic SQL like that requires a lot of code to deal correctly with bind variables, and a lot of exception handling, as there can easily be run-time errors when the executed code in turn creates new dynamic code to execute.
    If this is to be truly dynamic, then one approach would be that each rule needs to be executable as a SQL or PL/SQL block. Each rule needs to have an input like an employee number. Each rule needs to return a boolean-like value, saying whether the rule has passed or failed.
    Only when all the rules have been passed can the bonus be allocated.
    This will deal fine with the "dynamic rule" requirement. Performance-wise and scalability-wise, it may not be the best of ideas. 10 dynamic and very slow and expensive rules could very well be rewritten as one very fast and very cheap static SQL statement.
    Anyway, the dynamic rule approach can look something like the following:
    SQL> create or replace type TBonusRule is object
    2 (
    3 result# char(1),
    4 member function Passed return boolean
    5 )
    6 not final;
    7 /
    Type created.
    SQL> show errors
    No errors.
    SQL>
    SQL>
    SQL> create or replace type body TBonusRule is
    2 member function Passed return boolean is
    3 begin
    4 return( UPPER(self.result#) = 'Y' );
    5 end;
    6 end;
    7 /
    Type body created.
    SQL> show errors
    No errors.
    SQL>
    SQL>
    SQL> create or replace type TBonusSQLRule under TBonusRule
    2 (
    3 constructor function TBonusSQLRule( empNo number, sqlStatement varchar2 ) return self as result
    4 ) final;
    5 /
    Type created.
    SQL> show errors
    No errors.
    SQL>
    SQL>
    SQL> create or replace type body TBonusSQLRule is
    2 constructor function TBonusSQLRule( empNo number, sqlStatement varchar2 ) return self as result is
    3 begin
    4 execute immediate sqlStatement
    5 into self.result#
    6 using IN empNo;
    7
    8 return;
    9 end;
    10
    11 end;
    12 /
    Type body created.
    SQL> show errors
    No errors.
    SQL>
    SQL> set serveroutput on
    SQL> declare
    2 rule1 TBonusSQLRule;
    3 rule2 TBonusSQLRule;
    4 empNo number;
    5 begin
    6 empNo := 7369; -- we process employee 7369
    7
    8 -- we apply bonus rule 1 that check if the employee is a clerk (of course,
    9 -- we could be reading these rules from a rules table - this example simply
    10 -- creates them dynamically)
    11 rule1 := new TBonusSQLRule( empNo, 'select ''Y'' from emp where empno = :0 and job=''CLERK''' );
    12
    13 if rule1.Passed then
    14 DBMS_OUTPUT.put_line( 'Rule 1. PASSED' );
    15 else
    16 DBMS_OUTPUT.put_line( 'Rule 1. FAILED' );
    17 end if;
    18
    19 -- rule 2 can for example check if the employee has been working for at least 5 years for the
    20 -- company
    21 rule2 := new TBonusSQLRule( empNo, 'select ''Y'' from emp where empno = :0 and (SYSDATE-hiredate)>5*365' );
    22
    23 if rule2.Passed then
    24 DBMS_OUTPUT.put_line( 'Rule 2. PASSED' );
    25 else
    26 DBMS_OUTPUT.put_line( 'Rule 2. FAILED' );
    27 end if;
    28
    29 end;
    30 /
    Rule 1. PASSED
    Rule 2. PASSED
    PL/SQL procedure successfully completed.
    SQL>
    PL/SQL rules can in a similar fashion be subclassed from the base/abstract class. And rules can be persisted in a table too.
    But even though I did this example to illustrate just how flexible Oracle can be, I would personally think hard before using the above approach myself.
    Why?
    Because rules 1 and 2 resulted in two SQLs being fired. A single SQL could have done the job.
    2 SQLs were used for a single employee. I can use a single SQL to find ALL employees that match the rule criteria.
    So... not very scalable and not very fast.
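
    To make that closing point concrete, a single static SQL covering both example rules might look like this (same emp columns as in the example above):

        SELECT empno
        FROM   emp
        WHERE  job = 'CLERK'                      -- rule 1: employee is a clerk
        AND    (SYSDATE - hiredate) > 5 * 365;    -- rule 2: employed for at least 5 years

    One pass over the table evaluates both rules for all employees, which is the scalable alternative to firing one dynamic SQL per rule per employee.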

  • Nested table type in object view on 8.1.7

    Object views seem to be the ideal way to deliver XML datagrams from database queries with nested data.
    I need to create a datagram that contains nested data within another nested set of data eg. a family has many people, each person may have many hobbies.
    The following code, taken from the Oracle documentation, should create the types I need, but it does not work on 8.1.7 (it gets a PLS-00534 error). Can someone advise whether nested tables within a table type are a new Oracle 9 feature?
    CREATE TYPE project_t AS OBJECT
    ( projname VARCHAR2(20)
    , mgr VARCHAR2(20));
    CREATE TYPE nt_project_t AS TABLE OF project_t;
    CREATE TYPE emp_t AS OBJECT
    ( ename VARCHAR2(20)
    , salary NUMBER
    , deptname VARCHAR2(20)
    , projects nt_project_t);
    CREATE TYPE nt_emp_t AS TABLE OF emp_t;
    CREATE TYPE dept_t AS OBJECT
    ( deptno NUMBER
    , deptname VARCHAR2(20)
    , emps nt_emp_t);
    Thks, Matt. (I asked the same question in the XML forum, but it is maybe more appropriate here.)

    Matthew,
    Value-based multi-level collections, such as the one you have here, were not supported in 8.1.7. You have two choices:
    1. Upgrade to 9i to take advantage of value-based multi-level collections (see http://download-west.oracle.com/otndoc/oracle9i/901_doc/appdev.901/a88878/adobjbas.htm#462243), type inheritance, type evolution and other new features.
    2. Use REFs in 8.1.7 to build reference-based multi-level collections (see http://otn.oracle.com/docs/products/oracle8i/doc_library/817_doc/appdev.817/a76976/adobjdes.htm#446229), as sketched below.
    Regards,
    Geoff
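
    A rough sketch of option 2 (reference-based collections) on 8.1.7, reusing the types from the question: the inner level becomes a table of REFs pointing into an object table, which avoids the unsupported collection-within-a-collection. The exact syntax should be checked against the 8.1.7 documentation:

        CREATE TYPE project_t AS OBJECT
        ( projname VARCHAR2(20)
        , mgr      VARCHAR2(20));
        /
        CREATE TYPE nt_project_ref_t AS TABLE OF REF project_t;
        /
        CREATE TYPE emp_t AS OBJECT
        ( ename    VARCHAR2(20)
        , salary   NUMBER
        , deptname VARCHAR2(20)
        , projects nt_project_ref_t);   -- collection of REFs instead of a nested object collection
        /
        CREATE TABLE projects_tab OF project_t;   -- object table that the REFs point into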

  • What's the importance of "table type" in SAP ABAP?

    hi,
    i am Ahmed, an ABAP fresher.
    I want to know the use and importance of the table type in SAP ABAP, which appears under:
    Data Dictionary -> Data Types -> data element | structure | table type
    I want to know about the table type. Please give a brief idea.
    bye.

    hi,
    Transparent Tables
    A transparent table in the dictionary has a one-to-one relationship with a table in the database. Its structure in R/3 Data Dictionary corresponds to a single database table. For each transparent table definition in the dictionary, there is one associated table in the database. The database table has the same name, the same number of fields, and the fields have the same names as the R/3 table definition. When looking at the definition of an R/3 transparent table, it might seem like you are looking at the database table itself.
    Transparent tables are much more common than pooled or cluster tables. They are used to hold application data. Application data is the master data or transaction data used by an application. An example of master data is the table of vendors (called vendor master data), or the table of customers (called customer master data). An example of transaction data is the orders placed by the customers, or the orders sent to the vendors.
    Transparent tables are probably the only type of table you will ever create. Pooled and cluster tables are not usually used to hold application data but instead hold system data, such as system configuration information, or historical and statistical data.
    Both pooled and cluster tables have many-to-one relationships with database tables. Both can appear as many tables in R/3, but they are stored as a single table in the database. The database table has a different name, different number of fields, and different field names than the R/3 table. The difference between the two types lies in the characteristics of the data they hold, and will be explained in the following sections.
    Table Pools and Pooled Tables
    A pooled table in R/3 has a many-to-one relationship with a table in the database (see Figures 3.1 and 3.2). For one table in the database, there are many tables in the R/3 Data Dictionary. The table in the database has a different name than the tables in the DDIC, it has a different number of fields, and the fields have different names as well. Pooled tables are an SAP proprietary construct.
    When you look at a pooled table in R/3, you see a description of a table. However, in the database, it is stored along with other pooled tables in a single table called a table pool. A table pool is a database table with a special structure that enables the data of many R/3 tables to be stored within it. It can only hold pooled tables.
    R/3 uses table pools to hold a large number (tens to thousands) of very small tables (about 10 to 100 rows each). Table pools reduce the amount of database resources needed when many small tables have to be open at the same time. SAP uses them for system data. You might create a table pool if you need to create hundreds of small tables that each hold only a few rows of data. To implement these small tables as pooled tables, you first create the definition of a table pool in R/3 to hold them all. When activated, an associated single table (the table pool) will be created in the database. You can then define pooled tables within R/3 and assign them all to your table pool (see Figure 3.2).
    Pooled tables are primarily used by SAP to hold customizing data.
    When a corporation installs any large system, the system is usually customized in some way to meet the unique needs of the corporation. In R/3, such customization is done via customizing tables. Customizing tables contain codes, field validations, number ranges, and parameters that change the way the R/3 applications behave.
    Some examples of data contained in customizing tables are country codes, region (state or province) codes, reconciliation account numbers, exchange rates, depreciation methods, and pricing conditions. Even screen flows, field validations, and individual field attributes are sometimes table-driven via settings in customizing tables.
    During the initial implementation of the system the data in the customizing tables is set up by a functional analyst. He or she will usually have experience relating to the business area being implemented and extensive training in the configuration of an R/3 system.
    Table Clusters and Cluster Tables
    A cluster table is similar to a pooled table. It has a many-to-one relationship with a table in the database. Many cluster tables are stored in a single table in the database called a table cluster.
    A table cluster is similar to a table pool. It holds many tables within it. The tables it holds are all cluster tables.
    Like pooled tables, cluster tables are another proprietary SAP construct. They are used to hold data from a few (approximately 2 to 10) very large tables. They would be used when these tables have a part of their primary keys in common, and if the data in these tables are all accessed simultaneously. The data is stored logically as shown in Figure 3.3.
    Figure 3.3 : Table clusters store data from several tables based on the primary key fields that they have in common.
    Table clusters contain fewer tables than table pools and, unlike table pools, the primary key of each table within the table cluster begins with the same field or fields. Rows from the cluster tables are combined into a single row in the table cluster. The rows are combined based on the part of the primary key they have in common. Thus, when a row is read from any one of the tables in the cluster, all related rows in all cluster tables are also retrieved, but only a single I/O is needed.
    A cluster is advantageous in the case where data is accessed from multiple tables simultaneously and those tables have at least one of their primary key fields in common. Cluster tables reduce the number of database reads and thereby improve performance.
    For example, as shown in Figure 3.4, the first four primary key fields in cdhdr and cdpos are identical. They become the primary key for the table cluster with the addition of a standard system field pageno to ensure that each row is unique.
    Reward if helpful
    Jagadish

  • Alter table type from COLUMN to ROW

    The TABLE type can be changed from ROW to COLUMN (and vice versa) using the ALTER TABLE command.
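
    A minimal syntax sketch (schema and table names are made up; check the HANA SQL reference for your revision):

        ALTER TABLE "MYSCHEMA"."MYTABLE" ALTER TYPE ROW;     -- column store -> row store
        ALTER TABLE "MYSCHEMA"."MYTABLE" ALTER TYPE COLUMN;  -- row store -> column store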
    Lars Breddemann wrote:
    when considering which data store to choose (which, by the way, can be changed later on as well), you have to take into account:
    * will you usually need the complete row (all columns)? If so, row store may be more efficient, as reconstructing the complete row is one of the most expensive column store operations.
    * will you need to join the row-store table to a column store table? If so, you should avoid using a different storage type, since using both storage engines in a statement leads to intermediate result set materialization which is another name for bad performance.
    * do you want to fill the table with huge amounts of data, that should be aggregated and analysed? If this is the case, the column store is the better option.
    As a rule of thumb you may just start with column-store tables and change them to row-store tables when you encounter performance issues.
    In general most developers cannot anticipate all important use cases for the tables they design.
    This is especially true for living and growing systems.
    So, more important than choosing the 'right' storage in the beginning is to monitor the performance and to benchmark the differences when changing the storage engine.
    So suppose we have a COLUMN table but would need to get data from many columns (so it would be a very expensive column-store operation). Would it be advisable to change the table type from COLUMN to ROW on the fly? Would this be a resource-intensive operation if the table has a lot of data?
    Let's suppose the above can be done, but there exists an interdependency on the column table (say from another simultaneous operation), and thus it should remain a COLUMN table as such. What would be the better option in this case?
    Creating views is not an option, as it seems from the SQL guide that there is no option to create a ROW view from a COLUMN table.

    Dear Rajarshi,
    1. You can't alter a table from column to row using the ALTER command,
    but you can achieve this through a stored procedure, with just a little bit of HSQL coding.
    I hope upcoming SAP versions give us SQL statements like the following (the statement below does not work in HANA; it works in Oracle):
       create row table "EFASHION_TUTORIAL"."AAA" as
    select
    "ARTICLE_COLOR_LOOKUP_ID",
    "ARTICLE_ID",
    "COLOR_CODE",
    "ARTICLE_LABEL",
    "COLOR_LABEL",
    "CATEGORY",
    "SALE_PRICE",
    "FAMILY_NAME",
    "FAMILY_CODE"
    from "EFASHION_TUTORIAL"."ARTICLE_COLOR_LOOKUP";
    2. Row & column table two different purpose like OLTP & OLAP.
         when you think about OLAP means modeling use Column.
          when you think about OLTP means real time operations then use Row
    Column table is high compress ( 5 - 20X),  i don't think you want get any performance issue when read information from column table. that is actual Core engine reading parller process. ( that is Heart of HANA).
    Column table purpose quite different like calculations, grouping.. most of DW environment Queires.
    Row table is currently system tables in feature row tables as OLTP, it's less compress mode compress to column store.
    so finally you write small program convert column to row and row to column
    thanks
    Rao

    This seems to be a long-standing problem.  I am experiencing problems caused by Apple's lousy implementation of user-defined places in iPhoto 11 and cannot afford to test whether reducing the radius of a place that overlaps one hundred or so other us