Table with planning definition?

Hi friends!
Does anybody know where I can find a table, report, or transaction
in which I can see the planning definition?
(i.e. all the characteristics in each planning level, all the aggregation levels, etc.)
Thanks in advance!

Hello,
Please go through the following link:
Standard tables available in SAP BI
http://wiki.sdn.sap.com/wiki/display/BI/Important%20Tables%20in%20SAP%20BI%20%28%20NW2004%29
Thanks.
With regards,
Anand Kumar

Similar Messages

  • How to Compare Data length of staging table with base table definition

    Hi,
    I have two tables: a staging table and a base table.
    I'm getting data from flat files into the staging table. As per the requirement, the structures of the staging table and the base table differ: the length of every column in the staging table is 25% larger so the data can be loaded without errors. For example, if we have a city column of VARCHAR2(40) in the staging table, it is VARCHAR2(25) in the base table. Once the data is loaded into the staging table, I want to compare the actual data length of every column in the staging table with the base table's definition (DATA_LENGTH for each column from ALL_TAB_COLUMNS), and if any column's data is too long, I need to update the corresponding row in the staging table, which also has a flag column called err_length.
    For this I'm using:
    cursor c1 is select length(a.id),length(a.name)... from staging_table;
    cursor c2(name varchar2) is select data_length from all_tab_columns where table_name='BASE_TABLE' and column_name=name;
    But the first cursor returns all the data at once, whereas with the second cursor I have to fetch each column's length individually and then compare it with the first.
    Can anyone tell me how to get the desired results?
    Thanks,
    Mahender.

    This is a shot in the dark, but take a look at the example below:
    SQL> DROP TABLE STAGING;
    Table dropped.
    SQL> DROP TABLE BASE;
    Table dropped.
    SQL> CREATE TABLE STAGING
      2  (
      3          ID              NUMBER
      4  ,       A               VARCHAR2(40)
      5  ,       B               VARCHAR2(40)
      6  ,       ERR_LENGTH      VARCHAR2(1)
      7  );
    Table created.
    SQL> CREATE TABLE BASE
      2  (
      3          ID      NUMBER
      4  ,       A       VARCHAR2(25)
      5  ,       B       VARCHAR2(25)
      6  );
    Table created.
    SQL> INSERT INTO STAGING VALUES (1,RPAD('X',26,'X'),RPAD('X',25,'X'),NULL);
    1 row created.
    SQL> INSERT INTO STAGING VALUES (2,RPAD('X',25,'X'),RPAD('X',26,'X'),NULL);
    1 row created.
    SQL> INSERT INTO STAGING VALUES (3,RPAD('X',25,'X'),RPAD('X',25,'X'),NULL);
    1 row created.
    SQL> COMMIT;
    Commit complete.
    SQL> SELECT * FROM STAGING;
            ID A                                        B                                        E
             1 XXXXXXXXXXXXXXXXXXXXXXXXXX               XXXXXXXXXXXXXXXXXXXXXXXXX
             2 XXXXXXXXXXXXXXXXXXXXXXXXX                XXXXXXXXXXXXXXXXXXXXXXXXXX
             3 XXXXXXXXXXXXXXXXXXXXXXXXX                XXXXXXXXXXXXXXXXXXXXXXXXX
    SQL> UPDATE  STAGING ST
      2  SET     ERR_LENGTH = 'Y'
      3  WHERE   EXISTS
      4          (
      5                  WITH    columns_in_staging AS
      6                  (
      7                          /* Retrieve all the columns names for the staging table with the exception of the primary key column
      8                           * and order them alphabetically.
      9                           */
    10                          SELECT  COLUMN_NAME
    11                          ,       ROW_NUMBER() OVER (ORDER BY COLUMN_NAME) RN
    12                          FROM    ALL_TAB_COLUMNS
    13                          WHERE   TABLE_NAME='STAGING'
    14                          AND     COLUMN_NAME != 'ID'
    15                          ORDER BY 1
    16                  ),      staging_unpivot AS
    17                  (
    18                          /* Using the columns_in_staging above UNPIVOT the result set so you get a record for each COLUMN value
    19                           * for each record. The DECODE performs the unpivot and it works if the decode specifies the columns
    20                           * in the same order as the ROW_NUMBER() function in columns_in_staging
    21                           */
    22                          SELECT  ID
    23                          ,       COLUMN_NAME
    24                          ,       DECODE
    25                                  (
    26                                          RN
    27                                  ,       1,A
    28                                  ,       2,B
    29                                  )  AS VAL
    30                          FROM            STAGING
    31                          CROSS JOIN      COLUMNS_IN_STAGING
    32                  )
    33                  /*      Only return IDs for records that have at least one column value that exceeds the length. */
    34                  SELECT  ID
    35                  FROM
    36                  (
    37                          /* Join the unpivoted staging table to the ALL_TAB_COLUMNS table on the column names. Here we perform
    38                           * the check to see if there are any differences in the length if so set a flag.
    39                           */
    40                          SELECT  STAGING_UNPIVOT.ID
    41                          ,       (CASE WHEN ATC.DATA_LENGTH < LENGTH(STAGING_UNPIVOT.VAL) THEN 'Y' END) AS ERR_LENGTH_A
    42                          ,       (CASE WHEN ATC.DATA_LENGTH < LENGTH(STAGING_UNPIVOT.VAL) THEN 'Y' END) AS ERR_LENGTH_B
    43                          FROM    STAGING_UNPIVOT
    44                          JOIN    ALL_TAB_COLUMNS ATC     ON ATC.COLUMN_NAME = STAGING_UNPIVOT.COLUMN_NAME
    45                          WHERE   ATC.TABLE_NAME='BASE'
    46                  )       A
    47                  WHERE   COALESCE(ERR_LENGTH_A,ERR_LENGTH_B) IS NOT NULL
    48                  AND     ST.ID = A.ID
    49          )
    50  /
    2 rows updated.
    SQL> SELECT * FROM STAGING;
            ID A                                        B                                        E
             1 XXXXXXXXXXXXXXXXXXXXXXXXXX               XXXXXXXXXXXXXXXXXXXXXXXXX                Y
             2 XXXXXXXXXXXXXXXXXXXXXXXXX                XXXXXXXXXXXXXXXXXXXXXXXXXX               Y
             3 XXXXXXXXXXXXXXXXXXXXXXXXX                XXXXXXXXXXXXXXXXXXXXXXXXX
    Hopefully the comments make sense. If you have any questions please let me know.
    This assumes the column names are the same between the staging and base tables. In addition, as you add more columns to the table you'll have to add more CASE statements to check the lengths, and update the COALESCE check as necessary.
    Thanks!
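    One more thought: if maintaining a CASE statement per column becomes a burden, a dynamic variant is possible. The following is only a sketch (untested, reusing the STAGING/BASE names from the example above) that loops over the base table's column definitions and issues one UPDATE per column:
    BEGIN
      FOR c IN (SELECT column_name, data_length
                  FROM all_tab_columns
                 WHERE table_name = 'BASE'
                   AND column_name != 'ID')
      LOOP
        -- Flag every staging row whose value is longer than the base column allows
        EXECUTE IMMEDIATE
          'UPDATE staging SET err_length = ''Y''' ||
          ' WHERE LENGTH(' || c.column_name || ') > ' || c.data_length;
      END LOOP;
    END;
    /
    New columns are then picked up automatically, at the cost of one UPDATE statement per column.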

  • ORA-01461 Error when mapping table with multiple varchar2(4000) fields

    (Note: I think this was an earlier problem, supposedly fixed in 11.0, but we are experiencing it in 11.7.)
    If I map an Oracle 9i table with multiple VARCHAR2(4000) columns, targeting another Oracle 9i database, I get the ORA-01461 error (can bind a LONG value only for insert into a LONG column).
    I have tried changing the target columns to VARCHAR2(1000), as suggested as a workaround in earlier versions, all to no avail.
    I can have just one VARCHAR2(4000) map correctly and execute flawlessly - the problem occurs when I add a second one.
    I have tried making the target column a LONG, but that does not solve the problem.
    Then I made the target database SQL Server, and it had no problem at all, so the issue seems to be Oracle-related.

    Hi Jon,
    Thanks for the feedback. I'm unable to reproduce the problem you describe at the moment - if I try to migrate a TEXT(5), OMWB creates a VARCHAR(5) and the data migrates correctly!! However, I note from your description that even though the problematic source column datatype is TEXT(5), there are actually 20 lines of text in this field (and not 5 variable-length characters as the definition might suggest).
    Having read through some of the MySQL reference guide, I note that, in certain circumstances, MySQL actually changes the column datatype specified either at table creation time or when interfacing with other databases (ref. 14.2.5.1 Silent Column Specification Changes and 12.7 Using Column Types from Other Database Engines in the MySQL reference guide). Since your TEXT(5) actually contains 20 lines of text, MySQL (database or JDBC driver... or both) may be trying to automatically map the specified datatype of the column to a datatype more appropriate to storing 20 lines of text... that is, to a LONG value in this case. Then, when Oracle is presented with this LONG value to store in a VARCHAR(5) field, it throws the ORA-01461 error. I need to investigate this further, but this may be the case - it's the first time I've seen this problem encountered.
    To work around this, you could change the datatype of the column to a LONG from within the Oracle Model before migrating. Any application code that accesses this column and expects a TEXT(5) value may need to be adjusted to cope with a LONG value. Is this a viable workaround for you?
    I will investigate further and notify you of any details I uncover. We will need to track this issue for possible inclusion in future development plans.
    I hope this helps,
    Regards,
    Tom.

  • Help to read a table with data source and convert time stamp

    Hi Gurus,
      I have a requirement and need to write an ABAP program. As soon as I execute the program, it should ask me to enter a data source name. The program should then read a table with this data source as key, sort the time stamps from the table, and display the data source and time stamp as output.
    As follows:
    Enter Data Source Name: 
    Then the user enters: 2lis_11_vahdr
    Then the output should be "Data source: 2lis_11_vahdr  10-15-2008".
    The time stamp format in the table is 20050126031520 (YYYYMMDDhhmmss). I have to display it as 01-26-2005. Any help would be appreciated.
    Thanks,
    Ram

    Hi Jayanthi Babu Peruri,
    I tried extracting YEAR, MONTH, and DAY separately and writing them out using an EDIT MASK.
    There is most likely a standard conversion routine for this, but I have no idea which one.
    DATA : V_TS      TYPE TIMESTAMP,
           V_TS_T    TYPE CHAR16,
           V_YYYY    TYPE CHAR04,
           V_MM      TYPE CHAR02,
           V_DD      TYPE CHAR02.
    START-OF-SELECTION.
      GET TIME STAMP FIELD V_TS.
      V_TS_T = V_TS.
      CONDENSE V_TS_T.
      V_YYYY = V_TS_T.
      V_MM   = V_TS_T+4(2).
      V_DD   = V_TS_T+6(2).
      V_TS_T(2) = V_MM.
      V_TS_T+2(2) = V_DD.
      V_TS_T+4(4) = V_YYYY.
      SKIP 10.
      WRITE : /10 V_TS," USING EDIT MASK '____-__-________'.
              /10 V_YYYY,
              /10 V_MM,
              /10 V_DD,
              /10 V_TS_T USING EDIT MASK '__-__-__________'.
    If you want DATE alone, just declare the length of V_TS_T as 10.
    Regards,
    R.Nagarajan.

  • R4 EA: Error loading a table with an XMLTYPE field

    When I try to view the data in a table with an XMLTYPE field I get the following error in R4 EA Version 4.0.0.12 Build MAIN-12-27.  This works in 2.2 and 3.0.
    oracle.sqldeveloper.migration.application
         Error: Resource not found: ${SCRATCH_COMMAND_ICON}.
    Double clicking on the error opens the EXTENSION.XML file and shows this line:
    <trigger-hooks xmlns="http://xmlns.oracle.com/ide/extension">
      <!-- Add registry here if required -->
      <triggers xmlns:c="http://xmlns.oracle.com/ide/customization">
        <actions xmlns="http://xmlns.oracle.com/jdeveloper/1013/extension">
          <action id="MigrationProject.ApplicationScan">
            <properties>
              <property name="Name">${APPSCAN_TITLE}</property>
              <property name="MnemonicKey">${APPSCAN_TITLE2}</property>
              <property name="SmallIcon">res:${SCRATCH_COMMAND_ICON}</property>
            </properties>
          </action>
        </actions>
    The table has the following definition:
    ID    NUMBER(38,0)    No
    WS_DATA    XMLTYPE    Yes
    WS_SNAPSHOT_ID    NUMBER(38,0)    No
    Any help would be greatly appreciated. We have to upgrade to 4.0 because our security team will no longer allow Java 6 on any server or workstation.
    Thanks,
    Steve

    Hi Steve
    Still no response?
    I am having the same issue when querying a table with XMLTYPE.
    How is your XMLTYPE stored in the DB, as a CLOB or as binary XML?
    Regards,
    Shaun

  • MATERIALIZED view on two tables with Fast Refresh

    I wanted to create an MV on two tables with fast refresh on commit.
    I followed the steps below:
    create materialized view log on t1 WITH PRIMARY KEY, rowid;
    create materialized view log on t2 WITH PRIMARY KEY, rowid;
    CREATE MATERIALIZED VIEW ETL_ENTITY_DIVISION_ASSO_MV
    REFRESH fast ON commit
    ENABLE QUERY REWRITE
    AS
    select a.ROWID rid1, b.ROWID rid2, a.c1, DECODE(a.c1,'aaa','xxx','aaa') c2
    from t1 a
    join t2 b
    on a.c1 = b.c2;
    I am getting the error below.
    Error report:
    SQL Error: ORA-12054: cannot set the ON COMMIT refresh attribute for the materialized view
    12054. 00000 - "cannot set the ON COMMIT refresh attribute for the materialized view"
    *Cause:    The materialized view did not satisfy conditions for refresh at
    commit time.
    *Action:   Specify only valid options.
    Basically I want the MV to hold the records produced by joining the two tables, and if either of the base tables is updated, the change should be reflected in the materialized view.
    Please do the needful.

    Does the table support PCT? The other restrictions on joins look to be OK in your statement.
    Maybe try creating it first with ON DEMAND instead of ON COMMIT to see whether it creates at all.
    http://docs.oracle.com/cd/B19306_01/server.102/b14223/basicmv.htm
    >
    Materialized Views Containing Only Joins
    Some materialized views contain only joins and no aggregates, such as in Example 8-4, where a materialized view is created that joins the sales table to the times and customers tables. The advantage of creating this type of materialized view is that expensive joins will be precalculated.
    Fast refresh for a materialized view containing only joins is possible after any type of DML to the base tables (direct-path or conventional INSERT, UPDATE, or DELETE).
    A materialized view containing only joins can be defined to be refreshed ON COMMIT or ON DEMAND. If it is ON COMMIT, the refresh is performed at commit time of the transaction that does DML on the materialized view's detail table.
    If you specify REFRESH FAST, Oracle performs further verification of the query definition to ensure that fast refresh can be performed if any of the detail tables change. These additional checks are:
    A materialized view log must be present for each detail table unless the table supports PCT. Also, when a materialized view log is required, the ROWID column must be present in each materialized view log.
    The rowids of all the detail tables must appear in the SELECT list of the materialized view query definition.
    If some of these restrictions are not met, you can create the materialized view as REFRESH FORCE to take advantage of fast refresh when it is possible. If one of the tables did not meet all of the criteria, but the other tables did, the materialized view would still be fast refreshable with respect to the other tables for which all the criteria are met.
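    Putting those checks together, an untested sketch of a version that should pass the ON COMMIT verification (a log with ROWID on each detail table, and both rowids in the SELECT list) would be:
    CREATE MATERIALIZED VIEW LOG ON t1 WITH PRIMARY KEY, ROWID;
    CREATE MATERIALIZED VIEW LOG ON t2 WITH PRIMARY KEY, ROWID;
    CREATE MATERIALIZED VIEW etl_entity_division_asso_mv
    REFRESH FAST ON COMMIT
    ENABLE QUERY REWRITE
    AS
    SELECT a.ROWID AS t1_rid,   -- rowid of every detail table
           b.ROWID AS t2_rid,   -- must appear in the SELECT list
           a.c1,
           DECODE(a.c1, 'aaa', 'xxx', 'aaa') AS c2
    FROM   t1 a
    JOIN   t2 b ON a.c1 = b.c2;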

  • Passing an internal table WITH HEADER LINE to abap object

    Hi. In another thread, it was explained how to pass an internal table to an ABAP Objects method. Is it possible to pass an internal table that has a header line, and RETAIN the header line once the table has been passed?
    My problem is that I can pass the table and update it, but the read buffer is not populated when returning from the object's method. This is the result of being able to pass a STANDARD TABLE type, but not a STANDARD TABLE WITH HEADER LINE.
    This means that I have to read the table into a work area instead of doing a READ TABLE LKNA1 within the method, which is what I need to do.
    Thanks.

    Please check this sample program. Notice that it modifies the internal table and passes it back modified, as well as passing the "work area" or "header line" back through the exporting parameter.
    report zrich_0001.
    *       CLASS lcl_app DEFINITION
    class lcl_app definition.
      public section.
        types: t_t001 type table of t001.
        class-data: it001 type table of t001.
        class-data: xt001 like line of it001.
        class-methods: change_table
                                    exporting ex_wt001 type t001
                                    changing im_t001 type t_t001.
    endclass.
    data: w_t001 type t001.
    data: a_t001 type table of t001 with header line.
    start-of-selection.
      select * into table a_t001 from t001.
      call method lcl_app=>change_table
                 importing
                     ex_wt001 = w_t001
                 changing
                     im_t001  = a_t001[] .
      check sy-subrc  = 0.
    *       CLASS lcl_app IMPLEMENTATION
    class lcl_app implementation.
      method change_table.
        loop at im_t001 into xt001.
          concatenate xt001-butxt 'Changed'
               into xt001-butxt separated by space.
          modify im_t001 from xt001.
        endloop.
        ex_wt001 = xt001.
      endmethod.
    endclass.
    Regards,
    Rich Heilman

  • Cartesian of data from two tables with no matching columns

    Hello,
    I was wondering – what’s the best way to create a Cartesian of data from two tables with no matching columns in such a way, so that there will be only a single SQL query generated?
    I am thinking about something like:
    for $COUNTRY in ns0:COUNTRY()
    for $PROD in ns1:PROD()
    return <Results>
         <COUNTRY> {fn:data($COUNTRY/COUNTRY_NAME)} </COUNTRY>
         <PROD> {fn:data($PROD/PROD_NAME)} </PROD>
    </Results>
    And the expected result is combination of all COUNTRY_NAMEs with all PROD_NAMEs.
    What I’ve noticed when checking query plan is that DSP will execute two queries to have the results – one for COUNTRY_NAME and another one for PROD_NAME. Which in general results in not the best performance ;-)
    What I’ve noticed also is that when I add something like:
    where COUNTRY_NAME != PROD_NAME
    everything is OK and there is only one query created (it's red in the query plan, but it's still OK from my point of view). Still, it looks to me more like a workaround than a real best approach. I may be wrong though...
    So the question is – what’s the suggested approach for such queries?
    Thanks,
    Leszek

    > Which in general results in not the best performance
    I disagree. Only for two tables with very few rows would a single SQL statement give better performance.
    Suppose there are 10,000 rows in each table - the cross-product will result in 100 million rows. Sounds like a bad idea. For this reason, DSP will not push a cross-product to a database. It will get the rows from each table in separate sql statements (retrieving only 20,000 rows) and then produce the cross-product itself.
    If you want to execute sql with cross-products, you can create a sql-statement based dataservice. I recommend against doing so.
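    (For reference, the cross-product the XQuery implies is just a CROSS JOIN in plain SQL - assuming COUNTRY and PROD are ordinary relational tables with those column names:
    SELECT c.country_name, p.prod_name
    FROM   country c
    CROSS JOIN prod p;
    which is exactly the statement DSP declines to push down for large tables.)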

  • Problem creating a table with a subquery and a dblink

    Hello!!
    I have a little problem. When I create a table with a subquery and a dblink, for example:
    CREATE TABLE EXAMPLE2 AS SELECT * FROM EXAMPLE1@DBLINK
    the table definition is changed. Fields with a type of CHAR or VARCHAR2 are created with a size three times bigger than in the original table in the remote database. Fields of type DATE and NUMBER are not changed. For example, if the original table in database 1 has a field of type CHAR(1), it is created in the local database as CHAR(3), and a VARCHAR2(5) field is created as VARCHAR2(15).
    Database 1 has a WE8DEC character set.
    Database 2 has a AL32UTF8 character set.
    Could it be related to the difference in character sets?
    What can I do to make Oracle use the same table definition when creating a table in this way?
    Thanks!!

    That is related to character sets, and probably necessary if you want all the data in the remote table to be able to fit in the new table.
    When you declare a column VARCHAR2(5), by default, you're allocating 5 bytes of storage. In a single-byte character set, which I believe WE8DEC is, that also happens to equate to 5 characters. In a multi-byte character set like AL32UTF8, though, 1 character may require up to 3 bytes of storage, so you'd need to allocate 15 bytes to store that data. That's what's going on here.
    You could still store all the data if you create the table locally by explicitly requesting 5 characters of storage by declaring the column VARCHAR2(5 CHAR). You could also set the NLS_LENGTH_SEMANTICS parameter to CHAR rather than BYTE before creating the table, but I believe that both of these will only apply when you're explicitly defining columns as part of your CREATE TABLE. I don't believe either will apply to CTAS statements.
    Justin
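    (To make the two options above concrete, here is a sketch - table and column names are hypothetical, and note Justin's caveat that neither option is expected to help a CTAS over the db link:
    -- Option 1: character semantics declared per column
    CREATE TABLE example2 (
      city VARCHAR2(5 CHAR)   -- 5 characters, up to 15 bytes in AL32UTF8
    );
    -- Option 2: default semantics for the session, then an explicit definition
    ALTER SESSION SET NLS_LENGTH_SEMANTICS = CHAR;
    CREATE TABLE example3 (
      city VARCHAR2(5)        -- now also means 5 characters
    );)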

  • Build page with screen definition in XML using XSLT in ADF 11.1.1.x

    Hi folks,
    I'm figuring out how best to integrate Oracle Policy Automation/Web Determinations in ADF. My idea is that in an ADF task flow I first call an init-session web service on OPA to initiate a session with facts queried from ADF components.
    Then I would query a Determinations Server web service to get the next interview-screen definition. This gives me an XML document with the definition of the screen to present. I could create an XSLT that transforms this into an HTML form. Then I would show a page with this (X)HTML form included in a container. The user could fill in the question fields. Then, on a command button, I would read the values from the HTTP request and feed them into a web service call to the Determinations Service.
    Is a scenario like this possible in ADF? And could you give me some hints to get me on track? Or would you suggest otherwise?
    The recommendation from OPA is to use data adapters. But those are Java classes, based on a Java interface, that have to be custom-built on a data model. And I could imagine several security implications there.
    Thanks in advance.
    Regards,
    Martien

    Can you use JAXB to unmarshal the XML document to a Java class (and create a POJO data control out of the Java class) and use it in the ADF pages as a form, table, or tree table?
    Take a look at the following sample how an XML document having a schema associated can be converted to a Java class and used in the UI:
    http://adftree.googlecode.com/svn/trunk/TreeSample.zip
    Thanks,
    Navaneeth

  • Large partitioned tables with WM

    Hello
    I've got a few large tables (6-10GB+) that will have around 500k new rows added on a daily basis as part of an overnight batch job. No rows are ever updated, only inserted or deleted and then re-inserted. I want to change the process that adds the new rows from an overnight batch to a near-real-time process, i.e. a queue will be populated with requests to rebuild the content of these tables for specific parent ids, and a process will consume those requests throughout the day rather than going through the whole list in one go.
    I need to provide views of the data as of a point in time, i.e. what was the content of the tables at close of business yesterday, and for this I am considering using workspaces.
    I need to keep at least 10 days' worth of data and I was planning to partition the table and drop one partition every day. If I use workspaces, I can see that Oracle creates a view in place of the original table and creates a versioned table with the _LT suffix - this is the table name returned by DBMS_WM.GetPhysicalTableName. Would it be considered bad practice to drop partitions from this physical table as I would with a non-version-enabled table? If so, what would be the best method for dropping off old data?
    Thanks in advance
    David

    Hello Ben
    Thank you for your reply.
    The table structure we have is like so:
    CREATE TABLE hdr
    (   pk_id               NUMBER PRIMARY KEY,
        customer_id         NUMBER REFERENCES customer,
        entry_type          NUMBER NOT NULL
    );
    CREATE TABLE dtl_daily
    (   pk_id               NUMBER PRIMARY KEY,
        hdr_id              NUMBER REFERENCES hdr,
        active_date         DATE NOT NULL,
        col1                NUMBER,
        col2                NUMBER
    )
    PARTITION BY RANGE(active_date)
    (   PARTITION ptn_200709
            VALUES LESS THAN (TO_DATE('200710','YYYYMM'))
            TABLESPACE x COMPRESS,
        PARTITION ptn_200710
            VALUES LESS THAN (TO_DATE('200711','YYYYMM'))
            TABLESPACE x COMPRESS
    );
    CREATE TABLE dtl_hourly
    (   pk_id               NUMBER PRIMARY KEY,
        hdr_id              NUMBER REFERENCES hdr,
        active_date         DATE NOT NULL,
        active_hour         NUMBER NOT NULL,
        col1                NUMBER,
        col2                NUMBER
    )
    PARTITION BY RANGE(active_date)
    (   PARTITION ptn_20070901
            VALUES LESS THAN (TO_DATE('20070902','YYYYMMDD'))
            TABLESPACE x COMPRESS,
        PARTITION ptn_20070902
            VALUES LESS THAN (TO_DATE('20070903','YYYYMMDD'))
            TABLESPACE x COMPRESS,
        PARTITION ptn_20070903
            VALUES LESS THAN (TO_DATE('20070904','YYYYMMDD'))
            TABLESPACE x COMPRESS
        ...For every day for 20 years
    );
    The hdr table holds one or more rows for each customer and has its own synthetic key generated for every entry, as there can be multiple rows having the same entry_type for a customer. There are two detail tables, daily and hourly, which hold detail data at those two granularities. Some customers require hourly detail, in which case the hourly table is populated and the daily table is populated by aggregating the hourly data. Other customers require only daily data, in which case the hourly table is not populated.
    At the moment, changes to customer data require that the content of these tables is rebuilt for that customer. This rebuild is done every night for the changed customers, and I want to change this to a near-real-time rebuild. The rebuild involves deleting all existing entries from the three tables for the customer and then re-inserting the new set using new synthetic keys. If we do make this near real time, we need to be able to provide a snapshot of the data as of close of business every day, and we need to be able to report as of a point in time up to 10 days in the past.
    For any one customer, they may have rows in the hourly table that go out 20 years at an hourly granularity, but once the active date has passed (by 10 days), we no longer need to keep them. This is why we were considering partitioning, as it gives us a simple way of dropping off old data and, as a nice side effect, helps to improve the performance of queries that are looking for active data between a range of dates (which is most of them).
    I did have a look at the idea of savepoints, but I wasn't sure it would be efficient. So in this case, would the idea be that we don't partition the table, but instead at close of business every day we create a savepoint like "savepoint_20070921", and instead of using DBMS_WM.GotoDate we would use DBMS_WM.GotoSavePoint? Then every day we would do:
    DBMS_WM.DeleteSavepoint(
       workspace                   => 'LIVE',
       savepoint_name              => 'savepoint_20070910', --10 days ago
       compress_view_wo_overwrite  => TRUE);
    DBMS_WM.CompressWorkspace(
       workspace                   => 'LIVE',
       compress_view_wo_overwrite  => TRUE,
       firstSP                     => 'savepoint_20070911' --the new oldest save point
       );
    Is my understanding correct?
    David
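    (For reference, the physical table name mentioned above can be fetched with a simple query - the owner and table name here are placeholders:
    SELECT DBMS_WM.GetPhysicalTableName('SCOTT', 'HDR') FROM dual;
    which returns the versioned _LT-style name once the table is version-enabled.)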

  • Displaying several tables with same structure

    There are several tables with the same structure (number of columns, column names, column types etc.) and corresponding ADF BC view objects.
    A user should interactively choose the table he or she wants to edit. The UI remains the same for any given table.
    I need advice on how to implement this optimally, considering ease of development and performance.
    Right now I'm planning to use a SelectOneListbox for choosing and hide and show some containers with tables and forms for editing.
    I feel it should be quite easy, but I am an ADF newbie and feel puzzled :)
    Thanks in advance for any help!

    Hi,
    I feel your approach will work fine. Check if the following link can help:
    http://www.oracle.com/technology/products/adf/patterns/11/enabledisablepattern.pdf
    Vikram

  • Internal table with variable no of columns

    Hi All,
    I have to create an internal table with some fixed columns; the rest of the columns should be dynamic. The total number of columns depends on a field of another table, hence the number of columns is unknown. How can I create such a table?
    Thanks,
    Neha

    Execute this program - you will get a fair idea of how the dynamic columns are populated in the table. Based on that you can build your second table with 92 columns.
    *& Report  ZTEST009
    REPORT  ztest009 NO STANDARD PAGE HEADING LINE-SIZE 60 LINE-COUNT 2(1).
    TYPE-POOLS : slis.
    TYPES : BEGIN OF internal,
            matnr(18),
            werks(4),
            qtyn(20),
            desc(20) TYPE c,
            qty TYPE i,
            END OF internal.
    DATA : it TYPE TABLE OF internal,
           wa TYPE internal.
    DATA : fieldcat  TYPE lvc_t_fcat,
           lcat      TYPE lvc_s_fcat,
           final_cat TYPE slis_t_fieldcat_alv,
           fcat      TYPE slis_fieldcat_alv,
           top       TYPE slis_t_listheader,
           events    TYPE slis_t_event,
           layout    TYPE slis_layout_alv.
    DATA : newfield  TYPE REF TO data,
           newdata   TYPE REF TO data.
    FIELD-SYMBOLS : <fs1>,
                    <dynamic_value>,
                    <dynamic_cat> TYPE STANDARD TABLE.
    START-OF-SELECTION.
      PERFORM popudate.
      PERFORM buildcat.
      PERFORM loadata.
      PERFORM events USING events.
      PERFORM header USING top.
      PERFORM layout.
    END-OF-SELECTION.
      PERFORM display.
    *&      Form  popudate
    *       text
    *  -->  p1        text
    *  <--  p2        text
    FORM popudate .
      DEFINE popu.
        wa-matnr = &1.
        wa-werks = &2.
        wa-qtyn = &3.
        wa-desc = &4.
        wa-qty = &5.
        append wa to it.
        clear wa.
      END-OF-DEFINITION.
      popu 'material1' 'pla1' 'QTY1' 'quantity1' 100.
      popu 'material1' 'pla1' 'QTY2' 'quantity2' 200.
      popu 'material1' 'pla1' 'QTY3' 'quantity3' 300.
      popu 'material2' 'pla2' 'QTY1' 'quantity1' 400.
      popu 'material2' 'pla2' 'QTY2' 'quantity2' 500.
      popu 'material2' 'pla2' 'QTY3' 'quantity3' 600.
      popu 'material3' 'pla3' 'QTY1' 'quantity1' 700.
      popu 'material3' 'pla3' 'QTY2' 'quantity2' 400.
      SORT it BY matnr.
    ENDFORM.                    " popudate
    *&      Form  buildcat
    *       text
    *  -->  p1        text
    *  <--  p2        text
    FORM buildcat .
      lcat-fieldname = 'MATNR'.
      lcat-datatype = 'CHAR'.
      lcat-seltext = 'Material'.
      lcat-intlen = 18.
      APPEND lcat TO fieldcat.
      CLEAR lcat.
      lcat-fieldname = 'WERKS'.
      lcat-datatype = 'CHAR'.
      lcat-seltext = 'Plant'.
      lcat-intlen = 4.
      APPEND lcat TO fieldcat.
      CLEAR lcat.
      LOOP AT it INTO wa.
        READ TABLE fieldcat INTO lcat WITH KEY fieldname = wa-qtyn.
        IF sy-subrc <> 0.
          lcat-fieldname = wa-qtyn.
          lcat-datatype = 'CHAR'.
          lcat-seltext = wa-desc.
          lcat-intlen = 10.
          APPEND lcat TO fieldcat.
          CLEAR lcat.
        ENDIF.
      ENDLOOP.
      CLEAR lcat.
      CALL METHOD cl_alv_table_create=>create_dynamic_table
        EXPORTING
    *     i_style_table             =
          it_fieldcatalog           = fieldcat
    *     i_length_in_byte          =
        IMPORTING
          ep_table                  = newfield
    *     e_style_fname             =
        EXCEPTIONS
          generate_subpool_dir_full = 1
          others                    = 2.
      IF sy-subrc <> 0.
        MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
                WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
      ENDIF.
      ASSIGN newfield->* TO <dynamic_cat>.
      CREATE DATA newdata LIKE LINE OF <dynamic_cat>.
      ASSIGN newdata->* TO <dynamic_value>.
    ENDFORM.                    " buildcat
    *&      Form   loadata
    *       text
    *  -->  p1        text
    *  <--  p2        text
    FORM  loadata .
      DATA flag TYPE i.
      LOOP AT it INTO wa.
        flag = 0.
        ASSIGN COMPONENT 'MATNR' OF STRUCTURE <dynamic_value> TO <fs1>.
        <fs1> = wa-matnr.
        ASSIGN COMPONENT 'WERKS' OF STRUCTURE <dynamic_value> TO <fs1>.
        <fs1> = wa-werks.
        CALL FUNCTION 'AIPC_CONVERT_TO_UPPERCASE'
          EXPORTING
            i_input  = wa-qtyn
            i_langu  = sy-langu
          IMPORTING
            e_output = wa-qtyn.
        ASSIGN COMPONENT wa-qtyn OF STRUCTURE <dynamic_value> TO <fs1>.
        <fs1> = wa-qty.
        AT END OF matnr.
          APPEND <dynamic_value> TO <dynamic_cat>.
          CLEAR : <dynamic_value>.
        ENDAT.
        CLEAR : wa.
      ENDLOOP.
    ENDFORM.                    "  loadata
    *&      Form  display
    *       text
    *  -->  p1        text
    *  <--  p2        text
    FORM display .
      LOOP AT fieldcat INTO lcat.
        fcat-fieldname = lcat-fieldname.
        fcat-outputlen = lcat-intlen.
        fcat-seltext_l = lcat-seltext.
        APPEND fcat TO final_cat.
      ENDLOOP.
      CALL FUNCTION 'REUSE_ALV_LIST_DISPLAY'
        EXPORTING
          i_callback_program       = sy-repid
    *     i_callback_pf_status_set = ' '
    *     i_callback_user_command  = ' '
          is_layout                = layout
          it_fieldcat              = final_cat
    *     it_sort                  =
    *     i_save                   = ' '
    *     is_variant               =
          it_events                = events
        TABLES
          t_outtab                 = <dynamic_cat>.
    ENDFORM.                    " display
    Regards,
    Aswin.

  • Joining a table with all_tab_columns

    How is it possible to join a table with all_tab_columns?
    The query I'm trying to build: my table has 12 columns named after the months. Based on an input variable, I want to return the values for only the selected months. The only way I could think of was to match the month columns against the all_tab_columns view.
    Any suggestion is really appreciated
    SELECT bust,
    Sum(jan) JAN,
    Sum(feb) FEB,
    Sum(mar) MAR,
    Sum(apr) APR,
    Sum(may) MAY,
    Sum(jun) JUN,
    Sum(jul) JUL,
    Sum(aug) AUG,
    Sum(sep) SEP,
    Sum(oct) OCT,
    Sum(nov) NOV,
    Sum(DEC) DECC
    FROM budget a,all_tab_columns b
    WHERE vsl_code = 4602
    AND code = 1
    AND year=2013
    AND account_code='30'
    AND b.table_name='BUDGET'
    AND b.column_name IN
                         (SELECT column_name
                          FROM (SELECT Column_name, ROWNUM r
                                        FROM all_tab_columns b
                                        WHERE table_name = 'BUDGET'
                                        AND Column_id BETWEEN 3 AND 14
                                        ORDER BY column_id)
                          WHERE r BETWEEN 2 AND 3 ) --Returns February,March
    group by bust;

    Sorry, I don't understand what you're trying to do or why you think you need to join to all_tab_columns. Perhaps you could post the definition of the budget table, some sample data, and the results you're hoping to see.
    Without that, I don't see why you can't just do this:
    SELECT bust,   
    Sum(jan) JAN,   
    Sum(feb) FEB,   
    Sum(mar) MAR,   
    Sum(apr) APR,  
    Sum(may) MAY,  
    Sum(jun) JUN,   
    Sum(jul) JUL,   
    Sum(aug) AUG,   
    Sum(sep) SEP,   
    Sum(oct) OCT,  
    Sum(nov) NOV,   
    Sum(DEC) DECC  
    FROM budget a
    WHERE vsl_code = 4602  
    AND code = 1  
    AND year=2013  
    AND account_code='30' 
    group by bust;
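    That said, if the month list really must be driven by all_tab_columns at run time, dynamic SQL is one possible route. The sketch below is untested; it assumes 11gR2+ for LISTAGG, and that column_ids 4-5 are February and March, as in your subquery. It builds the SUM list from the dictionary and opens a ref cursor:
    DECLARE
      l_cols VARCHAR2(4000);
      l_cur  SYS_REFCURSOR;
    BEGIN
      -- Builds e.g. 'SUM(FEB) FEB, SUM(MAR) MAR' from the dictionary
      SELECT LISTAGG('SUM(' || column_name || ') ' || column_name, ', ')
             WITHIN GROUP (ORDER BY column_id)
        INTO l_cols
        FROM all_tab_columns
       WHERE table_name = 'BUDGET'
         AND column_id BETWEEN 4 AND 5;
      OPEN l_cur FOR
        'SELECT bust, ' || l_cols || ' FROM budget' ||
        ' WHERE vsl_code = :1 AND code = :2 AND year = :3 AND account_code = :4' ||
        ' GROUP BY bust'
        USING 4602, 1, 2013, '30';
      -- fetch from l_cur as needed
    END;
    /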

  • Where's the metadata for STORE IN clause of a table with partition?

    Hi Experts,
    I created a table with a range-interval partition using the STORE IN clause. Its definition:
    CREATE TABLE interval_part (
    person_id NUMBER(5) NOT NULL,
    first_name VARCHAR2(30),
    last_name VARCHAR2(30))
    PARTITION BY RANGE (person_id)
    INTERVAL (100) STORE IN (TSP_1, TSP_2, TSP_3) (
    PARTITION p1 VALUES LESS THAN (101))
    TABLESPACE TSP_1;
    I cannot find the metadata for the STORE IN clause in ALL_TAB_PARTITIONS or ALL_TABLES. Where is the metadata for the STORE IN clause? How can I find the tablespace list (TSP_1, TSP_2, TSP_3)?
    Thanks,
    David

    DBMS_METADATA.GET_DDL returns the full definition of the table, but I just need the value of the tablespace list, for example TSP_1, TSP_2, TSP_3. Is there any view which stores that value?
    Thanks,
    David
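    (Absent a dedicated dictionary view, one workable sketch is to carve the clause out of the DDL that DBMS_METADATA already returns:
    SELECT REGEXP_SUBSTR(
             DBMS_METADATA.GET_DDL('TABLE', 'INTERVAL_PART'),
             'STORE IN\s*\([^)]*\)')
      FROM dual;
    This returns just the STORE IN (TSP_1, TSP_2, TSP_3) fragment rather than the whole definition.)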

Maybe you are looking for

  • Hiding A TabStrip ViewSet in a Window

    Hello everyone, I created a TabStrip ViewSet in a Window. Depending on certain conditions, I'd like to hide or show certain tab strips. Is this possible? Any help will be much appreciated. Mike

  • Why copy option from recent call numbers is not working on ios7 [recent call logs in phone]?

    why copy option from recent call numbers is not working on ios7 [recent call logs in phone]?

  • Problem when using utl.http package

    Hello, as subject above, I have a problem with that package: set serveroutput on DECLARE req   utl_http.req; resp  utl_http.resp; value VARCHAR2(32000); BEGIN req := utl_http.begin_request('http://www.psoug.org'); resp := utl_http.get_response(req); val

  • PATCH 10

    Patch 10 documentation is already online. In Metalink I did not find the Patch 10 download. When will it be online?

  • Help for ProactiveHandler & ProactiveResponseHandler

    Hi all, I am now developing a Java Card applet. As my applet will consist of different sub-menus, it will require different classes to manage it. Also I am using the sim.toolkit package. But I have tried to pass these instances from one class to another