How to identify segments when setting up partitions for a large table?

I have a table about 3 GB in size. It has a code column with 20 distinct values, and I am trying to create a list partition on this column. How can I assign a segment to each value of the partition? In my database, fewer than 10 segments are available. For the best performance, should each partition be on a different segment, or on a different device? Should each segment be large enough to hold all of the data? What happens if a segment is smaller, for example if I only have 4 segments of 500 MB each?
If I later remove or change the partitioning strategy, for example change the type to range, will the system release the partitions' space on the segments automatically?

This section of the Performance and Tuning Guide addresses all of these concerns. Give it a good read and post any questions you have about the documentation:
http://infocenter.sybase.com/help/topic/com.sybase.infocenter.dc00841.1570/html/phys_tune/title.htm
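
For reference, ASE lets you name a target segment for each list partition directly in the CREATE TABLE statement. A minimal sketch, assuming segments seg1 through seg4 already exist on separate devices, and grouping the 20 code values five to a partition (the table, column, value, and segment names here are hypothetical):

    create table code_data
        (code char(2)      not null,   -- the column with 20 distinct values
         info varchar(255) not null)
    partition by list (code)
        (p1 values ('01', '02', '03', '04', '05') on seg1,
         p2 values ('06', '07', '08', '09', '10') on seg2,
         p3 values ('11', '12', '13', '14', '15') on seg3,
         p4 values ('16', '17', '18', '19', '20') on seg4)

Each segment only needs to hold the partitions placed on it, not the whole table, but inserts fail with a space error once a segment fills, so size each segment for the codes it will receive. Placing the segments on different devices is what actually enables parallel I/O. Re-partitioning later (e.g. ALTER TABLE ... PARTITION BY RANGE) rebuilds the data into the new partitions, releasing the old ones as part of the operation.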

Similar Messages

  • How do I export when FCP keeps crashing for large QuickTime exports?

    Forum,
    I have a 108-minute finished Final Cut Pro project. Ultimately, I'd like to burn it on a DVD so that it can be watched by any Joe-Schmoe using an ordinary DVD player. I'll be using DVD Studio Pro to do the burning. And before this, I plan to compress using Apple Compressor.
    But...
    I can't export this file to QuickTime! FCP keeps CRASHING about halfway through the process. I'm preserving the Current Settings, which, by the way, are XDCAM EX 1080p24 VBR.
    I have four internal hard drives used as follows:
    D1 OSX and FCP
    D2 audio media
    D3 visual media
    D4 location of FCP files and scratch disk
    I also have a MyBook USB2.0 external drive available should this be of use.
    Any suggestions for a viable workflow here? Should I be exporting to QuickTime first? Is my configuration doomed? Should I export straight to Compressor?
    thanks,
    Shanked

    Is there a way to do this WITHOUT your suggested software?
    Sure, but it's very manual, and the Corrupt Clip Finder really just automates that for you ... but I get your point about keeping things clean and can't fault that philosophy.
    First, you can use Tools > Render Manager to delete all the sequence's render files and references. Then just play through the entire sequence (in the sequence RT menu you can enable the option to Play Base Layer Only). If the corruption was in one of the render files, you should now be able to play through without issue. If the corruption is in one of the clips, you should find that FCP crashes or throws up an error when it tries to play through that clip ... keep a note of where this happens, as that's your corrupt clip.
    Similarly, after trashing renders you can re-render and keep an eye on the render progress ... the error will be thrown proportionate to the area of the timeline where the corrupt source clip is contained.
    Additionally, yes, you can use the Reconnect Media function to selectively reconnect your source media to your masterclips, clip by clip ... the one that won't reconnect is going to be the problem child. That's the one you need to recapture/reimport from source.
    Hope it helps
    Andy

  • How to populate date & time when a user enters data for a custom table in SM30

    Can anyone tell me how to populate the system date & time when a user enters data for a custom table in SM30?
    The requirement is:
    I have a custom table, and users enter data into it via SM30.
    After saving, I want to update the date & time in the table.
    Please let me know where to write the code.
    Thanks in Advance

    You have to write the code in EVENT 01 in the SE54 transaction. Go to SE54, enter your Z-table name and choose 'Environment --> Events' from the menu. Press 'ENTER' to go past the popup message. In the next screen, click on 'New Entries'. In the first column, enter 01 and in the next column give some name for your routine (say UPDATE_USER_DATE_TIME). Then click on the source code icon that appears in blue at the end of the row. In the code, you need logic like below.
    FORM update_user_date_time.
      DATA: f_index LIKE sy-tabix.         " position of the row in EXTRACT
      DATA: BEGIN OF l_total.
              INCLUDE STRUCTURE zztable.      " your table's fields
              INCLUDE STRUCTURE vimtbflags.   " SM30 maintenance flags (VIM_ACTION etc.)
      DATA: END OF l_total.
      DATA: s_record TYPE zztable.
      LOOP AT total INTO l_total.
        " Only touch rows the user changed (AENDERN) or created (NEUER_EINTRAG)
        IF l_total-vim_action = aendern OR
           l_total-vim_action = neuer_eintrag.
          MOVE-CORRESPONDING l_total TO s_record.
          s_record-zz_user = sy-uname.   " current user
          s_record-zz_date = sy-datum.   " system date
          s_record-zz_time = sy-uzeit.   " system time
          " Find the same row in EXTRACT so the displayed data stays in sync
          READ TABLE extract WITH KEY l_total.
          IF sy-subrc EQ 0.
            f_index = sy-tabix.
          ELSE.
            CLEAR f_index.
          ENDIF.
          MOVE-CORRESPONDING s_record TO l_total.
          MODIFY total FROM l_total.
          CHECK f_index GT 0.
          MODIFY extract INDEX f_index FROM l_total.
        ENDIF.
      ENDLOOP.
    ENDFORM.                    " UPDATE_USER_DATE_TIME
    Here ZZTABLE is the Z table and ZZ_USER, ZZ_DATE, and ZZ_TIME are the fields that are updated.

  • How to get the data from the PCL2 cluster for the TCRT table

    Hi frndz,
    How can I get the data from the PCL2 cluster for the TCRT table for US payroll?
    Thanks in advance.
    Harisumanth.Ch

    Please take a look at the sample program EXAMPLE_PNP_GET_PAYROLL in your system. There are numerous other ways to read payroll results; please use the forum search option and you are sure to get a lot of hits.
    ~Suresh

  • How to find block free space and PCTUSED for a table

    Hi all,
    Due to performance issues in my database, my management asked me to reset the PCTUSED and PCTFREE values, and my doubts are:
    1) How do I find the current PCTUSED and PCTFREE values?
    2) How do I find the block free space and PCTUSED for a table?
    Please help me out with this.
    Regards,
    Vamsi.

    1) Version is 10.2.0.4
    2) Output of query:
    TABLESPACE    EXTENT_MANAGEMENT   ALLOCATION_TYPE   SEGMENT_SPACE_MANAGEMENT
    SYSTEM        LOCAL               SYSTEM            MANUAL
    UNDOTBS1      LOCAL               SYSTEM            MANUAL
    SYSAUX        LOCAL               SYSTEM            AUTO
    TEMP          LOCAL               UNIFORM           MANUAL
    USERS         LOCAL               SYSTEM            AUTO
    UNDOTBS2      LOCAL               SYSTEM            MANUAL
    INS           LOCAL               SYSTEM            AUTO
    CONFTBS       LOCAL               SYSTEM            AUTO
    REINS         LOCAL               SYSTEM            AUTO
    ANALYST       LOCAL               SYSTEM            AUTO
    BI            LOCAL               SYSTEM            AUTO
    INTRFC        LOCAL               SYSTEM            AUTO
    COGNOS        LOCAL               SYSTEM            AUTO
    TS_INDX       LOCAL               SYSTEM            AUTO
    TS_CHOLAWEB   LOCAL               SYSTEM            AUTO
    TS_DASBOARD   LOCAL               SYSTEM            AUTO
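
    For question 1, the current settings are recorded in the data dictionary. Note that PCTUSED is ignored (and reported as NULL) for tables in ASSM tablespaces, i.e. those showing AUTO above, so only PCTFREE is relevant there. A minimal sketch, with a hypothetical owner and table name:

        SELECT table_name, pct_free, pct_used
        FROM   dba_tables
        WHERE  owner = 'APP_OWNER'          -- hypothetical schema
        AND    table_name = 'SOME_TABLE';   -- hypothetical table

    For question 2, DBMS_SPACE.SPACE_USAGE reports block fill levels for a segment in an ASSM tablespace, while DBMS_SPACE.FREE_BLOCKS is the equivalent for freelist-managed (MANUAL) segments.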

  • How to find the TCODE that is created for the table maintenance generator

    Hi ,
    How do I find the TCODE that was created for the table maintenance generator of a particular table, if I only know the table name?
    Regards
    Ramakrishna L

    Hello,
    I would try it this way:
    1. Go to SE16 --> enter table TSTCP.
    2. In the selection screen displayed, enter
    PARAM = *<ZTABNAME>*
    You will get the t-code for the TMG.
    BR,
    Suhas
    PS: Are you sure a t-code has been created for this TMG ?
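
    For background: a parameter transaction generated for a maintenance dialog stores its call in TSTCP-PARAM, typically in the form /*SM30 VIEWNAME=ZTABNAME;UPDATE=X. The SE16 selection above therefore amounts to this sketch (ZTABNAME is a placeholder for your table or view name):

        SELECT tcode, param
        FROM   tstcp
        WHERE  param LIKE '%ZTABNAME%';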

  • How to delete ALL partitions older than 3 months from Oracle tables

    How do I delete ALL partitions older than 3 months? Here is the query I am executing, but it only generates a drop for the single partition named for a date exactly 90 days ago. Need help, sir...
    select 'alter table ABC.'||table_name||' drop partition L_'||to_char(sysdate - 90 ,'DDMONYY') || ' UPDATE GLOBAL INDEXES;' from dba_tables where table_name in ('TABLE1','TABLE2');
    Thanks
    Ora_user

    Static SQL:
    alter table TEST1 drop partition L_01DEC10 update global indexes;
    alter table TEST1 drop partition L_06DEC10 update global indexes;
    Table DDL:
    CREATE TABLE TABLE1
    ( SEQ_CACHE         NUMBER(10)         NOT NULL,
      REQUEST_DATE_TIME DATE               NOT NULL,
      MANUAL_LOAD       VARCHAR2(30 BYTE)  NOT NULL,
      SRS_AUTO          VARCHAR2(200 BYTE) NOT NULL,
      SRS_REM           VARCHAR2(30 BYTE)  NOT NULL,
      TGT_FORM          VARCHAR2(200 BYTE) NOT NULL,
      TGT_SYS1          VARCHAR2(25 BYTE),
      LOCK_TYPE         VARCHAR2(30 BYTE)  NOT NULL,
      AUTO_LEVEL        VARCHAR2(20 BYTE)  NOT NULL,
      SUCCESS_STS       CHAR(1 BYTE)       NOT NULL,
      SOURCE_NAME       VARCHAR2(200 BYTE) NOT NULL,
      PNAME_SS          VARCHAR2(40 BYTE),
      ID_CORE           VARCHAR2(40 BYTE),
      ID_TRAN           VARCHAR2(40 BYTE),
      TRIP_TIME         NUMBER(10),
      REMOTE_TIME       NUMBER(10),
      ERROR_TYPE        VARCHAR2(30 BYTE),
      ERROR_TOLL        VARCHAR2(30 BYTE),
      PRG_ID            VARCHAR2(30 BYTE),
      OPER_ERROR_ID     VARCHAR2(30 BYTE),
      DES_SYS_ERR       VARCHAR2(200 BYTE),
      DBS_INS_DT        DATE               NOT NULL
    )
    TABLESPACE DAT01
    PCTUSED 0 PCTFREE 10 INITRANS 1 MAXTRANS 255
    STORAGE (INITIAL 64K BUFFER_POOL DEFAULT)
    NOCOMPRESS
    PARTITION BY RANGE (REQUEST_DATE_TIME)
    ( PARTITION L_06APR11 VALUES LESS THAN (TO_DATE(' 2011-04-06 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
        LOGGING NOCOMPRESS TABLESPACE DAT01 PCTFREE 10 INITRANS 1 MAXTRANS 255
        STORAGE (INITIAL 64K NEXT 1M MINEXTENTS 1 MAXEXTENTS UNLIMITED BUFFER_POOL DEFAULT),
      PARTITION L_11APR11 VALUES LESS THAN (TO_DATE(' 2011-04-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
        LOGGING NOCOMPRESS TABLESPACE DAT01 PCTFREE 10 INITRANS 1 MAXTRANS 255
        STORAGE (INITIAL 64K NEXT 1M MINEXTENTS 1 MAXEXTENTS UNLIMITED BUFFER_POOL DEFAULT),
      PARTITION L_16APR11 VALUES LESS THAN (TO_DATE(' 2011-04-16 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
        LOGGING NOCOMPRESS TABLESPACE DAT01 PCTFREE 10 INITRANS 1 MAXTRANS 255
        STORAGE (INITIAL 64K NEXT 1M MINEXTENTS 1 MAXEXTENTS UNLIMITED BUFFER_POOL DEFAULT),
      PARTITION L_21APR11 VALUES LESS THAN (TO_DATE(' 2011-04-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
        LOGGING NOCOMPRESS TABLESPACE DAT01 PCTFREE 10 INITRANS 1 MAXTRANS 255
        STORAGE (INITIAL 64K NEXT 1M MINEXTENTS 1 MAXEXTENTS UNLIMITED BUFFER_POOL DEFAULT),
      PARTITION L_26APR11 VALUES LESS THAN (TO_DATE(' 2011-04-26 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
        LOGGING NOCOMPRESS TABLESPACE DAT01 PCTFREE 10 INITRANS 1 MAXTRANS 255
        STORAGE (INITIAL 64K NEXT 1M MINEXTENTS 1 MAXEXTENTS UNLIMITED BUFFER_POOL DEFAULT),
      PARTITION L_01MAY11 VALUES LESS THAN (TO_DATE(' 2011-05-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
        LOGGING NOCOMPRESS TABLESPACE DAT01 PCTFREE 10 INITRANS 1 MAXTRANS 255
        STORAGE (INITIAL 64K NEXT 1M MINEXTENTS 1 MAXEXTENTS UNLIMITED BUFFER_POOL DEFAULT),
      PARTITION L_06MAY11 VALUES LESS THAN (TO_DATE(' 2011-05-06 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
        LOGGING NOCOMPRESS TABLESPACE DAT01 PCTFREE 10 INITRANS 1 MAXTRANS 255
        STORAGE (INITIAL 64K NEXT 1M MINEXTENTS 1 MAXEXTENTS UNLIMITED BUFFER_POOL DEFAULT),
      PARTITION L_11MAY11 VALUES LESS THAN (TO_DATE(' 2011-05-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
        LOGGING NOCOMPRESS TABLESPACE DAT01 PCTFREE 10 INITRANS 1 MAXTRANS 255
        STORAGE (INITIAL 64K NEXT 1M MINEXTENTS 1 MAXEXTENTS UNLIMITED BUFFER_POOL DEFAULT),
      PARTITION L_16MAY11 VALUES LESS THAN (TO_DATE(' 2011-05-16 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
        LOGGING NOCOMPRESS TABLESPACE DAT01 PCTFREE 10 INITRANS 1 MAXTRANS 255
        STORAGE (INITIAL 64K NEXT 1M MINEXTENTS 1 MAXEXTENTS UNLIMITED BUFFER_POOL DEFAULT),
      PARTITION L_21MAY11 VALUES LESS THAN (TO_DATE(' 2011-05-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
        LOGGING NOCOMPRESS TABLESPACE DAT01 PCTFREE 10 INITRANS 1 MAXTRANS 255
        STORAGE (INITIAL 64K NEXT 1M MINEXTENTS 1 MAXEXTENTS UNLIMITED BUFFER_POOL DEFAULT),
      PARTITION L_26MAY11 VALUES LESS THAN (TO_DATE(' 2011-05-26 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
        LOGGING NOCOMPRESS TABLESPACE DAT01 PCTFREE 10 INITRANS 1 MAXTRANS 255
        STORAGE (INITIAL 64K NEXT 1M MINEXTENTS 1 MAXEXTENTS UNLIMITED BUFFER_POOL DEFAULT),
      PARTITION L_01JUN11 VALUES LESS THAN (TO_DATE(' 2011-06-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
        LOGGING NOCOMPRESS TABLESPACE DAT01 PCTFREE 10 INITRANS 1 MAXTRANS 255
        STORAGE (INITIAL 64K NEXT 1M MINEXTENTS 1 MAXEXTENTS UNLIMITED BUFFER_POOL DEFAULT)
    )
    NOCACHE NOPARALLEL MONITORING
    /
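
    To drop every partition older than 90 days, rather than only the one named for SYSDATE-90, one option is to drive the script off the dictionary instead of computing a single name. A sketch, assuming every partition name follows the L_DDMONYY pattern shown above (the format mask may need an NLS_DATE_LANGUAGE adjustment):

        SELECT 'alter table ' || table_owner || '.' || table_name
               || ' drop partition ' || partition_name
               || ' update global indexes;'
        FROM   dba_tab_partitions
        WHERE  table_owner = 'ABC'
        AND    table_name IN ('TABLE1', 'TABLE2')
        AND    TO_DATE(SUBSTR(partition_name, 3), 'DDMONRR') < SYSDATE - 90;  -- strip the 'L_' prefix

    This sidesteps parsing the LONG high_value column and generates drops for all qualifying partitions in one pass.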

  • How to find out when I am eligible for an upgrade from my phone?

    I am not the primary account owner, my mom is. I live 300 miles from her and she doesn't know how to check it. I know there is a way to find out from my phone when I am eligible for an upgrade to a new phone. I have done it in the past, but can't figure it out anymore. Can someone please help me and tell me what to press to find this out? I know it is # or * and then 3 or 4 numbers.
    THANKS!!!!

    Dial #UPG (#874) to get your upgrade date. You'll be sent a free text message with the date.

  • How to come up with a magic number for any table that returns more than 32KB?

    I am in a unique situation where I am trying to retrieve values from multiple tables and publish them as XML output. The problem is that, depending on the conditions, a few tables could return more than 32 KB of data and a few less than 32 KB. Less than 32 KB is not an issue, as XML generation is smooth; the minute it exceeds 32 KB, a run-time error is raised. I am wondering whether there is any way to break the result the moment it exceeds 32 KB, say, if the result is 35 KB, split it into 32 KB and 3 KB pieces, and then pass that data on to be published as XML output. This is not just for one table, but for all the tables called in the function.
    Is there any way? I am unable to come up with any ideas, nor have I done anything so complex from a production-support standpoint. I would appreciate it if someone could guide me on this.
    The way it is set up is as follows:
    I have a table called ctn_pub_cntl:
    CREATE TABLE CTNAPP.ctn_pub_cntl
    (ctn_pub_cntl_id    NUMBER(18)
    ,table_name         VARCHAR2(50)
    ,last_pub_tms       DATE
    ,queue_name         VARCHAR2(50)
    ,dest_system        VARCHAR2(50)
    ,frequency          NUMBER(6)
    ,status             VARCHAR2(8)
    ,record_create_tms  DATE
    ,create_user_id     VARCHAR2(8)
    ,record_update_tms  DATE
    ,update_user_id     VARCHAR2(8)
    ,CONSTRAINT ctn_pub_cntl_id_pk PRIMARY KEY (ctn_pub_cntl_id)
    );
    Data for this is:
    INSERT INTO CTNAPP.ctn_pub_cntl
    (ctn_pub_cntl_id, table_name, last_pub_tms, queue_name, dest_system, frequency)
    VALUES (CTNAPP_SQNC.nextval, 'TRKFCG_SBDVSN',
            TO_DATE('10/2/2004 10:17:44PM','MM/DD/YYYY HH12:MI:SSPM'),
            'UT.TSD.TSZ601.UNP', 'SAP', 15);
    INSERT INTO CTNAPP.ctn_pub_cntl
    (ctn_pub_cntl_id, table_name, last_pub_tms, queue_name, dest_system, frequency)
    VALUES (CTNAPP_SQNC.nextval, 'TRKFCG_TRACK_SGMNT_DN',
            TO_DATE('02/06/2015 9:50:00AM','MM/DD/YYYY HH12:MI:SSPM'),
            'UT.TSD.WRKORD.UNP', 'SAP', 30);
    INSERT INTO CTNAPP.ctn_pub_cntl
    (ctn_pub_cntl_id, table_name, last_pub_tms, queue_name, dest_system, frequency)
    VALUES (CTNAPP_SQNC.nextval, 'TRKFCG_FXPLA_TRACK_LCTN_DN',
            TO_DATE('10/2/2004 10:17:44PM','MM/DD/YYYY HH12:MI:SSPM'),
            'UT.TSD.YRDPLN.INPUT', 'SAP', 30);
    INSERT INTO CTNAPP.ctn_pub_cntl
    (ctn_pub_cntl_id, table_name, last_pub_tms, queue_name, dest_system, frequency)
    VALUES (CTNAPP_SQNC.nextval, 'TRKFCG_FXPLA_TRACK_LCTN2_DN',
            TO_DATE('02/06/2015 9:50:00AM','MM/DD/YYYY HH12:MI:SSPM'),
            'UT.TSD.TSZ601.UNP', 'SAP', 120);
    INSERT INTO CTNAPP.ctn_pub_cntl
    (ctn_pub_cntl_id, table_name, last_pub_tms, queue_name, dest_system, frequency)
    VALUES (CTNAPP_SQNC.nextval, 'TRKFCG_FXPLA_TRACK_LCTN2_DN',
            TO_DATE('04/23/2015 11:50:00PM','MM/DD/YYYY HH12:MI:SSPM'),
            'UT.TSD.YRDPLN.INPUT', 'SAP', 10);
    INSERT INTO CTNAPP.ctn_pub_cntl
    (ctn_pub_cntl_id, table_name, last_pub_tms, queue_name, dest_system, frequency)
    VALUES (CTNAPP_SQNC.nextval, 'TRKFCG_FIXED_PLANT_ASSET',
            TO_DATE('04/23/2015 11:50:00AM','MM/DD/YYYY HH12:MI:SSPM'),
            'UT.TSD.WRKORD.UNP', 'SAP', 10);
    INSERT INTO CTNAPP.ctn_pub_cntl
    (ctn_pub_cntl_id, table_name, last_pub_tms, queue_name, dest_system, frequency)
    VALUES (CTNAPP_SQNC.nextval, 'TRKFCG_OPRLMT',
            TO_DATE('03/26/2015 7:50:00AM','MM/DD/YYYY HH12:MI:SSPM'),
            'UT.TSD.WRKORD.UNP', 'SAP', 30);
    INSERT INTO CTNAPP.ctn_pub_cntl
    (ctn_pub_cntl_id, table_name, last_pub_tms, queue_name, dest_system, frequency)
    VALUES (CTNAPP_SQNC.nextval, 'TRKFCG_OPRLMT_SGMNT_DN',
            TO_DATE('03/28/2015 12:50:00AM','MM/DD/YYYY HH12:MI:SSPM'),
            'UT.TSD.WRKORD.UNP', 'SAP', 30);
    COMMIT;
    Once the above data is inserted and committed, I created a function in a package:
    CREATE OR REPLACE PACKAGE CTNAPP.CTN_PUB_CNTL_EXTRACT_PUBLISH
    IS
      TYPE tNameTyp IS TABLE OF ctn_pub_cntl.table_name%TYPE INDEX BY BINARY_INTEGER;
      g_tName tNameTyp;
      TYPE tClobTyp IS TABLE OF CLOB INDEX BY BINARY_INTEGER;
      g_tClob tClobTyp;
      FUNCTION GetCtnData(p_nInCtnPubCntlID IN ctn_pub_cntl.ctn_pub_cntl_id%TYPE, p_count OUT NUMBER) RETURN tClobTyp;
    END CTN_PUB_CNTL_EXTRACT_PUBLISH;
    --Package body
    CREATE OR REPLACE PACKAGE BODY CTNAPP.CTN_PUB_CNTL_EXTRACT_PUBLISH
    IS
         doc           xmldom.DOMDocument;
         main_node     xmldom.DOMNode;
         root_node     xmldom.DOMNode;
         root_elmt     xmldom.DOMElement;
         child_node    xmldom.DOMNode;
         child_elmt    xmldom.DOMElement;
         leaf_node     xmldom.DOMNode;
         elmt_value    xmldom.DOMText;
         tbl_node      xmldom.DOMNode;
         table_data    XMLDOM.DOMDOCUMENTFRAGMENT;
         l_ctx         DBMS_XMLGEN.CTXHANDLE;
         vStrSqlQuery  VARCHAR2(32767);
         l_clob        tClobTyp;
         l_xmltype     XMLTYPE;
    --Local Procedure to build XML header    
    PROCEDURE BuildCPRHeader IS
      BEGIN
        child_elmt := xmldom.createElement(doc, 'PUBLISH_HEADER');
        child_node  := xmldom.appendChild (root_node, xmldom.makeNode (child_elmt));
        child_elmt := xmldom.createElement (doc, 'SOURCE_APLCTN_ID');
        elmt_value := xmldom.createTextNode (doc, 'CTN');
        leaf_node  := xmldom.appendChild (child_node, xmldom.makeNode (child_elmt));
        leaf_node  := xmldom.appendChild (leaf_node, xmldom.makeNode (elmt_value));
        child_elmt := xmldom.createElement (doc, 'SOURCE_PRGRM_ID');
        elmt_value := xmldom.createTextNode (doc, 'VALUE');
        leaf_node  := xmldom.appendChild (child_node, xmldom.makeNode (child_elmt));
        leaf_node  := xmldom.appendChild (leaf_node, xmldom.makeNode (elmt_value));
        child_elmt := xmldom.createElement (doc, 'SOURCE_CMPNT_ID');
        elmt_value := xmldom.createTextNode (doc, 'VALUE');
        leaf_node  := xmldom.appendChild (child_node, xmldom.makeNode (child_elmt));
        leaf_node  := xmldom.appendChild (leaf_node, xmldom.makeNode (elmt_value));
        child_elmt := xmldom.createElement (doc, 'PUBLISH_TMS');
        elmt_value := xmldom.createTextNode (doc, TO_CHAR(SYSDATE, 'YYYY-MM-DD HH24:MI:SS'));
        leaf_node  := xmldom.appendChild (child_node, xmldom.makeNode (child_elmt));
        leaf_node  := xmldom.appendChild (leaf_node, xmldom.makeNode (elmt_value));
    END BuildCPRHeader;
    --Get table data based on table name
    FUNCTION GetCtnData(p_nInCtnPubCntlID IN CTN_PUB_CNTL.ctn_pub_cntl_id%TYPE,p_Count OUT NUMBER) RETURN tClobTyp IS
        vTblName      ctn_pub_cntl.table_name%TYPE;
        vLastPubTms   ctn_pub_cntl.last_pub_tms%TYPE;
    BEGIN
                g_vProcedureName:='GetCtnData';   
                g_vTableName:='CTN_PUB_CNTL';
            SELECT table_name,last_pub_tms
            INTO   vTblName, vLastPubTms
            FROM   CTN_PUB_CNTL
            WHERE  ctn_pub_cntl_id=p_nInCtnPubCntlID;
        -- Start the XML Message generation
            doc := xmldom.newDOMDocument;
            main_node := xmldom.makeNode(doc);
            root_elmt := xmldom.createElement(doc, 'PUBLISH');
            root_node := xmldom.appendChild(main_node, xmldom.makeNode(root_elmt));
          --Append Table Data as Publish Header
            BuildCPRHeader;
          --Append Table Data as Publish Body
           child_elmt := xmldom.createElement(doc, 'PUBLISH_BODY');
           leaf_node  := xmldom.appendChild (root_node, xmldom.makeNode(child_elmt));
           DBMS_SESSION.SET_NLS('NLS_DATE_FORMAT','''YYYY:MM:DD HH24:MI:SS''');
           vStrSqlQuery := 'SELECT * FROM ' || vTblName
                          || ' WHERE record_update_tms <= TO_DATE(''' || TO_CHAR(vLastPubTms, 'MM/DD/YYYY HH24:MI:SS') || ''', ''MM/DD/YYYY HH24:MI:SS'') ' ;
                        --  ||  ' AND rownum < 16'
          DBMS_OUTPUT.PUT_LINE(vStrSqlQuery);
           l_ctx  := DBMS_XMLGEN.NEWCONTEXT(vStrSqlQuery);
          DBMS_XMLGEN.SETNULLHANDLING(l_ctx, 0);
          DBMS_XMLGEN.SETROWSETTAG(l_ctx, vTblName);
          -- Append Table Data as XML Fragment
          l_clob(1):=DBMS_XMLGEN.GETXML(l_ctx); 
          elmt_value := xmldom.createTextNode (doc, l_clob(1));
         leaf_node  := xmldom.appendChild (leaf_node, xmldom.makeNode (elmt_value));
         xmldom.writeToBuffer (doc, l_clob(1));
         l_clob(1):=REPLACE(l_clob(1),'&lt;?xml version=&quot;1.0&quot;?&gt;', NULL);
         l_clob(1):=REPLACE(l_clob(1),'&lt;', '<');
         l_clob(1):=REPLACE(l_clob(1),'&gt;', '>');
         DBMS_OUTPUT.put_line('Answer is ' || l_clob(1));
         RETURN l_clob;
         EXCEPTION
            WHEN NO_DATA_FOUND THEN
            DBMS_OUTPUT.put_line('There is no data with' || SQLERRM);
            g_vProcedureName:='GetCtnData';
            g_vTableName:='CTN_PUB_CNTL';
            g_vErrorMessage:=SQLERRM|| g_vErrorMessage;
            g_nSqlCd:=SQLCODE;
            ctn_log_error('ERROR',g_vErrorMessage,'SELECT',g_nSqlCd,p_nInCtnPubCntlID,g_vPackageName,g_vProcedureName,g_vTableName);
            WHEN OTHERS THEN
           DBMS_OUTPUT.PUT_LINE('ERROR : ' || SQLERRM);
           ctn_log_error('ERROR',g_vErrorMessage,'OTHERS',g_nSqlCd,p_nInCtnPubCntlID,g_vPackageName,g_vProcedureName,g_vTableName);
    END GetCtnData;
    PROCEDURE printClob (result IN OUT NOCOPY CLOB) IS
        xmlstr   VARCHAR2 (32767);
        line     VARCHAR2 (2000);
    BEGIN
        xmlstr := DBMS_LOB.SUBSTR (result, 32767);
        LOOP
           EXIT WHEN xmlstr IS NULL;
           line := SUBSTR (xmlstr, 1, INSTR (xmlstr, CHR (10)) - 1);
           DBMS_OUTPUT.put_line (line);
           xmlstr := SUBSTR (xmlstr, INSTR (xmlstr, CHR (10)) + 1);
        END LOOP;
    END printClob;
    END CTN_PUB_CNTL_EXTRACT_PUBLISH;
    If you notice my query:
    vStrSqlQuery := 'SELECT * FROM ' || vTblName
                 || ' WHERE record_update_tms <= TO_DATE(''' || TO_CHAR(vLastPubTms, 'MM/DD/YYYY HH24:MI:SS') || ''', ''MM/DD/YYYY HH24:MI:SS'') '
              -- || ' AND rownum < 16'
                 ;
    The minute I comment out
    || ' AND rownum < 16'
    it throws an error, because the query then returns around 600 rows and all of them need to be published as XML. The tragedy is that there is a C program in between as well: C calls my packaged functions, does all the processing, and then takes the results back. Obviously C does not recognise a CLOB, so somewhere in the process I have to convert the CLOB to a VARCHAR, or use a VARCHAR array instead of a CLOB as the return type. This is my challenge.
    If anyone can help me find the required magic number, along with a brief how-to, I would appreciate it. Many thanks in advance.

    Not sure I understand which part is failing.
    Is it the C program calling your packaged function? Or does the error occur in the PL/SQL code, in which case you should be able to pinpoint where it's wrong?
    A few comments :
    1) Using DOM to build XML out of relational data? What for? Use SQL/XML functions.
    2) Giving sample data is usually great, but it's not useful here since we can't run your code. We're missing the base tables.
    3) This is wrong :
    vStrSqlQuery := 'SELECT * FROM ' || vTblName
                 || ' WHERE record_update_tms <= TO_DATE(''' || TO_CHAR(vLastPubTms, 'MM/DD/YYYY HH24:MI:SS') || ''', ''MM/DD/YYYY HH24:MI:SS'') ';
    A bind variable should be used here for the date.
    4) This is wrong :
    elmt_value := xmldom.createTextNode (doc, l_clob(1));
    createTextNode does not support CLOB so it will fail as soon as the CLOB you're trying to pass exceeds 32k.
    Maybe that's the problem you're referring to?
    5) This is most wrong :
         l_clob(1):=REPLACE(l_clob(1),'&lt;?xml version=&quot;1.0&quot;?&gt;', NULL); 
         l_clob(1):=REPLACE(l_clob(1),'&lt;', '<'); 
         l_clob(1):=REPLACE(l_clob(1),'&gt;', '>'); 
    I understand what you're trying to do but it's not the correct way.
    You're trying to convert a text() node representing XML in escaped form back to XML content.
    The problem is that there are other things to take care of besides just '&lt;' and '&gt;'.
    If you want to insert an XML node into an existing document, treat that as an XML node, not as a string.
    Anyway,
    Anyone that can help me to find out the required magic number
    That would be a bad idea. Fix what needs to be fixed.
    And please clearly state which part is failing : the C program or the PL/SQL code?
    I'd vote for PL/SQL, as pointed out in [4].
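
    To illustrate point 1, here is a minimal sketch of the SQL/XML approach (the element names are taken from the DOM code above; the row-level content and the bind variable are hypothetical). XMLSERIALIZE returns a CLOB, so the 32K ceiling never comes into play:

        SELECT XMLSERIALIZE(DOCUMENT
                 XMLELEMENT("PUBLISH",
                   XMLELEMENT("PUBLISH_HEADER",
                     XMLELEMENT("SOURCE_APLCTN_ID", 'CTN'),
                     XMLELEMENT("PUBLISH_TMS", TO_CHAR(SYSDATE, 'YYYY-MM-DD HH24:MI:SS'))),
                   XMLELEMENT("PUBLISH_BODY",
                     (SELECT XMLAGG(XMLELEMENT("CTN_PUB_CNTL",
                                      XMLFOREST(c.table_name, c.queue_name, c.dest_system)))
                      FROM   ctn_pub_cntl c
                      WHERE  c.last_pub_tms <= :cutoff_tms)))   -- bind variable, per point 3
               AS CLOB INDENT)
        FROM dual;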

  • How to Maintain Surrogate Key Mapping (cross-reference) for Dimension Tables

    Hi,
    What would be the best approach on ODI to implement the Surrogate Key Mapping Table on the STG layer according to Kimball's technique:
    "Surrogate key mapping tables are designed to map natural keys from the disparate source systems to their master data warehouse surrogate key. Mapping tables are an efficient way to maintain surrogate keys in your data warehouse. These compact tables are designed for high-speed processing. Mapping tables contain only the most current value of a surrogate key— used to populate a dimension—and the natural key from the source system. Since the same dimension can have many sources, a mapping table contains a natural key column for each of its sources.
    Mapping tables can be equally effective if they are stored in a database or on the file system. The advantage of using a database for mapping tables is that you can utilize the database sequence generator to create new surrogate keys. And also, when indexed properly, mapping tables in a database are very efficient during key value lookups."
    We have a requirement to implement cross-reference mapping tables with natural and surrogate keys for each dimension table. These mapping tables will be populated automatically (inserts only) during the E-LT execution, right after inserting into the dimension table.
    Someone have any idea on how to implement this on ODI?
    Thanks,
    Danilo

    Hi,
    first of all, please avoid bolding everything. That said, according to Kimball (if I remember well) this is a 1:1 mapping, so there is no surrogate key logic beyond the lookup itself.
    After that, personally you could use a Lookup Table
    http://www.odigurus.com/2012/02/lookup-transformation-using-odi.html
    or make a simple outer join filtering by your "Active_Flag" column (remember that this filter needs to be inside your outer join).
    Let us know
    Francesco
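
    For concreteness, the Kimball excerpt above boils down to a table shaped like this sketch (all names are hypothetical, with one natural-key column per source system and the surrogate key drawn from a database sequence):

        CREATE TABLE stg_customer_key_map
        ( customer_sk      NUMBER       NOT NULL  -- warehouse surrogate key
        , src_erp_cust_id  VARCHAR2(30)           -- natural key from source system 1
        , src_crm_cust_id  VARCHAR2(30)           -- natural key from source system 2
        , CONSTRAINT customer_key_map_pk PRIMARY KEY (customer_sk)
        );

    In ODI this would typically be loaded by an insert-only interface (e.g. IKM SQL Control Append) that runs immediately after the dimension load, as the requirement describes.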

  • ALV: how to save context space for large tables?

    Dear colleagues,
    We are displaying an ALV table that is quite large. Therefore, the corresponding DDIC structure and the WD context are large. This has an impact on performance and on the load size of the program. Now we will enhance the ALV table again.
    Example: for an icon and its explanatory tooltip displayed in the ALV, context fields like "SOURCE_FIELDNAME" are required for the tooltip as well as for the icon, and they need a lot of characters for each tooltip and icon.
    Question: do you have an idea how to save context space for those ALV fields?
    Best regards,
    Christian

    >We are displaying an ALV table that is quite large.
    Do you mean quite large as in a large number of columns, or a large number of rows (or both)? I assume that the problem is probably more related to a large number of rows. For very large tables, you should consider using the Table UI element instead of the ALV. For very large tables you can even use a technique called context paging to keep only a subset of the data in the context memory at a time. Here is a recent blog that I created on the topic, with demonstrations of different techniques for table sharing, shared memory, and context paging when dealing with large tables in Web Dynpro ABAP:
    Web Dynpro ABAP: How Fast Can You Consume 1 Million Rows?

  • Suggestions for table partitioning for existing tables

    I have a table as below. It contains huge data and has many child tables. I am planning to do 'Reference Partitioning' on it.
    create table PROMOTION_DTL
    ( PROMO_ID              NUMBER(10) not null,
      EVENT                 VARCHAR2(6),
      PROMO_START_DATE      TIMESTAMP(6),
      PROMO_END_DATE        TIMESTAMP(6),
      PROMO_COST_START_DATE TIMESTAMP(6),
      EVENT_CUT_OFF_DATE    TIMESTAMP(6),
      REMARKS               VARCHAR2(75),
      CREATE_BY             VARCHAR2(50),
      CREATE_DATE           TIMESTAMP(6),
      UPDATE_BY             VARCHAR2(50),
      UPDATE_DATE           TIMESTAMP(6)
    );
    alter table PROMOTION_DTL
      add constraint PROMOTION_DTL_PK primary key (PROMO_ID);
    alter table PROMOTION_DTL
      add constraint PROMO_EVENT_FK foreign key (EVENT)
      references SP_PROMO_EVENT_MST (EVENT);
    -- Create/Recreate indexes
    create index PROMOTION_IDX1 on PROMOTION_DTL (PROMO_ID, EVENT);
    create unique index PROMOTION_PK on PROMOTION_DTL (PROMO_ID);
    -- Grant/Revoke object privileges
    grant select, insert, update, delete on PROMOTION_DTL to SCHEMA_RW_ROLE;
    I would like to partition this table. Most of the queries contain the following conditions:
    promo_end_date >= SYSDATE
    and
    (event = :input_event OR
     (:input_Start_Date <= promo_end_date
      AND promo_start_date <= :input_End_Date))
    At any time a promotion can be closed by updating PROMO_END_DATE.
    Interval partitioning on PROMO_END_DATE is not possible, as PROMO_END_DATE is a nullable and updatable field.
    I am new to table partitioning.
    Any suggestions are welcome...

    DO NOT POST THE SAME QUESTION IN MULTIPLE FORUMS PLEASE!
    Suggestions for table partition of existing tables
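
    For anyone landing here: the reference partitioning the OP mentions requires the parent table to be partitioned first and the child's foreign key to be NOT NULL. A minimal sketch with a hypothetical child table (PROMOTION_DTL itself would need a partitioning clause added, e.g. by range on PROMO_START_DATE, since the nullable, updatable PROMO_END_DATE is ruled out):

        create table PROMOTION_LINE
        ( LINE_ID  NUMBER(10) not null,
          PROMO_ID NUMBER(10) not null,
          constraint PROMOTION_LINE_FK foreign key (PROMO_ID)
            references PROMOTION_DTL (PROMO_ID)
        )
        partition by reference (PROMOTION_LINE_FK);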

  • "Invalid segment" when shrinking a partitioned table

    I'm encountering the following error when trying to shrink space compact for a partitioned table. Would you know how I can go about this?
    I need to make it work.
    SQL> alter table PS_SGSN_1_MONTH modify partition P_201304 shrink space compact;
    alter table PS_SGSN_1_MONTH modify partition P_201304 shrink space compact
    ERROR at line 1:
    ORA-10635: Invalid segment or tablespace type
    My Oracle DB version is 11gR2

    Yes, that would be the right thing to do: check how and where the MV is being used and what downtime you can get to fix this. Check whether you can change the materialized view to be created based on the primary key instead of rowid.
    The steps would be:
    1. drop the related materialized views
    2. drop the materialized view logs
    3. shrink the tables and indexes
    4. recreate the materialized view logs
    5. recreate the materialized views
    Also, there is a bug with the primary key as well. Check this:
    Bug 13709220 - ORA-10663 when shrinking a master table of an MVIEW with primary key (Doc ID 13709220.8)
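
    As a sketch of steps 1-5 against the table from the question (the MV and log names are hypothetical; note that shrink also requires row movement to be enabled):

        DROP MATERIALIZED VIEW sgsn_month_mv;                                  -- step 1
        DROP MATERIALIZED VIEW LOG ON ps_sgsn_1_month;                         -- step 2
        ALTER TABLE ps_sgsn_1_month ENABLE ROW MOVEMENT;
        ALTER TABLE ps_sgsn_1_month MODIFY PARTITION p_201304 SHRINK SPACE;    -- step 3
        CREATE MATERIALIZED VIEW LOG ON ps_sgsn_1_month WITH PRIMARY KEY;      -- step 4
        CREATE MATERIALIZED VIEW sgsn_month_mv
          REFRESH FAST WITH PRIMARY KEY
          AS SELECT * FROM ps_sgsn_1_month;                                    -- step 5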

  • Error when trying to partition for Bootcamp

    Evening everyone,
    I use parallels at the moment but have a piece of software which isn't quite behaving itself. As I have just upgraded to leopard, I thought I would see if setting up Bootcamp and running it in there would solve the problem. I freed up plenty of disk space but when trying to create a 10GB partition in the setup assistant I got this error:
    'The disk cannot be partitioned because some files cannot be moved. Back up the disk and use Disk Utility to format it as a single Mac OS Extended (journaled) volume. Restore your information to the disk and try using Boot Camp Assistant again.'
    I don't know much about the technical side of my computer - partitioning and formatting etc so that message sounds all very scary to me. Is it saying I need to wipe my hard drive and re-format it? It is already formatted in Mac OS Extended (Journaled). If so, can I do that from my Time Machine backups? it sounds like a really drastic action. Also, could this be anything to do with my current Parallels software?
    Thanks very much for your time, any help would be much appreciated!

    when you say "plenty of disk space" how much exactly is that?
    that message indicates that Boot Camp Assistant couldn't find a big enough chunk of contiguous free space to create a Boot Camp partition. this happens due to disk fragmentation.
    at the minimum you should boot from the Leopard install DVD and repair the hard drive (repair disk, not permissions).
    If you still want to install Boot Camp you need to defrag your hard drive. there are several ways to do it. you can use 3rd party software like iDefrag. You can clone your disk to an external drive using a 3rd party cloner like Carbon Copy Cloner or SuperDuper, then boot from the clone and clone it back to the main drive. Or, if you have Time Machine, you can restore your system from a TM backup; that will also defrag your hard drive. To do that, boot from the install DVD and select "Restore System From Backup" from the Utilities menu at the top. that seems to be the suggestion the message you saw makes.
    also, keep in mind that you need to keep plenty of free space (>15%) on your OS X partition to avoid disk fragmentation problems.

  • How to fix crashing when creating PDFs, and what is the procedure for cross-references?

    I'm using Frame 8. When I try to PDF my book, Frame crashes. I have tried PDFing the individual chapters, but that didn't work.
    Also, I'm trying to create either a cross-reference or a hyperlink. I have a paragraph that says "For more information, see the Setup section in Chapter 1." The Setup section in Chapter 1 is a Heading 4. I've tried using the instructions in the Frame help, but I don't understand them, and I don't know whether I need a hyperlink or a cross-reference. Can I get a simpler explanation and how-to?
    Thanks!

    Several things could be causing the crashing during the PDF creation process.
    1. Have you applied the MS hotfix mentioned here?
    http://blogs.adobe.com/techcomm/2009/07/repost_hotfix_for_framemaker.html
    2. In your PDF Setup, make sure that you have Tagged PDF turned off and in the Links panel enable "Create Named Destinations for All Paragraphs".
    Try it again and also make sure that you have applied all of the FM8 patches in sequence.
    You can use either cross-references or hypertext links to create the jumps, though cross-refs are easier to do and maintain.
    1. To insert a cross-ref to the Chapter 1 Setup section, you first would need to have Chapter 1 open along with the document that you are currently working on.
    2. At the location where you want to insert the cross-reference, select the Special > Cross-Reference menu option.
    3. The first time you use a cross-reference, you have to create a format for it. In your case, "For more information, see the" and "section in" are constant strings, and the section name "Setup" and "Chapter 1" are the variables.
        a) Click on the "Edit Format" button in the Reference section towards the bottom of the window. This will open a new dialogue window allowing you to create a format for use.
        b) In this window, give the format a name so that you can easily identify it, such as "More info in Chapter section".
        c) Then you have to enter on the "Definition" line what you want the cross-ref to say, which could be (without quotes) "For more information, see the <$paratext> section in <$paranum[Chapter]>." The items in the angled brackets are building blocks used by FM to pick up content from the specified destination when you make the cross-ref. The first one, <$paratext>, grabs the content of the paragraph that you point to (in this case the "Setup" section heading). The <$paranum[Chapter]> item grabs the autonumber value from the Chapter tag (or whichever one you use for chapter numbering at the start of each chapter).
        d) Once you're satisfied with your format, click on the Add and then Done buttons. This will save this format in the cross-ref catalog.
    4. To actually insert the cross-ref, select the target document (e.g. Chapter 1) from the drop-down list for "Document:".
    5. Then under the Source Type (the down arrow below the Document), make sure that you have Paragraphs selected.
    6. In the left column you will see the paratags available in the Chapter 1 document.
    7. Pick the paratag type that contains the Setup section title (Heading 4?).
    8. In the right-hand column you should then see all of the Heading 4 tags in Chapter 1.
    9. Select Setup in this column.
    10. Check that the Format line shows the one that you created earlier ("More info in Chapter section") and then click on the Insert button.
    You're done.
    If you don't like the formatting or it's picking up the wrong content, then open the cross-ref dialogue again and edit the format to get what you want.
