Retrieve large-volume XML stored in a CLOB

If I store large XML documents
in a database CLOB field,
how can I retrieve the corresponding XML file
efficiently?

The Dynamic News sample app shows some techniques for working with XML documents and CLOB fields.
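One technique that applies generally: stream the CLOB through a character stream instead of materializing the whole document in memory, so retrieval cost stays flat however large the XML grows. A minimal JDBC sketch, assuming a hypothetical XML_DOCS table with an ID column and a DOC CLOB column:

    import java.io.Reader;
    import java.io.Writer;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class ClobToFile {
        // Streams one XML document out of a CLOB column to a file in 8 KB chunks,
        // so memory use stays flat regardless of document size.
        public static void export(Connection conn, long id, String path) throws Exception {
            try (PreparedStatement ps = conn.prepareStatement(
                    "SELECT doc FROM xml_docs WHERE id = ?")) {   // table/column names hypothetical
                ps.setLong(1, id);
                try (ResultSet rs = ps.executeQuery()) {
                    if (rs.next()) {
                        try (Reader in = rs.getClob(1).getCharacterStream();
                             Writer out = Files.newBufferedWriter(
                                     Paths.get(path), StandardCharsets.UTF_8)) {
                            char[] buf = new char[8192];
                            for (int n; (n = in.read(buf)) != -1; ) {
                                out.write(buf, 0, n);             // copy chunk by chunk
                            }
                        }
                    }
                }
            }
        }
    }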
Regards,
-rh

Similar Messages

  • How to retrieve large data from the database

    In my program I want to operate on the data retrieved from the database, but there are too many rows in the ResultSet, so when I try to get the ResultSet from the database I get a "java.lang.OutOfMemoryError". The ResultSet contains two million rows, so I want to know whether there are methods to deal with this problem.
    Can anyone give me some tips or recommend some papers and books?
    thanks!!!!

    The program is developed for a data warehouse and, as you know, a data warehouse holds a huge amount of data, so I have to deal with very large ResultSets in my program. This code is an example of my problem:
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;
    public class Untitled1 {
        String user = "";
        String password = "";
        public void createTable() {
            try {
                Class.forName("com.sybase.jdbc3.jdbc.SybDriver");
                System.out.println("Good to go");
                String url = "jdbc:sybase:Tds:59.64.137.240:5000/TJ";
                Connection conn = DriverManager.getConnection(url, user, password);
                System.out.println("connect successfully!");
                String sqlsentence2 = "SELECT DISTINCT caller FROM upcdr";
                PreparedStatement st2 = conn.prepareStatement(sqlsentence2);
                PreparedStatement st3 = conn.prepareStatement("INSERT INTO callerTable VALUES(?)");
                System.out.println("createstatement successfully!");
    //            Statement st1 = conn.createStatement();
    //            String sqlsentence1 = "CREATE TABLE callerTable (caller CHAR(18))";
    //            st1.executeUpdate(sqlsentence1);
    //            st1.close();
                ResultSet rs = st2.executeQuery();
                while (rs.next()) {
                    st3.setString(1, rs.getString("caller"));
                    st3.executeUpdate();
                    System.out.println(rs.getString("caller"));
                }                  // close the loop before closing the statements
                rs.close();
                st2.close();
                st3.close();
                conn.close();
            } catch (SQLException e) {
                e.printStackTrace();
                System.out.println(e.getMessage());
                System.out.println(e.getSQLState());
                System.out.println(e.getErrorCode());
            } catch (ClassNotFoundException e) {
                e.printStackTrace();
            }
        }
        public static void main(String[] args) {
            Untitled1 cct = new Untitled1();
            cct.createTable();
        }
    }
    Here, the table "upcdr" is very large: it has two million rows. When I run this program, the error happens. Someone told me that if I use a cursor, it will retrieve one row from the database at a time, so it will not produce a very large ResultSet and the memory will not be used up. But I am not familiar with that aspect, so I would appreciate it if anyone could give me some advice.
    thanks!!!!
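    A minimal sketch of the cursor-style approach suggested above: keep the ResultSet forward-only, give the driver a fetch-size hint so rows arrive in small chunks instead of all two million at once, and batch the inserts. The connection details are from the post; whether setFetchSize truly streams is driver-dependent, so treat that as an assumption to verify against the Sybase documentation.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class StreamCallers {
        public static void main(String[] args) throws Exception {
            Class.forName("com.sybase.jdbc3.jdbc.SybDriver");
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:sybase:Tds:59.64.137.240:5000/TJ", "user", "password")) {
                try (PreparedStatement sel = conn.prepareStatement(
                         "SELECT DISTINCT caller FROM upcdr",
                         ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
                     PreparedStatement ins = conn.prepareStatement(
                         "INSERT INTO callerTable VALUES(?)")) {
                    sel.setFetchSize(500);        // hint: fetch ~500 rows per round trip
                    int pending = 0;
                    try (ResultSet rs = sel.executeQuery()) {
                        while (rs.next()) {
                            ins.setString(1, rs.getString("caller"));
                            ins.addBatch();       // batch inserts instead of one update per row
                            if (++pending % 1000 == 0) {
                                ins.executeBatch();
                            }
                        }
                    }
                    ins.executeBatch();           // flush the final partial batch
                }
            }
        }
    }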

  • Create a GPT partition table and format with a large volume (solved)

    Hello,
    I'm having trouble creating a GPT partition table for a large volume (~6T). It is a RAID 5 (hardware) with 3 hard disk drives having a size of 3T each (thus the resulting 6T volume).
    I tried creating a GPT partition table with gdisk but it just fails at creating it, stopping here (I've let it run for like 3 hours...):
    Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING
    PARTITIONS!!
    Do you want to proceed? (Y/N): y
    OK; writing new GUID partition table (GPT) to /dev/md126.
    I also tried with parted but I get the same result. Having no luck there, I created a GPT partition table from Windows 7 with 2 NTFS partitions (15G and the rest of the space for the other), and it worked just fine. I then tried to format the 15G partition as ext4 but, as with gdisk, mkfs.ext4 just never stops.
    Some information:
    fdisk -l
    Disk /dev/sda: 256.1 GB, 256060514304 bytes, 500118192 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk label type: dos
    Disk identifier: 0xd9a6c0f5
    Device Boot Start End Blocks Id System
    /dev/sda1 * 2048 104861695 52429824 83 Linux
    /dev/sda2 104861696 466567167 180852736 83 Linux
    /dev/sda3 466567168 500117503 16775168 82 Linux swap / Solaris
    Disk /dev/sdb: 3000.6 GB, 3000592982016 bytes, 5860533168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disk label type: dos
    Disk identifier: 0x00000000
    Device Boot Start End Blocks Id System
    /dev/sdb1 1 4294967295 2147483647+ ee GPT
    Partition 1 does not start on physical sector boundary.
    Disk /dev/sdc: 3000.6 GB, 3000592982016 bytes, 5860533168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disk /dev/sdd: 3000.6 GB, 3000592982016 bytes, 5860533168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disk label type: dos
    Disk identifier: 0x00000000
    Device Boot Start End Blocks Id System
    /dev/sdd1 1 4294967295 2147483647+ ee GPT
    Partition 1 does not start on physical sector boundary.
    Disk /dev/sde: 320.1 GB, 320072933376 bytes, 625142448 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk label type: dos
    Disk identifier: 0x5ffb31fc
    Device Boot Start End Blocks Id System
    /dev/sde1 * 2048 625139711 312568832 7 HPFS/NTFS/exFAT
    Disk /dev/md126: 6001.1 GB, 6001143054336 bytes, 11720982528 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 65536 bytes / 131072 bytes
    Disk label type: dos
    Disk identifier: 0x00000000
    Device Boot Start End Blocks Id System
    /dev/md126p1 1 4294967295 2147483647+ ee GPT
    Partition 1 does not start on physical sector boundary.
    WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.
    gdisk -l on my RAID volume (/dev/md126):
    GPT fdisk (gdisk) version 0.8.7
    Partition table scan:
    MBR: protective
    BSD: not present
    APM: not present
    GPT: present
    Found valid GPT with protective MBR; using GPT.
    Disk /dev/md126: 11720982528 sectors, 5.5 TiB
    Logical sector size: 512 bytes
    Disk identifier (GUID): 8E7D03F1-8C3A-4FE6-B7BA-502D168E87D1
    Partition table holds up to 128 entries
    First usable sector is 34, last usable sector is 11720982494
    Partitions will be aligned on 8-sector boundaries
    Total free space is 6077 sectors (3.0 MiB)
    Number Start (sector) End (sector) Size Code Name
    1 34 262177 128.0 MiB 0C01 Microsoft reserved part
    2 264192 33032191 15.6 GiB 0700 Basic data partition
    3 33032192 11720978431 5.4 TiB 0700 Basic data partition
    To make things clear: sda is an SSD on which Archlinux has been freshly installed (sda1 for root, sda2 for home, sda3 for swap), sde is a hard disk drive having Windows 7 installed on it. My goal with the 15G partition is to format it so I can mount /var on the HDD rather than on the SSD. The large volume will be for storage.
    So if anyone has any suggestion that would help me out with this, I'd be glad to read.
    Cheers
    Last edited by Rolinh (2013-08-16 11:16:21)

    Well, I finally decided to use software RAID, as I will not share this partition with Windows anyway and it seems a better choice than the fake RAID.
    Therefore, I used the mdadm utility to create my RAID 5:
    # mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1
    # mkfs.ext4 -v -m .1 -b 4096 -E stride=32,stripe-width=64 /dev/md0
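    For reference, those ext4 numbers follow from the RAID geometry, assuming a 128 KiB mdadm chunk size (check yours in /proc/mdstat): stride = chunk size / block size = 128 KiB / 4 KiB = 32, and stripe-width = stride × data disks = 32 × 2 = 64, since a 3-disk RAID 5 holds two disks of data plus one of parity.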
    It works like a charm.

  • Data insertion from a large XML document (CLOB) into a relational table is very slow

    Hi Everybody!
    I'm working with Oracle 9.2.0.5 on Microsoft Windows Server 2003 Enterprise Edition.
    The server (a test server) is a Pentium 4 2.8 GHz, 1GB of RAM.
    I use a procedure called PARITOP_TRAITERXMLRESULTMASSE to insert the data contained in the pXMLDOC CLOB parameter into the table pTABLENAME. (You can see the format of the XML document below.) The first step of this procedure is to verify that the XML document is not empty. If it is not, the procedure adds a node named "RST_ID" to every <ROW> tag in the document; it is the foreign key of each record. I retrieve the value of each RST_ID from another table into which the data has previously been added (by the calling procedure). When each of the <ROW> elements has been processed, the PARITOP_INSERTXML procedure is called. This procedure uses DBMS_XMLSAVE.INSERTXML to insert the data contained in the XML document into the specified table.
    (Below, you can see the code of my procedures.)
    With this information, can you tell me why this processing is extremely slow with a large XML document, and how I can improve it?
    Thank you for your help!
    Anne-Marie
    CREATE OR REPLACE PROCEDURE "PARITOP_TRAITERXMLRESULTMASSE" (
    pPRC_ID IN PARITOP_PARC.PRC_ID%TYPE,
    pRST_MONDE IN PARITOP_RESULTAT.RST_MONDE%TYPE,
    pXMLDOC IN CLOB,
    pTABLENAME IN VARCHAR2)
    AS
    -- Objective: insert the content of the XML passed as a parameter (pXMLDOC)
    -- into the table passed as a parameter (pTABLENAME).
    -- The table passed as a parameter must have the "RST_ID" column as a foreign key.
    -- (The RST_ID node is therefore added to every XML document. This rst_id is
    -- determined from the PARITOP_RESULTAT table using the pPRC_ID and
    -- pRST_MONDE supplied as parameters.)
    result_doc CLOB;
    XMLDOMDOC XDB.DBMS_XMLDOM.DOMDOCUMENT;
    NODE_ROWSET DBMS_XMLDOM.DOMNODE;
    NODE_ROW DBMS_XMLDOM.DOMNODE;
    vUE_ID PARITOP_RESULTAT.UE_ID%TYPE;
    vRST_ID PARITOP_RESULTAT.RST_ID%TYPE;
    nodeList DBMS_XMLDOM.DOMNODELIST;
    BEGIN
    BEGIN
    vUE_ID := 0;
    vRST_ID := 0;
    XMLDOMDOC := DBMS_XMLDOM.NEWDOMDOCUMENT(pXMLDOC);
    IF NOT GESTXML_PKG.FN_PARITOP_DOCUMENT_IS_NULL(XMLDOMDOC) THEN
    NODE_ROWSET := DBMS_XMLDOM.item(DBMS_XMLDOM.GETCHILDNODES (DBMS_XMLDOM.MAKENODE(XMLDOMDOC)),0);
    for i in 0..dbms_xmldom.getLength(DBMS_XMLDOM.getchildnodes(NODE_ROWSET))-1 loop
    NODE_ROW := DBMS_XMLDOM.ITEM(DBMS_XMLDOM.GETCHILDNODES(NODE_ROWSET), i) ;
    nodeList := DBMS_XMLDOM.GETELEMENTSBYTAGNAME(DBMS_XMLDOM.makeelement(NODE_ROW) , 'UE_ID');
    IF vUE_ID <> DBMS_XMLDOM.GETNODEVALUE(DBMS_XMLDOM.GETFIRSTCHILD(DBMS_XMLDOM.ITEM(nodeList, 0))) THEN
    vUE_ID := DBMS_XMLDOM.GETNODEVALUE(DBMS_XMLDOM.GETFIRSTCHILD(DBMS_XMLDOM.ITEM(nodeList, 0)));
    --fetch the rst_id
    SELECT RST_ID INTO vRST_ID
    FROM PARITOP_RESULTAT RST
    WHERE RST.PRC_ID = pPRC_ID
    AND RST.UE_ID = vUE_ID
    AND RST.RST_MONDE = pRST_MONDE
    AND RST_A_SUPPRIMER = 0;
    END IF;
    GESTXML_PKG.PARITOP_ADDNODETOROW(XMLDOMDOC, NODE_ROW, 'RST_ID', vRST_ID);
    end loop;
    RESULT_DOC := ' '; --keep this, so that WriteToClob does not raise an error.
    dbms_xmldom.writeToClob(DBMS_XMLDOM.MAKENODE(XMLDOMDOC), RESULT_DOC);
    --insert the XML document into the table "tableName"
    GESTXML_PKG.PARITOP_INSERTXML(RESULT_DOC, pTABLENAME);
    DBMS_XMLDOM.FREEDOCUMENT( XMLDOMDOC);
    end if;
    EXCEPTION
    [... exception handling ...]
    END;
    END;
    The format of an XML CLOB is:
    <ROWSET>
    <ROW>
    <PRC_ID>193</PRC_ID>
    <UE_ID>8781</UE_ID>
    <VEN_ID>6223</VEN_ID>
    <RST_MONDE>0</RST_MONDE>
    <CMP_SELMAN>0</CMP_SELMAN>
    <CMP_INDICESELECTION>92.307692307692307</CMP_INDICESELECTION>
    <CMP_PVRES>94900</CMP_PVRES>
    <CMP_PVAJUSTE>72678.017699115066</CMP_PVAJUSTE>
    <CMP_PVAJUSTEMIN>72678.017699115095</CMP_PVAJUSTEMIN>
    <CMP_PVAJUSTEMAX>72678.017699115037</CMP_PVAJUSTEMAX>
    <CMP_PV>148000</CMP_PV>
    <CMP_VALROLE>129400</CMP_VALROLE>
    <CMP_PVRESECART>4790</CMP_PVRESECART>
    <CMP_PVRHAB>101778.01769911509</CMP_PVRHAB>
    <CMP_UTILISE>1</CMP_UTILISE>
    <CMP_TVM>1</CMP_TVM>
    <CMP_PVA>148000</CMP_PVA>
    </ROW>
    <ROW>
    <PRC_ID>193</PRC_ID>
    <UE_ID>8781</UE_ID>
    <VEN_ID>6235</VEN_ID>
    <RST_MONDE>0</RST_MONDE>
    <CMP_SELMAN>0</CMP_SELMAN>
    <CMP_INDICESELECTION>76.92307692307692</CMP_INDICESELECTION>
    <CMP_PVRES>117800</CMP_PVRES>
    <CMP_PVAJUSTE>118080</CMP_PVAJUSTE>
    <CMP_PVAJUSTEMIN>118080</CMP_PVAJUSTEMIN>
    <CMP_PVAJUSTEMAX>118080</CMP_PVAJUSTEMAX>
    <CMP_PV>172000</CMP_PV>
    <CMP_VALROLE>134800</CMP_VALROLE>
    <CMP_PVRESECART>0</CMP_PVRESECART>
    <CMP_PVRHAB>147180</CMP_PVRHAB>
    <CMP_UTILISE>1</CMP_UTILISE>
    <CMP_TVM>1</CMP_TVM>
    <CMP_PVA>172000</CMP_PVA>
    </ROW>
    </ROWSET>
    PARITOP_COMPARABLE TABLE :
    RST_ID NUMBER(10) NOT NULL,
    VEN_ID NUMBER(10) NOT NULL,
    CMP_SELMAN NUMBER(1) NOT NULL,
    CMP_UTILISE NUMBER(1) NOT NULL,
    CMP_INDICESELECTION FLOAT(53) NOT NULL,
    CMP_PVRES FLOAT(53) NULL,
    CMP_PVAJUSTE FLOAT(53) NULL,
    CMP_PVRHAB FLOAT(53) NULL,
    CMP_TVM FLOAT(53) NULL
    PROCEDURE PARITOP_INSERTXML (xmlDoc IN CLOB, tableName IN VARCHAR2)
    AS
    insCtx DBMS_XMLSave.ctxType;
    rowss number;
    BEGIN
    --inserts the XML fields into the table passed as a parameter.
    --the XML fields just need to have the same names as the table columns
    BEGIN
    insCtx := DBMS_XMLSave.newContext(tableName); -- get context handle
    DBMS_XMLSAVE.SETDATEFORMAT( insCtx, 'yyyy-MM-dd HH:mm:ss'); --careful, case sensitive
    DBMS_XMLSAVE.setIgnoreCase(insCtx, 1);
    rowss := DBMS_XMLSAVE.INSERTXML(insCtx , xmlDoc);
    DBMS_XMLSave.closeContext(insCtx);
    EXCEPTION
    […]
    END;
    END;
    PROCEDURE PARITOP_ADDNODETOROW (
    XMLDOMDOC DBMS_XMLDOM.DOMDOCUMENT,
    NODE_ROW dbms_xmldom.DOMNode,
    NOM_NOEUD VARCHAR2,
    VALEUR_NOEUD VARCHAR2)
    AS
    --ADDS A NODE WITH A SINGLE VALUE TO A ROW OF AN XML DOCUMENT.
    --USEFUL MAINLY FOR FOREIGN KEYS
    domElemAInserer DBMS_XMLDOM.DOMELEMENT;
    NODE dbms_xmldom.DOMNode;
    NODE_TMP dbms_xmldom.DOMNode;
    BEGIN
    domElemAInserer := DBMS_XMLDOM.createElement(XMLDOMDOC, NOM_NOEUD) ;
    NODE := DBMS_XMLDOM.MAKENODE(domElemAInserer); --cast
    NODE := DBMS_XMLDOM.APPENDCHILD(NODE_ROW,NODE);
    NODE_TMP := DBMS_XMLDOM.MAKENODE(DBMS_XMLDOM.CREATETEXTNODE(XMLDOMDOC, VALEUR_NOEUD ) );
    NODE := DBMS_XMLDOM.APPENDCHILD(NODE,NODE_TMP );
    END;
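    One way to sidestep the per-node DOM cost entirely, if a client-side load is an option, is to stream the document with a pull parser and batch the inserts over JDBC. This is an alternative to the DBMS_XMLSAVE path above, not the original method, and for brevity it binds a single RST_ID where the procedure looks one up per UE_ID; table and element names come from the post, the rest is illustrative.

    import java.io.Reader;
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import javax.xml.stream.XMLInputFactory;
    import javax.xml.stream.XMLStreamConstants;
    import javax.xml.stream.XMLStreamReader;

    public class RowsetLoader {
        // streams a <ROWSET>/<ROW> document and batch-inserts selected fields
        public static void load(Connection conn, Reader xml, long rstId) throws Exception {
            XMLStreamReader r = XMLInputFactory.newInstance().createXMLStreamReader(xml);
            try (PreparedStatement ins = conn.prepareStatement(
                     "INSERT INTO PARITOP_COMPARABLE (RST_ID, VEN_ID, CMP_SELMAN) VALUES (?, ?, ?)")) {
                String tag = null, venId = null, selman = null;
                int batched = 0;
                while (r.hasNext()) {
                    switch (r.next()) {
                        case XMLStreamConstants.START_ELEMENT:
                            tag = r.getLocalName();
                            break;
                        case XMLStreamConstants.CHARACTERS:
                            if ("VEN_ID".equals(tag)) venId = r.getText();
                            else if ("CMP_SELMAN".equals(tag)) selman = r.getText();
                            break;
                        case XMLStreamConstants.END_ELEMENT:
                            if ("ROW".equals(r.getLocalName())) {
                                ins.setLong(1, rstId);              // simplified: one RST_ID per document
                                ins.setLong(2, Long.parseLong(venId));
                                ins.setInt(3, Integer.parseInt(selman));
                                ins.addBatch();
                                if (++batched % 500 == 0) ins.executeBatch();
                            }
                            tag = null;                             // leaving an element: stop capturing text
                            break;
                    }
                }
                ins.executeBatch();                                 // flush the final partial batch
            } finally {
                r.close();
            }
        }
    }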

  • XML - large volume

    This is a beginner question, but I haven't found anything that answers it.
    Is XML suitable for large-volume data exchange? Does it depend on the capacity of the parser used?
    I am interested in using XML as the format of a fairly complex database extract of around 200,000 rows. Is this a reasonable use?

    Your decision to use XML for that size of a select depends on what you want to do with the data once you have it materialized. While wrapping it in XML will definitely bloat it, if you are trying to preserve its schema information on transfer and have it processed by generic standards-based parsers instead of custom code, it may well be worth it.
    Oracle XML Team
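    As a rough illustration of the parser point: a streaming (SAX/StAX) consumer reads a 200,000-row document in near-constant memory, where a DOM load would hold the whole tree at once. A minimal StAX sketch that counts hypothetical <ROW> elements in such an extract:

    import java.io.FileReader;
    import javax.xml.stream.XMLInputFactory;
    import javax.xml.stream.XMLStreamConstants;
    import javax.xml.stream.XMLStreamReader;

    public class RowCount {
        public static void main(String[] args) throws Exception {
            XMLStreamReader r = XMLInputFactory.newInstance()
                    .createXMLStreamReader(new FileReader(args[0]));
            int rows = 0;
            while (r.hasNext()) {
                // only element-start events are inspected; the tree is never built
                if (r.next() == XMLStreamConstants.START_ELEMENT
                        && "ROW".equals(r.getLocalName())) {
                    rows++;
                }
            }
            r.close();
            System.out.println(rows + " rows");
        }
    }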

  • How do you use BOBJ SDK to retrieve the results of a query in XML

    I am trying to programmatically get the results of a query given the query ID.
    My old code used the BusinessObjects Enterprise Web Services API to retrieve a document's contents:
    DocumentInformation biDocInfo;
    RetrieveData retBOData = RetrieveData.Factory.newInstance();
    Action[] actions = new Action[1];
    retBOData.setRetrieveView(xmlView);
    biDocInfo = rEngine.getDocumentInformation(queryId, null, actions, null, retBOData);
    XMLView view = (XMLView) biDocInfo.getView();
    Is there an equivalent way to retrieve the results of the query using SAP BusinessObjects BI 3.x Developer SDK Library ?
    Thanks for any information

    Hello.
    Are you wanting to use the BusinessObjects Enterprise SDK along with the Report Engine SDK as opposed to using the Web Services SDK that you were using previously?
    Also, what part of a webi document are you trying to get the XML format of?
    - Whole document
    - Single report within a document
    - Report page of a report
    - Report part within a report
    - All data providers
    - Single data provider
    If you are trying to use Business Objects Enterprise along with the Report Engine SDK, there are numerous samples for the various parts of the webi document that I mentioned above available at the following link:
    http://wiki.sdn.sap.com/wiki/display/BOBJ/JavaReportEngineSDKSamples
    I hope that this information helps.
    Regards.
    - Robert

  • Performance question on small XML content but with large volume

    Hi all,
    I am new to Berkeley XML DB.
    I have the following simple XML content:
    <s:scxml xmlns:s="http://www.w3.org/2005/07/scxml">
    <s:state id="a"/>
    <s:state id="b"/>
    <s:state id="c"/>
    </s:scxml>
    about 1.5 KB each, but the total number of such documents is large (5 million+ records).
    This is a typical query:
    query 'count(collection("test.dbxml")/s:scxml/s:state[@id="a"]/following-sibling::s:state[@id="e"])'
    where the id attribute is used heavily.
    I've tested with about 10000 records with the following indexes:
    Index: edge-attribute-equality-string for node {}:id
    Index: unique-node-metadata-equality-string for node {http://www.sleepycat.com/2002/dbxml}:name
    Index: edge-element-presence-none for node {}:scxml
    Index: edge-element-presence-none for node {}:state
    but the query took just under one minute to complete. Is this the expected performance? It seems slow. Is there any way to speed it up?
    In addition, the total size of the XML content is about 12M, but ~100M of data is generated in the log.xxxxxxxxxx files. Is this expected?
    Thanks.

    Hi Ron,
    Yes, I've noticed the URI issue after sending the post and changed them to:
    dbxml> listindex
    Default Index: none
    Index: edge-attribute-equality-string for node {http://www.w3.org/2005/07/scxml}:id
    Index: unique-node-metadata-equality-string for node {http://www.sleepycat.com/2002/dbxml}:name
    Index: edge-element-presence-none for node {http://www.w3.org/2005/07/scxml}:scxml
    Index: edge-element-presence-none for node {http://www.w3.org/2005/07/scxml}:state
    5 indexes found.
    I added more records (total ~30000) but the query still took ~1 minute and 20 seconds to run. Here is the query plan:
    dbxml> queryplan 'count(collection("test.dbxml")/s:scxml/s:state[@id="start"]/following-sibling::s:state[@id="TryToTransfer"])'
    <XQuery>
      <Function name="{http://www.w3.org/2005/xpath-functions}:count">
        <DocumentOrder>
          <DbXmlNav>
            <QueryPlanFunction result="collection" container="test.dbxml">
              <OQPlan>n(P(edge-element-presence-none,=,root:http://www.sleepycat.com/2002/dbxml.scxml:http://www.w3.org/2005/07/scxml),P(edge-element-presence-none,=,scxml:http://www.w3.org/2005/07/scxml.state:http://www.w3.org/2005/07/scxml))</OQPlan>
            </QueryPlanFunction>
            <DbXmlStep axis="child" prefix="s" uri="http://www.w3.org/2005/07/scxml" name="scxml" nodeType="element"/>
            <DbXmlStep axis="child" prefix="s" uri="http://www.w3.org/2005/07/scxml" name="state" nodeType="element"/>
            <DbXmlFilter>
              <DbXmlCompare name="equal" join="attribute" name="id" nodeType="attribute">
                <Sequence>
                  <AnyAtomicTypeConstructor value="start" typeuri="http://www.w3.org/2001/XMLSchema" typename="string"/>
                </Sequence>
              </DbXmlCompare>
            </DbXmlFilter>
            <DbXmlStep axis="following-sibling" prefix="s" uri="http://www.w3.org/2005/07/scxml" name="state" nodeType="element"/>
            <DbXmlFilter>
              <DbXmlCompare name="equal" join="attribute" name="id" nodeType="attribute">
                <Sequence>
                  <AnyAtomicTypeConstructor value="TryToTransfer" typeuri="http://www.w3.org/2001/XMLSchema" typename="string"/>
                </Sequence>
              </DbXmlCompare>
            </DbXmlFilter>
          </DbXmlNav>
        </DocumentOrder>
      </Function>
    </XQuery>
    I've noticed the indexes with URI were not used so I added back the indexes without URI:
    dbxml> listindex
    Default Index: none
    Index: edge-attribute-equality-string for node {}:id
    Index: edge-attribute-equality-string for node {http://www.w3.org/2005/07/scxml}:id
    Index: unique-node-metadata-equality-string for node {http://www.sleepycat.com/2002/dbxml}:name
    Index: edge-element-presence-none for node {}:scxml
    Index: edge-element-presence-none for node {http://www.w3.org/2005/07/scxml}:scxml
    Index: edge-element-presence-none for node {}:state
    Index: edge-element-presence-none for node {http://www.w3.org/2005/07/scxml}:state
    8 indexes found.
    Here is the query plan with the above indexes:
    dbxml> queryplan 'count(collection("test.dbxml")/s:scxml/s:state[@id="start"]/following-sibling::s:state[@id="TryToTransfer"])'
    <XQuery>
      <Function name="{http://www.w3.org/2005/xpath-functions}:count">
        <DocumentOrder>
          <DbXmlNav>
            <QueryPlanFunction result="collection" container="test.dbxml">
              <OQPlan>n(P(edge-element-presence-none,=,root:http://www.sleepycat.com/2002/dbxml.scxml:http://www.w3.org/2005/07/scxml),P(edge-element-presence-none,=,scxml:http://www.w3.org/2005/07/scxml.state:http://www.w3.org/2005/07/scxml),V(edge-attribute-equality-string,state:http://www.w3.org/2005/07/scxml.@id,=,'start'),V(edge-attribute-equality-string,state:http://www.w3.org/2005/07/scxml.@id,=,'TryToTransfer'))</OQPlan>
            </QueryPlanFunction>
            <DbXmlStep axis="child" prefix="s" uri="http://www.w3.org/2005/07/scxml" name="scxml" nodeType="element"/>
            <DbXmlStep axis="child" prefix="s" uri="http://www.w3.org/2005/07/scxml" name="state" nodeType="element"/>
            <DbXmlFilter>
              <DbXmlCompare name="equal" join="attribute" name="id" nodeType="attribute">
                <Sequence>
                  <AnyAtomicTypeConstructor value="start" typeuri="http://www.w3.org/2001/XMLSchema" typename="string"/>
                </Sequence>
              </DbXmlCompare>
            </DbXmlFilter>
            <DbXmlStep axis="following-sibling" prefix="s" uri="http://www.w3.org/2005/07/scxml" name="state" nodeType="element"/>
            <DbXmlFilter>
              <DbXmlCompare name="equal" join="attribute" name="id" nodeType="attribute">
                <Sequence>
                  <AnyAtomicTypeConstructor value="TryToTransfer" typeuri="http://www.w3.org/2001/XMLSchema" typename="string"/>
                </Sequence>
              </DbXmlCompare>
            </DbXmlFilter>
          </DbXmlNav>
        </DocumentOrder>
      </Function>
    </XQuery>
    The indexes are used in this case, and the execution time was reduced to about 40 seconds. I set the namespace with setNamespace when the session is created. Is this the reason why the indexes without a URI are used?
    Any other performance improvement hints?
    Thanks,
    Ken
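    A minimal sketch of that prefix/URI binding in the Berkeley DB XML Java API (container name from the post, everything else illustrative): the prefix used in the query text is bound on the query context and must map to the same URI the index definitions use, otherwise only the {} (no-URI) indexes can match.

    import com.sleepycat.dbxml.XmlContainer;
    import com.sleepycat.dbxml.XmlManager;
    import com.sleepycat.dbxml.XmlQueryContext;
    import com.sleepycat.dbxml.XmlResults;

    public class CountStates {
        public static void main(String[] args) throws Exception {
            XmlManager mgr = new XmlManager();
            XmlContainer cont = mgr.openContainer("test.dbxml");
            XmlQueryContext qc = mgr.createQueryContext();
            // bind the query prefix to the SAME URI the indexes are defined under
            qc.setNamespace("s", "http://www.w3.org/2005/07/scxml");
            XmlResults res = mgr.query(
                "count(collection('test.dbxml')/s:scxml/s:state[@id='start'])", qc);
            System.out.println(res.next().asString());
            res.delete();      // BDB XML objects wrap native handles; free them explicitly
            cont.delete();
            mgr.delete();
        }
    }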

  • "This backup is too large for the backup volume" - Info

    Hi there. I had a problem with my Time Machine and got an error stating "This backup is too large for the backup volume". After logging in I noticed that TM was indexing in the upper right corner (magnifier with a flashing dot, i.e. Spotlight) for a few seconds. I then chose "Back Up Now"; it was preparing, and then I got the error message described above.
    First I uninstalled my anti-virus, though that turned out not to be my issue. (If you have an anti-virus app, disable auto-protection or exclude the Time Machine app and its plist file, location below, in the anti-virus preferences; otherwise backups will take forever.)
    I then turned off Time Machine and deleted the .plist file at Macintosh HD > Library > Preferences > com.apple.TimeMachine.plist. Stop here if this fixes your problem after restarting. Deleting com.apple.TimeMachine.plist means that when you plug in your TM drive it will automatically ask whether you want to use it as a TM backup. This did the trick for me.
    If that is not enough, you can erase or repartition the Time Machine external drive in Disk Utility. THESE STEPS WILL ERASE YOUR ENTIRE BACKUPS. Use "Erase" and rename the drive, or use "Partition" if you want more than one partition on it. Under Disk Utility > Partition tab > Options you must choose GUID for Intel Macs or Apple Partition Map for PowerPC.
    To let you guys know, I also used the Cocktail app to erase my computer's Spotlight index and rebuild it. Also in Cocktail, with your Time Machine drive plugged in, you can erase its index and disable it altogether. I recommend first disabling Spotlight on the TM drive (before the first initial TM backup) in System Preferences > Spotlight > Privacy tab: add the Time Machine drive, which has to be mounted to appear under "Devices".

    http://www.macfixit.com/article.php?story=20090403093528353

  • Error: "This backup is too large for the backup volume."

    Well TM is acting up. I get an error that reads:
    "This backup is too large for the backup volume."
    Both the internal boot disk and the external backup drive are 1TB. The internal one has two partitions, the OSX one that is 900GBs and a 32GB NTFS one for Boot Camp.
    The external drive is a single OSX Extended partition that is 932GBs.
    Both the Time Machine disk, and the Boot Camp disk are excluded from the backup along with a "Crap" folder for temporary large files as well as the EyeTV temp folder.
    Time Machine says it needs 938GBs to backup only the OSX disk, which has 806GBs in use with the rest free. WTFFF? The TM pane says that "only" 782GBs are going to be backed up. Where did the 938GBs figure come from?
    This happened after moving a large folder (128GB in total) from the root of the OSX disk over to my Home Folder.
    I have reformated the Time Machine drive and have no backups at all of my data and it refuses to backup!!
    Why would it need 938GBs of space to backup if the disk has "only" 806 GBs in use??? Is there any way to reset Time Machine completely???
    Some screenshots:
    http://www.xcapepr.com/images/tm2.png
    http://www.xcapepr.com/images/tm1.png
    http://www.xcapepr.com/images/tm4.png

    xcapepr wrote:
    Time Machine says it needs 938GBs to backup only the OSX disk, which has 806GBs in use with the rest free. WTFFF? The TM pane says that "only" 782GBs are going to be backed up. Where did the 938GBs figure come from?
    Why would it need 938GBs of space to backup if the disk has "only" 806 GBs in use??? Is there any way to reset Time Machine completely???
    TM makes an initial "estimate" of how much space it needs, "including padding", that is often quite high. Why that is, and just what exactly it means by "padding", are rather mysterious. But it does also need work space on any drive, including your TM drive.
    But beyond that, your TM disk really is too small for what you're backing-up. The general "rule of thumb" is it should be 2-3 times the size of what it's backing-up, but it really depends on how you use your Mac. If you frequently update lots of large files, even 3 times may not be enough. If you're a light user, you might get by with 1.5 times. But that's about the lower limit.
    Note that although it does skip a few system caches, work files, etc., by default it backs up everything else, and does not do any compression.
    All this is because TM is designed to manage its backups and space for you. Once its initial, full backup is done, it will by default back up any changes hourly. It only keeps those hourly backups for 24 hours, but converts the first of the day to a "daily" backup, which it keeps for a month. After a month, it converts one per week into a "weekly" backup that it will keep for as long as it has room.
    What you're up against is, room for those 30 dailies and up to 24 hourlies.
    You might be able to get it to work, sort of, temporarily, by excluding something large, like your home folder, until that first full backup completes, then remove the exclusion for the next run. But pretty soon, it will begin to fail again, and you'll have to delete backups manually (from the TM interface, not via the Finder).
    Longer term, you need a bigger disk; or exclude some large items (back-them up to a portable external or even DVD/RWs first); or a different strategy.
    You might want to investigate CarbonCopyCloner, SuperDuper!, and other apps that can be used to make bootable "clones". Their advantage, beyond needing less room, is when your HD fails, you can immediately boot and run from the clone, rather than waiting to restore from TM to your repaired or replaced HD.
    Their disadvantages are, you don't have the previous versions of changed or deleted files, and because of the way they work, their "incremental" backups of changed items take much longer and far more CPU.
    Many of us use both a "clone" (I use CCC) and TM. On my small (roughly 30 gb) system, the difference is dramatic: I rarely notice TM's hourly backups -- they usually run under 30 seconds; CCC takes at least 15 minutes and most of my CPU.

  • This backup is too large for the backup volume - ridiculous Size!!!

    Hi... I own a MacBook 13" aluminium, I have Snow Leopard 10.6.2, and I changed my internal hard drive to a 500 GB one.
    I bought a 1 TB Western Digital My Book USB external drive to use for backups with Time Machine. At the beginning it worked great; when I changed my hard drive I restored everything in less than 2 hours.
    Then one time it said that the hard drive was running out of space, and I chose to delete the oldest backups; it erased everything and kept only the last backup. Since then, this message comes up:
    "This backup is too large for the backup volume. The backup requires 2.73 EB but only 995 GB are available."
    It's ridiculous; when I go to the Time Machine preferences, it says that a full backup would take only 83 GB.
    I tried everything: formatting the drive, removing the partition and creating it again. It makes the first full backup, but then the same message appears...
    Please, anyone... I am desperate
    Thanks Again
    Daniel

    DanielFaour wrote:
    "This backup is too large for the backup volume. The backup requires 2.73 EB but only 995 GB are available."
    Hi, and welcome to the forums.
    That message seems to indicate that something is corrupted on your internal HD. Do a "Verify Disk" on it, per #A5 in the Time Machine - Troubleshooting User Tip, also at the top of this forum.
    If that finds errors, you'll have to use the procedure in the yellow box there to Repair them.
    If that does not find errors, Restart your Mac and do a "full reset" of Time Machine, per #A4 there.
    I tried everything: formatting the drive, removing the partition and creating it again. It makes the first full backup, but then the same message appears...
    What partition? Are there multiple partitions on your TM drive? How large is the one for Time Machine? Check the setup per #C1 of the Troubleshooting Tip.

  • "Backup is too large for the backup volume" error

    I've been backing up with TM for a while now, and finally it seems as though the hard drive is full, since I'm down to 4.2GB available of 114.4GB.
    Whenever TM tries to do a backup, it gives me the error "This backup is too large for the backup volume. The backup requires 10.8 GB but only 4.2GB are available. To select a larger volume, or make the backup smaller by excluding files, open System Preferences and choose Time Machine."
    I understand that I have those two options, but why can't TM just erase the oldest backup and use that free space to make the new backup? I know a 120GB drive is pretty small, but if I have to just keep accumulating backups indefinitely, I'm afraid I'll end up with 10 years of backups and an 890-zettabyte drive taking up my garage. I'm hoping there's a more practical solution.

    John,
    Please review the following article as it might explain what you are encountering.
    *“This Backup is Too Large for the Backup Volume”*
    First, much depends on the size of your Mac’s internal hard disk, the quantity of data it contains, and the size of the hard disk designated for Time Machine backups. It is recommended that any hard disk designated for Time Machine backups be at least twice as large as the hard disk it is backing up from. You see, the more space it has to grow, the greater the history it can preserve.
    *Disk Management*
    Time Machine is designed to use the space it is given as economically as possible. When backups reach the limit of expansion, Time Machine will begin to delete old backups to make way for newer data. The less space you provide for backups the sooner older data will be discarded. [http://docs.info.apple.com/article.html?path=Mac/10.5/en/15137.html]
    However, Time Machine will only delete what it considers “expired”. Within the Console Logs this process is referred to as “thinning”. It appears that many of these “expired” backups are deleted when hourly backups are consolidated into daily backups and daily backups are consolidated into weekly backups. This consolidation takes place once hourly backups reach 24 hours old and daily backups reach about 30 days old. Weekly backups will only be deleted, or ‘thinned’, once the backup drive nears full capacity.
    One thing seems for sure, though: if a new incremental backup happens to be larger than what Time Machine currently considers “expired”, then you will get the message “This backup is too large for the backup volume.” In other words, Time Machine believes it would have to sacrifice too much to accommodate the latest incremental backup. This is probably why Time Machine always overestimates incremental backups by 2 to 10 times the actual size of the data currently being backed up. Within the Console logs this is referred to as “padding”. This is so that backup files never actually reach the physical limits of the backup disk itself.
    *Recovering Backup Space*
    If you have discovered that large unwanted files have been backed up, you can use the Time Machine “time travel” interface to recovered some of that space. Do NOT, however, delete files from a Time Machine backup disk by manually mounting the disk and dragging files to the trash. You can damage or destroy your original backups by this means.
    Additionally, deleting files you no longer wish to keep on your Mac does not immediately remove such files from Time Machine backups. Once data has been removed from your Mac's hard disk it will remain in backups for some time until Time Machine determines that it has "expired". That's one of its benefits - it retains data you may have unintentionally deleted. But eventually that data is expunged. If, however, you need to remove backed up files immediately, do this:
    Launch Time Machine from the Dock icon.
    Initially, you are presented with a window labeled “Today (Now)”. This window represents the state of your Mac as it exists now. DO NOT delete or make changes to files while you see “Today (Now)” at the bottom of the screen. Otherwise, you will be deleting files that exist "today" - not yesterday or last week.
    Click on the window just behind “Today (Now)”. This represents the last successful backup and should display the date and time of this backup at the bottom of the screen.
    Now, navigate to where the unwanted file resides. If it has been some time since you deleted the file from your Mac, you may need to go farther back in time to see the unwanted file. In that case, use the time scale on the right to choose a date prior to when you actually deleted the file from your Mac.
    Highlight the file and click the Actions menu (Gear icon) from the toolbar.
    Select “Delete all backups of <this file>”.
    *Full Backup After Restore*
    If you are running out of disk space sooner than expected, it may be that Time Machine is ignoring previous backups and is trying to perform another full backup of your system. This will happen if you have reinstalled the System Software (Mac OS), replaced your computer with a new one, or had significant repair work done on your existing Mac. Time Machine will perform a new full backup. This is normal. [http://support.apple.com/kb/TS1338]
    You have several options if Time Machine is unable to perform the new full backup:
    A. Delete the old backups, and let Time Machine begin afresh.
    B. Attach another external hard disk and begin backups there, while keeping this current hard disk. After you are satisfied with the new backup set, you can later reformat the old hard disk and use it for other storage.
    C. Ctrl-Click the Time Machine Dock icon and select "Browse Other Time Machine disks...". Then select the old backup set. Navigate to files/folders you don't really need backups of and go up to the Action menu ("Gear" icon) and select "Delete all backups of this file." If you delete enough useless stuff, you may be able to free up enough space for the new backup to take place. However, this method is not assured as it may not free up enough "contiguous space" for the new backup to take place.
    *Outgrown Your Backup Disk?*
    On the other hand, your computer's drive contents may very well have outgrown the capacity of the Time Machine backup disk. It may be time to purchase a larger-capacity hard drive for Time Machine backups. Alternatively, you can begin using the Time Machine Preferences exclusion list to prevent Time Machine from backing up unneeded files/folders.
    Consider as well: do you really need ALL that data on your primary hard disk? It sounds like you might need to archive to a different hard disk anything that is not of immediate importance. You see, Time Machine is not designed for archiving purposes, just as a backup of your local drive(s). In the event of disaster, it can get your system back to its current state without having to reinstall everything. But if you need LONG TERM storage, then you need another drive that is removed from your normal everyday working environment.
    This KB article discusses this scenario with some suggestions including Archiving the old backups and starting fresh [http://docs.info.apple.com/article.html?path=Mac/10.5/en/15137.html]
    Let us know if this clarifies things.
    Cheers!

  • How to retrieve the data/property value from portalapp.xml

    Hi, I would like to retrieve some common data from the portalapp.xml file:
    <application>
      <application-config>
        <property name="PrivateSharingReference" value="com.sap.portal.htmlb"/>
          </application-config>
      <components>
        <component name="DynZMMGR">
          <component-config>
            <property name="ClassName" value="DynZMMGR"/>
            <property name="SecurityZone" value="DynZMMGR/high_safety"/>
            <property name="ComponentType" value="jspnative"/>
            <property name="JSP" value="pagelet/DynJspZMMGR.jsp"/>
          </component-config>
          <component-profile/>
        </component>
        <component name="dynpagedel">
          <component-config>
            <property name="ClassName" value="dynpagedel"/>
          </component-config>
          <component-profile/>
        </component>
        <component name="VARIANTLIST">
          <component-config>
            <property name="ClassName" value="com.sap.ep.r3rpts.VARIANTLIST"/>
            <property name="SecurityZone" value="com.sap.ep.r3rpts.VARIANTLIST/high_safety"/>
          </component-config>
          <component-profile/>
        </component>
        <component name="DynZMM33">
          <component-config>
            <property name="ClassName" value="DynZMM33"/>
            <property name="SecurityZone" value="DynZMM33/high_safety"/>
            <property name="ComponentType" value="jspnative"/>
            <property name="JSP" value="pagelet/DynJspZMM33.jsp"/>
          </component-config>
          <component-profile/>
        </component>
      </components>
      <services/>
    </application>
    The above is my portalapp.xml file .
    I want to retrieve the data that is common across the components.
    I know that by putting the data inside a component, like this:
    <component name="DynZMM33">
          <component-config>
            <property name="ClassName" value="DynZMM33"/>
            <property name="SecurityZone" value="DynZMM33/high_safety"/>
            <property name="ComponentType" value="jspnative"/>
            <property name="JSP" value="pagelet/DynJspZMM33.jsp"/>
          </component-config>
    <component-profile>
       <property name="UserID" value="userid"/>
       <property name="password" value="password"/>
    </component-profile>
    </component>
    this becomes component-specific, and I can retrieve it by using
    String ClientVal = request.getComponentContext().getProfile().getProperty("Client");
    in the doContent() method.
    But this way I have to specify these properties in every component, which is repetitive.
    I would rather put them in a common location and retrieve the info from portalapp.xml.
    How can I achieve this?
    I am using "AbstractPortalComponent".
    thanks
    pkiran

    Hi Prashanth,
    see Reading another iView's profile personalized values and Validate PCD URI
    Anyhow, maybe you should implement a service which returns the values (from the service profile). This would be more clean for accessing global values.
    Hope it helps
    Detlev

  • How to send an XML-format SOAP request to retrieve the Opportunity

    Hello,
    Can anybody suggest how to send a SOAP request to CRM OnDemand in XML format to retrieve the Opportunity?
    Thanks,
    --bdr_09

    I'm afraid you can't.
    If you want to export/import, you should consider using loader, insert (no sqldev needed for import) or even csv (sqldev needed for import) . If you have a lot of data, consider external tools like Data Pump.
    Have fun,
    K.

  • Backup too large for the backup volume??

    Hi,
    My disk has 30 GB for Windows and 120 GB for Mac. Only 115 GB are used. My backup volume is 185 GB.
    But Time Machine tells me, *on the first backup*, that it needs 330 GB for the backup??? What the f**?
    All my external disks are excluded, and it does say the size of the included files is 115 GB. How come Time Machine needs *three times* the space of the original data??

    When my Time Machine disk had only 50GB left, I got the message that it couldn't back up because it required 118GB to do so. Because there was, for some reason, only one day backed up, I deleted the backup files to start fresh. Now I am getting the message:
    "This backup too large for the backup volume. The backup requires 1056.6 GB but only 929.4 GB are available."
    Well, the backup only require 118GB a little while ago, and the initial full backup that I deleted was 880 GB. Nothing has been added to the startup disk, and a 400GB disk has been excluded to try to make this work, so this message can't be right.
    How do I make this work? I did not interrupt or abort another backup, and I've already reformatted the drive, with the same result. Do I have to keep reinstalling Time Machine every time I want to backup?

  • How to retrieve the Source XML Payload into an External System using message ID?

    Dear Experts,
    I have a requirement to retrieve the inbound XML message payload, based on message ID, into the backend ERP system for reference or re-processing as needed. Please let me know the technique to get the source payload of a successful message.
    Kind Regards,
    Madhu

    Hi,
    Check following links:
    http://help.sap.com/saphelp_nw04/helpdata/en/31/6c5c3c3806af06e10000000a11402f/content.htm
    http://wiki.sdn.sap.com/wiki/display/Snippets/PI+Monitoring+Functionality+-+Fetching+Data+from+SXMB_MONI+Standard+Tables+-+Part+I
    https://wiki.sdn.sap.com/wiki/display/Snippets/PI+Monitoring+Functionality+-+Fetching+Data+from+SXMB_MONI+Standard+Tables+-+Part+II
    -Deepak
