XML file for a large number of data records

I have to create an XML file using data from multiple tables. The problem I am facing is that the data is huge, in the millions of records, so I was wondering whether there is an effective way of creating such an XML file.
It would be great if you could suggest an approach to achieve this requirement.
Thanks,
-Vinod

You'd probably be better off asking over in the XML DB forum, as that forum is dedicated to dealing with the XML side of things.
The FAQ in that forum is here: {thread:id=410714}
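For what it's worth, the approach usually recommended for very large extracts (and echoed in the payroll thread further down) is to build each row's fragment with the SQL/XML functions and stream it out with UTL_FILE, so the complete document is never held in memory at once. Below is a minimal sketch along those lines, assuming a hypothetical ORDERS table and an existing directory object named XML_DIR; adapt the joins and element names to your own tables.

DECLARE
  v_file UTL_FILE.FILE_TYPE;
BEGIN
  v_file := UTL_FILE.FOPEN('XML_DIR', 'orders.xml', 'w', 32767);
  UTL_FILE.PUT_LINE(v_file, '<?xml version="1.0" encoding="UTF-8"?>');
  UTL_FILE.PUT_LINE(v_file, '<orders>');
  -- one small fragment per row, so memory use stays flat regardless of row count
  FOR r IN (SELECT XMLELEMENT("order",
                     XMLFOREST(o.order_id    AS "id",
                               o.customer_id AS "customer",
                               o.order_date  AS "orderDate")
                   ).getStringVal() AS frag
              FROM orders o)
  LOOP
    UTL_FILE.PUT_LINE(v_file, r.frag);
  END LOOP;
  UTL_FILE.PUT_LINE(v_file, '</orders>');
  UTL_FILE.FCLOSE(v_file);
END;
/

If a single row's fragment can exceed the 32K line limit, switch to getClobVal() and write the CLOB out in pieces instead.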

Similar Messages

  • How to optimize DELETE for a large number of records?

    Hi Guys,
    I am creating a DELETE SQL statement for deleting a large amount of data, approximately a million records. When I tested it, it took a very long time to execute. Is there a way I can optimize this query?
    I highly appreciate any idea from you guys!
    Thanks,
    Jay

    JayPavia wrote:
    I am creating a DELETE SQL statement for deleting a large amount of data, approximately a million records. When I tested it, it took a very long time to execute. Is there a way I can optimize this query?
    I highly appreciate any idea from you guys!
    Jay,
    one quite nice approach is described in the AskTom thread: http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:2345591157689#2356959585102
    In a nutshell: you can create a partitioned table with the same layout (using e.g. RANGE partitioning with a single default partition), insert the data you want to keep into that table, and swap the two segments using ALTER TABLE ... EXCHANGE PARTITION. A SQL sketch of this follows at the end of this reply.
    This allows you to load the data into the partitioned table using all means available to you, e.g. a direct-path parallel insert into a NOLOGGING table to make it as fast as possible, and to rebuild/create any indexes in bulk after the load has completed.
    Furthermore, read access to the table remains possible while you're performing this operation, and you don't have to deal with renaming, re-granting etc. as you would with a traditional CTAS approach.
    Of course, if the table needs to be online for DML while performing the DELETE you can't use these options.
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/
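    To make the exchange-partition approach concrete, here is a minimal sketch under assumed names: a hypothetical table BIG_TAB with a numeric ID column, where the rows to keep are marked KEEP_FLAG = 'Y'.
    -- 1. Work table with the same layout and a single catch-all partition, created empty
    CREATE TABLE big_tab_keep NOLOGGING
      PARTITION BY RANGE (id) (PARTITION p_all VALUES LESS THAN (MAXVALUE))
      AS SELECT * FROM big_tab WHERE 1 = 0;
    -- 2. Load only the rows that survive the "delete", direct-path and optionally parallel
    ALTER SESSION ENABLE PARALLEL DML;
    INSERT /*+ APPEND PARALLEL(t, 4) */ INTO big_tab_keep t
      SELECT * FROM big_tab WHERE keep_flag = 'Y';
    COMMIT;
    -- 3. Swap the two segments; rebuild the indexes on BIG_TAB afterwards
    ALTER TABLE big_tab_keep
      EXCHANGE PARTITION p_all WITH TABLE big_tab WITHOUT VALIDATION;
    -- BIG_TAB now holds only the kept rows; BIG_TAB_KEEP can be dropped when done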

  • Efficient searching in a large XML file for specific elements

    Hi
    How can I search a large XML file for specific elements efficiently (fast and with low memory use)? I have a large XML file (approximately 32 MB, with about 140,000 main elements) and I have to search through it for specific elements. What stable, production-ready open source tools are available for such tasks? I think PDOM is a solution, but I can't find any well-known and stable implementations on the web.
    Thanks in advance,
    Behrang Saeedzadeh.

    The problem with DOM parsers is that the whole document needs to be parsed!
    So with large documents this uses up a lot of memory.
    I suggest you look at something like a pull parser (Piccolo or MXP1), which is a fast parser that is program driven rather than event driven like SAX. This has the advantage that you don't need to remember your state between events.
    I have used Piccolo to extract events from large XML-based log files.
    Carl.

  • [svn:bz-3.x] 17461: Update example jgroups-tcp.xml file for JGroups 2.9.0 on the BlazeDS 3.x branch.

    Revision: 17461
    Author:   [email protected]
    Date:     2010-08-24 10:10:38 -0700 (Tue, 24 Aug 2010)
    Log Message:
    Update example jgroups-tcp.xml file for JGroups 2.9.0 on the BlazeDS 3.x branch. Fixed a problem where the bind_addr property was set to a port number instead of an address. Updated the file to the version we have on BlazeDS/trunk which has descriptions of what the various settings do. 
    Modified Paths:
        blazeds/branches/3.x/resources/clustering/2.9.0/jgroups-tcp.xml

  • Help in creation of XML file for IDOC postings

    Hi All,
    I need help from anyone who has knowledge/experience of creating XML files for IDOC processing.
    We need to design an input file (in XML format) for the creation of IDOCs for purchase invoices through an interface.
    We have an existing input file, which is working correctly. We are trying to modify this existing input file for a new tax code (non-deductible inverse tax liability). This tax code works fine for manual postings, but through IDOC the tax postings are not being triggered correctly.
    Could you please confirm whether anyone has experience with this, so that I can share more details for resolving it?
    Thanks & Regards,
    Srini

    Hello,
    you can use CALL TRANSFORMATION id, which will create an exact "print" of the ABAP data as XML.
    If you need to change the structure of the XML, you can alter your ABAP structure to match the requirements.
    Of course you can create your own XSLT, but that is not easy to describe and nobody here will do it for you. If you would like to start with XSLT, you'd better start searching.
    Regards Otto

  • Need to generate a Index xml file for corresponding Report PDF file.

    Need to generate a Index xml file for corresponding Report PDF file.
    Currently in Fusion we are generating a PDF file using a given RTF template and data model source through Ess BIPJobType.xml.
    This is generating the PDF successfully.
    As per a requirement from the Oracle GSI team, they need an index XML file for the corresponding generated PDF file for their own business scenario.
    Please see the following attached sample files.
    PDf file : https://kix.oraclecorp.com/KIX/uploads1/Jan-2013/354962/docs/BPA_Print_Trx-_output.pdf
    Index file : https://kix.oraclecorp.com/KIX/uploads1/Jan-2013/354962/docs/o39861053.out.idx.txt
    In R12,
    we are doing this through a Java API call to FOProcessor to build the PDF. Here is a sample snapshot:
        xmlStream = PrintInvoiceThread.generateXML(pCpContext, logFile, outFile, dbCon, list, aLog, debugFlag);
        OADocumentProcessor docProc = new OADocumentProcessor(xmlStream, tmpDir);
        docProc.process();
    PrintInvoiceThread:
        out.println("<?xml version=\"1.0\" encoding=\"UTF-8\" ?>");
        out.print("<xapi:requestset ");
        out.println("<xapi:filesystem output=\"" + outFile.getFileName() + "\"/>");
        out.println("<xapi:indexfile output=\"" + outFile.getFileName() + ".idx\">");
        out.println("  <totalpages>${VAR_TOTAL_PAGES}</totalpages>");
        out.println("  <totaldocuments>${VAR_TOTAL_DOCS}</totaldocuments>");
        out.println("</xapi:indexfile>");
        out.println("<xapi:document output-type=\"pdf\">");
        out.println("<xapi:customcontents>");
        XMLDocument idxDoc = new XMLDocument();
        idxDoc.setEncoding("UTF-8");
        ((XMLElement)(generator.buildIndexItems(idxDoc, am, row)).getDocumentElement()).print(out);
        idxDoc = null;
        out.println("</xapi:customcontents>");
    In R12 we are also able to use page-number variables through oracle.apps.xdo.batch.ControlFile:
              public static final String VAR_BEGIN_PAGE = "${VAR_BEGIN_PAGE}";
              public static final String VAR_END_PAGE = "${VAR_END_PAGE}";
              public static final String VAR_TOTAL_DOCS = "${VAR_TOTAL_DOCS}";
              public static final String VAR_TOTAL_PAGES = "${VAR_TOTAL_PAGES}";
    Is there any similar Java library which does the same thing in Fusion?
    Note: I checked the BIP doc http://docs.oracle.com/cd/E21764_01/bi.1111/e18863/javaapis.htm#CIHHDDEH
    Section 7.11.3.2, Invoking Processors with InputStream.
    But this is not helping me much. Is there any other document/viewlet which covers these things?
    Appreciate any help/suggestions.
    -anjani prasad
    I have attached these Java files in KIX: https://kix.oraclecorp.com/KIX/display.php?labelId=3755&articleId=354962
    PrintInvoiceThread
    InvoiceXmlBuilder
    Control.java

    You can find the steps here.
    http://weblogic-wonders.com/weblogic/2009/11/29/plan-xml-usage-for-message-driven-bean/
    http://weblogic-wonders.com/weblogic/2009/12/16/invalidation-interval-secs/

  • How to download one XML file for each report row

    Hi all,
    I have a report on the products table with about 1000 rows.
    Description of table products:
    product_id varchar (10)
    product_desc varchar2 (2000)
    I would like to download an XML file for each row of the report to the file system.
    The name of each XML file = <product_id>.xml
    thanks in advance
    km

    Hi,
    You would probably find it better to search on the PL/SQL forum as this is more their area than Apex.
    From what I can see in there (and there are a number of examples that would help you), you need to do something like:
    DECLARE
      vPATH     VARCHAR2(50) := '/DEV';
      vFILENAME VARCHAR2(50);
      vFILE     UTL_FILE.FILE_TYPE;
    BEGIN
      FOR C IN (SELECT PRODUCT_ID, PRODUCT_DESC FROM PRODUCTS)
      LOOP
        vFILENAME := C.PRODUCT_ID || '.xml';
        vFILE := UTL_FILE.FOPEN(vPATH, vFILENAME, 'W');
        UTL_FILE.PUT_LINE(vFILE, '<xdr>');
        UTL_FILE.PUT_LINE(vFILE, '<product_id>' || C.PRODUCT_ID || '</product_id>');
        UTL_FILE.PUT_LINE(vFILE, '<product_desc>' || C.PRODUCT_DESC || '</product_desc>');
        UTL_FILE.PUT_LINE(vFILE, '</xdr>');
        UTL_FILE.FCLOSE(vFILE);
      END LOOP;
    END;
    I haven't included exception handling etc. and you should check on the PL/SQL forum to see if there are better examples! (A minimal exception-handling sketch follows below.)
    Andy
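    Since exception handling was left out above, here is a minimal, self-contained sketch (hypothetical path and file name) of how a UTL_FILE write is typically wrapped so that the file handle is always released:
    DECLARE
      vFILE UTL_FILE.FILE_TYPE;
    BEGIN
      vFILE := UTL_FILE.FOPEN('/DEV', 'test.xml', 'W');
      UTL_FILE.PUT_LINE(vFILE, '<xdr/>');
      UTL_FILE.FCLOSE(vFILE);
    EXCEPTION
      WHEN OTHERS THEN
        -- close the handle (if it was opened) before re-raising, so the file isn't left open
        IF UTL_FILE.IS_OPEN(vFILE) THEN
          UTL_FILE.FCLOSE(vFILE);
        END IF;
        RAISE;
    END;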

  • How to restrict the number of data records from the source system?

    Hi,
    How can I restrict the number of data records from the R/3 source system that are being loaded into BI? For example, I have 1000 source data records but only wish to transfer the first 100. How can I achieve this? Is there some option in the DataSource definition or InfoPackage definition?
    Please help,
    SD

    Hi SD,
    You can surely restrict the number of records. The best and simplest way is to check which characteristics are present in the selection screen of the InfoPackage, then check in R/3 which characteristics, if given a selection, would fetch you the desired number of records. Use that as the selection in the InfoPackage.
    Regards,
    Pankaj

  • How to create an XML File for Gantt?

    Hi Everybody.
    I need a Gantt chart in Web Dynpro ABAP. I need it so that the boss can do a forecast for vacation planning. I hope this is understandable.
    Does anyone know how I can create an XML file for the Gantt chart at runtime?
    Thanks
    MSi

    Hi Marcus,
    the following link is to the JNet/JGantt developer documentation:
    [JNet/JGantt Developer Documentation|http://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/lw/uuid/f010ec31-9658-2910-3c83-c6e62904eceb]
    included in this is the:
    [Schema for XML|http://www.sdn.sap.com/irj/sdn/go/portal/prtroot/com.sap.km.cm.docs/lw/webdynpro/jnet_jgantt%20developer%20documentation/schema/xml-spy/jnet-schema.html]
    You can use this to build XML to represent a Gantt-like chart.
    I did this to build XML to represent employee availability - although we fronted the Gantt chart through WD Java, not ABAP - the Java applet called is the same (I believe).
    Don't underestimate the work required to tweak your data into a decent layout. It took me days.
    There may be tools/APIs to help generate the XML - I don't know of any - and I'd look forward to seeing any other replies to this thread from people who have used/built some. An example of a class that generates JNet XML is CL_SPI_UI_JNET, but this does not seem to be a Gantt display.
    Cheers,
    Chris

  • Using a local XML file for parsing - iPhone

    Good afternoon everyone. I am trying to create an application for the iPhone that will use an XML file as its data source. I have been attempting to convert the SeismicXML application from the dev center to use a locally stored file, but to no avail. I have it building and running fine, but no data is appearing. Is there anyone who can give me an example of parsing a locally stored file? Thanks!

    NSString *filePath = [[NSBundle mainBundle] pathForResource:@"myFile" ofType:@"xml"];
    NSData *myData = [NSData dataWithContentsOfFile:filePath];
    if ( myData ) {
        NSXMLParser *parser = [[NSXMLParser alloc] initWithData:myData];
        [parser setDelegate:myObject];
        [parser setShouldProcessNamespaces:NO];
        [parser setShouldReportNamespacePrefixes:NO];
        [parser setShouldResolveExternalEntities:NO];
        [parser parse];
        NSError *parseError = [parser parserError];
        if ( parseError && error ) {
            *error = parseError;
        }
        // Do post load activity
    }

  • Generate XML file for payroll result

    Hi,
    I tried to generate an XML file for the Chinese payroll result via t-code PU12, but I cannot find the interface format for China payroll (there are formats for other country versions such as international, France, ...). How can I generate an XML file for the Chinese payroll result? Do I need to create a format type? Is there any other way? Thank you in advance.

    Coming back to my question: is it really possible to generate bigger XML files on 9i?
    Sure. But not in one go.
    I would use the SQL/XML functions instead and paginate the result set so that each CLOB chunk doesn't exceed 4 GB (or some smaller limit). Each chunk could then be appended to a file using UTL_FILE.
    Alternatively, as Oracle 9i supports parallel pipelined functions, you could also imagine doing the job in parallel using automatic data partitioning on an input ref cursor.
    That would require some post-processing steps to rebuild the entire file from the different chunks, though (a sketch follows below).
    See : Pipelined and Parallel Table Functions
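    For illustration, a stripped-down parallel pipelined function along those lines might look like the sketch below (all names are hypothetical; each piped chunk would be spooled to its own file and the files concatenated afterwards):
    CREATE OR REPLACE TYPE xml_chunk_tab AS TABLE OF VARCHAR2(4000);
    /
    CREATE OR REPLACE FUNCTION gen_xml_chunks(p_rows SYS_REFCURSOR)
      RETURN xml_chunk_tab PIPELINED
      PARALLEL_ENABLE (PARTITION p_rows BY ANY)
    IS
      v_id   NUMBER;
      v_name VARCHAR2(100);
    BEGIN
      LOOP
        FETCH p_rows INTO v_id, v_name;
        EXIT WHEN p_rows%NOTFOUND;
        -- each parallel slave emits self-contained fragments for its share of the rows
        PIPE ROW ('<employee id="' || v_id || '">' || v_name || '</employee>');
      END LOOP;
      CLOSE p_rows;
      RETURN;
    END;
    /
    -- e.g. SELECT * FROM TABLE(gen_xml_chunks(CURSOR(SELECT empno, ename FROM emp)));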

  • Fastest way to delete a large number of rows

    I'm trying to delete a large number of rows.
    This proc will be running every day without disturbing the other processes.
    SET @sql = 'DELETE TOP (100) FROM ' + @TableName + ' WHERE Id IN (SELECT DISTINCT Id FROM [state] WHERE ToDel = 1)'
    SET @Deleted_Rows = 1;
    SET @Rows = 0;
    WHILE (@Deleted_Rows > 0)
    BEGIN
      BEGIN TRAN
      EXEC sp_executesql @sql
      SET @Deleted_Rows = (SELECT @@ROWCOUNT);
      SET @Rows = (@Rows + @Deleted_Rows);
      COMMIT TRAN
      WaitFor DELAY '00:00:00:01'
    END
    Do you have any idea how I can optimize my query?
    Thank you very much

    Hi, following are a few thoughts.
    1. If you can manage without dynamic SQL then it may yield better performance, as apart from the table name there is no real need for it in your query.
    2. Ensure indexes are in place.
    3. If the table in question is not a participant in replication then you may go for a slightly higher batch size; again, it depends on volume.
    Please take a look at the tweaked code.
    declare @Deleted_Rows int, @Rows int;
    SET @Deleted_Rows = 1;
    SET @Rows = 0;
    WHILE (@Deleted_Rows > 0)
    BEGIN
      BEGIN TRAN;
      DELETE TOP (100) t1
      FROM tabName as t1
      WHERE exists (Select * FROM [state] as t2 WHERE t2.ToDel = 1 and t2.Id = t1.id);
      SET @Deleted_Rows = @@ROWCOUNT;
      SET @Rows = @Rows + @Deleted_Rows;
      COMMIT TRAN;
      WaitFor DELAY '00:00:00:01';
    END
    Print 'Total rows deleted : ' + convert(varchar(50), @Rows);

  • Hibernate mapping XML files for the two SQL Server tables below.

    Hello all..,
    Question 1:
    I am working on a project that needs to support a database with an inherited legacy schema that cannot be changed. The schema is provided below. Please provide the Hibernate mapping XML files for the two SQL Server tables below. Assume some hypothetical package and class names. Assume that no "fancy" stuff such as lazy initialization, optimistic locking etc. is needed at this time.
    CREATE TABLE [SURVEY_ANSWERS] (
        [ANSWER_ID] [int] IDENTITY (1,1) NOT NULL,
        [QUESTION_ID] [int] NOT NULL,
        [POSITION] [int] NULL,
        [TEXT] [varchar](350) NULL
    )
    GO
    CREATE TABLE [dbo].[SURVEY_QUESTIONS] (
        [QUESTION_ID] [int] IDENTITY (1, 1) NOT NULL,
        [TEXT] [varchar] (350) NULL
    )
    GO
    ALTER TABLE SURVEY_ANSWERS
        ADD CONSTRAINT pk_SURVEY_ANSWERS PRIMARY KEY (ANSWER_ID, QUESTION_ID);
    GO
    ALTER TABLE [dbo].[SURVEY_QUESTIONS] ADD
        PRIMARY KEY CLUSTERED (
            [QUESTION_ID]
        )
    GO
    ALTER TABLE [dbo].[SURVEY_ANSWERS] ADD
        FOREIGN KEY (
            [QUESTION_ID]
        ) REFERENCES [dbo].[SURVEY_QUESTIONS] (
            [QUESTION_ID]
        )
    GO
    Question 2:
    Assume that you are working on a project developing, say, a banking application. You are the architect and are thinking that Hibernate ORM should be used for all access to the relational database. As usual, you have created (or auto-generated) a set of HBM XML files as well as POJOs for which you define the mappings. Assume now that a new requirement has just popped up: the system needs to be able to import new bank accounts and user information in bulk from a very large XML file and store it in the database. Assume the XML file contains all the necessary information to populate the fields in the database tables. Performance is very important for this operation. Given this description, how would you approach the problem?
    Please describe briefly.
    -Thanks and regards
    Praveen Soni

    You're not fooling anyone Dennis_Mox. But nice try.
    Jeez, man. Mail me at denismox[at]yandex.ru, I will show you that exact test, dammit.

  • How to generate xml-file for SAP Fiori (UI add-on) with Solution Manager 7.0.1?

    Hello Guru,
    could you please help with my issue with the Fiori installation?
    We want to install the SAP Fiori front end (GW+UI) on a sandbox system with SAP NetWeaver 7.3.1 (SP14).
    The Gateway component (SAP GW CORE 200 SP10) was installed without any problems.
    But I need to install the UI add-on (NW UI Extensions v1.0), and when I try to install it via SAINT, the transaction tells me that I need to generate an XML file for it (as mentioned in the general notes for the UI add-on).
    But I have Solution Manager 7.0.1, and in MOPZ for this version I do not have the option "install Add-on" described in the guide for the UI add-on installation.
    Could you please advise how to generate the XML file for the UI add-on installation on SolMan 7.0.1?
    If there is no way other than upgrading Solution Manager, maybe somebody could give me the XML file from your system (for NW 731) and I will adapt it to my needs; I would be very grateful!
    Thanks in advance for any help!
    Best regards,
    Natalia.

  • XML file for SEPA Credit transfer

    Hi All,
    I am working in ECC 6.0. The system was configured to generate XML files for credit transfers as per the SEPA guidelines. The files are being created in the application layer, but the line length is being restricted to 255 characters; as per the specification of an XML file in the application layer, all information should be contained in one single line.
    Secondly, the job log for the payment medium creation program contains the line below, among others:
    "Incorrect Print Parameter" of message class "0K", message no. 091 and message ID I
    Can anybody tell me what is going wrong?

    Hi,
    Can you please help me to solve an issue when using the PMW format SEPA_CT?
    My company code currency is EUR, and when paying an invoice which is in GBP I am not able to get the currency EUR in the DME file generated by the payment run; instead I am getting the currency GBP.
    As per the SEPA concept every payment should be in EUR irrespective of the currency in which the invoice is raised. So please let me know what needs to be done to get the local-currency EUR amount in the DME file for the payment made against an invoice raised in GBP.
    Thanks in Advance..
    Regards
    Sanjeev
