XSD validation in database for 1 million records

Hello All,
I would like to know the pros and cons of doing XSD validation of a million records in an 11g database and, if possible, the processing time it takes to validate a million records against an XSD.
What would be a good datatype for loading this XML file of a million records: should it be BLOB/CLOB or VARCHAR2(200000000)?
Thanks.

varchar2(200000000)?
SQL VARCHAR2 is limited to 4000.
PL/SQL VARCHAR2 is limited to 32767.
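
For a document of that size, a CLOB or an XMLType column is the usual choice, and the validation can then be done inside the database with DBMS_XMLSCHEMA and XMLType. Below is a minimal sketch, assuming a hypothetical schema URL, directory object and staging table; the actual time needed for a million records would have to be measured on a representative sample of your own data.

begin
  -- register the XSD once (schema URL and directory object are hypothetical)
  dbms_xmlschema.registerSchema(
    schemaurl => 'http://example.com/record.xsd',
    schemadoc => xmltype(bfilename('XSD_DIR', 'record.xsd'), nls_charset_id('AL32UTF8')),
    local     => true,
    gentypes  => false,
    gentables => false);
end;
/

declare
  l_doc xmltype;
begin
  -- staging_xml is a hypothetical table with a CLOB column named payload
  select xmltype(payload) into l_doc from staging_xml where id = 1;

  -- isSchemaValid returns 1 when the document conforms to the registered XSD
  if l_doc.isSchemaValid('http://example.com/record.xsd') = 1 then
    dbms_output.put_line('valid');
  else
    dbms_output.put_line('invalid');
  end if;
end;
/

Whether the load is one million-record document or a million small documents also makes a big difference to memory use, so it is worth testing both shapes before committing to one.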

Similar Messages

  • ABAP Proxy for 10 million records

    Hi,
    I am running an extract program for my inventory, which has about 10 million records.
    I am sending the data through an ABAP proxy and the job is cancelled due to a memory problem.
    I am breaking up the records while sending through the ABAP proxy,
    calling the proxy about 2000 times with smaller batches of records.
    Do you think the ABAP proxy would be able to handle 10 million records?
    Any advice would be highly appreciated.
    Thanks and Best Regards,
    M-

    Hi,
    I am facing the same problem. My temporary solution is to break up the selected data into portions of 30,000 records and send those portions to PI via the ABAP proxy.
    I think the problem lies in the ABAP-to-XML conversion (CALL TRANSFORMATION) within the proxy.
    Although breaking up the data seems to work for me now, it gives me another issue: I have to combine the data back again in PI.
    So now I am thinking of saving all the records as a dataset file on the application server and using the file adapter instead.
    Regards,
    Arjan Aalbers

  • Fast Updates for 8 million records..!!

    Hi All,
    I was wondering, is there a fast method for updating 8 million records out of a 10-million-row table?
    For example:
    I have a customer table of 10m records with columns cust_id, cust_num and cust_name.
    I need to update 8m of the 10m customer records as follows:
    update customer set cust_id=46 where cust_id=75;
    The above statement will update 8m records, and cust_id is indexed.
    But if I fire the above update statement, we'll run into rollback segment problems.
    Even if I use ROWNUM and commit after every 100K records, it will still take a huge amount of time. I also know CTAS would be a lot faster, but for this scenario I guess it's not possible, right?
    Any help is much appreciated...
    Thanks.

    You didn't specify what version you're on, but have you looked at dbms_redefinition?
    create table cust (cust_num number, constraint cust_pk primary key (cust_num), cust_id number, name varchar2(10));
    create index cust_id_idx on cust(cust_id);

    insert into cust values (1, 1, 'a');
    insert into cust values (2, 2, 'b');
    insert into cust values (3, 1, 'c');
    insert into cust values (4, 4, 'd');
    insert into cust values (5, 1, 'e');

    select * from cust;

    -- interim table with the same shape as CUST
    create table cust_int (cust_num number, cust_id number, name varchar2(10));

    -- start the redefinition; the column mapping applies the "update" on the fly
    exec dbms_redefinition.start_redef_table(user, 'cust', 'cust_int', 'cust_num cust_num, decode(cust_id, 1, 99, cust_id) cust_id, name name');

    -- copy indexes, triggers, constraints and privileges onto the interim table
    declare
      i pls_integer;
    begin
      dbms_redefinition.copy_table_dependents(user, 'cust', 'cust_int',
        copy_indexes     => dbms_redefinition.cons_orig_params,
        copy_triggers    => true,
        copy_constraints => true,
        copy_privileges  => true,
        ignore_errors    => false,
        num_errors       => i);
      dbms_output.put_line('Errors: ' || i);
    end;
    /

    exec dbms_redefinition.finish_redef_table(user, 'cust', 'cust_int');

    select * from cust;
    select table_name, index_name from user_indexes;
    You would probably want to run a sync_interim_table in there before the finish.
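    For example, with the names used above:
    exec dbms_redefinition.sync_interim_table(user, 'cust', 'cust_int');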
    Good luck.

  • Which is the Best way to upload BP for 3+ million records??

    Hello Gurus,
    we have 3+ million records of data coming from Informatica that need to be uploaded into CRM. Which is the best way to upload the data into CRM that takes the least time and is easiest? Please help me.
    Thanks,
    Naresh.

    Do it with BAPI BAPI_BUPA_FS_CREATE_FROM_DATA2.

  • Need to generate a report for 30 Million records

    Hi Gurus,
    We have a requirement wherein we need to generate a report with 30-32 million records on a monthly basis and store the report in some external system via FTP. We estimated the size of the file to be around 2.5 GB. Is it possible to save the file as a PDF? Or is it possible to store such a file in any other file type?
    Kindly let me know..
    Cheers...
    Nip

    Hi,
    If you are using 7.0 then you can save the file as a PDF file. Would suggest you precalculate while running the report as well.
    Cheers,
    Kedar

  • I am using the Database Connectivity Toolkit to retrieve data using a SQL query. The database has 1 million records; I am retrieving 4000 records from the database and the results take too long to arrive. Is there any way of speeding it up?

    I am using the "fetch all" vi to do this, but it is retrieving one record at a time, (if you examine the block diagram) How can i retrieve all records in a faster more efficient manner?

    If this isn't faster than your previous method, then I think you found the wrong example. If you have the Database Connectivity Toolkit installed, then go to the LabVIEW Help menu and select "Find Examples". It defaults to searching for tasks, so open the "Communicating with External Applications" and "Databases" and open the Read All Data. The List Column names just gives the correct header to the resulting table and is a fast operation. That's not what you are supposed to be looking at ... it's the DBTools Select All Data subVI that is the important one. If you open it and look at its diagram, you'll see that it uses a completely different set of ADO methods and properties to retrieve all the data.

  • XSD validation: wrong behaviour for mixed=false

    Based on the report.xsd demo file, I experimented with the mixed="false" attribute of a complex type.
    <element name="purchaseReport">
    <complexType mixed="false">
    Such an attribute forbids text nodes in purchaseReport elements,
    so the following XML data should be rejected by the XSDSetSchema tool:
    <purchaseReport
    xmlns="http://www.example.com/Report"
    xmlns:xsi = "http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.example.com/Report report.xsd"
    period="P3M" periodEnding="1999-12-31">
    invalidchar
    <regions>
    However, it passes Oracle schema validation, which is not expected.
    It is rejected by XMLSpy, which is expected.
    I didn't find a bug report for this.
    Does anyone have a clue?
    Tx,
    François Loison

    Hi Mark,
    Thanks for your response for the confirmation of the bug.
    Before we created this thread, we tested our case in Stylus Studio using the following XSD Validators:
    - Saxonica 9.4.0.6
    - Saxonica 9.4.0.6 for XSD 1.1
    - Xalan Java 2.7.0
    - .NET Framework XML Parser
    - Xerces-J Validator 2.11.0 for XSD 1.1
    All of them worked properly for the XML and XSD in question.
    By the way, on the subject of the Oracle XML DB and XSD 1.1, any plans to support it?
    Oracle 12c ?!
    We will open an SR and provide it here, so you will be able to open a bug for this issue.

  • Best database for a large number of records

    LWCVI
    SQL Toolkit
    I have data logging software that logs to an MS Access database (*.mdb) with 4 internal tables. As the database gets large (> 100,000 records), the write times become very long, and I have instances of software shutdown. Even if it does not shut down, the write times grow to several minutes, which is longer than the data logging interval (every 2 minutes).
    I am currently manually archiving and emptying the records on a monthly basis, but that makes it difficult for the users to find the data they need.
    Is this an inherent problem with MS Access? This is the older version of the SQL Toolkit running under CVI 5.1. The problem may be remedied by an upgrade of CVI, the SQL Toolkit, or both, but I do not want to spend the $$ if this is a database structure problem.
    I can move over to another format, such as MySQL, etc. Previously, I used a dBase structure/ODBC and did not have these issues.
    Any suggestion, help, etc is welcome.  Thanks in advance,
    David E.

    Humphrey,
    I moved to MS Access because I did not have any way to view the data in the old system remotely (dBase format).  dBase is long gone.  Access will open the dBase database, but it converts it.
    I created some nice custom reports in Access to display the data for my customers. 
    I actually have found a solution to the large file problems.  Rather than log all the data into a single table (within the database), I create a new table each month.  This allows the table size to stay reasonable, and still allows a search routine to find the data (e.g., May13, June13, etc).
    If I keep the number of records in a table < 1M, then the write times are reasonable.
    Thanks for the help.  I appreciate the quick reply.
    PS the optimization link is a dead link.
    David

  • Maintaining a huge volume of data (around 60 million records)

    I've a requirement to load the data from an ODS to a cube by full load. This ODS will receive 50 million records over the next 6 months, which we have to maintain in BW.
    Can you please advise on the following?
    Can we accommodate 50 million records in the ODS?
    If so, can we run the load of 50 million records from the ODS to the cube? Each record also has to be checked against another ODS to get the value for another InfoObject. Hence, is the load going to be successful for the 50 million records? I'm not sure. Or do we get a timeout error?

    Harsha,
    The data load should go through ... some things to do / check...
    Delete the indexes on the cube before loading and rebuild them after the load completes.
    Regarding the lookup: if you are looking up specific values in another DSO, build a suitable secondary index on that DSO (preferably a unique index).
    A DSO or cube can definitely hold 50 million records - we have had cases with 50 million records per month, with the DSO holding data for 6 to 10 months, and the same for the cube. Only the reporting on the cube might be slow at a very detailed level.
    Also, please state your version - 3.x or 7.0...
    Also, if you are on Oracle, plan for providing/backing up archive logs, since loading generates a lot of archive logs...

  • Planfunction in IP or with BW modelling - case with 15 million records

    Hi,
    we need to implement a simple planning function (qty * price) which has to be executed for 15 million records at a time (the qty of 15 million records multiplied by an average price calculated at a higher level). I'd still like to implement this with a simple FOX formula but am worried about the performance, given the number of records. Does anyone have experience with this number of records? Would you suggest doing this within IP or using BW modelling? The maximum lead time accepted for this planning function is 24 hours...
    The planning function is expected to be executed in batch or background mode, but should be triggered from an IP input query and not via RSPC, for example...
    please advise.
    D

    Hi Dries,
    using BI IP you should definitely partition the work via planning sequences in a process chain, cf.
    http://help.sap.com/saphelp_nw70ehp1/helpdata/en/45/946677f8fb0cf2e10000000a114a6b/frameset.htm
    Planning functions load the requested data into main memory; with 15 million records you will have a problem. In addition, it is not a good idea to give the whole work to only one work process (a planning function uses only one work process). So partition the problem to be able to use parallelization.
    Process chains can be triggered via an API, cf. function group RSPC_API. So you can easily start a process chain via a planning function.
    Regards,
    Gregor

  • Moving the 80 Million records from Conversion database to System Test database (Just for one transaction table) taking too long.

    Hello Friends,
    The background: I am working as conversion manager and we move the data from Oracle to SQL Server using SSMA, then apply the conversion logic, and then move the data to System Test, UAT and Production.
    Scenario:
    Moving the 80 million records from the Conversion database to the System Test database (just for one transaction table) is taking too long. Both databases are on the same server.
    Questions are…
    What is best option?
    If we use SSIS it's very slow, taking 17 hours (sometimes it gets stuck and won't allow us to do anything).
    Using my own script (stored procedure), it takes only 1 hour 40 min. I would like to know whether there is a better process to speed this up, and why SSIS is taking so long.
    When we move the data using SSIS, does it commit after a particular row count, or does it commit all the records together after writing to the transaction log?
    Thanks
    Karthikeyan Jothi

    http://www.dfarber.com/computer-consulting-blog.aspx?filterby=Copy%20hundreds%20of%20millions%20records%20in%20ms%20sql
    Processing hundreds of millions of records can be done in less than an hour.
    Best Regards, Uri Dimant, SQL Server MVP
    http://sqlblog.com/blogs/uri_dimant/

  • Loading 3 million records into the database via external table

    I am loading 3+ million records into the database using external tables. It is a very slow process. How can I make it faster?

    Hi,
    1. Break the file down into several files, let's say 10 files (300,000 records each).
    2. Disable all indexes on the target table, if possible.
    3. Disable foreign keys if possible; you can check them later using an exceptions table.
    4. Make sure FREELISTS and INITRANS are set to 10 for the target table, if the target table resides in a manually space-managed tablespace.
    5. Create 10 processes, each reading from its own file, run the 10 processes concurrently, and use DML error logging with an unlimited reject limit so the insert continues until it finishes (a sketch follows below).
    Hope this can help.
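
    As a complement to the list above, here is a minimal sketch of a direct-path, parallel insert from the external table with DML error logging; the table names, the degree of parallelism and the error-log table are illustrative assumptions only:

    -- create the error-log table once (default name is ERR$_ plus the table name)
    exec dbms_errlog.create_error_log('TARGET_TABLE');

    alter session enable parallel dml;

    insert /*+ append parallel(t, 4) */ into target_table t
    select /*+ parallel(e, 4) */ *
      from ext_stage e   -- the external table over the flat file(s)
    log errors into err$_target_table ('load1') reject limit unlimited;

    commit;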

  • How can we do the validation for non visible records in table control.

    Hi Experts,
    I have a table control which displays a list of material details. I have a button to upload the material details from an Excel file into the table control directly. I have to validate all the customers which exist in the table. But my item (material) table control displays only 5 rows, i.e. only 5 entries are visible on the module pool screen. The validation is done for only those 5 records in the PAI event, but I need to do the validation for the rest of the records too (which are not visible in the table control), and if validation fails I need to display an error message.
    How can we do the validation for the non-visible records in the table control?
    Regards,
    Bujji

    Hi,
    Try validating the material before displaying it in the table control...
    Rgds/Abhhi

  • Can a validation check be set for the vendor master record?

    Hi Experts ,
    The user wants to remove two existing payment methods; instead of removing them, I am thinking of setting up a validation for the vendor master record.
    Can a validation be set for the vendor master record?
    Please advise.
    Thanks
    Pardha

    Hi,
    To my knowledge, validations can be set up for accounting documents.
    As a workaround, create a validation such that
    the payment method must be entered by the user,
    and also restrict the unwanted payment methods in the same validation.
    In the case of APP as well, the payment method in the document will take precedence.
    Regards,
    Sridevi

  • Polling database for inserted records

    I have to poll the database for inserted records. The thing is that the field on which I am updating the sequence table is not generated sequentially, there is no other field which can be used for this purpose, and I cannot add any more fields to the table to use a logical delete. So how can I still poll for newly added records?

    You could use a trigger on your table to populate another table, and on this second table have a logical-delete column. You can then define a view over the two tables and poll that view.
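
    A minimal sketch of that approach, with illustrative table and column names:

    -- shadow table that carries the logical-delete flag the base table cannot have
    create table orders_poll (
      order_id  number,
      processed char(1) default 'N'
    );

    -- populate the shadow table whenever a row is inserted into the base table
    create or replace trigger orders_poll_trg
    after insert on orders
    for each row
    begin
      insert into orders_poll (order_id) values (:new.order_id);
    end;
    /

    -- the poller reads this view, then marks orders_poll.processed = 'Y' when done
    create or replace view orders_to_process as
    select o.*
      from orders o
      join orders_poll p on p.order_id = o.order_id
     where p.processed = 'N';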
