Best way to transfer data between database and XML, and display it to the end user

Hi,
What is the best method to insert data into tables from XML and vice versa? I also need to display the data to the end user, with the provision to edit it.
Thanks in advance

If you want to edit and store the data completely in XML then you could do the following:
1) Register the XML schema and specify a default table (a registration sketch follows this list).
2) Connect to the XDB repository and store your documents using FTP or WebDAV. Make sure they reference the registered schema in the instance header (e.g. with an xsi:noNamespaceSchemaLocation attribute on the root element). This should load the underlying XMLType table you specified as the default table.
3) Connect to the repository with a WebDAV-aware XML editor.
And if you want to take this a little further, then you could also do the following:
4) Create a relational view on top of your XML table using code similar to below.
CREATE OR REPLACE VIEW "ACT_LIST" ("NAME", "CITE", "SORT", "JURISDICTION", "LOCATION") AS
SELECT
  ExtractValue(a.OBJECT_VALUE, '/act/act_version/name[1]')        Name
, ExtractValue(a.OBJECT_VALUE, '/act/act_version/cite[1]')        Cite
, ExtractValue(a.OBJECT_VALUE, '/act/act_version/name[1]/@sort')  Sort
, ExtractValue(a.OBJECT_VALUE, '/act/@juris')                     Jurisdiction
, b.ANY_PATH                                                      Location
FROM
  ACTS a
, RESOURCE_VIEW b
WHERE
  -- join each XMLType row to its repository resource so the view can expose its path
  Ref(a) = ExtractValue(b.RES, '/Resource/XMLRef')
5) Put that into an application (e.g. APEX) as a report.
6) Link from the report to the XML editor and pass the location of the document in the repository. Or use an embedded WYSIWYG XML editor, so that users edit the document inside the application and the whole thing is fairly seamless.
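As a concrete starting point for steps 1 and 2, here is a minimal PL/SQL sketch of the schema registration. The schema URL and repository path are invented for illustration, and the default table itself is named by an xdb:defaultTable annotation inside the schema document:

-- Hypothetical sketch of step 1: register an annotated schema and let
-- Oracle XML DB generate the default XMLType table (named via an
-- xdb:defaultTable="ACTS" annotation inside act.xsd). The schema URL
-- and repository path are assumptions, not from the original post.
BEGIN
  DBMS_XMLSCHEMA.registerSchema(
    schemaURL => 'http://www.example.com/schemas/act.xsd',
    schemaDoc => XDBURIType('/home/dev/act.xsd').getClob(),  -- schema already uploaded to the repository
    local     => TRUE,   -- register for this user only
    genTypes  => TRUE,   -- generate object types
    genTables => TRUE);  -- generate the default XMLType table(s)
END;
/

After that, any instance document stored into the repository over FTP or WebDAV that references this schema URL should land as a row in the default table.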
Hope this helps.

Similar Messages

  • Best way to move LOBs between databases

    I am using Oracle 10.2.0.4. Please share your experience: what is the best way to move LOBs between databases? The LOB size is 40 GB. I have tried using Data Pump, and a parallel insert with NOLOGGING, but it is taking almost 1 hour to transfer 1 GB.
    Thanks for any suggestion.

    N Gasparotto wrote:
    > For data pump I am using impdp over network_link.
    It would be much more efficient to copy the file across the network and then run impdp locally. Did you also try parallel impdp?
    Nicolas.

    Thanks, I will be trying your suggestion to move the copy. Currently, due to a space issue, I cannot try it, but I have already requested the designated mount point.
    I also tried using a parallel hint. Our server has 8 CPUs, so I tried parallel 16. Although the current parallelism was 16, the worker parallelism was 1. There were only 2 workers, and the other worker was always waiting; in other words, the two workers did not execute simultaneously. So I guess Data Pump did not run in parallel.
    Shouldn't more than one worker be executing simultaneously for it to run in parallel? And what does worker parallelism mean?
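    A hedged sketch of the "copy the dump file, then import locally in parallel" suggestion, using the DBMS_DATAPUMP API; the directory object, file names, job name and degree below are invented for illustration. One caveat: Data Pump distributes parallelism across data segments, so a single non-partitioned table with one large LOB segment is typically processed by a single worker, which would explain a worker parallelism of 1.

    -- Hypothetical local parallel import after copying the dump file(s);
    -- directory object, file names, job name and degree are assumptions.
    DECLARE
      h     NUMBER;
      state VARCHAR2(30);
    BEGIN
      h := DBMS_DATAPUMP.open(operation => 'IMPORT',
                              job_mode  => 'SCHEMA',
                              job_name  => 'LOB_IMP_LOCAL');
      DBMS_DATAPUMP.add_file(h, 'lob_exp%U.dmp', 'DPUMP_DIR');  -- %U: one dump file per worker
      DBMS_DATAPUMP.add_file(h, 'lob_imp.log', 'DPUMP_DIR',
                             filetype => DBMS_DATAPUMP.ku$_file_type_log_file);
      DBMS_DATAPUMP.set_parallel(h, 8);                         -- match the 8 CPUs
      DBMS_DATAPUMP.start_job(h);
      DBMS_DATAPUMP.wait_for_job(h, state);
      DBMS_OUTPUT.put_line('Job finished: ' || state);
    END;
    /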

  • Data transfer between databases

    Dear forum,
    there are two databases, DB_our and DB_supplier. Data should be transferred from a view in DB_supplier into a table in DB_our. Both databases are on the same LAN. There is also a scheduler (not from Oracle) which has to be used; it is able to launch Unix scripts.
    What is the simplest way to transfer the data and control the transfer?
    The first idea: use a database link and launch the process via a stored procedure.
    Any simpler solution?
    Thanks in advance,
    Michel
    Edited by: Michel77 on Dec 10, 2008 4:22 PM

    Hi
    There could be more than one solution and you need to choose the one which suits your needs.
    1) SQL*Plus COPY command
    2) Export/Import
    3) Data Pump (if you are on Oracle 10g)
    4) Replication
    5) Oracle Streams
    6) Database Links (a minimal sketch follows)
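    To flesh out option 6 along the lines of your first idea, a minimal sketch; the link name, credentials, TNS alias and object names are all invented:

    -- Hypothetical sketch: pull rows from the supplier's view over a
    -- database link, wrapped in a procedure the external scheduler can
    -- invoke from a Unix script via sqlplus.
    CREATE DATABASE LINK supplier_link
      CONNECT TO app_user IDENTIFIED BY app_password
      USING 'DB_SUPPLIER';                   -- TNS alias of the remote database

    CREATE OR REPLACE PROCEDURE load_from_supplier AS
    BEGIN
      INSERT INTO our_table (id, payload)
        SELECT id, payload
          FROM supplier_view@supplier_link;  -- read through the link
      COMMIT;
    END;
    /

    The scheduler's Unix script then reduces to a one-liner such as: echo "exec load_from_supplier;" | sqlplus -s app_user/app_password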
    Regards
    Asif momen
    http://momendba.blogspot.com

  • JPA - Best Practice For Data Transfer?

    I've been considering an alternative method for data transfer between applications: serializing or encoding JPA entities to file (either binary or XML).
    I know this procedure may have several drawbacks compared to traditional exported SQL queries or data manipulation statements, but I want to know if anyone has considered or used this process for data transfer.
    The process would be to:
    - query the database and load the JPA entities
    - serialize or encode them to file
    - zip up the entire folder with the JPA entities
    - transfer the data to the destination machine
    - extract the data to a temp directory
    - reload the JPA entities by de-serializing them and persisting them to the database
    The reason I'm considering this process is that I have a desktop application (it manages member records: names, dates, contributions, etc.) used by different organisations in different locations (which are not related except by purpose, i.e. clubs or churches). I would like a simple way of transporting all data associated with a single profile (information about a member of the organisation) from one location to another, where users interact only with the application, without the need for any database management tool or such.
    I'm also considering this because it is not easy to generate an SQL script file without using a dedicated database management tool, which I do not want the application users to have to learn.
    I would appreciate ANY suggestions and probable alternative solutions for this problem. FYI: I'm using a Java DB database.
    ICE

    jschell wrote:
    > In summary you are moving data from one database to another.
    True.
    > You only discussed flow one way.
    Well, the process is meant to be bi-directional. Basically what I envision would be to have something like:
    - the user goes to File -> Export Profile...
    - then selects one or more profiles to export
    - the export process completes and the zip archive is created and transferred (copied, mailed, etc.) to the destination location
    Then on the destination PC:
    - the user goes to File -> Import Profile
    - selects the profile package
    - the app extracts, processes and imports the data (JPA-serialized, for example)
    > Flow both ways is significantly more complicated in general.
    Well, if done well it shouldn't be.
    > And what does this have to do with users creating anything?
    Well, as shown above, the user would be generating the zip archive (assuming that is the final format).
    Does this make the problem clearer?
    ICE

  • Data transfer between SAP and Java and vice versa using IDoc processing

    Dear Experts,
    We are working on a good requirement related to data transfer between SAP and Java software. The client wants to transfer the data both ways (from SAP to Java and vice versa).
    In detail: after sales order creation, a custom program calculates the loading plan details. Once the loading dates are confirmed, the user releases the sales document to transfer the data from SAP to Java using outbound IDoc processing. Similarly, some shipment details are handled in the Java software; once they are completed, the details need to be pumped back to SAP via inbound IDoc processing.
    The fields have already been identified on the external software side, and we are looking to perform the corresponding steps in SAP.
    At this stage, I need your expert opinion / feedback on how to go about it:
    1. What customizing steps need to be done in SAP?
    2. How do we trigger the outbound IDoc process once the documents are released from the custom transaction?
    3. How do we create the link between SAP and Java to transfer the data between these two systems?
    4. How do we trigger the inbound IDoc process from the Java software to SAP, and how do we store the data in SAP?
    Experts, please give your feedback as a reply or by sending the step-by-step process to fulfil this client requirement.
    Thanks for your cooperation.
    Regards,
    Ramesh

    Maybe there are too many open questions in the same document.
    Maybe you should repost a more specific question in a technical forum.
    This looks like a small project where you already know what you want; maybe you should contract a technical specialist so he can proceed with the implementation!

  • I have a problem with data transfer between Windows Server 2012RT and Windows 7 (no more than 14 kbps), while between Windows Server 2012RT and Windows 8.1 the speed is OK.


    Hi,
    Regarding the issue here, please take a look at the links below to see if they help:
    Slow data transfer speed in Windows 7 or in Windows Server 2008 R2
    And a blog here:
    Windows Server 2012 slow network/SMB/CIFS problem
    Hope this helps.
    Best regards
    Michael

  • Error 33172 occurred at Read & Write data transfer between two or more PF2010 controllers

    Hi, I need to transfer data between two or more FP2010 controllers, e.g. FP2010(A) and FP2010(B).
    FP2010(A) needs to transfer its measurements (from its I/O module) to FP2010(B) for data analysis. The data transfer should be synchronous between the two controllers to prevent data loss.
    With the VI in the attachment, I encountered some problems:
    (1) Error 33172 occurred while publishing the data. Can I create and publish data under a different item name?
    (2) How do I synchronize the read and write between controllers?
    All controllers communicate with each other directly, without a host computer to link them together.
    Is there any other method to do fast data transfer between controllers?

    Hi YongNei,
    You were successful in omitting enough information to make it very difficult to answer!
    Please post your example.
    Please tell us what version of LV-RT you are using.
    Please define what you consider "fast data transfer".
    Have you considered mapping the FP tags of FP2010(A) to FP2010(B) and vice versa?
    What exactly has to be synchronized?
    If you have something that is close to working, share that.
    Well, that is as far as I can go with the info you have provided. Depending on the details, what you are asking could be anything from trivial to impossible with the currently available technology; I just can't say.
    It would probably be a good idea to start over with a fresh question (sorry), because not many people are going to know what a "PF2010" is, and I cannot guarantee that I will be able to get back to you personally until next weekend.
    Trying to help you get an answer,
    Ben
    Ben Rayner
    I am currently active on.. MainStream Preppers
    Rayner's Ridge is under construction

  • Secure the file/data transfer between XI and any third-party system

    Hi All,
    I would like to use SSH at the OS level to secure the file/data transfer between XI and any third-party system, running an OS command before processing and an OS command after processing. Right now my XI server is installed on the iSeries OS.
    With iSeries we can't call Unix commands, so I expect we need to go for AS400 (CL) programming. If we create the AS400 program, how can I call it in XI?
    If anyone has an idea, please let me know whether it will work or not.
    Thanks in advance.
    Venkat

    Hi,
    Thanks for your reply.
    I have read some blogs like /people/krishna.moorthyp/blog/2007/07/31/sftp-vs-ftps-in-sap-pi about calling a Unix shell script in XI.
    But as far as I know, in the iSeries OS we cannot write shell scripts; we need to go for an AS400 program. If we go with AS400, how do we call that program, and will it work? I am not sure, so I need some help please.
    Thanks,
    Venkat

  • What is the best way to import a full database?

    Hello,
    Can anyone tell me the best way to import a full database called test into an existing database called DEV1?
    When you import into an existing database, do you have to drop the existing users (say the pinfo and tinfo schemas)? Do I have to drop and recreate them, or how does it work when you import a full database?
    Could you please give step-by-step instructions?
    Thanks a lot...

    Nayab,
    > http://youngcow.net/doc/oracle10g/backup.102/b14191/rcmdupdb005.htm
    A suggestion: please don't use external sites that host the Oracle docs, since there is no assurance that they update their content with the latest corrections. You can see the updated part number on the actual doc site from Oracle:
    http://download.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmdupdb.htm#i1009381
    Aman....
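    For completeness, a sketch of one way to do this without dropping the users by hand: a full network-mode import with the DBMS_DATAPUMP API, where TABLE_EXISTS_ACTION controls what happens to objects that already exist in DEV1. The link, directory and job names are invented.

    -- Hypothetical sketch: full import from the test database into DEV1
    -- over a database link; link, directory and job names are assumptions.
    DECLARE
      h     NUMBER;
      state VARCHAR2(30);
    BEGIN
      h := DBMS_DATAPUMP.open(operation   => 'IMPORT',
                              job_mode    => 'FULL',
                              remote_link => 'TEST_LINK',
                              job_name    => 'FULL_IMP_DEV1');
      DBMS_DATAPUMP.add_file(h, 'full_imp_dev1.log', 'DPUMP_DIR',
                             filetype => DBMS_DATAPUMP.ku$_file_type_log_file);
      DBMS_DATAPUMP.set_parameter(h, 'TABLE_EXISTS_ACTION', 'REPLACE');  -- no need to pre-drop schemas
      DBMS_DATAPUMP.start_job(h);
      DBMS_DATAPUMP.wait_for_job(h, state);
    END;
    /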

  • [svn:osmf:] 11205: Fix bug FM-169: Trait support for data transfer sample doesn't display bytes loaded and bytes total for SWF element

    Revision: 11205
    Author:   [email protected]
    Date:     2009-10-27 15:04:26 -0700 (Tue, 27 Oct 2009)
    Log Message:
    Fix bug FM-169: Trait support for data transfer sample doesn't display bytes loaded and bytes total for SWF element
    Ticket Links:
        http://bugs.adobe.com/jira/browse/FM-169
    Modified Paths:
        osmf/trunk/apps/samples/framework/PluginSample/src/PluginSample.mxml
        osmf/trunk/apps/samples/framework/PluginSample/src/org/osmf/model/Model.as

    The bug is known, and a patch has been submitted: https://bugs.freedesktop.org/show_bug.cgi?id=80151. There's been no update since Friday, so I wonder what the current status is, or whether it's up for review at all.
    Does anyone know how we can be notified when this patch hits the kernel?

  • Best way to create placeholder frames for XML import

    This is my first attempt at importing XML into ID (CS2). My client has given me an Access database of about 150 records, each with several fields of data, including an image file and a few brief paragraphs of descriptive text.
    The goal is to have each record occupy a page in the layout.
    My plan is to set up one initial page with a placeholder frame or frames and have ID automatically duplicate it for all the records. And that's my question: do I create multiple frames, one for each field in the database, or do I create one frame and just tag various elements within the single frame?
    If I create multiple frames, I'm thinking I would have to copy/paste the group onto each new page (or am I wrong?).
    If I create one single master frame, then InDesign would just consider all the content one story and automatically create as many threaded frames as are needed, right?
    The other consideration is that the field containing the descriptive text will vary in length with each record, and I want to make sure there aren't inconsistent spacing issues, as would probably be the case if I created a unique frame for the descriptive text.
    I hope I'm being clear. Would somebody who's done this type of thing before please advise me on the best way to set up my placeholders?
    Thanks so much.

    Hi,
    You can use Web Dynpro or Visual Composer for designing the UI for your CA.
    1) Web Dynpro callable object that implements the GP interface: use a Web Dynpro Component CO to use a single Web Dynpro component in your GP activity.
    2) Web Dynpro Application callable object: this is for an entire Web Dynpro application (multiple components).
    3) Consume the application services (exposed as web services) in the Web Dynpro app.
    4) Create a Web Dynpro model for the CAF services.
    For example, in update operations we first have to get the existing data from the database. To get this data you can use application services in your Web Dynpro component, then expose this WD component as a CO and insert it in a GP activity.
    For create operations, you can use a Web Service CO (application service) directly in a GP activity.
    These links are useful for you.
    [link1|https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/a00c07d0-61e0-2a10-d589-d7bc9894b02a]
    [link2|https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/10b99341-60e0-2a10-6e80-b6e9f58e3654]
    [lnk3|https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/1078e3b0-ec5d-2a10-f08a-c9b878917b19]

  • Best Way to Replicate Azure SQL Databases to Lower Environments

    I have XML files delivered to my server, where they are parsed into reference data and written to a database (Premium tier). I want that database to sync to other databases (Basic tier) so that my non-production environments can use the same reference data.
    I tried Data Sync and it seems incredibly slow. Is Azure Data Sync the best way? What are my other options? I don't really want to change my parser to write to 3 different databases each time it receives an updated XML file, but I suppose that is an option.

    Greg,
    Data Sync is one of the options, but I wouldn't recommend it, as the Data Sync service is going to be deprecated in the near future. I would urge you to go through the options around geo-replication. There are 3 versions of geo-replication, and I believe active geo-replication would suit your requirement; however, the synced copy of the database will also have to be in the same service tier (Basic is not possible). With the current Azure offering, it is not possible to have a synced copy of a database with a different SLO. I would also recommend opening a support incident with Microsoft to understand the different geo-replication options. I composed my answer with DR (disaster recovery) in mind; if I am mistaken, please let me know.
    -Karthik Krishnamurthy (SQK Azure KKB)
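    For reference, adding an active geo-replication secondary is a single T-SQL statement, run in the master database of the primary server. The database and server names below are invented, and, as noted above, the secondary cannot be in a lower tier than the primary:

    -- Hypothetical sketch: create a readable geo-replication secondary on
    -- another server; database and server names are assumptions.
    ALTER DATABASE [ReferenceData]
      ADD SECONDARY ON SERVER [dev-sql-server]
      WITH (ALLOW_CONNECTIONS = ALL);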

  • Best way to load initial TimesTen database

    I have a customer that wants to use TimesTen as a pure in-memory database. This IMDB has about 65 tables, some having upwards of 6 million rows. What is the best way to load this data? There is no Cache Connect option being used. I am thinking INSERT is the only option here. Are there any other options?
    Thanks

    You can also use the TimesTen ttbulkcp command-line utility; this tool is similar to SQL*Loader, except that it handles both import and export of data.
    For example, the following command loads the rows listed in the file foo.dump into a table called foo in the database mydb, placing any error messages into the file foo.err.
    ttbulkcp -i -e foo.err dsn=mydb foo foo.dump
    For more information on the ttbulkcp utility you can refer to the Oracle TimesTen API Reference Guide.

  • What is the best way to handle duplicates in an XML document?

    I have an XML document that may contain duplicated nodes. I want to insert it into the DBXML database so that the duplicated nodes are eliminated.
    What would be the best way (in terms of performance) to do it?
    I thought of enforcing a uniqueness constraint and then inserting the nodes one by one, so that I get an exception from the database if a node is duplicated, but I may have more than 50000 nodes in the worst case, so I'm not sure this is still a good way to do it.
    Can someone give me some suggestions on this?
    Thanks!

    Hi,
    I would suggest reconsidering how you build your document so that it doesn't contain duplicates if you don't need them; that doesn't have much to do with DB XML, though.
    Alternatively, you could insert your document with the duplicates and use the XQuery Update facilities to delete the undesirable nodes.
    Vyacheslav

  • Data synchronization between databases

    Hi,
    This week someone on my team accidentally damaged multiple records in one of the production databases. I have a backup of the database from before it happened. I need to copy the information in a few tables from the beginning of the records (about 10 years ago) to the end of August 2014. The newer records should remain as they are.
    I was thinking that the best route is to use some sort of tool to synchronize the data between the databases.
    Do you have a recommendation for this sort of work?
    The databases (production and backup) are ASA 9.
    Best regards,
    Edgard

    Hello Edgard,
    If you have PowerBuilder in-house, the Data Pipeline object might help. Otherwise I'd have a look at Squirrel SQL; I know it can communicate with two databases simultaneously, but I confess I have never tried it.
    Paul
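    One more option, assuming the backup can be restored as a second running database: ASA's remote data access (proxy tables) lets the production database read the pre-September rows straight out of the restored copy. A sketch with invented server, DSN, table and column names:

    -- Hypothetical sketch using ASA remote data access (proxy tables).
    -- Run against the production database; backup_db_dsn points at the
    -- restored backup. All names are assumptions.
    CREATE SERVER backup_srv CLASS 'asaodbc' USING 'backup_db_dsn';

    CREATE EXISTING TABLE orders_backup
      AT 'backup_srv..dba.orders';    -- proxy for the same table in the backup

    -- Replace the damaged historical rows, keeping September 2014 onwards.
    DELETE FROM orders WHERE order_date < '2014-09-01';

    INSERT INTO orders
      SELECT * FROM orders_backup
       WHERE order_date < '2014-09-01';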

Maybe you are looking for

  • 8600 Premium won't auto-select tray based on page size

    I just bought the 8600 Premium yesterday, and am absolutely flabbergasted to find that this printer, which is TOUTED for its full capability of handling legal-sized paper can't pick different sizes of paper from Tray 1 and Tray 2 in the same print jo

  • My new 30g black

    I just purchased a new 30 gig black video ipod and im having trouble getting my music from real player to my i tunes can anyone help me with ths problem?? the last thing i want to do is to have to put all my cd's on itunes.... can someone please help

  • Title for the Legend in jfree charts

    Hi, I want to give the Legend a Header i.e a block above the legend stating the name of the Legend. I am unable to find a method that would help me with this.M using jfree1.0.2 (a small example could be helpful)

  • No sound on Intel HD Audio (3400 series 5)

    $ lspci | grep -i audio 00:1b.0 Audio device: Intel Corporation 5 Series/3400 Series Chipset High Definition Audio (rev 06) $ lsmod | grep '^snd_' | column -t snd_pcm_oss             38914  0 snd_mixer_oss           15315  1  snd_pcm_oss snd_hda_code

  • Safari on a dial-up connection

    Because I live in a rural area my only option for getting online is dial-up. I'm used to DSL A friend told me you can change some settings on Safari and pages will load faster. I'm not hoping for a miracle, but is this true, or is she mistaken?