Data Mapper to relational database

I've been reading the "Patterns of Enterprise Application Architecture" book and it has opened my mind to a lot of different designs.
I'm trying to design a large accounting system in OO. I'm unclear on how I should design the mapping between the domain logic and the database.
Should the controller classes talk to the data mappers?
With inheritance, if my classes don't directly relate to tables, should I have one data mapper or one for each class?
When I'm using a data mapper and I select all from a table, does the mapper create multiple objects or does it return a result set?
If it creates multiple objects how does the domain logic sort this data?
Do I really need a Unit of Work to manage my data mappers? How hard are they to create? Is there only one Unit of Work class for all data mappers?

I've been reading the "Patterns of Enterprise
Application Architecture" book and it has opened my
mind to a lot of different designs.Usually the worst time to apply the stuff, because you're looking to use it before you really understand. But there must be a first time for everything. Great book, though.
"I'm trying to design a large accounting system in OO."
Wow, is this for personal education or a paid gig? I hope it's the former. If not, make sure that "build versus buy" is part of the decision. Would anybody write an accounting package in this day and age, with SAP, Oracle, QuickBooks, Great Plains, and a thousand others ready to buy?
"I'm unclear on how I should design the mapping between the domain logic and database. Should the controller classes talk to the data mappers?"
Depends on what you call controllers. I like to think of them as the classes that implement the service interfaces. The methods in the service interface are the use cases. I think the distinction between controller and service is very important, because a web-based app will certainly have a controller. If you fall into the trap of putting all the logic into that class, you can't use it WITHOUT the web controller. It also puts you right in line to turn this into a service-oriented architecture.
The services will certainly call the persistence interface. Make sure you have one.
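To make the controller/service/persistence split concrete, here is a minimal sketch in Java; the names (Account, AccountMapper, AccountService) are invented for illustration, not taken from Fowler's book or your design:

// Hypothetical domain object.
class Account {
    private final long id;
    private final String name;

    Account(long id, String name) {
        this.id = id;
        this.name = name;
    }

    long getId() { return id; }
    String getName() { return name; }
}

// Persistence interface: the only part of the design that knows about the database.
interface AccountMapper {
    Account find(long id);
    void insert(Account account);
}

// Service: implements one use case, calls the persistence interface,
// and knows nothing about the web controller that invokes it.
class AccountService {
    private final AccountMapper mapper;

    AccountService(AccountMapper mapper) {
        this.mapper = mapper;
    }

    Account openAccount(long id, String name) {
        Account account = new Account(id, name);
        mapper.insert(account);
        return account;
    }
}

A web controller, a batch job, or a test can all call AccountService without knowing how AccountMapper is implemented.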
"With inheritance, if my classes don't directly relate to tables, should I have one data mapper or one for each class?"
That's one way to do it.
"When I'm using a data mapper and I select all from a table, does the mapper create multiple objects or does it return a result set?"
I think it ought to return a Collection of objects. You should never return a ResultSet. That should be instantiated and closed within the scope of the persistence layer. ResultSets are database cursors, a scarce resource. They should be kept open for the shortest time and scope possible.
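A rough JDBC illustration of that point, reusing the hypothetical Account and AccountMapper names from the sketch above; this is only one way such a finder could look, not the book's code:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

class JdbcAccountMapper {
    private final Connection connection;

    JdbcAccountMapper(Connection connection) {
        this.connection = connection;
    }

    // Returns a Collection of domain objects; the ResultSet never escapes this method.
    List<Account> findAll() throws SQLException {
        List<Account> accounts = new ArrayList<>();
        try (PreparedStatement stmt =
                 connection.prepareStatement("SELECT id, name FROM account ORDER BY name");
             ResultSet rs = stmt.executeQuery()) {
            while (rs.next()) {
                accounts.add(new Account(rs.getLong("id"), rs.getString("name")));
            }
        } // statement and cursor are closed here, inside the persistence layer
        return accounts;
    }
}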
"If it creates multiple objects, how does the domain logic sort this data?"
You won't ask for all of them unless you have something sensible to do with them. If you're referring to the literal "sort", it's an easier question: have the database do an ORDER BY in the SELECT. Hard to tell what you mean by this.
"Do I really need a Unit of Work to manage my data mappers?"
You do if you need ACID properties for transactions.
So you're going to try to do all of this by hand? A hard job, indeed. You're talking about writing your own transaction manager.
"How hard are they to create? Is there only one Unit of Work class for all data mappers?"
I haven't read Fowler's book in a while, and I don't have it handy, but it's one Unit of Work per transaction when I think about the term. What you need is something that will demarcate transaction boundaries.
If you weren't writing this yourself, you'd be using JTA (the Java Transaction API). That's what I'd recommend that you do.
But if this is an educational effort, knock yourself out.
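If you do go the educational route, a hand-rolled Unit of Work can start out as little more than an object that collects changes and flushes them inside a single database transaction. The sketch below reuses the hypothetical AccountMapper from above and is only an outline of the idea, not a real transaction manager:

import java.sql.Connection;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

// One instance per business transaction; it demarcates the transaction boundary
// and pushes the collected changes through the mappers in one commit.
class UnitOfWork {
    private final Connection connection;
    private final AccountMapper accountMapper;   // hypothetical mapper from the sketch above
    private final List<Account> newAccounts = new ArrayList<>();

    UnitOfWork(Connection connection, AccountMapper accountMapper) {
        this.connection = connection;
        this.accountMapper = accountMapper;
    }

    void registerNew(Account account) {
        newAccounts.add(account);
    }

    void commit() throws SQLException {
        connection.setAutoCommit(false);
        try {
            for (Account account : newAccounts) {
                accountMapper.insert(account);   // mapper does the SQL
            }
            connection.commit();                 // all or nothing: the ACID part
        } catch (Exception e) {
            connection.rollback();               // undo everything on any failure
            throw e;
        }
    }
}

A real one also tracks dirty and removed objects and orders the writes to respect foreign keys, which is exactly the work JTA plus an ORM would otherwise do for you.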

Similar Messages

  • How to store, in an effective way, analyzer data into a relational database?

    We want to store the "sweep traces" of a network analyzer in a relational database in a way that it saves as much as possible space without loosing resolution.
    The solutions were we thinking on are to separate the x-axes information from the y-axes information and store it in different tables of the database.
    Because the repeating character of the measurements the data in the x-axes will be nearly all ways the same. So we want to store only new data in the x-axes table as a different x-axes is detected.
    In a third table we want to save the relation between the x and y data and other data that belongs to the measurement.
    Question is are there other or better possibilities to solve this proble
    m?

    Hi Ben,
    Thanks for your help.
    Whether I use a third table that links the x-axis and y-axis tables together depends on how I store the data points in the y-axis table: if I store them sequentially, I need an identification of the points that belong together, and I can have a varying number of data points (i.e. 401, 801, ...); otherwise I save them in one record.
    The problem there is that I have to save a varying number of points in tables with a lot of "data point columns".
    Another solution is to save the data points as a semicolon-separated text string in one field.
    The problem then is the limitation on the maximum text field length.
    In my Oracle Rdb database I can use "Varchar" fields.
    (Is there no limitation here??)
    In other databases a "Note field" may offer a solution.
    The question still is: what is the best solution, and which uses the smallest amount of space?
    In the next week I will do some tests with the solutions mentioned.
    Please let me know what DSC is?
    Greetings Huub
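    One way to implement the "store an x-axis only when it is new" idea is sketched below in Java/JDBC, purely for illustration. The x_axis table, its columns, and the signature scheme are assumptions, not part of the original design:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.util.Arrays;

    class SweepStore {
        private final Connection connection;

        SweepStore(Connection connection) {
            this.connection = connection;
        }

        // Returns the id of an already stored identical x-axis, or inserts it and returns the new id.
        long findOrInsertXAxis(double[] xValues) throws SQLException {
            // Cheap signature used as the lookup key; a real system should also compare the
            // stored points, because two different axes could share the same hash.
            String signature = xValues.length + ":" + Integer.toHexString(Arrays.hashCode(xValues));
            try (PreparedStatement find = connection.prepareStatement(
                    "SELECT id FROM x_axis WHERE signature = ?")) {
                find.setString(1, signature);
                try (ResultSet rs = find.executeQuery()) {
                    if (rs.next()) {
                        return rs.getLong(1);          // reuse the existing axis
                    }
                }
            }
            try (PreparedStatement insert = connection.prepareStatement(
                    "INSERT INTO x_axis (signature, points) VALUES (?, ?)", new String[] {"id"})) {
                insert.setString(1, signature);
                insert.setString(2, encode(xValues));  // e.g. semicolon-separated text, as discussed
                insert.executeUpdate();
                try (ResultSet keys = insert.getGeneratedKeys()) {
                    keys.next();
                    return keys.getLong(1);
                }
            }
        }

        private static String encode(double[] values) {
            StringBuilder sb = new StringBuilder();
            for (double v : values) {
                if (sb.length() > 0) sb.append(';');
                sb.append(v);
            }
            return sb.toString();
        }
    }

    The measurement table then stores only the returned x-axis id next to each set of y values.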

  • Can I configure Enterprise search to search a Relational database

    We have several legacy web applications. We are currently using Coveo to search the relational databases. We create mapping XML files for the Coveo indexing services. Coveo accesses the database using JDBC. It provides the ability to create a URI for each search result; the URI is invoked when the user clicks on the search result.
    Can I set up something similar with SAP Enterprise Search?

    Hi:
    SAP NetWeaver Enterprise Search contains an SAP NetWeaver BI instance within the architecture on the appliance. You can utilize the BI features to extract data from your relational database, then index that data into the TREX part of ES. BI features both "DB Connect" and "UD Connect", which give you techniques to connect the BI system to an external database and extract data. You will need to model a DataSource, but the features of SAP NetWeaver 2004s BI make this relatively easy. Once you have a DataSource, you can create an Open Hub destination for indexing (with a transformation and DTP to deliver the data), and create a process chain to extract and index the data.
    ES offers numerous DataSources "out of the box" for extraction from SAP ERP systems; you could follow the business content structure for those, but instead use DB Connect or UD Connect to get the data from a non-SAP system.
    For more info on this, see the BI area of SDN and help.sap.com > NetWeaver > Key Capability > Information Integration > BI.
    Thanks for any points you choose to assign (they are the way of saying thanks on SDN).
    Best Regards -
    Ron Silberstein
    SAP

  • Accessing Essbase ASO Cube from Oracle Relational database

    Hi All,
    I am an Oracle database developer. We have a requirement to access Hyperion Essbase ASO cube data directly from the relational database. We have identified the options below.
    1. Use the Hyperion web service and the UTL_HTTP Oracle utility
    2. Use the Java API to access the ASO cube. The Java code would be written in an Informatica Java Transformation.
    Unfortunately, I am not finding good resources on Google on how to do this.
    I would appreciate it if someone could share their knowledge if they have implemented this.

    I am not competent to recommend any particular approach but Essbase.ru has some blog entries on using XMLA / 11.1.2.2 services and a Google Code project...
    http://essbase.ru/archives/category/performance/essbase-api/xmla
    Google will translate if you don't read Russian!

  • SQL ENTERPRISE: The edition of Reporting Services that you are using requires that you use local SQL Server relational databases for report data sources and the report server database

    The error below makes absolutely no sense! I'm using Enterprise Core...yet I'm being told I can't use remote data sources:
    w3wp!library!8!03/05/2015-19:08:48:: i INFO: Catalog SQL Server Edition = EnterpriseCore
    w3wp!library!8!03/05/2015-19:08:48:: e ERROR: Throwing Microsoft.ReportingServices.Diagnostics.Utilities.OperationNotSupportedException: , Microsoft.ReportingServices.Diagnostics.Utilities.OperationNotSupportedException: The feature: "The edition of Reporting
    Services that you are using requires that you use local SQL Server relational databases for report data sources and the report server database." is not supported in this edition of Reporting Services.;
    Really? This totally contradicts the documentation found here:
    https://msdn.microsoft.com/en-us/library/ms157285(v=sql.110).aspx
    That article says remote connections are completely supported.
    ARGH! Why does this have to be so difficult to setup?!?

    Hi jeffoliver1000,
    According to your description, you are using the Enterprise Core edition and you are prompted that you can't use remote data sources.
    In your scenario, we are not ignoring your point or doubting what you say, but we have actually seen cases before where the SQL Server engine is Enterprise yet Reporting Services is still Standard. So I would recommend that you check the actual edition of Reporting Services you are using. You can find the Reporting Services starting SKU in the Reporting Services logs (default location: C:\Program Files\Microsoft SQL Server\<instance name>\Reporting Services\LogFiles). For more information, please refer to the similar thread below:
    https://social.technet.microsoft.com/Forums/en-US/f98c2f3e-1a30-4993-ab41-acbc5014f92e/data-driven-subscription-button-not-displayed?forum=sqlreportingservices
    By the way, have you installed the other SQL Server edition before?
    Best regards,
    Qiuyun Yu
    Qiuyun Yu
    TechNet Community Support

  • Move XMLTYPE data to a remote  database

    Hello,
    I need your experience !!!!!
    I am working with the 10.2.1 database.
    I have tried to call a remote procedure (over a DBLINK) with an XMLTYPE parameter, and I get an error related to a LOB locator.
    One solution could be to save the XMLTYPE to disk.
    Does somebody know another solution to transfer XMLTYPE data to a remote database?
    Thanks in advance
    Best Regards

    A future alternative will be Oracle Streams, and by the way, one of the advantages (sometimes not, but anyway) of an Oracle database is that you can use object-oriented methods, relational methods, Java solutions, etc., and nowadays XML methods. XML was once all about transporting data, creating a uniform interface between solutions (databases, application/web servers, technology stacks, etc.). Think outside the box. XML DB isn't relational, but it is packaged in one. Make use of it and vice versa.

  • APEX Application accessing data from two different databases

    Hi All,
    Currently, as we all know, an APEX application resides in a database and is connected to a schema of that database.
    I want the APEX application to run and access data from two different databases. Elaborating my question:
    Currently, my APEX production application is connected to the XXXX schema of database DB1 (where APEX resides). Now I want to add some pages to this APEX application for reporting purposes, but I want these report pages to get their data from a different schema, YYYY, in database DB2.
    Is it possible to configure this scenario?
    The reason for doing this is to avoid the report-related (ad hoc query) resource utilization affecting the production DB1 database.
    Thanks
    Nil

    1. If you do the joining of two or more tables in DB1, then all data is pulled over to DB1 and then the join is executed: more data over the database link and more work for DB1. Better to keep the joins where the data resides and pull over only the data you actually need.
    2. Don't know about your different block sizes. Seems a nice question for one of the other forums (DBA or SQL).
    3. I mean: create synonyms on DB1 for the report views in DB2.
    Hope all is clear!

  • How to get data into the mySQL database?

    First some background.
    I have a website that has outgrown its designed dimensions and is a huge burden to maintain. See PPBM5 Benchmark
    There is a lot of maintenance work involved, so I'm investigating a PHP/MySQL approach to ease the burden and to add functionality to the site. With the current Excel-based structure and over 420 entries, it is cumbersome for me to maintain, but also for users to find what they need.
    A MySQL based dynamic structure is a lot easier and offers vastly more selection capabilities, like selecting only records that meet specific criteria.
    Data submission is done with a form that contains most of the relevant data, but the drawback is that the people submitting their data are often not technically inclined and give wrong answers due to a lack of understanding or typos. The test results are attached in one or two separate .txt files, but often they have not read the instructions correctly or did something wrong, so these attached .txt files cannot be trusted automatically; they have to be checked before inclusion.
    These were my initial thoughts:
    1. Data collection:
    To avoid spending all our energy and time on correcting typos, getting missing data, and correcting errors, I am investigating the use of CPU-Z in Ghost mode to create a .txt or .html file that contains all the relevant hardware info we need and even more. It gives all the info we currently have, but adds data like number of memory sticks, DDR timings, stock clock speed and BCLK setting, video card info and VRAM size, etc.
    To see what I mean, run CPU-Z, go to the About tab, press the Save Report button and look at the results.
    This can all be done without user intervention in an automatic way, but maybe I need to add an AutoIt file to the test to make it all run as desired.
    If this works and I'm able to extract the relevant data from the created file and insert it into the database, we may be in business for the next version, PPBM5.5 or PPBM6. It does require a modification to the instructions, making them a lot easier, because there is less data to fill out.
    2. Data submission:
    The submission form can be simplified if the CPU-Z data can be used. We have to create an automatic way to attach the created .html file from CPU-Z to the submission form, and we have to streamline the Output.txt and Output-MPE.txt files to be more easily included in the 'form.lib.php' file. It currently is manual labor and very time consuming.
    3. Adding to Database:
    I have to find a way to create database records from the Gmail forms I receive. All incoming mail messages need to be checked for relevancy and, if relevant, need to be added automatically to the database and then offered for approval before final inclusion in the database. Data included in the database will then include submission date and time, email address, IP address used, plus links to the files submitted and available on the website.
    4. Publication of the database:
    After approval of new records from step 3, all updates will be automatically applied to the database and accessible to users. I do not yet intend to introduce a user account requesting login before all functionality is accessible. Too much trouble and administration.
    Queries should be possible on things like CPU (check box), so include i7-920, i7-930, i7-950 but exclude i7-980X and i7-990X; size of memory (check box); overclocked (boolean, yes/no); SSD as OS disk; and similar options.
    The biggest problem is to keep the color grading and statistical indicators (Top, D9, Q3, Med, Q1 and D1) intact on dynamically generated queries. Say you make a query which results in 20 observations; this should show the related colors and legends. The next query results in 48 observations and of course the color grading and legends need to reflect that. The question in my mind: does the RPI remain constant, independent of the query, or does it need to be recalculated on the basis of the query?
    Next thing is to allow a user to select a specific observation and, by simply clicking on it, be shown, in a separate window (detail page) or accordion, all the CPU-Z related information about the hardware.
    The graphs, Top-20 and MPE Gains, need to be dynamically adjusted, based on the query used.
    5. Ideally, external links:
    In an ideal situation, one could link the CPU-Z data to external price databases, looking up current prices for CPU, memory, video card, disks, raid controller, etc. to get instant BFTB charts, based on the query made. But that is the next step.
    Situation now:
    I have a MySQL database that is easily updated with the new submissions. Simply create a .CSV file from the submitted forms and import that into the database. The bulk of the initial work is done. Lots remain to be done as you can see above, but that is for a later time.
    Question:
    I have this table, that needs to be filled with data in the submitted and attached files. Mr. X submitted his data and can be uniquely identified by his "Ref_ID". He attached one or two files in .TXT format with the relevant test data. These files are stored on the server with a concatenated name:
    "Ref_ID","-","filename"
    Say his Ref-ID is: 20110204-6cf5 and his submitted file is called: Output(99).txt then the file can be found on the server as
    20110204-6cf5-Output(99).txt
    I need to be able to open that comma delimited file, the contents may look like this: "439","1036","819","531" and insert these contents into the relevant record and fields.
    Graphically, [image in the original post] is what I want to achieve.
    This being my first exposure to PHP/MySQL, you can imagine I'm not clear on how to go from here.
    An added complication is that I actually have 5 numbers to insert per record, plus two calculated fields: Total Score and RPI should be calculated fields. I haven't yet figured out how to handle calculated fields; maybe only in the PHP/HTML code and not in the database.
    I hope someone can help me.

    You do have a very complex-looking site and may need several tables in MySQL to handle all that data. If you are new to PHP/MySQL I would suggest taking a look at this tutorial; it will help get you started in understanding how to $_GET info from a database and also how to $_POST data to a database. I am no expert, just learning myself, and I found this very helpful. This is the link http://www.adobe.com/devnet/dreamweaver/articles/first_dynamic_site_pt1.html
    There are also many tutorials on YouTube to help build a CMS (Content Management System). I would suggest the following:
    http://www.youtube.com/user/phpacademy
    http://www.youtube.com/user/betterphp
    http://www.youtube.com/user/flashbuilding
    And many more on my channel here
    http://www.youtube.com/user/Whisperingonthewind
    CMSs are easier to maintain, and to add, edit and delete content in.
    I have also recently bought a book by David Powers, "Training from the Source", which is very helpful.
    Anyway, hope you get it sorted.
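    Since the reply points to general tutorials rather than the specific import step, here is a rough sketch of that step in Java/JDBC rather than PHP; the results table, its column names, and the connection details are made up for illustration:

    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    public class ResultImporter {

        // Reads the one-line, comma-delimited result file and stores five numbers
        // in the row identified by refId.
        static void importResult(Connection connection, String refId, Path file)
                throws IOException, SQLException {
            String line = Files.readString(file, StandardCharsets.UTF_8).trim();
            String[] parts = line.split(",");
            if (parts.length < 5) {
                throw new IOException("Expected at least 5 values in " + file);
            }
            try (PreparedStatement stmt = connection.prepareStatement(
                    "UPDATE results SET score1 = ?, score2 = ?, score3 = ?, score4 = ?, score5 = ?"
                    + " WHERE ref_id = ?")) {
                for (int i = 0; i < 5; i++) {
                    stmt.setInt(i + 1, Integer.parseInt(parts[i].replace("\"", "").trim()));
                }
                stmt.setString(6, refId);
                stmt.executeUpdate();
            }
        }

        public static void main(String[] args) throws Exception {
            // Hypothetical connection details; the MySQL JDBC driver must be on the classpath.
            try (Connection connection = DriverManager.getConnection(
                    "jdbc:mysql://localhost/ppbm", "user", "password")) {
                importResult(connection, "20110204-6cf5", Path.of("20110204-6cf5-Output(99).txt"));
            }
        }
    }

    Total Score and RPI could then be computed in the query or in the PHP code rather than stored as columns.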

  • Not able to create XML from an existing relational database

    Hi,
    I am trying to create XML from an existing relational database. I am not able to get any XML data; I receive "XMLTYPE()" as the result.
    Here's the query -
    SQL> select o.order_id, XMLELEMENT("order", XMLATTRIBUTES(o.order_id as ID)) AS "result" from order_info o where order_id=2793;
    ORDER_ID
    result()
    2793
    XMLTYPE()
    I was expecting to get <order id=2793 /> instead of XMLTYPE().
    I am using -
    SQL*Plus: Release 9.0.1.0.1
    Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
    I have also run some checks to confirm XML DB installation -
    SQL> select 1 from all_users where username  = 'XDB';
    1
    1
    SQL> desc RESOURCE_VIEW;
    Name                                      Null?    Type
    RES                                                SYS.XMLTYPE
    ANY_PATH                                           VARCHAR2(4000)
    RESID                                              RAW(16)
    I think I have something wrong with the installation or configuration.
    Any idea what I have done wrong here?

    Works fine now. Got the result "<order ID="2793"></order>".
    Thanks. :-)

  • Essbase as a source in OBIEE 11g and using lookup to Relation database

    Hello All,
    We are trying to implement Essbase as the primary source for our OBIEE 11g repository, using a relational database as the secondary source of data.
    I was using lookup functionality in the repository to look up a value from one of the Essbase dimension's attributes in the relational database; however, there is an error when testing this in OBIEE Dashboards.
    I tried to look up this error on support.oracle.com and Googled it, but was not able to find anything on it.
    State: HY000. Code: 10058. [NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error has occurred. [nQSError: 43113] Message returned from OBIS. [nQSError: 43119] Query Failed: [nQSError: 46008] Internal error: File server\Query\Src\SQLookupUtility.cpp, line 145. (HY000)
    Have you had any such issue, and how can we get past it?
    I am trying to find if there are any quick resolution before going to oracle support.
    TIA.
    Parish

    "Is Essbase 11.1.1.3 supported as a source for OBIEE 11g?"
    From the certification matrix (http://www.oracle.com/technetwork/middleware/bi-enterprise-edition/bi-11gr1certmatrix-166168.xls) we can see that only Essbase 9.3.3+ and 11.1.2+ are supported, not 11.1.1.3.
    Paul

  • Insert XML file into Relational database model - no XMLTYPE!

    Dear all,
    How can I store a known complex XML file into an existing relational database WITHOUT using xmltypes in the database ?
    I read the article on DBMS_XMLSTORE. DBMS_XMLSTORE indeed partially bridges the gap between XML and RDBMS to a certain extent, namely for simply structured XML (canonical structure) and simple tables.
    However, when the XML structure will become arbitrary and rapidly evolving, surely there must be a way to map XML to a relational model more flexibly.
    We work in a java/Oracle10 environment that receives very large XML documents from an independent data management source. These files comply with an XML schema. That is all we know. Still, all these data must be inserted/updated daily in an existing relational model. Quite an assignment isn't it ?
    The database does and will not contain XMLTYPES, only plain RDBMS tables.
    Are you aware of a framework/product or tool to do what DBMS_XMLSTORE does but with any format of XML file ? If not, I am doomed.
    Cheers.
    Luc.
    Edited by: user6693852 on Jan 13, 2009 7:02 AM

    In case you decide to follow my advice, here's a simple example showing how to do this. (Note: the XMLTable syntax is the preferred approach in 10gR2 and later.)
    SQL> spool testase.log
    SQL> --
    SQL> connect / as sysdba
    Connected.
    SQL> --
    SQL> set define on
    SQL> set timing on
    SQL> --
    SQL> define USERNAME = XDBTEST
    SQL> --
    SQL> def PASSWORD = XDBTEST
    SQL> --
    SQL> def USER_TABLESPACE = USERS
    SQL> --
    SQL> def TEMP_TABLESPACE = TEMP
    SQL> --
    SQL> drop user &USERNAME cascade
      2  /
    old   1: drop user &USERNAME cascade
    new   1: drop user XDBTEST cascade
    User dropped.
    Elapsed: 00:00:00.59
    SQL> grant create any directory, drop any directory, connect, resource, alter session, create view to &USERNAME identified by &PASSWORD
      2  /
    old   1: grant create any directory, drop any directory, connect, resource, alter session, create view to &USERNAME identified by &PASSWORD
    new   1: grant create any directory, drop any directory, connect, resource, alter session, create view to XDBTEST identified by XDBTEST
    Grant succeeded.
    Elapsed: 00:00:00.01
    SQL> alter user &USERNAME default tablespace &USER_TABLESPACE temporary tablespace &TEMP_TABLESPACE
      2  /
    old   1: alter user &USERNAME default tablespace &USER_TABLESPACE temporary tablespace &TEMP_TABLESPACE
    new   1: alter user XDBTEST default tablespace USERS temporary tablespace TEMP
    User altered.
    Elapsed: 00:00:00.00
    SQL> connect &USERNAME/&PASSWORD
    Connected.
    SQL> --
    SQL> var SCHEMAURL varchar2(256)
    SQL> var XMLSCHEMA CLOB
    SQL> --
    SQL> set define off
    SQL> --
    SQL> begin
      2    :SCHEMAURL := 'http://xmlns.example.com/askTom/TransactionList.xsd';
      3    :XMLSCHEMA :=
      4  '<?xml version="1.0" encoding="UTF-8"?>
      5  <!--W3C Schema generated by XMLSpy v2008 rel. 2 sp2 (http://www.altova.com)-->
      6  <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:xdb="http://xmlns.oracle.com/xdb" xdb:storeVarrayAsTable="true">
      7     <xs:element name="TransactionList" type="transactionListType" xdb:defaultTable="LOCAL_TABLE"/>
      8     <xs:complexType name="transactionListType"  xdb:maintainDOM="false" xdb:SQLType="TRANSACTION_LIST_T">
      9             <xs:sequence>
    10                     <xs:element name="Transaction" type="transactionType" maxOccurs="unbounded" xdb:SQLCollType="TRANSACTION_V"/>
    11             </xs:sequence>
    12     </xs:complexType>
    13     <xs:complexType name="transactionType" xdb:maintainDOM="false"  xdb:SQLType="TRANSACTION_T">
    14             <xs:sequence>
    15                     <xs:element name="TradeVersion" type="xs:integer"/>
    16                     <xs:element name="TransactionId" type="xs:integer"/>
    17                     <xs:element name="Leg" type="legType" maxOccurs="unbounded" xdb:SQLCollType="LEG_V"/>
    18             </xs:sequence>
    19             <xs:attribute name="id" type="xs:integer" use="required"/>
    20     </xs:complexType>
    21     <xs:complexType name="paymentType"  xdb:maintainDOM="false" xdb:SQLType="PAYMENT_T">
    22             <xs:sequence>
    23                     <xs:element name="StartDate" type="xs:date"/>
    24                     <xs:element name="Value" type="xs:integer"/>
    25             </xs:sequence>
    26             <xs:attribute name="id" type="xs:integer" use="required"/>
    27     </xs:complexType>
    28     <xs:complexType name="legType"  xdb:maintainDOM="false"  xdb:SQLType="LEG_T">
    29             <xs:sequence>
    30                     <xs:element name="LegNumber" type="xs:integer"/>
    31                     <xs:element name="Basis" type="xs:integer"/>
    32                     <xs:element name="FixedRate" type="xs:integer"/>
    33                     <xs:element name="Payment" type="paymentType" maxOccurs="unbounded"  xdb:SQLCollType="PAYMENT_V"/>
    34             </xs:sequence>
    35             <xs:attribute name="id" type="xs:integer" use="required"/>
    36     </xs:complexType>
    37  </xs:schema>';
    38  end;
    39  /
    PL/SQL procedure successfully completed.
    Elapsed: 00:00:00.00
    SQL> set define on
    SQL> --
    SQL> declare
      2    res boolean;
      3    xmlSchema xmlType := xmlType(:XMLSCHEMA);
      4  begin
      5    dbms_xmlschema.registerSchema
      6    (
      7      schemaurl => :schemaURL,
      8      schemadoc => xmlSchema,
      9      local     => TRUE,
    10      genTypes  => TRUE,
    11      genBean   => FALSE,
    12      genTables => TRUE,
    13      ENABLEHIERARCHY => DBMS_XMLSCHEMA.ENABLE_HIERARCHY_NONE
    14    );
    15  end;
    16  /
    PL/SQL procedure successfully completed.
    Elapsed: 00:00:00.26
    SQL> desc LOCAL_TABLE
    Name                                                                   Null?    Type
    TABLE of SYS.XMLTYPE(XMLSchema "http://xmlns.example.com/askTom/TransactionList.xsd" Element "TransactionList") STORAGE Object-relational TYPE "TRANSACTION_LIST_T"
    SQL> --
    SQL> create or replace VIEW TRAN_VIEW
      2  as
      3  select
      4    extractvalue(x.column_value,'/Transaction/TradeVersion/text()') tradeversion,
      5    extractvalue(x.column_value,'/Transaction//text()') transactionid
      6  from
      7    local_table,
      8    table(xmlsequence(extract(OBJECT_VALUE,'/TransactionList/Transaction'))) x
      9  /
    View created.
    Elapsed: 00:00:00.01
    SQL> create or replace VIEW TRAN_LEG_VIEW
      2  as
      3  select
      4    extractvalue(x.column_value,'/Transaction/TransactionId/text()') transactionid,
      5    extractvalue(y.column_value,'/Leg/Basis/text()') leg_basis,
      6    extractValue(y.column_value,'/Leg/FixedRate/text()') leg_fixedrate
      7  from
      8    local_table,
      9    table(xmlsequence(extract(OBJECT_VALUE,'/TransactionList/Transaction'))) x,
    10    table(xmlsequence(extract(x.column_value,'/Transaction/Leg'))) y
    11  /
    View created.
    Elapsed: 00:00:00.01
    SQL> create or replace VIEW TRAN_LEG_PAY_VIEW
      2  as
      3  select
      4    extractvalue(x.column_value,'/Transaction/TransactionId/text()') transactionid,
      5    extractvalue(y.column_value,'/Leg/LegNumber/text()') leg_legnumber,
      6    extractvalue(z.column_value,'/Payment/StartDate/text()') pay_startdate,
      7    extractValue(z.column_value,'/Payment/Value/text()') pay_value
      8  from
      9    local_table,
    10    table(xmlsequence(extract(OBJECT_VALUE,'/TransactionList/Transaction'))) x,
    11    table(xmlsequence(extract(x.column_value,'/Transaction/Leg'))) y,
    12    table(xmlsequence(extract(y.column_value,'/Leg/Payment'))) z
    13  /
    View created.
    Elapsed: 00:00:00.03
    SQL> desc TRAN_VIEW
    Name                                                                   Null?    Type
    TRADEVERSION                                                                    NUMBER(38)
    TRANSACTIONID                                                                   VARCHAR2(4000)
    SQL> --
    SQL> desc TRAN_LEG_VIEW
    Name                                                                   Null?    Type
    TRANSACTIONID                                                                   NUMBER(38)
    LEG_BASIS                                                                       NUMBER(38)
    LEG_FIXEDRATE                                                                   NUMBER(38)
    SQL> --
    SQL> desc TRAN_LEG_PAY_VIEW
    Name                                                                   Null?    Type
    TRANSACTIONID                                                                   NUMBER(38)
    LEG_LEGNUMBER                                                                   NUMBER(38)
    PAY_STARTDATE                                                                   DATE
    PAY_VALUE                                                                       NUMBER(38)
    SQL> --
    SQL> create or replace VIEW TRAN_VIEW_XMLTABLE
      2  as
      3  select t.*
      4    from LOCAL_TABLE,
      5         XMLTable
      6         (
      7            '/TransactionList/Transaction'
      8            passing OBJECT_VALUE
      9            columns
    10            TRADE_VERSION  NUMBER(4) path 'TradeVersion/text()',
    11            TRANSACTION_ID NUMBER(4) path 'TransactionId/text()'
    12         ) t
    13  /
    View created.
    Elapsed: 00:00:00.01
    SQL> create or replace VIEW TRAN_LEG_VIEW_XMLTABLE
      2  as
      3  select t.TRANSACTION_ID, L.*
      4    from LOCAL_TABLE,
      5         XMLTable
      6         (
      7            '/TransactionList/Transaction'
      8            passing OBJECT_VALUE
      9            columns
    10            TRANSACTION_ID NUMBER(4) path 'TransactionId/text()',
    11            LEG            XMLType   path 'Leg'
    12         ) t,
    13         XMLTABLE
    14         (
    15           '/Leg'
    16           passing LEG
    17           columns
    18           LEG_NUMBER     NUMBER(4) path 'LegNumber/text()',
    19           LEG_BASIS      NUMBER(4) path 'Basis/text()',
    20           LEG_FIXED_RATE NUMBER(4) path 'FixedRate/text()'
    21         ) l
    22  /
    View created.
    Elapsed: 00:00:00.01
    SQL> create or replace VIEW TRAN_LEG_PAY_VIEW_XMLTABLE
      2  as
      3  select TRANSACTION_ID, L.LEG_NUMBER, P.*
      4    from LOCAL_TABLE,
      5         XMLTable
      6         (
      7            '/TransactionList/Transaction'
      8            passing OBJECT_VALUE
      9            columns
    10            TRANSACTION_ID NUMBER(4) path 'TransactionId/text()',
    11            LEG            XMLType   path 'Leg'
    12         ) t,
    13         XMLTABLE
    14         (
    15           '/Leg'
    16           passing LEG
    17           columns
    18           LEG_NUMBER     NUMBER(4) path 'LegNumber/text()',
    19           PAYMENT        XMLType   path 'Payment'
    20         ) L,
    21         XMLTABLE
    22         (
    23           '/Payment'
    24           passing PAYMENT
    25           columns
    26           PAY_START_DATE     DATE      path 'StartDate/text()',
    27           PAY_VALUE          NUMBER(4) path 'Value/text()'
    28         ) p
    29  /
    View created.
    Elapsed: 00:00:00.03
    SQL> desc TRAN_VIEW_XMLTABLE
    Name                                                                   Null?    Type
    TRADE_VERSION                                                                   NUMBER(4)
    TRANSACTION_ID                                                                  NUMBER(4)
    SQL> --
    SQL> desc TRAN_LEG_VIEW_XMLTABLE
    Name                                                                   Null?    Type
    TRANSACTION_ID                                                                  NUMBER(4)
    LEG_NUMBER                                                                      NUMBER(4)
    LEG_BASIS                                                                       NUMBER(4)
    LEG_FIXED_RATE                                                                  NUMBER(4)
    SQL> --
    SQL> desc TRAN_LEG_PAY_VIEW_XMLTABLE
    Name                                                                   Null?    Type
    TRANSACTION_ID                                                                  NUMBER(4)
    LEG_NUMBER                                                                      NUMBER(4)
    PAY_START_DATE                                                                  DATE
    PAY_VALUE                                                                       NUMBER(4)
    SQL> --
    SQL> set long 10000 pages 100 lines 128
    SQL> set timing on
    SQL> set autotrace on explain
    SQL> set heading on feedback on
    SQL> --
    SQL> VAR DOC1 CLOB
    SQL> VAR DOC2 CLOB
    SQL> --
    SQL> begin
      2    :DOC1 :=
      3  '<TransactionList>
      4    <Transaction id="1">
      5      <TradeVersion>1</TradeVersion>
      6      <TransactionId>1</TransactionId>
      7      <Leg id="1">
      8        <LegNumber>1</LegNumber>
      9        <Basis>1</Basis>
    10        <FixedRate>1</FixedRate>
    11        <Payment id="1">
    12          <StartDate>2000-01-01</StartDate>
    13          <Value>1</Value>
    14        </Payment>
    15        <Payment id="2">
    16          <StartDate>2000-01-02</StartDate>
    17          <Value>2</Value>
    18        </Payment>
    19      </Leg>
    20      <Leg id="2">
    21        <LegNumber>2</LegNumber>
    22        <Basis>2</Basis>
    23        <FixedRate>2</FixedRate>
    24        <Payment id="1">
    25          <StartDate>2000-02-01</StartDate>
    26          <Value>10</Value>
    27        </Payment>
    28        <Payment id="2">
    29          <StartDate>2000-02-02</StartDate>
    30          <Value>20</Value>
    31        </Payment>
    32      </Leg>
    33    </Transaction>
    34    <Transaction id="2">
    35      <TradeVersion>2</TradeVersion>
    36      <TransactionId>2</TransactionId>
    37      <Leg id="1">
    38        <LegNumber>21</LegNumber>
    39        <Basis>21</Basis>
    40        <FixedRate>21</FixedRate>
    41        <Payment id="1">
    42          <StartDate>2002-01-01</StartDate>
    43          <Value>21</Value>
    44        </Payment>
    45        <Payment id="2">
    46          <StartDate>2002-01-02</StartDate>
    47          <Value>22</Value>
    48        </Payment>
    49      </Leg>
    50      <Leg id="22">
    51        <LegNumber>22</LegNumber>
    52        <Basis>22</Basis>
    53        <FixedRate>22</FixedRate>
    54        <Payment id="21">
    55          <StartDate>2002-02-01</StartDate>
    56          <Value>210</Value>
    57        </Payment>
    58        <Payment id="22">
    59          <StartDate>2002-02-02</StartDate>
    60          <Value>220</Value>
    61        </Payment>
    62      </Leg>
    63    </Transaction>
    64  </TransactionList>';
    65    :DOC2 :=
    66  '<TransactionList>
    67    <Transaction id="31">
    68      <TradeVersion>31</TradeVersion>
    69      <TransactionId>31</TransactionId>
    70      <Leg id="31">
    71        <LegNumber>31</LegNumber>
    72        <Basis>31</Basis>
    73        <FixedRate>31</FixedRate>
    74        <Payment id="31">
    75          <StartDate>3000-01-01</StartDate>
    76          <Value>31</Value>
    77        </Payment>
    78      </Leg>
    79    </Transaction>
    80  </TransactionList>';
    81  end;
    82  /
    PL/SQL procedure successfully completed.
    Elapsed: 00:00:00.01
    SQL> insert into LOCAL_TABLE values ( xmltype(:DOC1))
      2  /
    1 row created.
    Elapsed: 00:00:00.01
    Execution Plan
    Plan hash value: 1
    | Id  | Operation                | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | INSERT STATEMENT         |             |     1 |   100 |     1   (0)| 00:00:01 |
    |   1 |  LOAD TABLE CONVENTIONAL | LOCAL_TABLE |       |       |            |          |
    SQL> insert into LOCAL_TABLE values ( xmltype(:DOC2))
      2  /
    1 row created.
    Elapsed: 00:00:00.01
    Execution Plan
    Plan hash value: 1
    | Id  | Operation                | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | INSERT STATEMENT         |             |     1 |   100 |     1   (0)| 00:00:01 |
    |   1 |  LOAD TABLE CONVENTIONAL | LOCAL_TABLE |       |       |            |          |
    SQL> select * from TRAN_VIEW_XMLTABLE
      2  /
    TRADE_VERSION TRANSACTION_ID
                1              1
                2              2
               31             31
    3 rows selected.
    Elapsed: 00:00:00.03
    Execution Plan
    Plan hash value: 650975545
    | Id  | Operation          | Name                           | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |                                |     3 |   168 |     3   (0)| 00:00:01 |
    |   1 |  NESTED LOOPS      |                                |     3 |   168 |     3   (0)| 00:00:01 |
    |*  2 |   TABLE ACCESS FULL| SYS_NTGgl+TKyhQnWoFRSrCxeX9g== |     3 |   138 |     3   (0)| 00:00:01 |
    |*  3 |   INDEX UNIQUE SCAN| SYS_C0010174                   |     1 |    10 |     0   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - filter("SYS_NC_TYPEID$" IS NOT NULL)
       3 - access("NESTED_TABLE_ID"="LOCAL_TABLE"."SYS_NC0000800009$")
    Note
       - dynamic sampling used for this statement
    SQL> select * from TRAN_LEG_VIEW_XMLTABLE
      2  /
    TRANSACTION_ID LEG_NUMBER  LEG_BASIS LEG_FIXED_RATE
                 1          1          1              1
                 1          2          2              2
                 2         21         21             21
                 2         22         22             22
                31         31         31             31
    5 rows selected.
    Elapsed: 00:00:00.04
    Execution Plan
    Plan hash value: 1273661583
    | Id  | Operation           | Name                           | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT    |                                |     5 |   560 |     7  (15)| 00:00:01 |
    |*  1 |  HASH JOIN          |                                |     5 |   560 |     7  (15)| 00:00:01 |
    |   2 |   NESTED LOOPS      |                                |     3 |   159 |     3   (0)| 00:00:01 |
    |*  3 |    TABLE ACCESS FULL| SYS_NTGgl+TKyhQnWoFRSrCxeX9g== |     3 |   129 |     3   (0)| 00:00:01 |
    |*  4 |    INDEX UNIQUE SCAN| SYS_C0010174                   |     1 |    10 |     0   (0)| 00:00:01 |
    |*  5 |   TABLE ACCESS FULL | SYS_NTUmyermF/S721C/2UXo40Uw== |     5 |   295 |     3   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - access("SYS_ALIAS_1"."NESTED_TABLE_ID"="SYS_ALIAS_0"."SYS_NC0000800009$")
       3 - filter("SYS_NC_TYPEID$" IS NOT NULL)
       4 - access("NESTED_TABLE_ID"="LOCAL_TABLE"."SYS_NC0000800009$")
       5 - filter("SYS_NC_TYPEID$" IS NOT NULL)
    Note
       - dynamic sampling used for this statement
    SQL> select * from TRAN_LEG_PAY_VIEW_XMLTABLE
      2  /
    TRANSACTION_ID LEG_NUMBER PAY_START  PAY_VALUE
                 1          1 01-JAN-00          1
                 1          1 02-JAN-00          2
                 1          2 01-FEB-00         10
                 1          2 02-FEB-00         20
                 2         21 01-JAN-02         21
                 2         21 02-JAN-02         22
                 2         22 01-FEB-02        210
                 2         22 02-FEB-02        220
                31         31 01-JAN-00         31
    9 rows selected.
    Elapsed: 00:00:00.07
    Execution Plan
    Plan hash value: 4004907785
    | Id  | Operation            | Name                           | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT     |                                |     9 |  1242 |    10  (10)| 00:00:01 |
    |*  1 |  HASH JOIN           |                                |     9 |  1242 |    10  (10)| 00:00:01 |
    |*  2 |   HASH JOIN          |                                |     5 |   480 |     7  (15)| 00:00:01 |
    |   3 |    NESTED LOOPS      |                                |     3 |   159 |     3   (0)| 00:00:01 |
    |*  4 |     TABLE ACCESS FULL| SYS_NTGgl+TKyhQnWoFRSrCxeX9g== |     3 |   129 |     3   (0)| 00:00:01 |
    |*  5 |     INDEX UNIQUE SCAN| SYS_C0010174                   |     1 |    10 |     0   (0)| 00:00:01 |
    |*  6 |    TABLE ACCESS FULL | SYS_NTUmyermF/S721C/2UXo40Uw== |     5 |   215 |     3   (0)| 00:00:01 |
    |*  7 |   TABLE ACCESS FULL  | SYS_NTelW4ZRtKS+WKqCaXhsHnNQ== |     9 |   378 |     3   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - access("NESTED_TABLE_ID"="SYS_ALIAS_1"."SYS_NC0000900010$")
       2 - access("SYS_ALIAS_1"."NESTED_TABLE_ID"="SYS_ALIAS_0"."SYS_NC0000800009$")
       4 - filter("SYS_NC_TYPEID$" IS NOT NULL)
       5 - access("NESTED_TABLE_ID"="LOCAL_TABLE"."SYS_NC0000800009$")
       6 - filter("SYS_NC_TYPEID$" IS NOT NULL)
       7 - filter("SYS_NC_TYPEID$" IS NOT NULL)
    Note
       - dynamic sampling used for this statement
    SQL>
    Out of interest, why are you so against using XMLType...
    Edited by: mdrake on Jan 13, 2009 8:25 AM

  • Insert XML file into Relational database model without using XMLTYPE tables

    Dear all,
    How can I store a known complex XML file into an existing relational database WITHOUT using xmltypes in the database ?
    I read the article on DBMS_XMLSTORE. DBMS_XMLSTORE indeed partially bridges the gap between XML and RDBMS to a certain extent, namely for simply structured XML (canonical structure) and simple tables.
    However, when the XML structure will become arbitrary and rapidly evolving, surely there must be a way to map XML to a relational model more flexibly.
    We work in a java/Oracle10 environment that receives very large XML documents from an independent data management source. These files comply with an XML schema. That is all we know. Still, all these data must be inserted/updated daily in an existing relational model. Quite an assignment isn't it ?
    The database does and will not contain XMLTYPES, only plain RDBMS tables.
    Are you aware of a framework/product or tool to do what DBMS_XMLSTORE does but with any format of XML file ? If not, I am doomed.
    Constraints : Input via XML files defined by third party
    Storage : relational database model with hundreds of tables and thousands of existing queries that can not be touched. The model must not be altered.
    Target : get this XML into the database on a daily basis via an automated process.
    Cheers.
    Luc.

    Luc,
    you're doomed!
    If you were to try something like DBMS_XMLSTORE, you would probably run into serious performance problems in your case, very fast, and it would be very difficult to manage.
    If you would use a little bit of XMLType stuff, you would be able to shred the data into the relational model very fast and in a controllable way. Take it from me, I am one of those old geezers like Mr. Tom Kyte, way beyond 40 years (still joking). No, seriously. I started out as a classical PL/SQL and Forms guy who switched after two years to become a "DBA 1.0", and Mr. Codd and Mr. Date were for years my biggest heroes. I have the utmost respect for Mr. Tom Kyte for all his efforts in bringing the concepts manual into the development world. Just to name some of the names that influenced me. But you will have to work with UNSTRUCTURED data (as Mr. Date would call it); 80% of the data out there consists of it. Stuff like XMLTABLE and XML views bridge the gap between that unstructured world and the relational world. It is very doable to drag and drop an XML file into the XML DB database, into an XMLType table, or load it for instance via FTP. From that point on it is in the database. From there you can move it into relational tables via XMLTABLE methods or XML views.
    You could see the described method as a filtering option so XML can be transformed into relational data. If you don't want any XML in your current database, then create a small Oracle database with XML DB installed (if doable, 11.1.0.7, regarding the best performance: all the new fast optimizer stuff, etc.). Use that database as a staging area that does all the XML shredding for you into relational components, and ship the end result via database links and/or materialized views or other probably known methods into your relational database that isn't allowed to have XMLType.
    This way you would keep your relational Oracle database clean and have the Oracle XML DB staging database do all the filtering and shredding into relational components.
    Throwing the XML DB option out of the window beforehand would be like replacing your Mercedes with a bicycle. With both you will be able to travel the distance from Paris to Rome, but it will take you a hell of a lot longer.
    :-)

  • How to update transaction data automatically into MySQL database using PI

    Dear All,
    With reference to the subject matter, I would like some sincere advice on how to update transaction data automatically into a MySQL database using PI. Is there any link available where I can get a step-by-step process?
    Ex: I have a MySQL database in my organization. Whenever a delivery is created in SAP, some fields like DO number, DO quantity, and SO/STO number should get updated in the MySQL database automatically.
    This scenario is related to updating transactional data into the MySQL DB, and I want your suggestions pertaining to this issue.
    Thanks and Regards,
    Chandra Sekhar

    Hi,
    Develop a scenario between SAP and the database system. When the data is updated in the SAP tables, read the data and update it in the database using the JDBC adapter, but there will be some delay in updating the data in MySQL.
    Search on SDN for IDoc-to-JDBC scenarios; many documents are available for the same.
    Regards,
    Raja Sekhar

  • Test data generator for Oracle database

    Hello.
    I'm a student of Computer Engineering in Poland. I've recently written a simple application that can help people populate a database with some simple test data (the application works only on relational databases).
    this is link to website: http://testdatagenerator.jak.pl/
    Requires: .NET Framework 2.0 and ODP.NET
    Every feedback is welcome!

    Yes, I know about it...
    In Firefox you can easily close the ad (in IE it's a little bit difficult).
    Here is a direct link to the installer (if someone has a problem dealing with the ads):
    http://metis.weia.po.opole.pl/~d51422/generatorOracle/setup_ml.exe
    (both EN and PL version)

  • CITADEL DATABASE NOT CONFIGURED AS A RELATIONAL DATABASE

    HI:
    I have the same problem described by another member before, and I haven't found any resolution to this problem:
    The problem was:
    >I enabled database logging, and configured the shared variables that I wanted to log. Then I deployed the variables. When I try to read alarms and events (Alarm & Event Query.vi), I get this message:
    >Error -1967386611 occurred at HIST_RunAlarmQueryCORE.vi,
    Citadel:  (Hex 0x8ABC100D) The given Citadel database is not currently
    configured to log alarms to a relational database.
    Please help me with this topic
    Thanks in advance

    I'm configuring the variables to be logged in the same way that appears in the file you sent, but it doesn't work... I don't know what else to do.
    I'm sending you the configuration image file, the error message image, and a simple VI that creates the database; after that, values are logged. I generate several values for the variable, values that are above the HI limit of the acceptance value (previously configured), so alarms are generated. When I push the STOP button, the system stops logging values to the database and performs a query on the alarms database, and the corresponding error is generated... (file attached)
    The result: the data is logged correctly in the DATA database (I can view the trace with the aid of MAX), but the generated alarm is not logged in the alarms database created programmatically...
    The same VI is used but creating another database manually with the aid of MAX and configuring the library to log the alarms to that database... same result.
    I tried these same conditions on three different PCs with the same result, and I tried reinstalling LabVIEW (development and DSC) completely (uff!) and it still doesn't work... What else can I do?
    I'd appreciate very much your help.
    Ignacio
    Attachments:
    error.jpg ‏56 KB
    test_db.vi ‏38 KB
    config.jpg ‏150 KB
