Do we still need relational databases?
Dear all,
will this new in-memory technology reduce the need for a relational database?
There are also rumors about a column-based database... are we talking about the same product?
Thanks,
Federico Biavati
Hi Vitali,
2 things:
1st:
It is about much more than ANSI SQL and the in-memory stuff.
See how the different DB vendors
* handle lock escalation on their DB objects
* handle concurrency
* provide high availability
* provide reliable DB recovery and backups
This is knowledge they had to buy from Sybase; SAP will not invent a new relational DB system.
Sybase has a very small customer base, which may grow with SAP installations in the future.
But keep in mind: other DB vendors are, technologically speaking, more advanced (not only Oracle).
2nd:
Oracle Exadata is not vaporware like HANA; it has been available to buy since 2008 (first on HP servers, now also on Sun systems),
and it addresses not only in-memory but also intelligent prefetching of OS data blocks via massively parallel systems BEFORE they go into the DB buffer.
Think of the IT DB staff: you need experienced personnel for it, and I am not sure SAP can turn Oracle DB staff into Sybase staff
at their customer base (at least not in the U.S.).
I think customers will also force SAP to certify the Exadata machine.
A best-of-breed strategy may go for Exadata on Oracle plus SAP ERP/BW.
bye
yk
Similar Messages
-
Migrating HP SIM related databases to a new database server
Greetings,
I'm working on a migration task for our HP SIM to a new database server (the application and services will reside on the original server).
Migrating the Insight_v50_0_xxx database and redirecting the server to point to the new database was no problem, and HP SIM runs along just fine.
There are still some related databases that I haven't moved yet, and I've searched all over the web to get some documentation on the matter, but found none.
The databases and related services are the following.
PMP_V5_0 - PMP Services
hpvmmsqldb - Virtual Machine Management
I guess I could do like I did with the Insight_v50_0_xxx , take a backup and restore it on the new server and then change the configuration files to point to the new one. The big issue is that I can't seem to find any configuration files for these services.
Also, I've run into something called mx services, and it's the same story there: I have no idea where these point.
Any HP SIM savvy technician out there that think they can help me out?
1. Product: HP SIM and related services
2. Operating System: Windows Server 2008 R2
Best regards,
Karl
Old topic, but in case someone is searching...
PMP database uses an ODBC connection. Easy to reconfigure.
HPVMMDB uses a JDBC configuration file in the \Program Files\HP\Insight Control Virtual Machine Management\Bin folder called hpvmmdb.conf. -
CITADEL and RELATIONAL DATABASE Alarms & Events Logging Time Resolution
Hello,
I would like to know how I can setup Logging Time Resolution when Logging Alarms & Events to CITADEL or RELATIONAL DATABASE.
I tried using the Logging:Time Resolution property of Class: Variable Properties without success.
In my application I need the timestamps of SetTime, AckTime and ClearTime to be logged using second time resolution, in other words, with the fractional-seconds part zero.
I am using an Event Structure to get the SetTime, AckTime and ClearTime events, and I want to UPDATE the Area and Ack Comment fields through Database Connectivity. But when I use the SetTime timestamp supplied by the Event Structure in the WHERE clause, it's not possible to get the right alarm record, because there is a difference in time resolution between the LabVIEW SetTime timestamp and the timestamp value logged in the database.
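One way around the mismatch (a minimal sketch in Python, outside LabVIEW, with hypothetical table and column names) is to truncate the event timestamp to whole seconds and match a one-second window in the WHERE clause instead of an exact value:

```python
from datetime import datetime, timedelta

def truncate_to_second(ts):
    """Drop the fractional-seconds part so both sides of the
    comparison are at one-second resolution."""
    return ts.replace(microsecond=0)

def build_update_where(set_time):
    # Hypothetical table/column names. Matching a one-second window
    # means differing fractional seconds can no longer break the match.
    start = truncate_to_second(set_time)
    end = start + timedelta(seconds=1)
    sql = ("UPDATE Alarms SET Area = ?, AckComment = ? "
           "WHERE SetTime >= ? AND SetTime < ?")
    return sql, (start, end)
```

The same windowing trick works in any SQL dialect; the key point is never to compare a sub-second timestamp for equality against a value logged at whole-second resolution.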
Eduardo Condemarin
Attachments:
Logging Time Resolution.jpg 271 KB
I'm configuring the variables to be logged in the same way that appears in the file you sent, but it doesn't work... I don't know what else to do.
I'm sending you the configuration image file, the error message image, and a simple VI that creates the database; after that, values are logged. I generate several values for the variable that are above the HI limit of the acceptance value (previously configured), so alarms are generated. When I push the STOP button, the system stops logging values to the database and performs a query on the alarms database, and the corresponding error is generated... (file attached)
The result: the data is logged correctly in the DATA database (I can view the trace with the aid of MAX), but the generated alarm is not logged in the alarms database created programmatically...
The same VI is used but creating another database manually with the aid of MAX and configuring the library to log the alarms to that database... same result.
I tried the same conditions on three different PCs with the same result, and I tried to reinstall LabVIEW (development and DSC) completely (uff!) and it still doesn't work... what else can I do?
I'd appreciate very much your help.
Ignacio
Attachments:
error.jpg 56 KB
test_db.vi 38 KB
config.jpg 150 KB -
Will BW still be needed when using HANA?
Hi Experts,
Is HANA a replacement for a relational database? What support and maintenance exist for such a database? Which applications can use HANA: OLTP, BW, SCM, CRM? Is HANA the same as the BW/BI Accelerator? The HANA appliance may be an addition or plug-in to an existing traditional database, used for reporting and query purposes, with queries up to 3600x faster than on a traditional database. May HANA also replace the OLAP/BI/BW requirement for certain functions? Is HANA also a replacement for BusinessObjects BI in the future? Will BW still be needed when using HANA?
"Will HANA eliminate the need for BW?
This is a question that surprisingly keeps creeping up from all sides and customers. However, we can only see so far ahead. My current statement is: not for the next five to eight years, based on two basic myths that must be busted right now:
1. If we take all the ERP fields into a data warehouse, or an appliance, we should then be able to answer every question.
a. This is a myth that is still prevailing in the data warehouse world. It is most applicable to the Accelerated Explorer and now to HANA, and not considering it will result in a costly mistake. All that data can answer is simple questions. However, large organizations do not live by simple answers alone. Data-filled appliances cannot answer the complex questions that make modern enterprises run: 'Give me the stock position for the next week considering all operational variables like production schedules, stock, returns, customer bookings, requisitions, POs in process, etc.'
Release 1.0 of HANA is for non-disruptive and very fast operational reporting. Operational reporting normally consists of simple reports and meets day-to-day reporting needs.
b. HANA version 1.0 is for ECC data access only
c. HANA leverages the existing technologies of the BW Accelerator, BusinessObjects Explorer and BusinessObjects WebI, along with the Sybase database, to deliver real-time informatics in critical areas. The key here is 'critical areas' and how the business defines this singularity for using HANA.
d. Data warehouses enable complex transformations, summarizations, exceptions and a host of other filtering that enables a 'SPOT' (Single Point of Truth) answer to critical and complex questions. As long as executives need answers to complex questions, data warehouses will remain, unless of course we get a self-modeling InfoCube that remodels itself on information demand and consumption." -
Segment reporting, split by profit center; still need segment splitting?
Hi experts,
With reference to the subject of this post, I would like to have better understanding on the followings:
Settings (1) IMG > Financial Accounting (New) > Financial Accounting Global Settings (New) > Ledgers > Ledger > Assign Scenarios and Customer Fields to Ledgers ... I have settings of <Profit Center Update and Cost Center Update>
VERSUS
Settings (2) IMG > Financial Accounting (New) > General Ledger Accounting (New) > Business Transactions > Document Splitting > Define Document Splitting Characteristics for General Ledger Accounting ... I have profit center to be split (updated)
My concern is that I want to have segment reporting. The SAP material says that the segment is derived from the profit center.
My Question: With the above settings (1) and (2), do I still need to have additional settings for segment? For example:
Settings (2), do I need to set segment here?
How about settings (1), any need to have <segment update>?
In settings (2), the help file says: "This determines what fields in a ledger are updated when it receives postings from other application components." I'm not quite sure what this statement means.
Thanks and regards,
sbmel
Hello,
The actual need for scenarios to be added to every ledger is the need for these objects to be updated in the new tables like FAGLFLEXT. If you do not add the scenarios, then the cost center, profit center and segment information will not be updated in the New GL tables.
Hence it is mandatory to pass the entire information related to cost centers, profit centers and segments by adding the relevant scenarios to all the activated ledgers.
Secondly, the segment is a field that can be derived not only from the profit center but also through a BAdI, using logic written in ABAP code.
Regards,
Sam -
Do I still need to run the catupgrd.sql??
Hi
I would like to know whether, if I install Oracle using the following procedure, I still need to run catupgrd.sql:
1. Install Oracle 10.2.0.1 software only( i.e. without create any database instance)
2. Install Oracle 10.2.0.2 patch
3. Create an Oracle Instance using DBCA
Do I still need to run the catupgrd.sql for the newly created database instance?
Denis
"Do I still need to run the catupgrd.sql for the newly created database instance?"
No, you don't, because the database was created on already patched software.
-
Insert XML file into Relational database model - no XMLTYPE!
Dear all,
How can I store a known complex XML file into an existing relational database WITHOUT using xmltypes in the database ?
I read the article on DBMS_XMLSTORE. DBMS_XMLSTORE indeed partially bridges the gap between XML and RDBMS to a certain extent, namely for simply structured XML (canonical structure) and simple tables.
However, when the XML structure becomes arbitrary and rapidly evolving, surely there must be a way to map XML to a relational model more flexibly.
We work in a Java/Oracle 10 environment that receives very large XML documents from an independent data management source. These files comply with an XML schema. That is all we know. Still, all these data must be inserted/updated daily into an existing relational model. Quite an assignment, isn't it?
The database does not and will not contain XMLTYPEs, only plain RDBMS tables.
Are you aware of a framework/product or tool to do what DBMS_XMLSTORE does but with any format of XML file ? If not, I am doomed.
Cheers.
Luc.
Edited by: user6693852 on Jan 13, 2009 7:02 AM
In case you decide to follow my advice, here's a simple example showing how to do this. (Note: the XMLTable syntax is the preferred approach in 10gR2 and later.)
SQL> spool testase.log
SQL> --
SQL> connect / as sysdba
Connected.
SQL> --
SQL> set define on
SQL> set timing on
SQL> --
SQL> define USERNAME = XDBTEST
SQL> --
SQL> def PASSWORD = XDBTEST
SQL> --
SQL> def USER_TABLESPACE = USERS
SQL> --
SQL> def TEMP_TABLESPACE = TEMP
SQL> --
SQL> drop user &USERNAME cascade
2 /
old 1: drop user &USERNAME cascade
new 1: drop user XDBTEST cascade
User dropped.
Elapsed: 00:00:00.59
SQL> grant create any directory, drop any directory, connect, resource, alter session, create view to &USERNAME identified by &PASSWORD
2 /
old 1: grant create any directory, drop any directory, connect, resource, alter session, create view to &USERNAME identified by &PASSWORD
new 1: grant create any directory, drop any directory, connect, resource, alter session, create view to XDBTEST identified by XDBTEST
Grant succeeded.
Elapsed: 00:00:00.01
SQL> alter user &USERNAME default tablespace &USER_TABLESPACE temporary tablespace &TEMP_TABLESPACE
2 /
old 1: alter user &USERNAME default tablespace &USER_TABLESPACE temporary tablespace &TEMP_TABLESPACE
new 1: alter user XDBTEST default tablespace USERS temporary tablespace TEMP
User altered.
Elapsed: 00:00:00.00
SQL> connect &USERNAME/&PASSWORD
Connected.
SQL> --
SQL> var SCHEMAURL varchar2(256)
SQL> var XMLSCHEMA CLOB
SQL> --
SQL> set define off
SQL> --
SQL> begin
2 :SCHEMAURL := 'http://xmlns.example.com/askTom/TransactionList.xsd';
3 :XMLSCHEMA :=
4 '<?xml version="1.0" encoding="UTF-8"?>
5 <!--W3C Schema generated by XMLSpy v2008 rel. 2 sp2 (http://www.altova.com)-->
6 <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:xdb="http://xmlns.oracle.com/xdb" xdb:storeVarrayAsTable="true">
7 <xs:element name="TransactionList" type="transactionListType" xdb:defaultTable="LOCAL_TABLE"/>
8 <xs:complexType name="transactionListType" xdb:maintainDOM="false" xdb:SQLType="TRANSACTION_LIST_T">
9 <xs:sequence>
10 <xs:element name="Transaction" type="transactionType" maxOccurs="unbounded" xdb:SQLCollType="TRANSACTION_V"/>
11 </xs:sequence>
12 </xs:complexType>
13 <xs:complexType name="transactionType" xdb:maintainDOM="false" xdb:SQLType="TRANSACTION_T">
14 <xs:sequence>
15 <xs:element name="TradeVersion" type="xs:integer"/>
16 <xs:element name="TransactionId" type="xs:integer"/>
17 <xs:element name="Leg" type="legType" maxOccurs="unbounded" xdb:SQLCollType="LEG_V"/>
18 </xs:sequence>
19 <xs:attribute name="id" type="xs:integer" use="required"/>
20 </xs:complexType>
21 <xs:complexType name="paymentType" xdb:maintainDOM="false" xdb:SQLType="PAYMENT_T">
22 <xs:sequence>
23 <xs:element name="StartDate" type="xs:date"/>
24 <xs:element name="Value" type="xs:integer"/>
25 </xs:sequence>
26 <xs:attribute name="id" type="xs:integer" use="required"/>
27 </xs:complexType>
28 <xs:complexType name="legType" xdb:maintainDOM="false" xdb:SQLType="LEG_T">
29 <xs:sequence>
30 <xs:element name="LegNumber" type="xs:integer"/>
31 <xs:element name="Basis" type="xs:integer"/>
32 <xs:element name="FixedRate" type="xs:integer"/>
33 <xs:element name="Payment" type="paymentType" maxOccurs="unbounded" xdb:SQLCollType="PAYMENT_V"/>
34 </xs:sequence>
35 <xs:attribute name="id" type="xs:integer" use="required"/>
36 </xs:complexType>
37 </xs:schema>';
38 end;
39 /
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.00
SQL> set define on
SQL> --
SQL> declare
2 res boolean;
3 xmlSchema xmlType := xmlType(:XMLSCHEMA);
4 begin
5 dbms_xmlschema.registerSchema
6 (
7 schemaurl => :schemaURL,
8 schemadoc => xmlSchema,
9 local => TRUE,
10 genTypes => TRUE,
11 genBean => FALSE,
12 genTables => TRUE,
13 ENABLEHIERARCHY => DBMS_XMLSCHEMA.ENABLE_HIERARCHY_NONE
14 );
15 end;
16 /
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.26
SQL> desc LOCAL_TABLE
Name Null? Type
TABLE of SYS.XMLTYPE(XMLSchema "http://xmlns.example.com/askTom/TransactionList.xsd" Element "TransactionList") STORAGE Object-relational TYPE "TRANSACTION_LIST_T"
SQL> --
SQL> create or replace VIEW TRAN_VIEW
2 as
3 select
4 extractvalue(x.column_value,'/Transaction/TradeVersion/text()') tradeversion,
5 extractvalue(x.column_value,'/Transaction//text()') transactionid
6 from
7 local_table,
8 table(xmlsequence(extract(OBJECT_VALUE,'/TransactionList/Transaction'))) x
9 /
View created.
Elapsed: 00:00:00.01
SQL> create or replace VIEW TRAN_LEG_VIEW
2 as
3 select
4 extractvalue(x.column_value,'/Transaction/TransactionId/text()') transactionid,
5 extractvalue(y.column_value,'/Leg/Basis/text()') leg_basis,
6 extractValue(y.column_value,'/Leg/FixedRate/text()') leg_fixedrate
7 from
8 local_table,
9 table(xmlsequence(extract(OBJECT_VALUE,'/TransactionList/Transaction'))) x,
10 table(xmlsequence(extract(x.column_value,'/Transaction/Leg'))) y
11 /
View created.
Elapsed: 00:00:00.01
SQL> create or replace VIEW TRAN_LEG_PAY_VIEW
2 as
3 select
4 extractvalue(x.column_value,'/Transaction/TransactionId/text()') transactionid,
5 extractvalue(y.column_value,'/Leg/LegNumber/text()') leg_legnumber,
6 extractvalue(z.column_value,'/Payment/StartDate/text()') pay_startdate,
7 extractValue(z.column_value,'/Payment/Value/text()') pay_value
8 from
9 local_table,
10 table(xmlsequence(extract(OBJECT_VALUE,'/TransactionList/Transaction'))) x,
11 table(xmlsequence(extract(x.column_value,'/Transaction/Leg'))) y,
12 table(xmlsequence(extract(y.column_value,'/Leg/Payment'))) z
13 /
View created.
Elapsed: 00:00:00.03
SQL> desc TRAN_VIEW
Name Null? Type
TRADEVERSION NUMBER(38)
TRANSACTIONID VARCHAR2(4000)
SQL> --
SQL> desc TRAN_LEG_VIEW
Name Null? Type
TRANSACTIONID NUMBER(38)
LEG_BASIS NUMBER(38)
LEG_FIXEDRATE NUMBER(38)
SQL> --
SQL> desc TRAN_LEG_PAY_VIEW
Name Null? Type
TRANSACTIONID NUMBER(38)
LEG_LEGNUMBER NUMBER(38)
PAY_STARTDATE DATE
PAY_VALUE NUMBER(38)
SQL> --
SQL> create or replace VIEW TRAN_VIEW_XMLTABLE
2 as
3 select t.*
4 from LOCAL_TABLE,
5 XMLTable
6 (
7 '/TransactionList/Transaction'
8 passing OBJECT_VALUE
9 columns
10 TRADE_VERSION NUMBER(4) path 'TradeVersion/text()',
11 TRANSACTION_ID NUMBER(4) path 'TransactionId/text()'
12 ) t
13 /
View created.
Elapsed: 00:00:00.01
SQL> create or replace VIEW TRAN_LEG_VIEW_XMLTABLE
2 as
3 select t.TRANSACTION_ID, L.*
4 from LOCAL_TABLE,
5 XMLTable
6 (
7 '/TransactionList/Transaction'
8 passing OBJECT_VALUE
9 columns
10 TRANSACTION_ID NUMBER(4) path 'TransactionId/text()',
11 LEG XMLType path 'Leg'
12 ) t,
13 XMLTABLE
14 (
15 '/Leg'
16 passing LEG
17 columns
18 LEG_NUMBER NUMBER(4) path 'LegNumber/text()',
19 LEG_BASIS NUMBER(4) path 'Basis/text()',
20 LEG_FIXED_RATE NUMBER(4) path 'FixedRate/text()'
21 ) l
22 /
View created.
Elapsed: 00:00:00.01
SQL> create or replace VIEW TRAN_LEG_PAY_VIEW_XMLTABLE
2 as
3 select TRANSACTION_ID, L.LEG_NUMBER, P.*
4 from LOCAL_TABLE,
5 XMLTable
6 (
7 '/TransactionList/Transaction'
8 passing OBJECT_VALUE
9 columns
10 TRANSACTION_ID NUMBER(4) path 'TransactionId/text()',
11 LEG XMLType path 'Leg'
12 ) t,
13 XMLTABLE
14 (
15 '/Leg'
16 passing LEG
17 columns
18 LEG_NUMBER NUMBER(4) path 'LegNumber/text()',
19 PAYMENT XMLType path 'Payment'
20 ) L,
21 XMLTABLE
22 (
23 '/Payment'
24 passing PAYMENT
25 columns
26 PAY_START_DATE DATE path 'StartDate/text()',
27 PAY_VALUE NUMBER(4) path 'Value/text()'
28 ) p
29 /
View created.
Elapsed: 00:00:00.03
SQL> desc TRAN_VIEW_XMLTABLE
Name Null? Type
TRADE_VERSION NUMBER(4)
TRANSACTION_ID NUMBER(4)
SQL> --
SQL> desc TRAN_LEG_VIEW_XMLTABLE
Name Null? Type
TRANSACTION_ID NUMBER(4)
LEG_NUMBER NUMBER(4)
LEG_BASIS NUMBER(4)
LEG_FIXED_RATE NUMBER(4)
SQL> --
SQL> desc TRAN_LEG_PAY_VIEW_XMLTABLE
Name Null? Type
TRANSACTION_ID NUMBER(4)
LEG_NUMBER NUMBER(4)
PAY_START_DATE DATE
PAY_VALUE NUMBER(4)
SQL> --
SQL> set long 10000 pages 100 lines 128
SQL> set timing on
SQL> set autotrace on explain
SQL> set heading on feedback on
SQL> --
SQL> VAR DOC1 CLOB
SQL> VAR DOC2 CLOB
SQL> --
SQL> begin
2 :DOC1 :=
3 '<TransactionList>
4 <Transaction id="1">
5 <TradeVersion>1</TradeVersion>
6 <TransactionId>1</TransactionId>
7 <Leg id="1">
8 <LegNumber>1</LegNumber>
9 <Basis>1</Basis>
10 <FixedRate>1</FixedRate>
11 <Payment id="1">
12 <StartDate>2000-01-01</StartDate>
13 <Value>1</Value>
14 </Payment>
15 <Payment id="2">
16 <StartDate>2000-01-02</StartDate>
17 <Value>2</Value>
18 </Payment>
19 </Leg>
20 <Leg id="2">
21 <LegNumber>2</LegNumber>
22 <Basis>2</Basis>
23 <FixedRate>2</FixedRate>
24 <Payment id="1">
25 <StartDate>2000-02-01</StartDate>
26 <Value>10</Value>
27 </Payment>
28 <Payment id="2">
29 <StartDate>2000-02-02</StartDate>
30 <Value>20</Value>
31 </Payment>
32 </Leg>
33 </Transaction>
34 <Transaction id="2">
35 <TradeVersion>2</TradeVersion>
36 <TransactionId>2</TransactionId>
37 <Leg id="1">
38 <LegNumber>21</LegNumber>
39 <Basis>21</Basis>
40 <FixedRate>21</FixedRate>
41 <Payment id="1">
42 <StartDate>2002-01-01</StartDate>
43 <Value>21</Value>
44 </Payment>
45 <Payment id="2">
46 <StartDate>2002-01-02</StartDate>
47 <Value>22</Value>
48 </Payment>
49 </Leg>
50 <Leg id="22">
51 <LegNumber>22</LegNumber>
52 <Basis>22</Basis>
53 <FixedRate>22</FixedRate>
54 <Payment id="21">
55 <StartDate>2002-02-01</StartDate>
56 <Value>210</Value>
57 </Payment>
58 <Payment id="22">
59 <StartDate>2002-02-02</StartDate>
60 <Value>220</Value>
61 </Payment>
62 </Leg>
63 </Transaction>
64 </TransactionList>';
65 :DOC2 :=
66 '<TransactionList>
67 <Transaction id="31">
68 <TradeVersion>31</TradeVersion>
69 <TransactionId>31</TransactionId>
70 <Leg id="31">
71 <LegNumber>31</LegNumber>
72 <Basis>31</Basis>
73 <FixedRate>31</FixedRate>
74 <Payment id="31">
75 <StartDate>3000-01-01</StartDate>
76 <Value>31</Value>
77 </Payment>
78 </Leg>
79 </Transaction>
80 </TransactionList>';
81 end;
82 /
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.01
SQL> insert into LOCAL_TABLE values ( xmltype(:DOC1))
2 /
1 row created.
Elapsed: 00:00:00.01
Execution Plan
Plan hash value: 1
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | INSERT STATEMENT | | 1 | 100 | 1 (0)| 00:00:01 |
| 1 | LOAD TABLE CONVENTIONAL | LOCAL_TABLE | | | | |
SQL> insert into LOCAL_TABLE values ( xmltype(:DOC2))
2 /
1 row created.
Elapsed: 00:00:00.01
Execution Plan
Plan hash value: 1
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | INSERT STATEMENT | | 1 | 100 | 1 (0)| 00:00:01 |
| 1 | LOAD TABLE CONVENTIONAL | LOCAL_TABLE | | | | |
SQL> select * from TRAN_VIEW_XMLTABLE
2 /
TRADE_VERSION TRANSACTION_ID
1 1
2 2
31 31
3 rows selected.
Elapsed: 00:00:00.03
Execution Plan
Plan hash value: 650975545
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 3 | 168 | 3 (0)| 00:00:01 |
| 1 | NESTED LOOPS | | 3 | 168 | 3 (0)| 00:00:01 |
|* 2 | TABLE ACCESS FULL| SYS_NTGgl+TKyhQnWoFRSrCxeX9g== | 3 | 138 | 3 (0)| 00:00:01 |
|* 3 | INDEX UNIQUE SCAN| SYS_C0010174 | 1 | 10 | 0 (0)| 00:00:01 |
Predicate Information (identified by operation id):
2 - filter("SYS_NC_TYPEID$" IS NOT NULL)
3 - access("NESTED_TABLE_ID"="LOCAL_TABLE"."SYS_NC0000800009$")
Note
- dynamic sampling used for this statement
SQL> select * from TRAN_LEG_VIEW_XMLTABLE
2 /
TRANSACTION_ID LEG_NUMBER LEG_BASIS LEG_FIXED_RATE
1 1 1 1
1 2 2 2
2 21 21 21
2 22 22 22
31 31 31 31
5 rows selected.
Elapsed: 00:00:00.04
Execution Plan
Plan hash value: 1273661583
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 5 | 560 | 7 (15)| 00:00:01 |
|* 1 | HASH JOIN | | 5 | 560 | 7 (15)| 00:00:01 |
| 2 | NESTED LOOPS | | 3 | 159 | 3 (0)| 00:00:01 |
|* 3 | TABLE ACCESS FULL| SYS_NTGgl+TKyhQnWoFRSrCxeX9g== | 3 | 129 | 3 (0)| 00:00:01 |
|* 4 | INDEX UNIQUE SCAN| SYS_C0010174 | 1 | 10 | 0 (0)| 00:00:01 |
|* 5 | TABLE ACCESS FULL | SYS_NTUmyermF/S721C/2UXo40Uw== | 5 | 295 | 3 (0)| 00:00:01 |
Predicate Information (identified by operation id):
1 - access("SYS_ALIAS_1"."NESTED_TABLE_ID"="SYS_ALIAS_0"."SYS_NC0000800009$")
3 - filter("SYS_NC_TYPEID$" IS NOT NULL)
4 - access("NESTED_TABLE_ID"="LOCAL_TABLE"."SYS_NC0000800009$")
5 - filter("SYS_NC_TYPEID$" IS NOT NULL)
Note
- dynamic sampling used for this statement
SQL> select * from TRAN_LEG_PAY_VIEW_XMLTABLE
2 /
TRANSACTION_ID LEG_NUMBER PAY_START PAY_VALUE
1 1 01-JAN-00 1
1 1 02-JAN-00 2
1 2 01-FEB-00 10
1 2 02-FEB-00 20
2 21 01-JAN-02 21
2 21 02-JAN-02 22
2 22 01-FEB-02 210
2 22 02-FEB-02 220
31 31 01-JAN-00 31
9 rows selected.
Elapsed: 00:00:00.07
Execution Plan
Plan hash value: 4004907785
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 9 | 1242 | 10 (10)| 00:00:01 |
|* 1 | HASH JOIN | | 9 | 1242 | 10 (10)| 00:00:01 |
|* 2 | HASH JOIN | | 5 | 480 | 7 (15)| 00:00:01 |
| 3 | NESTED LOOPS | | 3 | 159 | 3 (0)| 00:00:01 |
|* 4 | TABLE ACCESS FULL| SYS_NTGgl+TKyhQnWoFRSrCxeX9g== | 3 | 129 | 3 (0)| 00:00:01 |
|* 5 | INDEX UNIQUE SCAN| SYS_C0010174 | 1 | 10 | 0 (0)| 00:00:01 |
|* 6 | TABLE ACCESS FULL | SYS_NTUmyermF/S721C/2UXo40Uw== | 5 | 215 | 3 (0)| 00:00:01 |
|* 7 | TABLE ACCESS FULL | SYS_NTelW4ZRtKS+WKqCaXhsHnNQ== | 9 | 378 | 3 (0)| 00:00:01 |
Predicate Information (identified by operation id):
1 - access("NESTED_TABLE_ID"="SYS_ALIAS_1"."SYS_NC0000900010$")
2 - access("SYS_ALIAS_1"."NESTED_TABLE_ID"="SYS_ALIAS_0"."SYS_NC0000800009$")
4 - filter("SYS_NC_TYPEID$" IS NOT NULL)
5 - access("NESTED_TABLE_ID"="LOCAL_TABLE"."SYS_NC0000800009$")
6 - filter("SYS_NC_TYPEID$" IS NOT NULL)
7 - filter("SYS_NC_TYPEID$" IS NOT NULL)
Note
- dynamic sampling used for this statement
SQL>
Out of interest, why are you so against using XMLType?
Edited by: mdrake on Jan 13, 2009 8:25 AM -
Insert XML file into Relational database model without using XMLTYPE tables
Dear all,
How can I store a known complex XML file into an existing relational database WITHOUT using xmltypes in the database ?
I read the article on DBMS_XMLSTORE. DBMS_XMLSTORE indeed partially bridges the gap between XML and RDBMS to a certain extent, namely for simply structured XML (canonical structure) and simple tables.
However, when the XML structure becomes arbitrary and rapidly evolving, surely there must be a way to map XML to a relational model more flexibly.
We work in a Java/Oracle 10 environment that receives very large XML documents from an independent data management source. These files comply with an XML schema. That is all we know. Still, all these data must be inserted/updated daily into an existing relational model. Quite an assignment, isn't it?
The database does not and will not contain XMLTYPEs, only plain RDBMS tables.
Are you aware of a framework/product or tool to do what DBMS_XMLSTORE does but with any format of XML file ? If not, I am doomed.
Constraints : Input via XML files defined by third party
Storage : relational database model with hundreds of tables and thousands of existing queries that can not be touched. The model must not be altered.
Target : get this XML into the database on a daily basis via an automated process.
Cheers.
Luc.
Luc,
you're doomed!
If you tried something like DBMS_XMLSTORE, you would probably run into serious performance problems very quickly, and it would be very difficult to manage.
If you used a little bit of XMLType functionality, you would be able to shred the data into the relational model very fast and in a controllable way. Take it from me, I am one of those old geezers, like Mr. Tom Kyte, way beyond 40 years (still joking). No, seriously: I started out as a classical PL/SQL and Forms guy who switched after two years to become a "DBA 1.0", and Mr. Codd and Mr. Date were for years my biggest heroes. I have the utmost respect for Mr. Tom Kyte for all his efforts in bringing the concepts manual into the development world, just to name some of the names that influenced me. But you will have to work with UNSTRUCTURED data (as Mr. Date would call it); 80% of the data out there consists of it. Features like XMLTABLE and XML views bridge the gap between that unstructured world and the relational world. It is very doable to drag and drop an XML file into the XML DB database, into an XMLType table, or to load it via FTP, for instance. From that point on it is in the database, and from there you can move it into relational tables via XMLTABLE methods or XML views.
You could see the described method as a filtering step, so XML can be transformed into relational data. If you don't want any XML in your current database, then create a small Oracle database with XML DB installed (if doable, 11.1.0.7, for the best performance with all the new fast optimizer features). Use that database as a staging area that does all the XML shredding into relational components for you, and ship the end result into your relational database (the one that isn't allowed to have XMLType) via database links, materialized views, or other well-known methods.
This way you keep your relational Oracle database clean and let the Oracle XML DB staging database do all the filtering and shredding into relational components.
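The shredding the staging database would do can also be sketched in miniature outside the database. Here is a hypothetical Python illustration (element names and row shape invented for the example) of flattening a schema-conforming document into plain tuples that a conventional loader could INSERT into existing relational tables:

```python
import xml.etree.ElementTree as ET

# A tiny stand-in for the large third-party documents described above.
DOC = """<TransactionList>
  <Transaction id="1">
    <TradeVersion>1</TradeVersion>
    <TransactionId>100</TransactionId>
  </Transaction>
  <Transaction id="2">
    <TradeVersion>2</TradeVersion>
    <TransactionId>200</TransactionId>
  </Transaction>
</TransactionList>"""

def shred(xml_text):
    """Flatten each <Transaction> element into a plain tuple,
    ready for a bulk INSERT into an ordinary relational table."""
    root = ET.fromstring(xml_text)
    return [(int(tx.get("id")),
             int(tx.findtext("TradeVersion")),
             int(tx.findtext("TransactionId")))
            for tx in root.iter("Transaction")]

rows = shred(DOC)  # one tuple per Transaction element
```

The real work, of course, is mapping a rich schema onto hundreds of tables, which is exactly what the XMLTABLE views in the staging database do declaratively.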
Throwing the XML DB option out of the window beforehand would be like replacing your Mercedes with a bicycle. With both you will be able to travel the distance from Paris to Rome, but it will take you a hell of a lot longer.
:-) -
How to store, in an effective way, analyzer data into a relational database?
We want to store the "sweep traces" of a network analyzer in a relational database in a way that saves as much space as possible without losing resolution.
The solution we are thinking of is to separate the x-axis information from the y-axis information and store them in different tables of the database.
Because of the repetitive character of the measurements, the data on the x-axis will nearly always be the same. So we want to store new data in the x-axis table only when a different x-axis is detected.
In a third table we want to save the relation between the x and y data, together with other data that belongs to the measurement.
The question is: are there other or better possibilities to solve this problem?
Hi Ben,
Thanks for you help.
The use of a third table that links the x-axis and y-axis tables together depends on whether I store the data points in the y-axis table sequentially, in which case I need an identification of the points that belong together and can have a varying number of data points (i.e. 401 or 801 ...), or whether I save them in one record.
The problem there is that I have to save a varying number of points in tables with a lot of "datapoint columns".
Another solution is to save the data points as a semicolon-separated text string in one field.
The problem then is the limitation on the maximum text field length.
In my Oracle Rdb database I can use "Varchar" fields (is there no limitation??).
In other databases a "Note" field may give a solution.
The question still is: what is the best solution, and which uses the smallest amount of space?
In the next week I will do some tests with the solutions mentioned.
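The "store the x-axis only once" idea from the question can be sketched as follows (Python with an in-memory SQLite database; the table names and the comma-separated text encoding are invented for the example). The x-axis table is keyed by a hash of its values, so a repeated axis is stored only once and each sweep row merely references it:

```python
import hashlib
import sqlite3

def x_axis_key(points):
    """Stable key for an x-axis: identical axes hash to the same row."""
    return hashlib.sha256(",".join(repr(p) for p in points).encode()).hexdigest()

def store_sweep(conn, x_points, y_points):
    # Hypothetical schema: x_axes holds each distinct axis once,
    # sweeps holds the y-data plus a reference to its axis.
    key = x_axis_key(x_points)
    conn.execute("INSERT OR IGNORE INTO x_axes (key, points) VALUES (?, ?)",
                 (key, ",".join(map(str, x_points))))
    conn.execute("INSERT INTO sweeps (x_key, y_points) VALUES (?, ?)",
                 (key, ",".join(map(str, y_points))))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE x_axes (key TEXT PRIMARY KEY, points TEXT)")
conn.execute("CREATE TABLE sweeps (id INTEGER PRIMARY KEY, x_key TEXT, y_points TEXT)")
xs = [0.0, 0.5, 1.0]
store_sweep(conn, xs, [1.1, 2.2, 3.3])
store_sweep(conn, xs, [1.2, 2.1, 3.4])  # same axis: no second x_axes row
```

A real schema would probably store the points in a binary blob or a child table rather than comma-separated text, but the dedup-by-hash idea is the same.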
Please let me know what DSC is?
Greetings Huub -
CITADEL DATABASE NOT CONFIGURED AS A RELATIONAL DATABASE
HI:
I have the same problem described by another member before, and I haven't found any resolution of this problem:
The problem was:
>I enabled database logging, and configured the shared variables that I wanted to log. Then I deployed the variables. When I try to read alarms and events (Alarm & Event Query.vi), I get this message:
>Error -1967386611 occurred at HIST_RunAlarmQueryCORE.vi,
Citadel: (Hex 0x8ABC100D) The given Citadel database is not currently
configured to log alarms to a relational database.
Please help me with this topic
Thanks in advance
I'm configuring the variables to be logged in the same way that appears in the file you sent, but it doesn't work... I don't know what else to do.
I'm sending you the configuration image file, the error message image, and a simple VI that creates the database; after that, values are logged. I generate several values for the variable that are above the HI limit of the acceptance value (previously configured), so alarms are generated. When I push the STOP button, the system stops logging values to the database and performs a query on the alarms database, and the corresponding error is generated... (file attached)
The result: the data is logged correctly in the DATA database (I can view the trace with the aid of MAX), but the generated alarm is not logged in the alarms database created programmatically...
The same VI is used but creating another database manually with the aid of MAX and configuring the library to log the alarms to that database... same result.
I tried the same conditions on three different PCs with the same result, and I tried to reinstall LabVIEW (development and DSC) completely (uff!) and it still doesn't work... what else can I do?
I'd appreciate very much your help.
Ignacio
Attachments:
error.jpg 56 KB
test_db.vi 38 KB
config.jpg 150 KB -
The error below makes absolutely no sense! I'm using Enterprise Core...yet I'm being told I can't use remote data sources:
w3wp!library!8!03/05/2015-19:08:48:: i INFO: Catalog SQL Server Edition = EnterpriseCore
w3wp!library!8!03/05/2015-19:08:48:: e ERROR: Throwing Microsoft.ReportingServices.Diagnostics.Utilities.OperationNotSupportedException: , Microsoft.ReportingServices.Diagnostics.Utilities.OperationNotSupportedException: The feature: "The edition of Reporting
Services that you are using requires that you use local SQL Server relational databases for report data sources and the report server database." is not supported in this edition of Reporting Services.;
Really? This totally contradicts the documentation found here:
https://msdn.microsoft.com/en-us/library/ms157285(v=sql.110).aspx
That article says remote connections are completely supported.
ARGH! Why does this have to be so difficult to set up?!?

Hi jeffoliver1000,
According to your description, you are using the Enterprise Core edition and are being told that you can't use remote data sources.
We are not doubting what you say, but we have seen cases before where the SQL Server engine is Enterprise while Reporting Services is still Standard. So I would recommend that you check the actual edition of Reporting Services you are using. You can find the Reporting Services starting SKU in the Reporting Services logs (default location: C:\Program Files\Microsoft SQL Server\<instance name>\Reporting Services\LogFiles). For more information,
please refer to the similar thread below:
https://social.technet.microsoft.com/Forums/en-US/f98c2f3e-1a30-4993-ab41-acbc5014f92e/data-driven-subscription-button-not-displayed?forum=sqlreportingservices
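As a quick way to carry out that check, a small script can scan the log directory for the SKU line. This is a minimal sketch, assuming the default log location and the standard "Reporting Services starting SKU" log text; adjust the path for your instance:

```python
import glob
import os

def find_reporting_sku(log_dir):
    """Scan SSRS log files for the 'Reporting Services starting SKU' line."""
    hits = []
    for path in glob.glob(os.path.join(log_dir, "*.log")):
        with open(path, errors="ignore") as f:
            for line in f:
                if "Reporting Services starting SKU" in line:
                    hits.append(line.strip())
    return hits

# Example (path is an assumption -- substitute your <instance name>):
# for line in find_reporting_sku(r"C:\Program Files\Microsoft SQL Server"
#                                r"\MSRS11.MSSQLSERVER\Reporting Services\LogFiles"):
#     print(line)
```

If the SKU printed there says Standard rather than EnterpriseCore, the error message is consistent after all.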
By the way, have you installed the other SQL Server edition before?
Best regards,
Qiuyun Yu
TechNet Community Support -
Can I configure Enterprise Search to search a relational database
We have several legacy web applications. We are currently using Coveo to search the relational databases. We create mapping XML files for the Coveo indexing services. Coveo accesses the databases via JDBC. It provides the ability to create a URI for each search result; the URI is invoked when the user clicks on a search result.
Can I set up something similar with SAP Enterprise Search?

Hi:
SAP NetWeaver Enterprise Search contains an SAP NetWeaver BI instance within the architecture on the appliance. You can use the BI features to extract data from your relational database, then index that data into the TREX part of ES. BI offers both "DB Connect" and "UD Connect", which give you techniques for connecting the BI system to an external database and extracting data. You will need to model a DataSource, but the features of SAP NetWeaver 2004s BI make this relatively easy. Once you have a DataSource, you can create an Open Hub destination for indexing (with a transformation and a DTP to deliver the data), and create a process chain to extract and index the data.
ES offers numerous DataSources "out of the box" for extraction from SAP ERP systems; you could follow the business content structure for those, but instead use DB Connect or UD Connect to get the data from a non-SAP system.
For more info on this, see the BI area of SDN and help.sap.com > NetWeaver > Key Capability > Information Integration > BI.
Thanks for any points you choose to assign (they are the way of saying thanks on SDN).
Best Regards -
Ron Silberstein
SAP -
Embedded LDAP Server or relational database
Hi,
I'm pretty new to this subject, but I do have a question. Here is the situation.
I need to set up a login portal (in WebLogic 8.1) for a web application. Customers will (in the future) be able to log in to a secure part of the website, where they can modify their personal settings and information. We are talking about fewer than 100,000 users.
Now I was thinking of using the embedded LDAP server to set up the authorisation and identification, but because two variables are needed to determine whether someone is a customer of the company, I am also looking into the possibility of using a relational database (Oracle) to set up the username/password authentication table.
Can somebody tell me the (dis)advantages of using LDAP instead of the relational database (Oracle)? Or advise me on which authorisation method is best?
Your help is needed!
Thanks in advance,
Hans
Ensure that the managed server is running with the "Managed Server Independence Enabled" flag checked.
It can be checked on console via Environment --> Servers --> <ServerName> --> Configuration --> Tuning
For more information, please check
http://docs.oracle.com/cd/E14571_01/web.1111/e13708/failures.htm#START169
The above flag is required for the managed server to use the local LDAP repository.
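For comparison, the relational-database route the original poster describes can be as simple as a salted-hash lookup table. Below is a minimal sketch using SQLite and PBKDF2; the table and column names are invented for illustration (a real deployment would point at Oracle via a connection pool), but it shows how the extra "is this a customer?" variable fits naturally into a table where LDAP would need a custom attribute:

```python
import hashlib
import os
import sqlite3

def hash_password(password, salt):
    # PBKDF2-HMAC-SHA256: never store plaintext passwords.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE users (
    username    TEXT PRIMARY KEY,
    salt        BLOB NOT NULL,
    pw_hash     BLOB NOT NULL,
    is_customer INTEGER NOT NULL  -- the extra flag LDAP would need a custom attribute for
)""")

# Register a user (names are hypothetical).
salt = os.urandom(16)
conn.execute("INSERT INTO users VALUES (?, ?, ?, ?)",
             ("hans", salt, hash_password("s3cret", salt), 1))

def authenticate(conn, username, password):
    row = conn.execute(
        "SELECT salt, pw_hash, is_customer FROM users WHERE username = ?",
        (username,)).fetchone()
    if row is None:
        return False
    salt, pw_hash, is_customer = row
    return hash_password(password, salt) == pw_hash and bool(is_customer)

print(authenticate(conn, "hans", "s3cret"))  # True
print(authenticate(conn, "hans", "wrong"))   # False
```

The trade-off in one line: LDAP gives you replication, referrals, and standard tooling for identity data; a relational table gives you joins against the rest of your customer data and arbitrary extra columns like the one above.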
Arun -
Star schema in relational database vs OLAP
I understand what OLAP and MOLAP are and their purpose in theory. Under what scenarios would one consider switching from star schemas in a relational database to OLAP technology and building cubes? I understand this is more common in financial departments. Can anyone expand on the usage? I understand it improves performance, but you can get pretty good performance with star schemas in relational databases, right?
And in moving to OLAP technology, how would it work? Would I still populate the star schemas in relational databases and create the cubes from there? Do you use an ETL tool for this?
If I have OLAP technology in our financial department, would I build a data warehouse on the same server?

Yes, the tables are automatically created and populated at the point of configuration.
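To make the "star schema in a relational database" half of the question concrete, here is a minimal sketch (SQLite, with invented table names) of a fact table joined to two dimension tables and aggregated. This is exactly the kind of rollup an OLAP cube precomputes; in the relational version you pay for the joins and GROUP BY at query time:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_time    (time_id INTEGER PRIMARY KEY, year INTEGER);
CREATE TABLE dim_account (account_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE fact_ledger (time_id    INTEGER REFERENCES dim_time,
                          account_id INTEGER REFERENCES dim_account,
                          amount     REAL);
INSERT INTO dim_time    VALUES (1, 2010), (2, 2011);
INSERT INTO dim_account VALUES (10, 'Revenue'), (20, 'Expenses');
INSERT INTO fact_ledger VALUES (1, 10, 500.0), (1, 20, 200.0),
                               (2, 10, 700.0), (2, 20, 250.0);
""")

# Slice the fact table by both dimensions -- a cube serves this from
# precomputed aggregates instead of recomputing the joins each time.
rows = conn.execute("""
SELECT t.year, a.name, SUM(f.amount)
FROM fact_ledger f
JOIN dim_time    t ON f.time_id    = t.time_id
JOIN dim_account a ON f.account_id = a.account_id
GROUP BY t.year, a.name
ORDER BY t.year, a.name
""").fetchall()
for row in rows:
    print(row)
```

With a few million fact rows and good indexing this stays fast; the cube starts to win when users slice interactively across many dimension combinations.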
Cheers
John
http://john-goodwin.blogspot.com/ -
How to import spreadsheet into relational database?
I hope this is the best forum for placing this type of question. I had a general question regarding what are the best/accepted methods for doing this type of thing. XML? Web Services? Combination of things?
I have an existing Oracle relational database, containing several tables. From the client, I receive bulk amount of data (in Excel spreadsheet format) that needs to be directly imported into the database (while respecting all of the integrity and foreign key constraints). The data from the spreadsheet cannot be directly dumped into one table. For example, if you have information in the spreadsheet that captures Country and the user has entered in 'Spain'. You would have to look up 'Spain' in the Countries Table to retrieve its id value and then use this as a foreign key value in the appropriate table.
So my question is, how do I go about doing this? What steps and what technologies are best in achieving this goal? I am thinking XML should play a part in this somehow. I have played around with a trial version of XMLSpy and generated an xsd file from my database, but I am not sure where to go from there. My IDE is IBM RAD 7, however I am not sure if I should try doing this separate from the actual application or not.
Please advise. Any ideas or feedback of previous experience regarding this type of task is greatly appreciated.
Thanks.You have some arbitrary data somewhere in an Excel spreadsheet. You're going to need some code which knows where that data is and how to get at it. You weren't planning for any such code? Or were you hoping it would somehow happen automatically? I don't seem to be following your understanding of the problem at all.
Here's how I see it:
1. Get the data from the spreadsheet.
2. Update the database with that data.
You have to do both of these things. You can hard-code the bit which says "get the order quantity from cell J6" or you can write your own language that allows you to say that outside Java, it doesn't really matter. This could be XML if you're determined to get XML in there somehow. But you are going to need code that reads the spreadsheet.