JPA ORM for generalization table hierarchies
I'm looking for any documentation or guidance that shows how to map existing table hierarchies to class hierarchies. I've read the book [Java Persistence API| http://www.amazon.com/Pro-EJB-Java-Persistence-API/dp/1590596455/ref=sr_1_1?ie=UTF8&s=books&qid=1238683389&sr=1-1], but its coverage of this subject is rather limited and does not give me enough information to map the table hierarchy I am working with.
Are there any references that fully explain the techniques involved in mapping complicated data models?
Also, if it would be helpful, I could describe the table hierarchy that I am working with.
According to the section [Joined, Multiple Table Inheritance|http://en.wikibooks.org/wiki/Java_Persistence/Inheritance#Joined.2C_Multiple_Table_Inheritance], some providers allow the discriminator column to be absent. My generalization hierarchy does not have a discriminator column as defined by the persistence API; I use a foreign key subset to define the types of entities that fall within a given classification.
If a discriminator column is not used, would simply omitting the @DiscriminatorColumn and @DiscriminatorValue annotations be sufficient to implement the desired mapping?
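For what it's worth, a joined-inheritance mapping without a discriminator column might look like the sketch below. The entity and table names here are hypothetical, and whether the discriminator may be omitted is provider-specific (the wikibook section cited above notes that some providers, e.g. Hibernate, support it, inferring the concrete type from which subtype table holds a matching row):

```java
import javax.persistence.*;

// Hypothetical generalization hierarchy: PARTY is the supertable,
// PERSON a subtype table joined to it by a shared primary key.
@Entity
@Table(name = "PARTY")
@Inheritance(strategy = InheritanceType.JOINED)
class Party {
    @Id
    @Column(name = "PARTY_ID")
    Long id;

    String name;
}

// No @DiscriminatorColumn / @DiscriminatorValue anywhere: a provider that
// tolerates this must resolve the concrete subclass from the joined tables.
@Entity
@Table(name = "PERSON")
@PrimaryKeyJoinColumn(name = "PARTY_ID")  // FK back to PARTY
class Person extends Party {
    String surname;
}
```

Whether this is portable is exactly the open question: the JPA specification assumes a discriminator for JOINED, so omitting the annotations is sufficient only on providers that document support for discriminator-less joined inheritance.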
Similar Messages
-
Any JPA features for updating table metadata without losing existing data?
Hi there,
I have a database with live data in it.
Now I have to change some tables (add columns, etc.). How can I do this without drop-and-create-schema mechanics?
Do I really have to change my entity classes first, switch schema generation to create-schema so that only the new tables are created automatically, and finally apply all the "alter table" statements manually?
Is there any better solution?
Thanx in advance

No need, you can add columns using the ALTER TABLE command.
The table schema change will not affect your entities.
Stop the application.
Change the table columns.
Start the application with the new entities.
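As a sketch of the suggested approach (table and column names made up), the schema change itself is plain DDL and leaves existing rows intact; new columns simply come back NULL, or the declared DEFAULT, for old data:

```sql
-- Add columns in place; existing rows are preserved.
ALTER TABLE customer ADD (loyalty_tier VARCHAR2(20));
ALTER TABLE customer ADD (signup_channel VARCHAR2(30) DEFAULT 'WEB');

-- Widening a column is also non-destructive:
ALTER TABLE customer MODIFY (loyalty_tier VARCHAR2(40));
```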
let me know -
What is the prerequisite for creating two hierarchies from one fact table i
Hi,
what is the prerequisite for creating two hierarchies from a single fact table?
Rgds,
Amit

create global temporary table t1 as select * from trn_ordbase on commit preserve rows;

You CANNOT use this syntax.
http://download-east.oracle.com/docs/cd/B19188_01/doc/B15917/sqcmd.htm
http://download-east.oracle.com/docs/cd/B19188_01/doc/B15917/glob_tab.gif
http://download-east.oracle.com/docs/cd/B19188_01/doc/B15917/cre_tabl.gif -
Creating view to get first row for each table !!
I have more than 10 tables that are related through foreign key and primary key relationships.
Example:
Table1:
T1Prim T1Col1 T1Col2
Table2
T2For T2Prim T2Col1 T2Col2 T2Col3
(here T2For will have the same value as T1Prim; in my design it also has the same column name, i.e. T1Prim)
Table3
T3For T3Prim T3Col1 T3Col2 T3Col3
(here T3For will have the same value as T2Prim)
and so on.
The data in the tables is such that table1 has one record, table2 has one record, and table3 has more than one record.
Can I view either the first record from each of them, or all records from each of them, by writing the following view?
I have written a view like this:
Create or replace view test (T1Prim, T1Col1, T1Col2, T2Prim, T2Col1, T2Col2, T2Col3, T3Prim, T3Col1, T3Col2, T3Col3)
As
Select
Table1.T1Prim,
Table1.T1Col1,
Table1.T1Col2,
Table2.T2Prim,
Table2.T2Col1,
Table2.T2Col2,
Table2.T2Col3,
Table3.T3Prim,
Table3.T3Col1,
Table3.T3Col2,
Table3.T3Col3
From
Table1,
Table2,
Table3
where
Table1.Prim = Table2.For
and Table2.Prim = Table3.For
When I run a SELECT on the view I get no data, whereas there is data when a SELECT is run on each individual table.
Can someone please tell me where I am goofing?
Thanks in the anticipation that i will get some hint to solve this.
Eagerly waiting for reply.
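One thing worth ruling out first: the view as written uses inner joins, so if any one join condition fails to match, no rows come back at all. An outer-join version of the same view (a sketch using the column names as posted, which should be checked against the real schema) would at least return the Table1 row even when the child rows are missing:

```sql
-- Sketch only: column names taken from the posted description.
CREATE OR REPLACE VIEW test AS
SELECT t1.T1Prim, t1.T1Col1, t1.T1Col2,
       t2.T2Prim, t2.T2Col1, t2.T2Col2, t2.T2Col3,
       t3.T3Prim, t3.T3Col1, t3.T3Col2, t3.T3Col3
FROM   Table1 t1
LEFT JOIN Table2 t2 ON t2.T1Prim = t1.T1Prim  -- FK column shares the parent's name
LEFT JOIN Table3 t3 ON t3.T3For  = t2.T2Prim;
```

If the outer-join version returns rows with NULLs on the right-hand side, the original "no data" result was a join mismatch rather than empty tables.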
Thanks !!

I mean use a collection:
Collection Methods
A collection method is a built-in function or procedure that operates on collections and is called using dot notation. The methods EXISTS, COUNT, LIMIT, FIRST, LAST, PRIOR, NEXT, EXTEND, TRIM, and DELETE help generalize code, make collections easier to use, and make your applications easier to maintain.
EXISTS, COUNT, LIMIT, FIRST, LAST, PRIOR, and NEXT are functions, which appear as part of an expression. EXTEND, TRIM, and DELETE are procedures, which appear as a statement. EXISTS, PRIOR, NEXT, TRIM, EXTEND, and DELETE take integer parameters. EXISTS, PRIOR, NEXT, and DELETE can also take VARCHAR2 parameters for associative arrays with string keys. EXTEND and TRIM cannot be used with index-by tables.
For more information, see "Using Collection Methods".
Syntax
[Syntax diagram omitted: collection_method_call.gif]
Keyword and Parameter Description
collection_name
This identifies an index-by table, nested table, or varray previously declared within the current scope.
COUNT
COUNT returns the number of elements that a collection currently contains, which is useful because the current size of a collection is not always known. You can use COUNT wherever an integer expression is allowed.
For varrays, COUNT always equals LAST. For nested tables, normally, COUNT equals LAST. But, if you delete elements from the middle of a nested table, COUNT is smaller than LAST.
DELETE
This procedure has three forms. DELETE removes all elements from a collection. DELETE(n) removes the nth element from an index-by table or nested table. If n is null, DELETE(n) does nothing. DELETE(m,n) removes all elements in the range m..n from an index-by table or nested table. If m is larger than n or if m or n is null, DELETE(m,n) does nothing.
EXISTS
EXISTS(n) returns TRUE if the nth element in a collection exists. Otherwise, EXISTS(n) returns FALSE. Mainly, you use EXISTS with DELETE to maintain sparse nested tables. You can also use EXISTS to avoid raising an exception when you reference a nonexistent element. When passed an out-of-range subscript, EXISTS returns FALSE instead of raising SUBSCRIPT_OUTSIDE_LIMIT.
EXTEND
This procedure has three forms. EXTEND appends one null element to a collection. EXTEND(n) appends n null elements to a collection. EXTEND(n,i) appends n copies of the ith element to a collection. EXTEND operates on the internal size of a collection. So, if EXTEND encounters deleted elements, it includes them in its tally. You cannot use EXTEND with index-by tables.
FIRST, LAST
FIRST and LAST return the first and last (smallest and largest) subscript values in a collection. The subscript values are usually integers, but can also be strings for associative arrays. If the collection is empty, FIRST and LAST return NULL. If the collection contains only one element, FIRST and LAST return the same subscript value.
For varrays, FIRST always returns 1 and LAST always equals COUNT. For nested tables, normally, LAST equals COUNT. But, if you delete elements from the middle of a nested table, LAST is larger than COUNT.
index
This is an expression that must yield (or convert implicitly to) an integer in most cases, or a string for an associative array declared with string keys.
LIMIT
For nested tables, which have no maximum size, LIMIT returns NULL. For varrays, LIMIT returns the maximum number of elements that a varray can contain (which you must specify in its type definition).
NEXT, PRIOR
PRIOR(n) returns the subscript that precedes index n in a collection. NEXT(n) returns the subscript that succeeds index n. If n has no predecessor, PRIOR(n) returns NULL. Likewise, if n has no successor, NEXT(n) returns NULL.
TRIM
This procedure has two forms. TRIM removes one element from the end of a collection. TRIM(n) removes n elements from the end of a collection. If n is greater than COUNT, TRIM(n) raises SUBSCRIPT_BEYOND_COUNT. You cannot use TRIM with index-by tables.
TRIM operates on the internal size of a collection. So, if TRIM encounters deleted elements, it includes them in its tally.
Usage Notes
You cannot use collection methods in a SQL statement. If you try, you get a compilation error.
Only EXISTS can be applied to atomically null collections. If you apply another method to such collections, PL/SQL raises COLLECTION_IS_NULL.
You can use PRIOR or NEXT to traverse collections indexed by any series of subscripts. For example, you can use PRIOR or NEXT to traverse a nested table from which some elements have been deleted.
EXTEND operates on the internal size of a collection, which includes deleted elements. You cannot use EXTEND to initialize an atomically null collection. Also, if you impose the NOT NULL constraint on a TABLE or VARRAY type, you cannot apply the first two forms of EXTEND to collections of that type.
If an element to be deleted does not exist, DELETE simply skips it; no exception is raised. Varrays are dense, so you cannot delete their individual elements.
PL/SQL keeps placeholders for deleted elements. So, you can replace a deleted element simply by assigning it a new value. However, PL/SQL does not keep placeholders for trimmed elements.
The amount of memory allocated to a nested table can increase or decrease dynamically. As you delete elements, memory is freed page by page. If you delete the entire table, all the memory is freed.
In general, do not depend on the interaction between TRIM and DELETE. It is better to treat nested tables like fixed-size arrays and use only DELETE, or to treat them like stacks and use only TRIM and EXTEND.
Within a subprogram, a collection parameter assumes the properties of the argument bound to it. So, you can apply methods FIRST, LAST, COUNT, and so on to such parameters. For varray parameters, the value of LIMIT is always derived from the parameter type definition, regardless of the parameter mode.
Examples
In the following example, you use NEXT to traverse a nested table from which some elements have been deleted:
i := courses.FIRST; -- get subscript of first element
WHILE i IS NOT NULL LOOP
-- do something with courses(i)
i := courses.NEXT(i); -- get subscript of next element
END LOOP;
In the following example, PL/SQL executes the assignment statement only if element i exists:
IF courses.EXISTS(i) THEN
courses(i) := new_course;
END IF;
The next example shows that you can use FIRST and LAST to specify the lower and upper bounds of a loop range provided each element in that range exists:
FOR i IN courses.FIRST..courses.LAST LOOP ...
In the following example, you delete elements 2 through 5 from a nested table:
courses.DELETE(2, 5);
In the final example, you use LIMIT to determine if you can add 20 more elements to varray projects:
IF (projects.COUNT + 20) < projects.LIMIT THEN
   -- add 20 more elements
   projects.EXTEND(20);
END IF;
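The fragments above can be combined into one small self-contained block (the `courses` collection and its element values are illustrative). Note how, after deleting a middle element, COUNT and LAST diverge exactly as described earlier, and the FIRST/NEXT loop still traverses the sparse table safely:

```sql
DECLARE
  TYPE course_tab IS TABLE OF VARCHAR2(30);
  courses course_tab := course_tab('Math', 'Physics', 'Biology', 'History');
  i PLS_INTEGER;
BEGIN
  courses.DELETE(2);                                -- leave a gap in the middle
  DBMS_OUTPUT.PUT_LINE('COUNT=' || courses.COUNT    -- 3
                    || ' LAST=' || courses.LAST);   -- 4 (larger than COUNT)
  i := courses.FIRST;
  WHILE i IS NOT NULL LOOP                          -- visits subscripts 1, 3, 4
    DBMS_OUTPUT.PUT_LINE(i || ': ' || courses(i));
    i := courses.NEXT(i);
  END LOOP;
END;
/
```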
Related Topics
Collections, Functions, Procedures
http://download-west.oracle.com/docs/cd/B10501_01/appdev.920/a96624/13_elems7.htm#33054
Joel Pérez -
"Missing most detailed table for dimension tables" error when I run the Global Consistency check
ERRORS:
Business Model DAC Measures:
[nQSError: 15003] Missing most detailed table for dimension tables: [D_DETAILS,D_EXECUTION_PLAN,D_TASK].
[nQSError: 15001] Could not load navigation space for subject area DAC Measures.
I am also attaching my Business Model layer for easier understanding. I have one fact table and several dimension tables. I got this error only after creating the following hierarchies:
Execution Plan -> Tasks -> Details
Start Date Time Hierarchy
End Date Time Hierarchy
Is there a solution for this problem? Thanks in advance!

Yes! My Task hierarchy has 3 dimension tables that form a hierarchy: Execution Plan -> Tasks -> Detail
All the 3 levels in the hierarchy are 3 different dimension tables. -
How to come up with a magic number for any table that returns more than 32KB?
I am in a unique situation where I am trying to retrieve values from multiple tables and publish them as XML output. The problem is that, depending on the condition, a few tables can return more than 32KB of data and a few less than 32KB. Less than 32KB is not an issue, as XML generation is smooth; the minute it exceeds 32KB, a run-time error is raised. I am wondering if there is any way to ensure that when a query's result exceeds 32KB it is broken up (say, a 35KB result into a 32KB piece and a 3KB piece) before the data is passed on to be published as XML output. And this is not just for one table, but for all the tables called in the function.
Is there any way? I have no ideas, nor have I done anything this complex from a production-support standpoint. I would appreciate it if someone could guide me on this.
The way it is, is as follows:
I have a table called ctn_pub_cntl
CREATE TABLE CTNAPP.ctn_pub_cntl
(ctn_pub_cntl_id NUMBER(18)
,table_name VARCHAR2(50)
,last_pub_tms DATE
,queue_name VARCHAR2(50)
,dest_system VARCHAR2(50)
,frequency NUMBER(6)
,status VARCHAR2(8)
,record_create_tms DATE
,create_user_id VARCHAR2(8)
,record_update_tms DATE
,update_user_id VARCHAR2(8)
,CONSTRAINT ctn_pub_cntl_id_pk PRIMARY KEY(ctn_pub_cntl_id)
);
Data for this is:
INSERT INTO CTNAPP.ctn_pub_cntl
(ctn_pub_cntl_id
,table_name
,last_pub_tms
,queue_name
,dest_system
,frequency
)
VALUES
(CTNAPP_SQNC.nextval
,'TRKFCG_SBDVSN'
,TO_DATE('10/2/2004 10:17:44PM','MM/DD/YYYY HH12:MI:SSPM')
,'UT.TSD.TSZ601.UNP'
,'SAP'
,15
);
INSERT INTO CTNAPP.ctn_pub_cntl
(ctn_pub_cntl_id
,table_name
,last_pub_tms
,queue_name
,dest_system
,frequency
)
VALUES
(CTNAPP_SQNC.nextval
,'TRKFCG_TRACK_SGMNT_DN'
,TO_DATE('02/06/2015 9:50:00AM','MM/DD/YYYY HH12:MI:SSPM')
,'UT.TSD.WRKORD.UNP'
,'SAP'
,30
);
INSERT INTO CTNAPP.ctn_pub_cntl
(ctn_pub_cntl_id
,table_name
,last_pub_tms
,queue_name
,dest_system
,frequency
)
VALUES
(CTNAPP_SQNC.nextval
,'TRKFCG_FXPLA_TRACK_LCTN_DN'
,TO_DATE('10/2/2004 10:17:44PM','MM/DD/YYYY HH12:MI:SSPM')
,'UT.TSD.YRDPLN.INPUT'
,'SAP'
,30
);
INSERT INTO CTNAPP.ctn_pub_cntl
(ctn_pub_cntl_id
,table_name
,last_pub_tms
,queue_name
,dest_system
,frequency
)
VALUES
(CTNAPP_SQNC.nextval
,'TRKFCG_FXPLA_TRACK_LCTN2_DN'
,TO_DATE('02/06/2015 9:50:00AM','MM/DD/YYYY HH12:MI:SSPM')
,'UT.TSD.TSZ601.UNP'
,'SAP'
,120
);
INSERT INTO CTNAPP.ctn_pub_cntl
(ctn_pub_cntl_id
,table_name
,last_pub_tms
,queue_name
,dest_system
,frequency
)
VALUES
(CTNAPP_SQNC.nextval
,'TRKFCG_FXPLA_TRACK_LCTN2_DN'
,TO_DATE('04/23/2015 11:50:00PM','MM/DD/YYYY HH12:MI:SSPM')
,'UT.TSD.YRDPLN.INPUT'
,'SAP'
,10
);
INSERT INTO CTNAPP.ctn_pub_cntl
(ctn_pub_cntl_id
,table_name
,last_pub_tms
,queue_name
,dest_system
,frequency
)
VALUES
(CTNAPP_SQNC.nextval
,'TRKFCG_FIXED_PLANT_ASSET'
,TO_DATE('04/23/2015 11:50:00AM','MM/DD/YYYY HH12:MI:SSPM')
,'UT.TSD.WRKORD.UNP'
,'SAP'
,10
);
INSERT INTO CTNAPP.ctn_pub_cntl
(ctn_pub_cntl_id
,table_name
,last_pub_tms
,queue_name
,dest_system
,frequency
)
VALUES
(CTNAPP_SQNC.nextval
,'TRKFCG_OPRLMT'
,TO_DATE('03/26/2015 7:50:00AM','MM/DD/YYYY HH12:MI:SSPM')
,'UT.TSD.WRKORD.UNP'
,'SAP'
,30
);
INSERT INTO CTNAPP.ctn_pub_cntl
(ctn_pub_cntl_id
,table_name
,last_pub_tms
,queue_name
,dest_system
,frequency
)
VALUES
(CTNAPP_SQNC.nextval
,'TRKFCG_OPRLMT_SGMNT_DN'
,TO_DATE('03/28/2015 12:50:00AM','MM/DD/YYYY HH12:MI:SSPM')
,'UT.TSD.WRKORD.UNP'
,'SAP'
,30
);
COMMIT;
Once the above data is inserted and committed, then I created a function in a package:
CREATE OR REPLACE PACKAGE CTNAPP.CTN_PUB_CNTL_EXTRACT_PUBLISH
IS
TYPE tNameTyp IS TABLE OF ctn_pub_cntl.table_name%TYPE INDEX BY BINARY_INTEGER;
g_tName tNameTyp;
TYPE tClobTyp IS TABLE OF CLOB INDEX BY BINARY_INTEGER;
g_tClob tClobTyp;
FUNCTION GetCtnData(p_nInCtnPubCntlID IN CTN_PUB_CNTL.ctn_pub_cntl_id%TYPE,p_count OUT NUMBER ) RETURN tClobTyp;
END CTN_PUB_CNTL_EXTRACT_PUBLISH;
--Package body
CREATE OR REPLACE PACKAGE BODY CTNAPP.CTN_PUB_CNTL_EXTRACT_PUBLISH
IS
doc xmldom.DOMDocument;
main_node xmldom.DOMNode;
root_node xmldom.DOMNode;
root_elmt xmldom.DOMElement;
child_node xmldom.DOMNode;
child_elmt xmldom.DOMElement;
leaf_node xmldom.DOMNode;
elmt_value xmldom.DOMText;
tbl_node xmldom.DOMNode;
table_data XMLDOM.DOMDOCUMENTFRAGMENT;
l_ctx DBMS_XMLGEN.CTXHANDLE;
vStrSqlQuery VARCHAR2(32767);
l_clob tClobTyp;
l_xmltype XMLTYPE;
--Local Procedure to build XML header
PROCEDURE BuildCPRHeader IS
BEGIN
child_elmt := xmldom.createElement(doc, 'PUBLISH_HEADER');
child_node := xmldom.appendChild (root_node, xmldom.makeNode (child_elmt));
child_elmt := xmldom.createElement (doc, 'SOURCE_APLCTN_ID');
elmt_value := xmldom.createTextNode (doc, 'CTN');
leaf_node := xmldom.appendChild (child_node, xmldom.makeNode (child_elmt));
leaf_node := xmldom.appendChild (leaf_node, xmldom.makeNode (elmt_value));
child_elmt := xmldom.createElement (doc, 'SOURCE_PRGRM_ID');
elmt_value := xmldom.createTextNode (doc, 'VALUE');
leaf_node := xmldom.appendChild (child_node, xmldom.makeNode (child_elmt));
leaf_node := xmldom.appendChild (leaf_node, xmldom.makeNode (elmt_value));
child_elmt := xmldom.createElement (doc, 'SOURCE_CMPNT_ID');
elmt_value := xmldom.createTextNode (doc, 'VALUE');
leaf_node := xmldom.appendChild (child_node, xmldom.makeNode (child_elmt));
leaf_node := xmldom.appendChild (leaf_node, xmldom.makeNode (elmt_value));
child_elmt := xmldom.createElement (doc, 'PUBLISH_TMS');
elmt_value := xmldom.createTextNode (doc, TO_CHAR(SYSDATE, 'YYYY-MM-DD HH24:MI:SS'));
leaf_node := xmldom.appendChild (child_node, xmldom.makeNode (child_elmt));
leaf_node := xmldom.appendChild (leaf_node, xmldom.makeNode (elmt_value));
END BuildCPRHeader;
--Get table data based on table name
FUNCTION GetCtnData(p_nInCtnPubCntlID IN CTN_PUB_CNTL.ctn_pub_cntl_id%TYPE,p_Count OUT NUMBER) RETURN tClobTyp IS
vTblName ctn_pub_cntl.table_name%TYPE;
vLastPubTms ctn_pub_cntl.last_pub_tms%TYPE;
BEGIN
g_vProcedureName:='GetCtnData';
g_vTableName:='CTN_PUB_CNTL';
SELECT table_name,last_pub_tms
INTO vTblName, vLastPubTms
FROM CTN_PUB_CNTL
WHERE ctn_pub_cntl_id=p_nInCtnPubCntlID;
-- Start the XML Message generation
doc := xmldom.newDOMDocument;
main_node := xmldom.makeNode(doc);
root_elmt := xmldom.createElement(doc, 'PUBLISH');
root_node := xmldom.appendChild(main_node, xmldom.makeNode(root_elmt));
--Append Table Data as Publish Header
BuildCPRHeader;
--Append Table Data as Publish Body
child_elmt := xmldom.createElement(doc, 'PUBLISH_BODY');
leaf_node := xmldom.appendChild (root_node, xmldom.makeNode(child_elmt));
DBMS_SESSION.SET_NLS('NLS_DATE_FORMAT','''YYYY:MM:DD HH24:MI:SS''');
vStrSqlQuery := 'SELECT * FROM ' || vTblName
|| ' WHERE record_update_tms <= TO_DATE(''' || TO_CHAR(vLastPubTms, 'MM/DD/YYYY HH24:MI:SS') || ''', ''MM/DD/YYYY HH24:MI:SS'') ' ;
-- || ' AND rownum < 16'
DBMS_OUTPUT.PUT_LINE(vStrSqlQuery);
l_ctx := DBMS_XMLGEN.NEWCONTEXT(vStrSqlQuery);
DBMS_XMLGEN.SETNULLHANDLING(l_ctx, 0);
DBMS_XMLGEN.SETROWSETTAG(l_ctx, vTblName);
-- Append Table Data as XML Fragment
l_clob(1):=DBMS_XMLGEN.GETXML(l_ctx);
elmt_value := xmldom.createTextNode (doc, l_clob(1));
leaf_node := xmldom.appendChild (leaf_node, xmldom.makeNode (elmt_value));
xmldom.writeToBuffer (doc, l_clob(1));
l_clob(1):=REPLACE(l_clob(1),'<?xml version="1.0"?>', NULL);
l_clob(1):=REPLACE(l_clob(1),'&amp;lt;', '<');
l_clob(1):=REPLACE(l_clob(1),'&amp;gt;', '>');
RETURN l_clob;
DBMS_OUTPUT.put_line('Answer is' ||l_clob(1));
EXCEPTION
WHEN NO_DATA_FOUND THEN
DBMS_OUTPUT.put_line('There is no data with' || SQLERRM);
g_vProcedureName:='GetCtnData';
g_vTableName:='CTN_PUB_CNTL';
g_vErrorMessage:=SQLERRM|| g_vErrorMessage;
g_nSqlCd:=SQLCODE;
ctn_log_error('ERROR',g_vErrorMessage,'SELECT',g_nSqlCd,p_nInCtnPubCntlID,g_vPackageName,g_vProcedureName,g_vTableName);
WHEN OTHERS THEN
DBMS_OUTPUT.PUT_LINE('ERROR : ' || SQLERRM);
ctn_log_error('ERROR',g_vErrorMessage,'OTHERS',g_nSqlCd,p_nInCtnPubCntlID,g_vPackageName,g_vProcedureName,g_vTableName);
END GetCtnData;
PROCEDURE printClob (result IN OUT NOCOPY CLOB) IS
xmlstr VARCHAR2 (32767);
line VARCHAR2 (2000);
BEGIN
xmlstr := DBMS_LOB.SUBSTR (result, 32767);
LOOP
EXIT WHEN xmlstr IS NULL;
line := SUBSTR (xmlstr, 1, INSTR (xmlstr, CHR (10)) - 1);
DBMS_OUTPUT.put_line (line);
xmlstr := SUBSTR (xmlstr, INSTR (xmlstr, CHR (10)) + 1);
END LOOP;
END printClob;
END CTN_PUB_CNTL_EXTRACT_PUBLISH;
If you notice my query:
vStrSqlQuery := 'SELECT * FROM ' || vTblName
|| ' WHERE record_update_tms <= TO_DATE(''' || TO_CHAR(vLastPubTms, 'MM/DD/YYYY HH24:MI:SS') || ''', ''MM/DD/YYYY HH24:MI:SS'') ' ;
|| ' AND rownum < 16'
The minute I comment out
|| ' AND rownum < 16';
it throws an error, because this query returns around 600 rows and all of them need to be published as XML. The tragedy is that there is a C program in between as well: C will call my packaged functions and do all the processing, and once that is done the results are passed back to the C program. So obviously C does not recognise CLOB, and somewhere in the process I have to convert the CLOB to VARCHAR, or use a VARCHAR array instead of CLOB as the return type. This is my challenge.
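For the CLOB-to-VARCHAR2 handoff described above, one common pattern is to slice the CLOB into an array of fixed-size pieces with DBMS_LOB. This is only a sketch: the 32,767-byte ceiling is the PL/SQL VARCHAR2 limit, the variable names are made up, and the C side may need smaller chunks still:

```sql
-- Sketch: split a CLOB into <= 32k VARCHAR2 chunks for a non-CLOB-aware caller.
DECLARE
  TYPE chunk_tab IS TABLE OF VARCHAR2(32767) INDEX BY BINARY_INTEGER;
  l_chunks chunk_tab;
  l_clob   CLOB := RPAD('x', 100, 'x');      -- stand-in for the generated XML
  c_size   CONSTANT PLS_INTEGER := 32767;
  l_len    PLS_INTEGER;
  l_pos    PLS_INTEGER := 1;
  l_idx    PLS_INTEGER := 0;
BEGIN
  l_len := DBMS_LOB.GETLENGTH(l_clob);
  WHILE l_pos <= l_len LOOP
    l_idx := l_idx + 1;
    l_chunks(l_idx) := DBMS_LOB.SUBSTR(l_clob, c_size, l_pos);
    l_pos := l_pos + c_size;
  END LOOP;
  DBMS_OUTPUT.PUT_LINE('chunks: ' || l_idx);
END;
/
```

With multibyte (Unicode) data, note that DBMS_LOB.SUBSTR counts characters while the 32767 limit on VARCHAR2 is in bytes, so a smaller chunk size is safer.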
Anyone who can help me find the required magic number, and give a brief how-to, I would appreciate it. Many thanks in advance.

Not sure I understand which part is failing.
Is it the C program calling your packaged function? Or does the error occur in the PL/SQL code, in which case you should be able to pinpoint where it goes wrong?
A few comments :
1) Using DOM to build XML out of relational data? What for? Use SQL/XML functions.
2) Giving sample data is usually great, but it's not useful here since we can't run your code. We're missing the base tables.
3) This is wrong :
vStrSqlQuery := 'SELECT * FROM ' || vTblName || ' WHERE record_update_tms <= TO_DATE(''' || TO_CHAR(vLastPubTms, 'MM/DD/YYYY HH24:MI:SS') || ''', ''MM/DD/YYYY HH24:MI:SS'') ' ;
A bind variable should be used here for the date.
4) This is wrong :
elmt_value := xmldom.createTextNode (doc, l_clob(1));
createTextNode does not support CLOB so it will fail as soon as the CLOB you're trying to pass exceeds 32k.
Maybe that's the problem you're referring to?
5) This is most wrong :
l_clob(1):=REPLACE(l_clob(1),'<?xml version="1.0"?>', NULL);
l_clob(1):=REPLACE(l_clob(1),'&amp;lt;', '<');
l_clob(1):=REPLACE(l_clob(1),'&amp;gt;', '>');
I understand what you're trying to do but it's not the correct way.
You're trying to convert a text() node representing XML in escaped form back to XML content.
The problem is that there are other things to take care of besides just '<' and '>'.
If you want to insert an XML node into an existing document, treat that as an XML node, not as a string.
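To illustrate comment 1, a SQL/XML version of roughly the same envelope might look like the sketch below. The element names follow the posted DOM code, the row-level element is made up, and XMLSERIALIZE requires Oracle 10g or later:

```sql
-- Sketch: compose the PUBLISH document in one SQL statement instead of DOM calls.
SELECT XMLSERIALIZE(DOCUMENT
         XMLELEMENT("PUBLISH",
           XMLELEMENT("PUBLISH_HEADER",
             XMLELEMENT("SOURCE_APLCTN_ID", 'CTN'),
             XMLELEMENT("PUBLISH_TMS", TO_CHAR(SYSDATE, 'YYYY-MM-DD HH24:MI:SS'))),
           XMLELEMENT("PUBLISH_BODY",
             (SELECT XMLAGG(XMLELEMENT("CTN_PUB_CNTL",
                       XMLFOREST(c.table_name, c.last_pub_tms)))
                FROM ctn_pub_cntl c)))
         AS CLOB) AS payload
FROM dual;
```

Because the body is built as XML all the way through, no unescaping with REPLACE is ever needed, and the CLOB result has no 32K ceiling.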
Anyway,
Anyone that can help me to find out the required magic number
That would be a bad idea. Fix what needs to be fixed.
And please clearly state which part is failing : the C program or the PL/SQL code?
I'd vote for PL/SQL, as pointed out in [4]. -
How to track the table update event in SAP for all tables by use of a single
Hello,
I want to store in a file a list of all OM and HR tables that are updated after a particular date. I tried a TRIGGER for that, but I can write only one trigger per table. I want to write a single trigger that is invoked after an update operation on every HR and OM table, storing information about which rows were updated in a file so that an external application can use it.
Thanks in advance,
SANDIP

Hi, the log for the change of anything will be available in the tables DBTABLOG and REPOSRC.
regards,
venkat. -
BW DS Extractors for FI Tables
Hi everybody,
can anybody please tell me the standard extractors (BW DataSources) for the following tables
- GLT3
- GLPCT
- FAGLFLEXT
Thanks a lot

Hello,
For GLPCT:
data-/info source 0EC_PCA_1. The cube is 0PCA_C01.
For GLT3 :
3FI_SL_09_TT
or
check the DataSource 3EC_CS1A. It delivers data like that in table GLT3.
http://help.sap.com/saphelp_nw2004s/helpdata/en/6c/4d6637a04c2367e10000009b38f8cf/frameset.htm
For FAGLFLEXT :
0FI_GL_10 extracts data from the FAGLFLEXT table; it is connected to DSO 0FIGL_O10 --> Cube 0FIGL_C10 --> Virtual Cube 0FIGL_V10, and you will get all the standard reports from there.
Regards,
Dhanya -
How to get the Horizontal Scroll Bar for a Table?
Hi All,
As per my requirement, I am displaying several records on a screen in tabular format, and I have to show 21 columns in that table, which is quite wide. I am able to display it, but I get a horizontal scroll bar for the whole screen, since all the columns do not fit in the normal window width. It looks odd: when I scroll to the right the columns are displayed, but the header bar and global buttons do not move with them; they stay bound to the normal screen area.
Is there a way to have a horizontal scroll bar only for the table instead of the entire screen, so that scrolling the bar shifts and displays only the table columns?
With Thanks
Kumar Gautam

Try this approach:
include a raw text item before and after the table item.
include the appropriate HTML tags in the raw text items to enable horizontal scrolling.
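As a plain-HTML sketch of the idea (the div styling is illustrative, not framework-specific):

```html
<!-- First raw-text item, placed before the table item, opens the scroll region -->
<div style="overflow-x: auto; width: 100%;">

<!-- ... the table item renders its 21 columns here ... -->

<!-- Second raw-text item, placed after the table item, closes the region -->
</div>
```

Only the content inside the div scrolls horizontally; the page header and global buttons outside it stay fixed.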
--Prasanna -
How to get all minimum values for a table of unique records?
I need to get the list of minimum-value records for a table with the structure and data below:
create table emp (name varchar2(50),org varchar2(50),desig varchar2(50),salary number(10),year number(10));
insert into emp (name,org,desig,salary,year) values ('emp1','org1','mgr',3000,2005);
insert into emp (name,org,desig,salary,year) values ('emp1','org1','mgr',4000,2007);
insert into emp (name,org,desig,salary,year) values ('emp1','org1','mgr',7000,2007);
insert into emp (name,org,desig,salary,year) values ('emp1','org1','mgr',7000,2008);
insert into emp (name,org,desig,salary,year) values ('emp1','org1','mgr',7000,2010);
commit;
SELECT e.name,e.org,e.desig,min(e.year) FROM emp e,(
SELECT e1.name,e1.org,e1.desig,e1.salary FROM emp e1
GROUP BY (e1.name,e1.org,e1.desig,e1.salary)
HAVING COUNT(*) >1) min_query
WHERE min_query.name = e.name AND min_query.org = e.org AND min_query.desig =e.desig
AND min_query.salary = e.salary
group by (e.name,e.org,e.desig);

With the above query I can get the least year value where the emp has the maximum salary, but it returns only one record. I want all the records that are minimum compared to the max year value.
Required output
emp1 org1 mgr 7000 2008
emp1 org1 mgr 7000 2007

Please help me with this..

Frank,
Can I write the query like this in case of duplicates?
Definitely there would have been a better way than the query I've written.
WITH got_analytics AS
(
SELECT name, org, desig, salary, year
, MAX (SALARY) OVER ( PARTITION BY NAME, ORG, DESIG) AS MAX_SALARY
, ROW_NUMBER () OVER ( PARTITION BY NAME, ORG, DESIG, SALARY
ORDER BY year DESC
) AS YEAR_NUM
FROM (SELECT 'emp1' AS NAME, 'org1' AS ORG, 'mgr' AS DESIG, 3000 AS SALARY, 2005 AS YEAR FROM DUAL UNION ALL
SELECT 'emp1','org1','mgr',4000,2007 FROM DUAL UNION ALL
SELECT 'emp1','org1','mgr',4000,2008 FROM DUAL UNION ALL
SELECT 'emp1','org1','mgr',7000,2007 FROM DUAL UNION ALL
SELECT 'emp1','org1','mgr',7000,2007 FROM DUAL UNION ALL
SELECT 'emp1','org1','mgr',7000,2008 FROM DUAL UNION ALL
SELECT 'emp1','org1','mgr',7000,2010 FROM DUAL UNION ALL
SELECT 'emp1','org1','mgr',7000,2010 FROM DUAL)
)
SELECT name, org, desig, salary, year
FROM got_analytics
WHERE salary = max_salary
AND YEAR_NUM > 1
Result:
emp1 org1 mgr 7000 2010
emp1 org1 mgr 7000 2008
emp1 org1 mgr 7000 2007
emp1 org1 mgr 7000 2007
WITH got_analytics AS
(
SELECT name, org, desig, salary, year
, MAX (SALARY) OVER ( PARTITION BY NAME, ORG, DESIG) AS MAX_SALARY
, ROW_NUMBER () OVER ( PARTITION BY NAME, ORG, DESIG, SALARY
ORDER BY year DESC
) AS YEAR_NUM
, ROW_NUMBER () OVER ( PARTITION BY NAME, ORG, DESIG, SALARY, Year
ORDER BY YEAR DESC
) AS year_num2
FROM (SELECT 'emp1' AS NAME, 'org1' AS ORG, 'mgr' AS DESIG, 3000 AS SALARY, 2005 AS YEAR FROM DUAL UNION ALL
SELECT 'emp1','org1','mgr',4000,2007 FROM DUAL UNION ALL
SELECT 'emp1','org1','mgr',4000,2008 FROM DUAL UNION ALL
SELECT 'emp1','org1','mgr',7000,2007 FROM DUAL UNION ALL
SELECT 'emp1','org1','mgr',7000,2007 FROM DUAL UNION ALL
SELECT 'emp1','org1','mgr',7000,2008 FROM DUAL UNION ALL
SELECT 'emp1','org1','mgr',7000,2010 FROM DUAL UNION ALL
SELECT 'emp1','org1','mgr',7000,2010 FROM DUAL)
)
SELECT name, org, desig, salary, year
FROM got_analytics
WHERE salary = max_salary
AND YEAR_NUM > 1
AND YEAR_NUM2 < 2
Result:
emp1 org1 mgr 7000 2008
emp1 org1 mgr 7000 2007 -
How to get the data from pcl2 cluster for TCRT table.
Hi frndz,
How to get the data from the PCL2 cluster for the TCRT table for US payroll.
Thanks in advance.
Harisumanth.Ch

Please take a look at the sample program EXAMPLE_PNP_GET_PAYROLL in your system. There are numerous other ways to read payroll results. Please use the search-forum option and you will surely get a lot of hits.
~Suresh -
[ADF Help] How to create a view for multiple tables
Hi,
I am using JDeveloper 11g and the ADF framework, and am trying to create a view to update multiple tables.
ex:
Table A has these fields: ID, Name
Table B has these fields: ID, Address
A.ID and B.ID are primary keys.
B.ID has FK relationship with A.ID
(basically, these tables have one-to-one relation)
I want to create a view object, which contains these fields: B.ID (or A.ID), A.Name, B.Address.
So I can execute C,R,U,D for both tables.
I create these tables in DB, and create entity objects for these tables.
So there are 2 entity objects and 1 association.
Then I create a view object based on B and add fields of A into the view:
If the association is not a "Composition Association",
when I run the model ("Oracle Business Component Browser") and try to insert new data, the fields of A can't be edited.
If the association is a "Composition Association", and click the insert button, I will get
"oracle.jbo.InvalidOwnerException: JBO-25030: Failed to find or invalidate owning entity"
If I create a view object based on A and add fields of B into the view:
when I run the model and try to insert new data, the fields of B can't be edited, whether or not the association is a composition association.
So... how can I create a view for multiple tables correctly?
Thanks for any advices!
Here are some pictures about my problem, if there is any unclear point, please let me know.
http://leonjava.blogspot.com/2009_10_01_archive.html
(A is Prod, B is CpuSocket)
Edited by: user8093176 on Oct 25, 2009 12:29 AM

Hi Branislav,
Thanks, but the result is same ....
In step 2 of creating the view object, I can select entity objects to be added into the view.
If I select A first and then B (the "Source Usage" of B is A), then finish the wizard:
when I try to create a new record in the view, I can't edit any properties of B (those fields are disabled).
If I select B first and then A when creating the view object, the result is similar ...
Thanks for any further suggestion.
Leon -
How to look for the Table Name
Hi Friends,
Sometimes we need to download a table for the desired information if it is not available from a particular report. How do we look for the table name? Is there a report, or a particular field, where we can find the name of the table?
Thanks for the assistance.
Regards

Hi Friend,
If you want to see the structures, go to SE11. Sometimes you cannot find the table name but only the field name. In such a case, to find the table name, go to SE90.
ABAP Dictionary > Fields > Table Fields.
Now enter the field name on the right-hand side of the screen and execute. You will see all the tables in which that field is used.
Regards,
Jigar -
Unable to generate spool for two tables in report output
Hi,
I created a report with two custom containers displaying two tables in the output. When I execute the report in the background, a spool is created only for the table in the top custom container.
What should be done to generate a spool for both tables in the two different custom containers?
Thanks,
Abhiram.

Hi,
Check the below link for your requirement.
<<link removed>>
Regards,
Goutam Kolluru.
Edited by: kishan P on Feb 2, 2012 1:50 PM -
Unable to retrieve nametab info for logic table BSEG during Database Export
Hi,
Our aim is to migrate to new hardware by doing a database export of the existing (Unicode) system and importing it on the new hardware.
I am doing the database export on SAP 4.7 SR1, HP-UX, Oracle 9i (a Unicode system), and during the "Post Load Processing" phase of the database export I got the error shown in SAPCLUST.log:
more SAPCLUST.log
/sapmnt/BIA/exe/R3load: START OF LOG: 20090216174944
/sapmnt/BIA/exe/R3load: sccsid @(#) $Id: //bas/640_REL/src/R3ld/R3load/R3ldmain.c#20
$ SAP
/sapmnt/BIA/exe/R3load: version R6.40/V1.4 [UNICODE]
Compiled Aug 13 2007 16:20:31
/sapmnt/BIA/exe/R3load -ctf E /nas/biaexp2/DATA/SAPCLUST.STR /nas/biaexp2/DB/DDLORA.TPL /SAPinst_DIR/SAPCLUST.TSK ORA -l /SAPinst_DIR/SAPCLUST.log
/sapmnt/BIA/exe/R3load: job completed
/sapmnt/BIA/exe/R3load: END OF LOG: 20090216174944
/sapmnt/BIA/exe/R3load: START OF LOG: 20090216182102
/sapmnt/BIA/exe/R3load: sccsid @(#) $Id: //bas/640_REL/src/R3ld/R3load/R3ldmain.c#20
$ SAP
/sapmnt/BIA/exe/R3load: version R6.40/V1.4 [UNICODE]
Compiled Aug 13 2007 16:20:31
/sapmnt/BIA/exe/R3load -datacodepage 1100 -e /SAPinst_DIR/SAPCLUST.cmd -l /SAPinst_DIR/SAPCLUST.log -stop_on_error
(DB) INFO: connected to DB
(DB) INFO: DbSlControl(DBSL_CMD_NLS_CHARACTERSET_GET): UTF8
(GSI) INFO: dbname = "BIA20071101021156
(GSI) INFO: vname = "ORACLE "
(GSI) INFO: hostname = "tinsp041
(GSI) INFO: sysname = "HP-UX"
(GSI) INFO: nodename = "tinsp041"
(GSI) INFO: release = "B.11.11"
(GSI) INFO: version = "U"
(GSI) INFO: machine = "9000/800"
(GSI) INFO: instno = "0020293063"
(EXP) TABLE: "AABLG"
(EXP) TABLE: "CDCLS"
(EXP) TABLE: "CLU4"
(EXP) TABLE: "CLUTAB"
(EXP) TABLE: "CVEP1"
(EXP) TABLE: "CVEP2"
(EXP) TABLE: "CVER1"
(EXP) TABLE: "CVER2"
(EXP) TABLE: "CVER3"
(EXP) TABLE: "CVER4"
(EXP) TABLE: "CVER5"
(EXP) TABLE: "DOKCL"
(EXP) TABLE: "DSYO1"
(EXP) TABLE: "DSYO2"
(EXP) TABLE: "DSYO3"
(EXP) TABLE: "EDI30C"
(EXP) TABLE: "EDI40"
(EXP) TABLE: "EDIDOC"
(EXP) TABLE: "EPIDXB"
(EXP) TABLE: "EPIDXC"
(EXP) TABLE: "GLS2CLUS"
(EXP) TABLE: "IMPREDOC"
(EXP) TABLE: "KOCLU"
(EXP) TABLE: "PCDCLS"
(EXP) TABLE: "REGUC"
myCluster (55.16.Exp): 1557: inconsistent field count detected.
myCluster (55.16.Exp): 1558: nametab says field count (TDESCR) is 305.
myCluster (55.16.Exp): 1561: alternate nametab says field count (TDESCR) is 304.
myCluster (55.16.Exp): 1250: unable to retrieve nametab info for logic table BSEG
myCluster (55.16.Exp): 8033: unable to retrieve nametab info for logic table BSEG
myCluster (55.16.Exp): 2624: failed to convert cluster data of cluster item.
myCluster: RFBLG *003**IN07**0001100000**2007*
myCluster (55.16.Exp): 318: error during conversion of cluster item.
myCluster (55.16.Exp): 319: affected physical table is RFBLG.
(CNV) ERROR: data conversion failed. rc = 2
(RSCP) WARN: env I18N_NAMETAB_TIMESTAMPS = IGNORE
(DB) INFO: disconnected from DB
/sapmnt/BIA/exe/R3load: job finished with 1 error(s)
/sapmnt/BIA/exe/R3load: END OF LOG: 20090216182145
/sapmnt/BIA/exe/R3load: START OF LOG: 20090217115935
/sapmnt/BIA/exe/R3load: sccsid @(#) $Id: //bas/640_REL/src/R3ld/R3load/R3ldmain.c#20
$ SAP
/sapmnt/BIA/exe/R3load: version R6.40/V1.4 [UNICODE]
Compiled Aug 13 2007 16:20:31
/sapmnt/BIA/exe/R3load -datacodepage 1100 -e /SAPinst_DIR/SAPCLUST.cmd -l /SAPinst_DIR/SAPCLUST.log -stop_on_error
(DB) INFO: connected to DB
(DB) INFO: DbSlControl(DBSL_CMD_NLS_CHARACTERSET_GET): UTF8
(GSI) INFO: dbname = "BIA20071101021156
(GSI) INFO: vname = "ORACLE "
(GSI) INFO: hostname = "tinsp041
(GSI) INFO: sysname = "HP-UX"
(GSI) INFO: nodename = "tinsp041"
(GSI) INFO: release = "B.11.11"
(GSI) INFO: version = "U"
(GSI) INFO: machine = "9000/800"
(GSI) INFO: instno = "0020293063"
myCluster (55.16.Exp): 1557: inconsistent field count detected.
myCluster (55.16.Exp): 1558: nametab says field count (TDESCR) is 305.
myCluster (55.16.Exp): 1561: alternate nametab says field count (TDESCR) is 304.
myCluster (55.16.Exp): 1250: unable to retrieve nametab info for logic table BSEG
myCluster (55.16.Exp): 8033: unable to retrieve nametab info for logic table BSEG
myCluster (55.16.Exp): 2624: failed to convert cluster data of cluster item.
myCluster: RFBLG *003**IN07**0001100000**2007*
myCluster (55.16.Exp): 318: error during conversion of cluster item.
myCluster (55.16.Exp): 319: affected physical table is RFBLG.
(CNV) ERROR: data conversion failed. rc = 2
(RSCP) WARN: env I18N_NAMETAB_TIMESTAMPS = IGNORE
(DB) INFO: disconnected from DB
/sapmnt/BIA/exe/R3load: job finished with 1 error(s)
/sapmnt/BIA/exe/R3load: END OF LOG: 20090217115937
The main error is "unable to retrieve nametab info for logic table BSEG".
Any help with this issue would be highly appreciated.
Thanks
Sunil
Hello,
according to this output:
/sapmnt/BIA/exe/R3load -datacodepage 1100 -e /SAPinst_DIR/SAPCLUST.cmd -l /SAPinst_DIR/SAPCLUST.log -stop_on_error
you are doing the export with a non-Unicode SAP codepage. For a Unicode export the codepage has to be 4102/4103 (see SAP note #552464 for details). There is a screen in the SAPinst dialogs that allows changing the codepage; 1100 is the default in some SAPinst versions.
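For illustration, a sketch of what the corrected invocation would look like, reusing the exact command from the log above and changing only the codepage (the choice of 4102 assumes a big-endian host, which HP-UX PA-RISC is; on little-endian hosts it would be 4103):

```shell
# Hypothetical corrected call, based on the command in the log above:
# replace the non-Unicode codepage 1100 with a Unicode one
# (4102 = big-endian UTF-16, 4103 = little-endian UTF-16).
/sapmnt/BIA/exe/R3load -datacodepage 4102 \
    -e /SAPinst_DIR/SAPCLUST.cmd \
    -l /SAPinst_DIR/SAPCLUST.log \
    -stop_on_error
```

In practice you would normally not edit the command by hand but set the codepage on the SAPinst screen mentioned above, so that all generated R3load calls pick it up.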
Best Regards,
Michael