Loading tree data lazily?
I have a tree that displays some EJB3 entities; the entities are
related to each other in a tree hierarchy. I implement
ITreeDataDescriptor to load the entities into the tree, which works
fine. However, when I try to expand a branch node, I get a
"PersistenceCollection initializing" error, because the collections are
marked to load lazily. So I need to do the following in sequence:
1. Intercept the branch-opening event (I can do this by
listening for itemOpening);
2. Find out which item is being opened;
3. Load the collection for that item;
4. Wait until the collection finishes loading, then continue
with the branch open.
I can figure out how to do steps 1 and 3, but have no idea
how 2 and 4 could be done. I would appreciate it if someone could
shed some light on this.
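Steps 2 and 4 both hinge on the event object: in Flex, the itemOpening TreeEvent carries the item being opened and is cancelable with preventDefault(), after which the open can be re-issued once the data arrives (e.g. via expandItem from the remote call's result handler). Below is a minimal, language-neutral sketch of that intercept-load-resume pattern, written in Python; the class and method names are hypothetical stand-ins for the real Flex Tree APIs, not the actual framework calls:

```python
# Sketch of the intercept-load-resume pattern; step numbers match the list above.
# Node/Tree and their methods are illustrative stand-ins for the Flex Tree API.

class Node:
    def __init__(self, label, loaded=False):
        self.label = label
        self.loaded = loaded      # has the lazy collection been fetched yet?
        self.children = []

class Tree:
    def __init__(self):
        self.open_nodes = set()

    def load_children(self, node, on_complete):
        # Step 3: stand-in for the async remote call; a real client would
        # receive the result in a result-event handler.
        node.children = [Node(node.label + "/child")]
        node.loaded = True
        on_complete(node)

    def item_opening(self, node):
        # Step 1: intercept the open. Step 2: the event tells us which
        # item is being opened (event.item in Flex).
        if not node.loaded:
            # Equivalent of event.preventDefault(): cancel this open,
            # then resume once loading finishes.
            self.load_children(node, self.resume_open)
            return False
        self.open_nodes.add(node)
        return True

    def resume_open(self, node):
        # Step 4: data has arrived; re-issue the open (expandItem in Flex).
        self.item_opening(node)

tree = Tree()
root = Node("root")
tree.item_opening(root)
```

In the real tree, resume_open would be the remoting result handler calling expandItem(event.item, true) on the Tree component.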
"wt.ustc" <[email protected]> wrote in
message
news:gmva76$2mq$[email protected]..
This might help:
http://flexdiary.blogspot.com/2009/01/lazy-loading-tree-example-file-posted.html
Similar Messages
-
Shared Services 9.3.1 error - "...error loading tree data"
We have recently upgraded to version 9.3.1 and have been using it for several months now.
This past week, when logging in to Shared Services, we are suddenly experiencing a pop up with an error that says "There was a communication error while loading tree data".
Has anyone seen this error, and why would it suddenly start appearing when everything has worked for months and no changes have been applied?
Thanks for any advice; I will post screens and some sections from the log files soon...
We are also facing this issue while accessing the Hyperion Planning 9.3.1 application from Workspace, but it allows us to access it through the Planning URL. The Workspace classic administration for Planning works perfectly fine. Let me know in case you get any solution.
-
Shared Services Error: Failed to load Tree
Hi All,
I am trying to open up an application in Shared services under the projects folder:
path : Projects -> Analytic Servers:<Server Name> -> <Application>
When I do this the page just keeps loading for a very long time and then gives me an error which says "Failed to load Tree"
Can anyone help me out on what the problem is here?
The exact error message is : "There was a communication error while loading tree data"
Regards,
Anindyo
Edited by: user644118 on Oct 31, 2008 4:27 PM
Anindyo,
Which operating system are you working on? I remember someone faced the same problem when it was Solaris, though I am not very sure. I guess it has something to do with the 'Hyperion Remote Authentication Module'.
Sandeep Reddy Enti
HCC
http://hyperionconsultancy.com/ -
Dynamic loading tree and data grid
Hi All,
I am new to Java as well as JSF. I am very impressed with JSF and the Sun Java Studio Creator IDE. I made a sample project.
Now I want to load a tree and a data grid with dynamic values; how can I achieve this?
Please help me find some examples.
Also, I need to know how I can make a SOAP call using JSF.
Thanks
CSCS
To dynamically load a Basic Table (ui:table) from a database, see http://developers.sun.com/prodtech/javatools/jscreator/learning/tutorials/2/databoundcomponents.html
To dynamically load a Basic Table from other sources of data that are loaded into an array or such, see http://blogs.sun.com/roller/page/divas?entry=table_component_sample_project
To dynamically CREATE a Basic Table, see http://developers.sun.com/prodtech/javatools/jscreator/reference/tips/2/createTableDynamically.html and http://developers.sun.com/prodtech/javatools/jscreator/reference/tips/2/add_component_to_table.html
To dynamically create an HTML table on the fly, see section 7.5 in Chapter 7 of the Field Guide at http://developers.sun.com/prodtech/javatools/jscreator/learning/bookshelf/index.html
To dynamically create a tree, see Dynamic Tree example at http://developers.sun.com/prodtech/javatools/jscreator/reference/index.jsp.
A tutorial for dynamically creating a tree from a database is work in progress.
Hope this helps,
Chris -
I have hierarchy data in R/3; how will I load that data from R/3 to BW?
Hi all,
I have my hierarchy data on the R/3 side; how will I load that data from R/3 to the BW side?
Regards,
Kiran Kumar
Hi Kiran,
Here is the procedure:
1. In the Data Warehousing Workbench under Modeling, select the InfoSource tree.
2. Select the InfoSource (with direct update) for the InfoObject, to which you want to load the hierarchy.
3. Choose Additional Functions → Create Transfer Rules from the context menu of the hierarchy table object for the InfoObject. The Assign Source System dialog box appears.
4. Select the source system from which the hierarchy is to be loaded. The InfoSource maintenance screen appears.
○ If the DataSource only supports the transfer method IDoc, then only the transfer structure is displayed (tab page DataSource/Transfer Structure).
○ If the DataSource also supports transfer method PSA, you can maintain the transfer rules (tab page Transfer Rules).
If it is possible and useful, we recommend that you use the transfer method PSA and set the indicator Expand Leaf Values and Node InfoObjects. You can then also load hierarchies with characteristics whose node name has a length >32.
5. Save your entries and go back. The InfoSource tree for the Data Warehousing Workbench is displayed.
6. Choose Create InfoPackage from the context menu (see Maintaining InfoPackages). The Create InfoPackage dialog box appears.
7. Enter the description for the InfoPackage. Select the DataSource (data element Hierarchies) that you require and confirm your entries.
8. On the Tab Page: Hierarchy Selection, select the hierarchy that you want to load into your BI system.
Specify if the hierarchy should be automatically activated after loading or be marked for activation.
Select an update method (Full Update, Insert Subtree, Update Subtree).
If you want to load a hierarchy from an external system with BAPI functionality, make BAPI-specific restrictions, if necessary.
9. If you want to load a hierarchy from a flat file, maintain the tab page: external data.
10. Maintain the tab page: processing.
11. Maintain the tab page: updating.
12. To schedule the InfoPackage, you have the following options:
○ (Manually) in the scheduler, see Scheduling InfoPackages
○ (Automatically) using a process chain (see Loading Hierarchies Using a Process Chain)
When you upload hierarchies, the system carries out a consistency check, making sure that the hierarchy structure is correct. Error messages are logged in the Monitor. You can get technical details about the error and how to correct it in the long text for the respective message.
For more info visit this help pages on SAP Help:
http://help.sap.com/saphelp_nw04s/helpdata/en/80/1a6729e07211d2acb80000e829fbfe/frameset.htm
http://help.sap.com/saphelp_nw04s/helpdata/en/3d/320e3d89195c59e10000000a114084/frameset.htm
http://help.sap.com/saphelp_nw04/helpdata/en/80/1a6729e07211d2acb80000e829fbfe/frameset.htm
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/4dae0795-0501-0010-cc96-fe3a9e8959dc
Cheers,
Habeeb -
Best method to load XML data into Oracle
Hi,
I have to load XML data into Oracle tables. I tried using different options and have run into a dead end in each of those. I do not have knowledge of java and hence have restricted myself to PL/SQL solutions. I tried the following options.
1. Using the DBMS_XMLSave package: expects the ROWSET and ROW tags. Cannot change the format of the incoming XML file (gives error oracle.xml.sql.OracleXMLSQLException: Start of root element expected).
2. Using the XMLPARSER and XMLDOM PL/SQL APIs : Works fine for small files. Run into memory problems for large files (Gives error java.lang.OutOfMemoryError). Have tried increasing the JAVA_POOL_SIZE but does not work. I am not sure whether I am changing the correct parameter.
I have read that the SAX API does not hog memory resources since it does not build the entire DOM tree structure. But the problem is that it does not have a PL/SQL implementation.
Can anyone please guide me in the right direction as to the best way to achieve this through PL/SQL? I have not designed the tables, so I am flexible on using a purely relational or object-relational design, although I would prefer to keep a purely relational design. (I had tried object-relational for 1 and purely relational for 2 above.)
The XML files are in the following format, (EXAMINEEs with single DEMOGRAPHIC and multiple TESTs)
<?xml version="1.0"?>
<Root_Element>
<Examinee>
<MACode>A</MACode>
<TestingJID>TN</TestingJID>
<ExamineeID>100001</ExamineeID>
<CreateDate>20020221</CreateDate>
<Demographic>
<InfoDate>20020221</InfoDate>
<FirstTime>1</FirstTime>
<LastName>JANE</LastName>
<FirstName>DOE</FirstName>
<MiddleInitial>C</MiddleInitial>
<LithoNumber>73</LithoNumber>
<StreetAddress>SomeAddress</StreetAddress>
<City>SomeCity</City>
<StateCode>TN</StateCode>
<ZipCode>37000</ZipCode>
<PassStatus>1</PassStatus>
</Demographic>
<Test>
<TestDate>20020221</TestDate>
<TestNbr>1</TestNbr>
<SrlNbr>13773784</SrlNbr>
</Test>
<Test>
<TestDate>20020221</TestDate>
<TestNbr>2</TestNbr>
<SrlNbr>13773784</SrlNbr>
</Test>
</Examinee>
</Root_Element>
Thanks for the help.
Please refer to the XSU (XML SQL Utility) or the TransX Utility (for multi-language documents) if you want to load data in XML format into the database.
Both of them require special XML formats, please first refer to the following docs:
http://otn.oracle.com/docs/tech/xml/xdk_java/doc_library/Production9i/doc/java/xsu/xsu_userguide.html
http://otn.oracle.com/docs/tech/xml/xdk_java/doc_library/Production9i/doc/java/transx/readme.html
You can use XSLT to transform your document to the required format.
If your document is large, you can use the SAX method to insert data into the database, but you need to write the code.
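Since SAX has no PL/SQL implementation, one pragmatic route is to stream-parse the file outside the database and emit relational rows for the EXAMINEE, DEMOGRAPHIC and TEST tables. A hedged sketch follows (in Python rather than PL/SQL; the table mapping is illustrative). xml.etree.ElementTree's iterparse gives SAX-like flat memory use because each Examinee subtree is discarded after processing:

```python
import xml.etree.ElementTree as ET
from io import StringIO

# Trimmed copy of the sample document from the question.
XML = """<?xml version="1.0"?>
<Root_Element>
  <Examinee>
    <ExamineeID>100001</ExamineeID>
    <Demographic><LastName>JANE</LastName></Demographic>
    <Test><TestNbr>1</TestNbr></Test>
    <Test><TestNbr>2</TestNbr></Test>
  </Examinee>
</Root_Element>"""

def stream_rows(source):
    """Yield one (examinee_id, last_name, test_numbers) tuple per Examinee,
    clearing each element after use so memory stays flat on large files."""
    rows = []
    for event, elem in ET.iterparse(source, events=("end",)):
        if elem.tag == "Examinee":
            examinee_id = elem.findtext("ExamineeID")
            last_name = elem.findtext("Demographic/LastName")
            tests = [t.findtext("TestNbr") for t in elem.findall("Test")]
            rows.append((examinee_id, last_name, tests))
            elem.clear()   # discard the processed subtree (SAX-like memory use)
    return rows

rows = stream_rows(StringIO(XML))
```

Each tuple then maps to one insert into the parent EXAMINEE/DEMOGRAPHIC row plus one insert per TEST child, which fits a purely relational design.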
The following sample may be useful:
http://otn.oracle.com/tech/xml/xdk_sample/xdksample_040602i.html -
Order of loading Master Data - Fact or Fiction
I understand that for loading Master Data for InfoCube 0FIAA_C01 (or any other) you should load starting from the lowest level.
That means for every characteristic in the cube you have to check and see if any of the InfoObjects have Master Data attributes, and if any of those attributes have attributes, and so on. This quickly becomes a multi-level structure.
Part of the tree structure for 0FIAA_C01 would look like:
0FIAA_C01
..........0COMP_CODE
....................0CHRT_ACCTS
....................0C_CTR_AREA
..........0ASSET_AFAB
..........0ASSET
....................0ACTTYPE
....................0BUS_AREA
<snip>
So does that mean that 0bus_area should be loaded first before 0asset?
Is this fact or fiction?
If its a fact I am wondering what tools SAP has for determining the order of loading Master Data.
Discussion points and tools for facts awarded!
Mike
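If such a load order were really required, it would be exactly a topological sort of the attribute dependency graph: each InfoObject loaded after the attributes it references. A small illustrative sketch (Python; the graph is transcribed from the partial tree above, and whether BW actually enforces this order is precisely the fact-or-fiction question):

```python
from graphlib import TopologicalSorter

# Each InfoObject mapped to the attributes it depends on,
# transcribed from the partial 0FIAA_C01 tree above.
deps = {
    "0FIAA_C01":  ["0COMP_CODE", "0ASSET_AFAB", "0ASSET"],
    "0COMP_CODE": ["0CHRT_ACCTS", "0C_CTR_AREA"],
    "0ASSET":     ["0ACTTYPE", "0BUS_AREA"],
}

# static_order() emits every node after all of its dependencies,
# so leaf attributes come first and the cube comes last.
order = list(TopologicalSorter(deps).static_order())
print(order)
```

Under this model, yes: 0BUS_AREA would be loaded before 0ASSET, and everything before 0FIAA_C01.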
Edited by: Michael Hill on Feb 12, 2008 4:52 PM
Hi,
My master data loads are largely in the area of HR.
The only order I follow while loading master data is, for a particular InfoObject, the order of text, attributes and hierarchy: text >> attributes >> hierarchy. Frankly, I have not checked doing it otherwise.
Across different master data InfoObjects I see no need for any order, at least in HR. Generally speaking, a master data object has data with an independent existence, as extracted from R/3 or other sources, and is not derived from any other master data object in BW.
Master data as its name implies should not have referential integrity checks with other master data.
It would be good to know if someone has real experience to the contrary.
Mathew. -
How to load master data and hierarchies from R/3 systems
HI all,
how to load master data and hierarchies from R/3 systems.
Please explain the steps.
Thanks,
cheta.
Hi,
It is normally done as follows: transfer the master DataSources from RSA5 to RSA6, then replicate the DataSources into BW, assign each DataSource to an InfoSource, create an InfoPackage, and load the data into the master tables.
Generally, the control parameters for data transfer from a source system are maintained in extractor
customizing. In extractor customizing, you can access the corresponding source system in the source
system tree of the SAP BW Administrator Workbench by using the context menu.
To display or change the settings for data transfer at source system level, choose Business
Information Warehouse → General Settings → Maintaining Control Parameters for Data Transfer.
Note: The values for the data transfer are not hard limitations. It depends on the DataSource if these
limits can be followed.
In the SAP BW Scheduler, you can determine the control parameters for data transfer for individual
DataSources. You can determine the size of the data packet, the number of parallel processes for
data transfer and the frequency with which the status IDocs are sent, for every possible update
method for a DataSource.
To do so, choose Scheduler → DataSource → Default Settings for Data transfer.
In this way you can, for example, update transaction data in larger data packets in the PSA. If you
want to update master data in dialog mode, smaller packets ensure faster processing.
Hope this info helps.
Thanks, Ramoji.
Loading Master Data - Select the 'right' attributes
Hello everybody,
hopefully some expert can answer my question.
I want to load Master Data for the InfoObject 0CRM_PROD.
When I look at the tree beneath the 0CRM_PROD attributes following DataSources are listed:
- 0CRM_PRODUCT_ATTR
- 0CRM_PROD_ATTR (obsolete)
- 0CRM_PR_MAT_ATTR
- 0CRM_PR_REST_IN_ATTR
- 0CRM_TR_CONTROL_ATTR
- 0PRODUCT_ATTR (obsolete)
- 0PRODUCT_GENERAL_ATTR
- 0PRODUCT_STATUS_ATTR
- 0PR_BASE_UNIT_ATTR
- 0PR_COMMERCIAL_ATTR
- 0PR_IL_PROREF_ATTR
- 0PR_PROD_VAR_ATTR
- 0PR_PURCHASE_CATEG_ATTR
- 0PR_SALES_CATEG_ATTR
Now my question is which attributes for 0CRM_PROD do I have to load?
Are any of them mandatory or basic attributes?
And how do the attributes 0CRM_PRODUCT_ATTR and 0PRODUCT_ATTR differ from each other?
Thanks in advance!
Chris
Hello,
The BW DataSources used in CRM 4.0 for the SAP product are replaced with new DataSources.
Please see the Note 673053 - SAP product: New DataSources in PI_BASIS 2004_1_640
Just for Reference
[Integration of SAP Products in SAP BW|http://help.sap.com/saphelp_nw04/helpdata/en/f8/580f40763f1e07e10000000a1550b0/content.htm]
Thanks
Chandran -
Flex Lazy Loading Tree example posted
Hi, all;
I've posted a new example to my blog:
http://flexdiary.blogspot.com/2009/01/lazy-loading-tree-example-file-posted.html
This example demonstrates
- Using lazy loading with a Tree component.
- Using an interface rather than a concrete type to allow the
LazyDataDescriptor to work with any class that implements
LazyLoading.
- An all-actionscript remoting connection
It also has the PHP files and instructions for creating the MySQL database that provides the service data.
"danger42" <[email protected]> wrote in message news:gkig47$hqu$[email protected]..
> Very cool!
Thanks :-) -
How can I load my data faster? Is there a SQL solution instead of PL/SQL?
11.2.0.2
Solaris 10 sparc
I need to backfill invoices from a customer. The raw data has 3.1 million records. I have used PL/SQL to load these invoices into our system (dev); however, our issue is the amount of time it takes to run the load, currently about 4 hours. (The raw data has been loaded into a staging table.)
My research keeps coming back to one concept: SQL is faster than PL/SQL. Where I'm stuck is the need to programmatically load the data. The invoice table has a sequence on it (primary key = invoice_id); the invoice_header and invoice_address tables use the invoice_id as a foreign key. So my script takes advantage of knowing the primary key and uses it on the subsequent inserts into the subordinate invoice_header and invoice_address tables, respectively.
My script is below. What I'm asking is if there are other ideas on the quickest way to load this data...what am I not considering? I have to load the data in dev, qa, then production so the sequences and such change between the environments. I've dummied down the code to protect the customer; syntax and correctness of the code posted here (on the forum) is moot...it's only posted to give the framework for what I currently have.
Any advice would be greatly appreciated; how can I load the data faster knowing that I need to know sequence values for inserts into other tables?
DECLARE
v_inv_id invoice.invoice_id%TYPE;
v_inv_addr_id invoice_address.invoice_address_id%TYPE;
errString invoice_errors.sqlerrmsg%TYPE;
v_guid VARCHAR2 (128);
v_str VARCHAR2 (256);
v_err_loc NUMBER;
v_count NUMBER := 0;
l_start_time NUMBER;
TYPE rec IS RECORD (
BILLING_TYPE VARCHAR2 (256),
CURRENCY VARCHAR2 (256),
BILLING_DOCUMENT VARCHAR2 (256),
DROP_SHIP_IND VARCHAR2 (256),
TO_PO_NUMBER VARCHAR2 (256),
TO_PURCHASE_ORDER VARCHAR2 (256),
DUE_DATE DATE,
BILL_DATE DATE,
TAX_AMT VARCHAR2 (256),
PAYER_CUSTOMER VARCHAR2 (256),
TO_ACCT_NO VARCHAR2 (256),
BILL_TO_ACCT_NO VARCHAR2 (256),
NET_AMOUNT VARCHAR2 (256),
NET_AMOUNT_CURRENCY VARCHAR2 (256),
ORDER_DT DATE,
TO_CUSTOMER VARCHAR2 (256),
TO_NAME VARCHAR2 (256),
FRANCHISES VARCHAR2 (4000),
UPDT_DT DATE);
TYPE tab IS TABLE OF rec
INDEX BY BINARY_INTEGER;
pltab tab;
CURSOR c
IS
SELECT billing_type,
currency,
billing_document,
drop_ship_ind,
to_po_number,
to_purchase_order,
due_date,
bill_date,
tax_amt,
payer_customer,
to_acct_no,
bill_to_acct_no,
net_amount,
net_amount_currency,
order_dt,
to_customer,
to_name,
franchises,
updt_dt
FROM BACKFILL_INVOICES;
BEGIN
l_start_time := DBMS_UTILITY.get_time;
OPEN c;
LOOP
FETCH c
BULK COLLECT INTO pltab
LIMIT 1000;
v_err_loc := 1;
FOR i IN 1 .. pltab.COUNT
LOOP
BEGIN
v_inv_id := SEQ_INVOICE_ID.NEXTVAL;
v_guid := 'import' || TO_CHAR (CURRENT_TIMESTAMP, 'hhmissff');
v_str := str_parser (pltab (i).FRANCHISES); --function to string parse - this could be done in advance, yes.
v_err_loc := 2;
v_count := v_count + 1;
INSERT INTO invoice nologging
VALUES (v_inv_id,
pltab (i).BILL_DATE,
v_guid,
'111111',
'NONE',
TO_TIMESTAMP (pltab (i).BILL_DATE),
TO_TIMESTAMP (pltab (i).UPDT_DT),
'READ',
'PAPER',
pltab (i).payer_customer,
v_str,
'111111');
v_err_loc := 3;
INSERT INTO invoice_header nologging
VALUES (v_inv_id,
TRIM (LEADING 0 FROM pltab (i).billing_document), --invoice_num
NULL,
pltab (i).BILL_DATE, --invoice_date
pltab (i).TO_PO_NUMBER,
NULL,
pltab (i).net_amount,
NULL,
pltab (i).tax_amt,
NULL,
NULL,
pltab (i).due_date,
NULL,
NULL,
NULL,
NULL,
NULL,
TO_TIMESTAMP (SYSDATE),
TO_TIMESTAMP (SYSDATE),
PLTAB (I).NET_AMOUNT_CURRENCY,
(SELECT i.bc_value
FROM invsvc_owner.billing_codes i
WHERE i.bc_name = PLTAB (I).BILLING_TYPE),
PLTAB (I).BILL_DATE);
v_err_loc := 4;
INSERT INTO invoice_address nologging
VALUES (invsvc_owner.SEQ_INVOICE_ADDRESS_ID.NEXTVAL,
v_inv_id,
'BLAH INITIAL',
pltab (i).BILL_DATE,
NULL,
pltab (i).to_acct_no,
NULL,
NULL,
NULL,
NULL,
NULL,
NULL,
NULL,
SYSTIMESTAMP,
NULL);
v_err_loc := 5;
INSERT INTO invoice_address nologging
VALUES ( SEQ_INVOICE_ADDRESS_ID.NEXTVAL,
v_inv_id,
'BLAH',
pltab (i).BILL_DATE,
NULL,
pltab (i).TO_ACCT_NO,
NULL,
NULL,
NULL,
NULL,
NULL,
NULL,
NULL,
SYSTIMESTAMP,
NULL);
v_err_loc := 6;
INSERT INTO invoice_address nologging
VALUES ( SEQ_INVOICE_ADDRESS_ID.NEXTVAL,
v_inv_id,
'BLAH2',
pltab (i).BILL_DATE,
NULL,
pltab (i).TO_CUSTOMER,
pltab (i).to_name,
NULL,
NULL,
NULL,
NULL,
NULL,
NULL,
SYSTIMESTAMP,
NULL);
v_err_loc := 7;
INSERT INTO invoice_address nologging
VALUES ( SEQ_INVOICE_ADDRESS_ID.NEXTVAL,
v_inv_id,
'BLAH3',
pltab (i).BILL_DATE,
NULL,
'SOME PROPRIETARY DATA',
NULL,
NULL,
NULL,
NULL,
NULL,
NULL,
NULL,
SYSTIMESTAMP,
NULL);
v_err_loc := 8;
INSERT
INTO invoice_event nologging (id,
eid,
root_eid,
invoice_number,
event_type,
event_email_address,
event_ts)
VALUES ( SEQ_INVOICE_EVENT_ID.NEXTVAL,
'111111',
'222222',
TRIM (LEADING 0 FROM pltab (i).billing_document),
'READ',
'some_user@some_company.com',
SYSTIMESTAMP);
v_err_loc := 9;
INSERT INTO backfill_invoice_mapping
VALUES (v_inv_id,
v_guid,
pltab (i).billing_document,
pltab (i).payer_customer,
pltab (i).net_amount);
IF v_count = 10000
THEN
COMMIT;
END IF;
EXCEPTION
WHEN OTHERS
THEN
errString := SQLERRM;
INSERT INTO backfill_invoice_errors
VALUES (
pltab (i).billing_document,
pltab (i).payer_customer,
errString || ' ' || v_err_loc);
COMMIT;
END;
END LOOP;
v_err_loc := 10;
INSERT INTO backfill_invoice_timing
VALUES (
ROUND ( (DBMS_UTILITY.get_time - l_start_time) / 100,
2)
|| ' seconds.',
(SELECT COUNT (1)
FROM backfill_invoice_mapping),
(SELECT COUNT (1)
FROM backfill_invoice_errors),
SYSDATE);
COMMIT;
EXIT WHEN c%NOTFOUND;
END LOOP;
COMMIT;
EXCEPTION
WHEN OTHERS
THEN
errString := SQLERRM;
INSERT INTO backfill_invoice_errors
VALUES (NULL, NULL, errString || ' ' || v_err_loc);
COMMIT;
END;
Hello,
You could use insert all in your case and make use of sequence.NEXTVAL and sequence.CURRVAL like so (excuse any typos - I can't test without table definitions). I've done the first 2 tables, so it's just a matter of adding the rest in...
INSERT ALL
INTO invoice nologging
VALUES ( SEQ_INVOICE_ID.NEXTVAL,
BILL_DATE,
my_guid,
'111111',
'NONE',
CAST(BILL_DATE AS TIMESTAMP),
CAST(UPDT_DT AS TIMESTAMP),
'READ',
'PAPER',
payer_customer,
parsed_franchises,
'111111')
INTO invoice_header
VALUES ( SEQ_INVOICE_ID.CURRVAL,
TRIM (LEADING 0 FROM billing_document), --invoice_num
NULL,
BILL_DATE, --invoice_date
TO_PO_NUMBER,
NULL,
net_amount,
NULL,
tax_amt,
NULL,
NULL,
due_date,
NULL,
NULL,
NULL,
NULL,
NULL,
SYSTIMESTAMP,
SYSTIMESTAMP,
NET_AMOUNT_CURRENCY,
bc_value,
BILL_DATE)
SELECT
src.billing_type,
src.currency,
src.billing_document,
src.drop_ship_ind,
src.to_po_number,
src.to_purchase_order,
src.due_date,
src.bill_date,
src.tax_amt,
src.payer_customer,
src.to_acct_no,
src.bill_to_acct_no,
src.net_amount,
src.net_amount_currency,
src.order_dt,
src.to_customer,
src.to_name,
src.franchises,
src.updt_dt,
str_parser (src.FRANCHISES) parsed_franchises,
'import' || TO_CHAR (CURRENT_TIMESTAMP, 'hhmissff') my_guid,
i.bc_value
FROM BACKFILL_INVOICES src,
invsvc_owner.billing_codes i
WHERE i.bc_name = src.BILLING_TYPE;
Some things to note:
1. Don't commit in a loop - you only add to the run time and load on the box ultimately reducing scalability and removing transactional integrity. Commit once at the end of the job.
2. Make sure you specify the list of columns you are inserting into as well as the values or columns you are selecting. This is good practice as it protects your code from compilation issues in the event of new columns being added to tables. Also it makes it very clear what you are inserting where.
3. If you use WHEN OTHERS THEN... to log something, make sure you either rollback or raise the exception. What you have done in your code is say - I don't care what the problem is, just commit whatever has been done. This is not good practice.
HTH
David
Edited by: Bravid on Oct 13, 2011 4:35 PM -
Which tables are updated while loading master data?
Hello Experts,
Which tables are updated while loading master data? And please provide more information about master data loading and its related settings when creating InfoObjects.
It depends upon the type of master data you are loading....
In all the master data loadings, for every new value of master data an SID will be created in the SID table /BI*/S<INFOOBJECT NAME> irrespective of the type of master data.
But the exceptional tables that get updated depending on the type of master data are.....
If it is a time Independent master data then the /BI*/P<INFOOBJECT NAME> table gets updated with the loaded data.
If it is a time dependent master data then the /BI*/Q<INFOOBJECT NAME> table gets updated with the loaded data.
If the master data is of time Independent Navigational attributes then for every data load the SID table will get updated first and then the /BI*/X<INFOOBJECT NAME> table gets updated with the SID's created in the SID table (NOT WITH THE MASTER DATA).
If the master data is of time dependent navigational attributes then for every data load the SID table will get updated first and then the /BI*/Y<INFOOBJECT NAME> table gets updated with the SID's created in the SID table (NOT WITH THE MASTER DATA).
NOTE: As said above, For all the data in P, Q, T, X, Y tables the SID's will be created in the S table /BI*/S<INFOOBJECT NAME>
NOTE: Irrespective of the time dependency or Independency the VIEW /BI*/M<INFOOBJECT NAME> defined on the top of /BI*/P<INFOOBJECT NAME> & /BI*/Q<INFOOBJECT NAME> tables gives the view of entire master data.
NOTE: it is just a View and it is not a Table. So it will not have any physical storage of data.
All the above tables are for ATTRIBUTES
But when it comes to TEXTS, irrespective of the Time dependency or Independency, the /BI*/T<INFOOBJECT NAME> table gets updated (and of course the S table also).
Naming Convention: /BIC/*<InfoObject Name> or /BI0/*<InfoObject Name>
C = Customer Defined Characteristic
0 = Standard or SAP defined Characteristic
* = P, Q, T, X,Y, S (depending on the above said conditions)
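The naming convention above is mechanical enough to capture in a few lines. A minimal sketch (Python), assuming the convention exactly as stated; the InfoObject names in the example calls are illustrative:

```python
def master_table(infoobject, table_type, custom=True):
    """Build a BW master-data table name for an InfoObject.
    table_type: P, Q, T, X, Y, S (tables) or M (the view).
    For SAP-delivered objects, pass the name without its leading 0;
    the /BI0/ namespace takes its place."""
    assert table_type in "PQTXYSM"
    prefix = "/BIC/" if custom else "/BI0/"   # C = customer, 0 = SAP-delivered
    return prefix + table_type + infoobject

print(master_table("ZCUSTOMER", "S"))                 # SID table of a custom object
print(master_table("COMP_CODE", "P", custom=False))   # time-independent attributes
```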
Thanks & regards
Sasidhar -
Unable to load the data from PSA to INFOCUBE
Hi BI Experts, good afternoon.
I am loading 3 years' data (full load) from R/3 to an InfoCube.
I loaded the data month-wise, so I created 36 InfoPackages.
Everything is fine, but I got an error in Jan 2005 and Mar 2005. It is the same error in both months: Caller 01 and Caller 02 errors (meaning there are invalid characteristics in the PSA data).
So I deleted both the PSA and data target requests and again loaded the data only to PSA.
Here I got the data into PSA without fail.
Then I tried to load the data from PSA to the InfoCube MANUALLY, but it is not happening.
One message came up:
SID 60,758 is smaller than the compress SID of cube ZIC_C03; no request booking.
Please tell me how to solve this problem.
Thanks & Regards
Anjali
Hi Teja,
Thanks for the good response.
How can I check whether it is already compressed or not?
Please reply.
Thanks
Anjali -
Unable to load the data into HFM
Hello,
We created new HFM app configured that with FDM, generated output file through FDM and loaded that file through HFM directly 5-6 times, there was no issue till here.
Then I loaded the file through FDM four times successfully, even for different months. But after four loads I started getting an error; the error log is attached.
Please help us at the earliest.
** Begin fdmFM11XG6A Runtime Error Log Entry [2013-10-30-13:44:26] **
Error:
Code............-2147217873
Description.....System.Runtime.InteropServices.COMException (0x80040E2F): Exception from HRESULT: 0x80040E2F
at HSVCDATALOADLib.HsvcDataLoadClass.Load(String bstrClientFilename, String bstrClientLogFileName)
at fdmFM11XG6A.clsFMAdapter.fDBLoad(String strLoadFile, String strErrFile, String& strDelimiter, Int16& intMethod, Boolean& blnAccumFile, Boolean& blnHasShare, Int16& intMode)
Procedure.......clsHPDataManipulation.fDBLoad
Component.......E:\Opt\Shared\Apps\Hyperion\Install\Oracle\Middleware\EPMSystem11R1\products\FinancialDataQuality\SharedComponents\FM11X-G6-A_1016\AdapterComponents\fdmFM11XG6A\fdmFM11XG6A.dll
Version.........1116
Identification:
User............fdmadmin
Computer Name...EMSHALGADHYFD02
FINANCIAL MANAGEMENT Connection:
App Name........
Cluster Name....
Domain............
Connect Status.... Connection Open
Thanks,
Raam
We are working with the DB team, but they have confirmed that there is no issue with the TB. The process we have followed:
As a standard process, while loading the data from FDM or manually to HFM, we don't write any SQL query. Using the web interface, data is loaded into the HFM application. This data can be viewed by different reporting tools (Smart View (Excel), HFR reports, etc.).
There are no official documents on the Oracle website that talk about the INSERT SQL queries used to insert data into HFM tables. Hyperion does not provide much detail on the internal tables it uses, nor much insight into the internal structure of the HFM system.
As per Hyperion blogs/forums on the internet, HFM stores the base-level data in so-called DCE tables (for example EMHFMFinal_DCE_1_2013, where EMHFMFinal is the application name, 1 identifies the scenario and 2013 the year). Each row in a DCE table contains data for all periods of a given combination of dimensions (also called an intersection).
We are trying to load the same data file with the Replace option (it should delete the existing data before loading the data file).
Unable to load the data into Cube Using DTP in the quality system
Hi,
I am unable to load the data from PSA to Cube using DTP in the quality system for the first time
I am getting errors like "Data package processing terminated" and "Source TRCS 2LIS_17_NOTIF is not allowed".
Please suggest .
Thanks,
Satyaprasad
Hi,
Some InfoObjects were missing while collecting the transport.
I collected those objects and transported them; now it is working fine.
Many Thanks to all
Regards,
Satyaprasad