Best Practice for loading data into BW: CSV vs XML?
Hi Everyone,
I would like to get your thoughts on which file format would be best or most efficient for pushing data into BW: CSV or XML?
Also, what are the advantages/disadvantages?
Appreciate your thoughts.
XML is used only for small data volumes - it is more that it is easier to do it via XML than to build an extraction application for the same, provided the volume is low.
Flat files are used for HUGE (non-SAP) data loads, and for those the choice of data format would definitely be flat files.
Also, XML files are transformed into a flat-file-type format with each tag referring to a field, so the size of an XML file grows large depending on the number of fields.
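As a rough, non-BW-specific illustration of that tag overhead, the sketch below (field names are invented) serializes the same records as CSV and as XML and compares the byte counts:

```python
# Serialize the same 1,000 records as CSV and as XML to show how
# repeating every field name as a tag inflates the XML payload size.
import csv
import io
import xml.etree.ElementTree as ET

rows = [{"material": f"MAT{i:05d}", "plant": "1000", "qty": str(i)} for i in range(1000)]

# CSV: field names appear once, in the header line
csv_buf = io.StringIO()
writer = csv.DictWriter(csv_buf, fieldnames=["material", "plant", "qty"])
writer.writeheader()
writer.writerows(rows)
csv_size = len(csv_buf.getvalue().encode("utf-8"))

# XML: every field repeats its tag name on every record
root = ET.Element("records")
for r in rows:
    rec = ET.SubElement(root, "record")
    for k, v in r.items():
        ET.SubElement(rec, k).text = v
xml_size = len(ET.tostring(root))

print(csv_size, xml_size, round(xml_size / csv_size, 1))
```

The exact ratio depends on field-name lengths, but the XML payload is always several times larger for the same data.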
Arun
Similar Messages
-
Hi,
We are having trouble importing one ledger, 'GERMANY EUR GGAAP'. It works for Dec 2014, but while trying to import data for 2015 it gives an error.
The import error shows: "RuntimeError: No periods were identified for loading data into table 'AIF_EBS_GL_BALANCES_STG'."
I tried all the knowledge docs from Oracle Support but no luck. Please help us resolve this issue, as it is occurring in our Production system.
I also checked all period settings under Data Management > Setup > Integration Setup > Global Mapping and Source Mapping, and they all look correct.
Also, it is only happening to one ledger; all the other ledgers are working fine without any issues.
Thanks
Hi,
there are some support documents related to this issue.
I would suggest you have a look at them.
Regards -
Steps for loading data into the infocube in BI7, with dso in between
Dear All,
I am loading data into the InfoCube in BI7, with a DSO in between. My data flow looks like:
Top to bottom:
InfoCube (Customized)
Transformation
DSO (Customized)
Transformation
DataSource (Customized).
The mapping and everything else looks fine, and data is also seen in the cube on the FULL load.
But due to some minor error (I guess), I am unable to see the DELTA data in the DSO, although it is loaded into the DataSource through process chains.
Kindly advise me where I went wrong.
Or, step-by-step instructions for loading data into the InfoCube in BI7, with a DSO in between, would be really helpful.
Regards,
Hi,
my first impulse would be to check if the DSO is set to "direct update". In this case there is no Delta possible, because the Change log is not maintained.
My second thought would be to check the DTP moving data between the DSO and the target cube. If this is set to Full, you will not get a delta. Only one DTP can exist between a given source and target, so if you created it in Full mode you can't simply switch it to Delta; delete it and create a new DTP in Delta mode.
Hope this helps.
Kind regards,
Jürgen -
Procedure for loading data into gl_interface
hi all
I am new to Oracle Applications. I just want to know how to load data into the GL_INTERFACE table using PL/SQL (that is, first loading data into a temporary table and then using a PL/SQL procedure to load it into GL_INTERFACE). Can anybody help me out by providing the PL/SQL structure for it?
Thanks in advance
Assuming you have data in a data file and the file is comma delimited. I assume the table has two columns; you can add more columns as well.
CREATE OR REPLACE PROCEDURE p10
IS
lv_filehandle UTL_FILE.FILE_TYPE;
lv_newline VARCHAR2(2000); -- input line read from the file
lv_file_dir VARCHAR2(100);
lv_file_name VARCHAR2(100);
lv_col1 VARCHAR2(10);
lv_col2 VARCHAR2(50);
BEGIN
DELETE FROM temp_table;
lv_file_dir := 'p:\temp'; -- must be a directory the database server can read
lv_file_name := 'test.dat';
lv_filehandle := UTL_FILE.FOPEN (lv_file_dir, lv_file_name, 'r', 32766);
LOOP
BEGIN
-- read a line; the file is opened in 'r' mode, so we read rather than write
UTL_FILE.GET_LINE (lv_filehandle, lv_newline);
EXCEPTION
WHEN NO_DATA_FOUND THEN EXIT; -- end of file reached
END;
-- first comma-delimited field
lv_col1 := SUBSTR(lv_newline, 1, INSTR(lv_newline, ',', 1, 1) - 1);
-- second comma-delimited field (up to the next comma, or end of line)
lv_col2 := SUBSTR(lv_newline, INSTR(lv_newline, ',', 1, 1) + 1,
NVL(NULLIF(INSTR(lv_newline, ',', 1, 2), 0), LENGTH(lv_newline) + 1)
- INSTR(lv_newline, ',', 1, 1) - 1);
INSERT INTO temp_table VALUES (lv_col1, lv_col2);
END LOOP;
COMMIT;
INSERT INTO your_production_table
SELECT * FROM temp_table;
COMMIT;
UTL_FILE.FCLOSE (lv_filehandle);
EXCEPTION
WHEN UTL_FILE.INVALID_PATH THEN
RAISE_APPLICATION_ERROR(-20100,'Invalid Path');
WHEN UTL_FILE.INVALID_MODE THEN
RAISE_APPLICATION_ERROR(-20101,'Invalid Mode');
WHEN UTL_FILE.INVALID_OPERATION THEN
RAISE_APPLICATION_ERROR(-20102,'Invalid Operation');
WHEN UTL_FILE.INVALID_FILEHANDLE THEN
RAISE_APPLICATION_ERROR(-20103,'Invalid Filehandle');
WHEN UTL_FILE.READ_ERROR THEN
RAISE_APPLICATION_ERROR(-20105,'Read Error');
WHEN UTL_FILE.INTERNAL_ERROR THEN
RAISE_APPLICATION_ERROR(-20106,'Internal Error');
WHEN OTHERS THEN
IF UTL_FILE.IS_OPEN(lv_filehandle) THEN
UTL_FILE.FCLOSE(lv_filehandle);
END IF;
RAISE;
END p10;
/
Code is not tested.
Hope this helps
Ghulam -
Interface for loading data into customer products
I am trying to find if there is any interface for loading Customer products in Install Base apart from entering them manually. I understand that this exists in Oracle Apps version 11.5.7 but I need this for version 11.5.4.
Hi,
In 11.5.4, you have to write the loader yourself, using the Oracle standard API.
I have done this myself, and it works fine.
Hugues -
Should I wrap the queries in a procedure with @StartDate and @EndDate parameters and create a test partition to pass a small date range?
Or can I use the Table Properties screen to put the command there, and will it run without being affected by (or affecting) the partitions? It would be nice if the SQL statement on this screen were independent of the partitions, so I could just leave it with the
command text = EXEC TransactionDetail '2014-01-01', '2014-05-31'. Especially since, if you have many tables that load based on a date range, I would not want to jump in and change that query on all of them.
Is there a way to have a parameter in the project so all tables would get the same @StartDate and @EndDate, and I could change it in one place?
And I am not stuck on these questions/options; if there is a better way to mass-change the queries to run a subset of data for the designer, I'd like to hear it.
Thank You,
Phil
Hi Phil,
According to your description, you are looking for the best way to control the rows that are loaded into a table, right?
When importing data into a table of a tabular model, we can apply filters to control the rows that are loaded. After you have imported the data, you cannot delete individual rows; however, you can apply custom filters to control the way the
rows are displayed. Rows that do not meet the filtering criteria are hidden. For the detailed information, please refer to the link below: Filter Data in a Table (SSAS Tabular)
If I have misunderstood anything, please point it out.
Regards,
Charlie Liao
TechNet Community Support -
Best practice for returning data from EJBs
I have an EJB that runs a query on a backend database, and I want to return the data to my Java GUI. Ideally I would like to pass a ResultSet back, but I don't think they are serializable, so this isn't an option.
What's considered the best way to pass database results back from EJBs to a front-end Java application?
Thanks for any ideas you guys have
If you want type-safety, define a VO (value object) that maps to your result set, extract the data from the result set into the VO, and return an array of the data. Yes, it's extra work on the back end, but that's what the back end is for. Just make sure your client.jar has the VO in it, as well as the Home and Remote interfaces.
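The value-object idea is language-independent. As a minimal sketch (in Python rather than Java for brevity; table and field names are invented): copy each row into an immutable object and return a list of those, so the data outlives the connection:

```python
# Sketch of the value-object (VO) pattern: copy result-set rows into
# small immutable objects that can be serialized and shipped to a client,
# instead of returning the live cursor/ResultSet. Names are invented.
import sqlite3
from dataclasses import dataclass

@dataclass(frozen=True)
class CustomerVO:
    customer_id: int
    name: str

def fetch_customers(conn):
    # Extract everything while the connection is open, then return plain data.
    cur = conn.execute("SELECT customer_id, name FROM customers ORDER BY customer_id")
    return [CustomerVO(customer_id=r[0], name=r[1]) for r in cur.fetchall()]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (customer_id INTEGER, name TEXT)")
conn.executemany("INSERT INTO customers VALUES (?, ?)", [(1, "Acme"), (2, "Globex")])
vos = fetch_customers(conn)
conn.close()        # the VOs remain valid after the connection is gone
print(vos[0].name)  # -> Acme
```

In the Java/EJB case the same shape applies: a serializable class with the result columns as fields, populated bean-side and returned to the client.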
-
Best Practices for Loading Data in 0SD_C03
Hi Gurus, I want to know the best practice for getting information about sales, billing, and delivery. I know it has these DataSources:
Sales Order Item Data - 2LIS_11_VAITM
Billing Document Data: Items - 2LIS_13_VDITM
Billing Document Header Data - 2LIS_13_VDHDR
Sales-Shipping: Allocation Item Data - 2LIS_11_V_ITM
Delivery Header Data - 2LIS_12_VCHDR
Delivery Item Data - 2LIS_12_VCITM
Sales Order Header Data - 2LIS_11_VAHDR
Do I have to load all these DataSources into InfoCube 0SD_C03, or do I have to create copies of 0SD_C03 to match each DataSource?
Hi.
If you just want statistics on the amounts or quantities of the sales process, I suggest you create 3 cubes and then use a MultiProvider to integrate the 3 cubes you created, for example:
2LIS_11_VAITM -> ZSD_C01
2LIS_12_VCITM -> ZSD_C02
2LIS_13_VDITM -> ZSD_C03
In this scenario, you can enhance 2LIS_12_VCITM and 2LIS_13_VDITM with sales order data, such as requested delivery date, and then create a MultiProvider such as ZSD_M01.
Best Regards
Martin Xie -
Best Way to Load Data in Hash Partition
Hi,
I have hash partitioning on a large table of 5 TB. We have to load more than 500 GB daily into that table from ETL.
What is the best way to load data into that big table with hash partitioning?
Regards
Sahil Soni
Do you have any specific requirements to match records to lookup tables, or is it just a straight load, i.e. an insert?
Do you have any specific performance requirements?
The easiest and fastest way to load data into Oracle is via an external file and parallel query/parallel insert. Remember that parallel DML is not enabled by default; you have to enable it via an ALTER SESSION command. You can leverage multiple CPU cores and direct-path operations to perform the load.
Assuming your database is on a Linux/Unix server, you could NFS-mount the file if it is on a remote system, but then you will most likely be limited by network transfer speed. -
Hi
I have written a SQL*Loader script for loading data into two tables.
The script is working, but the output is not coming out properly.
I want to load into the first table only the lines whose first character is 'R'.
Into the second table I have to load the data related to that first line. In case
the first line's data is not proper (i.e., it is discarded), then the related second-table data
should not be loaded.
But I am getting both rows even though the first table's record is discarded. Please find
the output below.
Any other solution is also welcome... OK, external tables... UTL_FILE.
LOAD DATA
infile "inputFileForRmaReceiptAcknowledgement.dat"
BADFILE 'inputFileForRmaReceiptAcknowledgement.bad'
DISCARDFILE 'inputFileForRmaReceiptAcknowledgement.dsc'
APPEND
INTO TABLE XXGW_RMA_HEDR_RCPTACK_TAB
WHEN (01)='R'
( LINE_TYPE POSITION(1:1) "substr(:Line_Type, 1)",
RMA_ORDER_NO POSITION(2:16) CHAR,
ACKNOWLEDGEMENT_NO POSITION(17:31) CHAR,
ACKNOWLEDGEMENT_DATE POSITION(32:45)
"to_date(substr(:acknowledgement_date,3),'YYMMDDHH24MISS')",
DETAIL_LINE_COUNT POSITION(46:51) INTEGER EXTERNAL,
FLAG CHAR)
INTO TABLE XXGW_RMA_RCPT_ACKLDGMNT_TAB
WHEN (01) = 'D'
( LINE_TYPE POSITION(1:1) "substr(:Line_Type, 1)",
RMA_ORDER_NO POSITION(2:16) CHAR,
RMA_ORDER_LINE POSITION(17:19) INTEGER EXTERNAL,
SERIAL_NUMBER POSITION(20:49) CHAR,
SKU POSITION(50:63) CHAR,
QUANTITY POSITION(64:69) INTEGER EXTERNAL,
WAREHOUSE_CODE POSITION(70:71) CHAR,
WAYBILL_NUMBER POSITION(72:121) CHAR,
COURIER POSITION(122:146) CHAR,
RETURN_DEALER_FLAG POSITION(147:156) CHAR)
inputFileForRmaReceiptAcknowledgement.dat
R12345678901 2345456789123200 21111228241113000002 --- discarded record
D12345678901 00159123687402 45678925803 00000102name
D12345678901 00159143687402 45678925603 00000102name
T000004
Regards
Ar
Please post details of OS and database versions.
Create a foreign key constraint between the detail table and the master table. If a row fails to load into the master table, then the related detail-table rows will fail the foreign key constraint and will not load.
http://docs.oracle.com/cd/E11882_01/server.112/e25789/datainte.htm#CNCPT1649
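The effect of that master/detail foreign key can be illustrated outside SQL*Loader too. A small sketch with SQLite (table and column names invented): detail rows whose header row never loaded are rejected by the constraint:

```python
# Sketch of the master/detail foreign-key idea: if the header ('R') row
# fails to load, its detail ('D') rows violate the FK and are rejected.
# Table and column names are invented for the illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")   # SQLite enforces FKs only when enabled
conn.execute("CREATE TABLE hdr (rma_order_no TEXT PRIMARY KEY)")
conn.execute("""CREATE TABLE dtl (rma_order_no TEXT REFERENCES hdr(rma_order_no),
                                  line_no INTEGER)""")

# Header 'A1' loads fine; header 'B2' is assumed discarded (never inserted).
conn.execute("INSERT INTO hdr VALUES ('A1')")

loaded, rejected = 0, 0
for order_no, line_no in [("A1", 1), ("B2", 1), ("B2", 2)]:
    try:
        conn.execute("INSERT INTO dtl VALUES (?, ?)", (order_no, line_no))
        loaded += 1
    except sqlite3.IntegrityError:
        rejected += 1   # detail row has no master -> constraint rejects it

print(loaded, rejected)   # -> 1 2
```

In the Oracle case, the rejected detail rows would land in the bad file for the detail table instead of silently loading.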
HTH
Srini -
Loading data into infocube in bi 7.0 from flat file.
Hello All,
I need the complete procedure for loading data into the InfoCube from a flat file in BI 7.0, i.e. using transformations, DTPs, etc.
Please help me with some good documents.
Hi Pratighya,
Step by step procedure for loading data from flat file.
1. Create the infoobjects you might need in BI
2. Create your target infoprovider
3. Create a source system
4. Create a datasource
5. Create and configure an Infopackage that will bring your records to the PSA
6. Create a transformation from the datasource to the Infoprovider
7. Create a Data Transfer Process (DTP) from the datasource to the Infoprovider
8. Schedule the infopackage
9. Once successful, run the DTP
10. This will fill your target.
Hope this helps
Regards
Karthik -
Best practices for loading apo planning book data to cube for reporting
Hi,
I would like to know whether there are any best practices for loading APO planning book data into a cube for reporting.
I have seen 2 types of Design:
1) The planning book extractor data is loaded first into a cube within the APO BW system, and then transferred to the actual BW system. Reports are run from the cube in the actual BW system.
2) The planning book extractor data is loaded directly into a cube within the actual BW system.
We do these data loads during evening hours once in a day.
Rgds
Gk
Hi GK,
What I have normally seen is:
1) Data would be extracted from APO Planning Area to APO Cube (FOR BACKUP purpose). Weekly or monthly, depending on how much data change you expect, or how critical it is for business. Backups are mostly monthly for DP.
2) Data extracted from APO planning area directly to DSO of staging layer in BW, and then to BW cubes, for reporting.
For DP monthly, SNP daily
You can also use the option 1 that you mentioned below. In this case, the APO cube is the backup cube, while the BW cube is the one that you could use for reporting, and this BW cube gets data from APO cube.
Benefit in this case is that we have to extract data from Planning Area only once. So, planning area is available for jobs/users for more time. However, backup and reporting extraction are getting mixed in this case, so issues in the flow could impact both the backup and the reporting. We have used this scenario recently, and yet to see the full impact.
Thanks - Pawan -
Unable to load data into Planning cube
Hi,
I am trying to load data into a planning cube using a DTP.
But the system throws a message saying that real-time data loading cannot be performed on the planning cube.
What would be the possible reason for the same?
Thanks & Regards,
Surjit P
Hi Surjit,
To load data into the cube using a DTP, the cube must be set to loading mode.
The real-time cube's behaviour is set to planning mode only when data is input to the cube via the planning layouts or through the file upload program.
You can change the behaviour of the cube in RSA1 -> right-click on the cube -> Change Real-time Load Behaviour, and select the first option (real-time cube can be loaded with data; planning not allowed).
Best Rgds
Shyam
-
Need to load data from source .CSV files to oracle target database.
Hi,
This is the my scenario
I have .CSV files in an FTP folder and need to load the data into target tables.
For that I need to create a package and load the data on a daily basis.
But sometimes the .csv file name will vary from day to day.
Can anyone suggest an approach?
Thanks in advance.
Zakeer
Dear Roy,
Thanks for your response
Now I am able to extract the .zip file with OdiUnZip(file) and load the data into the target, but this is happening in a static way,
and my scenario is that sometimes I will get .zip files with different names containing different .csv files.
I need to dynamically find the new .zip file, extract it, and load the data into the target.
Please advise me.
Thanks in advance
Zakeer -
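One way to handle the varying names, sketched in Python (ODI could call such a script from an OS-command step; directory and file names here are invented): pick the most recently modified .zip, extract it, and list the CSVs inside:

```python
# Sketch: find the newest .zip in an incoming directory (names vary daily),
# extract it, and list the .csv files it contained. Paths are invented.
import glob
import os
import tempfile
import zipfile

def extract_newest_zip(incoming_dir, work_dir):
    zips = glob.glob(os.path.join(incoming_dir, "*.zip"))
    if not zips:
        return None, []
    newest = max(zips, key=os.path.getmtime)   # most recently modified archive
    with zipfile.ZipFile(newest) as zf:
        zf.extractall(work_dir)
        csvs = [n for n in zf.namelist() if n.lower().endswith(".csv")]
    return newest, csvs

# Self-contained demo with a temporary directory standing in for the FTP folder
with tempfile.TemporaryDirectory() as d:
    with zipfile.ZipFile(os.path.join(d, "load_20240101.zip"), "w") as zf:
        zf.writestr("orders.csv", "id,qty\n1,10\n")
    newest, csvs = extract_newest_zip(d, d)
    print(os.path.basename(newest), csvs)   # -> load_20240101.zip ['orders.csv']
```

The extracted CSV names can then be fed to the interface (e.g. via an ODI variable), so the package no longer depends on a fixed file name.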
ODI Error when Loading data from a .csv file to Planning
Hello,
I am trying to load data from a CSV file to Planning using ODI 10.1.3.6, and I am facing this particular error. I am using the Sunopsis memory engine as the staging area.
7000 : null : java.sql.SQLException: Invalid COL ALIAS "DEFAULT C12_ALIAS__DEFAULT" for column "ALIAS:"
java.sql.SQLException: Invalid COL ALIAS "DEFAULT C12_ALIAS__DEFAULT" for column "ALIAS:"
at com.sunopsis.jdbc.driver.file.bb.b(bb.java)
at com.sunopsis.jdbc.driver.file.bb.a(bb.java)
at com.sunopsis.jdbc.driver.file.w.b(w.java)
at com.sunopsis.jdbc.driver.file.w.executeQuery(w.java)
at com.sunopsis.sql.SnpsQuery.executeQuery(SnpsQuery.java)
at com.sunopsis.dwg.dbobj.SnpSessTaskSql.execCollOrders(SnpSessTaskSql.java)
at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTaskTrt(SnpSessTaskSql.java)
at com.sunopsis.dwg.dbobj.SnpSessTaskSqlC.treatTaskTrt(SnpSessTaskSqlC.java)
at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTask(SnpSessTaskSql.java)
at com.sunopsis.dwg.dbobj.SnpSessStep.treatSessStep(SnpSessStep.java)
at com.sunopsis.dwg.dbobj.SnpSession.treatSession(SnpSession.java)
at com.sunopsis.dwg.cmd.DwgCommandSession.treatCommand(DwgCommandSession.java)
at com.sunopsis.dwg.cmd.DwgCommandBase.execute(DwgCommandBase.java)
at com.sunopsis.dwg.cmd.e.k(e.java)
at com.sunopsis.dwg.cmd.g.A(g.java)
at com.sunopsis.dwg.cmd.e.run(e.java)
at java.lang.Thread.run(Unknown Source)
Code from Operator:
select Account C1_ACCOUNT,
Parent C2_PARENT,
Alias: Default C12_ALIAS__DEFAULT,
Data Storage C3_DATA_STORAGE,
Two Pass Calculation C9_TWO_PASS_CALCULATION,
Account Type C6_ACCOUNT_TYPE,
Time Balance C14_TIME_BALANCE,
Data Type C5_DATA_TYPE,
Variance Reporting C10_VARIANCE_REPORTING,
Source Plan Type C13_SOURCE_PLAN_TYPE,
Plan Type (FinStmt) C7_PLAN_TYPE__FINSTMT_,
Aggregation (FinStmt) C8_AGGREGATION__FINSTMT_,
Plan Type (WFP) C15_PLAN_TYPE__WFP_,
Aggregation (WFP) C4_AGGREGATION__WFP_,
Formula C11_FORMULA
from TABLE
/*$$SNPS_START_KEYSNP$CRDWG_TABLESNP$CRTABLE_NAME=Account.csvSNP$CRLOAD_FILE=Y:/1 Metadata/Account//Account.csvSNP$CRFILE_FORMAT=DSNP$CRFILE_SEP_FIELD=2CSNP$CRFILE_SEP_LINE=0D0ASNP$CRFILE_FIRST_ROW=1SNP$CRFILE_ENC_FIELD=SNP$CRFILE_DEC_SEP=SNP$CRSNP$CRDWG_COLSNP$CRCOL_NAME=AccountSNP$CRTYPE_NAME=STRINGSNP$CRORDER=1SNP$CRLENGTH=150SNP$CRPRECISION=150SNP$CRSNP$CRDWG_COLSNP$CRCOL_NAME=ParentSNP$CRTYPE_NAME=STRINGSNP$CRORDER=2SNP$CRLENGTH=150SNP$CRPRECISION=150SNP$CRSNP$CRDWG_COLSNP$CRCOL_NAME=Alias: DefaultSNP$CRTYPE_NAME=STRINGSNP$CRORDER=3SNP$CRLENGTH=150SNP$CRPRECISION=150SNP$CRSNP$CRDWG_COLSNP$CRCOL_NAME=Data StorageSNP$CRTYPE_NAME=STRINGSNP$CRORDER=4SNP$CRLENGTH=150SNP$CRPRECISION=150SNP$CRSNP$CRDWG_COLSNP$CRCOL_NAME=Two Pass CalculationSNP$CRTYPE_NAME=STRINGSNP$CRORDER=5SNP$CRLENGTH=150SNP$CRPRECISION=150SNP$CRSNP$CRDWG_COLSNP$CRCOL_NAME=Account TypeSNP$CRTYPE_NAME=STRINGSNP$CRORDER=6SNP$CRLENGTH=150SNP$CRPRECISION=150SNP$CRSNP$CRDWG_COLSNP$CRCOL_NAME=Time BalanceSNP$CRTYPE_NAME=STRINGSNP$CRORDER=7SNP$CRLENGTH=150SNP$CRPRECISION=150SNP$CRSNP$CRDWG_COLSNP$CRCOL_NAME=Data TypeSNP$CRTYPE_NAME=STRINGSNP$CRORDER=8SNP$CRLENGTH=150SNP$CRPRECISION=150SNP$CRSNP$CRDWG_COLSNP$CRCOL_NAME=Variance ReportingSNP$CRTYPE_NAME=STRINGSNP$CRORDER=9SNP$CRLENGTH=150SNP$CRPRECISION=150SNP$CRSNP$CRDWG_COLSNP$CRCOL_NAME=Source Plan TypeSNP$CRTYPE_NAME=STRINGSNP$CRORDER=10SNP$CRLENGTH=150SNP$CRPRECISION=150SNP$CRSNP$CRDWG_COLSNP$CRCOL_NAME=Plan Type (FinStmt)SNP$CRTYPE_NAME=STRINGSNP$CRORDER=11SNP$CRLENGTH=150SNP$CRPRECISION=150SNP$CRSNP$CRDWG_COLSNP$CRCOL_NAME=Aggregation (FinStmt)SNP$CRTYPE_NAME=STRINGSNP$CRORDER=12SNP$CRLENGTH=150SNP$CRPRECISION=150SNP$CRSNP$CRDWG_COLSNP$CRCOL_NAME=Plan Type (WFP)SNP$CRTYPE_NAME=STRINGSNP$CRORDER=13SNP$CRLENGTH=150SNP$CRPRECISION=150SNP$CRSNP$CRDWG_COLSNP$CRCOL_NAME=Aggregation 
(WFP)SNP$CRTYPE_NAME=STRINGSNP$CRORDER=14SNP$CRLENGTH=150SNP$CRPRECISION=150SNP$CRSNP$CRDWG_COLSNP$CRCOL_NAME=FormulaSNP$CRTYPE_NAME=STRINGSNP$CRORDER=15SNP$CRLENGTH=150SNP$CRPRECISION=150SNP$CR$$SNPS_END_KEY*/
insert into "C$_0Account"
C1_ACCOUNT,
C2_PARENT,
C12_ALIAS__DEFAULT,
C3_DATA_STORAGE,
C9_TWO_PASS_CALCULATION,
C6_ACCOUNT_TYPE,
C14_TIME_BALANCE,
C5_DATA_TYPE,
C10_VARIANCE_REPORTING,
C13_SOURCE_PLAN_TYPE,
C7_PLAN_TYPE__FINSTMT_,
C8_AGGREGATION__FINSTMT_,
C15_PLAN_TYPE__WFP_,
C4_AGGREGATION__WFP_,
C11_FORMULA
values
:C1_ACCOUNT,
:C2_PARENT,
:C12_ALIAS__DEFAULT,
:C3_DATA_STORAGE,
:C9_TWO_PASS_CALCULATION,
:C6_ACCOUNT_TYPE,
:C14_TIME_BALANCE,
:C5_DATA_TYPE,
:C10_VARIANCE_REPORTING,
:C13_SOURCE_PLAN_TYPE,
:C7_PLAN_TYPE__FINSTMT_,
:C8_AGGREGATION__FINSTMT_,
:C15_PLAN_TYPE__WFP_,
:C4_AGGREGATION__WFP_,
:C11_FORMULA
Thanks in advance!
Right-clicking "Data" on the model tab, can you see the data?
In your code there's written:
P$CRLOAD_FILE=Y:/1 Metadata/Account//Account.csv
Is the double slash before the file name correct?