To load multiple records into multiple target tables
I have to read records from a single source flat file as sets and load them into different target tables.
I have to read 3 records as a set and load them into 3 different target tables, then read the next 3 records as a set and load them into the same 3 target tables, and so on
until all the records from the flat file are loaded.
The structure of the file is as follows:
Header Record
Line Record
Distribution Record
Header Record
Line Record
Distribution Record
Read Header Record and load into target table A
Read Line Record and load into target table B
Read Distribution Record and load into target table C
--- repeat the same steps till all records are read and loaded into target tables.
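For illustration only (this is not ODI code): the positional dispatch described above, reading three records as a set and routing each to its own target, can be sketched like this. The record contents and the three-way grouping are assumptions based on the file structure in the question.

```python
# Sketch: split a flat file whose records repeat in fixed groups of three
# (header, line, distribution) into three batches, one per target table.
def split_record_sets(records):
    """Return (headers, lines, distributions) from a repeating 3-record file."""
    if len(records) % 3:
        raise ValueError("file does not contain complete 3-record sets")
    headers, lines, dists = [], [], []
    targets = (headers, lines, dists)
    for i, rec in enumerate(records):
        targets[i % 3].append(rec)   # position within the set decides the target
    return headers, lines, dists

sample = ["H1", "L1", "D1", "H2", "L2", "D2"]   # made-up record contents
h, l, d = split_record_sets(sample)
```

Each returned batch would then be loaded into its own table (A, B, C); an incomplete trailing set is one natural error condition to trap.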
I would appreciate it if anyone could suggest the best approach for designing the interface/package and for handling error conditions.
Thanks,
Ram
Hi,
in this case you must create datastores under a 'Flat File' model.
Create, for example, 3 datastores on the same resource (file) but with a different column definition for each record type.
Generally, on the first column, you define the 'pattern' that identifies each specific record format (the record-type code).
Then create 3 interfaces, one per datastore, to load each target (a database table, for example).
Regards
Stephane
Similar Messages
-
Inserting multiple records into a database table using Web Dynpro ABAP
Hi all,
I have created a username input field, a button, and a table
with one column.
If I enter names in the input field, the values should be
displayed in that table.
Even though I have got that part working, I am not able to insert
the values into the database table (Z table):
only the first value was inserted; the second value was not.
So kindly send me the code to insert multiple records
into the database table.
by,
Ranjith

Hi Ranjith,
If you want to insert multiple records from the webdynpro view table to database table then try the following code.
DATA lo_nd_tablenode TYPE REF TO if_wd_context_node.
DATA lo_el_tablenode TYPE REF TO if_wd_context_element.
DATA ls_tablenode TYPE wd_this->element_tablenode.
DATA it_tablenode LIKE STANDARD TABLE OF ls_tablenode.
* navigate from <CONTEXT> to <TABLENODE> via lead selection
lo_nd_tablenode = wd_context->get_child_node( name = wd_this->wdctx_tablenode ).
* get element via lead selection
lo_el_tablenode = lo_nd_tablenode->get_element( ).
* get all declared attributes
lo_nd_tablenode->get_static_attributes_table(
  IMPORTING
    table = it_tablenode ).
MODIFY databasetablename FROM TABLE it_tablenode.
Here it_tablenode is the internal table which holds the values from the Web Dynpro view.
Regards,
Shamila.
-
Locking multiple records in a database table at a time
Hi Experts,
I have a requirement to lock multiple records in a database table for writing. I have created a lock object in SE11, but with that I have only 2 possibilities: lock the entire table, or lock a single record at a time.
My requirement is: I have a table with key field PROJECTID. I want more than one project to be locked, but not the complete table. I don't have any other field in the table to compare on.
Thanks in advance..
Regards,
Asrar

Hi,
Try with FOR UPDATE in the SELECT sentence.
SELECT FOR UPDATE *
INTO Internal_Table
FROM Table
WHERE Conditions.
UPDATE "Table_name" FROM TABLE Internal_Table.
COMMIT WORK.
This statement locks only the records that satisfy the WHERE conditions (not the entire table) and releases the locks after COMMIT WORK.
Hope this information helps you.
Regards,
José
-
I have a source table with 10 records and a target table with 15 records. My question: using the Table Comparison transform, how can I delete the unmatched records from the target table?
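Logically, "delete unmatched records from the target" is a key anti-join: any key present in the target (comparison) table but absent from the source is flagged for deletion. A sketch with made-up key values, this is what the Table Comparison option computes for you:

```python
# Sketch: keys that exist in the target but no longer in the source
# are the rows to delete (the "detect deleted rows" logic).
def rows_to_delete(source_keys, target_keys):
    return sorted(set(target_keys) - set(source_keys))

source = list(range(1, 11))    # 10 source records (hypothetical keys)
target = list(range(1, 16))    # 15 target records
deleted = rows_to_delete(source, target)
```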
Hi Kishore,
First, identify the deleted records by selecting the "Detect deleted rows from comparison table" option in the Table Comparison transform.
Then use a Map Operation with input row type "delete" and output row type "delete" to delete those records from the target table.
-
Splitting of a CSV File with Multiple Records into Multiple XML File
Dear All,
<b> I am doing a scenario of CSV to XML files. I am using BPM for this. My incoming CSV file has multiple records. I want to split these multiple records into multiple XML files with one record each.</b>
Can someone suggest how can I break this rather Split this into Multiple XML Files.
Is multimapping absolutely necessary for this? Can't we do this without multimapping? Can we have some workaround in the FCC parameters that we use in the Integration Directory?
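Outside PI, the split itself is straightforward; as a sketch of the logic only (the element names and fields here are assumptions, not the actual FCC structure):

```python
# Sketch: turn each CSV record into its own small XML document.
import csv
import io
import xml.etree.ElementTree as ET

def csv_to_xml_docs(csv_text, fieldnames):
    """One XML document string per CSV record."""
    docs = []
    for row in csv.DictReader(io.StringIO(csv_text), fieldnames=fieldnames):
        root = ET.Element("Record")            # hypothetical root element
        for name in fieldnames:
            ET.SubElement(root, name).text = row[name]
        docs.append(ET.tostring(root, encoding="unicode"))
    return docs

docs = csv_to_xml_docs("1,abc\n2,def\n", ["ID", "NAME"])
```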
Kindly reply ASAP. Thanks a lot to all in anticipation.
Pls Help.
Best Regards
Chakra and Somnath

Dear All,
I am trying to do the multimapping, and have 0..unbounded also. Somehow it is not working.
<b>
Smitha, please tell me one thing: does assigning 'Recordsets per Message' = 1 mean that it will write multiple XML files, as I want?</b>
Also, I am using 'Set to Read Only', so once the file is read it is renamed from A to RA. Will it then still write the other records?
I have to use a BPM because there are certain dependencies that are there for the entire Process Flow. I cannot do without a BPM.
Awaiting a reply. Thanks a lot in anticipation.
Best Regards
Chakra and Somnath
-
Loading multiple files into a target table parallely
Hi
We are building an interface to load from flat files into an Oracle table. The input can be more than one file, all of the same format. I have created a variable for the flat file datastore. I have created 2 packages.
In package 1
I have put an OdiFileWait to trigger whenever files come into the directory. The list of files is written to a metadata table. A procedure is triggered which picks a record from the metadata table and calls the scenario for package 2, passing the selected record as a parameter.
In package 2
The variable value is set to the parameter passed from package 1 procedure
The interface runs to load the flat file whose name equals the variable value into the target table.
Now if the directory has more than one flat file, the procedure in package 1 calls the package 2 scenario more than once. This causes a sequential load from each flat file into the target table. Is there a way in which multiple flat files can be loaded in parallel into the same target table? At this point, I am not sure if this is possible, as there is only one interface and one variable for the source file.

As per your requirement, your process seems to run continuously in a loop, the reason being, as you have mentioned: "I don't want to wait for all files to come in. As soon as a file comes in, the file has to be loaded."
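The parallel variant being asked about amounts to launching one load per file from a worker pool instead of a sequential loop; a sketch, where load_file() is a hypothetical stand-in for invoking the package-2 scenario with the file-name variable set:

```python
# Sketch: load several flat files in parallel with a worker pool.
from concurrent.futures import ThreadPoolExecutor

def load_file(path):
    # Placeholder: "run the package-2 scenario with FILE_NAME = path".
    return (path, "OK")

files = ["orders_1.dat", "orders_2.dat", "orders_3.dat"]   # hypothetical names
with ThreadPoolExecutor(max_workers=3) as pool:
    results = dict(pool.map(load_file, files))
```

In ODI itself the analogous move is to start the scenario asynchronously for each file rather than waiting for each load to finish.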
You can solve the issue of file capture using OdiFileWait to let you know when the files have arrived.
Tell me this: what do you plan to do when the file loading is complete? Will the files stay in the same folder, and when new files come, will they arrive with the same name or a different name? Because irrespective of same or different files, if the files are still present in the same folder after loading into the target, we might do repetitive loading of files. Please tell me how you plan to handle this.
Since you plan to use LKM File to SQL, what is the average number of source records in each sample file?
Just to add to the above question: what is the average number of parallel loads you are expecting, and what is the time interval between each parallel load?
-
How to load multiple (2 or more) target tables simultaneously?
Hi,
I have a requirement where I have to load 2 target tables simultaneously, because the same primary key value has to be loaded into both targets.
For eg:
Source table:
col1, col2, col3, col4, col5.... col20
Target 1:
T1C1=unique value from seq1
T1C2=col1
T1C3=col2
T1C9=col8
Target 2:
T2C1=same value as inserted above from seq1
T2C2=col9
T2C3=col10
T2C4=col11
T2C13=col20
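Whatever KM technique is used, the core requirement above is that one sequence value is drawn per source row and reused in both targets. A minimal sketch of just that idea, column names follow the example, and the sequence is simulated with itertools.count:

```python
# Sketch: draw seq1 once per source row, reuse it in both target rows.
from itertools import count

seq1 = count(start=1)   # stand-in for the database sequence

def split_row(row):
    key = next(seq1)    # one sequence value per source row
    target1 = {"T1C1": key, "T1C2": row["col1"], "T1C3": row["col2"]}
    target2 = {"T2C1": key, "T2C2": row["col9"], "T2C3": row["col10"]}
    return target1, target2

t1, t2 = split_row({"col1": "a", "col2": "b", "col9": "x", "col10": "y"})
```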
Any response is highly appreciated.
Thanks for your time in advance.
-- Chikk

Hi, please, call me Cezar! :D
OK, that is an overview guide; are you confident to go through KM customization? If not, let me know.
1) To achieve this, I figured out how to use the UD fields (UD1..UD5) from the interface.
2) Create 2 new options in the IKM, like TABLE_UD1 and TABLE_UD2.
3) Change the KM code (C$ and I$) to generate code for each column group (UD1 and UD2 update and insert code). The I$ must have all columns.
4) At interface level, drag and drop both target tables into the source area, right-click on each one and add it to the target as a temp table.
5) Drop the targets from the source area.
6) For each group of columns from Target 1 and Target 2, check the columns as UD1 and UD2 respectively; the sequence column must be checked as both UD1 and UD2 because it is common to both target tables.
7) Add the real sources and create all the mappings.
8) Now just go to the Flow tab and put the corresponding table name in each option. You should use an ODI API call to get the right schema when the context changes to production.
I have just described the general architecture, since the technical implementation needs KM customization knowledge.
Does that make sense and/or help you?
Feel free to contact me at my email (see profile).
-
How to load data into 3 different target tables using BODS?
Hello Friends,
I have 5 different source tables with the same field definitions. Now I want to load all the records into three/four different targets (flat file, SQL Server, XML, and Oracle). Could anyone please tell me how to do this task?
Thanks in Advance,
Bheem.

Hello Bheem,
You can create a separate dataflow for each target, as suggested by Bala; this is a good choice for your scenario.
If you put all targets in the same dataflow, you may experience problems if one of them is down (as you have different servers as targets).
BODS will send the data simultaneously to all targets, and when one fails, the load will be broken and you may end up with the tables out of sync anyway (if you plan to use a single dataflow to avoid this situation, it won't do it).
So the best option is to create separate dataflows and put them inside another one, so you can run a single dataflow that calls each one of the targets; if one fails, the others will complete accordingly.
Then if you put them inside error trapping, you may even make your dataflow retry the load before abending the job.
Think about cascading your dataflows and keeping the leaf level as simple as it can be, so it will be easier to schedule and debug your process.
Pay attention to datatypes and other conversions you might need when working with more than one source/target.
Regards,
-
Insert Multiple Records into Multiple ZTABLEs inside the BAPI
Hi,
I have a requirement to insert or update multiple records in a ZTABLE inside the BAPI. Is there any special approach inside the BAPI to insert or update ZTABLEs with multiple records? It looks like a simple INSERT statement is not working. Can you please suggest a suitable solution for this requirement?
Thanks and Regards,
Kannan

Hi,
INSERT ZTABLE FROM TABLE ITAB.
The structure of ITAB should match the structure of ZTABLE.
Same for MODIFY and update.
MODIFY ZTABLE FROM TABLE ITAB..
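The ABAP INSERT ... FROM TABLE ITAB above is a single bulk insert of the whole internal table. In generic SQL terms this corresponds to a batched insert; a sketch using sqlite3 and a made-up ZTABLE layout:

```python
# Sketch: bulk-insert an "internal table" (list of tuples) in one batch,
# the SQL analogue of ABAP's INSERT ztable FROM TABLE itab.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ztable (id INTEGER PRIMARY KEY, name TEXT)")

itab = [(1, "first"), (2, "second"), (3, "third")]
conn.executemany("INSERT INTO ztable VALUES (?, ?)", itab)  # one batch, not 3 calls
rows = conn.execute("SELECT COUNT(*) FROM ztable").fetchone()[0]
```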
Thanks,
Naren
-
Cannot load CLOB data > 32k into target table
SQL> DESC testmon1 ;
Name Null? Type
FILENAME VARCHAR2(200)
SCANSTARTTIME VARCHAR2(50)
SCANENDTIME VARCHAR2(50)
JOBID VARCHAR2(50)
SCANNAME VARCHAR2(200)
SCANTYPE VARCHAR2(200)
FAULTLINEID VARCHAR2(50)
RISK VARCHAR2(5)
VULNNAME VARCHAR2(2000)
CVE VARCHAR2(200)
DESCRIPTION CLOB
OBSERVATION CLOB
RECOMMENDATION CLOB
SQL> DESC test_target;
Name Null? Type
LOCALID NOT NULL NUMBER
DESCRIPTION NOT NULL CLOB
SCANTYPE NOT NULL VARCHAR2(12)
RISK NOT NULL VARCHAR2(6)
TIMESTAMP NOT NULL DATE
VULNERABILITY_NAME NOT NULL VARCHAR2(2000)
CVE_ID VARCHAR2(200)
BUGTRAQ_ID VARCHAR2(200)
ORIGINAL VARCHAR2(50)
RECOMMEND CLOB
VERSION VARCHAR2(15)
FAMILY VARCHAR2(15)
XREF VARCHAR2(15)
create or replace PROCEDURE proc1 AS
CURSOR C1 IS
SELECT FAULTLINEID,VULNNAME,scanstarttime, risk,
dbms_lob.substr(DESCRIPTION,dbms_lob.getlength(DESCRIPTION),1) "DESCR",dbms_lob.substr(OBSERVATION,dbms_lob.getlength(OBSERVATION),1) "OBS",
dbms_lob.substr(RECOMMENDATION) "REC",CVE
FROM testmon1;
c_rec C1%ROWTYPE;
descobs clob;
FSCAN_VULN_TRANS_REC VULN_TRANSFORM_STG%ROWTYPE;
TIMESTAMP varchar2(50);
riskval varchar2(10);
pCTX PLOG.LOG_CTX := PLOG.init (pSECTION => 'foundscanVuln Procedure',
pLEVEL => PLOG.LDEBUG,
pLOG4J => TRUE,
pLOGTABLE => TRUE,
pOUT_TRANS => TRUE,
pALERT => TRUE,
pTRACE => TRUE,
pDBMS_OUTPUT => TRUE);
amount number;
buffer varchar2(32000);
BEGIN
---INITIALIZE THE LOCATOR FOR CLOB DATA TYPE
select observation into descobs from testmon1 where rownum=1;
OPEN C1;
loop
fetch C1 INTO c_rec;
exit when C1%NOTFOUND;
--LOAD THE DESCRIPTION FIELD FROM CURSOR AND WRITE IT TO THE CLOB LOCATOR descobs.
dbms_lob.Write(descobs,dbms_lob(c_rec.DESCR),1,c_rec.DESCR);
------APPEND THE OBSERVATION FIELD FROM CURSOR TO THE CLOB LOCATOR descobs.
dbms_lob.Writeappend(descobs,dbms_lob(c_rec.DESCR),c_rec.OBS);
-- dbms_output.put_line ('the timestamp is :'||c_rec.scanstarttime);
--dbms_lob.write(descobs,amount,1,buffer);
descobs:=c_rec.OBS;
--dbms_lob.read(descobs,amount,1,buffer);
--dbms_lob.append(c_rec.DESCR,c_rec.OBS);
--descobs:=c_rec.OBS;
--dbms_output.put_line ('the ADDED DESCROBS is :'||dbms_lob.substr(c_rec.DESCR,dbms_lob.getlength(c_rec.DESCR),1));
dbms_output.put_line ('the ADDED DESCRIPTION AND OBSERVATION is :'||descobs);
--dbms_output.put_line ('the DESCROBS buffer is :'||buffer);
SELECT DESCRIPTION INTO FSCAN_VULN_TRANS_REC.DESCRIPTION
FROM TESTMON1 WHERE ROWNUM=1;
---------LOAD THE DESCRIPTION+ observation value into the target table description
DBMS_LOB.WRITE(FSCAN_VULN_TRANS_REC.DESCRIPTION, dbms_lob.getlength(descobs),1,descobs);
TIMESTAMP:=substr(c_rec.scanstarttime,1,10)||' '|| substr(c_rec.scanstarttime,12,8);
IF c_rec.risk <3
THEN riskval:='Low';
ELSIF c_rec.risk <6
THEN riskval:='Medium';
ELSIF c_rec.risk <10
THEN riskval:='High';
END IF;
FSCAN_VULN_TRANS_REC.TIMESTAMP:=TO_DATE(TIMESTAMP, 'YYYY/MM/DD HH24:MI:SS');
FSCAN_VULN_TRANS_REC.risk:= riskval;
--dbms_lob.append(c_rec.DESCR,c_rec.OBS);
FSCAN_VULN_TRANS_REC.DESCRIPTION:=c_rec.DESCR;
FSCAN_VULN_TRANS_REC.RECOMMEND:=c_rec.REC;
FSCAN_VULN_TRANS_REC.LocalID:=to_number(c_rec.FAULTLINEID);
FSCAN_VULN_TRANS_REC.SCANTYPE:='FOUNDSCAN';
FSCAN_VULN_TRANS_REC.CVE_ID:=c_rec.CVE;
FSCAN_VULN_TRANS_REC.VULNERABILITY_NAME:=c_rec.VULNNAME;
-- dbms_output.put_line ('the plog timestamp is :'||timestamp);
-- dbms_output.put_line ('the timestamp is :'||riskval);
--dbms_output.put_line ('the recommend is :'||FSCAN_VULN_TRANS_REC.RECOMMEND);
--dbms_output.put_line ('the app desc is :'||FSCAN_VULN_TRANS_REC.DESCRIPTION);
insert into test_target values FSCAN_VULN_TRANS_REC;
End loop;
close C1;
commit;
EXCEPTION
WHEN OTHERS THEN
-- dbms_output.put_line ('Data not found');
-----------dbms_output.put_line (sqlcode|| ':'||sqlerrm);
end proc1;
Using the dbms_lob package is not helping: either the DB stops responding, or the OBSERVATION field (which has a max length > 300000) cannot be loaded into a CLOB variable.
Please help or give me sample code that helps.

select
BANKING_INSTITUTION.BANK_REF_CODE C1_BANK_ID,
BANKING_INSTITUTION.NAME_BANK C2_BANK_NAME,
BANKING_INSTITUTION.BANK_NUMBER C3_BANK_NUMBER,
BANKING_INSTITUTION.ISO_CODE C4_GBA_CODE,
BANKING_INSTITUTION.STATUS C5_STATUS,
BANKING_INSTITUTION.SOURCE C6_SOURCE,
BANKING_INSTITUTION.START_DATE_BANK C7_START_DATE,
BANKING_INSTITUTION.ADDRESS_BANK C8_BANK_ADDRESS1
from REF_DATA_DB.BANKING_INSTITUTION BANKING_INSTITUTION
where (1=1)
insert /*+ append */ into XXSVB.C$_0XXSVB_BANKS_STAGING
(
C1_BANK_ID,
C2_BANK_NAME,
C3_BANK_NUMBER,
C4_GBA_CODE,
C5_STATUS,
C6_SOURCE,
C7_START_DATE,
C8_BANK_ADDRESS1
)
values
(
:C1_BANK_ID,
:C2_BANK_NAME,
:C3_BANK_NUMBER,
:C4_GBA_CODE,
:C5_STATUS,
:C6_SOURCE,
:C7_START_DATE,
:C8_BANK_ADDRESS1
)
-
Load XML records in a normal table
Good afternoon all,
I have a very simple question:
I get an XML file and want to store that data in my Oracle database in a normal table.
I have seen so many answers everywhere, varying from LOBs to using XDB, etc.
What I don't understand is why it is so difficult.
When I want to load a CSV file into a table, I make a very small control file (CTL) and run SQL*Loader from the command prompt / command line.
Control file:
load data
infile 'import.csv'
into table emp
fields terminated by "," optionally enclosed by '"'
( empno, empname, sal, deptno )
command:
sqlldr user/password@SID control=loader_Control_File.ctl
Next I connect to the database and run SQL query:
select * from emp;
and I see my data as usual; I can build Crystal Reports on it, etc.
I really don't understand why this can't be done with an XML file.
Oracle knows the fields in the table EMP.
The XML file has around every field the tags <EMPNO> and </EMPNO>.
Can't be easier than that, I would say.
I can understand that Oracle likes some kind of description of the XML table, so a reference to an XSD file would be understandable.
But all the examples describe LOB things (whatever that is).
Who can help me to get XML data into a normal table?
Thanks
Frank

Hi Frank,
"What i don't understand is why it is so difficult."
Why do you think that?
An SQL*Loader control file might appear very small and simple to you, but you don't actually see what happens inside the loader itself; I guess there are a lot of complex operations (parsing, datatype mapping, memory allocation, etc.).
XML, contrary to the CSV format, is a structured, well-standardized language and can handle far more complex documents than row-organized CSV files.
I think it naturally requires a bit of extra work (for a developer) to describe what we want to get out of it.
However, using an XML schema is not mandatory to load XML data into a relational table.
It's useful if you're interested in high-performance loading and scalability, as it allows Oracle to fully understand the XML data model it has to deal with and make the correct mapping to SQL types in the database.
Furthermore, now with 11g BINARY XMLType, performance has been improved with or without schema.
Here's a simple example, loading XML file "import.xml" into table MY_EMP.
Do you find it difficult? ;)
SQL> create or replace directory test_dir as 'D:\ORACLE\test';
Directory created
SQL> create table my_emp as
2 select empno, ename, sal, deptno
3 from scott.emp
4 where 1 = 0
5 ;
Table created
SQL> insert into my_emp (empno, ename, sal, deptno)
2 select *
3 from xmltable('/ROWSET/ROW'
4 passing xmltype(bfilename('TEST_DIR', 'import.xml'), nls_charset_id('CHAR_CS'))
5 columns empno number(4) path 'EMPNO',
6 ename varchar2(10) path 'ENAME',
7 sal number(7,2) path 'SAL',
8 deptno number(2) path 'DEPTNO'
9 )
10 ;
14 rows inserted
SQL> select * from my_emp;
EMPNO ENAME SAL DEPTNO
7369 SMITH 800.00 20
7499 ALLEN 1600.00 30
7521 WARD 1250.00 30
7566 JONES 2975.00 20
7654 MARTIN 1250.00 30
7698 BLAKE 2850.00 30
7782 CLARK 2450.00 10
7788 SCOTT 3000.00 20
7839 KING 5000.00 10
7844 TURNER 1500.00 30
7876 ADAMS 1100.00 20
7900 JAMES 950.00 30
7902 FORD 3000.00 20
7934 MILLER 1300.00 10
14 rows selected
import.xml :
<?xml version="1.0"?>
<ROWSET>
<ROW>
<EMPNO>7369</EMPNO>
<ENAME>SMITH</ENAME>
<SAL>800</SAL>
<DEPTNO>20</DEPTNO>
</ROW>
<ROW>
<EMPNO>7499</EMPNO>
<ENAME>ALLEN</ENAME>
<SAL>1600</SAL>
<DEPTNO>30</DEPTNO>
</ROW>
<!-- more rows here -->
<ROW>
<EMPNO>7934</EMPNO>
<ENAME>MILLER</ENAME>
<SAL>1300</SAL>
<DEPTNO>10</DEPTNO>
</ROW>
</ROWSET>
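For comparison, the same ROWSET/ROW extraction that the XMLTABLE query performs can be sketched with the Python standard library, using the first two rows of import.xml:

```python
# Sketch: map /ROWSET/ROW elements to relational-style tuples.
import xml.etree.ElementTree as ET

xml_doc = """<ROWSET>
  <ROW><EMPNO>7369</EMPNO><ENAME>SMITH</ENAME><SAL>800</SAL><DEPTNO>20</DEPTNO></ROW>
  <ROW><EMPNO>7499</EMPNO><ENAME>ALLEN</ENAME><SAL>1600</SAL><DEPTNO>30</DEPTNO></ROW>
</ROWSET>"""

rows = [
    (int(r.findtext("EMPNO")), r.findtext("ENAME"),
     float(r.findtext("SAL")), int(r.findtext("DEPTNO")))
    for r in ET.fromstring(xml_doc).iter("ROW")
]
```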
"Who can help me to get XML data in a normal table?"
If you have a specific example, feel free to post it, including the following information:
- structure of the target table
- sample XML file
- database version (select * from v$version)
Hope that helps.
Edited by: odie_63 on 9 March 2011 21:22
-
Error when editing multiple records in a details table
Hi,
I have a requirement to provide a Forms-like details table which allows the user to add new rows or edit rows in the same page and same table.
My page is SetObjectivesPG under /oracle/apps/per/wpm/objectives/webui/
I embedded my custom region in the standard page, which allows the user to add or edit in the same table, like in Forms.
I am able to create multiple records, and I save once I have created all the records I want. But when I edit, it does not allow me to make multiple edits without committing each change.
I'm getting the following error:
Unable to perform transaction on the record.
Cause: The record contains stale data. The record has been modified by another user.
Action: Cancel the transaction and re-query the record to get the new data.
Please give suggestions to solve this issue.
Thanks a lot in advance,
SAN

Hi,
Is there any specific reason that you are using 'Row.STATUS_NEW' ?
Comment out that line and instead write the line below after setting all the values (i.e. after 'scorecardObjVO.getCurrentRow().setAttribute("StartDate", today);'):
newObjective.setNewRowState(Row.STATUS_INITIALIZED);
Check this and let me know if you are still facing the issue.
--Sushant
-
Inserting multiple records in multiple tables - High Performance
I have an input form in a table and the user can input any number of rows. Every row has around 25 columns, and when a single row is saved, each of these columns is saved in its own table, i.e. there are 25 inserts happening (for 25 columns) for a single row. Now if a user pastes even 100 rows, this takes a lot of time (because of 100*25 inserts happening). Now I want to support 10000 inserts, and obviously it should be very fast. Can someone tell me how to maximize the performance? I am using MySQL 4.1 as my DB.
javanewbie80 wrote:
"I have a input form in the table and the user can input any no. of rows. Every row has around 25 columns and when a single row is saved, each of this column is saved in its own table."
Why is each value in its own table? Are they not related in some way?
This sounds like a bad RDBMS design.
i.e. there are 25 inserts happening (for 25 columns) for a single row. Now if a user paste even 100 rows this takes a lot of time (because of 100*25 inserts happening). Now I want to support 10000 inserts and obviously it should be very fast. Can someone tell me how to maximize the performance. I am using mysql 4.1 as my DB.
Rethink your design.
If you must persist in this folly, I'd recommend batching your INSERTs to try to minimize network traffic.
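To illustrate the batching advice (not a MySQL-specific recipe; sqlite3 stands in here, and the table layout is made up): send one multi-row INSERT per batch instead of one statement per value.

```python
# Sketch: 100 rows inserted with a single multi-row INSERT statement
# rather than 100 (or 100*25) individual round trips.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE form_rows (c1 TEXT, c2 TEXT, c3 TEXT)")

rows = [("a%d" % i, "b%d" % i, "c%d" % i) for i in range(100)]
placeholders = ",".join(["(?, ?, ?)"] * len(rows))   # one VALUES group per row
flat = [v for row in rows for v in row]
conn.execute("INSERT INTO form_rows VALUES " + placeholders, flat)
n = conn.execute("SELECT COUNT(*) FROM form_rows").fetchone()[0]
```

With a real driver, executemany or the connector's batch mode achieves the same effect; mind the server's maximum statement/parameter limits when sizing batches.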
But do consult with somebody who actually knows SQL and relational design. This smells bad.
%
-
File to rfc (file has multiple records)
Hi Guys,
I have a filr to rfc scenario..
I created a Z table, with ITAB in the import tab and field names, and then a BAPI using SE37...
The BAPI source code is just one line
Modify Ztable from ITAB
I first tried with an input file having one record, and the scenario ran successfully to update the Z table (checked in SE16).
Now the input file has more than one record, but the ZBAPI has just a 1..1 ITAB tab.
How can I do multiple inserts using the same RFC/BAPI?
Regards,
Teja

Hi,
As Prateek suggested 2 options, I think option 2 is best for you; otherwise option 1 can lead to performance issues.
Make sure that you have declared your internal table ITAB as a standard table of ztable, e.g.
DATA: ITAB type standard table of ztable,
WA_ITAB type ztable.
If you do it like this and import the BAPI into XI, then you can see the occurrence of the target has been changed to 0..unbounded.
Finally, when you have multiple records in your internal table, loop over it and update the table, e.g.
Loop at itab into wa_itab.
Modify Ztable from WA_ITAB.
Clear:wa_itab.
Endloop.
Regards,
Sarvesh
-
Calculations in query taking a long time to load target table
Hi,
I am pulling approx 45 million records using the query below in an SSIS package, which pulls from a DB on one server and loads the results into a target table on another server. In the SELECT query I have a calculation for 6 columns. The target table is truncated and loaded every day. Also, most of the columns in the source that I used for the calculations contain 0, and it took approximately 1 hour 45 min to load the target table. Is there any way to reduce the load time? Also, can I do the calculations after all 47M records are loaded, and then calculate for the non-zero records alone?
SELECT T1.Col1,
T1.Col2,
T1.Col3,
T2.Col1,
T2.Col2,
T3.Col1,
convert( numeric(8,5), (convert( numeric,T3.COl2) / 1000000)) AS Colu2,
convert( numeric(8,5), (convert( numeric,T3.COl3) / 1000000)) AS Colu3,
convert( numeric(8,5), (convert( numeric,T3.COl4) / 1000000)) AS Colu4,
convert( numeric(8,5),(convert( numeric, T3.COl5) / 1000000)) AS Colu5,
convert( numeric(8,5), (convert( numeric,T3.COl6) / 1000000)) AS Colu6,
convert( numeric(8,5), (convert( numeric,T3.COl7) / 1000000)) AS Colu7
FROM Tab1 T1
JOIN Tab2 T2
ON (T1.Col1 = T2.Col1)
JOIN Tab3 T3
ON (Tab3.Col9 =Tab3.Col9)
Anand

So 45 or 47? Nevertheless ...
This is hardly a heavy calculation; the savings will be dismal. Also, anything numeric is very easy on the CPU in general.
But
convert( numeric(8,5), (convert( numeric,T3.COl7) / 1000000))
is not optimal.
CONVERT( NUMERIC(8,5),300 / 1000000.00000 )
Is
Now it boils down to how to make the load faster: do it in parallel. Find out how many sockets the machine has and split the table into that many chunks. Also profile to find out where it spends most of the time. I have sometimes seen that the network was the bottleneck, so you may want to play with buffers and packet sizes; for example, if OLEDB is used, increase the packet size two times and see if it works faster, then x2 more, and so forth.
To help you further, you need to tell us more, e.g. what the source and destination are and how you configured the load.
Please understand that there is no silver bullet anywhere, nor a blanket solution, and you need to tell me your desired load time. E.g. if you tell me it needs to load in 5 min, I will give your ask a pass.
Arthur