How to batch load multiple data files into several tables
Hi,
A customer has data files with the following structure and a large volume of data (around 10 million records). I think the right approach is to convert them into something SQL*Loader can recognize and then load them into Oracle 8 or 9. The question is how to convert them.
Or would inserting the records one by one be simpler?
1: Component of Data
The data file consists of a nameplate (header) followed by a number of records.
1.1 Structure of nameplate
ID datatype length(byte) comments
1 char 4
2 char 19
3 char 2
4 char 6 number of records in this file
5 char 8
1.2 Structure of each record
ID datatype length(byte)
1 char 21
2 char 18
3 char 30
4 char 1
5 char 8
6 char 2
7 char 6
8 char 70
9 char 30
10 char 8
11 char 8
12 char 1
13 char 1
14 char 1
15 char 30
16 char 20
17 char 6
18 char 70
19 char 5
24 bin(blob) 1024
25 bin(blob) defined in ID19
2: Data file and tables in the database
Fields 1-13 of each record are inserted into table1,
fields 14-18 into table2, and fields 19, 24, and 25 into table3.
Is there a way to convert them into input that SQL*Loader accepts and then load the whole set into Oracle 8 or 9?
I've checked the Oracle Utilities docs but did not find a way to load this many data files in a single batch operation.
As I see it, there are two possible approaches:
1. Load each file individually into the different tables with a custom program. But speed may be a problem because of the constant opening and closing of database connections.
2. Convert them into one (or three) flat files and then use SQL*Loader.
Neither is particularly easy, so I wonder if there's a better way to handle this.
Many thanks!
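A possible middle road, sketched here as an answer: SQL*Loader can read fixed-position fields directly, so if the 336-byte character portion of each record (field IDs 1-19) is first split out into its own flat file, no delimiter conversion is needed at all; only the binary BLOB tail (IDs 24-25, whose length depends on ID 19) needs separate handling. All table, column, and file names below are invented for illustration:

```
-- Hypothetical control file for the character portion going into table1.
-- Only fields 1-3 are spelled out; fields 4-13 continue the same
-- POSITION pattern up to byte 204 per the layout above.
LOAD DATA
INFILE 'customer1.dat'
APPEND
INTO TABLE table1
(
  field01 POSITION(1:21)  CHAR,
  field02 POSITION(22:39) CHAR,
  field03 POSITION(40:69) CHAR
)
```

A similar control file per target table (or one control file with multiple INTO TABLE clauses) would cover the character columns of table2 and table3.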
My coworker tried that, but it dragged down the portal.
How about updating the WWDOC_DOCUMENT$ table and then using WWSBR_API.add_item_post_upload to update the folder information, etc.?
If possible, is there any sample code?
Similar Messages
-
SQL Loader: Multiple data files to Multiple Tables
How do you create one control file that references multiple data files, where each file loads data into a different table?
Eg.
DataFile1 --> Table 1
DataFile2 --> Table 2
Contents and structure of the two data files are different. The data files are comma separated.
The example below is for 1 data file to 1 table. I need to modify this or create a wrapper that would call multiple control files.
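As far as I know, a single conventional control file cannot map different INFILEs to different tables, so the usual wrapper is simply one control file per file/table pair, invoked once each; credentials and file names below are placeholders:

```
sqlldr userid=scott/tiger control=load_table1.ctl log=load_table1.log
sqlldr userid=scott/tiger control=load_table2.ctl log=load_table2.log
```

Each .ctl file would look like the single-table example below, pointing at its own INFILE and INTO TABLE.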
OPTIONS (SKIP=1)
LOAD DATA
INFILE 'DataFile1'
BADFILE 'DataFile1_bad.txt'
DISCARDFILE 'DataFile1_dsc.txt'
REPLACE
INTO TABLE Table1
FIELDS TERMINATED BY ","
TRAILING NULLCOLS
(
Col1,
Col2,
Col3,
create_dttm sysdate,
MySeq "myseq.nextval"
)
Any other suggestions are welcome. I was thinking there might be a way to indicate which file goes with which table (structure) in one control file.
Example (this does not work, but I wonder whether something similar is allowed):
OPTIONS (SKIP=1)
LOAD DATA
INFILE 'DataFile1'
BADFILE 'DataFile1_bad.txt'
DISCARDFILE 'DataFile1_dsc.txt'
REPLACE
INTO TABLE Table1
FIELDS TERMINATED BY ","
TRAILING NULLCOLS
(
Col1,
Col2,
Col3,
create_dttm sysdate,
MySeq "myseq.nextval"
)
INFILE 'DataFile2'
BADFILE 'DataFile2_bad.txt'
DISCARDFILE 'DataFile2_dsc.txt'
REPLACE
INTO TABLE "T2"
FIELDS TERMINATED BY ","
TRAILING NULLCOLS
(
T2Col1,
T2Col2,
T2Col3
)
-
How to load Unicode data files with fixed record lengths?
Hi!
To load Unicode data files with fixed record lengths (in terms of characters, not bytes!) using SQL*Loader manually, I found two ways:
Alternative 1: one record per row
SQL*Loader control file example (without POSITION, since POSITION always refers to bytes!):
LOAD DATA
CHARACTERSET UTF8
LENGTH SEMANTICS CHAR
INFILE 'unicode.dat'
INTO TABLE STG_UNICODE
TRUNCATE
(
A CHAR(2) ,
B CHAR(6) ,
C CHAR(2) ,
D CHAR(1) ,
E CHAR(4)
)
Datafile:
001111112234444
01NormalDExZWEI
02ÄÜÖßêÊûÛxöööö
03ÄÜÖßêÊûÛxöööö
04üüüüüüÖÄxµôÔµ
Alternative 2: variable-length records
LOAD DATA
CHARACTERSET UTF8
LENGTH SEMANTICS CHAR
INFILE 'unicode_var.dat' "VAR 4"
INTO TABLE STG_UNICODE
TRUNCATE
(
A CHAR(2) ,
B CHAR(6) ,
C CHAR(2) ,
D CHAR(1) ,
E CHAR(4)
)
Datafile:
001501NormalDExZWEI002702ÄÜÖßêÊûÛxöööö002604üuüüüüÖÄxµôÔµ
Problems
Implementing these two alternatives in OWB, I encounter the following problems:
* How to specify LENGTH SEMANTICS CHAR?
* How to suppress the POSITION definition?
* How to define a flat file with variable length and how to specify the number of bytes containing the length definition?
Or is there another way that can be implemented using OWB?
Any help is appreciated!
Thanks,
Carsten.

Hi Carsten,
If you need to support the LENGTH SEMANTICS CHAR clause in an external table then one option is to use the unbound external table and capture the access parameters manually. To create an unbound external table you can skip the selection of a base file in the external table wizard. Then when the external table is edited you will get an Access Parameters tab where you can define the parameters. In 11gR2 the File to Oracle external table can also add this clause via an option.
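A sketch of what the manually captured access parameters might look like for the staging table above; the directory object and file name are assumptions, and STRING SIZES ARE IN CHARACTERS is the external-table counterpart of the LENGTH SEMANTICS CHAR clause:

```sql
CREATE TABLE stg_unicode_ext (
  a VARCHAR2(2 CHAR),
  b VARCHAR2(6 CHAR),
  c VARCHAR2(2 CHAR),
  d VARCHAR2(1 CHAR),
  e VARCHAR2(4 CHAR)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY data_dir          -- assumed directory object
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    CHARACTERSET UTF8
    STRING SIZES ARE IN CHARACTERS
    FIELDS (
      a CHAR(2),
      b CHAR(6),
      c CHAR(2),
      d CHAR(1),
      e CHAR(4)
    )
  )
  LOCATION ('unicode.dat')
);
```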
Cheers
David -
I have hierarchy data on the R/3 side; how will I load that data from R/3 to BW?
Hi all,
I have my hierarchy data on the R/3 side. How will I load that data from R/3 to the BW side?
Regards,
Kiran Kumar

Hi Kiran,
Here is the procedure:
1. In the Data Warehousing Workbench under Modeling, select the InfoSource tree.
2. Select the InfoSource (with direct update) for the InfoObject, to which you want to load the hierarchy.
3. Choose Additional Functions -> Create Transfer Rules from the context menu of the hierarchy table object for the InfoObject. The Assign Source System dialog box appears.
4. Select the source system from which the hierarchy is to be loaded. The InfoSource maintenance screen appears.
○ If the DataSource only supports the transfer method IDoc, then only the transfer structure is displayed (tab page DataSource/Transfer Structure).
○ If the DataSource also supports transfer method PSA, you can maintain the transfer rules (tab page Transfer Rules).
If it is possible and useful, we recommend that you use the transfer method PSA and set the indicator Expand Leaf Values and Node InfoObjects. You can then also load hierarchies with characteristics whose node name has a length >32.
5. Save your entries and go back. The InfoSource tree for the Data Warehousing Workbench is displayed.
6. Choose Create InfoPackage from the context menu (see Maintaining InfoPackages). The Create InfoPackage dialog box appears.
7. Enter the description for the InfoPackage. Select the DataSource (data element Hierarchies) that you require and confirm your entries.
8. On the Tab Page: Hierarchy Selection, select the hierarchy that you want to load into your BI system.
Specify if the hierarchy should be automatically activated after loading or be marked for activation.
Select an update method (Full Update, Insert Subtree, Update Subtree).
If you want to load a hierarchy from an external system with BAPI functionality, make BAPI-specific restrictions, if necessary.
9. If you want to load a hierarchy from a flat file, maintain the tab page: external data.
10. Maintain the tab page: processing.
11. Maintain the tab page: updating.
12. To schedule the InfoPackage, you have the following options:
○ (Manually) in the scheduler, see Scheduling InfoPackages
○ (Automatically) using a process chain (see Loading Hierarchies Using a Process Chain)
When you upload hierarchies, the system carries out a consistency check, making sure that the hierarchy structure is correct. Error messages are logged in the Monitor. You can get technical details about the error and how to correct it in the long text for the respective message.
For more info visit this help pages on SAP Help:
http://help.sap.com/saphelp_nw04s/helpdata/en/80/1a6729e07211d2acb80000e829fbfe/frameset.htm
http://help.sap.com/saphelp_nw04s/helpdata/en/3d/320e3d89195c59e10000000a114084/frameset.htm
http://help.sap.com/saphelp_nw04/helpdata/en/80/1a6729e07211d2acb80000e829fbfe/frameset.htm
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/4dae0795-0501-0010-cc96-fe3a9e8959dc
Cheers,
Habeeb -
How to load an XML file into a table
Hi,
I've been working with Oracle for many years, but this is the first time I've been asked to load an XML file into a table.
As an example, I found the following on the web, but it doesn't work.
Can someone tell me why? I hoped this example could help me.
the file acct.xml is this:
<?xml version="1.0"?>
<ACCOUNT_HEADER_ACK>
<HEADER>
<STATUS_CODE>100</STATUS_CODE>
<STATUS_REMARKS>check</STATUS_REMARKS>
</HEADER>
<DETAILS>
<DETAIL>
<SEGMENT_NUMBER>2</SEGMENT_NUMBER>
<REMARKS>rp polytechnic</REMARKS>
</DETAIL>
<DETAIL>
<SEGMENT_NUMBER>3</SEGMENT_NUMBER>
<REMARKS>rp polytechnic administration</REMARKS>
</DETAIL>
<DETAIL>
<SEGMENT_NUMBER>4</SEGMENT_NUMBER>
<REMARKS>rp polytechnic finance</REMARKS>
</DETAIL>
<DETAIL>
<SEGMENT_NUMBER>5</SEGMENT_NUMBER>
<REMARKS>rp polytechnic logistics</REMARKS>
</DETAIL>
</DETAILS>
<HEADER>
<STATUS_CODE>500</STATUS_CODE>
<STATUS_REMARKS>process exception</STATUS_REMARKS>
</HEADER>
<DETAILS>
<DETAIL>
<SEGMENT_NUMBER>20</SEGMENT_NUMBER>
<REMARKS> base polytechnic</REMARKS>
</DETAIL>
<DETAIL>
<SEGMENT_NUMBER>30</SEGMENT_NUMBER>
</DETAIL>
<DETAIL>
<SEGMENT_NUMBER>40</SEGMENT_NUMBER>
<REMARKS> base polytechnic finance</REMARKS>
</DETAIL>
<DETAIL>
<SEGMENT_NUMBER>50</SEGMENT_NUMBER>
<REMARKS> base polytechnic logistics</REMARKS>
</DETAIL>
</DETAILS>
</ACCOUNT_HEADER_ACK>
For the two tags HEADER and DETAILS I have the table:
create table xxrp_acct_details(
status_code number,
status_remarks varchar2(100),
segment_number number,
remarks varchar2(100)
);
Before that, I created a directory:
create directory test_dir as 'c:\esterno'; -- where I have my acct.xml
Now, can you give me a script for loading the data using XMLTABLE? I've tried this, but it doesn't work:
DECLARE
acct_doc xmltype := xmltype( bfilename('TEST_DIR','acct.xml'), nls_charset_id('AL32UTF8') );
BEGIN
insert into xxrp_acct_details (status_code, status_remarks, segment_number, remarks)
select x1.status_code,
x1.status_remarks,
x2.segment_number,
x2.remarks
from xmltable(
'/ACCOUNT_HEADER_ACK/HEADER'
passing acct_doc
columns header_no for ordinality,
status_code number path 'STATUS_CODE',
status_remarks varchar2(100) path 'STATUS_REMARKS'
) x1,
xmltable(
'$d/ACCOUNT_HEADER_ACK/DETAILS[$hn]/DETAIL'
passing acct_doc as "d",
x1.header_no as "hn"
columns segment_number number path 'SEGMENT_NUMBER',
remarks varchar2(100) path 'REMARKS'
) x2;
END;
This should allow me to get something like this:
select * from xxrp_acct_details;
status_code  status_remarks  segment_number  remarks
100 check 2 rp polytechnic
100 check 3 rp polytechnic administration
100 check 4 rp polytechnic finance
100 check 5 rp polytechnic logistics
500 process exception 20 base polytechnic
500 process exception 30
500 process exception 40 base polytechnic finance
500 process exception 50 base polytechnic logistics
but I get:
Error report:
ORA-06550: line 19, column 11:
PL/SQL: ORA-00932: inconsistent datatypes: expected - got NUMBER
ORA-06550: line 4, column 2:
PL/SQL: SQL Statement ignored
06550. 00000 - "line %s, column %s:\n%s"
*Cause: Usually a PL/SQL compilation error.
and if I try to change the script without using the column HEADER_NO to keep track of the header rank inside the document:
DECLARE
acct_doc xmltype := xmltype( bfilename('TEST_DIR','acct.xml'), nls_charset_id('AL32UTF8') );
BEGIN
insert into xxrp_acct_details (status_code, status_remarks, segment_number, remarks)
select x1.status_code,
x1.status_remarks,
x2.segment_number,
x2.remarks
from xmltable(
'/ACCOUNT_HEADER_ACK/HEADER'
passing acct_doc
columns status_code number path 'STATUS_CODE',
status_remarks varchar2(100) path 'STATUS_REMARKS'
) x1,
xmltable(
'/ACCOUNT_HEADER_ACK/DETAILS'
passing acct_doc
columns segment_number number path 'SEGMENT_NUMBER',
remarks varchar2(100) path 'REMARKS'
) x2;
END;
I get this message:
Error report:
ORA-19114: error during parsing the XQuery expression:
ORA-06550: line 1, column 13:
PLS-00201: identifier 'SYS.DBMS_XQUERYINT' must be declared
ORA-06550: line 1, column 7:
PL/SQL: Statement ignored
ORA-06512: at line 4
19114. 00000 - "error during parsing the XQuery expression: %s"
*Cause: An error occurred during the parsing of the XQuery expression.
*Action: Check the detailed error message for the possible causes.
My oracle version is 10gR2 Express Edition
I need a script for loading XML files into a table as soon as possible. Please give me a simple example that is easy to understand and works on 10gR2 Express Edition.
Thanks in advance!

The reason your first SQL statement
select x1.status_code,
x1.status_remarks,
x2.segment_number,
x2.remarks
from xmltable(
'/ACCOUNT_HEADER_ACK/HEADER'
passing acct_doc
columns header_no for ordinality,
status_code number path 'STATUS_CODE',
status_remarks varchar2(100) path 'STATUS_REMARKS'
) x1,
xmltable(
'$d/ACCOUNT_HEADER_ACK/DETAILS[$hn]/DETAIL'
passing acct_doc as "d",
x1.header_no as "hn"
columns segment_number number path 'SEGMENT_NUMBER',
remarks varchar2(100) path 'REMARKS'
) x2
returns the error you noticed
PL/SQL: ORA-00932: inconsistent datatypes: expected - got NUMBER
is because Oracle is expecting XML to be passed in. At the moment I forget if it requires a certain format or not, but it is simply expecting the value to be wrapped in simple XML.
Your query actually runs as is on 11.1 as Oracle changed how that functionality worked when 11.1 was released. Your query runs slowly, but it does run.
As you are dealing with groups, is there any way the input XML can be modified to be like
<ACCOUNT_HEADER_ACK>
<ACCOUNT_GROUP>
<HEADER>....</HEADER>
<DETAILS>....</DETAILS>
</ACCOUNT_GROUP>
<ACCOUNT_GROUP>
<HEADER>....</HEADER>
<DETAILS>....</DETAILS>
</ACCOUNT_GROUP>
</ACCOUNT_HEADER_ACK>
so that it is easier to associate a HEADER/DETAILS combination? If so, it would make parsing the XML much easier.
Assuming the answer is no, here is one hack to accomplish your goal
select x1.status_code,
x1.status_remarks,
x3.segment_number,
x3.remarks
from xmltable(
'/ACCOUNT_HEADER_ACK/HEADER'
passing acct_doc
columns header_no for ordinality,
status_code number path 'STATUS_CODE',
status_remarks varchar2(100) path 'STATUS_REMARKS'
) x1,
xmltable(
'$d/ACCOUNT_HEADER_ACK/DETAILS'
passing acct_doc as "d"
columns detail_no for ordinality,
detail_xml xmltype path 'DETAIL'
) x2,
xmltable(
'DETAIL'
passing x2.detail_xml
columns segment_number number path 'SEGMENT_NUMBER',
remarks varchar2(100) path 'REMARKS') x3
WHERE x1.header_no = x2.detail_no;
This follows the approach you started with. Table x1 creates a row for each HEADER node and table x2 creates a row for each DETAILS node. It assumes there is always a one and only one association between the two. I use table x3, which is joined to x2, to parse the many DETAIL nodes. The WHERE clause then joins each header row to the corresponding details row and produces the eight rows you are seeking.
There is another approach that I know of, and that would be using XQuery within the XMLTable. It should require using only one XMLTable but I would have to spend some time coming up with that solution and I can't recall whether restrictions exist in 10gR2 Express Edition compared to what can run in 10.2 Enterprise Edition for XQuery. -
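To sketch the single-XMLTable XQuery idea mentioned above (untested, and 10gR2 Express Edition may not support all of it), something along these lines pairs each HEADER with the DETAILS element at the same position:

```sql
SELECT x.status_code, x.status_remarks, x.segment_number, x.remarks
FROM xmltable(
       'for $h at $i in /ACCOUNT_HEADER_ACK/HEADER,
            $d in /ACCOUNT_HEADER_ACK/DETAILS[$i]/DETAIL
        return element r { $h/STATUS_CODE, $h/STATUS_REMARKS,
                           $d/SEGMENT_NUMBER, $d/REMARKS }'
       passing acct_doc
       columns status_code    number        path 'STATUS_CODE',
               status_remarks varchar2(100) path 'STATUS_REMARKS',
               segment_number number        path 'SEGMENT_NUMBER',
               remarks        varchar2(100) path 'REMARKS'
     ) x;
```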
"how to load a text file to oracle table"
hi to all
Can anybody help me load a text file into an Oracle table? This is the first time I'm doing this; please give me the steps.
Regards
MKhaleel

Usage: SQLLDR keyword=value [,keyword=value,...]
Valid Keywords:
userid -- ORACLE username/password
control -- Control file name
log -- Log file name
bad -- Bad file name
data -- Data file name
discard -- Discard file name
discardmax -- Number of discards to allow (Default all)
skip -- Number of logical records to skip (Default 0)
load -- Number of logical records to load (Default all)
errors -- Number of errors to allow (Default 50)
rows -- Number of rows in conventional path bind array or between direct path data saves (Default: Conventional path 64, Direct path all)
bindsize -- Size of conventional path bind array in bytes (Default 256000)
silent -- Suppress messages during run (header, feedback, errors, discards, partitions)
direct -- use direct path (Default FALSE)
parfile -- parameter file: name of file that contains parameter specifications
parallel -- do parallel load (Default FALSE)
file -- File to allocate extents from
skip_unusable_indexes -- disallow/allow unusable indexes or index partitions (Default FALSE)
skip_index_maintenance -- do not maintain indexes, mark affected indexes as unusable (Default FALSE)
commit_discontinued -- commit loaded rows when load is discontinued (Default FALSE)
readsize -- Size of Read buffer (Default 1048576)
external_table -- use external table for load; NOT_USED, GENERATE_ONLY, EXECUTE
(Default NOT_USED)
columnarrayrows -- Number of rows for direct path column array (Default 5000)
streamsize -- Size of direct path stream buffer in bytes (Default 256000)
multithreading -- use multithreading in direct path
resumable -- enable or disable resumable for current session (Default FALSE)
resumable_name -- text string to help identify resumable statement
resumable_timeout -- wait time (in seconds) for RESUMABLE (Default 7200)
PLEASE NOTE: Command-line parameters may be specified either by position or by keywords. An example of the former case is 'sqlldr scott/tiger foo'; an example of the latter is 'sqlldr control=foo userid=scott/tiger'. One may specify parameters by position before but not after parameters specified by keywords. For example, 'sqlldr scott/tiger control=foo logfile=log' is allowed, but 'sqlldr scott/tiger control=foo log' is not, even though the position of the parameter 'log' is correct.
SQLLDR USERID=GROWSTAR/[email protected] CONTROL=D:\PFS2004.CTL LOG=D:\PFS2004.LOG BAD=D:\PFS2004.BAD DATA=D:\PFS2004.CSV
SQLLDR USERID=GROWSTAR/[email protected] CONTROL=D:\CLAB2004.CTL LOG=D:\CLAB2004.LOG BAD=D:\CLAB2004.BAD DATA=D:\CLAB2004.CSV
SQLLDR USERID=GROWSTAR/[email protected] CONTROL=D:\GROW\DEACTIVATESTAFF\DEACTIVATESTAFF.CTL LOG=D:\GROW\DEACTIVATESTAFF\DEACTIVATESTAFF.LOG BAD=D:\GROW\DEACTIVATESTAFF\DEACTIVATESTAFF.BAD DATA=D:\GROW\DEACTIVATESTAFF\DEACTIVATESTAFF.CSV -
SQL*Loader Sequential Data File Record Processing?
If I use the conventional path will SQL*Loader process a data file sequentially from top to bottom? I have a file comprised of header and detail records with no value found in the detail records that can be used to relate to the header records. The only option is to derive a header value via a sequence (nextval) and then populate the detail records with the same value pulled from the same sequence (currval). But for this to work SQL*Loader must process the file in the exact same sequence that the data has been written to the data file. I've read through the 11g Oracle® Database Utilities SQL*Loader sections looking for proof that this is what will happen but haven't found this information and I don't want to assume that SQL*Loader will always process the data file records sequentially.
Thank you.

Oracle Support responded with the following statement:
"Yes, SQL*LOADER process data file from top to bottom.
This was touched in the note below:
SQL*Loader - How to Load a Single Logical Record from Physical Records which Include Linefeeds (Doc ID 160093.1)"
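Given that top-to-bottom guarantee, the header/detail technique described in the question could be sketched like this; the record-type flag, positions, sequence, and table names are all assumptions, and sequence expressions require the conventional path:

```
LOAD DATA
INFILE 'mixed.dat'
APPEND
INTO TABLE header_t
  WHEN (1:1) = 'H'
  (rec_type   POSITION(1:1)  CHAR,
   hdr_data   POSITION(2:40) CHAR,
   header_id  EXPRESSION "hdr_seq.nextval")
INTO TABLE detail_t
  WHEN (1:1) = 'D'
  (rec_type   POSITION(1:1)  CHAR,
   dtl_data   POSITION(2:40) CHAR,
   header_id  EXPRESSION "hdr_seq.currval")
```

Because rows are processed in file order, each detail row picks up the currval set by the header row above it.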
Jason -
How can I load my data faster? Is there a SQL solution instead of PL/SQL?
11.2.0.2
Solaris 10 sparc
I need to backfill invoices for a customer. The raw data has 3.1 million records. I have used PL/SQL to load these invoices into our system (dev); however, the issue is how long the load takes to run - approximately 4 hours. (The raw data has been loaded into a staging table.)
My research keeps coming back to one concept: sql is faster than pl/sql. Where I'm stuck is the need to programmatically load the data. The invoice table has a sequence on it (primary key = invoice_id)...the invoice_header and invoice_address tables use the invoice_id as a foreign key. So my script takes advantage of knowing the primary key and uses that on the subsequent inserts to the subordinate invoice_header and invoice_address tables, respectively.
My script is below. What I'm asking is if there are other ideas on the quickest way to load this data...what am I not considering? I have to load the data in dev, qa, then production so the sequences and such change between the environments. I've dummied down the code to protect the customer; syntax and correctness of the code posted here (on the forum) is moot...it's only posted to give the framework for what I currently have.
Any advice would be greatly appreciated; how can I load the data faster knowing that I need to know sequence values for inserts into other tables?
DECLARE
v_inv_id invoice.invoice_id%TYPE;
v_inv_addr_id invoice_address.invoice_address_id%TYPE;
errString invoice_errors.sqlerrmsg%TYPE;
v_guid VARCHAR2 (128);
v_str VARCHAR2 (256);
v_err_loc NUMBER;
v_count NUMBER := 0;
l_start_time NUMBER;
TYPE rec IS RECORD
(
BILLING_TYPE VARCHAR2 (256),
CURRENCY VARCHAR2 (256),
BILLING_DOCUMENT VARCHAR2 (256),
DROP_SHIP_IND VARCHAR2 (256),
TO_PO_NUMBER VARCHAR2 (256),
TO_PURCHASE_ORDER VARCHAR2 (256),
DUE_DATE DATE,
BILL_DATE DATE,
TAX_AMT VARCHAR2 (256),
PAYER_CUSTOMER VARCHAR2 (256),
TO_ACCT_NO VARCHAR2 (256),
BILL_TO_ACCT_NO VARCHAR2 (256),
NET_AMOUNT VARCHAR2 (256),
NET_AMOUNT_CURRENCY VARCHAR2 (256),
ORDER_DT DATE,
TO_CUSTOMER VARCHAR2 (256),
TO_NAME VARCHAR2 (256),
FRANCHISES VARCHAR2 (4000),
UPDT_DT DATE
);
TYPE tab IS TABLE OF rec
INDEX BY BINARY_INTEGER;
pltab tab;
CURSOR c
IS
SELECT billing_type,
currency,
billing_document,
drop_ship_ind,
to_po_number,
to_purchase_order,
due_date,
bill_date,
tax_amt,
payer_customer,
to_acct_no,
bill_to_acct_no,
net_amount,
net_amount_currency,
order_dt,
to_customer,
to_name,
franchises,
updt_dt
FROM BACKFILL_INVOICES;
BEGIN
l_start_time := DBMS_UTILITY.get_time;
OPEN c;
LOOP
FETCH c
BULK COLLECT INTO pltab
LIMIT 1000;
v_err_loc := 1;
FOR i IN 1 .. pltab.COUNT
LOOP
BEGIN
v_inv_id := SEQ_INVOICE_ID.NEXTVAL;
v_guid := 'import' || TO_CHAR (CURRENT_TIMESTAMP, 'hhmissff');
v_str := str_parser (pltab (i).FRANCHISES); --function to string parse - this could be done in advance, yes.
v_err_loc := 2;
v_count := v_count + 1;
INSERT INTO invoice nologging
VALUES (v_inv_id,
pltab (i).BILL_DATE,
v_guid,
'111111',
'NONE',
TO_TIMESTAMP (pltab (i).BILL_DATE),
TO_TIMESTAMP (pltab (i).UPDT_DT),
'READ',
'PAPER',
pltab (i).payer_customer,
v_str,
'111111');
v_err_loc := 3;
INSERT INTO invoice_header nologging
VALUES (v_inv_id,
TRIM (LEADING 0 FROM pltab (i).billing_document), --invoice_num
NULL,
pltab (i).BILL_DATE, --invoice_date
pltab (i).TO_PO_NUMBER,
NULL,
pltab (i).net_amount,
NULL,
pltab (i).tax_amt,
NULL,
NULL,
pltab (i).due_date,
NULL,
NULL,
NULL,
NULL,
NULL,
TO_TIMESTAMP (SYSDATE),
TO_TIMESTAMP (SYSDATE),
PLTAB (I).NET_AMOUNT_CURRENCY,
(SELECT i.bc_value
FROM invsvc_owner.billing_codes i
WHERE i.bc_name = PLTAB (I).BILLING_TYPE),
PLTAB (I).BILL_DATE);
v_err_loc := 4;
INSERT INTO invoice_address nologging
VALUES (invsvc_owner.SEQ_INVOICE_ADDRESS_ID.NEXTVAL,
v_inv_id,
'BLAH INITIAL',
pltab (i).BILL_DATE,
NULL,
pltab (i).to_acct_no,
NULL,
NULL,
NULL,
NULL,
NULL,
NULL,
NULL,
SYSTIMESTAMP,
NULL);
v_err_loc := 5;
INSERT INTO invoice_address nologging
VALUES ( SEQ_INVOICE_ADDRESS_ID.NEXTVAL,
v_inv_id,
'BLAH',
pltab (i).BILL_DATE,
NULL,
pltab (i).TO_ACCT_NO,
NULL,
NULL,
NULL,
NULL,
NULL,
NULL,
NULL,
SYSTIMESTAMP,
NULL);
v_err_loc := 6;
INSERT INTO invoice_address nologging
VALUES ( SEQ_INVOICE_ADDRESS_ID.NEXTVAL,
v_inv_id,
'BLAH2',
pltab (i).BILL_DATE,
NULL,
pltab (i).TO_CUSTOMER,
pltab (i).to_name,
NULL,
NULL,
NULL,
NULL,
NULL,
NULL,
SYSTIMESTAMP,
NULL);
v_err_loc := 7;
INSERT INTO invoice_address nologging
VALUES ( SEQ_INVOICE_ADDRESS_ID.NEXTVAL,
v_inv_id,
'BLAH3',
pltab (i).BILL_DATE,
NULL,
'SOME PROPRIETARY DATA',
NULL,
NULL,
NULL,
NULL,
NULL,
NULL,
NULL,
SYSTIMESTAMP,
NULL);
v_err_loc := 8;
INSERT
INTO invoice_event nologging (id,
eid,
root_eid,
invoice_number,
event_type,
event_email_address,
event_ts)
VALUES ( SEQ_INVOICE_EVENT_ID.NEXTVAL,
'111111',
'222222',
TRIM (LEADING 0 FROM pltab (i).billing_document),
'READ',
'some_user@some_company.com',
SYSTIMESTAMP);
v_err_loc := 9;
INSERT INTO backfill_invoice_mapping
VALUES (v_inv_id,
v_guid,
pltab (i).billing_document,
pltab (i).payer_customer,
pltab (i).net_amount);
IF v_count = 10000
THEN
COMMIT;
END IF;
EXCEPTION
WHEN OTHERS
THEN
errString := SQLERRM;
INSERT INTO backfill_invoice_errors
VALUES (
pltab (i).billing_document,
pltab (i).payer_customer,
errString || ' ' || v_err_loc);
COMMIT;
END;
END LOOP;
v_err_loc := 10;
INSERT INTO backfill_invoice_timing
VALUES (
ROUND ( (DBMS_UTILITY.get_time - l_start_time) / 100,
2)
|| ' seconds.',
(SELECT COUNT (1)
FROM backfill_invoice_mapping),
(SELECT COUNT (1)
FROM backfill_invoice_errors),
SYSDATE);
COMMIT;
EXIT WHEN c%NOTFOUND;
END LOOP;
COMMIT;
EXCEPTION
WHEN OTHERS
THEN
errString := SQLERRM;
INSERT INTO backfill_invoice_errors
VALUES (NULL, NULL, errString || ' ' || v_err_loc);
COMMIT;
END;

Hello,
You could use insert all in your case and make use of sequence.NEXTVAL and sequence.CURRVAL like so (excuse any typos - I can't test without table definitions). I've done the first 2 tables, so it's just a matter of adding the rest in...
INSERT ALL
INTO invoice nologging
VALUES ( SEQ_INVOICE_ID.NEXTVAL,
BILL_DATE,
my_guid,
'111111',
'NONE',
CAST(BILL_DATE AS TIMESTAMP),
CAST(UPDT_DT AS TIMESTAMP),
'READ',
'PAPER',
payer_customer,
parsed_francises,
'111111')
INTO invoice_header
VALUES ( SEQ_INVOICE_ID.CURRVAL,
TRIM (LEADING 0 FROM billing_document), --invoice_num
NULL,
BILL_DATE, --invoice_date
TO_PO_NUMBER,
NULL,
net_amount,
NULL,
tax_amt,
NULL,
NULL,
due_date,
NULL,
NULL,
NULL,
NULL,
NULL,
SYSTIMESTAMP,
SYSTIMESTAMP,
NET_AMOUNT_CURRENCY,
bc_value,
BILL_DATE)
SELECT
src.billing_type,
src.currency,
src.billing_document,
src.drop_ship_ind,
src.to_po_number,
src.to_purchase_order,
src.due_date,
src.bill_date,
src.tax_amt,
src.payer_customer,
src.to_acct_no,
src.bill_to_acct_no,
src.net_amount,
src.net_amount_currency,
src.order_dt,
src.to_customer,
src.to_name,
src.franchises,
src.updt_dt,
str_parser (src.FRANCHISES) parsed_franchises,
'import' || TO_CHAR (CURRENT_TIMESTAMP, 'hhmissff') my_guid,
i.bc_value
FROM BACKFILL_INVOICES src,
invsvc_owner.billing_codes i
WHERE i.bc_name = src.BILLING_TYPE;

Some things to note:
1. Don't commit in a loop - you only add to the run time and load on the box ultimately reducing scalability and removing transactional integrity. Commit once at the end of the job.
2. Make sure you specify the list of columns you are inserting into as well as the values or columns you are selecting. This is good practice as it protects your code from compilation issues in the event of new columns being added to tables. Also it makes it very clear what you are inserting where.
3. If you use WHEN OTHERS THEN... to log something, make sure you either rollback or raise the exception. What you have done in your code is say - I don't care what the problem is, just commit whatever has been done. This is not good practice.
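On point 3: if per-row error capture is still required, DML error logging can replace the WHEN OTHERS handler entirely. A sketch, assuming the error table was created with DBMS_ERRLOG and using simplified, invented column lists:

```sql
-- One-off setup (assumed): EXEC DBMS_ERRLOG.create_error_log('INVOICE');
INSERT INTO invoice (invoice_id, bill_date, guid)
SELECT seq_invoice_id.NEXTVAL,
       src.bill_date,
       'import' || TO_CHAR (CURRENT_TIMESTAMP, 'hhmissff')
FROM backfill_invoices src
LOG ERRORS INTO err$_invoice ('backfill run 1') REJECT LIMIT UNLIMITED;
```

Failing rows land in err$_invoice together with the Oracle error text, and the statement itself keeps going.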
HTH
David
Edited by: Bravid on Oct 13, 2011 4:35 PM -
How do i move a data file from my pc to my ipad
How do I move a data file from my computer to my ipad? It is my product catalogue and it would help if it were on my ipad.
Plug the iPad into iTunes and you can use file sharing to move it into an app that supports file sharing and that document format.
see http://support.apple.com/kb/ht4094
Or, use an online storage service (iCloud, dropbox, box, google drive, sugarsync, etc) and upload the file to that. Then on the iPad, from within the app you plan to use the file with, link to that service and download the file.
The app will again have to work with that file type, and support the particular service you used. -
How do I encrypt a data file so that only I can retrieve the info?
How do I encrypt a data file so that it cannot be read without the proper authority?
I have an application where the customer should not have access to the data I need to record for troubleshooting purposes. (there are industrial secrets I wish to protect) My plan is to record a datalog (currently I am producing a tab-delimited spreadsheet format) whenever the device is running and hide the files where they will probably not be found. But some sort of encryption or at least password protection would be better.
I've never tried to do this before, but thought it would be fairly easy. Maybe I'm just not looking in the right places.
Thanks
Solved!
Go to Solution.

Well, you could look into something like DES, Triple DES, AES, etc. There are libraries floating around for these written in LabVIEW. I'm not sure about the cost, though.
If you want a really simple way to protect your data, just invert all or some of the bits in each byte of your file. It is super simple and turns a nice ASCII text file into noise when read from a text file.
There are only two ways to tell somebody thanks: Kudos and Marked Solutions
Unofficial Forum Rules and Guidelines -
How to batch paginate multiple pdf files consecutively?
How to batch paginate multiple pdf files consecutively?
Thanks.

What programming method are you using with the Acrobat SDK?
-
How to load duplicate data to a temporary table in ssis
I have duplicate data in my table. I want to load the unique records into one destination, and the duplicate data into a temporary table in another destination. How can we implement a package for this?
Hi V60,
To achieve your goal, you can use the following two approaches:
Use Script Component to redirect the duplicate rows.
Use Fuzzy Grouping Transformation which performs data cleaning tasks by identifying rows of data that are likely to be duplicates and selecting a canonical row of data to use in standardizing the data. Then, use a Conditional Split Transform to redirect
the unique rows and the duplicate rows to different destinations.
For the step-by-step guidance about the above two methods, walk through the following blogs:
http://microsoft-ssis.blogspot.in/2011/12/redirect-duplicate-rows.html
http://hussain-msbi.blogspot.in/2013/02/redirect-duplicate-rows-using-ssis-step.html
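If a pure T-SQL split is acceptable instead of the SSIS transforms above, ROW_NUMBER can separate first occurrences from duplicates; the table and column names here are invented:

```sql
-- First occurrence of each key goes to the main destination ...
SELECT Col1, Col2
INTO dbo.UniqueRows
FROM (SELECT Col1, Col2,
             ROW_NUMBER() OVER (PARTITION BY Col1, Col2
                                ORDER BY (SELECT NULL)) AS rn
      FROM dbo.SourceTable) s
WHERE rn = 1;

-- ... and every extra copy goes to the temporary duplicates table.
SELECT Col1, Col2
INTO #Duplicates
FROM (SELECT Col1, Col2,
             ROW_NUMBER() OVER (PARTITION BY Col1, Col2
                                ORDER BY (SELECT NULL)) AS rn
      FROM dbo.SourceTable) s
WHERE rn > 1;
```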
Regards,
Mike Yin
TechNet Community Support -
How to design SQL server data file and log file growth
How should I design SQL Server data file and log file growth (SQL Server 2012)?
If my data file is 10 GB and my log file is 5 GB, what should the autogrowth size be in MB (not in %)? On what basis should we determine the ideal autogrowth size?

It's very difficult to give a definitive answer on this. The best principle is to size your database correctly in advance so that you never have to autogrow; of course, in reality that isn't always practical.
The setting you use is really dictated by the expected growth of your files. Given that the sizes are relatively small, why not set it to 1 GB on the data file(s) and 512 MB on the log file? The important thing is to monitor it on an ongoing basis to see whether that's the appropriate amount.
One thing you should do is enable instant file initialization by granting the service account Perform Volume Maintenance tasks in group policy. This will allow the data files to grow quickly when required, details here:
https://technet.microsoft.com/en-us/library/ms175935%28v=sql.105%29.aspx?f=255&MSPPError=-2147217396
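Setting a fixed MB increment, as suggested above, is one statement per file; the database and logical file names are placeholders:

```sql
ALTER DATABASE MyDb MODIFY FILE (NAME = MyDb_data, FILEGROWTH = 1024MB);
ALTER DATABASE MyDb MODIFY FILE (NAME = MyDb_log,  FILEGROWTH = 512MB);
```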
Also, it is possible to query the default trace to find autogrowth events; if you wanted, you could write an alert/SQL job based on this:
SELECT
[DatabaseName],
[FileName],
[SPID],
[Duration],
[StartTime],
[EndTime],
CASE [EventClass]
WHEN 92 THEN 'Data'
WHEN 93 THEN 'Log'
END AS [FileType]
FROM sys.fn_trace_gettable('c:\path\to\trace.trc', DEFAULT)
WHERE
EventClass IN (92,93)
hope that helps -
How do you load EXS format files into the EXS24 sampler?
How do you load EXS24-format files into the EXS24 sampler? The only formats I can load directly are AIFF and WAV. I can't find any posts addressing this question. Please help.
You don't load them in, you need to put them in:
*System/users/your user name/library/application support/logic/Sampler instruments*
Restart Logic if it's running, they will then appear in the EXS library. -
How to batch upload some data into SAP system?
Hi All:
I'm an SAP key user in our company. I want to know how to batch upload master data into the SAP system. What t-code can we use for batch input?
Thanks in advance!

Hi,
I think at least there are four methods for batch input which you can use in SAP system:
Standard mass change t-code: for example, MM17 for mass change of material master data, you can find the mass change t-code under the relevant function module;
T-code SCAT, you can use to record and generate the simple batch input template, it's easy to use for key user;
T-code LSMW, it's another transaction to generate batch input template, it's often used at the beginning of SAP rollout;
ABAP program, of course, ABAP is a universal tool for everything, you can use ABAP to generate a batch input program to upload data into SAP system as well.
So you should contact your SAP support department; they can help you upload the data.
Good luck
Tao