How to bulk load topics in RH9
Is there any way to bulk load topics into a RH TOC and create the topic files? I have a long list of topics (100+) to add to my TOC and project file, currently listed in a text document. Is there an easy way to take the list in the text document and create all of the corresponding TOC entries and topic files?
I've never tried it, but you could try importing your list from somewhere like Word or FrameMaker and setting the Pagination settings on the style you're using to break the document up into topics.
Similar Messages
-
How to UPDATE a big table in Oracle via Bulk Load
Hi all,
in a datastore target on Oracle 11g, I have a big table with 300 million records; the structure is one integer key + 10 attribute columns.
In the IQ source I have the same table with the same size; the structure is one integer key + 1 attribute column.
What I need to do is UPDATE that single field in Oracle from the values stored in IQ.
Any idea how to organize the dataflow efficiently, and which target writing mode to use? Bulk load? API?
thank you
Maurizio
Hi,
You cannot use bulk load when you need to UPDATE a field, because all a bulk load does is add records to your table.
Since you have to UPDATE a field, I would suggest going for SCD with:
source > TC (Table Comparison) > MO (Map Operation) > KG (Key Generation) > target
Arun -
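For an Oracle target, the same bulk UPDATE from a staging table can also be expressed as a single set-based MERGE rather than row-by-row logic. A minimal sketch; the table and column names below are illustrative, not from this thread:
-- Illustrative tables: the large target (attributes abbreviated to one)
-- and a staging copy of the IQ data holding the new values.
CREATE TABLE target_tab (id NUMBER PRIMARY KEY, attr1 VARCHAR2(100));
CREATE TABLE stage_tab  (id NUMBER PRIMARY KEY, attr1 VARCHAR2(100));
-- One statement updates every matching row from the staging data.
MERGE INTO target_tab t
USING stage_tab s
   ON (t.id = s.id)
 WHEN MATCHED THEN
      UPDATE SET t.attr1 = s.attr1;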
How to prevent Evaluate User Policies to run for Bulk loaded users?
Hi,
I have an OIM 11g R2 environment where I did a bulk load of about 200,000+ users, and all the users' accounts were created using target recon.
How do I prevent the Evaluate User Policies scheduled task from running for these users?
Any ideas are welcome.
Thanks,
Aravind Suresh
Hi,
I do have roles and access policies.
But I do not want them applied to these users at this stage, as they already got everything through target recon.
I want Evaluate User Policies to run only for new users, or for these users when they are updated.
Otherwise, running Evaluate User Policies for this many users would be a very time- and resource-consuming task.
Thanks,
Aravind Suresh -
How to improve performance for Azure Table Storage bulk loads
Hello all,
Would appreciate your help as we are facing a challenge.
We are trying to bulk load Azure Table Storage. We have a file that contains nearly 2 million rows.
We need to reach a point where we can bulk load 100,000-150,000 entries per minute. Currently, it takes more than 10 hours to process the file.
We have tried Parallel.ForEach but it doesn't help. Today I discovered partitioning in PLINQ. Would that be the way to go?
Any ideas? I have spent nearly two days trying to optimize it using PLINQ, but I am still not sure what the best thing to do is.
Kindly note that we shouldn't be using SQL/Azure SQL for this.
I would really appreciate your help.
Thanks
I'd think you're just pooling the parallel connections to Azure if you do it on one system. You'd also have the bottleneck of round-trip time from you, through the internet, to Azure and back again.
You could speed it up by moving the data file to the cloud and process it with a Cloud worker role. That way you'd be in the datacenter (which is a much faster, more optimized network.)
Or, if that's not fast enough - if you can split the data so multiple WorkerRoles could each process part of the file, you can use the VM's scale to put enough machines to it that it gets done quickly.
Darin R. -
How can I load my data faster? Is there a SQL solution instead of PL/SQL?
11.2.0.2
Solaris 10 sparc
I need to backfill invoices from a customer. The raw data has 3.1 million records. I have used PL/SQL to load these invoices into our system (dev); however, our issue is the amount of time the load takes - effectively about 4 hours. (The raw data has been loaded into a staging table.)
My research keeps coming back to one concept: SQL is faster than PL/SQL. Where I'm stuck is the need to programmatically load the data. The invoice table has a sequence on it (primary key = invoice_id)...the invoice_header and invoice_address tables use the invoice_id as a foreign key. So my script takes advantage of knowing the primary key and uses it on the subsequent inserts to the subordinate invoice_header and invoice_address tables, respectively.
My script is below. What I'm asking is whether there are other ideas on the quickest way to load this data...what am I not considering? I have to load the data in dev, QA, then production, so the sequences and such change between the environments. I've dummied down the code to protect the customer; syntax and correctness of the code posted here (on the forum) is moot...it's only posted to give the framework for what I currently have.
Any advice would be greatly appreciated; how can I load the data faster knowing that I need to know sequence values for inserts into other tables?
DECLARE
v_inv_id invoice.invoice_id%TYPE;
v_inv_addr_id invoice_address.invoice_address_id%TYPE;
errString invoice_errors.sqlerrmsg%TYPE;
v_guid VARCHAR2 (128);
v_str VARCHAR2 (256);
v_err_loc NUMBER;
v_count NUMBER := 0;
l_start_time NUMBER;
TYPE rec IS RECORD (
BILLING_TYPE VARCHAR2 (256),
CURRENCY VARCHAR2 (256),
BILLING_DOCUMENT VARCHAR2 (256),
DROP_SHIP_IND VARCHAR2 (256),
TO_PO_NUMBER VARCHAR2 (256),
TO_PURCHASE_ORDER VARCHAR2 (256),
DUE_DATE DATE,
BILL_DATE DATE,
TAX_AMT VARCHAR2 (256),
PAYER_CUSTOMER VARCHAR2 (256),
TO_ACCT_NO VARCHAR2 (256),
BILL_TO_ACCT_NO VARCHAR2 (256),
NET_AMOUNT VARCHAR2 (256),
NET_AMOUNT_CURRENCY VARCHAR2 (256),
ORDER_DT DATE,
TO_CUSTOMER VARCHAR2 (256),
TO_NAME VARCHAR2 (256),
FRANCHISES VARCHAR2 (4000),
UPDT_DT DATE);
TYPE tab IS TABLE OF rec
INDEX BY BINARY_INTEGER;
pltab tab;
CURSOR c
IS
SELECT billing_type,
currency,
billing_document,
drop_ship_ind,
to_po_number,
to_purchase_order,
due_date,
bill_date,
tax_amt,
payer_customer,
to_acct_no,
bill_to_acct_no,
net_amount,
net_amount_currency,
order_dt,
to_customer,
to_name,
franchises,
updt_dt
FROM BACKFILL_INVOICES;
BEGIN
l_start_time := DBMS_UTILITY.get_time;
OPEN c;
LOOP
FETCH c
BULK COLLECT INTO pltab
LIMIT 1000;
v_err_loc := 1;
FOR i IN 1 .. pltab.COUNT
LOOP
BEGIN
v_inv_id := SEQ_INVOICE_ID.NEXTVAL;
v_guid := 'import' || TO_CHAR (CURRENT_TIMESTAMP, 'hhmissff');
v_str := str_parser (pltab (i).FRANCHISES); --function to string parse - this could be done in advance, yes.
v_err_loc := 2;
v_count := v_count + 1;
INSERT INTO invoice nologging
VALUES (v_inv_id,
pltab (i).BILL_DATE,
v_guid,
'111111',
'NONE',
TO_TIMESTAMP (pltab (i).BILL_DATE),
TO_TIMESTAMP (pltab (i).UPDT_DT),
'READ',
'PAPER',
pltab (i).payer_customer,
v_str,
'111111');
v_err_loc := 3;
INSERT INTO invoice_header nologging
VALUES (v_inv_id,
TRIM (LEADING 0 FROM pltab (i).billing_document), --invoice_num
NULL,
pltab (i).BILL_DATE, --invoice_date
pltab (i).TO_PO_NUMBER,
NULL,
pltab (i).net_amount,
NULL,
pltab (i).tax_amt,
NULL,
NULL,
pltab (i).due_date,
NULL,
NULL,
NULL,
NULL,
NULL,
TO_TIMESTAMP (SYSDATE),
TO_TIMESTAMP (SYSDATE),
PLTAB (I).NET_AMOUNT_CURRENCY,
(SELECT i.bc_value
FROM invsvc_owner.billing_codes i
WHERE i.bc_name = PLTAB (I).BILLING_TYPE),
PLTAB (I).BILL_DATE);
v_err_loc := 4;
INSERT INTO invoice_address nologging
VALUES (invsvc_owner.SEQ_INVOICE_ADDRESS_ID.NEXTVAL,
v_inv_id,
'BLAH INITIAL',
pltab (i).BILL_DATE,
NULL,
pltab (i).to_acct_no,
NULL,
NULL,
NULL,
NULL,
NULL,
NULL,
NULL,
SYSTIMESTAMP,
NULL);
v_err_loc := 5;
INSERT INTO invoice_address nologging
VALUES ( SEQ_INVOICE_ADDRESS_ID.NEXTVAL,
v_inv_id,
'BLAH',
pltab (i).BILL_DATE,
NULL,
pltab (i).TO_ACCT_NO,
NULL,
NULL,
NULL,
NULL,
NULL,
NULL,
NULL,
SYSTIMESTAMP,
NULL);
v_err_loc := 6;
INSERT INTO invoice_address nologging
VALUES ( SEQ_INVOICE_ADDRESS_ID.NEXTVAL,
v_inv_id,
'BLAH2',
pltab (i).BILL_DATE,
NULL,
pltab (i).TO_CUSTOMER,
pltab (i).to_name,
NULL,
NULL,
NULL,
NULL,
NULL,
NULL,
SYSTIMESTAMP,
NULL);
v_err_loc := 7;
INSERT INTO invoice_address nologging
VALUES ( SEQ_INVOICE_ADDRESS_ID.NEXTVAL,
v_inv_id,
'BLAH3',
pltab (i).BILL_DATE,
NULL,
'SOME PROPRIETARY DATA',
NULL,
NULL,
NULL,
NULL,
NULL,
NULL,
NULL,
SYSTIMESTAMP,
NULL);
v_err_loc := 8;
INSERT
INTO invoice_event nologging (id,
eid,
root_eid,
invoice_number,
event_type,
event_email_address,
event_ts)
VALUES ( SEQ_INVOICE_EVENT_ID.NEXTVAL,
'111111',
'222222',
TRIM (LEADING 0 FROM pltab (i).billing_document),
'READ',
'some_user@some_company.com',
SYSTIMESTAMP);
v_err_loc := 9;
INSERT INTO backfill_invoice_mapping
VALUES (v_inv_id,
v_guid,
pltab (i).billing_document,
pltab (i).payer_customer,
pltab (i).net_amount);
IF v_count = 10000
THEN
COMMIT;
END IF;
EXCEPTION
WHEN OTHERS
THEN
errString := SQLERRM;
INSERT INTO backfill_invoice_errors
VALUES (
pltab (i).billing_document,
pltab (i).payer_customer,
errString || ' ' || v_err_loc);
COMMIT;
END;
END LOOP;
v_err_loc := 10;
INSERT INTO backfill_invoice_timing
VALUES (
ROUND ( (DBMS_UTILITY.get_time - l_start_time) / 100,
2)
|| ' seconds.',
(SELECT COUNT (1)
FROM backfill_invoice_mapping),
(SELECT COUNT (1)
FROM backfill_invoice_errors),
SYSDATE);
COMMIT;
EXIT WHEN c%NOTFOUND;
END LOOP;
COMMIT;
EXCEPTION
WHEN OTHERS
THEN
errString := SQLERRM;
INSERT INTO backfill_invoice_errors
VALUES (NULL, NULL, errString || ' ' || v_err_loc);
COMMIT;
END;
Hello
You could use insert all in your case and make use of sequence.NEXTVAL and sequence.CURRVAL like so (excuse any typos - I can't test without table definitions). I've done the first 2 tables, so it's just a matter of adding the rest in...
INSERT ALL
INTO invoice nologging
VALUES ( SEQ_INVOICE_ID.NEXTVAL,
BILL_DATE,
my_guid,
'111111',
'NONE',
CAST(BILL_DATE AS TIMESTAMP),
CAST(UPDT_DT AS TIMESTAMP),
'READ',
'PAPER',
payer_customer,
parsed_franchises,
'111111')
INTO invoice_header
VALUES ( SEQ_INVOICE_ID.CURRVAL,
TRIM (LEADING 0 FROM billing_document), --invoice_num
NULL,
BILL_DATE, --invoice_date
TO_PO_NUMBER,
NULL,
net_amount,
NULL,
tax_amt,
NULL,
NULL,
due_date,
NULL,
NULL,
NULL,
NULL,
NULL,
SYSTIMESTAMP,
SYSTIMESTAMP,
NET_AMOUNT_CURRENCY,
bc_value,
BILL_DATE)
SELECT
src.billing_type,
src.currency,
src.billing_document,
src.drop_ship_ind,
src.to_po_number,
src.to_purchase_order,
src.due_date,
src.bill_date,
src.tax_amt,
src.payer_customer,
src.to_acct_no,
src.bill_to_acct_no,
src.net_amount,
src.net_amount_currency,
src.order_dt,
src.to_customer,
src.to_name,
src.franchises,
src.updt_dt,
str_parser (src.FRANCHISES) parsed_franchises,
'import' || TO_CHAR (CURRENT_TIMESTAMP, 'hhmissff') my_guid,
i.bc_value
FROM BACKFILL_INVOICES src,
invsvc_owner.billing_codes i
WHERE i.bc_name = src.BILLING_TYPE;
Some things to note:
1. Don't commit in a loop - you only add to the run time and load on the box, ultimately reducing scalability and removing transactional integrity. Commit once at the end of the job.
2. Make sure you specify the list of columns you are inserting into as well as the values or columns you are selecting. This is good practice as it protects your code from compilation issues in the event of new columns being added to tables. Also it makes it very clear what you are inserting where.
3. If you use WHEN OTHERS THEN ... to log something, make sure you either roll back or re-raise the exception. What you have done in your code is say: I don't care what the problem is, just commit whatever has been done. This is not good practice.
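As an aside on point 3: if per-row error capture is still wanted in a pure-SQL load, Oracle's DML error logging can provide it without a PL/SQL handler. A minimal, self-contained sketch with illustrative table names (not the poster's schema):
-- One-time setup: a demo target, a demo staging table, and the error log.
CREATE TABLE demo_target (id NUMBER PRIMARY KEY, amt NUMBER NOT NULL);
CREATE TABLE demo_stage  (id NUMBER, amt NUMBER);
BEGIN
  DBMS_ERRLOG.CREATE_ERROR_LOG('DEMO_TARGET');  -- creates ERR$_DEMO_TARGET
END;
/
-- Failed rows land in ERR$_DEMO_TARGET instead of aborting the statement.
INSERT INTO demo_target (id, amt)
SELECT id, amt
  FROM demo_stage
   LOG ERRORS INTO err$_demo_target ('backfill run 1') REJECT LIMIT UNLIMITED;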
HTH
David
Edited by: Bravid on Oct 13, 2011 4:35 PM -
Issue with Bulk Load Post Process
Hi,
I ran the bulk load command-line utility to create users in OIM. I had 5 records in my CSV file, out of which 2 users were successfully created in OIM; for the rest I got exceptions because the users already existed. After that, when I run Bulk Load Post Process for LDAP sync, password generation, and notification, it does not work even for the successfully created users. Ideally it should sync the successfully created users. However, if there are no exceptions during the bulk load utility run, the LDAP sync works fine through Bulk Load Post Process. Any idea how to resolve this issue and sync into OID the users that were successfully created? Urgent help would be appreciated.
The scheduled task carries out post-processing activities on the users imported through the bulk load utility.
-
Bulk loading BLOBs using PL/SQL - is it possible?
Hi -
Does anyone have a good reference article or example of how I can bulk load BLOBs (videos, images, audio, office docs/PDFs) into the database using PL/SQL?
Every example I've ever seen in PL/SQL for loading BLOBs does a COMMIT after each file loaded... which doesn't seem very scalable.
Can we pass an array of BLOBs from the application into PL/SQL, loop through that array, and then issue a commit after the loop terminates?
Any advice or help is appreciated. Thanks
LJ
It is easy enough to modify the example to commit every N files. If you are loading large amounts of media, I think you will find that the time to load the media is far greater than the time spent in SQL statements doing inserts or retrieves. Thus, I would not expect to see any significant benefit from changing the example to use PL/SQL collection types in order to do bulk row operations.
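For instance, a minimal sketch of a BFILE-based loop that commits every 100 files rather than after each one; the table, sequence, manifest, and directory names are illustrative:
DECLARE
  v_bfile BFILE;
  v_blob  BLOB;
  v_count PLS_INTEGER := 0;
BEGIN
  -- media_manifest lists the file names to load from directory MEDIA_DIR.
  FOR r IN (SELECT file_name FROM media_manifest) LOOP
    INSERT INTO media_store (id, content)
    VALUES (media_seq.NEXTVAL, EMPTY_BLOB())
    RETURNING content INTO v_blob;
    v_bfile := BFILENAME('MEDIA_DIR', r.file_name);
    DBMS_LOB.OPEN(v_bfile, DBMS_LOB.LOB_READONLY);
    DBMS_LOB.LOADFROMFILE(v_blob, v_bfile, DBMS_LOB.GETLENGTH(v_bfile));
    DBMS_LOB.CLOSE(v_bfile);
    v_count := v_count + 1;
    IF MOD(v_count, 100) = 0 THEN  -- commit every N files, not every file
      COMMIT;
    END IF;
  END LOOP;
  COMMIT;
END;
/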
If your goal is high performance bulk load of binary content then I would suggest that you look to use Sqlldr. A PL/SQL program loading from BFILEs is limited to loading files that are accessible from the database server file system. Sqlldr can do this but it can also load data from a remote client. Sqlldr has parameters to control batching of operations.
See section 7.3 of the Oracle Multimedia DICOM Developer's Guide for the example Loading DICOM Content Using the SQL*Loader Utility. You will need to adapt this example to the other Multimedia objects (ORDImage, ORDAudio .. etc) but the basic concepts are the same.
Once the binary content is loaded into the database, you will need a to write a program to loop over the new content and initialize the Multimedia objects (extract attributes). The example in 7.3 contains a sample program that does this for the ORDDicom object. -
Hello,
I have one question regarding bulk loading. I have done a lot of bulk loading.
But my requirement is to call a function that does some DML and returns a reference key, so that I can insert into the fact table.
I can't call a function that does DML from a SELECT statement (it raises an error). The other way is an autonomous transaction, which I tried and got working, but performance is very slow.
How do I call this function inside the bulk loading process?
Help !!
xx_f is a function that uses an autonomous transaction;
See my sample code
declare
cursor c1 is select a,b,c from xx;
type l_a is table of xx.a%type;
type l_b is table of xx.b%type;
type l_c is table of xx.c%type;
v_a l_a;
v_b l_b;
v_c l_c;
begin
open c1;
loop
fetch c1 bulk collect into v_a,v_b,v_c limit 1000;
exit when c1%notfound;
begin
forall i in 1..v_a.count
insert into xxyy
(a,b,c) values (xx_f(v_a(i)), xx_f(v_b(i)), xx_f(v_c(i)));
commit;
end;
end loop;
close c1;
end;
I just want to call the xx_f function without an autonomous transaction,
but with bulk loading. Please let me know if you need more details.
Thanks
yreddyr
Can you show the code for xx_f? Does it do DML, or just transformations on the columns?
Depending on what it does, an alternative could be something like:
DECLARE
CURSOR c1 IS
SELECT xx_f(a), xx_f(b), xx_f(c) FROM xx;
TYPE l_a IS TABLE OF whatever xx_f returns;
TYPE l_b IS TABLE OF whatever xx_f returns;
TYPE l_c IS TABLE OF whatever xx_f returns;
v_a l_a;
v_b l_b;
v_c l_c;
BEGIN
OPEN c1;
LOOP
FETCH c1 BULK COLLECT INTO v_a, v_b, v_c LIMIT 1000;
BEGIN
FORALL i IN 1..v_a.COUNT
INSERT INTO xxyy (a, b, c)
VALUES (v_a(i), v_b(i), v_c(i));
END;
EXIT WHEN c1%NOTFOUND;
END LOOP;
CLOSE c1;
END;
John -
Hi,
I have a file where fields are wrapped with ".
=========== file sample
"asdsa","asdsadasdas","1123"
"asdsa","asdsadasdas","1123"
"asdsa","asdsadasdas","1123"
"asdsa","asdsadasdas","1123"
==========
I have a .NET method to remove the wrap characters and write out a file without them.
======================
asdsa,asdsadasdas,1123
asdsa,asdsadasdas,1123
asdsa,asdsadasdas,1123
asdsa,asdsadasdas,1123
======================
the .net code is here.
========================================
public static string RemoveCharacter(string sFileName, char cRemoveChar)
{
    object objLock = new object();
    //VirtualStream objInputStream = null;
    //VirtualStream objOutStream = null;
    FileStream objInputFile = null, objOutFile = null;
    // Build the output path once so the file written and the path returned match.
    string sOutFile = sFileName.Substring(0, sFileName.LastIndexOf('\\')) + "\\" + Guid.NewGuid().ToString();
    lock (objLock)
    {
        try
        {
            objInputFile = new FileStream(sFileName, FileMode.Open);
            objOutFile = new FileStream(sOutFile, FileMode.Create);
            int nByteRead;
            // Copy byte by byte, skipping the wrap character.
            while ((nByteRead = objInputFile.ReadByte()) != -1)
            {
                if (nByteRead != (int)cRemoveChar)
                    objOutFile.WriteByte((byte)nByteRead);
            }
        }
        finally
        {
            if (objInputFile != null) objInputFile.Close();
            if (objOutFile != null) objOutFile.Close();
        }
    }
    return sOutFile;
}
==================================
However, when I run the bulk load utility I get the error
=======================================
Bulk load data conversion error (type mismatch or invalid character for the specified codepage) for row 1, column 3 (NumberOfMultipleMatches).
==========================================
the bulk insert statement is as follows
=========================================
BULK INSERT Temp
FROM '<file name>' WITH (
FIELDTERMINATOR = ','
, KEEPNULLS
)
==========================================
Does anybody know what is happening and what needs to be done ?
PLEASE HELP
Thanks in advance
Vikram
To load that file with BULK INSERT, use this format file:
9.0
4
1 SQLCHAR 0 0 "\"" 0 "" ""
2 SQLCHAR 0 0 "\",\"" 1 col1 Latin1_General_CI_AS
3 SQLCHAR 0 0 "\",\"" 2 col2 Latin1_General_CI_AS
4 SQLCHAR 0 0 "\"\r\n" 3 col3 Latin1_General_CI_AS
Note that the format file defines four fields while the file only seems to have three. The format file defines an empty field before the first quote.
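For completeness, the BULK INSERT statement would then reference the format file instead of a plain FIELDTERMINATOR; the paths below are illustrative:
-- quoted.fmt holds the 4-field format file shown above.
BULK INSERT Temp
FROM 'C:\data\input.txt'
WITH (FORMATFILE = 'C:\data\quoted.fmt', KEEPNULLS);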
Or, since you already have a .NET program, use a stored procedure with a table-valued parameter instead. I have an example of how to do this here:
http://www.sommarskog.se/arrays-in-sql-2008.html
Erland Sommarskog, SQL Server MVP, [email protected] -
Bulk Load question for an insert statement.
I'm looking to put the following statement into a FORALL statement using BULK COLLECT, and I need some guidance.
Am I going to be putting the SELECT statement into a cursor and then loading the cursor values into a variable of a defined nested-table type?
INSERT INTO TEMP_ASSOC_CURRENT_WEEK_IDS
SELECT aor.associate_office_record_id ,
sched.get_assoc_sched_rotation_week(aor.associate_office_record_id, v_weekType.start_date) week_id
FROM ASSOCIATE_OFFICE_RECORDS aor
WHERE aor.OFFICE_ID = v_office_id
AND (
(aor.lt_assoc_stage_result_id in (4,8)
AND v_officeWeekType.start_date >= trunc(aor.schedule_start_date)
OR aor.lt_assoc_stage_result_id in (1, 2)
));
I see people are reading this, so for the insanely curious, here's how I did it.
Type AOR_REC is RECORD(
associate_office_record_id dbms_sql.number_table,
week_id dbms_sql.number_table); --RJS.***Setting up Type for use with Bulk Collect FORALL statements.
v_a_rec AOR_REC; -- RJS. *** defining variable of defined Type to use with Bulk Collect FORALL statements.
CURSOR cur_aor_ids -- RJS *** Cursor for BULK COLLECT.
IS
SELECT aor.associate_office_record_id associate_office_record_id,
sched.get_assoc_sched_rotation_week(aor.associate_office_record_id, v_weekType.start_date) week_id
FROM ASSOCIATE_OFFICE_RECORDS aor
WHERE aor.OFFICE_ID = v_office_id
AND (
(aor.lt_assoc_stage_result_id in (4,8)
AND v_officeWeekType.start_date >= trunc(aor.schedule_start_date)
OR aor.lt_assoc_stage_result_id in (1, 2)
))
FOR UPDATE NOWAIT;
BEGIN
BEGIN
OPEN cur_aor_ids;
LOOP
FETCH cur_aor_ids BULK COLLECT into
v_a_rec.associate_office_record_id, v_a_rec.week_id; --RJS. *** Bulk load your cursor data into a buffer to do the INSERT all at once.
FORALL i IN 1..v_a_rec.associate_office_record_id.COUNT SAVE EXCEPTIONS
INSERT INTO TEMP_ASSOC_CURRENT_WEEK_IDS
(associate_office_record_id,week_id)
VALUES
(v_a_rec.associate_office_record_id(i), v_a_rec.week_id(i)); --RJS. *** Single FORALL BULK INSERT statement.
EXIT WHEN cur_aor_ids%NOTFOUND;
END LOOP;
CLOSE cur_aor_ids;
EXCEPTION
WHEN OTHERS THEN
dbms_output.put_line('ERROR ENCOUNTERED IS SQLCODE = '|| SQLCODE ||' AND SQLERRM = ' || SQLERRM);
dbms_output.put_line('Number of INSERT statements that failed: ' || SQL%BULK_EXCEPTIONS.COUNT);
End;
Easy right? -
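One caveat on the pattern above: with SAVE EXCEPTIONS, a failing FORALL raises ORA-24381, and the per-row details sit in SQL%BULK_EXCEPTIONS; a WHEN OTHERS handler that only prints SQLERRM loses that detail. A self-contained sketch with an illustrative table:
CREATE TABLE demo_ids (id NUMBER NOT NULL);
DECLARE
  TYPE num_tab IS TABLE OF NUMBER;
  v_ids  num_tab := num_tab(1, NULL, 3);  -- the NULL row will be rejected
  e_bulk EXCEPTION;
  PRAGMA EXCEPTION_INIT(e_bulk, -24381);  -- ORA-24381: array DML errors
BEGIN
  FORALL i IN 1 .. v_ids.COUNT SAVE EXCEPTIONS
    INSERT INTO demo_ids VALUES (v_ids(i));
EXCEPTION
  WHEN e_bulk THEN
    FOR j IN 1 .. SQL%BULK_EXCEPTIONS.COUNT LOOP
      dbms_output.put_line('Row ' || SQL%BULK_EXCEPTIONS(j).ERROR_INDEX ||
                           ' failed with ORA-' || SQL%BULK_EXCEPTIONS(j).ERROR_CODE);
    END LOOP;
END;
/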
Error when Bulk load hierarchy data
Hi,
While loading P6 Reporting databases, the following error message appears at the step in charge of bulk loading hierarchy data into ODS.
<04.29.2011 14:03:59> load [INFO] (Message) - === Bulk load hierarchy data into ODS (ETL_LOADWBSHierarchy.ldr)
<04.29.2011 14:04:26> load [INFO] (Message) - Load completed - logical record count 384102.
<04.29.2011 14:04:26> load [ERROR] (Message) - SqlLoaderSQL LOADER ACTION FAILED. [control=D:\oracle\app\product\11.1.0\db_1\p6rdb\scripts\DATA_WBSHierarchy.csv.ldr] [file=D:\oracle\app\product\11.1.0\db_1\p6rdb\temp\WBSHierarchy\DATA_WBSHierarchy.csv]
<04.29.2011 14:04:26> load [INFO] (Progress) - Step 3/9 Part 5/6 - FAILED (-1) (0 hours, 0 minutes, 28 seconds, 16 milliseconds)
Checking corresponding log error file (see below) I see that effectively some records are rejected. Question is: How could I identify the source of the problem and fix it?
SQL*Loader: Release 11.1.0.6.0 - Production on Mon May 2 09:03:22 2011
Copyright (c) 1982, 2007, Oracle. All rights reserved.
Control File: DATA_WBSHierarchy.csv.ldr
Character Set UTF16 specified for all input.
Using character length semantics.
Byteorder little endian specified.
Data File: D:\oracle\app\product\11.1.0\db_1\p6rdb\temp\WBSHierarchy\DATA_WBSHierarchy.csv
Bad File: DATA_WBSHierarchy.bad
Discard File: none specified
(Allow all discards)
Number to load: ALL
Number to skip: 0
Errors allowed: 50
Bind array: 64 rows, maximum of 256000 bytes
Continuation: none specified
Path used: Conventional
Table WBSHIERARCHY, loaded from every logical record.
Insert option in effect for this table: APPEND
TRAILING NULLCOLS option in effect
Column Name Position Len Term Encl Datatype
PARENTOBJECTID FIRST * WHT CHARACTER
PARENTPROJECTID NEXT * WHT CHARACTER
PARENTSEQUENCENUMBER NEXT * WHT CHARACTER
PARENTNAME NEXT * WHT CHARACTER
PARENTID NEXT * WHT CHARACTER
CHILDOBJECTID NEXT * WHT CHARACTER
CHILDPROJECTID NEXT * WHT CHARACTER
CHILDSEQUENCENUMBER NEXT * WHT CHARACTER
CHILDNAME NEXT * WHT CHARACTER
CHILDID NEXT * WHT CHARACTER
PARENTLEVELSBELOWROOT NEXT * WHT CHARACTER
CHILDLEVELSBELOWROOT NEXT * WHT CHARACTER
LEVELSBETWEEN NEXT * WHT CHARACTER
CHILDHASCHILDREN NEXT * WHT CHARACTER
FULLPATHNAME NEXT 8000 WHT CHARACTER
SKEY SEQUENCE (MAX, 1)
value used for ROWS parameter changed from 64 to 21
Record 14359: Rejected - Error on table WBSHIERARCHY, column PARENTLEVELSBELOWROOT.
ORA-01400: cannot insert NULL into ("ODSUSER"."WBSHIERARCHY"."PARENTLEVELSBELOWROOT")
Record 14360: Rejected - Error on table WBSHIERARCHY, column PARENTOBJECTID.
ORA-01722: invalid number
Record 14361: Rejected - Error on table WBSHIERARCHY, column PARENTOBJECTID.
ORA-01722: invalid number
Record 27457: Rejected - Error on table WBSHIERARCHY, column PARENTLEVELSBELOWROOT.
ORA-01400: cannot insert NULL into ("ODSUSER"."WBSHIERARCHY"."PARENTLEVELSBELOWROOT")
Record 27458: Rejected - Error on table WBSHIERARCHY, column PARENTOBJECTID.
ORA-01722: invalid number
Record 27459: Rejected - Error on table WBSHIERARCHY, column PARENTOBJECTID.
ORA-01722: invalid number
Record 38775: Rejected - Error on table WBSHIERARCHY, column PARENTLEVELSBELOWROOT.
ORA-01400: cannot insert NULL into ("ODSUSER"."WBSHIERARCHY"."PARENTLEVELSBELOWROOT")
Record 38776: Rejected - Error on table WBSHIERARCHY, column PARENTOBJECTID.
ORA-01722: invalid number
Record 38777: Rejected - Error on table WBSHIERARCHY, column PARENTOBJECTID.
ORA-01722: invalid number
Record 52411: Rejected - Error on table WBSHIERARCHY, column PARENTLEVELSBELOWROOT.
ORA-01400: cannot insert NULL into ("ODSUSER"."WBSHIERARCHY"."PARENTLEVELSBELOWROOT")
Record 52412: Rejected - Error on table WBSHIERARCHY, column PARENTOBJECTID.
ORA-01722: invalid number
Record 52413: Rejected - Error on table WBSHIERARCHY, column PARENTOBJECTID.
ORA-01722: invalid number
Record 114619: Rejected - Error on table WBSHIERARCHY, column PARENTLEVELSBELOWROOT.
ORA-01400: cannot insert NULL into ("ODSUSER"."WBSHIERARCHY"."PARENTLEVELSBELOWROOT")
Record 114620: Rejected - Error on table WBSHIERARCHY, column PARENTOBJECTID.
ORA-01722: invalid number
Record 127921: Rejected - Error on table WBSHIERARCHY, column PARENTLEVELSBELOWROOT.
ORA-01400: cannot insert NULL into ("ODSUSER"."WBSHIERARCHY"."PARENTLEVELSBELOWROOT")
Record 127922: Rejected - Error on table WBSHIERARCHY, column PARENTOBJECTID.
ORA-01722: invalid number
Record 164588: Rejected - Error on table WBSHIERARCHY, column PARENTLEVELSBELOWROOT.
ORA-01400: cannot insert NULL into ("ODSUSER"."WBSHIERARCHY"."PARENTLEVELSBELOWROOT")
Record 164589: Rejected - Error on table WBSHIERARCHY, column PARENTOBJECTID.
ORA-01722: invalid number
Record 171322: Rejected - Error on table WBSHIERARCHY, column PARENTLEVELSBELOWROOT.
ORA-01400: cannot insert NULL into ("ODSUSER"."WBSHIERARCHY"."PARENTLEVELSBELOWROOT")
Record 171323: Rejected - Error on table WBSHIERARCHY, column PARENTOBJECTID.
ORA-01722: invalid number
Record 186779: Rejected - Error on table WBSHIERARCHY, column PARENTLEVELSBELOWROOT.
ORA-01400: cannot insert NULL into ("ODSUSER"."WBSHIERARCHY"."PARENTLEVELSBELOWROOT")
Record 186780: Rejected - Error on table WBSHIERARCHY, column PARENTOBJECTID.
ORA-01722: invalid number
Record 208687: Rejected - Error on table WBSHIERARCHY, column PARENTLEVELSBELOWROOT.
ORA-01400: cannot insert NULL into ("ODSUSER"."WBSHIERARCHY"."PARENTLEVELSBELOWROOT")
Record 208688: Rejected - Error on table WBSHIERARCHY, column PARENTOBJECTID.
ORA-01722: invalid number
Record 221167: Rejected - Error on table WBSHIERARCHY, column PARENTLEVELSBELOWROOT.
ORA-01400: cannot insert NULL into ("ODSUSER"."WBSHIERARCHY"."PARENTLEVELSBELOWROOT")
Record 221168: Rejected - Error on table WBSHIERARCHY, column PARENTOBJECTID.
ORA-01722: invalid number
Record 246951: Rejected - Error on table WBSHIERARCHY, column PARENTLEVELSBELOWROOT.
ORA-01400: cannot insert NULL into ("ODSUSER"."WBSHIERARCHY"."PARENTLEVELSBELOWROOT")
Record 246952: Rejected - Error on table WBSHIERARCHY, column PARENTOBJECTID.
ORA-01722: invalid number
Table WBSHIERARCHY:
384074 Rows successfully loaded.
28 Rows not loaded due to data errors.
0 Rows not loaded because all WHEN clauses were failed.
0 Rows not loaded because all fields were null.
Space allocated for bind array: 244377 bytes(21 rows)
Read buffer bytes: 1048576
Total logical records skipped: 0
Total logical records read: 384102
Total logical records rejected: 28
Total logical records discarded: 0
Run began on Mon May 02 09:03:22 2011
Run ended on Mon May 02 09:04:07 2011
Elapsed time was: 00:00:44.99
Hi Mandeep,
Thanks for the information.
But it still does not seem to work.
Actually, I have Group ID and Group Name as display fields in the Hierarchy table.
Group ID i have directly mapped to Group ID.
I have created a Split Hierarchy of Group Name and mapped it.
I have also made all the option configurations as per your suggestions, but it still does not work.
Can you please help.
Thanks,
Priya. -
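On the original question (identifying the source of the rejections): SQL*Loader writes every rejected record to the bad file named in the log, DATA_WBSHierarchy.bad, so the offending source rows can be inspected there. One hedged way to browse them from SQL is an external table over the bad file; the directory object and table names below are illustrative:
CREATE OR REPLACE DIRECTORY p6rdb_tmp AS
  'D:\oracle\app\product\11.1.0\db_1\p6rdb\temp\WBSHierarchy';
-- Each bad-file record is exposed as one raw line, for eyeballing the
-- empty PARENTLEVELSBELOWROOT and non-numeric PARENTOBJECTID values.
CREATE TABLE wbs_bad_rows (raw_line VARCHAR2(4000))
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY p6rdb_tmp
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE CHARACTERSET UTF16
    FIELDS MISSING FIELD VALUES ARE NULL
    (raw_line POSITION(1:4000) CHAR(4000))
  )
  LOCATION ('DATA_WBSHierarchy.bad')
)
REJECT LIMIT UNLIMITED;
SELECT raw_line FROM wbs_bad_rows;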
Bulk load in OIM 11g enabled with LDAP sync
Has anyone performed a bulk load of more than 100,000 users using the Bulk Load utility in OIM 11g?
The challenge here is that we have an OIM 11.1.1.5.0 environment enabled with LDAP sync.
We are trying to figure out some performance factors and the best way to achieve our requirement:
1. Have you performed any timings around use of the Bulk Load tool? Any idea how long it will take to LDAP-sync more than 100,000 users into OID? What problems could we encounter during this flow?
2. Is it possible to migrate users into another environment and then swap that database in as the OIM database? Also, is there any effective way to load into OID directly?
3. We also have a custom Scheduled Task to modify a couple of user attributes (using the update API) from a flat file. Have you tried such a scenario after the bulk load, and did you face any problems while doing so?
Thanks
DK
To update a UDF you must assign a Copy Value adapter in Lookup.USR_PROCESS_TRIGGERS (Design Console / Lookup Definition),
e.g.
CODE                 DECODE
USR_UDF_MYATTR1      Change MYATTR1
USR_UDF_MYATTR2      Change MYATTR2
Edited by: Lighting Cui on 2011-8-3 12:25 AM -
Retry "Bulk Load Post Process" batch
Hi,
First question: what is the actual use of the scheduled task "Bulk Load Post Process"? If I am not sending email notifications, LDAP syncing, or generating passwords, do I still need to run this task after performing a bulk load through the utility?
Also, I ran this task, and now there are some batches in the "READY FOR PROCESSING" state. How do I re-run these batches?
Thanks,
Vishal
The scheduled task carries out post-processing activities on the users imported through the bulk load utility.
-
OIM Bulk Load: Insufficient privileges
Hi All,
I'm trying to use the OIM Bulk Load Utility and I keep getting this error message:
Exception in thread "main" java.sql.SQLException: ORA-01031: insufficient privileges
ORA-06512: at "OIMUSER.OIM_BLKLD_SP_CREATE_LOG", line 39
ORA-06512: at "OIMUSER.OIM_BLKLD_PKG_USR", line 281
I've followed the instructions and gone over everything a few times. The utility tests the connection to the database OK.
I don't know much about Oracle databases, so I am not sure how to do even basic troubleshooting. Could I just give my OIMUSER full permissions? Shouldn't it have full permissions as it is?
I did have to create a tablespace for this utility; maybe OIMUSER needs to be given access to it? I have no idea...
Any help would be greatly appreciated!
Alex
Even I got the same error; at that time the DB OIM user had the following permissions:
CREATE TABLE
CREATE VIEW
QUERY REWRITE
UNLIMITED TABLESPACE
EXECUTE ON SYS.DBMS_SHARED_POOL
EXECUTE ON SYS.DBMS_SYSTEM
SELECT ON SYS.DBA_2PC_PENDING
SELECT ON SYS.DBA_PENDING_TRANSACTIONS
SELECT ON SYS.PENDING_TRANS$
SELECT ON SYS.V$XATRANS$
CONNECT
RESOURCE
Later the DBA granted the following additional privileges and it worked like a charm:
CREATE ANY INDEX
CREATE ANY SYNONYM
CREATE ANY TRIGGER
CREATE ANY TYPE
CREATE DATABASE LINK
CREATE JOB
CREATE LIBRARY
CREATE MATERIALIZED VIEW
CREATE PROCEDURE
CREATE SEQUENCE
CREATE TABLE
CREATE TRIGGER
CREATE VIEW -
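For reference, the additional privileges listed above would be granted by a DBA roughly like this (OIMUSER is the schema owner from the thread):
GRANT CREATE ANY INDEX, CREATE ANY SYNONYM, CREATE ANY TRIGGER,
      CREATE ANY TYPE, CREATE DATABASE LINK, CREATE JOB, CREATE LIBRARY,
      CREATE MATERIALIZED VIEW, CREATE PROCEDURE, CREATE SEQUENCE,
      CREATE TABLE, CREATE TRIGGER, CREATE VIEW
   TO oimuser;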
I have a huge .dat file to load using sqlldr. I am told that there is a bulk load option that can be used. If true, how do I use it (syntax)?
Are there any other ways of loading large volumes of data from .dat files into an Oracle DB?
A quick reply is appreciated.
http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96652/ch09.htm#1007453
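The bulk option the poster was likely told about is SQL*Loader's direct-path load (DIRECT=TRUE), which bypasses conventional INSERT processing and writes data blocks directly. A sketch with illustrative file, table, and column names:
-- Invocation:
--   sqlldr userid=scott/tiger control=load.ctl data=big.dat direct=true
-- Minimal control file (load.ctl):
LOAD DATA
INFILE 'big.dat'
APPEND
INTO TABLE big_table
FIELDS TERMINATED BY ','
(col1, col2, col3)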