Coherence Bulk Load Memory Problem
Hi,
I am trying to load about 35 million objects (Java objects) into stand-alone cache servers. According to Coherence's reports, the average item size is 360 bytes. To do this I am using InvocationServices, which means my invocable threads run on separate nodes (I have arranged for them to run on storage-enabled nodes).
But I am getting memory problems, like this:
I have 8 nodes with 5 GB of heap each, 40 GB in total, but even with this configuration the grid does full GCs several times a day.
So, to reduce the full GCs somewhat, I stopped using invocable threads and used regular threads in my application instead. That makes the loading work, but it does not seem like a clean solution to me.
Any ideas on which approach is better (or perhaps a different method altogether)?
Thanks
Have you taken a look at your GC logs to see what is happening?
Also, my rough calculations put you at an undersized grid:
35M objects @ 360 bytes means you need ~12GB of storage, not taking into account indexes, etc.
Optimistically, you'd need over 50GB of heap space in your grid. I'd suggest over 60GB. Why? Because you need space for backup copies of your data (assuming you're running distributed services), working heap for serialisation, entry processors, etc., and your grid should never be more than 80% full.
My guess? You need more nodes/capacity in your grid - hence the GCs.
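For concreteness, the sizing arithmetic above can be sketched as follows. This is a back-of-the-envelope estimate, not a Coherence API call: the one-backup factor and the 80% fill ceiling come from the reply above, while the 300-byte per-entry overhead (key, indexes, backing-map metadata) is purely an assumed figure for illustration:

```java
public class GridSizing {

    // Rough heap requirement for a partitioned cache: primary data plus
    // backup copies, divided by the maximum safe utilisation of the heap.
    static double requiredHeapGb(long entries, long avgValueBytes,
                                 long assumedOverheadBytes, int backupCount,
                                 double maxUtilisation) {
        double bytes = (double) entries * (avgValueBytes + assumedOverheadBytes)
                * (1 + backupCount);
        return bytes / maxUtilisation / (1024.0 * 1024 * 1024);
    }

    public static void main(String[] args) {
        // 35M entries, 360 bytes each, one backup, grid at most 80% full,
        // 300 bytes/entry of overhead (an assumption, not a measured value).
        double gb = requiredHeapGb(35_000_000L, 360L, 300L, 1, 0.8);
        System.out.printf("Estimated total heap needed: %.1f GB%n", gb);
    }
}
```

With these assumptions the estimate lands in the 50-60 GB range, which is roughly why an 8 x 5 GB grid ends up GC-bound.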
Similar Messages
-
LabVIEW file load memory problem
Hi Guys,
I am loading different binary files and am getting a "Memory full" error.
The error occurs when I load a 43,837 KB or a 110,196 KB file,
but when I load a 5,811 KB file it works fine.
Please note that the 43,837 KB file has 5,611,000 samples and the 110,196 KB file has 8,060,000.
Can anyone tell me how to fix it?
Rgs
M Omar Tariq
Please post your code so we can give you real suggestions. It sounds like you are making extra memory copies you probably don't need. To get a jump start on the problem, take a look at the tutorial Managing Large Data Sets in LabVIEW. Newer versions of LabVIEW have this information in the help files.
-
Memory problem with loading a csv file and displaying 2 xy graphs
Hi there, I'm having some memory issues with this little program.
What I'm trying to do is read a .csv file of 215 MB (roughly 6 million lines), extract the x-y values as 1D arrays, and display them in 2 XY graphs (VI attached).
I've noticed that this process eats 1.6 to 2 GB of RAM, and the 2 XY graphs, as soon as they are loaded (in 2 minutes more or less), are really, really slow to move with the scrollbar.
My question is: is there a way to use fewer memory resources and make the graphs scroll more smoothly?
Thanks in advance,
Ierman Gert
Attachments:
read from file test.vi (106 KB)
Hi Ierman,
how many datapoints do you need to handle? How many do you display on the graphs?
Some notes:
- Each graph has its own data buffer. So all data wired to the graph will be buffered again in memory. When wiring a (big) 1d array to the graph a copy will be made in memory. And you mentioned 2 graphs...
- load the array in parts: read a number of lines, parse them to arrays as before (maybe using "spreadsheet string to array"?), finally append the parts to build the big array (may lead to memory problems too).
- avoid datacopies when handling big arrays. You can show buffer creation using menu->tools->advanced->show buffer allocation
- use SGL instead of DBL when possible...
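The "load the array in parts" idea above can be sketched outside LabVIEW as well; here is a minimal Java illustration of the same chunked-reading pattern (the comma delimiter, two-column layout, and chunk size are all assumptions for the sketch):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

public class ChunkedCsvReader {
    // Parse the file a block of lines at a time instead of holding
    // the raw text and the parsed arrays in memory simultaneously.
    public static List<double[]> readXY(Path csv, int chunkLines) throws IOException {
        List<double[]> points = new ArrayList<>();
        try (BufferedReader r = Files.newBufferedReader(csv)) {
            List<String> chunk = new ArrayList<>(chunkLines);
            String line;
            while ((line = r.readLine()) != null) {
                chunk.add(line);
                if (chunk.size() == chunkLines) {
                    parseInto(chunk, points);
                    chunk.clear();          // release the raw text early
                }
            }
            parseInto(chunk, points);       // trailing partial chunk
        }
        return points;
    }

    private static void parseInto(List<String> lines, List<double[]> out) {
        for (String l : lines) {
            String[] f = l.split(",");
            out.add(new double[] { Double.parseDouble(f[0]), Double.parseDouble(f[1]) });
        }
    }
}
```

Only one chunk of raw text is alive at any moment, so peak memory is driven by the parsed data rather than by a second full copy of the file.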
Message Edited by GerdW on 05-12-2009 10:02 PM
Best regards,
GerdW
CLAD, using 2009SP1 + LV2011SP1 + LV2014SP1 on WinXP+Win7+cRIO
Kudos are welcome -
Critical performance problem upon bulk load of groups
All (including product development),
I think there are missing indexes in wwsec_flat$ and wwsec_sys_priv$. Anyway, I'd like assistance on fixing the critical performance problems I see, properly. Read on...
During and after a bulk load of a few hundred (about 500) users and groups from an external database, it becomes evident that there's a performance problem somewhere. Many of the calls to wwsec_api.addGroupToList took several minutes to finish. Afterwards, the machine went to 100% CPU just from logging in as the portal30 user (which happens to be the group owner for all the groups).
Running SQL trace points in the direction of the following SQL statement:
SELECT ID,PARENT_ID,NAME,TITLE_ID,TITLEIMAGE_ID,ROLLOVERIMAGE_ID,
DESCRIPTION_ID,LAYOUT_ID,STYLE_ID,PAGE_TYPE,CREATED_BY,CREATED_ON,
LAST_MODIFIED_BY,LAST_MODIFIED_ON,PUBLISHED_ON,HAS_BANNER,HAS_FOOTER,
EXPOSURE,SHOW_CHILDREN,IS_PUBLIC,INHERIT_PRIV,IS_READY,EXECUTE_MODE,
CACHE_MODE,CACHE_EXPIRES,TEMPLATE FROM
WWPOB_PAGE$ WHERE ID = :b1
I checked the existing indexes, and see that the following ones are missing (I'm about to test with these, but have not yet done so):
CREATE UNIQUE INDEX "PORTAL30"."WWSEC_FLAT_IX_GROUP_ID"
ON "PORTAL30"."WWSEC_FLAT$"("GROUP_ID")
TABLESPACE "PORTAL" PCTFREE 10 INITRANS 2 MAXTRANS 255
STORAGE ( INITIAL 32K NEXT 160K MINEXTENTS 1 MAXEXTENTS 4096
PCTINCREASE 1 FREELISTS 1)
LOGGING
CREATE UNIQUE INDEX "PORTAL30"."WWSEC_FLAT_IX_PERSON_ID"
ON "PORTAL30"."WWSEC_FLAT$"("PERSON_ID")
TABLESPACE "PORTAL" PCTFREE 10 INITRANS 2 MAXTRANS 255
STORAGE ( INITIAL 32K NEXT 160K MINEXTENTS 1 MAXEXTENTS 4096
PCTINCREASE 1 FREELISTS 1)
LOGGING
CREATE UNIQUE INDEX "PORTAL30"."WWSEC_SYS_PRIV_IX_PATCH1"
ON "PORTAL30"."WWSEC_SYS_PRIV$"("OWNER", "GRANTEE_GROUP_ID",
"GRANTEE_TYPE", "NAME", "OBJECT_TYPE_NAME")
TABLESPACE "PORTAL" PCTFREE 10 INITRANS 2 MAXTRANS 255
STORAGE ( INITIAL 32K NEXT 80K MINEXTENTS 1 MAXEXTENTS 4096
PCTINCREASE 1 FREELISTS 1)
LOGGING
Note that when I deleted the newly inserted groups, the CPU consumption immediately went down from 100% to some 2-3%.
This behaviour has been observed on a Sun Solaris system, but I think it's the same on NT (I have observed it during the bulk load on my NT laptop, but so far have not had the time to test further).
Also note: In the call to addGroupToList, I set owner to true for all groups.
Also note: During loading of the groups, I logged a few errors, all of the same type ("PORTAL30.WWSEC_API", line 2075), as follows:
Error: Problem calling addGroupToList for child group 'Marketing' (8030), list 'NO_OSL_Usenet' (8017). Reason: java.sql.SQLException: ORA-06510: PL/SQL: unhandled user-defined exception ORA-06512: at "PORTAL30.WWSEC_API", line 2075
Please help. If you like, I can supply the tables and the Java program that I use. It's fully reproducible.
Thanks,
Erik Hagen (you may call me on +47 90631013)
YES!
I have now tested with the missing indexes inserted. The call to addGroupToList seems to take just as long as before, but the result is much better: WITH THE INDEXES DEFINED, THERE IS NO LONGER A PERFORMANCE PROBLEM!! The index definitions that I used are listed below (I added these to the ones that are there in Portal 3.0.8, but I guess some of those could have been deleted).
About the info at http://technet.oracle.com:89/ubb/Forum70/HTML/000894.html: Yes! Thanks! Very interesting, and I guess you found the cause of the error messages and maybe also of the performance problem during bulk load (I'll look into it as soon as possible and report what I find).
Note: I have made a pretty foolproof and automated installation script (or actually, it's part of my Java program), that will let anybody interested recreate the problem. Mail your interest to [email protected].
============================================
CREATE INDEX "PORTAL30"."LDAP_WWSEC_PERS_IX1"
ON "PORTAL30"."WWSEC_PERSON$"("MANAGER")
TABLESPACE "PORTAL" PCTFREE 10 INITRANS 2 MAXTRANS 255
STORAGE ( INITIAL 32K NEXT 32K MINEXTENTS 1 MAXEXTENTS 4096
PCTINCREASE 1 FREELISTS 1)
LOGGING;
CREATE INDEX PORTAL30.LDAP_WWSEC_PERS_IX2
ON PORTAL30.WWSEC_PERSON$("ORGANIZATION")
TABLESPACE PORTAL PCTFREE 10 INITRANS 2 MAXTRANS 255
STORAGE ( INITIAL 32K NEXT 32K MINEXTENTS 1 MAXEXTENTS 4096
PCTINCREASE 1 FREELISTS 1)
LOGGING;
CREATE INDEX PORTAL30.LDAP_WWSEC_PERS_PK
ON PORTAL30.WWSEC_PERSON$("ID")
TABLESPACE PORTAL PCTFREE 10 INITRANS 2 MAXTRANS 255
STORAGE ( INITIAL 32K NEXT 32K MINEXTENTS 1 MAXEXTENTS 4096
PCTINCREASE 1 FREELISTS 1)
LOGGING;
CREATE INDEX PORTAL30.LDAP_WWSEC_PERS_UK
ON PORTAL30.WWSEC_PERSON$("USER_NAME")
TABLESPACE PORTAL PCTFREE 10 INITRANS 2 MAXTRANS 255
STORAGE ( INITIAL 32K NEXT 32K MINEXTENTS 1 MAXEXTENTS 4096
PCTINCREASE 1 FREELISTS 1)
LOGGING;
CREATE INDEX PORTAL30.LDAP_WWWSEC_FLAT_UK
ON PORTAL30.WWSEC_FLAT$("GROUP_ID", "PERSON_ID",
"SPONSORING_MEMBER_ID")
TABLESPACE PORTAL PCTFREE 10 INITRANS 2 MAXTRANS 255
STORAGE ( INITIAL 32K NEXT 256K MINEXTENTS 1 MAXEXTENTS 4096
PCTINCREASE 0 FREELISTS 1);
CREATE INDEX PORTAL30.LDAP_WWWSEC_FLAT_PK
ON PORTAL30.WWSEC_FLAT$("ID")
TABLESPACE PORTAL PCTFREE 10 INITRANS 2 MAXTRANS 255
STORAGE ( INITIAL 32K NEXT 256K MINEXTENTS 1 MAXEXTENTS 4096
PCTINCREASE 0 FREELISTS 1);
CREATE INDEX PORTAL30.LDAP_WWWSEC_FLAT_IX5
ON PORTAL30.WWSEC_FLAT$("GROUP_ID", "PERSON_ID")
TABLESPACE PORTAL PCTFREE 10 INITRANS 2 MAXTRANS 255
STORAGE ( INITIAL 32K NEXT 256K MINEXTENTS 1 MAXEXTENTS 4096
PCTINCREASE 0 FREELISTS 1);
CREATE INDEX PORTAL30.LDAP_WWWSEC_FLAT_IX4
ON PORTAL30.WWSEC_FLAT$("SPONSORING_MEMBER_ID")
TABLESPACE PORTAL PCTFREE 10 INITRANS 2 MAXTRANS 255
STORAGE ( INITIAL 32K NEXT 256K MINEXTENTS 1 MAXEXTENTS 4096
PCTINCREASE 0 FREELISTS 1);
CREATE INDEX PORTAL30.LDAP_WWWSEC_FLAT_IX3
ON PORTAL30.WWSEC_FLAT$("GROUP_ID")
TABLESPACE PORTAL PCTFREE 10 INITRANS 2 MAXTRANS 255
STORAGE ( INITIAL 32K NEXT 256K MINEXTENTS 1 MAXEXTENTS 4096
PCTINCREASE 0 FREELISTS 1);
CREATE INDEX PORTAL30.LDAP_WWWSEC_FLAT_IX2
ON PORTAL30.WWSEC_FLAT$("PERSON_ID")
TABLESPACE PORTAL PCTFREE 10 INITRANS 2 MAXTRANS 255
STORAGE ( INITIAL 32K NEXT 256K MINEXTENTS 1 MAXEXTENTS 4096
PCTINCREASE 0 FREELISTS 1);
CREATE INDEX "PORTAL30"."LDAP_WWSEC_SYSP_IX1"
ON "PORTAL30"."WWSEC_SYS_PRIV$"("GRANTEE_GROUP_ID")
TABLESPACE "PORTAL" PCTFREE 10 INITRANS 2 MAXTRANS 255
STORAGE ( INITIAL 32K NEXT 56K MINEXTENTS 1 MAXEXTENTS 4096
PCTINCREASE 1 FREELISTS 1)
LOGGING;
CREATE INDEX "PORTAL30"."LDAP_WWSEC_SYSP_IX2"
ON "PORTAL30"."WWSEC_SYS_PRIV$"("GRANTEE_USER_ID")
TABLESPACE "PORTAL" PCTFREE 10 INITRANS 2 MAXTRANS 255
STORAGE ( INITIAL 32K NEXT 56K MINEXTENTS 1 MAXEXTENTS 4096
PCTINCREASE 1 FREELISTS 1)
LOGGING;
CREATE INDEX "PORTAL30"."LDAP_WWSEC_SYSP_IX3"
ON "PORTAL30"."WWSEC_SYS_PRIV$"("OBJECT_TYPE_NAME", "NAME")
TABLESPACE "PORTAL" PCTFREE 10 INITRANS 2 MAXTRANS 255
STORAGE ( INITIAL 32K NEXT 56K MINEXTENTS 1 MAXEXTENTS 4096
PCTINCREASE 1 FREELISTS 1)
LOGGING;
CREATE INDEX "PORTAL30"."LDAP_WWSEC_SYSP_PK"
ON "PORTAL30"."WWSEC_SYS_PRIV$"("ID")
TABLESPACE "PORTAL" PCTFREE 10 INITRANS 2 MAXTRANS 255
STORAGE ( INITIAL 32K NEXT 56K MINEXTENTS 1 MAXEXTENTS 4096
PCTINCREASE 1 FREELISTS 1)
LOGGING;
CREATE INDEX "PORTAL30"."LDAP_WWSEC_SYSP_UK"
ON "PORTAL30"."WWSEC_SYS_PRIV$"("OBJECT_TYPE_NAME",
"NAME", "OWNER", "GRANTEE_TYPE", "GRANTEE_GROUP_ID",
"GRANTEE_USER_ID")
TABLESPACE "PORTAL" PCTFREE 10 INITRANS 2 MAXTRANS 255
STORAGE ( INITIAL 32K NEXT 88K MINEXTENTS 1 MAXEXTENTS 4096
PCTINCREASE 1 FREELISTS 1)
LOGGING;
==================================
Thanks,
Erik Hagen
-
ORA-29516: Bulk load of method failed; insufficient shm-object space
Hello,
Just installed 11.2.0.1.0 on CentOS 5.5 64-bit. All dependencies satisfied, installation/linking went without a problem.
The server has 32GB RAM, using AMM with the target set at 29GB; no swapping is occurring.
No matter what i do when loading Java code (loadjava with JARs or "create and compile java source") I keep getting the error:
ORA-29516: Error in module Aurora: Assertion failure at joez.c:3311
Bulk load of method java/lang/Object.<init> failed; insufficient shm-object space
Checked shm-related kernel params, all seems to be normal:
# Controls the maximum size of a message, in bytes
kernel.msgmnb = 65536
# Controls the default maximum size of a message queue
kernel.msgmax = 65536
# Controls the maximum shared segment size, in bytes
kernel.shmmax = 68719476736
# Controls the maximum number of shared memory segments, in pages
kernel.shmall = 4294967296
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
Please help.
Hi there,
I've stumbled into exactly the same issue on 11g. After I started the database and ran loadjava on an externally
compiled class (Hello.class in my case) I got the following error:
Error while testing for existence of dbms_java.handleMd5
ORA-29516: Aurora assertion failure: Assertion failure at joez.c:3311
Bulk load of method java/lang/Object.<init> failed; insufficient shm-object space
ORA-06512: at "SYS.DBMS_JAVA", line 679
Error while creating class Hello
ORA-29516: Aurora assertion failure: Assertion failure at joez.c:3311
Bulk load of method java/lang/Object.<init> failed; insufficient shm-object space
ORA-06512: at line 1
The following operations failed
class Hello: creation (createFailed)
exiting : Failures occurred during processing
After this, I checked the trace file and saw the following error message:
peshmmap_Create_Memory_Map:
Map_Length = 4096
Map_Protection = 7
Flags = 1
File_Offset = 0
mmap failed with error 1
error message:Operation not permitted
ORA-04035: unable to allocate 4096 bytes of shared memory in shared object cache "JOXSHM" of size "134217728"
peshmmap_Create_Memory_Map:
Map_Length = 4096
Map_Protection = 7
Flags = 1
File_Offset = 0
mmap failed with error 1
error message:Operation not permitted
ORA-04035: unable to allocate 4096 bytes of shared memory in shared object cache "JOXSHM" of size "134217728"
Assertion failure at joez.c:3311
Bulk load of method java/lang/Object.<init> failed; insufficient shm-object space
It seems that the "JOXSHM" cache of size "134217728" (which is 128MB) corresponds to the java_pool_size setting in my init.ora file:
memory_target=1000M
memory_max_target=2000M
java_pool_size=128M
shared_pool_size=256M
Whenever I change that size, the change propagates to the trace file. I also noticed that only 592MB of shm memory gets used. My df -h output:
Filesystem Size Used Avail Use% Mounted on
/dev/sda7 39G 34G 4.6G 89% /
udev 10M 288K 9.8M 3% /dev
/dev/sda5 63M 43M 21M 69% /boot
/dev/sda4 59G 45G 11G 81% /mnt/data
shm 2.0G 592M 1.5G 29% /dev/shm
The only way in which I could get loadjava to work was to remove java from the database by calling the rmjvm.sql script.
After this I installed java again by calling the initjvm.sql script. I noticed that after these scripts my shm-memory usage
increased to about 624MB which is 32MB larger than before:
Filesystem Size Used Avail Use% Mounted on
/dev/sda7 39G 34G 4.6G 89% /
udev 10M 288K 9.8M 3% /dev
/dev/sda5 63M 43M 21M 69% /boot
/dev/sda4 59G 45G 11G 81% /mnt/data
shm 2.0G 624M 1.4G 31% /dev/shm
However, after I stopped the database and started it again, Java was broken again and calling loadjava produced
the same error message as before. The shm memory usage would also return to 592MB. Is there something I
need to do to persist the changes that initjvm and rmjvm make to the database? Or is there something else
wrong that I'm overlooking, like the memory management settings?
Regards,
Wiehann -
I built an SSRS 2005 report, which calls a stored proc on SQL Server 2005. The proc contains the following code:
CREATE TABLE #promo (promo VARCHAR(1000))
BULK
INSERT #promo
FROM '\\aseposretail\c$\nz\promo_names.txt'
WITH
--FIELDTERMINATOR = '',
ROWTERMINATOR = '\n'
SELECT * from #promo
It's ok when I manually execute the proc in SSMS.
When I try to run the report from BIDS I got following error:
*Cannot bulk load because the file "\aseposretail\c$\nz\promo_names.txt" could not be opened. Operating system error code 5(Access is denied.).*
Note: I have googled a bit and seen many questions on this, but they are not relevant because I CAN run the code without a problem in SSMS. It's SSRS that is having the issue, and I know little about SSRS security.
I'm having the same type of issue. I can bulk load the same file into the same table on the same server using the same login on one workstation, but not on another. I get this error:
Msg 4861, Level 16, State 1, Line 1
Cannot bulk load because the file "\\xxx\abc.txt" could not be opened. Operating system error code 5(Access is denied.).
I've checked SQL client versions and they are the same, I've also set the client connection to TCP/IP only in the SQL Server Configuration Manager. Still this one workstation is getting the error. Since the same login is being used on both workstations and it works on one but not the other, the issue is not a permissions issue. I can also have another user login into the bad workstation and have the bulk load fail, but when they log into their regular workstation it works fine. Any ideas on what the client configuration issue is? These are the version numbers for Management Studio:
Microsoft SQL Server Management Studio 9.00.3042.00
Microsoft Analysis Services Client Tools 2005.090.3042.00
Microsoft Data Access Components (MDAC) 2000.085.1132.00 (xpsp.080413-0852)
Microsoft MSXML 2.6 3.0 5.0 6.0
Microsoft Internet Explorer 6.0.2900.5512
Microsoft .NET Framework 2.0.50727.1433
Operating System 5.1.2600
Thanks,
MWise -
Memory problem if OLE-object references to WMF files
Hi there,
I have a report with an OLE object containing WMFs.
The graphic files are variable and their name is loaded from the database during runtime (path + filename).
Running the report leads to 185 pages, each one containing a different WMF.
If I preview the report in CR, everything looks fine.
If I print the report, the OLE object / graphic is left empty....
If I export the report to PDF (as an example) I get the error message 'memory full'. If I reduce the data set to ~50 records, the PDF is created, but the pictures get resized (much bigger) and only parts are visible.
The machine I'm using doesn't have any memory problems.
The WMF files are only 3 to 12 KB each.
If I convert the WMFs to JPG and use those within the report, it works...
The problem with this is a loss of quality (it is necessary to stretch the pictures to a certain size).
Thanks in advance for any ideas!
Susanne
I'm using CR 2008 SP 3 on Windows 2003 Server.
Format the pictures outside of CR for best results.
-
Notifications are not being sent when Bulk Load is done
Hi All,
I have an OIM 11g setup on my machine. I use the bulk load utility for loading the user data. In my OIM setup, notifications are being sent for events like password reset, new account creation, and others. However, when I bulk load the users, notifications are not sent to their mail IDs. I am running the scheduled job "Bulk load Post Process", which is necessary so that the users are synced to the LDAP repository. I have the LDAP Sync option checked and the Notifications option set to Yes in this scheduled job. Though the users are loaded successfully and are synced properly, the notifications are not sent. Can someone please guide me as to what the problem could be here?
Thanks,
$id
The code is probably only called in the Event method of the event handler that sends the notification. You can check the MDS files, find the notification you are looking for, and then use a code decompiler to find the class that is called. You can then use this code as a sample, or write your own notification code and create an event handler that runs in the BulkEvent.
And on another note there is also this System Configuration Variable: Recon.SEND_NOTIFICATION which is set to FALSE by default.
-Kevin -
Using API to run Catalog Bulk Load - Items & Price Lists concurrent prog
Hi everyone. I want to be able to run the concurrent program "Catalog Bulk Load - Items & Price Lists" for iProcurement. I have been able to run concurrent programs in the past using the fnd_request.submit_request API, but I seem to be having problems with the item loading concurrent program. For one thing, the program is stuck in phase code P (Pending) status.
When I run the same concurrent program using the iProcurement Administration page it runs ok.
Has anyone been able to run this program through the backend? If so, any help is appreciated.
Thanks
Hello S.P,
Basically this is what I am trying to achieve.
1. Create a staging table. The columns available for it are category_name, item_number, item_description, supplier, supplier_site, price, uom and currency.
So basically the user can load item details into the database from an excel sheet.
2. Use the UTL_FILE API to create an XML file called item_load.xml from the data in the staging table. This creates the XML file used to load items in iProcurement and saves it in the database directory /var/tmp/iprocurement. This part works great.
3. Use the API fnd_request.submit_request to submit the concurrent program 'Catalog Bulk Load - Items & Price Lists'. This is where I am stuck. The process simply stays pending or comes up with an error saying:
oracle.apps.fnd.cp.request.FileAccessException: File /var/tmp/iprocurement is not accessable from node/machine moon1.oando-plc.com.
I'm wondering if anyone has used my approach to load items before and if so, have they been successful?
Thank you -
Error when Bulk load hierarchy data
Hi,
While loading P6 Reporting databases, the following error message appears at the step in charge of bulk loading hierarchy data into ODS.
<04.29.2011 14:03:59> load [INFO] (Message) - === Bulk load hierarchy data into ODS (ETL_LOADWBSHierarchy.ldr)
<04.29.2011 14:04:26> load [INFO] (Message) - Load completed - logical record count 384102.
<04.29.2011 14:04:26> load [ERROR] (Message) - SqlLoaderSQL LOADER ACTION FAILED. [control=D:\oracle\app\product\11.1.0\db_1\p6rdb\scripts\DATA_WBSHierarchy.csv.ldr] [file=D:\oracle\app\product\11.1.0\db_1\p6rdb\temp\WBSHierarchy\DATA_WBSHierarchy.csv]
<04.29.2011 14:04:26> load [INFO] (Progress) - Step 3/9 Part 5/6 - FAILED (-1) (0 hours, 0 minutes, 28 seconds, 16 milliseconds)
Checking the corresponding error log file (see below), I see that some records are indeed rejected. The question is: how can I identify the source of the problem and fix it?
SQL*Loader: Release 11.1.0.6.0 - Production on Mon May 2 09:03:22 2011
Copyright (c) 1982, 2007, Oracle. All rights reserved.
Control File: DATA_WBSHierarchy.csv.ldr
Character Set UTF16 specified for all input.
Using character length semantics.
Byteorder little endian specified.
Data File: D:\oracle\app\product\11.1.0\db_1\p6rdb\temp\WBSHierarchy\DATA_WBSHierarchy.csv
Bad File: DATA_WBSHierarchy.bad
Discard File: none specified
+(Allow all discards)+
Number to load: ALL
Number to skip: 0
Errors allowed: 50
Bind array: 64 rows, maximum of 256000 bytes
Continuation: none specified
Path used: Conventional
Table WBSHIERARCHY, loaded from every logical record.
Insert option in effect for this table: APPEND
TRAILING NULLCOLS option in effect
Column Name Position Len Term Encl Datatype
PARENTOBJECTID FIRST * WHT CHARACTER
PARENTPROJECTID NEXT * WHT CHARACTER
PARENTSEQUENCENUMBER NEXT * WHT CHARACTER
PARENTNAME NEXT * WHT CHARACTER
PARENTID NEXT * WHT CHARACTER
CHILDOBJECTID NEXT * WHT CHARACTER
CHILDPROJECTID NEXT * WHT CHARACTER
CHILDSEQUENCENUMBER NEXT * WHT CHARACTER
CHILDNAME NEXT * WHT CHARACTER
CHILDID NEXT * WHT CHARACTER
PARENTLEVELSBELOWROOT NEXT * WHT CHARACTER
CHILDLEVELSBELOWROOT NEXT * WHT CHARACTER
LEVELSBETWEEN NEXT * WHT CHARACTER
CHILDHASCHILDREN NEXT * WHT CHARACTER
FULLPATHNAME NEXT 8000 WHT CHARACTER
SKEY SEQUENCE (MAX, 1)
value used for ROWS parameter changed from 64 to 21
Record 14359: Rejected - Error on table WBSHIERARCHY, column PARENTLEVELSBELOWROOT.
ORA-01400: cannot insert NULL into ("ODSUSER"."WBSHIERARCHY"."PARENTLEVELSBELOWROOT")
Record 14360: Rejected - Error on table WBSHIERARCHY, column PARENTOBJECTID.
ORA-01722: invalid number
Record 14361: Rejected - Error on table WBSHIERARCHY, column PARENTOBJECTID.
ORA-01722: invalid number
Record 27457: Rejected - Error on table WBSHIERARCHY, column PARENTLEVELSBELOWROOT.
ORA-01400: cannot insert NULL into ("ODSUSER"."WBSHIERARCHY"."PARENTLEVELSBELOWROOT")
Record 27458: Rejected - Error on table WBSHIERARCHY, column PARENTOBJECTID.
ORA-01722: invalid number
Record 27459: Rejected - Error on table WBSHIERARCHY, column PARENTOBJECTID.
ORA-01722: invalid number
Record 38775: Rejected - Error on table WBSHIERARCHY, column PARENTLEVELSBELOWROOT.
ORA-01400: cannot insert NULL into ("ODSUSER"."WBSHIERARCHY"."PARENTLEVELSBELOWROOT")
Record 38776: Rejected - Error on table WBSHIERARCHY, column PARENTOBJECTID.
ORA-01722: invalid number
Record 38777: Rejected - Error on table WBSHIERARCHY, column PARENTOBJECTID.
ORA-01722: invalid number
Record 52411: Rejected - Error on table WBSHIERARCHY, column PARENTLEVELSBELOWROOT.
ORA-01400: cannot insert NULL into ("ODSUSER"."WBSHIERARCHY"."PARENTLEVELSBELOWROOT")
Record 52412: Rejected - Error on table WBSHIERARCHY, column PARENTOBJECTID.
ORA-01722: invalid number
Record 52413: Rejected - Error on table WBSHIERARCHY, column PARENTOBJECTID.
ORA-01722: invalid number
Record 114619: Rejected - Error on table WBSHIERARCHY, column PARENTLEVELSBELOWROOT.
ORA-01400: cannot insert NULL into ("ODSUSER"."WBSHIERARCHY"."PARENTLEVELSBELOWROOT")
Record 114620: Rejected - Error on table WBSHIERARCHY, column PARENTOBJECTID.
ORA-01722: invalid number
Record 127921: Rejected - Error on table WBSHIERARCHY, column PARENTLEVELSBELOWROOT.
ORA-01400: cannot insert NULL into ("ODSUSER"."WBSHIERARCHY"."PARENTLEVELSBELOWROOT")
Record 127922: Rejected - Error on table WBSHIERARCHY, column PARENTOBJECTID.
ORA-01722: invalid number
Record 164588: Rejected - Error on table WBSHIERARCHY, column PARENTLEVELSBELOWROOT.
ORA-01400: cannot insert NULL into ("ODSUSER"."WBSHIERARCHY"."PARENTLEVELSBELOWROOT")
Record 164589: Rejected - Error on table WBSHIERARCHY, column PARENTOBJECTID.
ORA-01722: invalid number
Record 171322: Rejected - Error on table WBSHIERARCHY, column PARENTLEVELSBELOWROOT.
ORA-01400: cannot insert NULL into ("ODSUSER"."WBSHIERARCHY"."PARENTLEVELSBELOWROOT")
Record 171323: Rejected - Error on table WBSHIERARCHY, column PARENTOBJECTID.
ORA-01722: invalid number
Record 186779: Rejected - Error on table WBSHIERARCHY, column PARENTLEVELSBELOWROOT.
ORA-01400: cannot insert NULL into ("ODSUSER"."WBSHIERARCHY"."PARENTLEVELSBELOWROOT")
Record 186780: Rejected - Error on table WBSHIERARCHY, column PARENTOBJECTID.
ORA-01722: invalid number
Record 208687: Rejected - Error on table WBSHIERARCHY, column PARENTLEVELSBELOWROOT.
ORA-01400: cannot insert NULL into ("ODSUSER"."WBSHIERARCHY"."PARENTLEVELSBELOWROOT")
Record 208688: Rejected - Error on table WBSHIERARCHY, column PARENTOBJECTID.
ORA-01722: invalid number
Record 221167: Rejected - Error on table WBSHIERARCHY, column PARENTLEVELSBELOWROOT.
ORA-01400: cannot insert NULL into ("ODSUSER"."WBSHIERARCHY"."PARENTLEVELSBELOWROOT")
Record 221168: Rejected - Error on table WBSHIERARCHY, column PARENTOBJECTID.
ORA-01722: invalid number
Record 246951: Rejected - Error on table WBSHIERARCHY, column PARENTLEVELSBELOWROOT.
ORA-01400: cannot insert NULL into ("ODSUSER"."WBSHIERARCHY"."PARENTLEVELSBELOWROOT")
Record 246952: Rejected - Error on table WBSHIERARCHY, column PARENTOBJECTID.
ORA-01722: invalid number
Table WBSHIERARCHY:
+384074 Rows successfully loaded.+
+28 Rows not loaded due to data errors.+
+0 Rows not loaded because all WHEN clauses were failed.+
+0 Rows not loaded because all fields were null.+
Space allocated for bind array: 244377 bytes(21 rows)
Read buffer bytes: 1048576
Total logical records skipped: 0
Total logical records read: 384102
Total logical records rejected: 28
Total logical records discarded: 0
Run began on Mon May 02 09:03:22 2011
Run ended on Mon May 02 09:04:07 2011
Elapsed time was: 00:00:44.99
Hi Mandeep,
Thanks for the information.
But it still does not seem to work.
Actually, I have Group ID and Group Name as display fields in the Hierarchy table.
Group ID I have mapped directly to Group ID.
I have created a Split Hierarchy of Group Name and mapped it.
I have also made all the configuration options as per your suggestions, but it still does not work.
Can you please help?
Thanks,
Priya. -
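Returning to the SQL*Loader rejections in the log above: one way to track down the bad rows before rerunning the load is to scan the source file for the two conditions the log reports. This is a sketch only; the tab delimiter, UTF-16LE encoding, and field positions are assumptions inferred from the control-file listing (PARENTOBJECTID first, PARENTLEVELSBELOWROOT eleventh), so adjust them to the real export format:

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class BadRecordScanner {
    // Flags a record for the same two problems SQL*Loader reported:
    // a missing PARENTLEVELSBELOWROOT (-> ORA-01400) or a
    // non-numeric PARENTOBJECTID (-> ORA-01722).
    static String diagnose(String[] fields) {
        if (fields.length < 11 || fields[10].isEmpty()) {
            return "PARENTLEVELSBELOWROOT is null";
        }
        try {
            Long.parseLong(fields[0].trim());
        } catch (NumberFormatException e) {
            return "PARENTOBJECTID not a number: " + fields[0];
        }
        return null; // record looks loadable
    }

    public static void main(String[] args) throws IOException {
        // Assumed: tab-delimited UTF-16LE file matching the .ldr layout.
        List<String> lines = Files.readAllLines(Path.of(args[0]), StandardCharsets.UTF_16LE);
        for (int i = 0; i < lines.size(); i++) {
            String problem = diagnose(lines.get(i).split("\t", -1));
            if (problem != null) {
                System.out.println("record " + (i + 1) + ": " + problem);
            }
        }
    }
}
```

Running this against DATA_WBSHierarchy.csv should point at the same record numbers the bad file contains, which makes it easier to see what the upstream extract did wrong.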
Bulk load in OIM 11g enabled with LDAP sync
Has anyone performed a bulk load of more than 100,000 users using the bulk load utility in OIM 11g?
The challenge here is we have OIM 11.1.1.5.0 environment enabled with LDAP sync.
We are trying to figure out some performance factors and the best way to achieve our requirement:
1. Have you done any timings around use of the Bulk Load tool? Any idea how long it will take to LDAP-sync more than 100,000 users into OID? What problems could we encounter during this flow?
2. Is it possible to migrate users into another environment and then swap that database in as the OIM database? Also, is there an effective way to load into OID directly?
3. We also have a custom Scheduled Task to modify a couple of user attributes (using the update API) from a flat file. Have you tried such a scenario after the bulk load, and did you face any problems while doing so?
Thanks
DK
To update a UDF you must assign a copy-value adapter in Lookup.USR_PROCESS_TRIGGERS (Design Console / Lookup Definition),
e.g.
CODE --------------------------DECODE
USR_UDF_MYATTR1----- Change MYATTR1
USR_UDF_MYATTR2----- Change MYATTR2
Edited by: Lighting Cui on 2011-8-3 12:25 AM -
Please HELP! issue with BULK LOAD in FDM 11.1.2.1
Please assist with a solution to the following error!
See log below
** Begin FDM Runtime Error Log Entry [2011-10-07 13:43:39] **
ERROR:
Code............................................. -2147217900
Description...................................... You do not have permission to use the bulk load statement.
BULK INSERT POLFDM..tWkalnic158050364335 FROM N'\\pochfm04\apps\POLFDM\Inbox\tWkalnic158050364335.tmp' WITH (FORMATFILE = N'\\pochfm04\apps\POLFDM\Inbox\tWkalnic158050364335.fmt',DATAFILETYPE = N'widechar',ROWS_PER_BATCH=221593,TABLOCK)
Procedure........................................ clsDataManipulation.fExecuteDML
Component........................................ upsWDataWindowDM
Version.......................................... 1112
Thread........................................... 5036
IDENTIFICATION:
User............................................. kalnickim
Computer Name.................................... POCHFM04
App Name......................................... POLFDM
Client App....................................... WebClient
CONNECTION:
Provider......................................... SQLOLEDB
Data Server...................................... pochfmsql01\hfm
Database Name.................................... POLFDM
Trusted Connect.................................. False
Connect Status.. Connection Open
GLOBALS:
Location......................................... BW
Location ID...................................... 751
Location Seg..................................... 5
Category......................................... ActSeg
Category ID...................................... 38
Period........................................... Sep - 2011
Period ID........................................ 9/30/2011
POV Local........................................ True
Language......................................... 1033
User Level....................................... 1
All Partitions................................... True
Is Auditor....................................... False
** Begin FDM Runtime Error Log Entry [2011-10-07 13:43:40] **
ERROR:
Code............................................. -2147217900
Description...................................... Data access error.
Procedure........................................ clsImpDataPump.fImportTextFile
Component........................................ upsWObjectsDM
Version.......................................... 1112
Thread........................................... 5036
IDENTIFICATION:
User............................................. kalnickim
Computer Name.................................... POCHFM04
App Name......................................... POLFDM
Client App....................................... WebClient
CONNECTION:
Provider......................................... SQLOLEDB
Data Server...................................... pochfmsql01\hfm
Database Name.................................... POLFDM
Trusted Connect.................................. False
Connect Status.. Connection Open
GLOBALS:
Location......................................... BW
Location ID...................................... 751
Location Seg..................................... 5
Category......................................... ActSeg
Category ID...................................... 38
Period........................................... Sep - 2011
Period ID........................................ 9/30/2011
POV Local........................................ True
Language......................................... 1033
User Level....................................... 1
All Partitions................................... True
Is Auditor....................................... False
** Begin FDM Runtime Error Log Entry [2011-10-07 13:43:40] **
ERROR:
Code............................................. -2147217900
Description...................................... Data access error.
Procedure........................................ clsImpProcessMgr.fLoadAndProcessFile
Component........................................ upsWObjectsDM
Version.......................................... 1112
Thread........................................... 5036
IDENTIFICATION:
User............................................. kalnickim
Computer Name.................................... POCHFM04
App Name......................................... POLFDM
Client App....................................... WebClient
CONNECTION:
Provider......................................... SQLOLEDB
Data Server...................................... pochfmsql01\hfm
Database Name.................................... POLFDM
Trusted Connect.................................. False
Connect Status.. Connection Open
GLOBALS:
Location......................................... BW
Location ID...................................... 751
Location Seg..................................... 5
Category......................................... ActSeg
Category ID...................................... 38
Period........................................... Sep - 2011
Period ID........................................ 9/30/2011
POV Local........................................ True
Language......................................... 1033
User Level....................................... 1
All Partitions................................... True
Is Auditor....................................... False
** Begin FDM Runtime Error Log Entry [2011-10-07 13:55:38] **
ERROR:
Code............................................. -2147217900
Description...................................... You do not have permission to use the bulk load statement.
BULK INSERT POLFDM..tWkalnic46564644597 FROM N'\\pochfm04\apps\POLFDM\Inbox\tWkalnic46564644597.tmp' WITH (FORMATFILE = N'\\pochfm04\apps\POLFDM\Inbox\tWkalnic46564644597.fmt',DATAFILETYPE = N'widechar',ROWS_PER_BATCH=221593,TABLOCK)
Procedure........................................ clsDataManipulation.fExecuteDML
Component........................................ upsWDataWindowDM
Version.......................................... 1112
Thread........................................... 4644
IDENTIFICATION:
User............................................. kalnickim
Computer Name.................................... POCHFM04
App Name......................................... POLFDM
Client App....................................... WebClient
CONNECTION:
Provider......................................... SQLOLEDB
Data Server...................................... pochfmsql01\hfm
Database Name.................................... POLFDM
Trusted Connect.................................. False
Connect Status.. Connection Open
GLOBALS:
Location......................................... BW
Location ID...................................... 751
Location Seg..................................... 5
Category......................................... ActSeg
Category ID...................................... 38
Period........................................... Sep - 2011
Period ID........................................ 9/30/2011
POV Local........................................ True
Language......................................... 1033
User Level....................................... 1
All Partitions................................... True
Is Auditor....................................... False
** Begin FDM Runtime Error Log Entry [2011-10-07 13:55:38] **
ERROR:
Code............................................. -2147217900
Description...................................... Data access error.
Procedure........................................ clsImpDataPump.fImportTextFile
Component........................................ upsWObjectsDM
Version.......................................... 1112
Thread........................................... 4644
IDENTIFICATION:
User............................................. kalnickim
Computer Name.................................... POCHFM04
App Name......................................... POLFDM
Client App....................................... WebClient
CONNECTION:
Provider......................................... SQLOLEDB
Data Server...................................... pochfmsql01\hfm
Database Name.................................... POLFDM
Trusted Connect.................................. False
Connect Status.. Connection Open
GLOBALS:
Location......................................... BW
Location ID...................................... 751
Location Seg..................................... 5
Category......................................... ActSeg
Category ID...................................... 38
Period........................................... Sep - 2011
Period ID........................................ 9/30/2011
POV Local........................................ True
Language......................................... 1033
User Level....................................... 1
All Partitions................................... True
Is Auditor....................................... False
** Begin FDM Runtime Error Log Entry [2011-10-07 13:55:39] **
ERROR:
Code............................................. -2147217900
Description...................................... Data access error.
Procedure........................................ clsImpProcessMgr.fLoadAndProcessFile
Component........................................ upsWObjectsDM
Version.......................................... 1112
Thread........................................... 4644
IDENTIFICATION:
User............................................. kalnickim
Computer Name.................................... POCHFM04
App Name......................................... POLFDM
Client App....................................... WebClient
CONNECTION:
Provider......................................... SQLOLEDB
Data Server...................................... pochfmsql01\hfm
Database Name.................................... POLFDM
Trusted Connect.................................. False
Connect Status.. Connection Open
GLOBALS:
Location......................................... BW
Location ID...................................... 751
Location Seg..................................... 5
Category......................................... ActSeg
Category ID...................................... 38
Period........................................... Sep - 2011
Period ID........................................ 9/30/2011
POV Local........................................ True
Language......................................... 1033
User Level....................................... 1
All Partitions................................... True
Is Auditor....................................... False

Have you read the installation documentation? It appears you did not do a basic level of troubleshooting first: a simple Google search of the error message turns up both the root cause and the solution.
The forums are intended to be used when you have exhausted other options. Please be mindful of this, and of contributors' time, when posting further questions.
Here is a Google search result for the error "You do not have permission to use the bulk load statement.":
http://www.google.com/#sclient=psy-ab&hl=en&safe=off&site=&source=hp&q=+You+do+not+have+permission+to+use+the+bulk+load+statement.&pbx=1&oq=+You+do+not+have+permission+to+use+the+bulk+load+statement.&aq=f&aqi=g4&aql=&gs_sm=e&gs_upl=1556l1556l0l2633l1l1l0l0l0l0l184l184l0.1l1l0&bav=on.2,or.r_gc.r_pw.r_cp.,cf.osb&fp=ebaa3ff8b466872e&biw=1920&bih=955 -
Error in Add/Replace Bulk Load component - illegal character in XML
Has anyone ever seen the bulk load component complain about some illegal character in xml? I see this error and not sure what exactly the problem is:
ERROR [SocketReader] - Received error message from server: Character is not legal in XML 1.0
It's a very simple graph - reading data from clover data file and ingesting it straight into Endeca using the out of the box bulk load component.
Thanks for your help!
Edited by: 935345 on May 18, 2012 11:48 AM

Assuming you are on EID 2.3, this transformation will apply the fix to all your string fields and print to your console the fields that had non-compliant XML 1.0 characters.
//#CTL2
string[] fields;

// Transforms input record into output record, stripping characters
// that are not legal in XML 1.0 from every string field.
function integer transform() {
	$out.0.* = $in.0.*;
	for (integer i = $in.0.length() - 1; i >= 0; i--) {
		if (getFieldType($in.0.*, i) == "string" && getFieldType($out.0.*, i) == "string") {
			if (!isNull($in.0.*, i)) {
				string originalValue = getStringValue($in.0.*, i);
				string newValue = originalValue.replace("([^\\u0009\\u000a\\u000d\\u0020-\\uD7FF\\uE000-\\uFFFD]|[\\u0092\\u007F]+)","");
				if (originalValue != newValue) {
					fields[i] = getFieldName($in.0, i);
					setStringValue($out.0.*, i, newValue);
				}
			}
		}
	}
	return OK;
}

// Called during each graph run after the entire transform was executed.
// Reports which fields contained non-compliant XML 1.0 characters.
function void postExecute() {
	printErr("Fields with non-compliant XML 1.0 characters:");
	for (integer i = 0; i < fields.length(); i++) {
		if (fields[i] != null) {
			printErr(fields[i]);
		}
	}
}

// Other optional CTL2 callbacks (init, preExecute, transformOnError,
// getMessage) are not needed for this fix.
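For reference, the same character-stripping idea can be checked outside Data Integrator. This is a minimal Python sketch (the record and field names are hypothetical) that removes characters outside the XML 1.0 legal ranges, mirroring the regex in the CTL2 transform:

```python
import re

# Characters legal in XML 1.0: tab, LF, CR, and the listed Unicode ranges.
# Anything outside these ranges is stripped.
ILLEGAL_XML10 = re.compile("[^\u0009\u000a\u000d\u0020-\ud7ff\ue000-\ufffd]")

def strip_illegal(value: str) -> str:
    """Remove characters that are not legal in an XML 1.0 document."""
    return ILLEGAL_XML10.sub("", value)

# Hypothetical record: one field contains a control character (U+0008).
record = {"title": "widget\x08", "sku": "A-100"}
cleaned = {k: strip_illegal(v) for k, v in record.items()}
dirty_fields = [k for k in record if record[k] != cleaned[k]]
print(cleaned)        # control character removed from "title"
print(dirty_fields)   # fields that needed cleaning
```

Running a pass like this over the source data before ingest makes it easy to see which fields are carrying the offending characters.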
-- Alex -
Error when doing a ATGOrder Bulk load
Hi
I am getting the error below when trying to do a bulk load of ATGOrder in CSC.
Machine details: Linux 64-bit
ATG version: 10.1
17:44:07,487 INFO [OrderOutputConfig] Starting bulk load
17:44:11,482 WARN [loggerI18N] [com.arjuna.ats.internal.jta.recovery.xarecovery1] Local XARecoveryModule.xaRecovery got XA exception javax.transaction.xa.XAException, XAException.XAER_RMERR
17:44:11,488 WARN [loggerI18N] [com.arjuna.ats.internal.jta.recovery.xarecovery1] Local XARecoveryModule.xaRecovery got XA exception javax.transaction.xa.XAException, XAException.XAER_RMERR
17:44:11,495 WARN [loggerI18N] [com.arjuna.ats.internal.jta.recovery.xarecovery1] Local XARecoveryModule.xaRecovery got XA exception javax.transaction.xa.XAException, XAException.XAER_RMERR
17:44:17,651 WARN [LiveIndexingService] Current hosts for environment ATGOrderBulk cannot support requested engine count
17:44:17,652 WARN [LiveIndexingService] Allocate more hosts or increase the maximum number of search engines for one of its hosts
17:44:17,656 ERROR [LiveIndexingService] Unable to release lock: __routingLiveIndexingLock:ATGOrder
atg.service.lockmanager.LockManagerException: Attempt to release a write lock when not the owner: key=__routingLiveIndexingLock:ATGOrder Owner=Thread[http-0.0.0.0-8580-1:ipaddr=172.21.21.49;path=/dyn/admin/nucleus/atg/commerce/search/OrderOutputConfig/;sessionid=B0DC1551B81ACFD6B7C987E59116D825,5,jboss]
at atg.service.lockmanager.ClientLockEntry.releaseWriteLock(ClientLockEntry.java:713)
at atg.service.lockmanager.ClientLockManager.releaseWriteLock(ClientLockManager.java:1386)
at atg.service.lockmanager.ClientLockManager.releaseWriteLock(ClientLockManager.java:1415)
at atg.search.routing.LiveIndexingService.releaseLock(LiveIndexingService.java:1843)
at atg.search.routing.LiveIndexingService.prepareIndexing(LiveIndexingService.java:1455)
at atg.repository.search.indexing.submitter.LiveDocumentSubmitter.beginSession(LiveDocumentSubmitter.java:193)
at atg.repository.search.indexing.BulkLoaderImpl.bulkLoad(BulkLoaderImpl.java:921)
at atg.repository.search.indexing.IndexingOutputConfig.bulkLoad(IndexingOutputConfig.java:1610)
at atg.repository.search.indexing.IndexingOutputConfig.bulkLoad(IndexingOutputConfig.java:1563)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at atg.nucleus.ServiceAdminServlet.printMethodInvocation(ServiceAdminServlet.java:1463)
at atg.nucleus.ServiceAdminServlet.service(ServiceAdminServlet.java:251)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
at atg.nucleus.Nucleus.service(Nucleus.java:2967)
at atg.nucleus.Nucleus.service(Nucleus.java:2867)
at atg.servlet.pipeline.DispatcherPipelineServletImpl.service(DispatcherPipelineServletImpl.java:253)
at atg.servlet.pipeline.PipelineableServletImpl.passRequest(PipelineableServletImpl.java:157)
at atg.servlet.pipeline.ServletPathPipelineServlet.service(ServletPathPipelineServlet.java:208)
at atg.servlet.pipeline.PipelineableServletImpl.passRequest(PipelineableServletImpl.java:157)
at atg.security.ExpiredPasswordAdminServlet.service(ExpiredPasswordAdminServlet.java:312)
at atg.servlet.pipeline.PipelineableServletImpl.passRequest(PipelineableServletImpl.java:157)
at atg.servlet.pipeline.BasicAuthenticationPipelineServlet.service(BasicAuthenticationPipelineServlet.java:513)
at atg.servlet.pipeline.PipelineableServletImpl.passRequest(PipelineableServletImpl.java:157)
at atg.servlet.pipeline.DynamoPipelineServlet.service(DynamoPipelineServlet.java:491)
at atg.servlet.pipeline.PipelineableServletImpl.passRequest(PipelineableServletImpl.java:157)
at atg.dtm.TransactionPipelineServlet.service(TransactionPipelineServlet.java:249)
at atg.servlet.pipeline.PipelineableServletImpl.passRequest(PipelineableServletImpl.java:157)
at atg.servlet.pipeline.HeadPipelineServlet.passRequest(HeadPipelineServlet.java:1271)
at atg.servlet.pipeline.HeadPipelineServlet.service(HeadPipelineServlet.java:952)
at atg.servlet.pipeline.PipelineableServletImpl.service(PipelineableServletImpl.java:272)
at atg.nucleus.servlet.NucleusProxyServlet.service(NucleusProxyServlet.java:237)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at org.jboss.web.tomcat.filters.ReplyHeaderFilter.doFilter(ReplyHeaderFilter.java:96)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:235)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
at org.jboss.web.tomcat.security.SecurityAssociationValve.invoke(SecurityAssociationValve.java:183)
at org.jboss.web.tomcat.security.JaccContextValve.invoke(JaccContextValve.java:95)
at org.jboss.web.tomcat.security.SecurityContextEstablishmentValve.process(SecurityContextEstablishmentValve.java:126)
at org.jboss.web.tomcat.security.SecurityContextEstablishmentValve.invoke(SecurityContextEstablishmentValve.java:70)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
at org.jboss.web.tomcat.service.jca.CachedConnectionValve.invoke(CachedConnectionValve.java:158)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:330)
at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:829)
at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:598)
at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:451)
at java.lang.Thread.run(Thread.java:662)
17:44:17,658 ERROR [BulkLoader]
atg.repository.search.indexing.IndexingException: atg.search.routing.LiveIndexException: Unable to prepare engines for live indexing.
at atg.repository.search.indexing.submitter.LiveDocumentSubmitter.beginSession(LiveDocumentSubmitter.java:209)
at atg.repository.search.indexing.BulkLoaderImpl.bulkLoad(BulkLoaderImpl.java:921)
at atg.repository.search.indexing.IndexingOutputConfig.bulkLoad(IndexingOutputConfig.java:1610)
at atg.repository.search.indexing.IndexingOutputConfig.bulkLoad(IndexingOutputConfig.java:1563)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at atg.nucleus.ServiceAdminServlet.printMethodInvocation(ServiceAdminServlet.java:1463)
at atg.nucleus.ServiceAdminServlet.service(ServiceAdminServlet.java:251)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
at atg.nucleus.Nucleus.service(Nucleus.java:2967)
at atg.nucleus.Nucleus.service(Nucleus.java:2867)
at atg.servlet.pipeline.DispatcherPipelineServletImpl.service(DispatcherPipelineServletImpl.java:253)
at atg.servlet.pipeline.PipelineableServletImpl.passRequest(PipelineableServletImpl.java:157)
at atg.servlet.pipeline.ServletPathPipelineServlet.service(ServletPathPipelineServlet.java:208)
at atg.servlet.pipeline.PipelineableServletImpl.passRequest(PipelineableServletImpl.java:157)
at atg.security.ExpiredPasswordAdminServlet.service(ExpiredPasswordAdminServlet.java:312)
at atg.servlet.pipeline.PipelineableServletImpl.passRequest(PipelineableServletImpl.java:157)
at atg.servlet.pipeline.BasicAuthenticationPipelineServlet.service(BasicAuthenticationPipelineServlet.java:513)
at atg.servlet.pipeline.PipelineableServletImpl.passRequest(PipelineableServletImpl.java:157)
at atg.servlet.pipeline.DynamoPipelineServlet.service(DynamoPipelineServlet.java:491)
at atg.servlet.pipeline.PipelineableServletImpl.passRequest(PipelineableServletImpl.java:157)
at atg.dtm.TransactionPipelineServlet.service(TransactionPipelineServlet.java:249)
at atg.servlet.pipeline.PipelineableServletImpl.passRequest(PipelineableServletImpl.java:157)
at atg.servlet.pipeline.HeadPipelineServlet.passRequest(HeadPipelineServlet.java:1271)
at atg.servlet.pipeline.HeadPipelineServlet.service(HeadPipelineServlet.java:952)
at atg.servlet.pipeline.PipelineableServletImpl.service(PipelineableServletImpl.java:272)
at atg.nucleus.servlet.NucleusProxyServlet.service(NucleusProxyServlet.java:237)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at org.jboss.web.tomcat.filters.ReplyHeaderFilter.doFilter(ReplyHeaderFilter.java:96)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:235)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
at org.jboss.web.tomcat.security.SecurityAssociationValve.invoke(SecurityAssociationValve.java:183)
at org.jboss.web.tomcat.security.JaccContextValve.invoke(JaccContextValve.java:95)
at org.jboss.web.tomcat.security.SecurityContextEstablishmentValve.process(SecurityContextEstablishmentValve.java:126)
at org.jboss.web.tomcat.security.SecurityContextEstablishmentValve.invoke(SecurityContextEstablishmentValve.java:70)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
at org.jboss.web.tomcat.service.jca.CachedConnectionValve.invoke(CachedConnectionValve.java:158)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:330)
at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:829)
at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:598)
at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:451)
at java.lang.Thread.run(Thread.java:662)
Caused by: atg.search.routing.LiveIndexException: Unable to prepare engines for live indexing.
at atg.search.routing.LiveIndexingService.prepareBulkIndexing(LiveIndexingService.java:1629)
at atg.search.routing.LiveIndexingService.prepareIndexing(LiveIndexingService.java:1444)
at atg.repository.search.indexing.submitter.LiveDocumentSubmitter.beginSession(LiveDocumentSubmitter.java:193)
... 49 more
Caused by: atg.search.routing.LiveIndexException: Current supported by hosts engine count is less than required count of engines
at atg.search.routing.LiveIndexingService.prepareEnginesForLiveIndexingOperation(LiveIndexingService.java:1161)
at atg.search.routing.LiveIndexingService.prepareEnginesForLiveIndexingOperation(LiveIndexingService.java:1063)
at atg.search.routing.LiveIndexingService.prepareBulkIndexing(LiveIndexingService.java:1625)
... 51 more
17:44:17,675 ERROR [OrderOutputConfig]
atg.repository.search.indexing.IndexingException: atg.repository.search.indexing.IndexingException: atg.search.routing.LiveIndexException: Unable to prepare engines for live indexing.
at atg.repository.search.indexing.BulkLoaderImpl.bulkLoad(BulkLoaderImpl.java:1040)
at atg.repository.search.indexing.IndexingOutputConfig.bulkLoad(IndexingOutputConfig.java:1610)
at atg.repository.search.indexing.IndexingOutputConfig.bulkLoad(IndexingOutputConfig.java:1563)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at atg.nucleus.ServiceAdminServlet.printMethodInvocation(ServiceAdminServlet.java:1463)
at atg.nucleus.ServiceAdminServlet.service(ServiceAdminServlet.java:251)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
at atg.nucleus.Nucleus.service(Nucleus.java:2967)
at atg.nucleus.Nucleus.service(Nucleus.java:2867)
at atg.servlet.pipeline.DispatcherPipelineServletImpl.service(DispatcherPipelineServletImpl.java:253)
at atg.servlet.pipeline.PipelineableServletImpl.passRequest(PipelineableServletImpl.java:157)
at atg.servlet.pipeline.ServletPathPipelineServlet.service(ServletPathPipelineServlet.java:208)
at atg.servlet.pipeline.PipelineableServletImpl.passRequest(PipelineableServletImpl.java:157)
at atg.security.ExpiredPasswordAdminServlet.service(ExpiredPasswordAdminServlet.java:312)
at atg.servlet.pipeline.PipelineableServletImpl.passRequest(PipelineableServletImpl.java:157)
at atg.servlet.pipeline.BasicAuthenticationPipelineServlet.service(BasicAuthenticationPipelineServlet.java:513)
at atg.servlet.pipeline.PipelineableServletImpl.passRequest(PipelineableServletImpl.java:157)
at atg.servlet.pipeline.DynamoPipelineServlet.service(DynamoPipelineServlet.java:491)
at atg.servlet.pipeline.PipelineableServletImpl.passRequest(PipelineableServletImpl.java:157)
at atg.dtm.TransactionPipelineServlet.service(TransactionPipelineServlet.java:249)
at atg.servlet.pipeline.PipelineableServletImpl.passRequest(PipelineableServletImpl.java:157)
at atg.servlet.pipeline.HeadPipelineServlet.passRequest(HeadPipelineServlet.java:1271)
at atg.servlet.pipeline.HeadPipelineServlet.service(HeadPipelineServlet.java:952)
at atg.servlet.pipeline.PipelineableServletImpl.service(PipelineableServletImpl.java:272)
at atg.nucleus.servlet.NucleusProxyServlet.service(NucleusProxyServlet.java:237)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at org.jboss.web.tomcat.filters.ReplyHeaderFilter.doFilter(ReplyHeaderFilter.java:96)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:235)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
at org.jboss.web.tomcat.security.SecurityAssociationValve.invoke(SecurityAssociationValve.java:183)
at org.jboss.web.tomcat.security.JaccContextValve.invoke(JaccContextValve.java:95)
at org.jboss.web.tomcat.security.SecurityContextEstablishmentValve.process(SecurityContextEstablishmentValve.java:126)
at org.jboss.web.tomcat.security.SecurityContextEstablishmentValve.invoke(SecurityContextEstablishmentValve.java:70)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
at org.jboss.web.tomcat.service.jca.CachedConnectionValve.invoke(CachedConnectionValve.java:158)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:330)
at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:829)
at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:598)
at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:451)
at java.lang.Thread.run(Thread.java:662)
Caused by: atg.repository.search.indexing.IndexingException: atg.search.routing.LiveIndexException: Unable to prepare engines for live indexing.
at atg.repository.search.indexing.submitter.LiveDocumentSubmitter.beginSession(LiveDocumentSubmitter.java:209)
at atg.repository.search.indexing.BulkLoaderImpl.bulkLoad(BulkLoaderImpl.java:921)
... 48 more
Caused by: atg.search.routing.LiveIndexException: Unable to prepare engines for live indexing.
at atg.search.routing.LiveIndexingService.prepareBulkIndexing(LiveIndexingService.java:1629)
at atg.search.routing.LiveIndexingService.prepareIndexing(LiveIndexingService.java:1444)
at atg.repository.search.indexing.submitter.LiveDocumentSubmitter.beginSession(LiveDocumentSubmitter.java:193)
... 49 more
Caused by: atg.search.routing.LiveIndexException: Current supported by hosts engine count is less than required count of engines
at atg.search.routing.LiveIndexingService.prepareEnginesForLiveIndexingOperation(LiveIndexingService.java:1161)
at atg.search.routing.LiveIndexingService.prepareEnginesForLiveIndexingOperation(LiveIndexingService.java:1063)
at atg.search.routing.LiveIndexingService.prepareBulkIndexing(LiveIndexingService.java:1625)
... 51 more

In my /atg/search/routing/LiveIndexingService/ component I have the following values:
ATGProfile running yes yes 8000001 null 1 1 1 start stop cycle delete
backup restore disable
ATGProfileBulk stopped NO yes null null 1 0 0 start stop cycle delete
backup restore disable
ATGOrder running yes yes 8000002 null 1 4 4 start stop cycle delete
backup restore disable
ATGOrderBulk stopped NO yes null null 1 0 0 start stop cycle delete
backup restore disable
Why is there 4 engins running for ATG Order???? i think this is wat is causing the problem, but i am unable to find from where its creating this 4 engins. -
Hi,
I'm using Oracle Endeca 2.3.
I encountered a problem in Data Integrator: some batches of records were missing in the front end, yet when I checked the status of the graph, it showed "Graph executed successfully".
So I connected the bulk loader to a Universal Data Writer to see the data domain status of the bulk load.
I have listed the results below. However, I am not able to interpret the information in this status, and I found nothing useful in the documentation.
0|10000|0|In progress
0|11556|0|In progress
0|20000|0|In progress
0|30000|0|In progress
0|39891|0|In progress
0|39891|0|In progress
0|39891|0|In progress
0|39891|0|In progress
0|39891|0|In progress
0|39891|0|In progress
40009|-9|0|In progress
40009|9991|0|In progress
40009|19991|0|In progress
40009|20846|0|In progress
Could anyone shed more light on this status?
Also, since these messages are part of the post load, I am wondering why it is still showing "In progress".
Cheers,
Khurshid

I assume there was nothing of note in the dgraph.log?
The other option is to see what happens when you either:
A) filter your data down to the records that are missing prior to the load and see what happens
Or
B) use the regular data ingest API rather than the bulk loader.
Option b will definitely perform much worse on 2.3 so it may not be feasible.
The other thing to check is that your record spec is truly unique. The only time I can remember seeing an issue like this was loading a record, then loading a different record with the same spec value. The first record would get in and then be overwritten by the second record making it seem like the first record was dropped. Figured it would be worth checking.
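Patrick's duplicate-spec scenario is easy to test for before loading. A minimal sketch (the spec attribute name here is a hypothetical placeholder) that flags records whose spec value repeats, since a later record with the same spec silently overwrites the earlier one:

```python
from collections import Counter

def find_duplicate_specs(records, spec_attr="record.spec"):
    """Return spec values that appear more than once in the input.

    When two records share a spec value, the second one loaded overwrites
    the first, which looks like a dropped record on the front end.
    """
    counts = Counter(r[spec_attr] for r in records)
    return sorted(spec for spec, n in counts.items() if n > 1)

# Hypothetical input: two records collide on spec value "B".
records = [
    {"record.spec": "A", "name": "first"},
    {"record.spec": "B", "name": "second"},
    {"record.spec": "B", "name": "third"},   # would overwrite "second"
]
print(find_duplicate_specs(records))  # -> ['B']
```

If this turns up collisions, the "missing" records were most likely overwritten rather than dropped by the bulk loader.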
Patrick Rafferty
Branchbird