Unable to perform bulk load in BODS 3.2
Hi
We have upgraded our development server from BODS 3.0 to BODS 3.2. One dataflow in a job uses the Bulk load option. The job now raises warnings at that dataflow, all the data appears as warnings in the log, and no data is loaded into the target table. We also recently migrated from SQL Server 2005 to SQL Server 2008. Can someone tell me why the Bulk load option is not working in BODS 3.2?
Kind Regards,
Mahesh
Hi,
I want to upgrade from SQL Server 2005 to SQL Server 2008 with BODS 4.0, and I would like recommendations on how to do it.
- How do I use SQL Server 2008 with BODS?
- What is the expected performance on SQL Server 2008?
- What should be evaluated beforehand?
- Is it necessary to migrate using backup/restore?
- What are the migration steps?
- Can we merge the disabled in BODS?
Similar Messages
-
Bulk loading BLOBs using PL/SQL - is it possible?
Hi -
Does anyone have a good reference article or example of how I can bulk load BLOBs (videos, images, audio, office docs/pdf) into the database using PL/SQL?
Every example I've ever seen in PL/SQL for loading BLOBs commits after each file is loaded, which doesn't seem very scalable.
Can we pass an array of BLOBs from the application into PL/SQL, loop through that array, and issue a commit after the loop terminates?
Any advice or help is appreciated. Thanks
LJ
It is easy enough to modify the example to commit every N files. If you are loading large amounts of media, I think you will find that the time to load the media is far greater than the time spent in SQL statements doing inserts or retrieves. Thus, I would not expect any significant benefit from changing the example to use PL/SQL collection types for bulk row operations.
If your goal is high-performance bulk loading of binary content, I would suggest that you look at SQL*Loader (sqlldr). A PL/SQL program loading from BFILEs is limited to loading files that are accessible from the database server file system. Sqlldr can do this, but it can also load data from a remote client, and it has parameters to control batching of operations.
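To make the commit-every-N idea concrete, here is a minimal PL/SQL sketch; the table media_tab, the directory object MEDIA_DIR, the file list, and the batch size of 100 are illustrative assumptions, not part of the original example:

```sql
-- Sketch: load a list of files as BLOBs, committing every N files
-- rather than after each one. All object names below are hypothetical.
DECLARE
  c_batch_size CONSTANT PLS_INTEGER := 100;
  TYPE name_list IS TABLE OF VARCHAR2(256);
  v_files  name_list := name_list('clip1.mp4', 'img1.jpg');  -- placeholder names
  v_bfile  BFILE;
  v_blob   BLOB;
  v_loaded PLS_INTEGER := 0;
BEGIN
  FOR i IN 1 .. v_files.COUNT LOOP
    -- create the row with an empty LOB locator, then fill it from the BFILE
    INSERT INTO media_tab (fname, content)
    VALUES (v_files(i), EMPTY_BLOB())
    RETURNING content INTO v_blob;

    v_bfile := BFILENAME('MEDIA_DIR', v_files(i));
    DBMS_LOB.fileopen(v_bfile, DBMS_LOB.file_readonly);
    DBMS_LOB.loadfromfile(v_blob, v_bfile, DBMS_LOB.getlength(v_bfile));
    DBMS_LOB.fileclose(v_bfile);

    v_loaded := v_loaded + 1;
    IF MOD(v_loaded, c_batch_size) = 0 THEN
      COMMIT;  -- one commit per batch, not per file
    END IF;
  END LOOP;
  COMMIT;  -- pick up the final partial batch
END;
/
```

The same loop structure carries over to the Multimedia object types (ORDImage, ORDAudio, etc.); only the column type and any attribute-extraction calls change.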
See section 7.3 of the Oracle Multimedia DICOM Developer's Guide for the example Loading DICOM Content Using the SQL*Loader Utility. You will need to adapt this example to the other Multimedia objects (ORDImage, ORDAudio .. etc) but the basic concepts are the same.
Once the binary content is loaded into the database, you will need a to write a program to loop over the new content and initialize the Multimedia objects (extract attributes). The example in 7.3 contains a sample program that does this for the ORDDicom object. -
Bulk load in OIM 11g enabled with LDAP sync
Has anyone performed a bulk load of more than 100,000 users using the bulk load utility in OIM 11g?
The challenge here is we have OIM 11.1.1.5.0 environment enabled with LDAP sync.
We are trying to figure out some performance factors and the best way to achieve our requirement:
1. Have you done any timings around use of the Bulk Load tool? Any idea how long it will take to LDAP-sync more than 100,000 users into OID? What problems could we encounter during this flow?
2. Could we migrate users into another environment and then swap that database in as the OIM database? Also, is there an effective way to load into OID directly?
3. We also have a custom Scheduled Task to modify a couple of user attributes (using the update API) from a flat file. Have you tried such a scenario after the bulk load, and did you face any problems doing so?
Thanks
DK
To update a UDF you must assign a copy-value adapter in Lookup.USR_PROCESS_TRIGGERS (Design Console / Lookup Definition).
e.g.:
CODE                     DECODE
USR_UDF_MYATTR1          Change MYATTR1
USR_UDF_MYATTR2          Change MYATTR2
Edited by: Lighting Cui on 2011-8-3, 12:25 AM -
Shell scripts for bulk loading
Hello
Does ECM have the capability to use operating system shell scripts to perform bulk loading?
Best Regards
IDCCommand
There is a guide on this. Here is the 10g one.
http://download.oracle.com/docs/cd/E10316_01/cs/cs_doc_10/sdk/idc_command_reference/wwhelp/wwhimpl/js/html/wwhelp.htm
You generate batch files with commands (service calls) which can be run from command line or shell script.
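A BatchLoader record in such a batch file is, roughly, a plain-text list of name=value pairs terminated by `<<EOD>>`; the field values below are purely illustrative:

```
Action=insert
dDocName=Sample001
dDocType=Document
dDocTitle=Batch loaded document
dDocAuthor=sysadmin
dSecurityGroup=Public
primaryFile=c:/batch/files/sample001.doc
<<EOD>>
```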
So yes.
The commands in this case would be to execute the BatchLoader. -
Critical performance problem upon bulk load of groups
All (including product development),
I think there are missing indexes in wwsec_flat$ and wwsec_sys_priv$. Anyway, I'd like assistance on properly fixing the critical performance problems I see. Read on...
During and after a bulk load of a few (about 500) users and groups from an external database, it became evident that there is a performance problem somewhere. Many of the calls to wwsec_api.addGroupToList took several minutes to finish. Afterwards, the machine went to 100% CPU just from logging in with the portal30 user (which happens to be the group owner for all the groups).
Running SQL trace points in the direction of the following SQL statement:
SELECT ID,PARENT_ID,NAME,TITLE_ID,TITLEIMAGE_ID,ROLLOVERIMAGE_ID,
DESCRIPTION_ID,LAYOUT_ID,STYLE_ID,PAGE_TYPE,CREATED_BY,CREATED_ON,
LAST_MODIFIED_BY,LAST_MODIFIED_ON,PUBLISHED_ON,HAS_BANNER,HAS_FOOTER,
EXPOSURE,SHOW_CHILDREN,IS_PUBLIC,INHERIT_PRIV,IS_READY,EXECUTE_MODE,
CACHE_MODE,CACHE_EXPIRES,TEMPLATE FROM
WWPOB_PAGE$ WHERE ID = :b1
I checked the existing indexes and see that the following ones are missing (I'm about to test with these, but have not yet done so):
CREATE UNIQUE INDEX "PORTAL30"."WWSEC_FLAT_IX_GROUP_ID"
ON "PORTAL30"."WWSEC_FLAT$"("GROUP_ID")
TABLESPACE "PORTAL" PCTFREE 10 INITRANS 2 MAXTRANS 255
STORAGE ( INITIAL 32K NEXT 160K MINEXTENTS 1 MAXEXTENTS 4096
PCTINCREASE 1 FREELISTS 1)
LOGGING
CREATE UNIQUE INDEX "PORTAL30"."WWSEC_FLAT_IX_PERSON_ID"
ON "PORTAL30"."WWSEC_FLAT$"("PERSON_ID")
TABLESPACE "PORTAL" PCTFREE 10 INITRANS 2 MAXTRANS 255
STORAGE ( INITIAL 32K NEXT 160K MINEXTENTS 1 MAXEXTENTS 4096
PCTINCREASE 1 FREELISTS 1)
LOGGING
CREATE UNIQUE INDEX "PORTAL30"."WWSEC_SYS_PRIV_IX_PATCH1"
ON "PORTAL30"."WWSEC_SYS_PRIV$"("OWNER", "GRANTEE_GROUP_ID",
"GRANTEE_TYPE", "NAME", "OBJECT_TYPE_NAME")
TABLESPACE "PORTAL" PCTFREE 10 INITRANS 2 MAXTRANS 255
STORAGE ( INITIAL 32K NEXT 80K MINEXTENTS 1 MAXEXTENTS 4096
PCTINCREASE 1 FREELISTS 1)
LOGGING
Note that when I deleted the newly inserted groups, the CPU consumption immediately went down from 100% to some 2-3%.
This behaviour has been observed on a Sun Solaris system, but I think it's the same on NT (I observed it during the bulk load on my NT laptop, but so far have not had the time to test further).
Also note: In the call to addGroupToList, I set owner to true for all groups.
Also note: During loading of the groups, I logged a few errors, all of the same type ("PORTAL30.WWSEC_API", line 2075), as follows:
Error: Problem calling addGroupToList for child group'Marketing' (8030), list 'NO_OSL_Usenet'(8017). Reason: java.sql.SQLException: ORA-06510: PL/SQL: unhandled user-defined exception ORA-06512: at "PORTAL30.WWSEC_API", line 2075
Please help. If you like, I can supply the tables and the Java program that I use. It's fully reproducible.
Thanks,
Erik Hagen (you may call me on +47 90631013)
YES!
I have now tested with the missing indexes inserted. The call to addGroupToList seems to take just as long as before, but the result is much better: WITH THE INDEXES DEFINED, THERE IS NO LONGER A PERFORMANCE PROBLEM!! The index definitions that I used are listed below (I added these to the ones already present in Portal 3.0.8, but I guess some of those could have been deleted).
About the info at http://technet.oracle.com:89/ubb/Forum70/HTML/000894.html: Yes! Thanks! Very interesting, and I guess you found the cause of the error messages and maybe also of the performance problem during bulk load (I'll look into it as soon as possible and report what I find).
Note: I have made a pretty foolproof and automated installation script (or actually, it's part of my Java program), that will let anybody interested recreate the problem. Mail your interest to [email protected].
============================================
CREATE INDEX "PORTAL30"."LDAP_WWSEC_PERS_IX1"
ON "PORTAL30"."WWSEC_PERSON$"("MANAGER")
TABLESPACE "PORTAL" PCTFREE 10 INITRANS 2 MAXTRANS 255
STORAGE ( INITIAL 32K NEXT 32K MINEXTENTS 1 MAXEXTENTS 4096
PCTINCREASE 1 FREELISTS 1)
LOGGING;
CREATE INDEX PORTAL30.LDAP_WWSEC_PERS_IX2
ON PORTAL30.WWSEC_PERSON$("ORGANIZATION")
TABLESPACE PORTAL PCTFREE 10 INITRANS 2 MAXTRANS 255
STORAGE ( INITIAL 32K NEXT 32K MINEXTENTS 1 MAXEXTENTS 4096
PCTINCREASE 1 FREELISTS 1)
LOGGING;
CREATE INDEX PORTAL30.LDAP_WWSEC_PERS_PK
ON PORTAL30.WWSEC_PERSON$("ID")
TABLESPACE PORTAL PCTFREE 10 INITRANS 2 MAXTRANS 255
STORAGE ( INITIAL 32K NEXT 32K MINEXTENTS 1 MAXEXTENTS 4096
PCTINCREASE 1 FREELISTS 1)
LOGGING;
CREATE INDEX PORTAL30.LDAP_WWSEC_PERS_UK
ON PORTAL30.WWSEC_PERSON$("USER_NAME")
TABLESPACE PORTAL PCTFREE 10 INITRANS 2 MAXTRANS 255
STORAGE ( INITIAL 32K NEXT 32K MINEXTENTS 1 MAXEXTENTS 4096
PCTINCREASE 1 FREELISTS 1)
LOGGING;
CREATE INDEX PORTAL30.LDAP_WWWSEC_FLAT_UK
ON PORTAL30.WWSEC_FLAT$("GROUP_ID", "PERSON_ID",
"SPONSORING_MEMBER_ID")
TABLESPACE PORTAL PCTFREE 10 INITRANS 2 MAXTRANS 255
STORAGE ( INITIAL 32K NEXT 256K MINEXTENTS 1 MAXEXTENTS 4096
PCTINCREASE 0 FREELISTS 1);
CREATE INDEX PORTAL30.LDAP_WWWSEC_FLAT_PK
ON PORTAL30.WWSEC_FLAT$("ID")
TABLESPACE PORTAL PCTFREE 10 INITRANS 2 MAXTRANS 255
STORAGE ( INITIAL 32K NEXT 256K MINEXTENTS 1 MAXEXTENTS 4096
PCTINCREASE 0 FREELISTS 1);
CREATE INDEX PORTAL30.LDAP_WWWSEC_FLAT_IX5
ON PORTAL30.WWSEC_FLAT$("GROUP_ID", "PERSON_ID")
TABLESPACE PORTAL PCTFREE 10 INITRANS 2 MAXTRANS 255
STORAGE ( INITIAL 32K NEXT 256K MINEXTENTS 1 MAXEXTENTS 4096
PCTINCREASE 0 FREELISTS 1);
CREATE INDEX PORTAL30.LDAP_WWWSEC_FLAT_IX4
ON PORTAL30.WWSEC_FLAT$("SPONSORING_MEMBER_ID")
TABLESPACE PORTAL PCTFREE 10 INITRANS 2 MAXTRANS 255
STORAGE ( INITIAL 32K NEXT 256K MINEXTENTS 1 MAXEXTENTS 4096
PCTINCREASE 0 FREELISTS 1);
CREATE INDEX PORTAL30.LDAP_WWWSEC_FLAT_IX3
ON PORTAL30.WWSEC_FLAT$("GROUP_ID")
TABLESPACE PORTAL PCTFREE 10 INITRANS 2 MAXTRANS 255
STORAGE ( INITIAL 32K NEXT 256K MINEXTENTS 1 MAXEXTENTS 4096
PCTINCREASE 0 FREELISTS 1);
CREATE INDEX PORTAL30.LDAP_WWWSEC_FLAT_IX2
ON PORTAL30.WWSEC_FLAT$("PERSON_ID")
TABLESPACE PORTAL PCTFREE 10 INITRANS 2 MAXTRANS 255
STORAGE ( INITIAL 32K NEXT 256K MINEXTENTS 1 MAXEXTENTS 4096
PCTINCREASE 0 FREELISTS 1);
CREATE INDEX "PORTAL30"."LDAP_WWSEC_SYSP_IX1"
ON "PORTAL30"."WWSEC_SYS_PRIV$"("GRANTEE_GROUP_ID")
TABLESPACE "PORTAL" PCTFREE 10 INITRANS 2 MAXTRANS 255
STORAGE ( INITIAL 32K NEXT 56K MINEXTENTS 1 MAXEXTENTS 4096
PCTINCREASE 1 FREELISTS 1)
LOGGING;
CREATE INDEX "PORTAL30"."LDAP_WWSEC_SYSP_IX2"
ON "PORTAL30"."WWSEC_SYS_PRIV$"("GRANTEE_USER_ID")
TABLESPACE "PORTAL" PCTFREE 10 INITRANS 2 MAXTRANS 255
STORAGE ( INITIAL 32K NEXT 56K MINEXTENTS 1 MAXEXTENTS 4096
PCTINCREASE 1 FREELISTS 1)
LOGGING;
CREATE INDEX "PORTAL30"."LDAP_WWSEC_SYSP_IX3"
ON "PORTAL30"."WWSEC_SYS_PRIV$"("OBJECT_TYPE_NAME", "NAME")
TABLESPACE "PORTAL" PCTFREE 10 INITRANS 2 MAXTRANS 255
STORAGE ( INITIAL 32K NEXT 56K MINEXTENTS 1 MAXEXTENTS 4096
PCTINCREASE 1 FREELISTS 1)
LOGGING;
CREATE INDEX "PORTAL30"."LDAP_WWSEC_SYSP_PK"
ON "PORTAL30"."WWSEC_SYS_PRIV$"("ID")
TABLESPACE "PORTAL" PCTFREE 10 INITRANS 2 MAXTRANS 255
STORAGE ( INITIAL 32K NEXT 56K MINEXTENTS 1 MAXEXTENTS 4096
PCTINCREASE 1 FREELISTS 1)
LOGGING;
CREATE INDEX "PORTAL30"."LDAP_WWSEC_SYSP_UK"
ON "PORTAL30"."WWSEC_SYS_PRIV$"("OBJECT_TYPE_NAME",
"NAME", "OWNER", "GRANTEE_TYPE", "GRANTEE_GROUP_ID",
"GRANTEE_USER_ID")
TABLESPACE "PORTAL" PCTFREE 10 INITRANS 2 MAXTRANS 255
STORAGE ( INITIAL 32K NEXT 88K MINEXTENTS 1 MAXEXTENTS 4096
PCTINCREASE 1 FREELISTS 1)
LOGGING;
==================================
Thanks,
Erik Hagen
-
How to improve performance for Azure Table Storage bulk loads
Hello all,
Would appreciate your help as we are facing a challenge.
We are trying to bulk-load Azure Table storage. We have a file that contains nearly 2 million rows.
We need to reach a point where we can bulk-load 100,000-150,000 entries per minute. Currently, it takes more than 10 hours to process the file.
We have tried Parallel.ForEach but it doesn't help. Today I discovered partitioning in PLINQ. Would that be the way to go?
Any ideas? I have spent nearly two days trying to optimize it using PLINQ, but I am still not sure what the best thing to do is.
Kindly note that we shouldn't be using SQL / Azure SQL for this.
I would really appreciate your help.
Thanks
I'd think you're just pooling the parallel connections to Azure if you do it on one system. You'd also have the bottleneck of round-trip time from you, through the internet, to Azure and back again.
You could speed it up by moving the data file to the cloud and processing it with a cloud Worker Role. That way you'd be inside the datacenter (a much faster, more optimized network).
Or, if that's not fast enough: if you can split the data so that multiple Worker Roles can each process part of the file, you can use the VMs' scale to put enough machines on it that it gets done quickly.
Darin R. -
Hello,
I have one question regarding bulk loading. I have done a lot of bulk loading.
But my requirement is to call a function that does some DML and returns a reference key, so that I can insert into the fact table.
I can't call a DML function in a SELECT statement (it raises an error). The other way is an autonomous transaction, which I tried; it works, but performance is very slow.
How do I call this function inside the bulk-loading process?
Help !!
xx_f is the function that uses an autonomous transaction.
See my sample code
declare
cursor c1 is select a,b,c from xx;
type l_a is table of xx.a%type;
type l_b is table of xx.b%type;
type l_c is table of xx.c%type;
v_a l_a;
v_b l_b;
v_c l_c;
begin
open c1;
loop
fetch c1 bulk collect into v_a,v_b,v_c limit 1000;
exit when c1%notfound;
begin
forall i in 1..v_a.count
insert into xxyy
(a,b,c) values (xx_f(v_a(i)), xx_f(v_b(i)), xx_f(v_c(i)));
commit;
end;
end loop;
close c1;
end;
I just want to call the xx_f function without an autonomous transaction,
but with bulk loading. Please let me know if you need more details.
Thanks
yreddyr
Can you show the code for xx_f? Does it do DML, or just transformations on the columns?
Depending on what it does, an alternative could be something like:
DECLARE
CURSOR c1 IS
SELECT xx_f(a), xx_f(b), xx_f(c) FROM xx;
TYPE l_a IS TABLE OF whatever xx_f returns;
TYPE l_b IS TABLE OF whatever xx_f returns;
TYPE l_c IS TABLE OF whatever xx_f returns;
v_a l_a;
v_b l_b;
v_c l_c;
BEGIN
OPEN c1;
LOOP
FETCH c1 BULK COLLECT INTO v_a, v_b, v_c LIMIT 1000;
BEGIN
FORALL i IN 1..v_a.COUNT
INSERT INTO xxyy (a, b, c)
VALUES (v_a(i), v_b(i), v_c(i));
END;
EXIT WHEN c1%NOTFOUND;
END LOOP;
CLOSE c1;
END;
John
Retry "Bulk Load Post Process" batch
Hi,
First question: what is the actual use of the scheduled task "Bulk Load Post Process"? If I am not sending out email notifications, LDAP syncing, or generating passwords, do I still need to run this task after performing a bulk load through the utility?
Also, I ran this task, now there are some batches which are in the "READY FOR PROCESSING" state. How do I re-run these batches?
Thanks,
Vishal
The scheduled task carries out post-processing activities on the users imported through the bulk load utility.
-
Anyone know setting primary key deferred help in the bulk loading
Hi,
Does anyone know whether setting the primary key constraint to deferred helps bulk loading in terms of performance? I do not want to disable the index, because that would affect users querying the existing records in the table.
Thank You...
In the Oracle 8.0 documentation, when deferred constraints were introduced, Oracle stated that deferring the PK constraint check until commit time was more efficient than checking the constraint at the time of each insert.
I have never tested this assertion.
In order to create a deferred PK constraint the index used to support the PK must be created as non-unique.
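A minimal DDL sketch of that setup (table, column, and constraint names are illustrative): because the constraint is declared DEFERRABLE, Oracle backs it with a non-unique index, and checking can be postponed to commit time for the duration of the load:

```sql
-- Deferrable PK: backed by a non-unique index, validated at COMMIT when deferred.
CREATE TABLE load_target (
  id  NUMBER,
  val VARCHAR2(100),
  CONSTRAINT load_target_pk PRIMARY KEY (id)
    DEFERRABLE INITIALLY IMMEDIATE
);

-- Defer checking only for the duration of the bulk load:
SET CONSTRAINT load_target_pk DEFERRED;
-- ... bulk inserts here ...
COMMIT;  -- the PK is validated once, at commit time
```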
HTH -- Mark D Powell -- -
Bulk loading in 11.1.0.6
Hi,
I'm using bulk load to load about 200 million triples into one model in 11.1.0.6. The data is split into about 60 files with around 3 million triples in each file. I have a script file which has
host sqlldr ...FILE1;
exec sem_apis.bulk_load_from_staging_table(...);
host sqlldr ...FILE2;
exec sem_apis.bulk_load_from_staging_table(...);
for every file to load.
When I run the script from the command line, the time needed for loading appears to grow as more files are loaded. The first file took about 8 minutes to load, the second about 25 minutes... It now takes two and a half hours to load one file, after completing 14 files.
Is index rebuilding causing this behavior? If so, is there any way to turn off the index during bulk loading? If not, what other parameters can we adjust to speed up the bulk loading?
Thanks,
Weihua
Bulk-append is slower than bulk-load because of incremental index maintenance. The index that enforces the uniqueness constraint cannot be disabled. I'd suggest moving to 11.1.0.7 and then installing patch 7600122 to be able to make use of the enhanced bulk-append, which performs much better than in 11.1.0.6.
The best way to load 200 million rows in 11.1.0.6 would be to load into an empty RDF model via a single bulk-load. You can do it as follows (assuming the filenames are f1.nt thru f60.nt):
- [create a named pipe] mkfifo named_pipe.nt
- cat f*.nt > named_pipe.nt
on a different window:
- run sqlldr with named_pipe.nt as the data file to load all 200 million rows into a staging table (you could create staging table with COMPRESS option to keep the size down)
- next, run exec sem_apis.bulk_load_from_staging_table(...);
(I'd also suggest use of COMPRESS for the application table.) -
Error when doing a ATGOrder Bulk load
Hi
Getting the error below when trying to do a bulk load of ATGOrder in CSC.
Machine details: Linux, 64-bit
ATG version: 10.1
17:44:07,487 INFO [OrderOutputConfig] Starting bulk load
17:44:11,482 WARN [loggerI18N] [com.arjuna.ats.internal.jta.recovery.xarecovery1] Local XARecoveryModule.xaRecovery got XA exception javax.transaction.xa.XAException, XAException.XAER_RMERR
17:44:11,488 WARN [loggerI18N] [com.arjuna.ats.internal.jta.recovery.xarecovery1] Local XARecoveryModule.xaRecovery got XA exception javax.transaction.xa.XAException, XAException.XAER_RMERR
17:44:11,495 WARN [loggerI18N] [com.arjuna.ats.internal.jta.recovery.xarecovery1] Local XARecoveryModule.xaRecovery got XA exception javax.transaction.xa.XAException, XAException.XAER_RMERR
17:44:17,651 WARN [LiveIndexingService] Current hosts for environment ATGOrderBulk cannot support requested engine count
17:44:17,652 WARN [LiveIndexingService] Allocate more hosts or increase the maximum number of search engines for one of its hosts
17:44:17,656 ERROR [LiveIndexingService] Unable to release lock: __routingLiveIndexingLock:ATGOrder
atg.service.lockmanager.LockManagerException: Attempt to release a write lock when not the owner: key=__routingLiveIndexingLock:ATGOrder Owner=Thread[http-0.0.0.0-8580-1:ipaddr=172.21.21.49;path=/dyn/admin/nucleus/atg/commerce/search/OrderOutputConfig/;sessionid=B0DC1551B81ACFD6B7C987E59116D825,5,jboss]
at atg.service.lockmanager.ClientLockEntry.releaseWriteLock(ClientLockEntry.java:713)
at atg.service.lockmanager.ClientLockManager.releaseWriteLock(ClientLockManager.java:1386)
at atg.service.lockmanager.ClientLockManager.releaseWriteLock(ClientLockManager.java:1415)
at atg.search.routing.LiveIndexingService.releaseLock(LiveIndexingService.java:1843)
at atg.search.routing.LiveIndexingService.prepareIndexing(LiveIndexingService.java:1455)
at atg.repository.search.indexing.submitter.LiveDocumentSubmitter.beginSession(LiveDocumentSubmitter.java:193)
at atg.repository.search.indexing.BulkLoaderImpl.bulkLoad(BulkLoaderImpl.java:921)
at atg.repository.search.indexing.IndexingOutputConfig.bulkLoad(IndexingOutputConfig.java:1610)
at atg.repository.search.indexing.IndexingOutputConfig.bulkLoad(IndexingOutputConfig.java:1563)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at atg.nucleus.ServiceAdminServlet.printMethodInvocation(ServiceAdminServlet.java:1463)
at atg.nucleus.ServiceAdminServlet.service(ServiceAdminServlet.java:251)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
at atg.nucleus.Nucleus.service(Nucleus.java:2967)
at atg.nucleus.Nucleus.service(Nucleus.java:2867)
at atg.servlet.pipeline.DispatcherPipelineServletImpl.service(DispatcherPipelineServletImpl.java:253)
at atg.servlet.pipeline.PipelineableServletImpl.passRequest(PipelineableServletImpl.java:157)
at atg.servlet.pipeline.ServletPathPipelineServlet.service(ServletPathPipelineServlet.java:208)
at atg.servlet.pipeline.PipelineableServletImpl.passRequest(PipelineableServletImpl.java:157)
at atg.security.ExpiredPasswordAdminServlet.service(ExpiredPasswordAdminServlet.java:312)
at atg.servlet.pipeline.PipelineableServletImpl.passRequest(PipelineableServletImpl.java:157)
at atg.servlet.pipeline.BasicAuthenticationPipelineServlet.service(BasicAuthenticationPipelineServlet.java:513)
at atg.servlet.pipeline.PipelineableServletImpl.passRequest(PipelineableServletImpl.java:157)
at atg.servlet.pipeline.DynamoPipelineServlet.service(DynamoPipelineServlet.java:491)
at atg.servlet.pipeline.PipelineableServletImpl.passRequest(PipelineableServletImpl.java:157)
at atg.dtm.TransactionPipelineServlet.service(TransactionPipelineServlet.java:249)
at atg.servlet.pipeline.PipelineableServletImpl.passRequest(PipelineableServletImpl.java:157)
at atg.servlet.pipeline.HeadPipelineServlet.passRequest(HeadPipelineServlet.java:1271)
at atg.servlet.pipeline.HeadPipelineServlet.service(HeadPipelineServlet.java:952)
at atg.servlet.pipeline.PipelineableServletImpl.service(PipelineableServletImpl.java:272)
at atg.nucleus.servlet.NucleusProxyServlet.service(NucleusProxyServlet.java:237)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at org.jboss.web.tomcat.filters.ReplyHeaderFilter.doFilter(ReplyHeaderFilter.java:96)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:235)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
at org.jboss.web.tomcat.security.SecurityAssociationValve.invoke(SecurityAssociationValve.java:183)
at org.jboss.web.tomcat.security.JaccContextValve.invoke(JaccContextValve.java:95)
at org.jboss.web.tomcat.security.SecurityContextEstablishmentValve.process(SecurityContextEstablishmentValve.java:126)
at org.jboss.web.tomcat.security.SecurityContextEstablishmentValve.invoke(SecurityContextEstablishmentValve.java:70)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
at org.jboss.web.tomcat.service.jca.CachedConnectionValve.invoke(CachedConnectionValve.java:158)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:330)
at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:829)
at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:598)
at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:451)
at java.lang.Thread.run(Thread.java:662)
17:44:17,658 ERROR [BulkLoader]
atg.repository.search.indexing.IndexingException: atg.search.routing.LiveIndexException: Unable to prepare engines for live indexing.
at atg.repository.search.indexing.submitter.LiveDocumentSubmitter.beginSession(LiveDocumentSubmitter.java:209)
at atg.repository.search.indexing.BulkLoaderImpl.bulkLoad(BulkLoaderImpl.java:921)
at atg.repository.search.indexing.IndexingOutputConfig.bulkLoad(IndexingOutputConfig.java:1610)
at atg.repository.search.indexing.IndexingOutputConfig.bulkLoad(IndexingOutputConfig.java:1563)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at atg.nucleus.ServiceAdminServlet.printMethodInvocation(ServiceAdminServlet.java:1463)
at atg.nucleus.ServiceAdminServlet.service(ServiceAdminServlet.java:251)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
at atg.nucleus.Nucleus.service(Nucleus.java:2967)
at atg.nucleus.Nucleus.service(Nucleus.java:2867)
at atg.servlet.pipeline.DispatcherPipelineServletImpl.service(DispatcherPipelineServletImpl.java:253)
at atg.servlet.pipeline.PipelineableServletImpl.passRequest(PipelineableServletImpl.java:157)
at atg.servlet.pipeline.ServletPathPipelineServlet.service(ServletPathPipelineServlet.java:208)
at atg.servlet.pipeline.PipelineableServletImpl.passRequest(PipelineableServletImpl.java:157)
at atg.security.ExpiredPasswordAdminServlet.service(ExpiredPasswordAdminServlet.java:312)
at atg.servlet.pipeline.PipelineableServletImpl.passRequest(PipelineableServletImpl.java:157)
at atg.servlet.pipeline.BasicAuthenticationPipelineServlet.service(BasicAuthenticationPipelineServlet.java:513)
at atg.servlet.pipeline.PipelineableServletImpl.passRequest(PipelineableServletImpl.java:157)
at atg.servlet.pipeline.DynamoPipelineServlet.service(DynamoPipelineServlet.java:491)
at atg.servlet.pipeline.PipelineableServletImpl.passRequest(PipelineableServletImpl.java:157)
at atg.dtm.TransactionPipelineServlet.service(TransactionPipelineServlet.java:249)
at atg.servlet.pipeline.PipelineableServletImpl.passRequest(PipelineableServletImpl.java:157)
at atg.servlet.pipeline.HeadPipelineServlet.passRequest(HeadPipelineServlet.java:1271)
at atg.servlet.pipeline.HeadPipelineServlet.service(HeadPipelineServlet.java:952)
at atg.servlet.pipeline.PipelineableServletImpl.service(PipelineableServletImpl.java:272)
at atg.nucleus.servlet.NucleusProxyServlet.service(NucleusProxyServlet.java:237)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at org.jboss.web.tomcat.filters.ReplyHeaderFilter.doFilter(ReplyHeaderFilter.java:96)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:235)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
at org.jboss.web.tomcat.security.SecurityAssociationValve.invoke(SecurityAssociationValve.java:183)
at org.jboss.web.tomcat.security.JaccContextValve.invoke(JaccContextValve.java:95)
at org.jboss.web.tomcat.security.SecurityContextEstablishmentValve.process(SecurityContextEstablishmentValve.java:126)
at org.jboss.web.tomcat.security.SecurityContextEstablishmentValve.invoke(SecurityContextEstablishmentValve.java:70)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
at org.jboss.web.tomcat.service.jca.CachedConnectionValve.invoke(CachedConnectionValve.java:158)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:330)
at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:829)
at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:598)
at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:451)
at java.lang.Thread.run(Thread.java:662)
Caused by: atg.search.routing.LiveIndexException: Unable to prepare engines for live indexing.
at atg.search.routing.LiveIndexingService.prepareBulkIndexing(LiveIndexingService.java:1629)
at atg.search.routing.LiveIndexingService.prepareIndexing(LiveIndexingService.java:1444)
at atg.repository.search.indexing.submitter.LiveDocumentSubmitter.beginSession(LiveDocumentSubmitter.java:193)
... 49 more
Caused by: atg.search.routing.LiveIndexException: Current supported by hosts engine count is less than required count of engines
at atg.search.routing.LiveIndexingService.prepareEnginesForLiveIndexingOperation(LiveIndexingService.java:1161)
at atg.search.routing.LiveIndexingService.prepareEnginesForLiveIndexingOperation(LiveIndexingService.java:1063)
at atg.search.routing.LiveIndexingService.prepareBulkIndexing(LiveIndexingService.java:1625)
... 51 more
17:44:17,675 ERROR [OrderOutputConfig]
atg.repository.search.indexing.IndexingException: atg.repository.search.indexing.IndexingException: atg.search.routing.LiveIndexException: Unable to prepare engines for live indexing.
at atg.repository.search.indexing.BulkLoaderImpl.bulkLoad(BulkLoaderImpl.java:1040)
at atg.repository.search.indexing.IndexingOutputConfig.bulkLoad(IndexingOutputConfig.java:1610)
at atg.repository.search.indexing.IndexingOutputConfig.bulkLoad(IndexingOutputConfig.java:1563)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at atg.nucleus.ServiceAdminServlet.printMethodInvocation(ServiceAdminServlet.java:1463)
at atg.nucleus.ServiceAdminServlet.service(ServiceAdminServlet.java:251)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
at atg.nucleus.Nucleus.service(Nucleus.java:2967)
at atg.nucleus.Nucleus.service(Nucleus.java:2867)
at atg.servlet.pipeline.DispatcherPipelineServletImpl.service(DispatcherPipelineServletImpl.java:253)
at atg.servlet.pipeline.PipelineableServletImpl.passRequest(PipelineableServletImpl.java:157)
at atg.servlet.pipeline.ServletPathPipelineServlet.service(ServletPathPipelineServlet.java:208)
at atg.servlet.pipeline.PipelineableServletImpl.passRequest(PipelineableServletImpl.java:157)
at atg.security.ExpiredPasswordAdminServlet.service(ExpiredPasswordAdminServlet.java:312)
at atg.servlet.pipeline.PipelineableServletImpl.passRequest(PipelineableServletImpl.java:157)
at atg.servlet.pipeline.BasicAuthenticationPipelineServlet.service(BasicAuthenticationPipelineServlet.java:513)
at atg.servlet.pipeline.PipelineableServletImpl.passRequest(PipelineableServletImpl.java:157)
at atg.servlet.pipeline.DynamoPipelineServlet.service(DynamoPipelineServlet.java:491)
at atg.servlet.pipeline.PipelineableServletImpl.passRequest(PipelineableServletImpl.java:157)
at atg.dtm.TransactionPipelineServlet.service(TransactionPipelineServlet.java:249)
at atg.servlet.pipeline.PipelineableServletImpl.passRequest(PipelineableServletImpl.java:157)
at atg.servlet.pipeline.HeadPipelineServlet.passRequest(HeadPipelineServlet.java:1271)
at atg.servlet.pipeline.HeadPipelineServlet.service(HeadPipelineServlet.java:952)
at atg.servlet.pipeline.PipelineableServletImpl.service(PipelineableServletImpl.java:272)
at atg.nucleus.servlet.NucleusProxyServlet.service(NucleusProxyServlet.java:237)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at org.jboss.web.tomcat.filters.ReplyHeaderFilter.doFilter(ReplyHeaderFilter.java:96)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:235)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
at org.jboss.web.tomcat.security.SecurityAssociationValve.invoke(SecurityAssociationValve.java:183)
at org.jboss.web.tomcat.security.JaccContextValve.invoke(JaccContextValve.java:95)
at org.jboss.web.tomcat.security.SecurityContextEstablishmentValve.process(SecurityContextEstablishmentValve.java:126)
at org.jboss.web.tomcat.security.SecurityContextEstablishmentValve.invoke(SecurityContextEstablishmentValve.java:70)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
at org.jboss.web.tomcat.service.jca.CachedConnectionValve.invoke(CachedConnectionValve.java:158)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:330)
at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:829)
at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:598)
at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:451)
at java.lang.Thread.run(Thread.java:662)
Caused by: atg.repository.search.indexing.IndexingException: atg.search.routing.LiveIndexException: Unable to prepare engines for live indexing.
at atg.repository.search.indexing.submitter.LiveDocumentSubmitter.beginSession(LiveDocumentSubmitter.java:209)
at atg.repository.search.indexing.BulkLoaderImpl.bulkLoad(BulkLoaderImpl.java:921)
... 48 more
Caused by: atg.search.routing.LiveIndexException: Unable to prepare engines for live indexing.
at atg.search.routing.LiveIndexingService.prepareBulkIndexing(LiveIndexingService.java:1629)
at atg.search.routing.LiveIndexingService.prepareIndexing(LiveIndexingService.java:1444)
at atg.repository.search.indexing.submitter.LiveDocumentSubmitter.beginSession(LiveDocumentSubmitter.java:193)
... 49 more
Caused by: atg.search.routing.LiveIndexException: Current supported by hosts engine count is less than required count of engines
at atg.search.routing.LiveIndexingService.prepareEnginesForLiveIndexingOperation(LiveIndexingService.java:1161)
at atg.search.routing.LiveIndexingService.prepareEnginesForLiveIndexingOperation(LiveIndexingService.java:1063)
at atg.search.routing.LiveIndexingService.prepareBulkIndexing(LiveIndexingService.java:1625)
... 51 more
In my /atg/search/routing/LiveIndexingService/ component I have the following values:
ATGProfile      running  yes  yes  8000001  null  1  1  1
ATGProfileBulk  stopped  NO   yes  null     null  1  0  0
ATGOrder        running  yes  yes  8000002  null  1  4  4
ATGOrderBulk    stopped  NO   yes  null     null  1  0  0
Why are there 4 engines running for ATGOrder? I think this is what is causing the problem, but I am unable to find where these 4 engines are being created. -
Hi,
I'm using Oracle Endeca 2.3.
I encountered a problem in Data Integrator: some batches of records were missing in the front end, and when I checked the status of the graph, it showed "Graph executed successfully".
So I've connected the bulk loader to a "Universal data writer" to see the data domain status of the bulk load.
I've listed the results below; however, I'm not able to interpret the information in the status, and I've looked through the documentation but found nothing useful.
0|10000|0|In progress
0|11556|0|In progress
0|20000|0|In progress
0|30000|0|In progress
0|39891|0|In progress
0|39891|0|In progress
0|39891|0|In progress
0|39891|0|In progress
0|39891|0|In progress
0|39891|0|In progress
40009|-9|0|In progress
40009|9991|0|In progress
40009|19991|0|In progress
40009|20846|0|In progress
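To make lines like these easier to scan, a short script can tabulate them. This is only a sketch — the column meanings in the comment are my guess, since the thread never names them:

```python
# Tabulate pipe-delimited bulk-load status lines from the Universal Data
# Writer. Assumed (undocumented) column order: records committed so far,
# current batch progress, failure count, phase status.
status_lines = """\
0|10000|0|In progress
0|39891|0|In progress
40009|-9|0|In progress
40009|20846|0|In progress"""

for line in status_lines.splitlines():
    committed, progress, failures, phase = line.split("|")
    print(f"committed={committed:>6}  progress={progress:>6}  "
          f"failures={failures}  phase={phase}")
```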
Could anyone enlighten me more about this status?
Also, since these messages are part of the "Post load" phase, I'm wondering why it is still showing "In progress".
Cheers,
Khurshid
I assume there was nothing of note in the dgraph.log?
The other option is to see what happens when you either:
A) filter your data down to the records that are missing prior to the load and see what happens
Or
B) use the regular data ingest API rather than the bulk loader.
Option B will definitely perform much worse on 2.3, so it may not be feasible.
The other thing to check is that your record spec is truly unique. The only time I can remember seeing an issue like this was loading a record, then loading a different record with the same spec value. The first record would get in and then be overwritten by the second record, making it seem like the first record was dropped. Figured it would be worth checking.
Patrick Rafferty
Branchbird -
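The duplicate-spec scenario described above is easy to check for before a load. Here is a minimal sketch, assuming the records are available as dicts and the spec attribute is named `record.spec` (both the attribute name and the sample data are hypothetical):

```python
from collections import Counter

def find_duplicate_specs(records, spec_attr="record.spec"):
    """Return spec values that appear more than once; later records with
    a duplicate spec would silently overwrite earlier ones during ingest."""
    counts = Counter(r[spec_attr] for r in records)
    return {spec: n for spec, n in counts.items() if n > 1}

# Hypothetical sample: two records share the same spec value.
records = [
    {"record.spec": "SKU-1", "name": "first"},
    {"record.spec": "SKU-2", "name": "second"},
    {"record.spec": "SKU-1", "name": "third"},   # would overwrite "first"
]
print(find_duplicate_specs(records))  # → {'SKU-1': 2}
```

Running a check like this against the extract before the bulk load shows immediately whether "missing" records were actually overwritten.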
Hi there
Just wanted to know: is it a bug or a feature that when a column-store table has lowercase letters in its name, bulk load does not work and performance is ruined?
Mike
It looks like we're having performance issues here, and bulk load is failing because of the connection method being used with Sybase RS. If we use standard ODBC then everything works as it should, but as soon as we switch to the .NET world nothing happens; single inserts/updates are OK.
So, we have Application written in mixed J2ee/.NET and we use HANA applience as host for tables, Procedures and views.
This issue has been sent to support; I will update as soon as I get something from them. -
Bulk Load into SAP system from external application
Hi,
Is there a way to perform a bulk load of data into a SAP system from an external application?
Thanks
Simon
Hello,
My external application is a C program and I think I want to use IDocs and RFC to communicate with the SAP system.
Simon -
ORA-29516: Bulk load of method failed; insufficient shm-object space
Hello,
Just installed 11.2.0.1.0 on CentOS 5.5 64-bit. All dependencies satisfied, installation/linking went without a problem.
Server has 32GB RAM, using AMM with the target set at 29GB; no swapping is occurring.
No matter what I do when loading Java code (loadjava with JARs or "create and compile java source") I keep getting the error:
ORA-29516: Error in module Aurora: Assertion failure at joez.c:3311
Bulk load of method java/lang/Object.<init> failed; insufficient shm-object space
Checked shm-related kernel params, all seems to be normal:
# Controls the maximum size of a message, in bytes
kernel.msgmnb = 65536
# Controls the default maximum size of a message queue
kernel.msgmax = 65536
# Controls the maximum shared segment size, in bytes
kernel.shmmax = 68719476736
# Controls the total amount of shared memory allowed, in pages
kernel.shmall = 4294967296
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
Please help.
Hi there,
I've stumbled into exactly the same issue on 11g. After starting the database, I ran loadjava on an externally compiled class (Hello.class in my case) and got the following error:
Error while testing for existence of dbms_java.handleMd5
ORA-29516: Aurora assertion failure: Assertion failure at joez.c:3311
Bulk load of method java/lang/Object.<init> failed; insufficient shm-object space
ORA-06512: at "SYS.DBMS_JAVA", line 679
Error while creating class Hello
ORA-29516: Aurora assertion failure: Assertion failure at joez.c:3311
Bulk load of method java/lang/Object.<init> failed; insufficient shm-object space
ORA-06512: at line 1
The following operations failed
class Hello: creation (createFailed)
exiting : Failures occurred during processing
After this, I checked the trace file and saw the following error message:
peshmmap_Create_Memory_Map:
Map_Length = 4096
Map_Protection = 7
Flags = 1
File_Offset = 0
mmap failed with error 1
error message:Operation not permitted
ORA-04035: unable to allocate 4096 bytes of shared memory in shared object cache "JOXSHM" of size "134217728"
peshmmap_Create_Memory_Map:
Map_Length = 4096
Map_Protection = 7
Flags = 1
File_Offset = 0
mmap failed with error 1
error message:Operation not permitted
ORA-04035: unable to allocate 4096 bytes of shared memory in shared object cache "JOXSHM" of size "134217728"
Assertion failure at joez.c:3311
Bulk load of method java/lang/Object.<init> failed; insufficient shm-object space
It seems as though the "JOXSHM" cache of size "134217728" (which is 128MB) corresponds to the java_pool_size setting in my init.ora file:
memory_target=1000M
memory_max_target=2000M
java_pool_size=128M
shared_pool_size=256M
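The match is exact, as a quick arithmetic check shows:

```python
# The JOXSHM cache size reported in the ORA-04035 trace, in bytes, is
# exactly the java_pool_size from init.ora (128M = 128 * 1024 * 1024).
joxshm_bytes = 134217728   # from the trace file
java_pool_mb = 128         # java_pool_size=128M in init.ora

assert joxshm_bytes == java_pool_mb * 1024 * 1024
print(joxshm_bytes // (1024 * 1024), "MB")  # → 128 MB
```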
Whenever I change that size it propagates to the trace file. I also picked up that only 592MB of shm memory gets used. My df -h dump:
Filesystem Size Used Avail Use% Mounted on
/dev/sda7 39G 34G 4.6G 89% /
udev 10M 288K 9.8M 3% /dev
/dev/sda5 63M 43M 21M 69% /boot
/dev/sda4 59G 45G 11G 81% /mnt/data
shm 2.0G 592M 1.5G 29% /dev/shm
The only way I could get loadjava to work was to remove Java from the database by calling the rmjvm.sql script.
After this I installed Java again by calling the initjvm.sql script. I noticed that after these scripts my shm memory usage
increased to about 624MB, which is 32MB more than before:
Filesystem Size Used Avail Use% Mounted on
/dev/sda7 39G 34G 4.6G 89% /
udev 10M 288K 9.8M 3% /dev
/dev/sda5 63M 43M 21M 69% /boot
/dev/sda4 59G 45G 11G 81% /mnt/data
shm 2.0G 624M 1.4G 31% /dev/shm
However, after I stopped the database and started it again, my Java was broken again and calling loadjava produced
the same error message as before. The shm memory usage would also return to 592MB. Is there something I
need to do to persist the changes that initjvm and rmjvm make to the database? Or is there something else
wrong that I'm overlooking, like the memory-management settings?
Regards,
Wiehann