Golden Gate Initial Load - Performance Problem

Hello,
  I'm using the fastest initial-load method, Direct Bulk Load, with additional parameters:
BULKLOAD NOLOGGING PARALLEL SKIPALLINDEXES
Unfortunately, loading a big table (734 million rows, around 30 GB) takes about 7 hours. The same table loaded with a normal INSERT statement in parallel via a DB link takes 1 hour 20 minutes.
Why does it take so long using Golden Gate? Am I missing something?
I've also noticed that the load time with and without the PARALLEL parameter for BULKLOAD is almost the same.
Regards
Pawel
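
(For reference, the parallel direct-path INSERT over a database link that Pawel compares against might look roughly like the sketch below; the DB link name and the degree of parallelism are placeholders, not values from his post.)

ALTER SESSION ENABLE PARALLEL DML;
-- direct-path, parallel insert pulling the rows across a DB link (illustrative names)
INSERT /*+ APPEND PARALLEL(t, 8) */ INTO Schema.Table_tgt_name t
SELECT /*+ FULL(s) PARALLEL(s, 8) */ *
FROM   Schema.Table_name@source_db_link s;
COMMIT;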

Hi Bobby,
It's Extract/Replicat using SQL*Loader.
Created with the following commands:
ADD EXTRACT initial-load_Extract, SOURCEISTABLE
ADD REPLICAT initial-load_Replicat, SPECIALRUN
The Extract parameter file:
USERIDALIAS {:GGEXTADM}
RMTHOST {:EXT_RMTHOST}, MGRPORT {:REP_MGR_PORT}
RMTTASK replicat, GROUP {:REP_INIT_NAME}_0
TABLE Schema.Table_name;
The Replicat parameter file:
REPLICAT {:REP_INIT_NAME}_0
SETENV (ORACLE_SID='{:REPLICAT_SID}')
USERIDALIAS {:GGREPADM}
BULKLOAD NOLOGGING NOPARALLEL SKIPALLINDEXES
ASSUMETARGETDEFS
MAP Schema.Table_name, TARGET Schema.Table_tgt_name,
COLMAP(USEDEFAULTS),
KEYCOLS(PKEY),
INSERTAPPEND;
Regards,
Pawel
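
(A possible way to parallelise the direct load itself, sketched below, is to split the table across several initial-load Extract/Replicat pairs with @RANGE filters; the split factor and group suffixes are illustrative and mirror the templates above, not a configuration from the thread.)

-- Extract parameter file for range 1 of 3; two more groups cover ranges 2 and 3,
-- each pointing at its own SPECIALRUN Replicat ({:REP_INIT_NAME}_2, {:REP_INIT_NAME}_3)
USERIDALIAS {:GGEXTADM}
RMTHOST {:EXT_RMTHOST}, MGRPORT {:REP_MGR_PORT}
RMTTASK replicat, GROUP {:REP_INIT_NAME}_1
TABLE Schema.Table_name, FILTER (@RANGE (1, 3, PKEY));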

Similar Messages

  • Golden Gate - Initial Load using parallel process group

    Dear all,
    I am new to GG and I was wondering whether GG can support an initial load with parallel process groups. I have managed to do an initial load using "Direct Bulk Load" and "File to Replicat", but I have several big tables and the Replicat is not catching up. I am aware that GG is not ideal for the initial load, but it is complicated to explain why I am using it.
    Is it possible to use the @RANGE function while performing the initial load, regardless of which method is used (file to Replicat, direct bulk load, ...)?
    Thanks in advance

    You may use Data Pump for the initial load of large tables.
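    (A minimal Data Pump sketch for one large table might look like the lines below; the directory object, dump-file names, parallel degree and SCN are placeholders rather than values from this thread. The FLASHBACK_SCN gives a consistent export point from which GoldenGate change delivery can later resume.)
    expdp system DIRECTORY=dpump_dir DUMPFILE=bigtab_%U.dmp LOGFILE=bigtab_exp.log TABLES=scott.big_table PARALLEL=4 FLASHBACK_SCN=<scn>
    impdp system DIRECTORY=dpump_dir DUMPFILE=bigtab_%U.dmp LOGFILE=bigtab_imp.log TABLES=scott.big_table PARALLEL=4 TABLE_EXISTS_ACTION=APPEND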

  • Golden Gate Initial load from 3 tb schema

    Hi
    My source database is a 9i RDBMS on Solaris 5.10. I would like to build an 11gR2 database on Oracle Enterprise Linux.
    How can I do the initial load of a 3 TB schema from my source to the target (which is a cross-platform and different-version RDBMS)?
    Thanks

    Couple of options.
    Use old export/import to do the initial load. While that is taking place, turn on change capture on the source so any transactions that take place during exp/imp timeframe are captured in the trails. Once the init load is done, you start replicat with the trails that have accumulated since exp started. Once source and target are fully synchronized, do your cutover to the target system.
    Do an in-place upgrade of your 9i source, to at least 10g. Reason: use transportable tablespaces (or, you can go with expdp/impdp). If you go the TTS route, you will also have to take into account endian/byte ordering of the datafiles (Solaris = big, Linux = little), and that will involve time to run RMAN convert. You can test this out ahead of time both ways. Plus, you can get to 10g on your source via TTS since you are on the same platform. When you do all of this for real, you'll also be starting change capture so trails can be applied to the target (not so much the case with TTS, but for sure with Data Pump).
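    (In GGSCI terms, the change-capture side of that approach can be sketched roughly as below; group names, the trail path and the CSN are placeholders, and the pump/transfer of the trail to the target is omitted for brevity.)
    -- on the source: start change capture before the export begins
    ADD EXTRACT ext1, TRANLOG, BEGIN NOW
    ADD EXTTRAIL ./dirdat/aa, EXTRACT ext1
    START EXTRACT ext1
    -- note the SCN the export is consistent with, run the export/import,
    -- then on the target apply only the changes made after that point
    ADD REPLICAT rep1, EXTTRAIL ./dirdat/aa
    START REPLICAT rep1, AFTERCSN <scn>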

  • Initial Load Performance Decrease

    Hi colleagues,
    We noticed a huge decrease in initial load performance after installing an
    application on the PDA.
    In our first test we downloaded one data object of nearly 6.6 MB, which
    corresponds to 30,000 records with eight fields each. The initial load
    to the PDA took only 2 minutes.
    We performed a second test with the same PDA after a reinstallation and
    a new device ID. The difference here is that we installed an MI
    application related to the same data object. The same amount of data was sent
    to the PDA. It took 3 hours to download it.
    In the third test we changed the application so that it did not have the
    related data object assigned to it. In this case, the download took 2
    minutes again.
    In other words, if we have an application with the data object
    assigned, it results in a huge decrease in initial load performance.
    In both cases we used a direct connection to our LAN.
    Here are our PDA specs:
    - Windows Mobile 6 Classic
    - Processor: Marvell PXA310 at 624 MHz
    - 64MB RAM, 256MB flash ROM (190MB available to user)
    Any similar experiences?
    Thanks.

    I am confused about downloading a data object with no application.
    I thought you could only download data if it is associated with a Mobile Component; I guess you just assign the DMSCV manually?
    In any case, I have only experienced scenario two, when we were downloading an application with a mobile component and no packaging of messages. We had maybe a few thousand records to download and process, and it would take an hour or more.
    When we enabled packaging, it would take 15-30 minutes.
    Then I went with Create Setup Package, because it was simply easier to install the application and data together, with no corruption or failures from the DMSCV not going operational and not sending data, etc. Plus it was a faster download, using either FTP or ActiveSync to transfer the install files.

  • Numbers Import and Load Performance Problems

    Some initial results of converting a single 1.9MB Excel spreadsheet to Numbers:
    _Results using Numbers v1.0_
    Import 1.9MB Excel spreadsheet into Numbers: 7 minutes 3.5 seconds
    Load (saved) Numbers spreadsheet (2.4MB): 5 minutes 11.7 seconds
    _Results using Numbers v1.0.1_
    Import 1.9MB Excel spreadsheet into Numbers: 6 minutes 36.1 seconds
    Load (saved) Numbers spreadsheet (2.4MB): 5 minutes 5.8 seconds
    _Comparison to Excel_
    Excel loads the original 1.9MB spreadsheet in 4.2 seconds.
    Summary
    Numbers v1.0 and v1.0.1 exhibit severe performance problems with loading (of its own files) and importing of Excel V.x files.

    Hello
    It seems that you missed a detail.
    When a Numbers document is 1.9 MB on disk, it may be a 7 or 8 MB file to load.
    A Numbers document is not a single file but a package, which is a disguised folder.
    The document itself is described in an extremely verbose XML file stored in a gzip archive.
    Opening such a document starts with an unpack sequence, which is fast (except maybe if the available disk space is short).
    The unpacked file may easily be 10 times larger than the packed one.
    Just as an example, the xml.gz file containing the report of my bank operations for 2007 is 300 KB, but the expanded file, the one which Numbers must read, is 4 MB: yes, 13.3 times the original.
    And loading it is not sufficient; this huge file must be "interpreted" to build the display.
    Apple treats this very long XML as the TRUE description of the document, so each time Numbers must display something it works like the interpreters that old users like me knew from the BASIC available on Apple // machines.
    Adding a supplementary stage would have added time to the opening sequence, but would have sped up use of the document.
    Of course, it would also have added a supplementary stage during the save process.
    I hope that they will adopt this scheme, but of course I don't know if they will.
    Of course, the problem is much the same when we import a document from Excel or from AppleWorks.
    The app reads the original, which is stored in a compact shape, then deciphers it to create the XML code. Optimisation might reduce these tasks a bit, but they will remain time consuming.
    Yvan KOENIG (from FRANCE dimanche 27 janvier 2008 16:46:12)

  • Initial Load performs deletion TWICE!!

    Hi All,
    I face a very peculiar issue. I started an initial load on a condition object. In R/3 there are about 3 million records. The load starts:
    1) First it deletes all the records in CRM (the count becomes 0).
    2) Then it starts inserting the new records (the records get inserted and the count reaches 3 million).
    In R3AM1 the status of this adapter object (DNL_COND_A006) changes to "DONE"!!
    Now comes the problem:
    There are still some queue entries which again start deleting the entries from the condition table, and the
    count starts reducing until the record count becomes 0 again in the condition table!
    Then it again starts inserting, and the entire load stops after inserting 1.9 million records! This is very strange. Any pointers will be helpful.
    I also checked whether the mapping module is maintained twice in CRM, but that is not the case. Since the initial load takes more than a day, I also checked whether there are any jobs scheduled, but there are no jobs scheduled either.
    I am really confused as to why the deletion should happen twice. Any pointers will be highly appreciated.
    Thanks,
    Abishek

    Hi Abishek,
    This is really strange and I do not have any clue. What I can suggest is that before you start the load of DNL_COND_A006, load the CNDALL & CND objects again. Sometimes CNDALL resolves this kind of issue.
    Good luck.
    Vikash.

  • Sql loader performance problem with xml

    Hi,
    I have to load a 400 MB XML file into the free Oracle DB on my local machine.
    I have tested a one-record XML file and was able to load it successfully, but the 400 MB file has been frozen for half an hour and has not even started.
    Is this normal? Is there any chance I will be able to load it if I just wait?
    Is there any faster solution?
    I have created the table below:
    CREATE TABLE test_xml
    (
      COL_ID  VARCHAR2(1000),
      IN_FILE XMLTYPE
    )
    XMLTYPE IN_FILE STORE AS CLOB;
    and the control file below:
    LOAD DATA
    CHARACTERSET UTF8
    INFILE 'test.xml'
    APPEND
    INTO TABLE product_xml
    (
      col_id  FILLER CHAR(1000),
      in_file LOBFILE(CONSTANT "test.xml") TERMINATED BY EOF
    )
    Is there anything I am doing wrong? Thanks for any advice.

    SQL*Loader: Release 11.2.0.2.0 - Production on H. Febr. 11 18:57:09 2013
    Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
    Control File: prodxml.ctl
    Character Set UTF8 specified for all input.
    Data File: test.xml
    Bad File: test.bad
    Discard File: none specified
    (Allow all discards)
    Number to load: ALL
    Number to skip: 0
    Errors allowed: 5000
    Bind array: 64 rows, maximum of 256000 bytes
    Continuation: none specified
    Path used: Conventional
    Table PRODUCT_XML, loaded from every logical record.
    Insert option in effect for this table: APPEND
    Column Name Position Len Term Encl Datatype
    COL_ID FIRST 1000 CHARACTER
    (FILLER FIELD)
    IN_FILE DERIVED * EOF CHARACTER
    Static LOBFILE. Filename is bv_test.xml
    Character Set UTF8 specified for all input.
    SQL*Loader-605: Non-data dependent ORACLE error occurred -- load discontinued.
    ORA-01652: unable to extend temp segment by 128 in tablespace TEMP
    Table PRODUCT_XML:
    0 Rows successfully loaded.
    0 Rows not loaded due to data errors.
    0 Rows not loaded because all WHEN clauses were failed.
    0 Rows not loaded because all fields were null.
    Space allocated for bind array: 256 bytes(64 rows)
    Read buffer bytes: 1048576
    Total logical records skipped: 0
    Total logical records rejected: 0
    Total logical records discarded: 0
    Run began on H. Febr. 11 18:57:09 2013
    Run ended on H. Febr. 11 19:20:54 2013
    Elapsed time was: 00:23:45.76
    CPU time was: 00:05:05.50
    This is the log.
    I have truncated everything; I cannot understand why I am not able to load 400 MB into 4 GB.
    Windows is 32-bit and not licensed.
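    (The ORA-01652 in the log means the TEMP tablespace filled up while the XMLType document was being staged, so the run was discontinued rather than hung. One option, assuming you can spare the disk space, is to enlarge TEMP and retry; the file path and sizes below are only an example.)
    -- give TEMP more room so SQL*Loader can stage the 400 MB XML document
    ALTER TABLESPACE temp
      ADD TEMPFILE 'C:\oradata\temp02.dbf'
      SIZE 1G AUTOEXTEND ON NEXT 256M MAXSIZE 4G;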

  • Improving initial load performance.

    Hi ,
    Please let me know the setup and prerequisites required for running parallel requests, so as to speed up the connection object download.
    I need to download Connection Object and Point of Delivery from IS-U to CRM. Is there any other way to improve the performance?
    Regards,
    Rahul

    Hello,
    Could you please tell us more about your scenario? Using the connection object ID it may not be easy to start many requests in parallel, as this field is alphanumeric if I remember well... meaning that a range between 1 and 2 will include 10, 11, 100, etc.
    That's why, within a migration process, SAP introduced a new concept (via table ECRM_TEMP_OBJ) to replicate into CRM only those connection objects that are not already there. This is explained on page 12 of the cookbook. Furthermore, as far as replication performance is concerned, I highly recommend reading these OSS notes carefully (which are valid for IS-U technical objects as well):
    Note 350176 - CRM/EBP: Performance improvement during exchange of data
    Note 426159 - Adapter: Running requests in parallel
    Regards,
    Nicolas Busson.

  • How do we improve master data load performance

    Hi Experts,
    Could you please tell me how we can identify a master data load performance problem and what can be done to improve master data load performance.
    Thanks in Advance.
    Nitya

    Hi,
    -Alpha conversion is defined at infoobject level for objects with data type CHAR.
    A characteristic in SAP NetWeaver BI can use a conversion routine like the conversion routine called ALPHA. A conversion routine converts data that a user enters (in so called external format) to an internal format before it is stored on the data base.
    The most important conversion routine - due to its common use - is the ALPHA routine that converts purely numeric user input like '4711' into '004711' (assuming that the characteristic value is 6 characters long). If a value is not purely numeric like '4711A' it is left unchanged.
    We have found out that in customers' systems there are quite often characteristics using a conversion routine like ALPHA that have values in the database which are not in internal format; e.g. one might find '4711' instead of '004711' in the database. It could even happen that there is also a value '04711', or ' 4711' (leading space).
    This possibly results in data inconsistencies, also for query selection; i.e. if you select '4711', this is converted into '004711', so '04711' won't be selected.
    -The check for referential integrity occurs for transaction data and master data if they are flexibly updated. You determine the valid InfoObject values.
    - SID generation is a must when loading transaction data with respect to master data, in order to call master data at BEx level.
    Regards,
    rvc

  • Initial Load Error - No generation performed. Call transaction GN_START

    Hi Folks,
    We are doing middleware configuration for data migration from R/3 to CRM. We have followed the "Best Practices" configuration guide.
    Systems used: CRM 2007 and ECC 6.0
    Issue
    While performing the initial load, the system throws the following errors:
    001- No generation performed. Call transaction GN_START
    002-Due to system errors the Load is prohibited (check transaction MW_CHECK)!
    After calling transaction GN_START, the system asks for job scheduling, whereas I have already scheduled it:
    A job is already scheduled periodically.
    Clicking on 'Continue' will create another job
    that starts immediately.
    After checking (MW_CHECK), the message displayed is:
    No generation performed. Call transaction GN_START.
    If anybody has encountered a similar issue and has resolved it, their guidance will be greatly appreciated.
    Thanks in Advance
    VEERA B

    Veera,
    We also faced the same problem when we did the upgrade from CRM 4.0 to CRM 2007.
    Go to SMWP, where you can see all the errors related to middleware together with the error message, and try to remove the error.
    Also, please check RZ20 and activate the middleware trace tree.
    Regards
    Vinod

  • Critical performance problem upon bulk load of groups

    All (including product development),
    I think there are missing indexes in wwsec_flat$ and wwsec_sys_priv$. Anyway, I'd like assistance on fixing the critical performance problems I see, properly. Read on...
    During and after a bulk load of a few (about 500) users and groups from an external database, it becomes evident that there's a performance problem somewhere. Many of the calls to wwsec_api.addGroupToList took several minutes to finish. Afterwards the machine went to 100% CPU just from logging in with the portal30 user (which happens to be the group owner for all the groups).
    Running SQL trace points in the direction of the following SQL statement:
    SELECT ID,PARENT_ID,NAME,TITLE_ID,TITLEIMAGE_ID,ROLLOVERIMAGE_ID,
    DESCRIPTION_ID,LAYOUT_ID,STYLE_ID,PAGE_TYPE,CREATED_BY,CREATED_ON,
    LAST_MODIFIED_BY,LAST_MODIFIED_ON,PUBLISHED_ON,HAS_BANNER,HAS_FOOTER,
    EXPOSURE,SHOW_CHILDREN,IS_PUBLIC,INHERIT_PRIV,IS_READY,EXECUTE_MODE,
    CACHE_MODE,CACHE_EXPIRES,TEMPLATE FROM
    WWPOB_PAGE$ WHERE ID = :b1
    I checked the existing indexes, and see that the following ones are missing (I'm about to test with these, but have not yet done so):
    CREATE UNIQUE INDEX "PORTAL30"."WWSEC_FLAT_IX_GROUP_ID"
    ON "PORTAL30"."WWSEC_FLAT$"("GROUP_ID")
    TABLESPACE "PORTAL" PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 160K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 1 FREELISTS 1)
    LOGGING
    CREATE UNIQUE INDEX "PORTAL30"."WWSEC_FLAT_IX_PERSON_ID"
    ON "PORTAL30"."WWSEC_FLAT$"("PERSON_ID")
    TABLESPACE "PORTAL" PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 160K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 1 FREELISTS 1)
    LOGGING
    CREATE UNIQUE INDEX "PORTAL30"."WWSEC_SYS_PRIV_IX_PATCH1"
    ON "PORTAL30"."WWSEC_SYS_PRIV$"("OWNER", "GRANTEE_GROUP_ID",
    "GRANTEE_TYPE", "OWNER", "NAME", "OBJECT_TYPE_NAME")
    TABLESPACE "PORTAL" PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 80K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 1 FREELISTS 1)
    LOGGING
    Note that when I deleted the newly inserted groups, the CPU consumption immediately went down from 100% to some 2-3%.
    This behaviour has been observed on a Sun Solaris system, but I think it's the same on NT (I have observed it during the bulk load on my NT laptop, but so far have not had the time to test further.).
    Also note: In the call to addGroupToList, I set owner to true for all groups.
    Also note: During loading of the groups, I logged a few errors, all of the same type ("PORTAL30.WWSEC_API", line 2075), as follows:
    Error: Problem calling addGroupToList for child group'Marketing' (8030), list 'NO_OSL_Usenet'(8017). Reason: java.sql.SQLException: ORA-06510: PL/SQL: unhandled user-defined exception ORA-06512: at "PORTAL30.WWSEC_API", line 2075
    Please help. If you like, I may supply the tables and the java program that I use. It's fully reproducable.
    Thanks,
    Erik Hagen (you may call me on +47 90631013)
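    (The "I checked the existing indexes" step above can be repeated against the data dictionary; a minimal sketch, reusing the PORTAL30 owner from the post:)
    SELECT table_name, index_name, column_name, column_position
    FROM   dba_ind_columns
    WHERE  table_owner = 'PORTAL30'
    AND    table_name IN ('WWSEC_FLAT$', 'WWSEC_SYS_PRIV$')
    ORDER  BY table_name, index_name, column_position;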

    YES!
    I have now tested with the missing indexes inserted. It seems the call to addGroupToList takes just as long as before, but the result is much better: WITH THE INDEXES DEFINED, THERE IS NO LONGER A PERFORMANCE PROBLEM!! The index definitions that I used are listed below (I added these to the ones that are already there in Portal 3.0.8, but I guess some of those could have been deleted).
    About the info at http://technet.oracle.com:89/ubb/Forum70/HTML/000894.html: Yes! Thanks! Very interesting, and I guess you found the cause of the error messages and maybe also of the performance problem during bulk load (I'll look into it as soon as possible and report what I find).
    Note: I have made a pretty foolproof and automated installation script (or actually, it's part of my Java program), that will let anybody interested recreate the problem. Mail your interest to [email protected].
    ============================================
    CREATE INDEX "PORTAL30"."LDAP_WWSEC_PERS_IX1"
    ON "PORTAL30"."WWSEC_PERSON$"("MANAGER")
    TABLESPACE "PORTAL" PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 32K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 1 FREELISTS 1)
    LOGGING;
    CREATE INDEX PORTAL30.LDAP_WWSEC_PERS_IX2
    ON PORTAL30.WWSEC_PERSON$("ORGANIZATION")
    TABLESPACE PORTAL PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 32K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 1 FREELISTS 1)
    LOGGING;
    CREATE INDEX PORTAL30.LDAP_WWSEC_PERS_PK
    ON PORTAL30.WWSEC_PERSON$("ID")
    TABLESPACE PORTAL PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 32K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 1 FREELISTS 1)
    LOGGING;
    CREATE INDEX PORTAL30.LDAP_WWSEC_PERS_UK
    ON PORTAL30.WWSEC_PERSON$("USER_NAME")
    TABLESPACE PORTAL PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 32K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 1 FREELISTS 1)
    LOGGING;
    CREATE INDEX PORTAL30.LDAP_WWWSEC_FLAT_UK
    ON PORTAL30.WWSEC_FLAT$("GROUP_ID", "PERSON_ID",
    "SPONSORING_MEMBER_ID")
    TABLESPACE PORTAL PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 256K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 0 FREELISTS 1);
    CREATE INDEX PORTAL30.LDAP_WWWSEC_FLAT_PK
    ON PORTAL30.WWSEC_FLAT$("ID")
    TABLESPACE PORTAL PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 256K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 0 FREELISTS 1);
    CREATE INDEX PORTAL30.LDAP_WWWSEC_FLAT_IX5
    ON PORTAL30.WWSEC_FLAT$("GROUP_ID", "PERSON_ID")
    TABLESPACE PORTAL PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 256K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 0 FREELISTS 1);
    CREATE INDEX PORTAL30.LDAP_WWWSEC_FLAT_IX4
    ON PORTAL30.WWSEC_FLAT$("SPONSORING_MEMBER_ID")
    TABLESPACE PORTAL PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 256K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 0 FREELISTS 1);
    CREATE INDEX PORTAL30.LDAP_WWWSEC_FLAT_IX3
    ON PORTAL30.WWSEC_FLAT$("GROUP_ID")
    TABLESPACE PORTAL PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 256K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 0 FREELISTS 1);
    CREATE INDEX PORTAL30.LDAP_WWWSEC_FLAT_IX2
    ON PORTAL30.WWSEC_FLAT$("PERSON_ID")
    TABLESPACE PORTAL PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 256K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 0 FREELISTS 1);
    CREATE INDEX "PORTAL30"."LDAP_WWSEC_SYSP_IX1"
    ON "PORTAL30"."WWSEC_SYS_PRIV$"("GRANTEE_GROUP_ID")
    TABLESPACE "PORTAL" PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 56K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 1 FREELISTS 1)
    LOGGING;
    CREATE INDEX "PORTAL30"."LDAP_WWSEC_SYSP_IX2"
    ON "PORTAL30"."WWSEC_SYS_PRIV$"("GRANTEE_USER_ID")
    TABLESPACE "PORTAL" PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 56K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 1 FREELISTS 1)
    LOGGING;
    CREATE INDEX "PORTAL30"."LDAP_WWSEC_SYSP_IX3"
    ON "PORTAL30"."WWSEC_SYS_PRIV$"("OBJECT_TYPE_NAME", "NAME")
    TABLESPACE "PORTAL" PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 56K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 1 FREELISTS 1)
    LOGGING;
    CREATE INDEX "PORTAL30"."LDAP_WWSEC_SYSP_PK"
    ON "PORTAL30"."WWSEC_SYS_PRIV$"("ID")
    TABLESPACE "PORTAL" PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 56K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 1 FREELISTS 1)
    LOGGING;
    CREATE INDEX "PORTAL30"."LDAP_WWSEC_SYSP_UK"
    ON "PORTAL30"."WWSEC_SYS_PRIV$"("OBJECT_TYPE_NAME",
    "NAME", "OWNER", "GRANTEE_TYPE", "GRANTEE_GROUP_ID",
    "GRANTEE_USER_ID")
    TABLESPACE "PORTAL" PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 88K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 1 FREELISTS 1)
    LOGGING;
    ==================================
    Thanks,
    Erik Hagen

  • Agentry Sales Manager Initial Load problem

    Hello,
    We've implemented the Agentry Sales Manager solution. Everything works well in the development and test environments, but in production we have performance issues for specific users:
    We have a user with:
    5900 Accounts
    21900 Contact Persons
    These are very large numbers, but the person responsible for our OSS question says this is feasible in the Agentry environment.
    The problem occurs when we perform the initial load/transmit for this user: the Accounts are processed as they should be, but during the processing of the contact persons something goes wrong:
    I see that the function module /SYCLO/CRMMD_DOMYCONTACT_GET is being started and completely processed (Initially we had a dump with a timeout, but this has been solved).
    Then the Agentry server is processing the results of that function module:
    In the log I notice these lines:
    getDocumentLinks::begin
    getDocumentLinks::getDocumentLinks
    Afterwards the server processes the results via the steplets, after which the data is processed on the device (iPad). Then the employeeFetch should be triggered.
    In our test with a user with less data this happens, but in this case we notice the following:
    2015/04/16 15:46:48.852:           + BackEnd=Java-1
    2015/04/16 15:46:48.852:             + BackEnd=Java-1
    2015/04/16 15:46:48.852:               com.syclo.sap.FetchSession::endFetchRemoval::::begin
    2015/04/16 15:46:48.852:             + BackEnd=Java-1
    2015/04/16 15:46:48.852:               com.syclo.sap.FetchSession::endFetchRemoval::::--------------------------------
    2015/04/16 15:46:48.852:           + BackEnd=Java-1
    2015/04/16 15:46:48.852:             + BackEnd=Java-1
    2015/04/16 15:46:48.852:               com.syclo.sap.FetchSession::beginFetchObjectRead::::begin
    2015/04/16 15:46:48.852:             + BackEnd=Java-1
    2015/04/16 15:46:48.852:               com.syclo.sap.FetchSession::beginFetchObjectRead::::--------------------------------
    2015/04/16 15:46:48.852:           + BackEnd=Java-1
    2015/04/16 15:46:48.852:             + BackEnd=Java-1
    2015/04/16 15:46:48.852:               com.syclo.sap.FetchSession::endFetchObjectRead::::begin
    2015/04/16 15:46:48.852:             + BackEnd=Java-1
    2015/04/16 15:46:48.852:               com.syclo.sap.FetchSession::endFetchObjectRead::::--------------------------------
    2015/04/16 15:49:21.108: + Thread=4172
    2015/04/16 15:49:21.108:   + Server=Agentry
    2015/04/16 15:49:21.108:     + BackEnd=Java-1
    2015/04/16 15:49:21.108:       Java Back End: current jvm memory usage is 1682243584 bytes
    2015/04/16 15:49:38.096:   + Server=Agentry
    2015/04/16 15:49:38.096:     + BackEnd=Java-1
    2015/04/16 15:49:38.096:       Java Back End: current jvm memory usage is 1682309120 bytes
    2015/04/16 15:49:55.100:   + Server=Agentry
    2015/04/16 15:49:55.100:     + BackEnd=Java-1
    2015/04/16 15:49:55.100:       Java Back End: current jvm memory usage is 1682374656 bytes
    After the last line nothing else happens.
    In the Agentry GUI I also see that the connection has disappeared, without any error or exception whatsoever...
    Does anybody have an idea what might cause this issue?
    We've set the timeout and keepalive parameters to 36000 seconds (10 hours) in the agentry.ini, so I don't think it is a timeout.
    Thanks in advance!
    Kind regards,
    Robin

    Hi Jason,
    Thanks for your answer; it is a standalone Agentry server (without SMP). It also looks to me like the amount of data being fetched is too big, but the customer wants to get it on the device, as the person on OSS said it should be possible.
    When I look in the AgentryGUI on the server during the fetch I notice the following (see screenshot below):
    The fetch is still busy but the connection is gone. At the time of the screenshot the fetch had been running for more than 3 hours (9:27 AM to 12:51 PM), but the connection for that user had already been gone from the AgentryGUI since around 11:00 AM.
    Even stranger is that no exception is thrown anywhere. The process on the server continues until the complete data set is processed in the steplets (seen in the log). Then the server tries to allocate more JVM heap space, but at some point the process just stops instead of continuing.
    The data is also not sent to the device at that point, so the problem seems to be somewhere on the Agentry server.
    The server's memory is 8 GB and I've set the max heap size variable in the agentry.ini as follows:
    maxHeapSize=2048
    In the log I see that the server does not reach that cap.
    We run on an iPad; only iOS devices were in scope.
    Any ideas on what else we might change?
    Kind regards,
    Robin

  • Perform rollback occurs during initial load of material

    Hi Gurus,
    When we try to do the initial load of materials, only some of the materials are replicated to SRM. In R3AC1 we have a filter to take only the materials with a Purchasing view. We have no other filter. Although there are 576 materials that match this filter, only 368 materials are replicated to SRM.
    One thing we have observed is that when we look at SM21 (System Log), we see "Perform rollback" actions. Below are the details of the log. Can anyone help with our issue?
    Details Page 2 Line 30 System Log: Local Analysis of sapsrmt 1
    Time: 23:52:59  Type: DIA  Nr: 003  Clt: 013  User: ALEREMOTE  Grp/N: R6 8
    Text: Perform rollback
    Details
    Recording at local and central time: 29.11.2006 23:52:59
    Task: 87262
    Process: Dialog work process No. 003
    User: ALEREMOTE
    Session: 1
    Program: SAPMSSY1
    Problem class: W (Warning)
    Package: STSK
    Further details for this message type
    Module name: thxxhead
    Line: 1300
    Caller: ThIRoll
    Reason/called: roll ba
    No documentation for syslog message R6 8 exists
    Technical details
    File: 4
    Offset: 456660
    RecFm: m
    Variable message data: ThIRoll, roll ba, thxxhead, 1300

    Hi,
    Some of our material groups were problematic. After removing these, the problem was resolved.
    FYI

  • JSF page 'Initial load' problem

    I've found several threads touching on this already, but none seem to have a solution.
    When JSF loads a JSP page for the first time, it goes through the restore view phase which creates an initial view (as there isn't a current one to restore). It then goes directly to the render response phase.
    My problem is that I have a JSP/JSF page that I pass parameters to via HTTP GET. For example:
    http://localhost:8080/jsf/region.jsp?locationForm:directorate=1&locationForm=locationForm
    Because the first load goes directly to the render response phase, the parsing of these parameters is never done and the page does not update as expected.
    The second time you perform the same request, JSF goes through the standard request processing lifecycle and works as you would expect, setting directorate to 1 in the backing bean and displaying an updated page.
    Is there any way to change JSF's default behaviour on a JSP initial load to do the whole lifecycle? Is there another way to get around this, short of loading the page twice to ensure it has the right information in it (which would be quite a hack)?
    I need to use HTTP GET (as opposed to HTTP POST) because:
    I'm using a technique of a hidden iframe that loads dynamically created JavaScript to update a dropdown list (DDL) on the main page without reloading the page in its entirety. This is to minimise network chatter, as the system will run on a 56k network. I have an onchange event on my JSF DDL that calls JavaScript to reload the hidden iframe.

    Thanks for the replies.
    I tried both of the suggested options
    1. If your bean is managed (declared as managed bean in faces_config), you can set the initial value of the property as, for example, #{param.locationFor }.
    Unfortunately I can't use this option, as the backing bean I'm using has to be session scoped. This is because the DDL options are set by the iframe page, not the main page. There could be many requests/responses between client and server before the user finally presses the submit button. If I change the backing bean to request scope, I end up getting "Validation Error: Value is not valid" for the DDL, because the selected ID is not in the backing bean's list of possible values for the DDL. #{param} can't be used for session-scoped backing beans.
    2. If you don't want to use the managed bean properties, you can go get your parameters in your bean's constructor.
    I'm unable to use this option either. The backing bean is shared between the main page and the hidden iframe page. When the main page loads, the backing bean's constructor is called, but that isn't the time when the parameters need to be parsed. When the iframe page is loaded for the first time (via a JavaScript onchange on a DDL on the main page) using http://localhost/iframe.jsf?iframeForm:ddlId=1&iframeForm=iframeForm is when I need to parse the parameters, by which time the backing bean is already instantiated and the constructor has already been called.
    I'm looking at where else I could get the parameters other than the constructor. I might be able to do it elsewhere.
    My guess as to why the following code works is that it's not using a backing bean and isn't updating backing bean values on the first run:
    <f:view>
    <h:outputText value="param= #{param}"/>
    </f:view>
    To replicate the problem, create a simple backing bean, for example:
    public class sample {
        private Integer selectedId;
        public Integer getSelectedId() {
            return selectedId;
        }
        public void setSelectedId(Integer selectedId) {
            this.selectedId = selectedId;
        }
    }
    Then create the following sample.jsp:
    <!doctype html public "-//w3c//dtd html 4.01 transitional//en">
    <!--
      Copyright 2004 ArcMind, Inc. All Rights Reserved.
    -->
    <%@taglib uri="http://java.sun.com/jsf/html" prefix="h"%>
    <%@taglib uri="http://java.sun.com/jsf/core" prefix="f"%>
    <html>
    <head>
    <f:view>
      <h:form id="iframeForm">
        <h:panelGroup>
          <h:inputText id="selectedId" value="#{sample.selectedId}" />
        </h:panelGroup>
      </h:form>
    </f:view>
    </head>
    </html>
    Then try going to sample.jsp?iframeForm:selectedId=10&iframeForm=iframeForm (similar to the request my main page is doing via JavaScript to populate the hidden iframe).
    The first time you do this, the text box will be populated with 0 (i.e., it skipped the JSF lifecycle and ignored your 10 input). The second and subsequent times it works as expected, with the text box containing the number 10.

  • Performance problem in loading the Mater data attributes 0Equipment_attr

    Hi Experts,
    We have a performance problem loading the master data attributes 0EQUIPMENT_ATTR. It runs as a pseudo delta (full update), and the same InfoPackage runs with different selections. The problem we are facing is that the load runs 2 to 4 hours in the US morning, but in the US night it runs for 12-22 hours before finishing successfully, even though it pulls fewer records (which are OK).
    When I checked the R/3-side job log (SM37), the job is running late there too. It shows the first and second IDocs coming in quickly, while the third and fourth IDocs arrive in BW only after a gap of 5-7 hours, are saved into the PSA, and then go to the InfoObject.
    We have user exits for the DataSource and ABAP routines, but they run fine in little time and the code is not very complex.
    Can you please explain and suggest steps on the R/3 side and the BW side? How can I fix this performance issue?
    Thanks,
    dp

    Hi,
    check this link for data load performance. Under "Extraction Performance" you will find many useful hints.
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/3a699d90-0201-0010-bc99-d5c0e3a2c87b
    Regards
    Andreas
