Data loading slow in BI server

hi all,
We recently installed our BI production server. While loading data, the BI functional consultants complain of poor speed: a process that took only a few seconds in the development and quality systems takes several minutes in production, even though the production server is fully dedicated to BI and has 32 GB of RAM. As the Basis person, please help me identify what is causing the production server to be so slow, since it will be used by users who won't want to wait that long. Also, what can I do to improve the speed of the server?
thanks,
Priya

Loading from the PSA always takes less time.
However, a load-time issue like this is quite generic; the production system certainly has more data, so development and test are not always a valid reference.
Anyway, I think you should focus on tuning (using transaction SBIW).
For example:
on R/3 you customize the data load;
on BW, the data mart settings.
So, in R/3:
maximum package size in kB (default: 10,000 kB)
maximum package size in number of records (maximum memory consumption per data package is around 2 × max. lines × 1000 bytes)
frequency of info IDocs (default: 1; recommended: 5-10)
max. proc. (= maximum number of dialog work processes allocated to this data load for processing extracted data in BW)
In BW, the recommended values are:
– max size: 20,000 – 50,000 (kbyte)
– max lines: 50,000 – 100,000 (Data lines)
– frequency: 10
– max proc.: 2 – 5
There are a lot of parameters to set, but I suggest you proceed step by step. The sketch below works through the memory rule of thumb.
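As a back-of-the-envelope illustration of the package-size rule of thumb (a sketch only; the factor of 2 × 1000 bytes per line is the approximation quoted above, not an exact figure):

def package_memory_mb(max_lines):
    # Rule of thumb from this thread: ~2 x max. lines x 1000 bytes per package.
    return 2 * max_lines * 1000 / (1024 * 1024)

for lines in (50_000, 100_000):
    print(f"{lines} lines -> ~{package_memory_mb(lines):.0f} MB per data package")
# 50,000 lines -> ~95 MB; 100,000 lines -> ~191 MB

Multiply by max proc. (the number of parallel dialog work processes) to estimate the peak memory a single load can claim.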
ciao

Similar Messages

  • EIS data load slow

    Greetings all,
    Windows Server 2003 SP2. SQL Server 2000 SP3. Hyperion 9.3.1.
    I am running an EIS data load, and the load time is progressively longer depending on the number of records loaded.
    For example, when doing a data load of 1,000,000 records, it takes 2 minutes to load the first 500,000 records but an additional 10 minutes to load the next 500,000 records.
    Is there any reason for this, and can I do anything about it?
    Regards,
    Paul.

    Since non-streaming mode is the only mode EIS uses to load data, it's not surprising that data loads are slow.
    If you want to continue using EIS for data loads, I believe all you can do is refine/tune your user-defined SQL.
    You've mentioned just a few million records. However, in volume-intensive environments where records number in the hundreds or thousands of millions, most people would use EIS only for the metadata load and defer the data load to flat-file extracts, which is much faster. At the same time, you'd have to consider whether the extraction time required is so long that it defeats the purpose.
    - Natesh

  • Data Load slow on some days

    Dear SDN Team,
    I have a load into an ODS which sometimes takes 5 minutes to finish but on some days takes more than 40 minutes (though the number of records is the same).
    The problem is not in R/3 but in BW itself.
    In the details tab I notice the following:
    The time stamp at 'Transfer (IDocs and tRFC)' shows 5:05.
    The time stamp at 'Processing data packet' shows 5:50.
    Kindly help me as this is really annoying.
    Best Regards,
    Krishna.
    Edited by: Siegfried Szameitat on Oct 29, 2008 12:39 PM
    I deleted your offer of points as it is against the rules.

    Hi,
    1) You can check by clicking on the last step before the first data package in the details tab.
    This will display a time below; once you click on a step it gives you a time, and if you click on a step below it gives you another time in the same box.
    This is how I determine the time elapsed between different steps in the manage tab.
    2) To find the job, search for jobs beginning with BI* under user ALEREMOTE.
    You will get a long list; compare the start times and look for the one which matches the start of the job in BW, or is slightly after it.
    You will have to go into the job log of each job to see the data source; the job whose log contains your data source is the one you want.
    Thanks
    Ajeet
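    The two timestamps quoted in the question already localize the delay; a trivial sketch of the arithmetic (nearly all of the time is spent after the IDoc/tRFC transfer, i.e. in BW-side packet processing):

    from datetime import datetime

    # Timestamps quoted above: transfer step vs. 'processing data packet'.
    transfer = datetime.strptime("5:05", "%H:%M")
    processing = datetime.strptime("5:50", "%H:%M")

    # The gap sits between the tRFC transfer and packet processing,
    # which points at BW-side processing rather than R/3 extraction.
    print("gap:", processing - transfer)  # gap: 0:45:00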

  • Error in data load from application server

    Well, this problem again!
    In the DataSource for data load by flat file, on the Extraction tab I selected 'Load Text-Type File from Application Server'.
    On the Proposal tab:
    Converter: Separated with Separator (for Example, CSV).
    No. of Data Records: 9198
    The data load was successful.
    Then I added one record:
    Converter: Separated with Separator (for Example, CSV).
    No. of Data Records: 9199
    Now an error message appears: "Cannot convert character sets for one or more characters".
    I looked at line 9199 in the file and it does not contain any special character. I used AL11 and the debugger to inspect this line.
    When I load the file from the local workstation, this error does not occur.
    What is happening?

    Hi Rodrigo,
    What type of logical file path have you created in the application server for loading this file?
    Is it a UNIX file path or an ASCII file path?
    Did you check how the file looks in the application server via AL11?
    Prathish
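    If you can copy the file down from the application server, a quick way to pinpoint the offending byte (a sketch; the file name and code page are placeholders, not values from this thread):

    # Scan the file byte-wise and report the first position that cannot be
    # decoded in the expected code page. 'flatfile.csv' and 'utf-8' are
    # placeholder values - use your file and your server's code page.
    ENCODING = "utf-8"

    with open("flatfile.csv", "rb") as f:
        for line_no, raw in enumerate(f, start=1):
            try:
                raw.decode(ENCODING)
            except UnicodeDecodeError as exc:
                print(f"line {line_no}, column {exc.start}: byte {raw[exc.start:exc.start + 1]!r}")
                break

    A byte that looks harmless in AL11 can still be invalid in the server's code page, which may explain why the same file loads fine from the local workstation (where the front end handles the conversion).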

  • Data load from Legacy system to BW Server through BAPI

    Requirements: We have different kinds of legacy systems and an SAP BW server. We want to load all legacy system data into the SAP BW server using BAPIs. Before loading we have to validate all data. If there is bad or missing data, we have to let the legacy system user/operator know to fix the data in their system, with a detailed explanation. When it is fixed, we have to load the data again.
    Load Scenario:  We have two options to load data from legacy systems to BW Server.
    1.     We need to load data directly from legacy system to BW Server using BAPI program.
    2.     Legacy Systems data would be in workstations or flash drive as .txt (one line separated by comma) or .csv file. Need to load from .txt /.csv file to BW Server using BAPI program.
    What we want in the BAPI program code?
    It will Read / Retrieve data from text / csv file and will put into the Internal table. Internal table structure would be based on BAPI InfoObject structure.
    Call the BAPI InfoObject function module 'BAPI_IOBJ_CREATE' to create the InfoObject, include all necessary/default components, do the error check, load the data, and return the status.
    Could someone help me with sample code, please? I am new to ABAP/BAPI coding.
    Is there any better way to load data from a legacy system to the BW server? BTW, we are using BW 3.10. Is there a better option in BI 7.0 to resolve this? I appreciate your help.

    my answers:
    1. This is a scenario for a data push into SAP BW. You can only use SOAP-based transfer of data.
    http://help.sap.com/saphelp_nw04/helpdata/en/fd/8012403dbedd5fe10000000a155106/frameset.htm
    (here for BW 3.5, but you'll find something similar for 7.0)
    In this scenario an RFC is dynamically created for every InfoSource you need to transfer data to.
    2. You can make a process chain for each data load, and call the RFC "RSPC_API_CHAIN_START" to start the chain externally (see the sketch below).
    The second solution is simpler and available in every release.
    Regards,
    Sergio
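    As an illustration of option 2, a minimal sketch of starting a process chain from outside via RFC. It assumes the pyrfc library (the Python binding for the SAP NW RFC SDK); the connection details, chain name, and the I_CHAIN/E_LOGID parameter names are assumptions to verify in SE37:

    # Sketch: start a BW process chain externally by calling the RFC-enabled
    # function module RSPC_API_CHAIN_START (named in the reply above).
    # Connection details and parameter names are assumptions, not tested values.
    from pyrfc import Connection

    conn = Connection(ashost="bw-host", sysnr="00", client="100",
                      user="RFC_USER", passwd="secret")
    try:
        result = conn.call("RSPC_API_CHAIN_START", I_CHAIN="ZLEGACY_LOAD")
        print("chain started, log id:", result.get("E_LOGID"))
    finally:
        conn.close()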

  • 4.2.3/.4 Data load wizard - slow when loading large files

    Hi,
    I am using the data load wizard to load csv files into an existing table. It works fine with small files up to a few thousand rows. When loading 20k rows or more the loading process becomes very slow. The table has a single numeric column for primary key.
    The primary key is declared at "shared components" -> logic -> "data load tables" and is recognized as "pk(number)" with "case sensitive" set to "No".
    While loading data, this configuration leads to the execution of the following query for each row:
    select 1 from "KLAUS"."PD_IF_CSV_ROW" where upper("PK") = upper(:uk_1)
    which can be found in the v$sql view while loading.
    It makes the loading process slow, because the upper function prevents any index from being used.
    It seems that the "case sensitive" setting is not evaluated.
    Dropping the numeric index for the primary key and using a function-based index does not help.
    The explain plan shows an implicit "to_char" conversion:
    UPPER(TO_CHAR(PK))=UPPER(:UK_1)
    This conversion is missing in the query text, but maybe it is necessary for the function-based index to work.
    Please provide a solution or workaround for the data load wizard to work with large files in an acceptable amount of time.
    Best regards
    Klaus

    Nevertheless, a bulk loading process is what I would really like to have as part of the wizard.
    If all of the CSV files have an identical structure:
    use the Excel2Collection plugin (Process Type Plugin - EXCEL2COLLECTIONS),
    create a VIEW on the collection (makes it easier elsewhere), and
    create a procedure (in a package) to bulk-process it.
    The most important thing is to have, somewhere in the package (i.e. your code that is not part of APEX), information that clearly states which columns in the collection map to which columns in the table and view, and to the variables (APEX_APPLICATION.g_fxx()) used for tabular forms.
    MK
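    Following Klaus's own observation that the explain plan contains an implicit TO_CHAR, one workaround to try is a function-based index that matches the generated predicate UPPER(TO_CHAR("PK")) exactly. A sketch (not verified against APEX 4.2; connection details are placeholders):

    # Create a function-based index matching the exact expression in the
    # wizard's duplicate check, so the per-row lookup can use an index.
    # Connection details are placeholders.
    import cx_Oracle

    conn = cx_Oracle.connect(user="KLAUS", password="secret",
                             dsn="localhost/XEPDB1")
    cur = conn.cursor()
    cur.execute('CREATE INDEX pd_if_csv_row_upk '
                'ON "PD_IF_CSV_ROW" (UPPER(TO_CHAR("PK")))')
    conn.close()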

  • 'Cannot begin data load. Analytic Server Error(1042006): Network Error

    Hi...
    I get an error message when I upload data from a source file into Planning via IKM SQL to Essbase (data).
    Some records fail with the following error:
    'Cannot begin data load. Analytic Server Error(1042006): Network Error [10061]: Unable To Connect To [localhost:32774]. The client timed out waiting to connect to the Essbase Agent using TCP/IP. Check your network connections. Also please make sure that Server and Port values are correct'
    What is this error about? Is the commit interval too large? Currently the value is 1000.

    Hi,
    You could try the following
    1. From the Start menu, click Run.
    2. Type regedit and then click OK.
    3. In the Registry Editor window, click the following directory:
    HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Tcpip\Parameters
    4. From the Edit menu, click New, DWORD Value.
    The new value appears in the list of parameters.
    5. Type MaxUserPort and then press Enter.
    Double-click MaxUserPort.
    6. In the Edit DWORD Value window, do the following:
    * Click Decimal.
    * Enter 65534.
    * Click OK.
    7. From the Edit menu, click New, DWORD Value.
    The new value appears in the list of parameters.
    8. Type TcpTimedWaitDelay and then press Enter.
    9. Double-click TcpTimedWaitDelay.
    10. In the Edit DWORD Value window, do the following:
    * Click Decimal.
    * Type 300
    * Click OK.
    11. Close the Registry Editor window.
    12. Reboot the Essbase server.
    Let us know how it goes.
    Cheers
    John
    http://john-goodwin.blogspot.com/
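    Before retrying the load, it can also be worth verifying that the Essbase agent port from the error message is reachable at all. A quick probe (host and port taken from the error text above):

    # Probe the Essbase agent port named in the error message
    # ("Unable To Connect To [localhost:32774]").
    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(5)
    try:
        sock.connect(("localhost", 32774))
        print("TCP connect OK - the agent port is reachable")
    except OSError as exc:
        print("connect failed:", exc)
    finally:
        sock.close()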

  • Error: Data load from relational source (SQL Server 2005) to Essbase Cube

    Hi All,
    I am looking for help from you. I am trying to load data from a SQL Server 2005 table to an Essbase cube using the IKM SQL to Hyperion Essbase (Metadata) module.
    I am getting the error below. Let me know if I am missing something.
    org.apache.bsf.BSFException: exception from Jython:
    Traceback (innermost last):
    File "<string>", line 61, in ?
    com.hyperion.odi.essbase.ODIEssbaseException: Invalid value specified [RULES_FILE] for Load option [null]
         at com.hyperion.odi.essbase.ODIEssbaseMetaWriter.validateLoadOptions(Unknown Source)
         at com.hyperion.odi.essbase.AbstractEssbaseWriter.beginLoad(Unknown Source)
         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
         at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
         at java.lang.reflect.Method.invoke(Unknown Source)
         at org.python.core.PyReflectedFunction.__call__(PyReflectedFunction.java)
         at org.python.core.PyMethod.__call__(PyMethod.java)
         at org.python.core.PyObject.__call__(PyObject.java)
         at org.python.core.PyInstance.invoke(PyInstance.java)
         at org.python.pycode._pyx3.f$0(<string>:61)
         at org.python.pycode._pyx3.call_function(<string>)
         at org.python.core.PyTableCode.call(PyTableCode.java)
         at org.python.core.PyCode.call(PyCode.java)
         at org.python.core.Py.runCode(Py.java)
         at org.python.core.Py.exec(Py.java)
         at org.python.util.PythonInterpreter.exec(PythonInterpreter.java)
         at org.apache.bsf.engines.jython.JythonEngine.exec(JythonEngine.java:144)
         at com.sunopsis.dwg.codeinterpretor.k.a(k.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.scripting(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.execScriptingOrders(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.execScriptingOrders(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTaskTrt(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSqlI.treatTaskTrt(SnpSessTaskSqlI.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTask(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessStep.treatSessStep(SnpSessStep.java)
         at com.sunopsis.dwg.dbobj.SnpSession.treatSession(SnpSession.java)
         at com.sunopsis.dwg.cmd.DwgCommandSession.treatCommand(DwgCommandSession.java)
         at com.sunopsis.dwg.cmd.DwgCommandBase.execute(DwgCommandBase.java)
         at com.sunopsis.dwg.cmd.e.i(e.java)
         at com.sunopsis.dwg.cmd.g.y(g.java)
         at com.sunopsis.dwg.cmd.e.run(e.java)
         at java.lang.Thread.run(Unknown Source)
    com.hyperion.odi.essbase.ODIEssbaseException: com.hyperion.odi.essbase.ODIEssbaseException: Invalid value specified [RULES_FILE] for Load option [null]
         at org.apache.bsf.engines.jython.JythonEngine.exec(JythonEngine.java:146)
         at com.sunopsis.dwg.codeinterpretor.k.a(k.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.scripting(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.execScriptingOrders(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.execScriptingOrders(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTaskTrt(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSqlI.treatTaskTrt(SnpSessTaskSqlI.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTask(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessStep.treatSessStep(SnpSessStep.java)
         at com.sunopsis.dwg.dbobj.SnpSession.treatSession(SnpSession.java)
         at com.sunopsis.dwg.cmd.DwgCommandSession.treatCommand(DwgCommandSession.java)
         at com.sunopsis.dwg.cmd.DwgCommandBase.execute(DwgCommandBase.java)
         at com.sunopsis.dwg.cmd.e.i(e.java)
         at com.sunopsis.dwg.cmd.g.y(g.java)
         at com.sunopsis.dwg.cmd.e.run(e.java)
         at java.lang.Thread.run(Unknown Source)

    ODI Step: Prepare for Loading Step:
    from java.util import HashMap
    from java.lang import Boolean
    from java.lang import Integer
    from com.hyperion.odi.common import ODIConstants
    from com.hyperion.odi.connection import HypAppConnectionFactory
    # Target planning connection properties
    serverName = "HCDCD-HYPDB01"
    userName = "admin"
    password = "<@=snpRef.getInfo("DEST_PASS") @>"
    application = "BUDGET01"
    database = "PLAN1"
    portStr = "1423"
    srvportParts = serverName.split(':',2)
    srvStr = srvportParts[0]
    if(len(srvportParts) > 1):
        portStr = srvportParts[1]
    # Put the connection properites and initialize the essbase loader
    targetProps = HashMap()
    targetProps.put(ODIConstants.SERVER,srvStr)
    targetProps.put(ODIConstants.PORT,portStr)
    targetProps.put(ODIConstants.USER,userName)
    targetProps.put(ODIConstants.PASSWORD,password)
    targetProps.put(ODIConstants.APPLICATION_NAME,application)
    targetProps.put(ODIConstants.DATABASE_NAME,database)
    targetProps.put(ODIConstants.WRITER_TYPE,ODIConstants.DATA_WRITER)
    print "Initalizing the essbase wrapper and connecting"
    pWriter = HypAppConnectionFactory.getAppWriter(HypAppConnectionFactory.APP_ESSBASE, targetProps);
    tableName = "BUDGET01_PLAN1"
    rulesFile = r"ActLd"
    ruleSeparator = "Tab"
    clearDatabase = "None"
    calcScript = r""
    maxErrors = 1
    logErrors = 1
    errFileName = r"E:\OraHome_ODI\Error\Budget01Plan1.err"
    logEnabled = 1
    logFileName = r"E:\OraHome_ODI\Error\Budget01Plan1.log"
    errColDelimiter = r","
    errRowDelimiter = r"\r\n"
    errTextDelimiter = r"'"
    logHeader = 1
    commitInterval = 1000
    calcOnly = 0
    preMaxlScript = r""
    postMaxlScript = r""
    abortOnPreMaxlError = 1
    # set the load options
    loadOptions = HashMap()
    loadOptions.put(ODIConstants.CLEAR_DATABASE, clearDatabase)
    loadOptions.put(ODIConstants.CALCULATION_SCRIPT, calcScript)
    loadOptions.put(ODIConstants.RULES_FILE, rulesFile)
    loadOptions.put(ODIConstants.LOG_ENABLED, Boolean(logEnabled))
    loadOptions.put(ODIConstants.LOG_FILE_NAME, logFileName)
    loadOptions.put(ODIConstants.MAXIMUM_ERRORS_ALLOWED, Integer(maxErrors))
    loadOptions.put(ODIConstants.LOG_ERRORS, Boolean(logErrors))
    loadOptions.put(ODIConstants.ERROR_LOG_FILENAME, errFileName)
    loadOptions.put(ODIConstants.RULE_SEPARATOR, ruleSeparator)
    loadOptions.put(ODIConstants.ERR_COL_DELIMITER, errColDelimiter)
    loadOptions.put(ODIConstants.ERR_ROW_DELIMITER, errRowDelimiter)
    loadOptions.put(ODIConstants.ERR_TEXT_DELIMITER, errTextDelimiter)
    loadOptions.put(ODIConstants.ERR_LOG_HEADER_ROW, Boolean(logHeader))
    loadOptions.put(ODIConstants.COMMIT_INTERVAL, Integer(commitInterval))
    loadOptions.put(ODIConstants.RUN_CALC_SCRIPT_ONLY,Boolean(calcOnly))
    loadOptions.put(ODIConstants.PRE_LOAD_MAXL_SCRIPT,preMaxlScript)
    loadOptions.put(ODIConstants.POST_LOAD_MAXL_SCRIPT,postMaxlScript)
    loadOptions.put(ODIConstants.ABORT_ON_PRE_MAXL_ERROR,Boolean(abortOnPreMaxlError))
    #call begin load
    pWriter.beginLoad(loadOptions)
    Execution step from Operator:
    Read rows: 0
    Insert/Delete/updat rows: 0
    ODI Step: Load Data Into Essbase
    from com.hyperion.odi.common import ODIConstants
    from com.hyperion.odi.connection import HypAppConnectionFactory
    from java.lang import Class
    from java.lang import Boolean
    from java.sql import *
    from java.util import HashMap
    # Get the select statement on the staging area:
    sql= """select C1_ACCOUNT "Account",C3_TIMEPERIOD "TimePeriod",C4_LOBS "LOBs",C5_TREATY "Treaty",C6_SCENARIO "Scenario",C7_VERSION "Version",C8_CURRENCY "Currency",C9_YEAR "Year",C10_DEPARTMENT "Department",C11_ENTITY "Entity",C2_DIVLOC "DivLoc",C12_DATA "Data" from OdiMapping.dbo.C$_0BUDGET01_PLAN1Data where      (1=1) """
    srcCx = odiRef.getJDBCConnection("SRC")
    stmt = srcCx.createStatement()
    srcFetchSize=30
    stmt.setFetchSize(srcFetchSize)
    rs = stmt.executeQuery(sql)
    #load the data
    stats = pWriter.loadData(rs)
    #close the database result set, connection
    rs.close()
    stmt.close()
    ODI Step: Report Statistics
    org.apache.bsf.BSFException: exception from Jython:
    Traceback (innermost last):
    File "<string>", line 2, in ?
    Essbase Writer Load Summary:
         Number of rows successfully processed: 1
         Number of rows rejected: 0
         at org.apache.bsf.engines.jython.JythonEngine.exec(JythonEngine.java:146)
         at com.sunopsis.dwg.codeinterpretor.k.a(k.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.scripting(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.execScriptingOrders(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.execScriptingOrders(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTaskTrt(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSqlI.treatTaskTrt(SnpSessTaskSqlI.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTask(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessStep.treatSessStep(SnpSessStep.java)
         at com.sunopsis.dwg.dbobj.SnpSession.treatSession(SnpSession.java)
         at com.sunopsis.dwg.cmd.DwgCommandSession.treatCommand(DwgCommandSession.java)
         at com.sunopsis.dwg.cmd.DwgCommandBase.execute(DwgCommandBase.java)
         at com.sunopsis.dwg.cmd.e.i(e.java)
         at com.sunopsis.dwg.cmd.h.y(h.java)
         at com.sunopsis.dwg.cmd.e.run(e.java)
         at java.lang.Thread.run(Unknown Source)
    Where am I going wrong with the data load into Essbase? How will I find the correct information to load data into Essbase?

  • Data load happening terribly slow

    Hi all,
    I opened up the quality server and made a change at definition level to an InfoObject present in the InfoCube (I removed its compounding attribute).
    The InfoObject, InfoCube, and update rules are all active.
    When I scheduled the load now,
    there is a huge variation in the data load speed.
    Before:
    3 hrs - 60 lakhs (6 million) records
    Now:
    3 hrs - 10 lakhs (1 million) records
    I am also observing in SM66 that the processes are all reading the NRIV table.
    Can anyone throw some insight on this scenario?
    Any useful input will be rewarded!
    Regards
    Dhanya.

    1. Yes, select main memory. How many entries do you expect in this dimension?
    In other words, how many different combinations of characteristic values included in your dimension are to be posted?
    As a first guess, you should enter 50,000 there, but please let me know the cardinality of this dimension.
    2. Whether or not there is master data doesn't matter. Your fact table is booked with DIMIDs as its keys. Every time you book a record, the system checks the dimension tables to see whether this combination of characteristic values already has a record in its dimension; if yes, fine, nothing to do. If a new combination comes, the system has to add a record to the dimension, so it first looks for the next number range value (= DIMID).
    In addition, the system creates master data IDs (SIDs) as well (even if there is no master data). In each dimension table you'll find the corresponding master data SIDs for each of the InfoObjects belonging to the dimension.
    That's why filling an empty cube takes much more time than loading a cube that already contains data, and also why the more data you load, the less time it takes. (A toy model of this mechanism follows below.)
    Please also make sure that all your F table indexes are dropped (manage cube, performance tab, delete indexes prior to loading).
    This will help initial loads considerably...
    Understanding these BW data warehousing concepts is of paramount importance in order to set the system up properly.
    Message was edited by:
            Olivier Cora
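    A toy illustration of the DIMID mechanics described above (a pure pseudo-model, not SAP code): every unseen combination of characteristic values costs a number-range access, which is why an empty cube loads slowest and why buffering the number range (the NRIV reads visible in SM66) helps:

    # Toy model of dimension-ID assignment: known combinations are looked up,
    # new combinations draw the next DIMID from the number range (NRIV).
    dim_table = {}   # combination of characteristic values -> DIMID
    next_dimid = 1   # stands in for the NRIV number-range counter

    def get_dimid(combination):
        global next_dimid
        dimid = dim_table.get(combination)
        if dimid is None:  # new combination: costs a number-range access
            dimid = next_dimid
            dim_table[combination] = dimid
            next_dimid += 1
        return dimid

    for record in [("east", "retail"), ("east", "retail"), ("west", "retail")]:
        print(record, "->", get_dimid(record))
    # Only the first occurrence of each combination touches the 'number range',
    # so a cube that already contains data triggers far fewer NRIV reads.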

  • Hyperion 11.1.2.2 SQL Server and Essbase data load error

    Gentlemen,
    I have an issue loading data into Essbase via SQL Server. The load was fine, and all of a sudden I see that I get network error 10054:
    Network error 10054: failed to receive data / send data
    Unexpected Essbase error 1042013
    Even when I try to load manually, it says: Data Load Fails, error: Data load buffer [9999] does not exist, Unexpected Essbase error 1270040
    And in the logs:
    Received client request: MaxL: Execute (from user [admin@Native Directory
    *Error writing to server*
    I also tried with MaxL; it is the same issue.
    Is the .esm file locked?
    Any issue with ODBC?
    Or is there an orphan connection to SQL that leaves Hyperion unable to process another?
    Let me know your thoughts.
    thanks.

    Can you try limiting the query to return fewer records? Try adding a WHERE clause and see whether that works.
    Regards
    Celvin
    http://www.orahyplabs.com

  • Very slow performance in every area after massive data load

    Hi,
    I'm new to Siebel. I had a call from a customer saying that virtually every aspect of the application (login, etc.) is slow after they did a massive data load of around 15 GB.
    Could you please help point out the best practice for this massive data loading exercise? All the table statistics are up to date.
    Has anyone encountered this kind of problem before?

    Hello,
    Siebel CRM is a highly customizable customer relationship management solution. There are a number of customizations (scripting, workflows, web services, ...) and integrations (custom C++, Java, ERP systems, ...) that can cause Siebel performance issues.
    Germain Monitoring v1.8.5 can help you clean up all your Siebel performance issues (starting 5 minutes after installation, which can take between 4 hours and 10 days depending on whether it is used against your Siebel dev/QA or production environment) and then monitor your Siebel production system at every layer of your infrastructure, at the Siebel user-click and back-end transaction levels, and either solve or identify the root cause of Siebel performance issues, 24x7.
    Germain Monitoring Software (currently version 1.8.5) helps Siebel customers 1) solve more quickly the Siebel performance issues introduced by customizations and 2) effectively solve Siebel performance issues before the business is impacted once Siebel is in production.
    Customers like NetApp, J.M. Smucker, and Alltel/Verizon have saved hundreds of thousands of dollars using Germain Monitoring software.
    Let us know whether you would like to discuss this further. Good luck with these issues,
    Regards,
    Yannick Germain
    GERMAIN SOFTWARE LLC
    Siebel Performance Software
    21 Columbus Avenue, Suite 221
    San Francisco, CA 94111, USA
    Cell: +1-415-606-3420
    Fax: +1-415-651-9683
    [email protected]
    http://www.germainsoftware.com

  • Data load from R3 is slow and gives rfc connection error

    Hi all, are there any debugging capabilities available in BI 7.0 when it comes to debugging a data load from R/3 over an RFC connection (ALEREMOTE and BWREMOTE users)?
    Any ideas where I can look for a solution?
    thanks.

    Hi,
    Check the RFC connection between R/3 and BW.
    Or it may be a network problem; contact your system admin or Basis people.
    thanks,
    dru

  • Data load is very slow

    Hi Experts,
    I am working on CRM Analytics. I am loading address data from extractor 0BP_DEF_ADDRESS_ATTR to the business partner, with 19 lakhs (1.9 million) records. When I execute the DTP it takes 3 to 4 days to complete the load.
    Please suggest a solution to make my data load faster.
    With Regards,
    Avenai

    Hi,
    Increase the number of parallel processes.
    To do so, from the DTP menu go to -> "Settings for Batch Manager" -> and increase the number of parallel processes (by default it is 3; increase it to 6).
    Increase the data packet size on the DTP extraction tab. (A back-of-the-envelope sketch of how these two knobs interact follows below.)
    Do you have any routines in the transformations? If yes, try to debug the code to find where it is spending time, and fine-tune it with the help of an ABAP person.
    The option below may also be one of the reasons when using CRM data sources:
    The data source contains lots of fields which are not used or mapped in the transformation; try to hide those fields, or create a copy of your data source using transaction BWA1 in the CRM system.
    Regards
    KP
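    To get a feel for those two knobs, a back-of-the-envelope sketch for the 19-lakh load (the packet size of 50,000 is an assumed example value, not from this thread):

    # Rough arithmetic: how many data packets 1.9 million records yield,
    # and how many sequential 'waves' remain with 3 vs. 6 parallel processes.
    records = 1_900_000
    packet_size = 50_000
    packets = -(-records // packet_size)  # ceiling division -> 38 packets

    for parallel in (3, 6):
        waves = -(-packets // parallel)
        print(f"{parallel} processes -> {waves} sequential waves of packets")
    # Doubling the parallel processes roughly halves the sequential waves,
    # provided the server has free work processes to run them.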

  • Query performance and data loading performance issues

    What query performance issues do we need to take care of? Please explain and let me know the transaction codes.
    What data loading performance issues do we need to take care of? Please explain and let me know the transaction codes. This is urgent.
    Will reward full points.
    Regards
    Guru

    BW Back end
    Some Tips -
    1)Identify long-running extraction processes on the source system. Extraction processes are performed by several extraction jobs running on the source system. The run-time of these jobs affects the performance. Use transaction code SM37 — Background Processing Job Management — to analyze the run-times of these jobs. If the run-time of data collection jobs lasts for several hours, schedule these jobs to run more frequently. This way, less data is written into update tables for each run and extraction performance increases.
    2)Identify high run-times for ABAP code, especially for user exits. The quality of any custom ABAP programs used in data extraction affects the extraction performance. Use transaction code SE30 — ABAP/4 Run-time Analysis — and then run the analysis for the transaction code RSA3 — Extractor Checker. The system then records the activities of the extraction program so you can review them to identify time-consuming activities. Eliminate those long-running activities or substitute them with alternative program logic.
    3)Identify expensive SQL statements. If database run-time is high for extraction jobs, use transaction code ST05 — Performance Trace. On this screen, select ALEREMOTE user and then select SQL trace to record the SQL statements. Identify the time-consuming sections from the results. If the data-selection times are high on a particular SQL statement, index the DataSource tables to increase the performance of selection (see no. 6 below). While using ST05, make sure that no other extraction job is running with ALEREMOTE user.
    4)Balance loads by distributing processes onto different servers if possible. If your site uses more than one BW application server, distribute the extraction processes to different servers using transaction code SM59 — Maintain RFC Destination. Load balancing is possible only if the extraction program allows the option
    5)Set optimum parameters for data-packet size. Packet size affects the number of data requests to the database. Set the data-packet size to optimum values for an efficient data-extraction mechanism. To find the optimum value, start with a packet size in the range of 50,000 to 100,000 and gradually increase it. At some point, you will reach the threshold at which increasing packet size further does not provide any performance increase. To set the packet size, use transaction code SBIW — BW IMG Menu — on the source system. To set the data load parameters for flat-file uploads, use transaction code RSCUSTV6 in BW.
    6)Build indexes on DataSource tables based on selection criteria. Indexing DataSource tables improves the extraction performance, because it reduces the read times of those tables.
    7)Execute collection jobs in parallel. Like the Business Content extractors, generic extractors have a number of collection jobs to retrieve relevant data from DataSource tables. Scheduling these collection jobs to run in parallel reduces the total extraction time, and they can be scheduled via transaction code SM37 in the source system.
    8) Break up your data selections for InfoPackages and schedule the portions to run in parallel. This parallel upload mechanism sends different portions of the data to BW at the same time, and as a result the total upload time is reduced. You can schedule InfoPackages in the Administrator Workbench.
    You can upload data from a data target (InfoCube and ODS) to another data target within the BW system. While uploading, you can schedule more than one InfoPackage with different selection options in each one. For example, fiscal year or fiscal year period can be used as selection options. Avoid using parallel uploads for high volumes of data if hardware resources are constrained. Each InfoPackage uses one background process (if scheduled to run in the background) or dialog process (if scheduled to run online) of the application server, and too many processes could overwhelm a slow server.
    9) Building secondary indexes on the tables for the selection fields optimizes these tables for reading, reducing extraction time. If your selection fields are not key fields on the table, primary indexes are not much help when accessing data. In this case it is better to create secondary indexes with selection fields on the associated table using the ABAP Dictionary to improve selection performance.
    10)Analyze upload times to the PSA and identify long-running uploads. When you extract the data using PSA method, data is written into PSA tables in the BW system. If your data is on the order of tens of millions, consider partitioning these PSA tables for better performance, but pay attention to the partition sizes. Partitioning PSA tables improves data-load performance because it's faster to insert data into smaller database tables. Partitioning also provides increased performance for maintenance of PSA tables — for example, you can delete a portion of data faster. You can set the size of each partition in the PSA parameters screen, in transaction code SPRO or RSCUSTV6, so that BW creates a new partition automatically when a threshold value is reached.
    11)Debug any routines in the transfer and update rules and eliminate single selects from the routines. Using single selects in custom ABAP routines for selecting data from database tables reduces performance considerably. It is better to use buffers and array operations. When you use buffers or array operations, the system reads data from the database tables and stores it in the memory for manipulation, improving performance. If you do not use buffers or array operations, the whole reading process is performed on the database with many table accesses, and performance deteriorates. Also, extensive use of library transformations in the ABAP code reduces performance; since these transformations are not compiled in advance, they are carried out during run-time.
    12)Before uploading a high volume of transaction data into InfoCubes, activate the number-range buffer for dimension IDs. The number-range buffer is a parameter that identifies the number of sequential dimension IDs stored in the memory. If you increase the number range before high-volume data upload, you reduce the number of reads from the dimension tables and hence increase the upload performance. Do not forget to set the number-range values back to their original values after the upload. Use transaction code SNRO to maintain the number range buffer values for InfoCubes.
    13)Drop the indexes before uploading high-volume data into InfoCubes. Regenerate them after the upload. Indexes on InfoCubes are optimized for reading data from the InfoCubes. If the indexes exist during the upload, BW reads the indexes and tries to insert the records according to the indexes, resulting in poor upload performance. You can automate the dropping and regeneration of the indexes through InfoPackage scheduling. You can drop indexes in the Manage InfoCube screen in the Administrator Workbench.
    14)IDoc (intermediate document) archiving improves the extraction and loading performance and can be applied on both BW and R/3 systems. In addition to IDoc archiving, data archiving is available for InfoCubes and ODS objects.
    Hope it Helps
    Chetan
    @CP..

  • Siebel Prod App poor performance during EIM tables data load

    Hi Experts,
    I have a situation: Siebel production application performance becomes poor when I start loading data into the Siebel EIM tables during business hours. I'm not executing any EIM jobs during business hours, so how come the database becomes slow and the application is affected?
    I understand that the Siebel application fetches data from the Siebel base tables. In that case, why does the application get very slow when only the EIM tables are being loaded?
    FYI, the Siebel production application server has a good hardware configuration.
    Thanks,
    Shaik

    You have to talk with your DBA.
    Let's say your DB is running from one hard disk (HD).
    I guess you can imagine things will slow down when multiple processes start accessing the DB which is running from one HD.
    When you start loading the EIM tables, your DB will spend a lot of time writing and have less time to serve data to the Siebel application server.
    The hardware for the Siebel application servers is not really relevant here.
    See if you can put the EIM tables on their own partition/hard disks.
