Loading performance issues

Hi gurus,
Please can you help with a loading issue? I am extracting data with a standard extractor in purchasing, and for 3 lakh (300,000) records it is taking 18 hours. Can you please suggest how to improve the loading performance?
-KP

Hi,
Loading Performance:
a) Always load and activate the master data before you load the transaction data, so that the SIDs do not have to be created during the transaction data load.
b) Use the optimum packet size. If the packet size is too small, the system writes messages to the monitor, and those entries keep increasing over time and cause a slowdown. Try different packet sizes to arrive at the optimum number.
c) Fine-tune your data model. Make use of line item dimensions where possible.
d) Make use of load parallelization as much as you can.
e) Check your CMOD code. If you have direct reads, change them to read all the data into an internal table first and then do a binary search (see the sketch after this list).
f) Check the code in your start routine, transfer rules and update rules. If you have the BW Statistics cubes turned on, you can find out where most of the CPU time is spent and concentrate on that area first.
g) Work with the Basis folks and make sure the database parameters are optimized. If you search OSS for the database you are using, it provides recommendations.
h) Set up your load processes appropriately. Don't load all the data all the time unless you have to. For example, if your functional experts say the historical fiscal years never change, then load only the current fiscal year onwards.
i) Schedule your jobs to run when there is not much other activity. If the system resources are already strained, your processes will have to wait for resources.
j) For the initial loads only, always buffer the number ranges for SIDs and DIM IDs.
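
For point (e), a minimal ABAP sketch of the pattern, assuming a hypothetical CMOD exit that has to look up the material group for every record of the data package (the MARA lookup, structure and variable names are illustrative, not from this thread):

* Replace one SELECT SINGLE per record with a single array fetch
* plus a sorted internal table read via BINARY SEARCH.
TYPES: BEGIN OF ty_rec,                       "stand-in for the extract structure
         matnr TYPE mara-matnr,
         matkl TYPE mara-matkl,
       END OF ty_rec.
DATA: lt_data TYPE STANDARD TABLE OF ty_rec,  "stand-in for the data package
      lt_mat  TYPE STANDARD TABLE OF ty_rec,
      ls_mat  TYPE ty_rec.
FIELD-SYMBOLS <fs_rec> TYPE ty_rec.

SELECT matnr matkl FROM mara INTO TABLE lt_mat.  "one fetch, not one per record
SORT lt_mat BY matnr.                            "prerequisite for BINARY SEARCH

LOOP AT lt_data ASSIGNING <fs_rec>.
  READ TABLE lt_mat INTO ls_mat
       WITH KEY matnr = <fs_rec>-matnr BINARY SEARCH.
  IF sy-subrc = 0.
    <fs_rec>-matkl = ls_mat-matkl.
  ENDIF.
ENDLOOP.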
Hareesh

Similar Messages

  • QUERY PERFORMANCE AND DATA LOADING PERFORMANCE ISSUES

    What query performance issues do we need to take care of? Please explain and let me know the transaction codes. Urgent.
    What data loading performance issues do we need to take care of? Please explain and let me know the transaction codes. Urgent.
    Will reward full points.
    REGARDS
    GURU

    BW Back end
    Some Tips -
    1)Identify long-running extraction processes on the source system. Extraction processes are performed by several extraction jobs running on the source system. The run-time of these jobs affects the performance. Use transaction code SM37 — Background Processing Job Management — to analyze the run-times of these jobs. If the run-time of data collection jobs lasts for several hours, schedule these jobs to run more frequently. This way, less data is written into update tables for each run and extraction performance increases.
    2)Identify high run-times for ABAP code, especially for user exits. The quality of any custom ABAP programs used in data extraction affects the extraction performance. Use transaction code SE30 — ABAP/4 Run-time Analysis — and then run the analysis for the transaction code RSA3 — Extractor Checker. The system then records the activities of the extraction program so you can review them to identify time-consuming activities. Eliminate those long-running activities or substitute them with alternative program logic.
    3)Identify expensive SQL statements. If database run-time is high for extraction jobs, use transaction code ST05 — Performance Trace. On this screen, select ALEREMOTE user and then select SQL trace to record the SQL statements. Identify the time-consuming sections from the results. If the data-selection times are high on a particular SQL statement, index the DataSource tables to increase the performance of selection (see no. 6 below). While using ST05, make sure that no other extraction job is running with ALEREMOTE user.
    4)Balance loads by distributing processes onto different servers if possible. If your site uses more than one BW application server, distribute the extraction processes to different servers using transaction code SM59 — Maintain RFC Destination. Load balancing is possible only if the extraction program allows this option.
    5)Set optimum parameters for data-packet size. Packet size affects the number of data requests to the database. Set the data-packet size to optimum values for an efficient data-extraction mechanism. To find the optimum value, start with a packet size in the range of 50,000 to 100,000 and gradually increase it. At some point, you will reach the threshold at which increasing packet size further does not provide any performance increase. To set the packet size, use transaction code SBIW — BW IMG Menu — on the source system. To set the data load parameters for flat-file uploads, use transaction code RSCUSTV6 in BW.
    6)Build indexes on DataSource tables based on selection criteria. Indexing DataSource tables improves the extraction performance, because it reduces the read times of those tables.
    7)Execute collection jobs in parallel. Like the Business Content extractors, generic extractors have a number of collection jobs to retrieve relevant data from DataSource tables. Scheduling these collection jobs to run in parallel reduces the total extraction time, and they can be scheduled via transaction code SM37 in the source system.
    8)Break up your data selections for InfoPackages and schedule the portions to run in parallel. This parallel upload mechanism sends different portions of the data to BW at the same time, and as a result the total upload time is reduced. You can schedule InfoPackages in the Administrator Workbench.
    You can upload data from a data target (InfoCube or ODS) to another data target within the BW system. While uploading, you can schedule more than one InfoPackage, with different selection options in each one. For example, fiscal year or fiscal year period can be used as selection options. Avoid using parallel uploads for high volumes of data if hardware resources are constrained. Each InfoPackage uses one background process (if scheduled to run in the background) or one dialog process (if scheduled to run online) of the application server, and too many processes could overwhelm a slow server.
    9)Build secondary indexes on the tables for the selection fields. Indexing the selection fields optimizes these tables for reading, which reduces extraction time. If your selection fields are not key fields of the table, the primary index is not much help when accessing the data. In this case it is better to create secondary indexes with the selection fields on the associated table, using the ABAP Dictionary, to improve selection performance.
    10)Analyze upload times to the PSA and identify long-running uploads. When you extract the data using PSA method, data is written into PSA tables in the BW system. If your data is on the order of tens of millions, consider partitioning these PSA tables for better performance, but pay attention to the partition sizes. Partitioning PSA tables improves data-load performance because it's faster to insert data into smaller database tables. Partitioning also provides increased performance for maintenance of PSA tables — for example, you can delete a portion of data faster. You can set the size of each partition in the PSA parameters screen, in transaction code SPRO or RSCUSTV6, so that BW creates a new partition automatically when a threshold value is reached.
    11)Debug any routines in the transfer and update rules and eliminate single selects from the routines. Using single selects in custom ABAP routines for selecting data from database tables reduces performance considerably. It is better to use buffers and array operations (see the sketch after this list). When you use buffers or array operations, the system reads the data from the database tables once and stores it in memory for manipulation, improving performance. If you do not use buffers or array operations, the whole reading process is performed on the database with many table accesses, and performance deteriorates. Also, extensive use of library transformations in the ABAP code reduces performance; since these transformations are not compiled in advance, they are carried out at run-time.
    12)Before uploading a high volume of transaction data into InfoCubes, activate the number-range buffer for dimension IDs. The number-range buffer is a parameter that defines how many sequential dimension IDs are kept in memory. If you increase this buffer before a high-volume data upload, you reduce the number of reads from the dimension tables and hence increase the upload performance. Do not forget to set the number-range values back to their original values after the upload. Use transaction code SNRO to maintain the number-range buffer values for InfoCubes.
    13)Drop the indexes before uploading high-volume data into InfoCubes. Regenerate them after the upload. Indexes on InfoCubes are optimized for reading data from the InfoCubes. If the indexes exist during the upload, BW reads the indexes and tries to insert the records according to the indexes, resulting in poor upload performance. You can automate the dropping and regeneration of the indexes through InfoPackage scheduling. You can drop indexes in the Manage InfoCube screen in the Administrator Workbench.
    14)IDoc (intermediate document) archiving improves the extraction and loading performance and can be applied on both BW and R/3 systems. In addition to IDoc archiving, data archiving is available for InfoCubes and ODS objects.
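
    As a hedged sketch of point 11: a BW 3.x start routine can fetch everything the data package needs in one array operation and keep it in a hashed table for the per-record loop. DATA_PACKAGE is the table (with header line) provided by the framework in 3.x start routines; the KNA1 lookup and the KUNNR/LAND1 fields are assumptions for illustration:

    * One array operation for the whole packet instead of one
    * SELECT SINGLE per record.
    TYPES: BEGIN OF ty_cust,
             kunnr TYPE kna1-kunnr,
             land1 TYPE kna1-land1,
           END OF ty_cust.
    DATA: lt_cust TYPE HASHED TABLE OF ty_cust WITH UNIQUE KEY kunnr,
          ls_cust TYPE ty_cust.

    * Guard: FOR ALL ENTRIES with an empty driver table would select everything.
    IF NOT data_package[] IS INITIAL.
      SELECT kunnr land1 FROM kna1
        INTO TABLE lt_cust
        FOR ALL ENTRIES IN data_package     "result rows are de-duplicated
        WHERE kunnr = data_package-kunnr.
    ENDIF.

    LOOP AT data_package.
      READ TABLE lt_cust INTO ls_cust
           WITH TABLE KEY kunnr = data_package-kunnr.
      IF sy-subrc = 0.
        data_package-land1 = ls_cust-land1.
        MODIFY data_package.
      ENDIF.
    ENDLOOP.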
    Hope it Helps
    Chetan
    @CP..

  • Poor Data Load Performance Issue - BP Default Addr (0BP_DEF_ADDR_ATTR)

    Hello Experts:
    We are having a significant performance issue with the Business Partner Default Address extractor (0BP_DEF_ADDRESS_ATTR).  Our extract is exceeding 20 hours for about 2 million BP records.  This was loading the data from R/3 to BI -- Full Load to PSA only. 
    We are currently on BI 3.5 with a PI_BASIS level of SAPKIPYJ7E on the R/3 system. 
    We have applied the following notes from later support packs in hopes of resolving the problem, as well as doubling our data packet MAXSIZE. Both changes did have a positive effect on the data load, but not enough to bring the extract down to an acceptable time.
    These are the notes we have applied:
    From Support Pack SAPKIPYJ7F
    Note 1107061     0BP_DEF_ADDRESS_ATTR delivers incorrect Address validities
    Note 1121137     0BP_DEF_ADDRESS_ATTR Returns less records - Extraction RSA3
    From Support Pack SAPKIPYJ7H
    Note 1129755     0BP_DEF_ADDRESS_ATTR Performance Problems
    Note 1156467     BUPTDTRANSMIT not Updating Delta queue for Address Changes
    And the correction noted in:
    SAP Note 1146037 - 0BP_DEF_ADDRESS_ATTR Performance Problems
    We have also executed re-orgs on the ADRC and BUT0* tables and verified that the appropriate indexes are in place. However, the data load is still taking many hours. My expectation was that the 2M BP address records would load in an hour or less, which seems reasonable to me.
    If anyone has additional ideas, I would much appreciate it. 
    Thanks.
    Brian


  • Load Performance issue

    Hi all,
    I have two questions related to a large number of AWs (analytic workspaces). At the moment I have 89 AWs, about 300 GB in size.
    When I loaded the first four AWs into my empty database in parallel, it took about 75 minutes. Then I loaded the other 85 AWs. When I deleted the 4 AWs that I had loaded first and reloaded them, it took about 200 minutes. So my question is: why does loading take that long when many AWs already exist? Is it the metadata reading that takes that long?
    The second thing is that my Disco4OLAP (Discoverer for OLAP) does not respond anymore. I wanted to look into my AWs, but Disco4OLAP doesn't come back. After 90 minutes of waiting I closed the browser.
    To me this seems to be a bug. I expect about 300 databases with more than one TB when productive. This definitely is a show stopper.
    Any help appreciated,
    Lars

    Lars,
    There are a number of possible routes that could be explored in relation to this issue. But first, how did you manage to create 84 different AWs? This seems a little strange to me. An AW does not have to be subject-area specific; it can contain cubes of varying dimensionality.
    Do you have the XML_LOAD_LOG data for the original timings? If so, where exactly is the extra time going?
    Have you used ADDM to monitor your instance and check the wait events?
    Also check out the Dec 07 OLAP Newsletter:
    http://www.oracle.com/technology/products/bi/olap/olapref/newsletter/oracleolapnewsletter_dec07.html
    My initial connection time using the BI Standard Edition to OLAP seems very slow. Why is this, and what can I do about it?
    OLAP DBA Tip:
    http://www.oracle.com/technology/products/bi/olap/olapref/newsletter/oracleolapnewsletter_dec07.html#story6
    Hope this helps
    Keith Laker
    Oracle EMEA Consulting
    OLAP Blog: http://oracleOLAP.blogspot.com/
    DM Blog: http://oracledmt.blogspot.com/
    OWB : http://blogs.oracle.com/warehousebuilder/
    DW on OTN : http://www.oracle.com/technology/products/bi/db/11g/index.html

  • Data Load Performance Issue

    Hi Gurus,
    We are loading data from one source into two BW systems (A and B). The loads into system A take only 2 hours, while the same loads into system B usually take more than 6 hours. System B is the upgraded version, which is supposed to be faster than system A.
    One finding: in system B, BW picks only 5,000 records at a time from all source tables, while in system A it picks 50,000 records at a time. I checked the system setting in RSCUSTV6, and it says the data package size is 50,000. Isn't that strange?
    Any idea why this is happening? Please advise! Points will be generously awarded.
    Thanks!
    Linda

    That 50,000 is the number of records for 1 info IDoc, not necessarily the packet size actually used. For more info:
    Maximum size of a data packet in kilobytes: The individual records are sent in packages of varying sizes in the data transfer to the Business Information Warehouse. This parameter determines the maximum size of such a package and therefore how much of the main memory may be used for the creation of the data package. SAP recommends a data package size between 10 and 50 MB.
    Frequency with which status IDocs are sent: This frequency establishes how many data IDocs are sent in one info IDoc.
    Because the maximum is specified in kilobytes, the record count per packet depends on the record width; for example, a 20,000 KB maximum with 400-byte records yields packets of about 50,000 records. Since these transfer control parameters are kept per source system (transaction SBIW on the source side; RSCUSTV6 governs flat-file loads), a smaller MAXSIZE on system B's source system would explain packets of only 5,000 records.
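
    If it helps to compare the two systems: the control parameters live per source system in table ROIDOCPRMS (maintained via SBIW), and a throwaway report like the hedged sketch below (the report name is made up) lists them:

    * List the data-transfer control parameters maintained via SBIW.
    REPORT z_show_transfer_params.
    DATA ls_prms TYPE roidocprms.
    SELECT * FROM roidocprms INTO ls_prms.
      WRITE: / ls_prms-slogsys,    "logical source system
               ls_prms-maxsize,    "max packet size in KB
               ls_prms-statfrqu,   "data IDocs per info IDoc
               ls_prms-maxprocs,   "max parallel processes
               ls_prms-maxlines.   "max records per packet
    ENDSELECT.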
    Hope this helps.

  • OBIEE 11g Dashboard page load performance issue

    Hi All,
    One of my dashboards takes 5 to 15 minutes to load. Once the dashboard is displayed, the reports come up fast. The dashboard has 5 pages.
    E.g.:
    When I click on Dashboard -> XXXX Dashboard, it takes 5 to 15 minutes to display the dashboard page. The reports come up fast once the page is loaded.
    Please help me on this.
    Regards,
    Hari

    Cool,
    Thank you for sharing your opinions.
    My English isn't so good, so I may not have expressed myself correctly.
    In my post I said that the environment you are using is ANOTHER point which should also be evaluated, not that this point is paramount.
    Since OBIEE generates the queries, the queries it generates and sends to the data sources should certainly be evaluated.
    Thank you, guys

  • SQL Server migration performance issue

    We are performing a migration from Oracle to MS SQL Server 2005 (Windows 2003 / SAP 4.6D). Our target system configuration is as below:
    ->OS - Windows 2003 x64
    ->DB - MSSQL server 2005 SP4
    ->SAP - 4.6 D kernel
    ->CPU - 4 processors (2.67 GHz each)
    ->RAM - 16 GB
    ->source system DB size - around 1 TB and package splitting was implemented.
    While importing the export dump that came from the source system, we are facing load performance issues.
    1) We started 10 parallel processes; after the import started, the CPU was 95% to 99% utilized, and in SQL Server Management Studio -> Activity Monitor we found the following:
    - Around 20 processes were in status "sleeping"
    - Only one or two system processes were running, with INSERT and SELECT INTO commands
    - At any point in time only two processes were running, and CPU utilization was hitting high
    - The import is very slow, taking 35 hours to complete
    We have followed a few SAP notes (1054852, 1241751, etc.) and below are the settings for SQL Server:
    - Minimum server memory - 5 GB
    - Maximum server memory - 5 GB
    - Index creation memory - 0
    - Maximum memory per query - 1024 KB
    - Maximum degree of parallelism - 1
    - Parallelism locks - 0
    - Cost threshold for parallelism - 5
    - Enabled trace flags - 610, 620, 1117, 1211, 3917
    - Windows environment variable BCP_BATCH_SIZE = 10000
    - SQL log file size - 100 GB
    - tempdb size - 20 GB
    2) When we tried another test import after tuning parameters (maximum degree of parallelism to 3) and increasing the R3load parallel processes to 20, we found that inserts
    were overflowing the process list, blocking the other waiting processes and putting them into suspended mode. CPU utilization was at 80%.
    Are there any SQL Server parameters that need to be tuned to fix this import load time issue in both scenarios? Are there any best practices for migrating to SQL Server?
    Thanks..

    > 1) We have started 10 parallel processes and after import is started, CPU is getting 95% to 99% utilized and when we check SQL server studio-> Activity monitor we  found below information:
    Which processes are using the CPU? SQL Server itself, or others? (See Task Manager.)
    > -Minimum server memory - 5 GB
    > - Maximum server memory - 5 GB
    For the time of the migration itself I'd give the database more memory (10 GB or more) and reconfigure later when the import is done.
    What parameters do you use for the R3load? I'd recommend
    -loadprocedure fast -merge_bck -c 10000
    and also set
    BCP_LOB=1 in the environment.
    Markus

  • Performance issues with class loader on Windows server

    We are observing some performance issues in our application. We are using WebLogic 11g with Java 6 on a Windows 2003 server.
    The thread dumps indicate many threads are waiting in queue for the native file methods:
    "[ACTIVE] ExecuteThread: '106' for queue: 'weblogic.kernel.Default (self-tuning)'" RUNNABLE
         java.io.WinNTFileSystem.getBooleanAttributes(Native Method)
         java.io.File.exists(Unknown Source)
         weblogic.utils.classloaders.ClasspathClassFinder.getFileSource(ClasspathClassFinder.java:398)
         weblogic.utils.classloaders.ClasspathClassFinder.getSourcesInternal(ClasspathClassFinder.java:347)
         weblogic.utils.classloaders.ClasspathClassFinder.getSource(ClasspathClassFinder.java:316)
         weblogic.application.io.ManifestFinder.getSource(ManifestFinder.java:75)
         weblogic.utils.classloaders.MultiClassFinder.getSource(MultiClassFinder.java:67)
         weblogic.application.utils.CompositeWebAppFinder.getSource(CompositeWebAppFinder.java:71)
         weblogic.utils.classloaders.MultiClassFinder.getSource(MultiClassFinder.java:67)
         weblogic.utils.classloaders.MultiClassFinder.getSource(MultiClassFinder.java:67)
         weblogic.utils.classloaders.CodeGenClassFinder.getSource(CodeGenClassFinder.java:33)
         weblogic.utils.classloaders.GenericClassLoader.findResource(GenericClassLoader.java:210)
         weblogic.utils.classloaders.GenericClassLoader.getResourceInternal(GenericClassLoader.java:160)
         weblogic.utils.classloaders.GenericClassLoader.getResource(GenericClassLoader.java:182)
         java.lang.ClassLoader.getResourceAsStream(Unknown Source)
         javax.xml.parsers.SecuritySupport$4.run(Unknown Source)
         java.security.AccessController.doPrivileged(Native Method)
         javax.xml.parsers.SecuritySupport.getResourceAsStream(Unknown Source)
         javax.xml.parsers.FactoryFinder.findJarServiceProvider(Unknown Source)
         javax.xml.parsers.FactoryFinder.find(Unknown Source)
         javax.xml.parsers.DocumentBuilderFactory.newInstance(Unknown Source)
         org.ajax4jsf.context.ResponseWriterContentHandler.<init>(ResponseWriterContentHandler.java:48)
         org.ajax4jsf.context.ViewResources$HeadResponseWriter.<init>(ViewResources.java:259)
         org.ajax4jsf.context.ViewResources.processHeadResources(ViewResources.java:445)
         org.ajax4jsf.application.AjaxViewHandler.renderView(AjaxViewHandler.java:193)
         org.apache.myfaces.lifecycle.RenderResponseExecutor.execute(RenderResponseExecutor.java:41)
         org.apache.myfaces.lifecycle.LifecycleImpl.render(LifecycleImpl.java:140)
    From googling, this seems to be an issue with Java file handling on Windows servers, but I couldn't find a solution yet. Any recommendation or pointer is appreciated.

    Hi shubhu,
    I just analyzed your partial thread dump data. The problem is that the ajax4jsf framework's ResponseWriterContentHandler internally triggers a new instance of DocumentBuilderFactory every time, causing heavy I/O contention because of class loader / JAR file search operations.
    Too many of these I/O operations under heavy load will create excessive contention and severe performance degradation, regardless of the OS you are running your JVM on.
    Please review the link below and see if it is related to your problem. This is a known issue in the JBoss JIRA when using RichFaces / Ajax4jsf.
    https://issues.jboss.org/browse/JBPAPP-6166
    Regards,
    P-H
    http://javaeesupportpatterns.blogspot.com/

  • Performance issues with Planning data load & Agg in 11.1.2.3.500

    We recently upgraded from 11.1.1.3 to 11.1.2.3. Post-upgrade, we face performance issues with one of our Planning jobs (e.g., Job E). It takes 3x the time to complete in our new environment (11.1.2.3) compared to the old one (11.1.1.3). This job loads the actual data and then does the aggregation. The pattern we noticed is: if we run a restructure on the application and execute this job immediately, it completes in the same time as on 11.1.1.3. However, in current production (11.1.1.3) the jobs run in the sequence Job A -> Job B -> Job C -> Job D -> Job E and complete on time, but if we do the same test in 11.1.2.3 in the above sequence, Job E takes 3x the time. We don't have a window to restructure the application before running Job E every time in Prod. The specs of the new environment are much higher than the old one's.
    We have Essbase clustering (MS active/passive) in the new environment, and the files are stored on a SAN drive. Could this be the cause? Has anyone faced performance issues in a clustered environment?

    Do you have exactly the same Essbase config settings and calculations performing the AGG? Remember, something very small like UPDATECALC ON/OFF can make a BIG difference in timing.

  • Performance issues with data warehouse loads

    We have performance issues with our data warehouse ETL load process. I have run analyze and dbms_stats and checked the database environment. What other things can I do to optimize performance? I cannot use Statspack since we are running Oracle 8i. Thanks
    Scott

    Hi,
    you should analyze the DB after you have loaded the tables.
    Do you use sequences to generate PKs? Do you have a lot of indexes and/or triggers on the tables?
    If yes:
    make sure your sequences cache values (alter sequence s cache 10000).
    Drop all unneeded indexes while loading, and disable triggers if possible.
    How big is your redo log buffer? When loading a large amount of data, it may be an option to enlarge this buffer.
    Do you have more than one DBWR process? Writing in parallel can speed things up when a checkpoint is needed.
    Is it possible to use a direct-path load? Or do you already load direct-path?
    Dim

  • Accrual Reconciliation Load Run  report performance issue.

    We have significant performance issues when running the Accrual Reconciliation Load Run report. We had to cancel it after having it run for a day. Any idea how to resolve this?

    We experienced a similar issue, as the runtime of this report depends on the input parameters. Remember, the first run of this report is going to take a significant amount of time, and subsequent runs will be much shorter.
    But we had to apply the patches referred to in the MOS article below to resolve the performance issue.
    Accrual Reconciliation Load Run Has Slow Performance [ID 1490578.1]
    Thanks,
    Sunthar....

  • Generic extraction loading and performance issues

    hi,
          can anyone give details about generic extraction loading as well as its performance issues?
    Thanks in advance.
    Regards,
    Praveen

    Hi,
    When there is no suitable Business Content DataSource, we go for creating a generic DataSource.
    By using a generic DataSource we can extract data present in a single table or in multiple tables.
    If the data is present in a single table, we create a generic DataSource extracting from that table.
    If the data to be extracted is present in multiple tables, the relation between the tables is one-to-one, and there is a common field in both tables, we create a view on these tables and create a generic DataSource extracting from the view.
    If you want to extract data from different tables and there is no common field, we create an InfoSet on these tables and create a generic DataSource extracting from a query.
    If you want to extract data from different tables and the relation is one-to-many or many-to-many, we create a generic DataSource from a function module.
    If we extract from a function module, the code has to execute at run time to bring the data to BW, so it degrades the loading performance (see the sketch below).
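
    For illustration, a function-module DataSource follows the interface of the SAP template RSAX_BIW_GET_DATA_SIMPLE. The hedged sketch below (the extract structure and the EKKO SELECT are assumptions) shows why such code runs on every load: the first call only stores the request, and each further call must execute a fetch to fill one data packet:

    FUNCTION z_biw_get_data_sketch.
    * Interface as in template RSAX_BIW_GET_DATA_SIMPLE (shown abridged):
    *   IMPORTING  i_requnr, i_dsource, i_maxsize, i_initflag ...
    *   TABLES     i_t_select, i_t_fields, e_t_data
    *   EXCEPTIONS no_more_data, error_passed_to_mess_handler
      STATICS s_cursor TYPE cursor.

      IF i_initflag = 'X'.        "first call: only store the request parameters
        " evaluate i_t_select / i_t_fields here
      ELSE.
        IF s_cursor IS INITIAL.
          OPEN CURSOR WITH HOLD s_cursor FOR
            SELECT * FROM ekko.   "source table is an assumption
        ENDIF.
        FETCH NEXT CURSOR s_cursor
          INTO CORRESPONDING FIELDS OF TABLE e_t_data
          PACKAGE SIZE i_maxsize.
        IF sy-subrc <> 0.
          CLOSE CURSOR s_cursor.
          RAISE no_more_data.     "signals that the extraction is finished
        ENDIF.
      ENDIF.
    ENDFUNCTION.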
    regards,

  • Performance Issue : Why does ADF Taskflow Portlet (JSF bridge portlet) loading ADF Specific images, css, js everytime from portlet producer with dynamic URL with portlet_id and context parameters?

    Hi All,
    We have used a WSRP portlet in a WebCenter Portal page. The portlet is created using the JSF bridge out of an ADF bounded taskflow.
    It is causing a performance issue. Every time, static content like JS, CSS and image URLs is downloaded, and the URLs contain portlet_id and a few other dynamic parameters like resource_id, client_id etc.
    We are not able to cache this static content, as the URLs are dynamic. These ADF-specific images, JS and CSS files are taking a long time to load.
    Sample URL:
    /<PORTAL_CONTEXT>/resourceproxy/~.clientId~3D-1~26resourceId~3Dresource-url~25253Dhttp~2525253A~2525252F~2525252F<10.*.*.*>~2525253A7020~2525252FportletProdApp~2525252Fafr~2525252Fring_60.gif~26locale~3Den~26checksum~3D3e839bc581d5ce6858c88e7cb3f17d073c0091c7/ring_60.gif
    /<PORTAL_CONTEXT>/resourceproxy/~.clientId~3D-1~26resourceId~3Dresource-url~25253Dhttp~2525253A~2525252F~2525252F<10.*.*.*>~2525253A7020~2525252FportletProdApp~2525252Fafr~2525252Fpartition~2525252Fie~2525252Fn~2525252Fdefault~2525252Fopt~2525252Fimagelink-11.1.1.7.0-4251.js~26locale~3Den~26checksum~3Dd00da30a6bfc40b22f7be6d92d5400d107c41d12/imagelink-11.1.1.7.0-4251.js
    Technologies Used:
    Webcenter Portal PS6
    Jdeveloper 11.1.1.7
    Please suggest how this performance issue can be resolved.
    Thanks.
    Regards,
    Digesh

    Strange...
    I can't reproduce this because I have issues with creating portlets... If I can solve that issue I will do some testing and see if I can reproduce yours...
    Can you create a new producer with a single portlet that uses a simple taskflow and see if that works?
    Are you also using business components in the taskflows or something? You can try removing some parts of the taskflow and test whether it works, so you can identify the component(s) that cause the issue.

  • Load balancing issue causes performance problems!

    Hi
    Can someone tell me if there is a way to find the list of users who are logging into the system directly (without using a logon group)? They are hitting the CI, which is causing performance issues. Even though we've set up system messages and sent repeated emails, it's hard to track who isn't following them.
    Any ideas would be really appreciated and rewarded with points.
    Thanks
    Arun

    Easy:
    On the CI, change rdisp/gui_auto_logout to 60.
    If they log in and there is no activity for 60 seconds, they will be logged out promptly. They will stop trying to log in to the CI after a while.
    And if they keep on trying, lock their accounts in SU01... they will get the message.
    Another possibility is a user exit on the logon screen. Check the ABAP forum for this.

  • Two issues: activation of transfer rules and data load performance

    hi,
    I have two problems I face very often and would like to get some more info on that topics:
    1. Transfer rules activation. I just finished transporting my cubes, ETL etc. to the productive system and started filling the cubes with data. Very often during a data load it turns out that the transfer rules need to be activated, even though I transported them active and (I think) did not change anything after the transport. Then I again create transfer rule transports on dev, transport the changes to prod, and have to execute the data load again.
    It is very annoying. What do you suggest doing about this problem? Activate all transfer rules again before executing the process chain?
    2. Differences between the dev and prod systems in data load time.
    On the dev system (a copy of production made about 8 months ago) I checked how long it takes to extract data from the source system: about 0.5 h for 50,000 records. But when I executed the load on production, it was 2 h for 200,000 records, so it was twice as slow as dev!
    I thought it would be at least as fast as the dev system. What can influence data load performance, and how can I predict it?
    Regards,
    Andrzej

    Aksik,
    1. How frequently does this activation problem occur? If it is a one-off, replicate the DataSource and activate the transfer structure. (But in general, as you know, activation of the transfer structure should happen automatically after transport of the object.)
    2. One reason for the time difference is environmental: as you know, in a production system many jobs run at the same time, so system performance will obviously be slower compared to the dev system. In your case, though, both systems are performing equally: in dev, 50,000 records took half an hour, and in production 200,000 records took 2 hours; there are simply more records in the production system, so it took longer. If it is really causing a problem, then you have to do some performance tuning.
    Hope this helps
    Thanks
    Sat
