Does the ETL workload affect the performance of reports?

Hello
There are several heavy ETL processes running on production. Do they affect the performance of reports that users may run during this time?
thanks

Definitely... Your server has a finite pool of resources shared among all of those processes... Running a report is just one more process... So users can be (and are) impacted by the other processes on the system...

Similar Messages

  • How to setup the environment for doing the Performance Testing?

    Hi,
    We are planning to start performance testing for the iProcurement product, with a maximum of 1000 users. For this simulation, what basic setup is needed for the application tier, database tier, etc.? Can anyone suggest a procedure for setting up the environment based on the load?


  • Urgent: regarding increasing the performance of a report

    Hi,
    I have a report that displays the correct data, but when I execute it on the PRD server it gets a request timeout. So I want to improve its performance. Please help me out with this.
    REPORT  ZWIP_STOCK NO STANDARD PAGE HEADING LINE-SIZE 150.
    TABLES: AFPO, AFRU, MARA, MAKT.
    DATA: BEGIN OF ITAB OCCURS 0,
          AUFNR LIKE AFPO-AUFNR,
          MATNR LIKE AFPO-MATNR,
          LGORT LIKE AFPO-LGORT,
          MEINS LIKE MARA-MEINS,
          NTGEW LIKE MARA-NTGEW,
          MTART LIKE MARA-MTART,
          STOCK TYPE P LENGTH 10 DECIMALS 3,
          END OF ITAB.
    DATA : ITAB2 LIKE ITAB OCCURS 0 WITH HEADER LINE.
    DATA : DESC LIKE MAKT-MAKTX.
    SELECT-OPTIONS : MAT_TYPE FOR MARA-MTART.
    SELECT-OPTIONS : P_MATNR FOR AFPO-MATNR.
    DATA : V_MINOPR LIKE AFRU-VORNR,
           V_MAXOPR LIKE AFRU-VORNR,
           V_QTYMIN LIKE AFRU-GMNGA,
           V_QTYMAX LIKE AFRU-GMNGA,
           V_QTY TYPE P LENGTH 10 DECIMALS 3.
    SELECT A~AUFNR A~MATNR A~LGORT B~MEINS B~NTGEW B~MTART FROM AFPO AS A
      INNER JOIN MARA AS B ON A~MATNR = B~MATNR
      INTO TABLE ITAB WHERE ELIKZ <> 'X' AND MTART IN MAT_TYPE AND A~MATNR IN P_MATNR.
        ITAB2[] = ITAB[].
        SORT ITAB2 BY MATNR MEINS MTART NTGEW.
        DELETE ADJACENT DUPLICATES FROM ITAB2 COMPARING MATNR MEINS MTART NTGEW.
       LOOP AT ITAB2.
        V_QTY = 0.
          LOOP AT ITAB WHERE MATNR = ITAB2-MATNR.
            SELECT MIN( VORNR ) INTO V_MINOPR FROM AFRU WHERE AUFNR = ITAB-AUFNR.
            SELECT MAX( VORNR ) INTO V_MAXOPR FROM AFRU WHERE AUFNR = ITAB-AUFNR.
            SELECT SUM( GMNGA ) INTO V_QTYMIN FROM AFRU WHERE AUFNR = ITAB-AUFNR AND VORNR =  V_MINOPR.
            SELECT SUM( GMNGA ) INTO V_QTYMAX FROM AFRU WHERE AUFNR = ITAB-AUFNR AND VORNR =  V_MAXOPR.
            V_QTY = V_QTY + V_QTYMIN - V_QTYMAX.
          ENDLOOP.
          ITAB2-STOCK = V_QTY.
          MODIFY ITAB2.
        ENDLOOP.
        LOOP AT ITAB2.
              WRITE:/ ITAB2-MATNR,ITAB2-STOCK.
        ENDLOOP.

    Instead of the code from
    itab2[] = itab[] through the last ENDLOOP, try the code given below:
    data : begin of minopr occurs 0,
           aufnr like afru-aufnr,
           vornr like afru-vornr,
           end of minopr.
    data : begin of maxopr occurs 0,
           aufnr like afru-aufnr,
           vornr like afru-vornr,
           end of maxopr.
    data : begin of qtymin occurs 0,
           aufnr like afru-aufnr,
           vornr like afru-vornr,
           gmnga like afru-gmnga,
           end of qtymin.
    data : begin of qtymax occurs 0,
           aufnr like afru-aufnr,
           vornr like afru-vornr,
           gmnga like afru-gmnga,
           end of qtymax.
    " one bulk read of AFRU instead of four SELECTs per order
    select aufnr vornr into table minopr from afru
      for all entries in itab where aufnr = itab-aufnr.
    maxopr[] = minopr[].
    sort minopr by aufnr vornr ascending.
    sort maxopr by aufnr vornr descending.
    delete adjacent duplicates from minopr comparing aufnr. " first operation per order
    delete adjacent duplicates from maxopr comparing aufnr. " last operation per order
    select aufnr vornr gmnga into table qtymin from afru
      for all entries in minopr
      where aufnr = minopr-aufnr and vornr = minopr-vornr.
    select aufnr vornr gmnga into table qtymax from afru
      for all entries in maxopr
      where aufnr = maxopr-aufnr and vornr = maxopr-vornr.
    sort qtymin by aufnr.
    sort qtymax by aufnr.
    itab2[] = itab[].
    sort itab2 by matnr meins mtart ntgew.
    delete adjacent duplicates from itab2 comparing matnr meins mtart ntgew.
    loop at itab2.
      v_qty = 0.
      loop at itab where matnr = itab2-matnr.
        v_qtymin = 0.
        v_qtymax = 0.
        " aggregate from the buffered tables instead of SELECT SUM( )
        read table qtymin with key aufnr = itab-aufnr binary search.
        if sy-subrc = 0.
          loop at qtymin from sy-tabix.
            if qtymin-aufnr = itab-aufnr.
              v_qtymin = v_qtymin + qtymin-gmnga.
            else.
              exit.
            endif.
          endloop.
        endif.
        read table qtymax with key aufnr = itab-aufnr binary search.
        if sy-subrc = 0.
          loop at qtymax from sy-tabix.
            if qtymax-aufnr = itab-aufnr.
              v_qtymax = v_qtymax + qtymax-gmnga.
            else.
              exit.
            endif.
          endloop.
        endif.
        v_qty = v_qty + v_qtymin - v_qtymax.
      endloop.
      itab2-stock = v_qty.
      modify itab2.
    endloop.
    loop at itab2.
      write:/ itab2-matnr, itab2-stock.
    endloop.
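    The rewrite above trades repeated database round trips for a few bulk reads plus in-memory aggregation. As a rough sketch of the same idea outside ABAP, here it is in Python with made-up sample data (the tuple layout simply mirrors AFRU's AUFNR/VORNR/GMNGA fields; nothing here is an SAP API):

```python
from collections import defaultdict

# AFRU-like confirmation rows: (order number, operation number, confirmed qty)
afru = [
    ("1000", "0010", 50.0),
    ("1000", "0020", 30.0),
    ("1000", "0030", 20.0),
    ("2000", "0010", 10.0),
    ("2000", "0020", 5.0),
]

def wip_per_order(rows):
    """WIP = qty confirmed at the first operation minus qty at the last one.
    One pass over a pre-fetched table instead of four SELECTs per order."""
    by_order = defaultdict(list)
    for aufnr, vornr, gmnga in rows:
        by_order[aufnr].append((vornr, gmnga))
    wip = {}
    for aufnr, ops in by_order.items():
        ops.sort()                      # sort by operation number
        min_opr = ops[0][0]             # first operation of the order
        max_opr = ops[-1][0]            # last operation of the order
        qty_min = sum(g for v, g in ops if v == min_opr)
        qty_max = sum(g for v, g in ops if v == max_opr)
        wip[aufnr] = qty_min - qty_max
    return wip

print(wip_per_order(afru))   # {'1000': 30.0, '2000': 5.0}
```

    Summing per material is then a second in-memory pass, exactly as the ABAP loops over ITAB2 do.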

  • Split of Cubes to improve the performance of reports

    Hello friends. We are now implementing Finance GL line items for global automobile operations at BMW, with services outsourced to Japan, which has increased the data volume to 300 million records in the 2 years since go-live. We have 200 company codes.
    How can we improve performance?
    1. Please advise on splitting the cubes by year and by company code (company codes are region-based), so that European users would run the report from one cube and American users would run the same report from another cube.
    The question is: if I make 8 cubes (2 for each year: 1 for current-year company code ABC and 1 for current-year DEF), (2 for each year: 1 for previous-year company code ABC and 1 for previous-year DEF),
    (2 for each year: 1 for archive-year company code ABC and 1 for archive-year DEF),
    then how can I tell the query which cube to read the data from? Since company code is an authorization variable, picking up that value and building a customer-exit variable for the InfoProvider will add a lot of work.
    Is there a good way to do this? Does splitting the cubes by company code make sense, or should the split be by year only?
    Please suggest a step-by-step approach to splitting cubes for 60 million records over 2 years; growth will be the same for the
    next 4 years, since more company codes are coming.
    2. Please advise whether splitting the cubes will improve report performance, or make it worse, since a query may now need to go through 5-6 different cubes.
    Thanks
    Regards
    Soniya

    Hi Soniya,
    There are two ways in which you can split your cube: either by year or by company code (i.e. region). While loading the data, write code in the start routine that filters the data. For example, if you are loading data for three regions, say 1, 2, and 3, the code for the region-1 cube would be something like
    DELETE SOURCE_PACKAGE WHERE REGION EQ '2' OR
    REGION EQ '3'.
    This will load only the data corresponding to region 1 into that cube.
    You can build your reports either on these cubes, or you can put a MultiProvider above these cubes and build the report on that.
    Thanks..
    Shambhu
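    As an illustration of the start-routine idea (this is not SAP code; the row layout and region values are invented), the filter amounts to keeping only the target region's rows in each cube's load:

```python
# Toy model of DELETE SOURCE_PACKAGE WHERE REGION EQ '2' OR REGION EQ '3':
# each cube's transformation keeps only its own region's records.

source_package = [
    {"region": "1", "amount": 100},
    {"region": "2", "amount": 200},
    {"region": "3", "amount": 50},
]

def load_for_region(package, keep_region):
    """Return the rows that survive the start-routine delete for one cube."""
    return [row for row in package if row["region"] == keep_region]

cube_region_1 = load_for_region(source_package, "1")
print(cube_region_1)  # [{'region': '1', 'amount': 100}]
```

    A MultiProvider on top then plays the role of the union over the per-region cubes, so reports need not know about the split.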

  • Regarding the performance in report

    Hi Abap Gurus,
    I am working on a report. My requirement is that, after executing the report, data extraction takes 11 hours; the required data itself comes out perfectly. How can I improve the performance? Any tips to follow for report performance? If needed, I can post the code.
    Moderator Message: Please search the forum for available information.
    Edited by: kishan P on Oct 19, 2010 4:50 PM

    Hi,
    Please check below thread;
    Extract from ALV List
    Regards
    Jana

  • Can the Performance Detail reports be exported as PDF without the Detail Table?

    We previously generated significant numbers of Performance Detail reports out of SCOM 2007 R2 as PDF files, and the 'Detail Table' did not show.
    Now that we are on SCOM 2012 R2, the same reports show the 'Detail Table' when exported to PDF. We report on 6 to 12 months of data, so these tables are huge, and are also useless for our purposes.
    Is there some way to suppress the Detail Table when exporting a Performance Detail report to PDF?

    Hello,
    Please see if the method in the following post can meet your requirements:
    SCOM reports on performance counters for large groups of servers
    http://www.bictt.com/blogs/bictt.php/2010/11/28/scom-reports-on-performance-counters-for-large-groups-of-servers

  • How to maximize the performance of report service 9i?

    I have a report running on the Oracle 9iAS Reports service. The report is invoked from Forms by the PL/SQL function RUN_REPORT_OBJECT and generated as a PDF file. The report is so complex that the RDF file reaches 6 MB.
    It currently takes about 30 seconds to run the report. My Forms service and Reports service are installed on the same machine, which has 2 Intel Xeon 2 GHz CPUs and 4 GB of memory. If I run the report twice at the same time, the jobs are put into a queue and run one by one, so it takes 60 seconds. I see that CPU load is only about 20% while the report runs, which means the Reports service is not making full use of the CPU.
    How can the Reports service process multiple jobs at the same time on a multi-CPU machine? What is the best engine-number configuration for the Reports service?

    Hi,
    Whenever you start a new Reports Server in 9iAS, by default it starts 1 Reports engine (rwEng). You can increase the number of engines to 2 to process 2 jobs simultaneously. You can do this through Oracle Enterprise Manager (via the browser), or change the following parameters directly in the server config file:
    file name: OH/reports/conf/<your_reports_server_name>.conf
    <engine id="rwEng" ... initEngine="1" maxEngine="1" minEngine="0" ... >
    You can either increase both initEngine and maxEngine to 2, or only maxEngine; in the latter case, the second Reports engine will be started only when needed.
    However, increasing the number of engines will obviously increase overhead. The optimum number of engines depends on your machine, load, required response characteristics, etc. Test with a few values to arrive at an optimum number.
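    As a minimal sketch of that edit (only the engine counts change; the elided attributes stay as shipped):

```xml
<!-- OH/reports/conf/<your_reports_server_name>.conf -->
<engine id="rwEng" ... initEngine="1" maxEngine="2" minEngine="0" ... >
```

    With only maxEngine raised, the second engine should start on demand the first time two jobs are queued at once.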
    I would recommend going through the Oracle Reports Tuning whitepaper
    http://otn.oracle.com/products/reports/htdocs/getstart/whitepapers/cb_tuning8.pdf
    Navneet.

  • What does the underlying task reported failure on exit mean?

    this appeared when i tried to do a repair on my disk...

    It means the repair failed. You don't say what you were repairing - just "repair" in Disk Utility?

  • Does the current SSRS version supports the xlsb excel file extension?

    I've found answers saying that SSRS doesn't support the xlsb file type, but they date back to 2013:
    https://social.msdn.microsoft.com/forums/sqlserver/en-US/94c57862-281e-480a-b442-c2e907778bd8/ssrs-excel-render-and-save-as-excel-binary-xlsb
    Does the current version of Reporting Services (2012) already support this kind of file? Where can I see the list of file types that Reporting Services supports?

    Hi zoldyk15,
    In Reporting Services 2012, .xlsb rendering is not supported. For this requirement, you could submit a feature request to Microsoft at
    https://connect.microsoft.com/SQLServer so that the product team can consider modifying and expanding the product's features based on your needs.
    If you have any question, please feel free to ask.
    Best regards,
    Qiuyun Yu
    TechNet Community Support

  • Performance Monitor Reports Missing in FDM

    Hi All -
    I'm trying to run the "Performance Monitor Reports" in FDM and am getting the below error messages for both types of reports. Any help on this is greatly appreciated. Thank you!!
    Here's the message from the error log:
    Invalid Report File: \\..\Apps\FDMLTD\Reports\PerformanceGraphAvgProcessTime.rpt
    I logged into the app server and looked in the folder, and .rpt isn't there.
    Same with the Min-Max report
    Error: Invalid Report File: \\..\Apps\FDMLTD\Reports\PerformanceGraphMinMaxProcessTime.rpt

    Can you please let me know the steps to import the report XML? I logged into the Workbench client, clicked View -> Reports, and tried to import, but I could not locate the required XML files for the Performance Monitor reports.
    I tried to find the steps in the help guide as well, but couldn't find anything. Thanks!

  • Does a Row Wise Initialised block affect the performance of OBIEE 10.1.3.2.0

    Hi,
    I am trying to implement security in my application using a row-wise initialisation block. Could anyone help me with the doubts below:
    1) Does it affect the performance of the OBIEE server?
    2) What are the cons of a row-wise initialisation block used for implementing security that calls a custom Oracle function? This initialisation block was created to implement 2-level security.
    Thanks
    Savita

    Savita,
    Answers to your questions are as follows:
    *1) Does it affect the performance of the OBIEE server?*
    No. This is part of the Oracle BI Repository modeling process for Oracle BI 10g.
    *2) What are the cons of a row-wise initialisation block used for implementing security that calls a custom Oracle function? This initialisation block was created to implement 2-level security.*
    There really aren't any cons. To get the security you are seeking, row-wise security in Oracle BI 10g is the route to take. In addition, this approach is widely used in Oracle BI 10g, with concurrent and total users/groups in an LDAP or security-group table ranging from hundreds to tens of thousands, with nothing but success. So don't worry about implementing this: your implementation will be sound if you configure row-wise security.

  • Does Coloring of Rows and Columns in a Query Affect the Performance of the Query

    Hi to all,
    Does coloring of rows and columns in a query, via WAD or Report Designer, affect the performance of the query?
    If yes, then how?
    What key factors should we consider while designing a query with regard to query performance?
    I shall be thankful to you for this.
    Regards
    Pavneet Rana

    There will not be any significant performance impact from coloring the rows or columns...
    But there are various performance parameters which should be looked into while designing a query...
    Hope this PPT helps you.
    http://www.comeritinc.com/UserFiles/file/tips%20tricks%20to%20speed%20%20NW%20BI%20%202009.ppt
    rgds, Ghuru

  • Do triggers affect performance

    Hi all,
    I have 3 triggers on the table: 2 of them are select triggers, and one inserts into other tables whenever there is an insert/update/delete, for each row.
    So if I do a data load of, say, 1 lakh (100,000) rows on the table which has the triggers,
    do the triggers slow down performance?
    thanks

    user11278505 wrote:
    Hi all,
    I have 3 triggers on the table... 2 of which are select triggers, and one inserts into other tables whenever there is an insert/update/delete, for each row. So if I do a data load of, say, 1 lakh rows on the table which has the triggers, do the triggers slow down performance?
    As compared to what?
    If the trigger is properly written, and if the action that the trigger is to perform would have been performed anyway by the application but at the application level, then perhaps a trigger could be faster.
    If the trigger is doing stuff that should have been done elsewhere (eg: instead of declarative constraints) then a trigger could be slower.
    If the trigger is doing stuff on a per-row basis that could be done better on a per-set basis, especially when handling bulk data (which is the specialty of an RDBMS, much to many developers' surprise), then realize we often turn off constraints for exactly that reason during bulk loads.
    If the trigger is doing stuff that it shouldn't be doing at all, well ... I let you figure that one out.
    In the meantime, I have no idea of the context of the question, and therefore the answer - like so often in Oracle - is 'it depends'.
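    The per-row versus per-set point can be illustrated outside Oracle entirely. Here is a toy Python sketch (all names invented) showing that a row-by-row "trigger" and a set-based aggregation reach the same result, but with a different number of writes to the target:

```python
# A per-row trigger fires once per inserted row; a set-based approach
# aggregates the batch first and touches the summary once per key.

from collections import Counter

rows = [("A", 10), ("A", 5), ("B", 7)]  # the batch being loaded

# Per-row style: the "trigger body" runs for every row (3 firings here).
summary_row_by_row = {}
firings = 0
for key, qty in rows:
    summary_row_by_row[key] = summary_row_by_row.get(key, 0) + qty
    firings += 1

# Per-set style: aggregate the whole batch in memory, then apply once per
# distinct key (2 writes here).
batch_totals = Counter()
for key, qty in rows:
    batch_totals[key] += qty
summary_per_set = dict(batch_totals)

assert summary_row_by_row == summary_per_set == {"A": 15, "B": 7}
print(f"{firings} trigger firings vs {len(summary_per_set)} set-based writes")
```

    In a database the difference is magnified, since each trigger firing carries its own context-switch and SQL overhead, which is why bulk loads often disable row-level machinery.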

  • Does the optimizer significantly degrade performance while running?

    Hi,
    I'm currently managing load tests on an application.
    I do a preliminary warm-up test of around 30 minutes, then I run a load test of about 45 minutes.
    I've run JRA with the evaluation key close to the 1-hour expiry, and I observe that the optimizer is still working intensively (it works around 5 minutes for a 5-minute recording!).
    I also observed that the more I run the load test without restarting the server, the better the performance (better response time with lower CPU consumption).
    I suppose this may be a positive effect of the optimization performed by JRockit, isn't it?
    Is it normal for the optimizer to take so much time to converge?
    Will the optimizer's activity decrease and eventually stop after a while?
    While the optimizer works, doing many optimizations all the time (~5 minutes of optimizing during a 5-minute JRA recording), is performance significantly affected?

    The optimizer is constantly watching the application and optimizing frequently used methods. On a large application it can take some time for it to find and optimize the required methods. One way to see the actions of the optimizer is to run with -Xverbose:opt, which will print each method as it is optimized. The optimizer does take some CPU time to do its work, but you will get that back since the optimized methods are quicker.
    Regards,
    /Staffan

  • Do custom views affect the performance of EBS?

    We are using EBS R12.0.6 with database 10g R2.
    Our developers are writing some views to summarize data, and those views are used in custom reports.
    Does this affect EBS performance?

    Technically, even SELECT 1 FROM dual will affect performance. The impact may be a picosecond, but it will certainly be there.
    My point is that it is very difficult to answer this question without knowing the kinds of views and the queries that are run against them. It is best to analyze the report request to identify the performance impact (if any).
    Regarding "views to summarize the data": keep in mind that for some summary-type reports it might be better to write materialized views instead of regular views, or to do the summarization on a data warehouse instance
    or a reporting instance if you have one.
    Sandeep Gandhi
