Data fetch taking time

hi,
I have a query: if one has to fetch all the data from heavy tables like FAGLFLEXA or BSEG, the select takes a lot of time and a timeout occurs in the foreground.
What should be done in such cases to make it faster?
I have a development which requires all the data to be fetched from FAGLFLEXA, after which a comparison on segment takes place. This report takes around 2-3 hours to execute in the background.
What approach do the standard tcodes follow when they hit the same tables?
shailendra.

Hi Shailendra,
In this case the best way is to run the report as a batch job.
The second best way is to fetch the data in chunks of 1000 records using the PACKAGE SIZE addition of SELECT. This gives better performance than fetching all the data from huge tables like BSEG in one shot.
Another thing: a select on cluster tables is always slower than on transparent tables. Standard programs use a lot of optimization techniques which we usually cannot analyze and reimplement under tight deadlines.
There are many FMs available to replace selects on cluster tables; we just have to search for the one that suits the requirement.
Thanks,
Vinod.
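As a minimal sketch of the PACKAGE SIZE approach described above (the field list and the WHERE condition are illustrative, not from the thread):

* Process BSEG in packets of 1,000 rows instead of one huge array.
DATA lt_bseg TYPE STANDARD TABLE OF bseg.

SELECT bukrs belnr gjahr buzei shkzg dmbtr
  FROM bseg
  INTO CORRESPONDING FIELDS OF TABLE lt_bseg
  PACKAGE SIZE 1000
  WHERE bukrs = '1000'     " illustrative company code
    AND gjahr = '2009'.    " illustrative fiscal year
  " lt_bseg holds the current packet of up to 1,000 rows;
  " process it here (aggregate, compare, append to a result table)
  " before the next packet overwrites it.
ENDSELECT.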

Similar Messages

  • Data update taking time

    I am running Oracle 8.1.6 on UnixWare 7.1.1.
    While running any SQL DML statement from the client end, it takes too much time to execute (around 3-4 min).
    At the server end no problem is found.
    During the problem period the size of the shared pool SQL area is 118,650,244.
    I have to restart the Unix server again to get rid of this problem.
    After that the size of the shared pool SQL area is 10,218,616.
    Could anybody help me to resolve this issue?
    Manoj

  • Discoverer LOV fetch taking too much time

    Retrieving the LOV is taking time; how can I fix it to fetch quickly?

    pradeep215 wrote:
    Retrieving the LOV is taking time; how can I fix it to fetch quickly?
    Please see old threads for the docs you need to refer to -- https://forums.oracle.com/forums/search.jspa?threadID=&q=Discoverer+AND+Performance&objID=c3&dateRange=all&userID=&numResults=15&rankBy=10001
    Also, please make sure the statistics are collected and up to date.
    Thanks,
    Hussein

  • Data load taking a lot of time

    Hi All,
    I am trying to upload 2LIS_02_SCL data. I am trying to refresh the entire data using an init load, and it is taking a lot of time in Production. This is creating a problem for the next background job, which runs at night for the deltas. Can anybody please guide me on how to speed up the load? This DataSource is connected to 2 InfoCubes and 1 ODS.
    Regards,
    Rajesh

    Hi Sal,
    I have done the same things as you said; my mistake was as follows.
    R/3:
    Locked all R/3 users.
    Deleted the setup tables with LBWG.
    Filled the setup tables using Document Date from 2006 to 2008 (I filled them month-wise).
    BW:
    I have deleted all the data from 0PUR_C01 and 0PUR_C04.
    Loaded data to the 2 cubes and 1 ODS at the same time using an init upload.
    It started taking a long time. The actual problem is that the load was not finished by the time the daily load process chain starts (at night).
    I cancelled the job and made only the delta available to the process chain.
    Now the problem has escalated: today I started the same init again, for 1 cube only, and it is again taking a long time.
    Regards,
    Rajesh

  • Cursor tips : refresh data fetched by the cursor within run time,discussion

    Hello there,
    Maybe what I am asking about is kind of weird, but here is the Q:
    let us assume that I have a cursor defined as follows:
    cursor emp_cursor is
            select EMPNO, ENAME, deptno
            from emp
            where deptno = 10 ;
    emp_rec  emp_cursor%rowtype;
    Then, inside the loop through the data fetched by this cursor, I need to update some records which might match the criteria of the cursor, by trying the following code segment:
    open emp_cursor;
          loop
            fetch emp_cursor into emp_rec;
            exit when emp_cursor%notfound;
            -- process the record's data
            DBMS_OUTPUT.Put_Line('count of the records: ' || emp_cursor%rowcount);
            DBMS_OUTPUT.Put_Line('deptno: ' || emp_rec.deptno);
            DBMS_OUTPUT.Put_Line('emp no: ' || emp_rec.empno);
            update emp
            set deptno = 10
            where empno = 7566;
            commit;
            DBMS_OUTPUT.Put_Line('after the update');
          end loop;
          close emp_cursor;
    But the run never shows the record of employee 7566 as one of the records of the cursor; maybe this is just how the cursor's execution plan works.
    For the mentioned case, though, I need those updated records to re-enter the cursor's data so they are processed (consider that the update statement is conditional, so it is not always executed).
    Any hints or suggestions?
    Best Regards,

    John Spencer wrote:
    Justin Cave wrote:
    Not possible. You'd have to close and re-open the cursor in order to pick up any changes made to the data after the cursor was opened.
    Then close and re-open it to get any changes made while processing the second iteration, then close and re-open it to get any changes made while processing the third iteration, ad infinitum :-)
    I'm not claiming there are no downsides to the process :-)
    On a serious note, though, the requirement itself is exceedingly odd. You most likely need to rethink your solution so that you remove this requirement. Otherwise, you're likely going to end up querying the same set of rows many, many, many times.
    Justin

  • Check Data Values, Generates Navigation Data steps taking more time

    Hi,
    In the master data load to 0MAT_PLANT:
    The step "Updating attributes for InfoObject 0MAT_PLANT" used to take a maximum of 23 minutes for a data package size of 100,000 records until three days ago, with the constituent steps "Check Data Values", "Process Time-Independent Attributes" and "Generates Navigation Data" taking 5, 4 and 12 minutes respectively.
    For the last three days the step "Updating attributes for InfoObject 0MAT_PLANT" has been taking 58-60 minutes for the same data package size of 100,000 records, with the constituent steps "Check Data Values", "Process Time-Independent Attributes" and "Generates Navigation Data" taking 12, 14 and 22 minutes respectively.
    Can anyone please explain what is needed to speed the data load back up to its previous timing, e.g. whether table indexes need to be built for the 'P' or 'SID' table of 0MAT_PLANT?
    An explanation of why the delay could be happening in the above steps would be really great.
    regards,
    Kulmohan

    Hi Kulmohan Singh,
    I think the difference between the first load and the second may be that the 100,000 records are already in 0MAT_PLANT.
    If you load into 0MAT_PLANT twice, the MODIFY statement has to loop over the Q table of 0MAT_PLANT, which already holds the 100,000 records, so the upload becomes slow.
    Hope this helps you.
    Regards,
    James.

  • I am extracting the data from ECC to BW, but the data load is taking a long time

    Hi All,
    I am extracting the data from ECC to the BI system, but the data load is taking a long time; the InfoPackage has been running for the last 6 hours and is still showing yellow. I manually set it to red, deleted the request, and applied a repeat of the last delta, but the same problem keeps coming. The status shows that the background job has not finished in the source system. We asked Basis, and they killed that job; we scheduled the chain again and the same problem came back. How can I solve this issue?
    Thanks ,
    chandu

    Hi,
    There are different places to track your job. Once your job is triggered in BW, you can track where exactly the load is taking more time and why. Follow the steps below:
    1) After the InfoPackage is triggered, take the request number and go to the source system to check your extraction job status.
    You can get the job status by taking the request number from BW and going to transaction SM37 in ECC. Give the request number surrounded by '*' wildcards, and give '*' for the user name as well.
    Job name:  *REQ_XXXXXX*
    User Name: *
    Check whether the job is completed, cancelled or short-dumped. If the job is still running, check in SM66 whether you can see any process; if not, check ST22 or SM21 in ECC accordingly. If the job is complete, check the same on the BW side.
    2) Check whether the data arrived in the PSA; if not, check whether the transfer routines or start routines have bad SQL or code. Do the same for the update rules.
    3) Once it is through the source system (ECC), transfer rules and update rules, the next task, updating the data targets, might take more time depending on some parameters (number of parallel processes used to update the database). Check whether updating the database is taking more time; you may also need to check with the DBA.
    At all times you should see at least one process running in SM66 until your job completes. If not, you will see a log in ST22.
    Let me know if you still have questions.
    Assigning points is the only way of saying thanks in SDN.
    Thanks,
    Kumar.

  • Fetching data requires refreshing 2 times

    Hi,
    I have created the report with default, report001, report002 and report003. When I do the first refresh it does not work and none of the data is fetched from the database. If I refresh again, it fetches the data and works fine.
    I have also used "refresh workbook", and it still does not fetch the data on the first refresh.
    We are using SP19 Patch 1, .NET 4.
    Please help me out.

    Hi Surya,
    this behaviour can happen if you have interconnections between data on different sheets, i.e. the first report needs results from another sheet but the refresh of the first report starts before the second.
    To find a solution, please see note 1908840 - An EPM Add-in report has to be refreshed twice to update data.
    Regards
         Roberto

  • OBIEE 11g - Data Fetch Issue.

    Hi,
    I am facing a weird issue with data fetching while executing reports in OBIEE 11.1.1.5. When I log in to the BI user interface and execute any report for the first time, everything works fine. But after that I am not able to execute any report. Even for the report executed the first time, when I rebuild and execute it I cannot see the data any more. The results view says: "The layout of this view combined with the data, selections, drills or prompt values chosen resulted in no data."
    Thanks in advance.
    Regards,
    Dev.

    Hi,
    I also encounter the same issue. When I view the combined layout, I get the "No Results" message, but when I edit the table layout I see that there are records returned. Has anyone had any luck in resolving this issue?
    Thanks!

  • Internal table data fetch

    hi,
    I'm fetching 10 records from a flat file into an internal table. My requirement is that the first column's data is passed to the table, and then I need to select the data of the 5th column from my internal table.
    Eg. column 1 = A1
          column 2 = A2
          column 3 = A3
          column 4 = A4
          column 5 = A5
          column 6 = A6
    I passed the A1 data, and next time I need to select A5. Is it possible?
    If anyone knows, please give me the solution.
    regards,
    bab

    Hi,
    Have a look at this [Convert rows into columns|http://docs.google.com/Doc?id=dfv2hmgs_5d6bcxqgp&hl=en] if you are looking to convert rows of an internal table into columns.
    Regards,
    Vikranth
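    A minimal ABAP sketch of one way to read the Nth column of an internal table dynamically, using ASSIGN COMPONENT (the row type and table name here are illustrative, not from the question):

    * Illustrative flat row type standing in for the file structure.
    TYPES: BEGIN OF ty_row,
             col1 TYPE string,
             col2 TYPE string,
             col3 TYPE string,
             col4 TYPE string,
             col5 TYPE string,
             col6 TYPE string,
           END OF ty_row.

    DATA: lt_data TYPE STANDARD TABLE OF ty_row,
          ls_data TYPE ty_row,
          lv_col  TYPE i VALUE 5.        " which column to read

    FIELD-SYMBOLS <lv_value> TYPE any.

    LOOP AT lt_data INTO ls_data.
      " Access the lv_col-th component of the row at runtime.
      ASSIGN COMPONENT lv_col OF STRUCTURE ls_data TO <lv_value>.
      IF sy-subrc = 0.
        WRITE: / <lv_value>.             " here: the value of column 5
      ENDIF.
    ENDLOOP.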

  • How can I restore data from a Time Capsule backup of a Snow Leopard Mac to a new MacBook running Lion?

    I have a Time Capsule that I used to back up my old computer, which was running Snow Leopard. My MacBook died, so I got a new one which is running Lion. How can I restore the data from the Time Capsule to my new Mac?

    In technical support, sometimes you have to make educated guesses. I'm sorry that you were offended.
    iTunes does prompt when it is going to erase a device, and the message is clear.
    She said in her message that she was able to successfully sync the old iPad. This indicated to me that iTunes wiping the data was not an issue, because either it had been set up at the Apple Store (in which case it doesn't actually wipe the iPad despite saying it will*) (*based on a single case I saw), or the iTunes media folder was migrated.
    Furthermore, my solution was to tell her how to back up her iPad (by either doing it manually or, as a last resort, by deleting the corrupt backup -- which she couldn't access anyway).
    I got that last part of the instructions from the "Taking Control of your iPhone" book, which I found samples of when I did a Google search for "corrupted backup itunes".
    She marked this as a solution, so it worked for her.

  • Error "Conversion failed when converting date and/or time from character string" to execute one query in sql 2008 r2, run ok in 2005.

    I have a table-valued function that runs on SQL 2005; when I try to execute it on SQL 2008 R2, it returns "Conversion failed when converting date and/or time from character string".
    USE [Runtime]
    GO
    /****** Object:  UserDefinedFunction [dbo].[f_Pinto_Graf_P_Opt]    Script Date: 06/11/2013 08:47:47 ******/
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    CREATE   FUNCTION [dbo].[f_Pinto_Graf_P_Opt] (@fechaInicio datetime, @fechaFin datetime)  
    -- Declare the table "@Produc_Opt" that will be returned by the function
    RETURNS @Produc_Opt table ( Hora datetime,NSACOS int, NSACOS_opt int)
    AS  
    BEGIN 
    -- Create the cursor
    DECLARE cursorHora CURSOR
    READ_ONLY
    FOR SELECT DateTime, Value FROM f_PP_Graficas ('Pinto_CON_SACOS',@fechaInicio, @fechaFin,'Pinto_PRODUCTO')
    -- Declare local variables
    DECLARE @produc_opt_hora int
    DECLARE @produc_opt_parc int
    DECLARE @nsacos int
    DECLARE @time_parc datetime
    -- Initialize variables
    SET @produc_opt_hora = (SELECT * FROM f_Valor (@fechaFin,'Pinto_PRODUC_OPT'))
    -- Open the cursor and build its result set
    OPEN cursorHora
    -- Start the calculations
    FETCH NEXT FROM cursorHora INTO @time_parc,@nsacos
    /************  WHILE LOOP THAT MOVES THROUGH THE CURSOR  ************/
    WHILE (@@fetch_status <> -1)
    BEGIN
    IF (@@fetch_status = -2)
    BEGIN
    -- Stop execution
    BREAK
    END
    -- PERFORM CALCULATIONS
    SET @produc_opt_parc = (SELECT dbo.f_P_Opt_Parc (@fechaInicio,@time_parc,@produc_opt_hora))
    -- INSERT VALUES INTO THE TABLE
    INSERT @Produc_Opt VALUES (@time_parc,@nsacos, @produc_opt_parc)
    -- Advance the cursor
    FETCH NEXT FROM cursorHora INTO @time_parc,@nsacos
    END
    /************  END OF THE LOOP THAT MOVES THROUGH THE CURSOR  ***************/
    -- Close the cursor
    CLOSE cursorHora
    -- Deallocate the cursor
    DEALLOCATE cursorHora
    RETURN 
    END

    You can search the forums for that error message and find previous discussions - they all boil down to the same problem. Somewhere in the query that calls this function, the code implicitly converts from string to date/datetime. In general, this works in any version of SQL Server if the runtime settings are correct for the format of the string data. The fact that it works on one server and not on another suggests that the query executes with different settings - I'll assume for the moment that the format of the data involved in this conversion is consistent within the database/result set and consistent between the 2 servers.
    I suggest you read Tibor's guide to the datetime datatype (via the link to his site below) first - then go find the actual code that performs this conversion. It may not be in the function you posted, since that function also executes other functions.
    You also did not post the query that calls this function, so this function may not, in fact, be the source of the problem at all.
    Tibor's site

  • One website takes too much time to load its pages while other sites load quickly; do I need to enable or change any settings?

    On one website it takes too much time to load the page; on another PC the same site loads without delay (with Internet Explorer). On my PC other websites open quickly, but this website takes too much time with Firefox.

    Zepo wrote:
    My iMac has been overwhelmed almost since I bought it new. After some digging the Genius Bar suggested it's my Aperture library being on the same internal terabyte drive as my operating system.
    Having a single internal hard drive overfilled (drives slow as they fill) is very likely contributing to your problems, but IMO "my Aperture library being on the same internal terabyte drive as my operating system" is very unlikely to be contributing. In fact the Library should stay on an underfilled internal drive (roughly, for speed, I would call ~half full "underfilled"), not on the Drobo.
    Instead build a referenced-Masters workflow with the Library and OS on an internal drive and the Masters on the Drobo, on OS 10.6.8 (there have been issues reported with OS 10.7 Lion). Keep a Vault backup of the Library on the Drobo, and of course back up all Drobo data off site.
    No matter what you do with I/O, your C2D Mac is not a strong box for Aperture performance. If you want to really rock Aperture, move to one of the better 2011 Sandy Bridge Macs, install 8 GB or more of RAM and build a referenced-Masters workflow with the Library and OS on an internal solid state drive (SSD).
    Personally I would prefer investing in a Thunderbolt RAID rather than in a Drobo, but each individual makes his/her own speed/cost decisions. The Drobo should work OK for referenced Masters even though I/O is limited by the FireWire connection.
    Do not forget the need for off-site backup. And I suggest that, in the process of moving to your new setup, it is most important to get the data safely and redundantly copied, and generally best to disregard how long it may take.
    HTH
    -Allen Wicks

  • Data fetching from BSEG table

    Hi,
    I have used Smart Forms to generate a supplier payment statement for the finance department. The program takes a long time to generate it.
    I think the problem comes from the data fetch on the BSEG table, because it has many records for one vendor ID.
    I want to reduce this duration.
    Please guide me.

    Have you tried this selection in SE16? I'm quite sure that it will take a long time.
    The problem has been explained in this group before, and I think you should search for BSEG in the answers given.
    As a hint: it has to do with the selection universe. You are restricting only BUKRS from the primary key; all the other restrictions in your WHERE clause are filters that are applied on SAP's side (not on the database side). The problem is that BSEG isn't stored as separate fields in the RDBMS, but as a table with the primary key and a stream of bits in a raw field.
    You should review and change the logic you're using before reading BSEG. It's the only way you'll improve the performance of this select. For example, you could use one or more secondary index tables (BSI* or BSA*) to retrieve BELNR and then access BSEG with a better WHERE clause.
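    As a hedged sketch of that approach for a vendor statement (BSIK holds open vendor items and BSAK cleared ones; company code, vendor number and field lists are illustrative):

    * Read the vendor's document keys from a secondary index table,
    * then hit BSEG with the full primary key instead of filtering
    * the whole cluster on SAP's side.
    DATA: lt_keys TYPE STANDARD TABLE OF bsik,
          lt_bseg TYPE STANDARD TABLE OF bseg.

    SELECT bukrs belnr gjahr buzei
      FROM bsik
      INTO CORRESPONDING FIELDS OF TABLE lt_keys
      WHERE bukrs = '1000'          " illustrative company code
        AND lifnr = '0000100001'.   " illustrative vendor number

    IF lt_keys IS NOT INITIAL.      " FOR ALL ENTRIES needs a non-empty table
      SELECT bukrs belnr gjahr buzei lifnr shkzg dmbtr
        FROM bseg
        INTO CORRESPONDING FIELDS OF TABLE lt_bseg
        FOR ALL ENTRIES IN lt_keys
        WHERE bukrs = lt_keys-bukrs
          AND belnr = lt_keys-belnr
          AND gjahr = lt_keys-gjahr
          AND buzei = lt_keys-buzei.
    ENDIF.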

  • Insert statement taking time on oracle 10g

    Hi,
    My procedure is taking time in the following statement after upgrading the database from Oracle 9i to Oracle 10g.
    I am using Oracle version 10.2.0.4.0.
    cust_item is a materialized view that is refreshed in the procedure.
    The index on the cust_item_tbl table is dropped before inserting the data and re-created after the insert.
    There are almost 6 lakh (600,000) records in the MV which are going to be inserted into the table.
    In 9i the insert statement below takes 1 hr; in 10g it takes 2.30 hrs.
    EXECUTE IMMEDIATE 'ALTER SESSION ENABLE PARALLEL QUERY';
    EXECUTE IMMEDIATE 'ALTER SESSION ENABLE PARALLEL DML';
    INSERT INTO /*+ APPEND PARALLEL */ cust_item_tbl  NOLOGGING
             (SELECT /*+ PARALLEL */
                     ctry_code, co_code, srce_loc_nbr, srce_loc_type_code,
                     cust_nbr, item_nbr, lu_eff_dt,
                     0, 0, 0, lu_end_dt,
                     bus_seg_code, 0, rt_nbr, 0, '', 0, '', SYSDATE, '', SYSDATE,
                     '', 0, ' ',
                                   case
                                 when cust_nbr in (select distinct cust_nbr from aml.log_t where CTRY_CODE = p_country_code and co_code = p_company_code)
                                 THEN
                                         case
                                            when trunc(sysdate) NOT BETWEEN trunc(lu_eff_dt) AND trunc(lu_end_dt)
                                            then NVL((select cases_per_pallet from cust_item c where c.ctry_code = a.ctry_code and c.co_code = a.co_code
                                                          and c.cust_nbr = a.cust_nbr and c.GTIN_CO_PREFX = a.GTIN_CO_PREFX and c.GTIN_ITEM_REF_NBR = a.GTIN_ITEM_REF_NBR
                                                          and c.GTIN_CK_DIGIT = a.GTIN_CK_DIGIT and trunc(sysdate) BETWEEN trunc(c.lu_eff_dt) AND trunc(c.lu_end_dt) and rownum = 1),
                                                          a.cases_per_pallet)
                                      else cases_per_pallet
                                  end
                          else cases_per_pallet
                     END cases_per_pallet,
                     cases_per_layer
                FROM cust_item a
               WHERE a.ctry_code = p_country_code ----varible passing by procedure
                 AND a.co_code = p_company_code   ----varible passing by procedure
                 AND a.ROWID =
                        (SELECT MAX (b.ROWID)
                           FROM cust_item b
                          WHERE b.ctry_code = a.ctry_code
                            AND b.co_code = a.co_code
                            AND b.ctry_code = p_country_code ----varible passing by procedure
                            AND b.co_code = p_company_code   ----varible passing by procedure
                            AND b.srce_loc_nbr = a.srce_loc_nbr
                            AND b.srce_loc_type_code = a.srce_loc_type_code
                            AND b.cust_nbr = a.cust_nbr
                            AND b.item_nbr = a.item_nbr
                            AND b.lu_eff_dt = a.lu_eff_dt));
    Explain plan from Oracle 10g:
    Plan
    INSERT STATEMENT CHOOSE  Cost: 133,310  Bytes: 248  Cardinality: 1
         5 FILTER
              4 HASH GROUP BY  Cost: 133,310  Bytes: 248  Cardinality: 1
                   3 HASH JOIN  Cost: 132,424  Bytes: 1,273,090,640  Cardinality: 5,133,430
                        1 INDEX FAST FULL SCAN INDEX MFIPROCESS.INDX_TEMP_CUST_AUTH_PERF_MV  Cost: 10,026  Bytes: 554,410,440  Cardinality: 5,133,430
                        2 MAT_VIEW ACCESS FULL MAT_VIEW MFIPROCESS.TEMP_CUST_AUTH_PERF_MV  Cost: 24,570  Bytes: 718,680,200  Cardinality: 5,133,430
    Can you please look into the issue?
    Thanks.

    According to the execution plan you posted, parallelism is not taking place - no parallel operations are listed.
    Check the hint syntax. In particular, "PARALLEL" does not look right.
    Running queries in parallel can help performance, hurt performance, or do nothing for it. In your case a parallel index scan on MFIPROCESS.INDX_TEMP_CUST_AUTH_PERF_MV using the PARALLEL_INDEX hint, plus the PARALLEL hint naming the table for MAT_VIEW MFIPROCESS.TEMP_CUST_AUTH_PERF_MV, might help - something like (untested):
    select /*+ PARALLEL_INDEX(INDX_TEMP_CUST_AUTH_PERF_MV) PARALLEL(TEMP_CUST_AUTH_PERF_MV) */
    Is query rewrite causing the MVs to be read? If so, hinting the query will be tricky.
