3.x Workbooks taking longer and longer to run each week

Hey all,  I have a user who has embedded 5 versions of the same query into a workbook.  He runs this workbook every Monday.  When he first created the workbook it took 30 minutes to run.  Each week that goes by, the workbook takes longer and longer to run, eventually reaching a runtime of 2 hours.  Periodically my user has to go and make a change to the workbook, and after he recreates the workbook it goes back to taking 30 minutes to run.
Is there some kind of buffer filling up that I don't know about?  Is there a way I can refresh the workbook so that the runtime doesn't creep like it is doing?
Thanks
Adam


Similar Messages

  • SQLite Inserts Taking Longer and Longer

    The application I'm working on makes repeated calls to a webservice to get data that is then cached in a local sqlite db for the user. Once the db hits ~5mb it starts taking painfully long to run each set of inserts. Calls to the webservice remain quick. There had been an issue with XML not being garbage collected, but I fixed that and now the profiler shows consistent memory usage.
    I've tried running the inserts with indexes on and off. I've tried batching the inserts in transactions of 100, or the entire set. The db calls are synchronous.
    Running the queries against the database directly (not through the AIR application) suggests that there isn't a slowdown at the ~5mb mark there, which is consistent with my experience with SQLite. Restarting the application and continuing to download data into an existing project does not resolve the issue; it starts off slow.
    So... does anyone have ideas of other things to try to get insert performance up to a reasonable level? Has anyone else run into similar issues? Is anyone inserting into 20mb+ dbs and not seeing degrading performance?
    Thanks for the help!

    Guess I posted prematurely. Looking closer I realized there was a select happening during this process against a text column without an index. The slowdown was just the increasing cost of looping through the entire dataset looking at strings that often shared a fairly sizable starting substring. Chalk another problem up to the importance of appropriate indexes in your db!
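    For reference, a minimal SQLite sketch of the fix described above; the table and column names are assumptions for illustration, not the poster's actual schema:
    -- Hypothetical schema: cached_items(item_key TEXT, payload TEXT).
    -- Without an index on item_key, the lookup below scans every row and
    -- compares strings that share a long prefix; with it, it becomes an index seek.
    CREATE INDEX IF NOT EXISTS idx_cached_items_key ON cached_items(item_key);
    SELECT rowid FROM cached_items WHERE item_key = ?;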

  • Fwrite() and fread() of a shared FAT32 formatted file is taking long time in MAC osx Lion C program

    Hi
    Is there any provision or API on the Mac to open a file in shared mode, the same as on Windows:
       hUSBdrive = CreateFile(pDriveName, GENERIC_READ | GENERIC_WRITE, FILE_SHARE_READ | FILE_SHARE_WRITE, NULL, OPEN_EXISTING, FILE_FLAG_NO_BUFFERING, NULL);
    We have the following scenario where a file is shared between two processes for read/write: one process is running on Linux and the other on a Mac, and both processes read/write the same location in the file, say "X".
    A FAT32-formatted raw data file located on the device is shared between the two processes.
    One process runs on a Linux device that is connected to the MacBook through USB. In this Linux process the file is opened using fopen(), and we use fcntl() with the O_DIRECT flag. This process continuously reads/writes data at location "X" in the shared file.
    The other process runs on the Mac; it is a simple C program that opens the file on the connected device (i.e. the USB drive) and reads/writes data using fread()/fwrite(). fopen() is used to open the file and the F_NOCACHE flag is used to avoid caching.
    When the value at location "X" is updated by the Mac using fwrite(), the Linux process takes around 30 seconds to read the updated value using fread().
    If the value at location "X" is updated by the Linux process using fwrite(), the Mac process also takes a long time, more than a minute, to read the updated value using fread().
    fwrite()/fread() on the Mac take a long time, whereas the Windows application that uses the equivalent APIs takes milliseconds.
    Do we need to use other APIs or flags to open the file?
    Thanks in advance.

    Has anyone else faced this kind of problem?
    fwrite() and fread() taking a long time?
    Is there any problem with reading/writing a FAT32 file on the Mac?

  • HT201250 my time capsule is taking too much time indexing backup and then taking longer time to back up ( 207 days ) or longer !!! what shall i do ?

    My Time Capsule is taking too much time indexing the backup and then taking even longer to back up (207 days or longer!). What shall I do?

    Try 10.7.5 supplemental update.
    This update seems to have solved this problem for many.
    Best.

  • I just received my i phone 5 and it seems to be taking long for the apps to download with iCloud. Is that right?

    I just received my iPhone 5 and am downloading my apps with iCloud, and it seems to be taking long... is that right?

    Hello JeenC,
    I found an article with steps you can take when you experience issues with attachments on your iCloud calendar.
    I recommend reviewing the steps in the section titled "Troubleshooting Calendar attachments" in the following article:
    iCloud: Using and troubleshooting Calendar attachments
    http://support.apple.com/kb/HT5373
    Thank you for using Apple Support Communities.
    Best,
    Sheila M.

  • Export (exp) taking long time and reading UNDO

    Hi Guys,
    Oracle 9.2.0.7 on AIX 5.3
    A schema-level export job is scheduled at night. Since the day before yesterday it has been taking a really long time. It used to finish in 8 hours or so, but yesterday it took around 20 hours and was still running. The schema size to be exported is around 1 TB. (I know it is a bit stupid to take such daily exports, but customer requirement, you know ;) ) Today it is again still running, although I scheduled it to start an hour and a half earlier.
    The command used is:
    exp userid=abc/abc file=expabc.pipe buffer=100000 rows=y direct=y
    recordlength=65535 indexes=n triggers=n grants=y
    constraints=y statistics=none log=expabc.log owner=abc
    I have monitored the session, and the wait event is db file sequential read the whole time. From p1 I figured out that all the datafiles it reads belong to the UNDO tablespace. What surprises me is: when consistent=y is not specified, should it go read UNDO so frequently?
    There is a total of around 1800 tables in the schema; from the export log I can see that it exported around 60 tables and has been stuck since then. Neither the logfile nor the dumpfile has been updated for a long time.
    Any hints or clues on which direction to take the diagnosis, please.
    Any other information required, please let me know.
    Regards,
    Amardeep Sidhu

    Thanks Hemant.
    As i wrote above, it runs from a cron job.
    Here is the output from a simple SQL querying v$session_wait & v$datafile:
    13:50:00 SQL> l
      1* select a.sid,a.p1,a.p2,a.p3,b.file#,b.name
      from v$session_wait a,v$datafile b where a.p1=b.file# and a.sid=154
    13:50:01 SQL> /
           SID         P1         P2         P3      FILE# NAME
           154        509     158244          1        509 /<some_path_here>/undotbs_45.dbf
    13:50:03 SQL> /
           SID         P1         P2         P3      FILE# NAME
           154        509     157566          1        509 /<some_path_here>/undotbs_45.dbf
    13:50:07 SQL> /
           SID         P1         P2         P3      FILE# NAME
           154        509     157016          1        509 /<some_path_here>/undotbs_45.dbf
    13:50:11 SQL> /
           SID         P1         P2         P3      FILE# NAME
           154        509     156269          1        509 /<some_path_here>/undotbs_45.dbf
    13:50:16 SQL> /
           SID         P1         P2         P3      FILE# NAME
           154        508     167362          1        508 /<some_path_here>/undotbs_44.dbf
    13:50:58 SQL> /
           SID         P1         P2         P3      FILE# NAME
           154        508     166816          1        508 /<some_path_here>/undotbs_44.dbf
    13:51:02 SQL> /
           SID         P1         P2         P3      FILE# NAME
           154        508     165024          1        508 /<some_path_here>/undotbs_44.dbf
    13:51:14 SQL> /
           SID         P1         P2         P3      FILE# NAME
           154        507     159019          1        507 /<some_path_here>/undotbs_43.dbf
    13:52:09 SQL> /
           SID         P1         P2         P3      FILE# NAME
           154        506     193598          1        506 /<some_path_here>/undotbs_42.dbf
    13:52:12 SQL> /
           SID         P1         P2         P3      FILE# NAME
           154        506     193178          1        506 /<some_path_here>/undotbs_42.dbf
    13:52:14 SQL>
    Regards,
    Amardeep Sidhu
    Edited by: Amardeep Sidhu on Jun 9, 2010 2:26 PM
    Replaced a few paths with <some_path_here> ;)
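    One way to follow up on the undo reads shown above is to check, while the export is running, for long-running open transactions whose changes the export's consistent reads would have to roll back. A hedged sketch against the 9i dynamic performance views (the column list is illustrative, not from the original thread):
    SELECT s.sid, s.username, t.start_time, t.used_ublk, t.used_urec
    FROM   v$transaction t, v$session s
    WHERE  s.taddr = t.addr
    ORDER  BY t.used_ublk DESC;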

  • HT4623 ios update is taking longer hours, corrupting other applications and failing

    The iOS update is taking hours, corrupting other applications, and failing. What can I do, please?

    Make sure you're using the latest version of iTunes.

  • After updating to latest firefox computer on internet running slower,taking longer to go to links and sites.also if i click on an email link the link keeps refreshing as mailto and wont stop opening new tabs very quickly,have to shut down to stop it.

    Since installing the latest update, everything has slowed down when using the internet and email, and it is taking long to go to websites and to upload pictures to emails.
    Also, when I click on an email link to send an email, the tab bar keeps refreshing and opening new tabs very quickly; all I can read, because it is going so fast, is "mailto". This has happened several times on various websites; I clicked on an "email us" link on grangerhertzog.com yesterday and it did it.

    You get that problem if you select the Firefox program to handle a file if you get an "open with" dialog.
    *https://support.mozilla.org/kb/Firefox+keeps+opening+many+tabs+or+windows

  • Apps taking longer to startup and use

    In my main OS X user account, some apps (Office, sometimes Safari, and others) are taking longer than normal to start up and use. I booted into my administrator account and everything there is speedy, so this appears to be a software issue somewhere. I am going to repair disk permissions, but I don't think that will do the trick. It also takes years to perform this task in Snow Leopard; Tiger could do the job in half the time it takes Snow Leopard. Anyone got other suggestions to try? I am going to do some googling.
    Thanks,
    John

    See:
    Mac Maintenance Quick Assist,
    Mac OS X speed FAQ,
    Speeding up Macs,
    Macintosh OS X Routine Maintenance
    Essential Mac Maintenance: Get set up,
    Essential Mac Maintenance: Rev up your routines,
    Maintaining OS X, and
    Myths of required versus not required maintenance for Mac OS X for information.

  • Maintenance jobs taking longer and longer

    On 2 of our GroupWise servers the weekly maintenance jobs are taking much longer to run.
    What used to finish before I got in at 7:30 AM is still running now at 4 PM on one of the servers. Last Monday they finished by noon.
    We are running GroupWise 8.03 on Netware.
    The two servers have about 300 Gig of data each.
    The issue of maintenance jobs running later started a few months ago, but today is the worst.
    Should I increase the GWWorker threads to something higher? I think I may want to push the startup times earlier too.
    Other ideas?
    thanks
    Phil J


  • SSRS Reports taking long time to load

    Hello,
    Problem : SSRS Reports taking long time to load
    My System environment : Visual Studio 2008 SP1  and SQL Server 2008 R2
    Production Environment : Visual Studio 2008 SP1  and SQL Server 2008 R2
    I have created a parameterized report (6 parameters); it fetches data from one table. The table has 1 year and 6 months of data, and I am selecting parameters for only 1 month (about 2500 records). It is taking almost 2 minutes and 30 seconds to load the report.
    This report runs efficiently on my system (the report loads in only 5 to 6 seconds), but in production it takes 2 minutes 30 seconds.
    I have checked the execution log from production and found these timings:
    Data retrieval (approx): 10 sec; Processing (approx): 15 sec; Rendering (approx): 2 min 5 sec.
    The confusing point is that if I run the same report at a different time, the overall time is the same (approx 2 min 30 sec), but the split is:
    Data retrieval (approx): more than 1 min; Processing (approx): 15 sec; Rendering (approx): more than 1 min.
    So question 1: why are the timings different?
    My doubts are:
    1) If the query (the procedure that retrieves the data) were the problem, it should always take more time.
    2) If the report structure were the problem, rendering should always take the same (long) time.
    For the second point, I read on a blog that rendering depends on the environment, e.g. network bandwidth, RAM, CPU usage, and the number of users accessing the same report at a time.
    So I tested the report when no other user was working on any report, but the result was the same (output in 2 min 30 sec).
    From the network team I got the result that there is no issue or overload in CPU usage or RAM, and no issue with network bandwidth.
    The production database server and report server are different machines (but on the same network).
    I checked the database server and SQL Server is using almost all of the RAM (23 GB out of 24 GB).
    I tried reducing the allocated memory down to 2 GB (a trial solution I got from blogs), but this also failed.
    One hint I got from a colleague was to change the allocated memory setting for SQL Server from static memory to dynamic
    (I guess the above point is the same), but I could not find that static/dynamic memory setting option.
    I did the steps below:
    Connected to the SQL Server instance.
    Right-clicked the instance, went to Properties, then to the Memory tab.
    I found three options: 1) Server memory, 2) Other memory, 3) a section for "Configured values and Running values".
    Then I tried to reduce Maximum server memory to 2 GB (as mentioned above).
    All trials failed, and I could not find the root cause of this issue.
    Can anyone please help (it's a bit urgent)?
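    As an aside, the timing split quoted above can be pulled straight from the report server catalog; this is a hedged sketch, assuming the SSRS 2008 R2 ExecutionLog2 view in the ReportServer database, with a placeholder report name:
    SELECT TOP 20 TimeStart, TimeDataRetrieval, TimeProcessing, TimeRendering, Status
    FROM   ReportServer.dbo.ExecutionLog2
    WHERE  ItemPath LIKE '%MyReport%'   -- placeholder report name
    ORDER  BY TimeStart DESC;
    The Time* columns are reported in milliseconds.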

    Hi UdayKGR,
    According to your description, your report takes too long to load on your production environment. Right?
    In this scenario, since the report runs quickly in the development environment, we would initially suspect an issue with data retrieval. However, based on the information in the execution log, the longest time is spent on rendering. So we suggest you optimize the report itself to reduce the rendering time. Please refer to the link below:
    My report takes too long to render
    Here is another article about overall performance optimization for Reporting Services:
    Reporting Services Performance and Optimization
    If you have any question, please feel free to ask.
    Best Regards,
    Simon Hou
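    For the "static versus dynamic memory" step described in the question, the counterpart of the Memory page in Management Studio is sp_configure; a hedged T-SQL sketch, where the 2048 MB cap is only an example value:
    -- Cap SQL Server's max server memory (value in MB); size it to leave
    -- headroom for the operating system and the Reporting Services service.
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max server memory (MB)', 2048;
    RECONFIGURE;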

  • Connecting to the database taking long time to connect database server

    Hi
    When I execute a procedure I get the message below at the bottom of Oracle SQL Developer:
    "Connecting to the database"
    It takes more than 10 minutes. Please guide.

    Hi
    Have you installed a normal Oracle Client also on your host? A normal Oracle Client.
    Did you connect with host:port:sid or with an Oracle naming service? Through a TNS service.
    Can you test tnsping <alias>? Yes, it is working fine.
    Do other users have the same problem? Yes.
    Did you connect through a WAN or LAN connection? LAN (intranet).
    Can you tell us more about your client/database setup?
    Database setup:
    OS: Windows Server 2008
    Version: 11.1.0
    Client: 11.1.0
    OS: Windows Server 2008
    Now I am not able to execute a single SELECT query against a table that contains 6 records and 15 columns; it takes a long time, and I have waited 30 minutes with still no results.
    Only one table is behaving like this; the rest are working fine.
    Edited by: user9235224 on Oct 6, 2012 7:06 PM
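    A hedged diagnostic sketch for the hanging SELECT: from a second connection, check what the stuck session is waiting on (the :hung_sid bind variable is a placeholder for that session's SID):
    SELECT sid, event, state, seconds_in_wait
    FROM   v$session_wait
    WHERE  sid = :hung_sid;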

  • The ODS activation is taking long time

    Hi,
    We are on SAP NetWeaver BI 701 (Support Package 5).
    We created a Z ODS; it will contain a lot of data (180,000,000 records at month-end) and we want to generate specific reports on it.
    The activation is taking a long time, and I assume it is because we checked the flag "SIDs Generation upon Activation". I am confused about this check. Do I really need it? Is this check the only problem?
    Thanks for your help.
    Victoria

    Hi Victoria:
    If your Z DSO is used only for staging purposes (you don't have queries based on this DSO and you send the data on to another DSO or to an InfoCube), then you don't need to check the "SIDs Generation Upon Activation" box.
    Moreover, to achieve better performance during data loads in this scenario, you might consider using a write-optimized DSO instead of a standard DSO; but if you take this alternative, don't forget to select the "Do Not Check Uniqueness of Data" box if you need to write several records with the same semantic key.
    Regards,
    Francisco Milán.

  • F4 Help is taking long time

    Hi All,
    We are working on BI version 7.0.
    In the variable pop-up screen we have two InfoObjects:
    1. Fiscal Year Period
    2. JOA (Joint Operating Agreement)
    If you press F4 for JOA, it takes a long time to execute and finally the application closes. The same situation occurs in RSRT as well.
    If I run it without JOA the query gives output, but here I have to restrict the query by JOA.
    I have changed the JOA properties in Query Designer:
    Query execution for filter value selection = Values in master data table
    but the situation is still the same.
    Could you please suggest a solution for this?
    Thanks & Regards,
    PK

    Hi Kamal,
    You can set that at the query level in the query designer for each query.
    1. Select the corresponding characteristic in the query designer.
    2. Go to the "Extended" tab in the properties.
    3. Select "Values in master data table" for "Query execution for filter value selection".
    Also see some recommendations:
    Note 748623 - Input help (F4) has a very long runtime - recommendations
    Hope this helps.
    CK

  • Update ztable is taking long time

    Hi All,
    I have run 5 jobs with the same program at the same time, but when we check the DB trace,
    ZS01 is taking a long time, as shown below. Here ZS01 holds only a small amount of data.
    In the DB trace below, the access to ZS01 shows a duration of 2,315,485. How can this be reduced?
    HH:MM:SS.MS Duration     Program   ObjectName  Op.   Curs   Array   Rec     RC     Conn     
    2:36:15 AM     2,315,485     SAPLZS01  ZS01       FETCH  294     1     1     0     R/3     
    The code as shown below
    you can check the code in the program SAPLZS01 include LZS01F01.
    FORM UPDATE_ZS01.
    IF ZS02-STATUS = '3'.
        IF Z_ZS02_STATUS = '3'.            "previous status is ERROR
          EXIT.
        ELSE.
          SELECT SINGLE FOR UPDATE * FROM  ZS01
                 WHERE  PROC_NUM    = ZS02-PROC_NUM.
          CHECK SY-SUBRC = 0.
          ADD ZS02-MF_AMT TO ZS01-ERR_AMT.
          ADD 1           TO ZS01-ERR_INVOI.
          UPDATE ZS01.
        ENDIF.
      ENDIF.
    My question is: why does updating the Z table take such a long time,
    and how can the time be reduced, i.e. how can the update of the Z table be made faster?
    Thanks in advance,
    regards
    Suni

    Try the code like this:
    DATA: WA_ZS01 TYPE ZS01.
    FORM UPDATE_ZS01.
      IF ZS02-STATUS = '3'.
        IF Z_ZS02_STATUS = '3'.            "previous status is ERROR
          EXIT.
        ELSE.
          " Change: fetch into an explicit work area instead of the table work area ZS01
          SELECT SINGLE FOR UPDATE * FROM ZS01
                 INTO WA_ZS01
                 WHERE PROC_NUM = ZS02-PROC_NUM.
          CHECK SY-SUBRC = 0.
          ADD ZS02-MF_AMT TO WA_ZS01-ERR_AMT.
          ADD 1           TO WA_ZS01-ERR_INVOI.
          UPDATE ZS01 FROM WA_ZS01.
        ENDIF.
      ENDIF.
    ENDFORM.
    And I think this SELECT on ZS01 sits inside a SELECT loop over ZS02,
    which might also slow down the process.
    Whenever you access the database, use a work area or internal table to fetch the data
    and work with that.
    Accessing the database like this, or with SELECT ... ENDSELECT, is inefficient programming.
