Performance Issues - Stupid Question!?!

Below are the specs for my MacBook. My question is about the speed of our Mac. When booting up, the Mac can take anywhere from 60 to 95 seconds. When getting on the internet, Safari can take 30 to 45 seconds to load completely and become usable. iPhoto and iMovie each take up to a minute to load before I can even view my photos. We have around 13,000 photos stored on the laptop and close to 50 GB of music. Could these be the culprits sapping my performance?
Any suggestions on how to improve performance so that things do not take so long? When we first purchased our MacBook we did not have any significant delays in using the applications I have mentioned; loading was almost instantaneous and they were immediately available for use.
Model Identifier: MacBook5,1
Processor Name: Intel Core 2 Duo
Processor Speed: 2.4 GHz
Number Of Processors: 1
Total Number Of Cores: 2
L2 Cache: 3 MB
Memory: 2 GB
Bus Speed: 1.07 GHz
Boot ROM Version: MB51.007D.B03
SMC Version (system): 1.32f8
Macintosh HD:
Capacity: 232.57 GB
Available: 19 GB
Writable: Yes
File System: Journaled HFS+
BSD Name: disk0s2
Mount Point: /

First, you don't have enough free space on your hard drive. Keep free a minimum of 12 GB or 10% of the hard drive's capacity, whichever is greater; for your 232.57 GB drive that is about 23 GB, and you have only 19 GB available. Second, unless you are loading all 13,000 photos and all 50 GB of music at the same time, their existence on the hard drive doesn't do anything but take up space.
Try some maintenance:
Kappy's Personal Suggestions for OS X Maintenance
For disk repairs use Disk Utility. For situations DU cannot handle, the best third-party utilities are Disk Warrior, TechTool Pro, and Drive Genius. Disk Warrior only fixes problems with the disk directory, but most disk problems are caused by directory corruption; Disk Warrior 4.x is Intel Mac compatible. TechTool Pro provides additional repair options including file repair and recovery, system diagnostics, and disk defragmentation; TechTool Pro 4.5.1 or higher is Intel Mac compatible. Drive Genius is similar to TechTool Pro in terms of the repair services provided; versions 1.5.1 or later are Intel Mac compatible.
OS X performs certain maintenance functions that are scheduled to occur on a daily, weekly, or monthly basis. The maintenance scripts run in the early AM, and only if the computer is turned on 24/7 (no sleep). If this isn't the case, an excellent solution is to download and install a shareware utility such as Macaroni, JAW PseudoAnacron, or Anacron that will automate the maintenance activity regardless of whether the computer is turned off or asleep. Dependence on third-party utilities to run the periodic maintenance scripts was significantly reduced in Tiger and Leopard. These utilities have limited or no functionality with Snow Leopard and should not be installed.
OS X automatically defragments files smaller than 20 MB, so unless you have a disk full of very large files there's little need to defragment the hard drive. As for virus protection, there are few if any such animals affecting OS X. You can protect the computer easily using the freeware, open source virus protection software ClamXAV. Personally, I would avoid most commercial anti-virus software because of its potential for causing problems.
I would also recommend downloading the shareware utility TinkerTool System, which you can use for periodic maintenance such as removing old log files and archives, clearing caches, etc. Other utilities are also available, such as Onyx, Leopard Cache Cleaner, CockTail, and Xupport, for example.
For emergency repairs install the freeware utility AppleJack. If you cannot start up in OS X, you may be able to start in single-user mode, from which you can run AppleJack to perform a whole set of repair and maintenance routines from the command line. Note that AppleJack 1.5 is required for Leopard. AppleJack 1.6 is compatible with Snow Leopard.
When you install any new system software or updates be sure to repair the hard drive and permissions beforehand. I also recommend booting into safe mode before doing system software updates.
Get an external Firewire drive at least equal in size to the internal hard drive and make (and maintain) a bootable clone/backup. You can make a bootable clone using the Restore option of Disk Utility. You can also make and maintain clones with good backup software. My personal recommendations are (order is not significant):
Backuplist
Carbon Copy Cloner
Data Backup
Deja Vu
iBackup
JaBack
Silver Keeper
MimMac
Retrospect
Super Flexible File Synchronizer
SuperDuper!
Synchronize Pro! X
SyncTwoFolders
Synk Pro
Synk Standard
Tri-Backup
Visit The XLab FAQs and read the FAQs on maintenance, optimization, virus protection, and backup and restore.
Additional suggestions will be found in Mac Maintenance Quick Assist.
Referenced software can be found at CNet Downloads or MacUpdate.
Adding more RAM couldn't hurt either, if your model supports more than 2 GB.

Similar Messages

  • Question about a view I created to fix the performance issues

    Dear all,
    I have this interesting problem. I created a view to help solve a certain performance issue I have been having with my query;
    see view below
    create or replace view view_test as
    Select trunc(c.close_date, 'YYYY-MM-DD') as close_date, t.names
    from tbl_component c, tbl_joborder t
    where c.t_id = t.p_id
    and c.type = 'C'
    group by trunc(c.close_date, 'YYYY-MM-DD'), t.names
    ;
    and I tried testing the view out using the following syntax:
    select k.close_date, k.names from view_test k
    where k.names = 'Kay'
    and k.close_date between to_date('2010-01-01', 'YYYY-MM-DD') and to_date('2010-12-31', 'YYYY-MM-DD')
    However, I am getting the error message below:
    ORA-01898: too many precision specifiers
    I have googled it and tried a lot of things online, but unfortunately I can't fix the problem and I don't know why.

    Hi,
    Instead of
    trunc(c.close_date, 'YYYY-MM-DD')
    say
    trunc (c.close_date, 'DD')
    Oracle apparently gets confused when you ask it to truncate (or round) to two or more different units.
    If it were up to me, I would allow it, since truncating to the year implies truncating to the month and to the day (not to mention hours, minutes and seconds), but there may be reasons why that would be a pain to implement. (Maybe because there are units, such as 'WW' and 'IW', that are not neat sub-divisions of larger units.)
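    For reference, here is the view rewritten with a single-unit format model (a sketch based on the tables named in the original post). Note that the TO_DATE calls in the test query are fine as they are; only TRUNC and ROUND are restricted to a single format element such as 'DD', 'MM', or 'YYYY'.
    create or replace view view_test as
    select trunc(c.close_date, 'DD') as close_date,   -- midnight of the same day
           t.names
    from tbl_component c, tbl_joborder t
    where c.t_id = t.p_id
    and c.type = 'C'
    group by trunc(c.close_date, 'DD'), t.names;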

  • Interested in performance issues? Read this! If you can explain it, you're a master Jedi!

    This is the question we will try to answer...
    What is the hardware bottleneck of Adobe Premiere Pro CS6?
    I used PPBM5 as a benchmark testing template.
    All the data and logs have been collected using performance counters.
    First of all, a description of my computer...
    Operating System
    Microsoft Windows 8 Pro 64-bit
    CPU
    Intel Xeon E5 2687W @ 3.10GHz
    Sandy Bridge-EP/EX 32nm Technology
    RAM
    Corsair Dominator Platinum 64.0 GB DDR3
    Motherboard
    EVGA Corporation Classified SR-X
    Graphics
    PNY Nvidia Quadro 6000
    EVGA Nvidia GTX 680   // Yes, I created bench stats for both cards
    Hard Drives
    16.0GB Romex RAMDISK (RAID)
    556GB LSI MegaRAID 9260-8i SATA3 6Gb/s, 5 disks with FastPath chip installed (RAID 0)
    I have other RAIDs installed, but they are not relevant for the present post...
    PSU
    Corsair 1000 Watts
    After many days of tests, I want to share my results with the community and comment on them.
    CPU Introduction
    I tested my CPU and pushed it to maximum speed to understand where the limit is and whether I can reach it, and I logged all the results precisely in a graph (see picture 1).
    Intro : I tested my Xeon E5-2687W (8 hyperthreaded cores - 16 threads) to see whether programs can use all of it.  I used Prime95 to get the result.  // I know this seems ordinary, but you will understand soon...
    The result : Yes, I can get 100% of my CPU with one program using 20 threads in parallel.  The CPU gives everything it can!
    Comment : I put the 3 I/Os (CPU, disk, RAM) on the graph of my computer during the test...
    (picture 1)
    Disk Introduction
    I tested my disk and pushed it to maximum speed to understand where the limit is, and I logged all the results precisely in a graph (see picture 2).
    Intro : I tested my 556GB RAID 0 (LSI MegaRAID 9260-8i SATA3 6Gb/s, 5 disks with FastPath chip installed) to see whether I can reach the maximum % disk usage (0% idle time).
    The result : As you can see in picture 2, yes, I can get the maximum out of my drive at ~1.2 Gb/sec read/write, steady!
    Comment : I put the 3 I/Os (CPU, disk, RAM) on the graph of my computer during the test to see the impact of transferring many GB of data over ~10 sec...
    (picture 2)
    Now I know my limits!  It's time to go deeper into the subject!
    PPBM5 (H.264) Result
    I rendered the sequence (H.264) using Adobe Media Encoder.
    The result :
    My CPU is not used at 100%; it hovers around 50%.
    My disk is totally idle!
    All processes are idle except the Adobe Media Encoder process.
    The transfer rate looks like a wave (up and down).  Probably caused by (encode time....  write.... encode time.... write...)  // It's ok, ~5Mb/sec transfer rate!
    CPU power management gives 100% of the clock to the CPU during the encoding process (it's ok, the clock is stable during the process).
    RAM: more than enough!  39 GB of RAM free after the test!  // Excellent
    ~65 threads opened by Adobe Media Encoder (good, threads are the sign that the program tries to use many cores!)
    GPU load on the card also looks like a wave (up and down).  ~40% GPU usage during the encoding process.
    GPU RAM: 1.2 GB used (but no problem with the GTX 680, and no problem with the Quadro 6000 and its 6 GB of RAM!)
    Comment/Question : The CPU is free (50%), the disks are free (99%), the GPU is free (60%), the RAM is free (62%); my computer is not pushed to its limit during the encoding process.  Why????  Is there some time delay in the encoding process?
    Other : The Quadro 6000 & GTX 680 give the same result!
    (picture 3)
    PPBM5 (Disk Test) Result (RAID LSI)
    I rendered the sequence (Disk Test) using Adobe Media Encoder on my RAID 0 LSI disk.
    The result :
    My CPU is not used at 100%
    My disk waves up and down again, but is far, far from the limit!
    All processes are idle except the Adobe Media Encoder process.
    The transfer rate waves up and down again.  Probably caused by (buffering time....  write.... buffering time.... write...)  // It's ok, ~375Mb/sec peak transfer rate!  Easy!
    CPU power management gives 100% of the clock to the CPU during the encoding process (it's ok, the clock is stable during the process).
    RAM: more than enough!  40.5 GB of RAM free after the test!  // Excellent
    ~48 threads opened by Adobe Media Encoder (good, threads are the sign that the program tries to use many cores!)
    GPU load on the card = 0 (the GPU is irrelevant for this kind of encoding).
    GPU RAM: 400Mb used (not used for encoding).
    Comment/Question : The CPU is free (65%), the disks are free (60%), the GPU is free (100%), the RAM is free (63%); my computer is not pushed to its limit during the encoding process.  Why????  Is there some time delay in the encoding process?
    (picture 4)
    PPBM5 (Disk Test) Result (Direct in RAMDrive)
    I rendered the same sequence (Disk Test) using Adobe Media Encoder directly in my RAM drive.
    Comment/Question : Look at the transfer rate in picture 5.  It's exactly the same speed as with my RAID 0 LSI controller.  Impossible!  Look in the same picture at the transfer rate I can reach with the RAM drive (> 3.0 Gb/sec steady), and I don't go under 30% of disk usage.  The CPU is idle (70%), the disk is idle (100%), the GPU is idle (100%) and the RAM is free (63%).  // This kind of result leaves me REALLY confused.  It smells like a bug and a big problem with hardware and I/O usage in CS6!
    (picture 5)
    PPBM5 (MPEG-DVD) Result
    I rendered the sequence (MPEG-DVD) using Adobe Media Encoder.
    The result :
    My CPU is not used at 100%
    My disk is totally idle!
    All processes are idle except the Adobe Media Encoder process.
    The transfer rate waves up and down again.  Probably caused by (encoding time....  write.... encoding time.... write...)  // It's ok, ~2Mb/sec transfer rate!  A real joke!
    CPU power management gives 100% of the clock to the CPU during the encoding process (it's ok, the clock is stable during the process).
    RAM: more than enough!  40 GB of RAM free after the test!  // Excellent
    ~80 threads opened by Adobe Media Encoder (a lot of threads, but that's ok in multi-threaded apps!)
    GPU load on the card = 100% (this uses the maximum of my GPU).
    GPU RAM: 1Gb used.
    Comment/Question : The CPU is free (70%), the disks are free (98%), the GPU is loaded (MAX), the RAM is free (63%); my computer is pushed to its limit during the encoding process for the GPU only.  So, for this kind of encoding, the speed limit is set by the slowest I/O (the video card's GPU).
    Other : The Quadro 6000 is slower than the GTX 680 for this kind of encoding (~20 s slower than the GTX).
    (picture 6)
    Encoding single clip FULL HD AVCHD to H.264 Result (Premiere Pro CS6)
    You can see the result in the picture.
    Comment/Question : The CPU is free (55%), the disks are free (99%), the GPU is free (90%), the RAM is free (65%); my computer is not pushed to its limit during the encoding process.  Why????   Adobe Premiere seems to have some bug with thread management.  My hardware is idle!  I understand AVCHD can be very difficult to decode, but where is the waste?  My computer is willing, but the software is not!
    (picture 7)
    Render composition using 3D Raytracer in After Effects CS6
    You can see the result in the picture.
    Comment : The GPU seems to be the bottleneck when using After Effects.  The CPU is free (99%), the disks are free (98%), memory is free (60%), and it depends on the settings and type of project.
    Other : The Quadro 6000 & GTX 680 give the same rendering time for the composition.
    (picture 8)
    Conclusion
    There is nothing you can do (I think) with CS6 to get better performance right now.  The GTX 680 is the best consumer-grade card and the Quadro 6000 is the best professional card.  Both cards give really similar results (I will probably return my GTX 680 since I don't really get any better performance).  I haven't used a Tesla card with my Quadro, but right now neither Premiere Pro nor After Effects uses multiple GPUs.  I tried to use both cards together (GTX & Quadro), but After Effects gives priority to the slower card (in this case, the GTX 680).
    Premiere Pro, I'm speechless!  Premiere Pro is not able to get the maximum performance out of my computer.  Not just 10% or 20% short, but 60% on average.  I'm a programmer; multi-threaded apps are difficult to manage and I can understand Adobe's programmers.  But if anybody has comments about this post, tricks or any kind of solution, please comment.  It seems to be a bug...
    Thank you.

    Patrick,
    I can't explain everything, but let me give you some background as I understand it.
    The first issue is that CS6 has a far less efficient internal buffering or caching system than CS5/5.5. That is why the MPEG encoding in CS6 is roughly 2-3 times slower than the same test with CS5. There is some 'under-the-hood' processing going on that causes this significant performance loss.
    The second issue is that AME does not handle regular memory and inter-process memory very well. I have described this here: Latest News
    As to your test results, there are some other noteworthy things to mention. 3D ray tracing in AE is not very good at using all CUDA cores. In fact it is lousy; it only uses very few cores, and the threading is pretty bad and does not use the video card's capabilities effectively. Whether that is a driver issue with nVidia or an Adobe issue, I don't know, but whichever way you turn it, the end result is disappointing.
    The overhead AME carries in our tests is something we are looking into and the next test will only use direct export and no longer the AME queue, to avoid some of the problems you saw. That entails other problems for us, since we lose the capability to check encoding logs, but a solution is in the works.
    You see very low GPU usage during the H.264 test, since there are only very few accelerated parts in the timeline, in contrast to the MPEG2-DVD test, where there is rescaling going on and that is CUDA accelerated. The disk I/O test suffers from the problems mentioned above, and that is the reason my own Disk I/O result is only 33 seconds with the current test; but when I extend the duration of that timeline to 3 hours, the direct export method gives me 22 seconds, although the amount of data to be written, 37,092 MB, has increased threefold. That is an effective write speed of 1,686 MB/s.
    There are a number of performance issues with CS6 that Adobe is aware of, but whether they can be solved and in what time, I haven't the faintest idea.
    Just my $ 0.02

  • RE: Case 59063: performance issues w/ C TLIB and Forte3M

    Hi James,
    Could you give me a call, I am at my desk.
    I had meetings all day and couldn't respond to your calls earlier.
    -----Original Message-----
    From: James Min [mailto:jminbrio.forte.com]
    Sent: Thursday, March 30, 2000 2:50 PM
    To: Sharma, Sandeep; Pyatetskiy, Alexander
    Cc: sophiaforte.com; kenlforte.com; Tenerelli, Mike
    Subject: Re: Case 59063: performance issues w/ C TLIB and Forte 3M
    Hello,
    I just want to reiterate that we are very committed to working on
    this issue, and that our goal is to find out the root of the problem. But
    first I'd like to narrow down the avenues by process of elimination.
    Open Cursor is something that is commonly used in today's RDBMS. I
    know that you must test your query in ISQL using some kind of execute
    immediate, but Sybase should be able to handle an open cursor. I was
    wondering if your Sybase expert commented on the fact that the server is
    not responding to a commonly used command like 'open cursor'. According to
    our developer, we are merely following the API from Sybase, and open cursor
    is not something that particularly slows down a query for several minutes
    (except maybe the very first time). The logs show that Forte is waiting for
    a status from the DB server. Actually, using prepared statements and open
    cursor ends up being more efficient in the long run.
    Some questions:
    1) Have you tried to do a prepared statement with open cursor in your ISQL
    session? If so, did it have the same slowness?
    2) How big is the table you are querying? How many rows are there? How many
    are returned?
    3) When there is a hang in Forte, is there disk-spinning or CPU usage in
    the database server side? On the Forte side? Absolutely no activity at all?
    We actually have a Sybase set-up here, and if you wish, we could test out
    your database and Forte PEX here. Since your queries seem to be running
    off of only one table, this might be the best option, as we could look at
    everything here, in house. To do this:
    a) BCP out the data into a flat file. (character format to make it portable)
    b) we need a script to create the table and indexes.
    c) the Forte PEX file of the app to test this out.
    d) the SQL statement that you issue in ISQL for comparison.
    If the situation warrants, we can give a concrete example of
    possible errors/bugs to a developer. Dial-in is still an option, but to be
    able to look at the TOOL code, database setup, etc. without the limitations
    of dial-up may be faster and more efficient. Please let me know if you can
    provide this, as well as the answers to the above questions, or if you have
    any questions.
    Regards,
    At 08:05 AM 3/30/00 -0500, Sharma, Sandeep wrote:
    James, Ken:
    FYI, see attached response from our Sybase expert, Dani Sasmita. She has
    already tried what you suggested and results are enclosed.
    ++
    Sandeep
    -----Original Message-----
    From: SASMITA, DANIAR
    Sent: Wednesday, March 29, 2000 6:43 PM
    To: Pyatetskiy, Alexander
    Cc: Sharma, Sandeep; Tenerelli, Mike
    Subject: Re: FW: Case 59063: Select using LIKE has performance
    issues
    w/ CTLIB and Forte 3M
    We did that trick already.
    When it is hanging, I can see what it is doing.
    It is doing OPEN CURSOR, but the exact statement of the cursor
    it is trying to open is not clear.
    When we run the query directly to Sybase, not using Forte, it is clearly
    not opening any cursor.
    And running it directly to Sybase many times, the response is always
    consistently fast.
    It is just when the query runs from Forte to Sybase, it opens a cursor.
    But again, in the Forte code, Alex is not using any cursor.
    In trying to capture the query, we even tried to audit any statement coming
    to Sybase. Same thing, just open cursor. No cursor declaration anywhere.
    ==============================================
    James Min
    Technical Support Engineer - Forte Tools
    Sun Microsystems, Inc.
    1800 Harrison St., 17th Fl.
    Oakland, CA 94612
    james.min@sun.com
    510.869.2056
    ==============================================
    Support Hotline: 510-451-5400
    CUSTOMERS open a NEW CASE with Technical Support:
    http://www.forte.com/support/case_entry.html
    CUSTOMERS view your cases and enter follow-up transactions:
    http://www.forte.com/support/view_calls.html

    Earthlink wrote:
    Contrary to my understanding, the with_pipeline procedure runs 6 times slower than the legacy no_pipeline procedure. Am I missing something?
    Well, we're missing a lot here.
    Like:
    - a database version
    - how did you test
    - what data do you have, how is it distributed, indexed
    and so on.
    If you want to find out what's going on then use a TRACE with wait events.
    All necessary steps are explained in these threads:
    HOW TO: Post a SQL statement tuning request - template posting
    http://oracle-randolf.blogspot.com/2009/02/basic-sql-statement-performance.html
    Another nice one is RUNSTATS:
    http://asktom.oracle.com/pls/asktom/ASKTOM.download_file?p_file=6551378329289980701
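    To give an idea, here is a minimal sketch of enabling an extended SQL trace with wait events for the current session (Oracle 10g and later, using the standard DBMS_MONITOR package; the tracefile identifier below is just an example name):
    -- tag the trace file so it is easy to find in user_dump_dest
    alter session set tracefile_identifier = 'pipeline_test';
    -- extended SQL trace (the 10046 level 8/12 equivalent): wait events plus bind values
    exec dbms_monitor.session_trace_enable(waits => TRUE, binds => TRUE)
    -- ... run the with_pipeline and the no_pipeline procedures here ...
    exec dbms_monitor.session_trace_disable
    -- then format the raw trace with tkprof and compare the wait-event sections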

  • Performance Issues with crystal reports 11 - Critical

    Post Author: DJ Gaba
    CA Forum: Exporting
    I have migrated from crystal reports version 8 to version 11.
    I am experiencing some performance issues with reports when they are displayed in version 11.
    Reports that were taking 2 seconds in version 8 are now taking 4-5 seconds in version 11.
    I am using VB6 to export my report file to PDF.
    Thanks 

    Post Author: synapsevampire
    CA Forum: Exporting
    Please don't post to multiple forums on the site with the same question.
    I responded to your other post.
    -k

  • Performance issues with schema registration, select & insert queries

    I am facing performance issues with schema registration and with select & insert queries on version 10.2.0.3. It is taking around 45 minutes to register the schema, and around 5 minutes to insert a single document into XML DB, whereas it was taking less than a minute to insert a single document into the XML DB of version 9.2.0.6. I would like to know the cause and a solution to resolve this issue. Please help me out on this, as it is very urgent for me.

    Since it appears that this is an XML DB specific question, you're probably better off posting in the XML DB forum. The folks over there have much more experience with the ins and outs of that particular product.
    Justin

  • Performance Issue in Dashboard using SAP BW NetWeaver Connection

    Hi Experts,
    We developed a dashboard based on BW queries; however, it is taking a considerable amount of time to execute.
    We are using BO Dashboards SP2, BO 4.1 and BI system 7.01.
    We are looking for a few clarifications on SAP BO Xcelsius Dashboards.
    Though we know the limitations on component count and data volume that can badly affect dashboard performance, we do have a requirement to handle huge data volumes and multiple components. Our source data lies in the SAP BI system and we are using BICS connectivity / WebI with QAAWS for updating data in the BO dashboard.
    Our requirement is quite complex: we need to be in a position to meet user expectations for a complete view of 75 KPIs in a single dashboard.
    We have scenarios like YTD, QTD and MTD and other complex calculations in the BEx query.
    Here are my questions,
    Is there any way to provide complete functionality using large data sets to the users with the current architecture without any performance issues?
    Are there any third party tools which can be used with BO Dashboard for the performance improvement and handling huge volumes?
    Do you suggest any alternate solution for complete functionality?
    Many thanks for your inputs in advance!
    Regards
    Venkat

    Hi Rajesh,
    Thank you so much for your response.
    I have tried the way you suggested. But my issue here is that I have a prompt in my WebI report based on a month selection, and it is a single-valued prompt.
    So I was able to schedule my report only for one month, whereas my dashboard needs to show values for one year of data based on the months the user selects in the dashboard.
    Though I use the WebI instance data in the dashboard, I am getting only one month's value; also, I am not able to associate the selected month with the WebI instance in the dashboard.
    Is there any option to schedule the WebI report for different months so that the dashboard picks up the 12 monthly instances and the combo box selection in the dashboard is associated with them?
    Please help me with your thoughts.

  • Performance issue with report

    Hello
    I am making a change to an existing custom report. I have to pull all the orders except CANCELLED orders for the parameters passed by the user.
    I made the change as FLOW_STATUS_CODE <> 'CANCELLED' in the RDF. The not-equal is causing performance issues... it is taking a lot of time to complete.
    Can anyone suggest what would be best to use in place of the not-equal?
    Thanks

    Is there an index on column FLOW_STATUS_CODE?
    If a query has performance issues, run it through EXPLAIN PLAN in SQL*Plus and check the execution plan.
    set pages 999
    set lines 400
    set trimspool on
    spool explain.lst
    explain plan for
    <your statement>;
    select * from table(dbms_xplan.display);
    spool off
    For optimization questions you'd better go to the SQL forum.
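    As an aside, one rewrite worth testing (a sketch only; the table, columns and status values below are placeholders, substitute the ones your report actually uses): enumerating the statuses you do want with IN gives the optimizer an access predicate it can use for an index range scan (via INLIST ITERATOR), which a <> predicate cannot provide. If the vast majority of orders are not cancelled, though, a full scan may be the correct plan anyway and the <> is not the real problem.
    -- sketch: 'orders_tab', the columns and the status values are placeholders
    select o.order_number, o.flow_status_code
    from orders_tab o
    where o.flow_status_code in ('ENTERED', 'BOOKED', 'CLOSED')   -- every status except CANCELLED
    and o.creation_date between :p_date_from and :p_date_to;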

  • Strange performance issue in bex report

    Hello Experts,
    I have a performance issue on my bex report.
    I'm running the report with the selection criteria below and getting a 'too much data' error.
    Country: equals EMEA
    Category: not equals 13
    Date: 02/2010 to 12/2010.
    But when I run the report for smaller date ranges, the number of records does not exceed 13,000.
    02.2010 - 06.2010 - 6,555 rows
    07.2010 - 09.2010 - 3,671 rows
    10.2010 - 12.2010 - 2,780 rows
    I know Excel can't fit more than 65,000 records, but I'm expecting 13,000 records for my wide date range, which Excel can easily fit.
    Any ideas on this one will be appreciated.
    Regards,
    Brahma Reddy

    Hi,
    For Question 1:
    In Query Designer go to the Query Properties and select the tab "Variable Sequence"; here you can set the order of the variables as per your requirement.
    For Question 2:
    There is an option "Hide Repeated Key Values"; if you uncheck this option, you will have the values in each row even when the material values are the same.
    Note: if you are viewing the report on the web or as a WAD report, you need to make the same changes in the web template as well, because the settings in Query Designer will be overridden when you run the query on the web.
    Hope this helps.
    Regards,
    Rk.

  • InfoSet - Performance and Design Question

    Hello BW Gurus,
    I have a question regarding the infoset.
    We have a backlog report designed as follows:
    sd_c03 - with around 125 characteristics and KFs - the daily load will have around 10K records at a time.
    custom cube - say zcust01 - with around 50 characteristics and KFs - daily full load of around 15K records.
    My questions:
    1. We have an InfoSet on top of these 2 cubes for reporting. Can we use an InfoSet for reporting? I have used InfoSets for master data and DSO reporting. Do you see any performance issue with using the InfoSet instead of a MultiProvider? Is there any alternative to reporting from the InfoSet?
    2. Also, I executed the SAP_INFOCUBE_DESIGNS program for the above cubes and some dimensions are more than 25%, like 58%, 75%, even 102%. So this has to be fixed for sure. Is that correct? If we don't change the design, what will be the consequences?
    Please advise. We are in development and the objects are not moved to production yet. Again, thanks for your help in advance.
    Senthil

    Hi......
    1. We have an InfoSet on top of these 2 cubes for reporting. Can we use an InfoSet for reporting? I have used InfoSets for master data and DSO reporting. Do you see any performance issue with using the InfoSet instead of a MultiProvider? Is there any alternative to reporting from the InfoSet?
    MultiProviders are a great tool that can be used freely to "combine" data without adding much overhead to the system. You may need to combine data of different applications (like Plan and Actual). Another good use is from a data modeling point of view... you can enable "partitioning" of the data by using separate cubes for separate years of data. This way the cubes are more manageable, and with an InfoProvider on top the data can be combined for reports if required (instead of having one huge cube with all the data).
    Also, using a MultiProvider as an 'umbrella' InfoProvider, you can easily make changes to the underlying InfoProviders with minimum impact on the queries.
    You should go for a MultiProvider when the requirement is to report on combined data coming from different InfoProviders.
    MultiProviders combine data using a union operation, which means all values of the underlying InfoProviders are taken into consideration. An InfoSet, on the other hand, works on the principle of a join. A join forms the intersection, and only the common data from the underlying InfoProviders is considered. In terms of performance the InfoSet is better, since it fetches fewer records.
    Check this:
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/2f5aa43f-0c01-0010-a990-9641d3d4eef7
    2. Also, I executed the SAP_INFOCUBE_DESIGNS program for the above cubes and some dimensions are more than 25%, like 58%, 75%, even 102%. So this has to be fixed for sure. Is that correct? If we don't change the design, what will be the consequences?
    If dimensions are too large it affects query performance; it's better to change the design.
    Hope this helps.
    Regards,
    Debjani.........

  • Performance issue in browsing SSAS cube using Excel for first time after cube refresh

    Hello Group Members,
    This is a continuation of my earlier blog question -
    https://social.msdn.microsoft.com/Forums/en-US/a1e424a2-f102-4165-a597-f464cf03ebb5/cache-and-performance-issue-in-browsing-ssas-cube-using-excel-for-first-time?forum=sqlanalysisservices
    As that thread is marked as answered but my issue is not resolved, I am creating a new thread.
    I am facing a cache and performance issue the first time I try to open an SSAS cube connection using Excel (using Data tab -> From Other Sources -> From Analysis Services) after the daily cube refresh. On end users' systems (8 GB RAM but around 4 GB available RAM), the first time, it takes 10 minutes to open the cube. From the next run onwards, it opens up quickly within 10 secs.
    We have a daily ETL process running on high-end servers. The configuration of the dedicated SSAS cube server is 8 cores, 64 GB RAM. In total we have 4 cube DBs, of which 3 get a full cube refresh and 1 an incremental refresh. We have seen that after the daily cube refresh it takes 10-odd minutes to open the cube on end users' systems. From the next time onwards, it opens up really fast within 10 secs. After the cube refresh, on server systems (32 GB RAM, around 4 GB available RAM), it takes 2-odd minutes to open the cube.
    Is there any way we could reduce the time taken for the first attempt?
    As mentioned in my previous thread, we have already implemented cube cache warming, but there is no improvement.
    Currently, the cumulative size of all 4 cube DBs is more than 9 GB in production, each cube DB having 4 individual cubes on average, with the highest cube DB size being 3.5 GB. Now, the question is how Excel works with an SSAS cube after the daily cube refresh.
    Does Excel create a cache of the schema and data each time the cube is refreshed, and in doing so does it need to download the cube schema into Excel's memory? Downloading the schema and data of each cube database from server to client will take significant time, depending on the bandwidth of the network and connection.
    Is it in any way dependent on the client system's RAM? Today the biggest cube DB size is 3.5 GB; tomorrow it will be 5-6 GB. Though the client system RAM is 8 GB, the available or free RAM is around 4 GB. So what will happen then?
    Best Regards, Arka Mitra.

    Could you run the following two DMV queries filling in the name of the cube you're connecting to. Then please post back the row count returned from each of them (by copying them into Excel and counting the rows).
    I want to see if this is an issue I've run across before with thousands of dimension attributes and MDSCHEMA_CUBES performance.
    select [HIERARCHY_UNIQUE_NAME]
    from $system.mdschema_hierarchies
    where CUBE_NAME = 'YourCubeName'
    select [LEVEL_UNIQUE_NAME]
    from $system.mdschema_levels
    where CUBE_NAME = 'YourCubeName'
    Also, what version of Analysis Services is it? If you connect Object Explorer in Management Studio to SSAS, what's the exact version number it says on the top server node?
    http://artisconsulting.com/Blogs/GregGalloway

  • Performance issues related to logging (ForceSingleTraceFile option)

    Dear SDN members,
    I have a question about logging.
    I would like to place the logs/traces for every application in different log files. To do this, you have to set the ForceSingleTraceFile option to NO (in the config tool).
    But in an SAP presentation named "SAP Web Application Server 6.40: SAP Logging and Tracing API", it is stated:
    - All traces by default go to the default trace file.
         - Good for performance
              - On production systems, this is a must!!!
    - Hard to find your trace messages
    - Solution: Configure development systems to pipe traces and logs for applications to their own specific trace file
    But I also want the logs/traces at our customers (production systems) in separate files. So my questions are:
    What performance issues do we face if we set the ForceSingleTraceFile option to NO at our customers?
    and
    If we set ForceSingleTraceFile to NO, will the logs/traces of the SAP applications also go to different files? If so, I can imagine that it will be difficult to find the logs of the different SAP applications.
    I hope that someone can clarify the working of the ForceSingleTraceFile setting.
    Kind regards,
    Marinus Geuze

    Dear Marinus,
    The performance issues with extensive logging are related to high memory usage (for the concatenation/generation of the messages which are written to the log files) and, as a result, increased garbage collection frequency, as well as high disk I/O and CPU overhead for the actual logging.
    Writing to the same trace file, if logging is extensive, can become a bottleneck.
    Anyway, it is not related to whether you write the logs to the default trace or to a separate location. I believe that the recommendation in the documentation is just about using the standard logging APIs of the SAP Java server, because they are well optimized.
    Best regards,
    Sylvia

  • SQL Performance issue: Using user defined function with group by

    Hi Everyone,
    I'm new here and could really use some help with a weird performance issue. I hope this is the right forum for SQL performance issues.
    Well, ok, I created a function for converting a date from the GMT time zone to a specified time zone.
    CREATE OR REPLACE FUNCTION I3S_REP_1.fnc_user_rep_date_to_local (date_in IN date, tz_name_in IN VARCHAR2) RETURN date
    IS
    tz_name VARCHAR2(100);
    date_out date;
    BEGIN
    SELECT
    to_date(to_char(cast(from_tz(cast( date_in AS TIMESTAMP),'GMT')AT
    TIME ZONE (tz_name_in) AS DATE),'dd-mm-yyyy hh24:mi:ss'),'dd-mm-yyyy hh24:mi:ss')
    INTO date_out
    FROM dual;
    RETURN date_out;
    END fnc_user_rep_date_to_local;
    The following statement is just an example; the real statement is much more complex. I select some date values from a table and aggregate a little.
    select
    stp_end_stamp,
    count(*) noi
    from step
    where
    stp_end_stamp
    BETWEEN
    to_date('23-05-2009 00:00:00','dd-mm-yyyy hh24:mi:ss')      
    AND
    to_date('23-07-2009 00:00:00','dd-mm-yyyy hh24:mi:ss')
    group by
    stp_end_stamp
    This statement selects ~70,000 rows and needs ~70 ms.
    If I use the function, it selects the same number of rows ;-) but takes ~4 sec...
    select
    fnc_user_rep_date_to_local(stp_end_stamp,'Europe/Berlin'),
    count(*) noi
    from step
    where
    stp_end_stamp
    BETWEEN
    to_date('23-05-2009 00:00:00','dd-mm-yyyy hh24:mi:ss')      
    AND
    to_date('23-07-2009 00:00:00','dd-mm-yyyy hh24:mi:ss')
    group by
    fnc_user_rep_date_to_local(stp_end_stamp,'Europe/Berlin')
    I understand that the DB has to execute the function for each row.
    But if I execute the following statement, it takes only ~90ms ...
    select
    fnc_user_rep_date_to_gmt(stp_end_stamp,'Europe/Berlin','ny21654'),
    noi
    from (
    select
    stp_end_stamp,
    count(*) noi
    from step
    where
    stp_end_stamp
    BETWEEN
    to_date('23-05-2009 00:00:00','dd-mm-yyyy hh24:mi:ss')      
    AND
    to_date('23-07-2009 00:00:00','dd-mm-yyyy hh24:mi:ss')
    group by
    stp_end_stamp
    )
    The execution plan for all three statements is EXACTLY the same!!!
    Usually I would say that I just use the third statement and the world is in order. BUT I'm working on a BI project with a tool called Business Objects that generates the SQL, so my hands are tied and I can't make the tool generate the SQL as a subselect.
    My questions are:
    Why is the second statement sooo much slower than the third?
    and
    How can I force the optimizer to do whatever it is doing to make the third statement so fast?
    I would really appreciate some help on this really weird issue.
    Thanks in advance,
    Andi

    Hi,
    The execution plan for all three statements is EXACTLY the same!!!
    Not exactly. The plans are the same - true. But they use slightly different approaches to calling the function. See:
    drop table t cascade constraints purge;
    create table t as select mod(rownum,10) id, cast('x' as char(500)) pad from dual connect by level <= 10000;
    exec dbms_stats.gather_table_stats(user, 't');
    create or replace function test_fnc(p_int number) return number is
    begin
        return trunc(p_int);
    end;
    explain plan for select id from t group by id;
    select * from table(dbms_xplan.display(null,null,'advanced'));
    explain plan for select test_fnc(id) from t group by test_fnc(id);
    select * from table(dbms_xplan.display(null,null,'advanced'));
    explain plan for select test_fnc(id) from (select id from t group by id);
    select * from table(dbms_xplan.display(null,null,'advanced'));
    Output:
    PLAN_TABLE_OUTPUT
    Plan hash value: 47235625
    | Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |    10 |    30 |   162   (3)| 00:00:02 |
    |   1 |  HASH GROUP BY     |      |    10 |    30 |   162   (3)| 00:00:02 |
    |   2 |   TABLE ACCESS FULL| T    | 10000 | 30000 |   159   (1)| 00:00:02 |
    Query Block Name / Object Alias (identified by operation id):
       1 - SEL$1
       2 - SEL$1 / T@SEL$1
    Outline Data
      /*+
          BEGIN_OUTLINE_DATA
          FULL(@"SEL$1" "T"@"SEL$1")
          OUTLINE_LEAF(@"SEL$1")
          ALL_ROWS
          OPTIMIZER_FEATURES_ENABLE('10.2.0.4')
          IGNORE_OPTIM_EMBEDDED_HINTS
          END_OUTLINE_DATA
    Column Projection Information (identified by operation id):
       1 - (#keys=1) "ID"[NUMBER,22]
       2 - "ID"[NUMBER,22]
    34 rows selected.
    SQL>
    Explained.
    SQL>
    PLAN_TABLE_OUTPUT
    Plan hash value: 47235625
    | Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |    10 |    30 |   162   (3)| 00:00:02 |
    |   1 |  HASH GROUP BY     |      |    10 |    30 |   162   (3)| 00:00:02 |
    |   2 |   TABLE ACCESS FULL| T    | 10000 | 30000 |   159   (1)| 00:00:02 |
    Query Block Name / Object Alias (identified by operation id):
       1 - SEL$1
       2 - SEL$1 / T@SEL$1
    Outline Data
      /*+
          BEGIN_OUTLINE_DATA
          FULL(@"SEL$1" "T"@"SEL$1")
          OUTLINE_LEAF(@"SEL$1")
          ALL_ROWS
          OPTIMIZER_FEATURES_ENABLE('10.2.0.4')
          IGNORE_OPTIM_EMBEDDED_HINTS
          END_OUTLINE_DATA
    Column Projection Information (identified by operation id):
       1 - (#keys=1) "TEST_FNC"("ID")[22]
       2 - "ID"[NUMBER,22]
    34 rows selected.
    SQL>
    Explained.
    SQL> select * from table(dbms_xplan.display(null,null,'advanced'));
    PLAN_TABLE_OUTPUT
    Plan hash value: 47235625
    | Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |    10 |    30 |   162   (3)| 00:00:02 |
    |   1 |  HASH GROUP BY     |      |    10 |    30 |   162   (3)| 00:00:02 |
    |   2 |   TABLE ACCESS FULL| T    | 10000 | 30000 |   159   (1)| 00:00:02 |
    Query Block Name / Object Alias (identified by operation id):
       1 - SEL$F5BB74E1
       2 - SEL$F5BB74E1 / T@SEL$2
    Outline Data
      /*+
          BEGIN_OUTLINE_DATA
          FULL(@"SEL$F5BB74E1" "T"@"SEL$2")
          OUTLINE(@"SEL$2")
          OUTLINE(@"SEL$1")
          MERGE(@"SEL$2")
          OUTLINE_LEAF(@"SEL$F5BB74E1")
          ALL_ROWS
          OPTIMIZER_FEATURES_ENABLE('10.2.0.4')
          IGNORE_OPTIM_EMBEDDED_HINTS
          END_OUTLINE_DATA
    Column Projection Information (identified by operation id):
       1 - (#keys=1) "ID"[NUMBER,22]
       2 - "ID"[NUMBER,22]
    37 rows selected.
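    If the generated SQL cannot be wrapped in a subselect, one thing worth testing (a sketch, not a guaranteed fix) is declaring the function DETERMINISTIC, which tells Oracle that the same inputs always produce the same output and allows it to reuse results for repeated values. How much that actually saves depends on the version (11g caches deterministic calls within a fetch, 10.2 far less), so measure it before relying on it.
    CREATE OR REPLACE FUNCTION I3S_REP_1.fnc_user_rep_date_to_local (date_in IN date, tz_name_in IN VARCHAR2)
    RETURN date DETERMINISTIC   -- same inputs always give the same result
    IS
    date_out date;
    BEGIN
    SELECT
    to_date(to_char(cast(from_tz(cast( date_in AS TIMESTAMP),'GMT')AT
    TIME ZONE (tz_name_in) AS DATE),'dd-mm-yyyy hh24:mi:ss'),'dd-mm-yyyy hh24:mi:ss')
    INTO date_out
    FROM dual;
    RETURN date_out;
    END fnc_user_rep_date_to_local;
    /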

  • How to get around a performance issue when dealing with a lot of data

    Hello All,
    This is an academic question really, I'm not sure what I'm going to do with my issue, but I have some options.  I was wondering if anyone would like to throw in their two cents on what they would do.
    I have a report; the users want to see all agreements and all conditions related to the updating of rebates and the affected invoices. From a technical perspective: ENT6038-KONV-KONP-KONA-KNA1. These are the tables I have to hit. The problem is that when they retroactively update rebate conditions they can hit thousands of invoices, which blossoms out to thousands of conditions... you see the problem. I simply have too much data to grab; it times out.
    I've tried everything around the code.  If you have a better way to get price conditions and agreement numbers off of thousands of invoices, please let me know what that is.
    I have a couple of options.
    1) Use shared memory to preload the data for the report.  This would work, but I won't know what data needs to be loaded until report run time. They put in a date. I simply can't preload everything. I don't like this option much. 
    2) Write a function module to do this work. When the user clicks on the button to get this particular data, it will launch the FM in the background and e-mail them the results. As you know, the background job won't time out. So far this is my favored option.
    Any other ideas?
    Oh... nope, BI is not an option, we don't have it. I know, I'm not happy about it. We do have a data warehouse, but the prospect of working with that group makes me wince.

    My two cents - firstly, I totally agree with Derick that it's probably a good idea to go back to the business and have them justify their requirement in regards to reporting and "whether any user can meaningfully process all those results in an aggregate". But having dealt with customers across industries over a long period of time, it would probably be a bit fanciful to expect them to change their requirements too much, as in my experience they neither understand (much) technology nor want to hear about technical limitations of a system, etc. They want what they want, if possible yesterday!
    So, about dealing with performance issues within ABAP: I'm sure you must already be using efficient programming techniques like hashed internal tables with unique keys, accessing table rows using field symbols and all that, but what I was going to suggest is to look at using [Extracts|http://help.sap.com/saphelp_nw04/helpdata/en/9f/db9ed135c111d1829f0000e829fbfe/content.htm]. I've had to deal with this a couple of times in the past when dealing with massive amounts of data, and I found it to be very efficient in regards to performance. A good point to remember when using Extracts is, and I quote from SAP Help, "The size of an extract dataset is, in principle, unlimited. Extracts larger than 500KB are stored in operating system files. The practical size of an extract is up to 2GB, as long as there is enough space in the filesystem."
    Hope this helps,
    Cheers,
    Sougata.

  • DB Performance issue

    Hi DB Gurus,
    Our application is inserting 60-70K records into a table in each transaction. When multiple sessions are open on this table, users face performance issues such as application response being too slow.
    Regarding this table:
    1.Size = 56424 Mbytes!
    2.Count = 188,858,094 rows!
    3.Years of data stored = 4 years
    4.Average growth = 10 million records per month, 120 million each year! (has grown 60 million since end of June 2007)
    5.Storage params = 110 extents, Initial=40960, Next=524288000, Min Extents=1, Max Extents=505
    6.There are 14 indexes on this table all of which are in use.
    7. Data is inserted through bulk insert
    8. DB: Oracle 10g
    The sheer size of this table (56G) and its rate of growth may be the culprits behind the performance issue. But to ascertain that, we need to dig out more facts so that we can decide conclusively how to nail this issue.
    So my questions are:
    1. What other facts can be collected to find out the root cause of bad performance?
    2. Looking at the given statistics, is there a way to resolve the performance issue - by using table partitioning or archiving or some other better way?
    We've already thought of dropping some indexes, but it looks difficult since they are used in reports based on this table (along with other tables).
    3. Any guess what else can be causing this issue?
    4. How many records per session can be inserted in a table? Is there any limitation?
    Thanks in advance!!

    Run STATSPACK and check what it says are the issues. Try and find the particular INSERT statement in the list of all SQL. Look at all the sections of the report, including block contention, which may show you are waiting for data blocks or index blocks, etc, or even things like latch contention too. Make sure you run it when the INSERT is happening during one of your busy periods.
    Given that you are using Oracle 10g, I assume you are using all the automatic settings now:
    o Local Tablespace Management
    o Automatic Segment Space Management
    o Automatic Undo Management
    If not, you should be. Prior to all this, Oracle always inserted into the last block in a table, which could become a bottleneck point. And space allocation of new blocks was also a problem. When these settings were introduced it alleviated most of these problems, and meant that Oracle could scale far better on such INSERT intensive workloads. If you are not using these for some reason or other, then you need to look at the number of FREELISTS you have on the table, and the setting of INITRANS.
    Also, how many columns does this table have? And how big is an average row. And what is your block size? You can get these from the data dictionary:
    select count (*) from user_tab_columns where table_name = '<tablename>' ;
    select avg_row_len from user_tables where table_name = '<tablename>' ;
    show parameter db_block_size
    Replace <tablename> with the name of your table, in uppercase.
    I ask because a very large row in a small data block will always fill the block quickly and cause new blocks to be allocated. If so, you may just have to live with this.
    And I would be suspicious about all 14 indexes being needed. Are they all single column indexes, or do you have any multi-column indexes? Do any of them share the same leading columns? Again, if you need all 14 indexes, then you must suffer the overhead of maintaining these indexes. But unless you have something like 50 columns in this table, I would guess that there is some overlap between these indexes.
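    A quick way to check for that overlap (a sketch using the USER_IND_COLUMNS dictionary view; as before, replace <tablename> with the name of your table, in uppercase):
    -- list the columns of every index on the table, in positional order,
    -- so indexes that share the same leading columns stand out
    select index_name, column_position, column_name
    from user_ind_columns
    where table_name = '<tablename>'
    order by index_name, column_position;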
    John
