Poor performance of LAG function in view

Hi,
I have a query containing a LAG function that runs super fast. I've created a view based on that exact query, but when I select from the view the performance is very slow. If I exclude the column that gets its value from the LAG function in the underlying query, the performance is the same as the original query. Does anybody know how I can get the view's performance to match the query's, and why the view is so much slower when it uses exactly the same SQL as the query?
Many thanks,
Johan

Thanks Rob,
The SQL in the view is not very complicated (I removed some of the columns to make it more readable). When I select all the columns from the view except prev_credit_balance, performance is great, but when I add the prev_credit_balance column, performance is slow. If I run the query from the CREATE VIEW statement below it is fast, and it includes the LAG function. I do select from a rather large table (but then, why doesn't the query give the same problems as the view?).
CREATE OR REPLACE VIEW CALL_MONITOR_VIEW AS
select TT.Name Transaction_Type,
       SUBSTR('27'||TD.Subscriber_UID,1,11) Msisdn,
       TD.Subscriber_UID,
       IN_PLATFORM_PREFIX,
       TD.End_Of_Call,
       TD.Called_Party Other_Party,
       TD.Call_Duration,
       TD.Value,
       (TD.Value*TT.Display_Sign) DISPLAY_VALUE,
       TD.Credit_Balance,
       LAG(TD.credit_balance,1) over (order by TD.Subscriber_UID asc, TD.end_of_call asc) prev_credit_balance,
       TD.File_UID,
       TD.File_Type_UID,
       FT.Name File_Type
from   Transaction_Detail TD,
       Transaction_Type TT,
       File_Types FT,
       In_Platform IP,
       TC_Unitization_Map UM
where  TD.Transaction_Type_UID = TT.Transaction_Type_UID(+)
and    TD.IN_PLATFORM_UID = IP.IN_PLATFORM_UID(+)
and    TD.File_Type_Uid = FT.File_Type_Uid
and    TD.Provider_ID = UM.Provider_ID(+);
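
A hedged diagnostic sketch (not a confirmed fix, and assuming you can run EXPLAIN PLAN and DBMS_XPLAN): comparing the plan of a select against the view with the plan of the raw SELECT from the CREATE VIEW statement should show where the two diverge, for example whether the view version sorts the whole of Transaction_Detail for the WINDOW function while the plain query does not.

EXPLAIN PLAN FOR
SELECT * FROM call_monitor_view;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

-- Repeat EXPLAIN PLAN FOR with the raw SELECT from the CREATE VIEW above and compare.
-- If the extra cost is a WINDOW SORT over the full Transaction_Detail table and you
-- normally filter the view by subscriber, partitioning the LAG may let the optimizer
-- push that predicate into the view (note this also makes prev_credit_balance NULL for
-- each subscriber's first call instead of carrying the previous subscriber's balance over):
--   LAG(TD.credit_balance,1) OVER (PARTITION BY TD.Subscriber_UID
--                                  ORDER BY TD.end_of_call ASC) prev_credit_balance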

Similar Messages

  • Poor performance: portal report using inline views

    I have created a portal report that uses inline views and performs terribly. It has 6 inline views. When I cut out half the views, the performance doubles. When I run the same query in SQL*Plus on my portal database with all the views, I get the results back instantly. Any ideas on what is causing the performance hit in Portal? Any ideas on a remedy?

    More info
    SELECT patch_no, count(*) frequency
    FROM users_requests
    WHERE patchset IN (SELECT arps2.patchset_name
    FROM aru_bugfix_relationships abr, aru_bugfixes ab, aru_status_codes ac,
    aru_patchsets arps, aru_patchsets arps2
    WHERE arps.patchset_name = '11i.FIN_PF.E'
    AND abr.bugfix_id = ab.bugfix_id
    AND arps.bugfix_id = ab.bugfix_id
    AND abr.relation_type = ac.status_id
    AND arps2.bugfix_id = abr.related_bugfix_id
    AND abr.relation_type IN (601, 602))
    AND included ='Y'
    GROUP BY patch_no
    order by frequency desc, patch_no
    Runs < 1 sec from SQL Navigator and from Portal (if I hardcode the value for fampack).
    Takes ~50 secs if I replace it with :fampack and set the default value to 11i.FIN_PF.D.
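    (A hedged sketch, assuming SQL*Plus access to the portal schema; the bind variable below simply mirrors :fampack.) Running the bound form outside Portal and comparing its plan with the hardcoded form shows whether the bind variable itself changes the plan, or whether the slowdown only appears inside Portal:
    VARIABLE fampack VARCHAR2(30)
    EXEC :fampack := '11i.FIN_PF.D'
    EXPLAIN PLAN FOR
    SELECT patch_no, COUNT(*) frequency
    FROM   users_requests
    WHERE  patchset IN (SELECT arps2.patchset_name
                        FROM   aru_bugfix_relationships abr, aru_bugfixes ab, aru_status_codes ac,
                               aru_patchsets arps, aru_patchsets arps2
                        WHERE  arps.patchset_name = :fampack
                        AND    abr.bugfix_id = ab.bugfix_id
                        AND    arps.bugfix_id = ab.bugfix_id
                        AND    abr.relation_type = ac.status_id
                        AND    arps2.bugfix_id = abr.related_bugfix_id
                        AND    abr.relation_type IN (601, 602))
    AND    included = 'Y'
    GROUP  BY patch_no
    ORDER  BY frequency DESC, patch_no;
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);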

  • Poor Performance generating pdf with Disco Viewer

    Version: 10.1.2.54.25
    Please humour me, I’m not a Discoverer friendly person.
    Never even seen it.
    But I am supporting a client who brings me Discoverer problems in the third person...
    So issue is...
    They often generate large reports.
    Sometimes the users will export (not sure if that is the correct term or not) them to PDF format.
    This part of the process takes a while.
    A 10,000 line/row report can take 25 minutes to generate in pdf.
    Users are finding this unacceptable.
    Excel generation for a similar report is done in approx 1.5 minutes.
    Sorry, I can find nothing at Oracle sites, and a few things using google.
    I can see that there are some client-level controls in Disco that allow you to control resolution, memory, etc. for PDF generation.
    So..
    1. What exactly are all the related settings and what do they control?
    2. Are there any known issues with this?
    3. How is the PDF generated - is this a client process or a server process? How can I describe this process to my client so as to justify the difference between PDF & Excel (assuming there is no known issue)?
    4. Any good documents dealing with this (please don't point me to user guides)?
    5. Any other ideas or thoughts?

    Hi
    Well, if the same issue occurs in both Desktop and Viewer then you have your answer. It's not the way that Discoverer is running the workbook, it's the way the workbook has been constructed.
    For a start, 40000 rows for a Crosstab is way over the top and WILL cause performance issues. This is because Discoverer has to create a bucket for every data point for every combination of items on the page, side and top axes. The more rows, page items and column headings that you have, the more buckets you have and therefore the longer it will take for Discoverer to work out the contents of every bucket.
    Also, whenever you use page items or crosstabs, Discoverer has to retrieve all of the rows for the entire query, not just the first x rows as with a table. This is because it cannot possibly know how many buckets to create until it has all the rows.
    You therefore need to:
    a) apply sufficient filters to reduce the amount of data being returned to something manageable
    b) reduce the number of page items, if used
    c) reduce the number of items on the side or top axis of a crosstab
    d) reduce the number of complex calculations, especially calculations that would generate a new bucket
    If you have a lot of complex calculations, you should consider the use of a materialized view / summary folder to pre-calculate the values.
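    (Purely as an illustration of that last suggestion, not something from the original thread: a minimal materialized view that pre-computes an aggregate; sales_fact and its columns are hypothetical placeholders for whatever actually feeds the workbook.)
    CREATE MATERIALIZED VIEW sales_summary_mv
    BUILD IMMEDIATE
    REFRESH COMPLETE ON DEMAND
    ENABLE QUERY REWRITE
    AS
    SELECT region, product, SUM(amount) AS total_amount, COUNT(*) AS sale_count
    FROM   sales_fact
    GROUP  BY region, product;
    Registering something like this as a summary folder (or letting query rewrite pick it up) means the heavy calculation happens once, outside the workbook run.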
    Does this help?
    Best wishes
    Michael Armstrong-Smith
    URL: http://learndiscoverer.com
    Blog: http://learndiscoverer.blogspot.com

  • ITunes 5 poor performance in mini player view

    When I switch the iTunes 5 interface to the "mini player" view (green plus button), iTunes consumes ~65% CPU time continuously. This isn't the case when iTunes is minimized, and when the entire interface is visible it only takes ~15% CPU time continuously. This wasn't the case with iTunes 4.x, in which the mini player mode didn't appear to consume any more CPU time than any other view mode. It's terribly annoying since it heats the computer significantly and forces my laptop's fan to spin nonstop.
    I can only attribute this behavior to the new "skin," but is anyone else seeing poor performance from iTunes 5 in the mini player view?

    No, I just downloaded iTunes 5 a few days ago and I don't have any add-ons. I should clarify that this is on my 867MHz Powerbook G4.
    I've been observing the CPU usage some more, and it seems to take a huge hit with songs/labels that are very long, i.e. ones that scroll sideways across the yellow display on the second line. I've noticed this behavior with Podcasts - the Podcast name will be short and will not scroll, then its description will roll up and will be long enough to require sideways scrolling. The poor performance comes only when scrolling titles are being displayed, consuming 45-65% CPU.
    Playing music with short artist and album names consumes 10-15% CPU continuously, with spikes to 35% when the artist/album rolls upwards on the yellow display window. If sideways scrolling is displayed, again CPU usage stays high.
    If iTunes 5 is in mini player mode and not playing anything (only the Mac logo in the yellow window), then continuous CPU usage is <1%.
    It seems to me that text animation in the GUI has taken a huge hit with the upgrade to iTunes 5.

  • Apple maps has received a poor performance rating just after introduction of the iPhone 5. I am running google maps app on the phone. Siri cannot seem to get me to a specific address. Where does the problem lie? Thanks.

    Apple maps has received a poor performance rating just after introduction of the iPhone 5. I am running Google Maps app on the phone. SIRI cannot seem to get me to a specific address. Where does the problem lie? Also can anyone tell me the hierarchy of use between the Apple Maps, SIRI, and Google maps when the app is on the phone? How do you choose one over the other as the default map usage? Or better still how do you suppress SIRI from using the Apple maps app when requesting a "go to"?
    I have placed an address location into the CONTACTS list and when I ask SIRI to "take me there" it found a TOTALLY different location in the metro area with the same street name. I have included the address, the quadrant (NE) and the ZIP code in the CONTACTS list. As it turns out, no amount of canceling the trip or relocating the address in the CONTACTS list line would prevent SIRI from taking me to this bogus location. FINALLY I typed in Northeast for NE in the CONTACTS list (NE being the accepted method of defining the USPS location quadrant), canceled the current map route, and it finally found the correct address. This problem would normally not demand such a response from me to have it fixed, but the address is that of a hospital in the center of town, and this hospital HAS a branch location in a similar part of town (NOT the original address SIRI was trying to take me to). This screw-up could be dangerous if not catastrophic to someone who was looking for a hospital location fast and did not know of these two similar locations. After all, the whole POINT of directions is not just whimsical pastime or convenience; in a pinch people need to rely on this function. OR are my expectations set too high?
    How does the iPhone select between one app or the other (Apple Maps or Google Maps) as it relates to SIRI finding and showing a map route?
    Why does SIRI return an address that is NOT the correct address nor is the returned location in the requested ZIP code?
    Is there a known bug in the CONTACTS list that demands the USPS quadrant ID be spelled out, as opposed to abbreviated, to permit SIRI to do its routing?
    Thanks for any clarification on these matters.

    Siri will only use Apple Maps; this cannot be changed. You could try Google voice search in the Google app.

  • Poor performance of the BDB cache

    I'm experiencing incredibly poor performance of the BDB cache and wanted to share my experience, in case anybody has any suggestions.
    Overview
    Stone Steps maintains a fork of a web log analysis tool, the Webalizer (http://www.stonesteps.ca/projects/webalizer/). One of the problems with the Webalizer is that it maintains all data (i.e. URLs, search strings, IP addresses, etc.) in memory, which puts a cap on the maximum size of the data set that can be analyzed. Naturally, BDB was picked as the fastest database to maintain the analyzed data set on disk and produce reports by querying the database. Unfortunately, once the database grows beyond the cache size, overall performance goes down the drain.
    Note that the version of SSW available for download does not support BDB in the way described below. I can make the source available for you, however, if you find your own large log files to analyze.
    The Database
    Stone Steps Webalizer (SSW) is a command-line utility and needs to preserve all intermediate data for the month on disk. The original approach was to use a plain-text file (webalizer.current, for those who know anything about SSW). The BDB database that replaced this plain text file consists of the following databases:
    sequences (maintains record IDs for all other tables)
    urls - primary database containing URL data: record ID (key), the URL itself, and grouped data such as number of hits, transfer size, etc.
    urls.values - secondary database that contains a hash of the URL (key) and the record ID linking it to the primary database; this database is used for value lookups.
    urls.hits - secondary database that contains the number of hits for each URL (key) and the record ID to link it to the primary database; this database is used to order URLs in the report by the number of hits.
    The remaining databases are here just to indicate the database structure. They are the same in nature as the two described above. The legend is as follows: (s) will indicate a secondary database, (p) - primary database, (sf) - filtered secondary database (using DB_DONOTINDEX).
    urls.xfer (s), urls.entry (s), urls.exit (s), urls.groups.hits (sf), urls.groups.xfer (sf)
    hosts (p), hosts.values (s), hosts.hits (s), hosts.xfer (s), hosts.groups.hits (sf), hosts.groups.xfer (sf)
    downloads (p), downloads.values (s), downloads.xfer (s)
    agents (p), agents.values (s), agents.values (s), agents.hits (s), agents.visits (s), agents.groups.visits (sf)
    referrers (p), referrers.values (s), referrers.values (s), referrers.hits (s), referrers.groups.hits (sf)
    search (p), search.values (s), search.hits (s)
    users (p), users.values (s), users.hits (s), users.groups.hits (sf)
    errors (p), errors.values (s), errors.hits (s)
    dhosts (p), dhosts.values (s)
    statuscodes (HTTP status codes)
    totals.daily (31 days)
    totals.hourly (24 hours)
    totals (one record)
    countries (a couple of hundred countries)
    system (one record)
    visits.active (active visits - variable length)
    downloads.active (active downloads - variable length)
    All these databases (49 of them) are maintained in a single file. Maintaining a single database file is a requirement, so that the entire database for the month can be renamed, backed up and used to produce reports on demand.
    Database Size
    One of the sample Squid logs I received from a user contains 4.4M records and is about 800MB in size. The resulting database is 625MB in size. Note that there is no duplication of text data - only nodes and such values as hits and transfer sizes are duplicated. Each record also contains some small overhead (record version for upgrades, etc).
    Here are the sizes of the URL databases (other URL secondary databases are similar to urls.hits described below):
    urls (p):
    8192 Underlying database page size
    2031 Overflow key/data size
    1471636 Number of unique keys in the tree
    1471636 Number of data items in the tree
    193 Number of tree internal pages
    577738 Number of bytes free in tree internal pages (63% ff)
    55312 Number of tree leaf pages
    145M Number of bytes free in tree leaf pages (67% ff)
    2620 Number of tree overflow pages
    16M Number of bytes free in tree overflow pages (25% ff)
    urls.hits (s)
    8192 Underlying database page size
    2031 Overflow key/data size
    2 Number of levels in the tree
    823 Number of unique keys in the tree
    1471636 Number of data items in the tree
    31 Number of tree internal pages
    201970 Number of bytes free in tree internal pages (20% ff)
    45 Number of tree leaf pages
    243550 Number of bytes free in tree leaf pages (33% ff)
    2814 Number of tree duplicate pages
    8360024 Number of bytes free in tree duplicate pages (63% ff)
    0 Number of tree overflow pages
    The Testbed
    I'm running all these tests using the latest BDB (v4.6) built from source on a Win2K3 server (release version). The test machine is a 1.7GHz P4 with 1GB of RAM and an IDE hard drive. Not the fastest machine, but it was able to handle a log file like the one described above at a speed of 20K records/sec.
    BDB is configured in a single file in a BDB environment, using private memory, since only one process ever has access to the database.
    I ran a performance monitor while running SSW, capturing private bytes, disk read/write I/O, system cache size, etc.
    I also used a code profiler to analyze SSW and BDB performance.
    The Problem
    Small log files, such as 100MB, can be processed in no time - BDB handles them really well. However, once the entire BDB cache is filled up, the machine goes into some weird state and can sit in this state for hours and hours before completing the analysis.
    Another problem is that traversing large primary or secondary databases is a really slow and painful process. It is really not that much data!
    Overall, the 20K rec/sec quoted above drops down to 2K rec/sec. And that's all after most of the analysis has been done, just trying to save the database.
    The Tests
    SSW runs in two modes, memory mode and database mode. In memory mode, all data is kept in memory in SSW's own hash tables and then saved to BDB at the end of each run.
    In memory mode, the entire BDB is dumped to disk at the end of the run. At first, it runs fairly fast, until the BDB cache is filled up. Then writing (disk I/O) goes at a snail's pace, at about 3.5MB/sec, even though this disk can write at about 12-15MB/sec.
    Another problem is that the OS cache gets filled up, chewing through all available memory long before completion. In order to deal with this problem, I disabled the system cache using the DB_DIRECT_DB/LOG options. I could see the OS cache left alone, but once the BDB cache was filled up, processing speed all but stopped.
    Then I flipped options and used the DB_DSYNC_DB/LOG options to disable OS disk buffering. This improved overall performance, and even though the OS cache was filling up, it was being flushed as well and, eventually, SSW finished processing this log, sporting 2K rec/sec. At least it finished, though - other combinations of these options led to never-ending tests.
    In the database mode, stale data is put into BDB after processing every N records (e.g. 300K rec). In this mode, BDB behaves similarly - until the cache is filled up, the performance is somewhat decent, but then the story repeats.
    Some of the other things I tried/observed:
    * I tried to experiment with the trickle option. In all honesty, I hoped that this would be the solution to my problems - trickle some, make sure it's on disk and then continue. Well, trickling was pretty much useless and didn't make any positive impact.
    * I disabled threading support, which gave me some performance boost during regular value lookups throughout the test run, but it didn't help either.
    * I experimented with page sizes, ranging from the default 8K to 64K. Using large pages helped a bit, but as soon as the BDB cache filled up, the story repeated.
    * The Db.put method, which was called 73557 times while profiling saving the database at the end, took 281 seconds. Interestingly enough, this method called ReadFile function (Win32) 20000 times, which took 258 seconds. The majority of the Db.put time was wasted on looking up records that were being updated! These lookups seem to be the true problem here.
    * I tried libHoard - it usually provides better performance, even in a single-threaded process, but libHoard didn't help much in this case.

    I have been able to improve processing speed up to 6-8 times with these two techniques:
    1. A separate trickle thread was created that would periodically call DbEnv::memp_trickle. This works especially well on multicore machines, but also speeds things up a bit on single-CPU boxes. This alone improved speed from 2K rec/sec to about 4K rec/sec.
    2. Maintaining multiple secondary databases in real time proved to be the bottleneck. The code was changed to create the secondary databases at the end of the run (calling Db::associate with the DB_CREATE flag), right before the reports that use them are generated. This improved speed from 4K rec/sec to 14K rec/sec.
    Hello Stone,
    I am facing a similar problem, and I too hope to resolve it with memp_trickle. I had these queries:
    1. What % of clean pages did you specify?
    2. At what interval were you calling this thread to invoke memp_trickle?
    This would give me a rough idea of how to tune my app. I would really appreciate it if you could answer these queries.
    Regards,
    Nishith.

  • Crashing and poor performance during playback of a large project.

    Hi,
    I've been a regular user of iMovie for about 3 years and have edited several 50GB+ projects of DV-quality footage without too many major issues with lag or 'dropped frames'. I currently have an 80GB project that resides on a 95% full 320GB FireWire 400 external drive and that has been getting very slow to open and near impossible to work with.
    Pair the bursting-at-the-seams external drive with an overburdened 90% full internal drive, and the poor performance wasn't unexpected. So I bought a 1TB FireWire 400 drive to free up some space on my Mac. My large iTunes library (150GB) was the main culprit and it was quickly moved onto the new drive.
    The iMovie project was then moved into my Mac's Movies folder - I figured that the project needs some "room" to work (not that I really understand how Macs use memory) and that having roughly 80GB free with 1.5GB RAM (which is more than I used to have) would make everything just that much smoother.
    Wrong.
    The project opened in roughly the same amount of time - but when I tried to play back the timeline, it plays like rubbish and then after 10-15 secs the Mac goes into 'sleep' mode. The screen goes off, the fans die down and the 'heartbeat' light goes on. A click of the mouse 'wakes' the Mac, only to find that if I try again, I get the same result.
    I've got so many variables going on here that it's probably hard to suggest what the problem might be, but all I could think of was repairing permissions (which I did, and none needed it).
    Stuck on this. Anyone have any advice?

    I understand completely, having worked with a 100 GB project once. I found that once a movie bloats up to that size, it just gets more difficult to work with, and playback turns jerky.
    I do have a couple of suggestions for you.
    You may need more than that 80GB free space for this movie. Is there any reason you cannot move it to the 1TB drive? If you have only your iTunes on it, you should have about 800 GB free.
    If you still need to have the project on your computer's drive, set your computer to never sleep.
    How close to finishing editing are you with this movie? If you are nearly done except for adding audio clips, you can export (share) it as QuickTime Full Quality movie. The resultant quicktime version of your iMovie will be smaller because it will contain only the clips actually used in the movie, not all the saved whole clips that iMovie keeps as its nondestructive editing feature. The quicktime movie will be one continuous clip, incorporating all your edits and added audio. It CAN be further edited, but you cannot change text of titles already there, or change transitions or remove already added audios.
    I actually do this with nearly every iMovie. I create my movies by first importing videos, then adding still photos, then editing with titles, effects and transitions. I add audio last, and if it becomes too distorted in playback, I export the movie and then continue adding audio clips.
    My 100+ GB movie slimmed down to only 8 GB with this method. (The large size was due to having so many clips. The movie was from VHS footage of my son's little league all-star game, and the video had so many skipped segments that I had to split it into thousands of clips to remove the dropped ones. Very old VHS tape!).
    I haven't upgraded to QT 7.5.5, but I heard that the jerky playback issue is mostly resolved with this new upgrade. I am in mid-project with about 5 iMovies, so I will probably plod along with my work-around method, not wanting to upgrade in the middle of any of them.
    Hope this is helpful to you.

  • Skype crashing and poor performance

    Hello!
    I have a Lumia 625 with WP8.1. My problem is that Skype has really poor performance on my phone. It crashes 6 times out of 10 on startup, and even if I manage to start it, the whole app is slow and laggy. Sometimes I can't even write a message, it's so laggy. Video calling is absolutely out of the question; it crashes my whole phone. I have no similar problems with other instant messaging apps nor with high-end games. Something in the Skype app is obviously using way more resources than it's supposed to. It's a simple chat program, why would it need so many resources?
    The problem seems to originate from the lower (512 MB) RAM size of my phone model, because I have experienced the same effect with poorly written apps that don't keep in mind that there are 512 MB RAM devices, not only 1GB+ ones, and use too many resources.
    Please don't try to suggest to restart/reset the phone, and reinstall the app. Those are already behind me, and they did NOT help the problem. I'm not searching for temporary workarounds.
    Please find a solution for this problem, because it is super annoying, and I can't use Skype, which will eventually result in me leaving Skype.

    When it crashes on startup it goes like:
    I tap the skype tile
    The black screen with the "Loading....." appears (default WP loading screen). Usually this takes longer than it would normally take on any other app.
    For a blink of an eye the Skype gui appears, but it instantly crashes.
    If I can successfully start up the app, it just keeps lagging. I start to write a message to a contact, and sometimes the letters don't even appear as I touch them; they appear much later, all together. If I tap the send message button the whole GUI freezes (it seems to freeze until the contact gets my message). Sometimes the lag gets stronger, and sometimes it almost vanishes, but if I keep making inputs when the lag is strong, it sometimes crashes the whole app.
    When I first installed the app, everything was fine, but after a while this behavior appeared. I reinstalled the app and that solved the problem temporarily, but after some time the problem re-appeared. I don't know if it's relevant, but there was a time when I couldn't make myself appear online all the time (when the app was not started); during that time I didn't experience the lags and crashes. Anyway, what I'm sure about is that the lags get worse with time. I don't know if it's because of use of the app (caching?) or the updates the phone makes to itself (a conflict?).
    I will try to reinstall Skype. Probably it will fix it for now. I hope the problem won't appear again.

  • Poor Performance, startup, and slow everything...

    I have been noticing lately that my MBP is really under performing. Here are my specs:
    -10.5.5
    -2.2 GHz Intel Core 2 Duo
    -2 GB Ram
    -44 GB of HD space
    I recently got a new iMac at work and it is really great - speedy and everything - so now it makes me see how poorly my own computer is performing. I know that it is capable of being really fast but I just don't know what I can do to make it faster. I have repaired disk permissions but that does basically nothing.
    Thanks for any help!

    Kappy's Personal Suggestions for OS X Maintenance
    For disk repairs use Disk Utility. For situations DU cannot handle the best third-party utilities are: Disk Warrior; DW only fixes problems with the disk directory, but most disk problems are caused by directory corruption; Disk Warrior 4.x is now Intel Mac compatible. TechTool Pro provides additional repair options including file repair and recovery, system diagnostics, and disk defragmentation. TechTool Pro 4.5.1 or higher are Intel Mac compatible; Drive Genius is similar to TechTool Pro in terms of the various repair services provided. Versions 1.5.1 or later are Intel Mac compatible.
    OS X performs certain maintenance functions that are scheduled to occur on a daily, weekly, or monthly period. The maintenance scripts run in the early AM only if the computer is turned on 24/7 (no sleep.) If this isn't the case, then an excellent solution is to download and install a shareware utility such as Macaroni, JAW PseudoAnacron, or Anacron that will automate the maintenance activity regardless of whether the computer is turned off or asleep. Dependence upon third-party utilities to run the periodic maintenance scripts had been significantly reduced in Tiger and Leopard.
    OS X automatically defrags files less than 20 MBs in size, so unless you have a disk full of very large files there's little need for defragmenting the hard drive. As for virus protection there are few if any such animals affecting OS X. You can protect the computer easily using the freeware Open Source virus protection software ClamXAV. Personally I would avoid most commercial anti-virus software because of their potential for causing problems.
    I would also recommend downloading the shareware utility TinkerTool System that you can use for periodic maintenance such as removing old logfiles and archives, clearing caches, etc.
    For emergency repairs install the freeware utility Applejack. If you cannot start up in OS X, you may be able to start in single-user mode from which you can run Applejack to do a whole set of repair and maintenance routines from the commandline. Note that AppleJack 1.5 is required for Leopard.
    When you install any new system software or updates be sure to repair the hard drive and permissions beforehand. I also recommend booting into safe mode before doing system software updates.
    Get an external Firewire drive at least equal in size to the internal hard drive and make (and maintain) a bootable clone/backup. You can make a bootable clone using the Restore option of Disk Utility. You can also make and maintain clones with good backup software. My personal recommendations are (order is not significant):
    1. Retrospect Desktop (Commercial - not yet universal binary)
    2. Synchronize! Pro X (Commercial)
    3. Synk (Backup, Standard, or Pro)
    4. Deja Vu (Shareware)
    5. Carbon Copy Cloner (Donationware)
    6. SuperDuper! (Commercial)
    7. Intego Personal Backup (Commercial)
    8. Data Backup (Commercial)
    The following utilities can also be used for backup, but cannot create bootable clones:
    1. Backup (requires a .Mac account with Apple both to get the software and to use it.)
    2. Toast
    3. Impression
    4. arRSync
    Apple's Backup is a full backup tool capable of also backing up across multiple media such as CD/DVD. However, it cannot create bootable backups. It is primarily an "archiving" utility as are the other two.
    Impression and Toast are disk image based backups, only. Particularly useful if you need to backup to CD/DVD across multiple media.
    Visit The XLab FAQs and read the FAQs on maintenance, optimization, virus protection, and backup and restore.
    Additional suggestions will be found in Mac Maintenance Quick Assist.
    Referenced software can be found at www.versiontracker.com and www.macupdate.com.
    Add more RAM. Be careful about comparing to other, non-similar hardware. The iMac may be newer, have a faster processor, more RAM, etc. Also, "seems faster" is no substitute for benchmarks.

  • Poor performance on Discoverer after upgrade to 11g database

    Hello,
    We have two customers who have experienced a sharp decline in query performance - their queries take much longer to run - since upgrading to an 11g database.
    One was a full Disco upgrade from 4 to 10 plus the database upgrade.
    The other was purely a database upgrade - Discoverer version was already 10g.
    They were both on 9i database.
    They are both Oracle Apps customers - and the reports are based on a mixture of everything from standard tables to custom tables - there is no pattern (or none that we have seen) in the poorly performing reports.
    I have not seen much on metalink regarding this - has anyone else come across why this would be?
    It does seem to be only Discoverer - standard reports and the app are performing as expected.
    Any advice welcome,
    Thanks
    Rachael

    Hi Rachael
    There are additional database privileges needed for running against 10g and 11g databases that weren't needed on 9i. Here are the typical privileges that I use:
    accept username prompt'Enter Username: '
    accept pword prompt'Enter Password: '
    create user &username identified by &pword;
    grant connect, resource to &username;
    grant analyze any to &username;
    grant create procedure, create sequence to &username;
    grant create session, create table, create view to &username;
    grant execute any procedure to &username;
    grant global query rewrite to &username;
    grant create any materialized view to &username;
    grant drop any materialized view to &username;
    grant alter any materialized view to &username;
    grant select any table, unlimited tablespace to &username;
    grant execute on sys.dbms_job to &username;
    grant select on sys.v_$parameter to &username;
    I appreciate that not all of the above are needed for all users, and some are only needed if the user is going to be scheduling.
    However, I do know that sys.v_$parameter was not needed in 9i. Can you check whether you have this assigned?
    I know this might sound silly too, but you need to refresh your database statistics in order for Discoverer to work well. You might also want to try turning off the query predictor if it is still enabled.
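    (For illustration only, not part of the original reply: one way to check both of those points from SQL*Plus as a DBA. EUL_OWNER below is a hypothetical placeholder for the actual Discoverer schema/user.)
    SELECT grantee, owner, table_name, privilege
    FROM   dba_tab_privs
    WHERE  table_name = 'V_$PARAMETER'
    AND    grantee = 'EUL_OWNER';   -- hypothetical Discoverer user
    -- and refresh optimizer statistics for the schemas the reports query:
    EXEC DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'EUL_OWNER', cascade => TRUE);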
    Best wishes
    Michael

  • Query : Hyperion Performance Scorecard Rollup Functionality

    Hi All,
    Hyperion Performance Scorecard Rollup Functionality in measure and scorecard template.
    Query: We are implementing Hyperion Performance Scorecard for a client using measure templates and scorecard templates. After the build, the client came up with the following new requirements:
    Strategy hierarchy as follows:
    Level 1 - SBU (Parent); Level 2 - Groups (Child); Level 3 - Teams (Child); Level 4 - Desk (Child); Level 5 - RM (Child)
    1 - Along with the existing scorecard functionality, they need to roll up 2 KPIs (Credit Quality1, Credit Quality2) explicitly only at the RM level (the lowest level).
    2 - This hierarchy needs to be followed for the scorecard roll-up (RMs to Desks to Teams to Groups to SBUs).
    3 - Number of KPIs to be used/modified: Credit Quality2 means bad assets (non-performing loans) and Credit Quality1 means good assets (performing loans).
    4 - Both these KPIs should roll up to all RMs, but the bad-asset KPI (Credit Quality2) should not roll up to Desks, Teams, Groups or SBUs, while KPI1, Good Assets (performing assets), should roll up to all levels.
    4.1 - Example: let's assume there are 2 RMs (RM1 & RM2). RM1 has a bad asset and RM2 does not; then for RM1 both KPIs should roll up, but at the levels above, the first KPI (non-performing assets) should not roll up while the second KPI (performing assets) should roll up.
    5 - Number of scorecards required: RMs (approx. 3000), Desks (total number), Teams (total number), Groups (total number), and finally SBUs (total number).
    As per my understanding, in the existing HPS design we are using Measure Templates & Scorecard Templates, in which all KPIs automatically roll up to RMs and then from RMs to all the hierarchy levels above. For this new requirement we explicitly need to create another (secondary) scorecard with 2 KPIs; both should roll up to the RMs (approx. 3000), but only 1 of the KPIs should roll up to the levels above (Desks to Teams to Groups to SBUs), not the other KPI on this new scorecard.
    As per my understanding in the current design all KPIs are rolling up to all levels.
    Question: In the current scenario, is there a possibility or solution to restrict a particular KPI from rolling up to the levels above?
    Any help would be highly appreciated.

    Thanks for your response!
    I'm not sure that the Power Pivot plugin for Excel will be available here. We use Excel 2010, and we have restrictions in our environment. Additionally, we have 32-bit Office installed, and my understanding is that you need 64-bit on both client and server when working with Power Pivot, so exporting Excel Power Pivots from SharePoint 2010 to a 32-bit client (or vice versa) wouldn't work.
    By robust I mean the ability to utilize different types of data sources (CSV, Excel, SQL Server, Oracle) as well as data models (transactional, analytical, tabular); the ability to handle large datasets; pivot table functionality; drill down and drill through; and rich data visualizations.
    Currently this data is in Oracle. I will need to export it to SQL, and I would like to leave it as OLTP since OLAP technologies are not currently in our skill set. Hopefully these can be added in the next year or so. If Power View loses the bulk of its reporting power when moving to OLTP, I will just build these in SSRS, UNLESS I can use GROUP BY, ROLLUP and CUBE in my queries to give Power View a data set where its capabilities can be utilized.
    Love them all...regardless. - Buddha

  • Grid upgrade patch 10.2.0.5? Countless deadlocks, poor performance

    hi,
    I upgraded the Grid Control OMS and repository from 10.2.0.4 to 10.2.0.5 (Windows 2003 32-bit) and it was a disaster: countless deadlocks, poor performance, four database CPUs running at nearly 100%, and an open service request
    (7573665.992) with a poor response. After the upgrade, parts are not functioning, for example the "Large Repository option" (see patch 8321694).
    We have a big repository: 1,100 endpoint agents. The upload process is not keeping up; we now have more than 150,000 files that are not uploaded yet.
    I am really very disappointed about the quality of the software and the professional reaction to solving a production problem. Has Oracle tested this patch with just 10 agents? I think this patch is a good stress program for hard disc producers. Send it please to IBM ...
    greetings
    Efstratios

    I had successfully applied patch 10.2.0.5 after facing problems for a couple of days (1 week) and also raising an SR. Finally I did it, but almost 99% of it was my own effort.
    I have a 32-bit Windows machine with 4 GB of RAM and MS Windows 2003 SP2 installed, and I was monitoring 60 targets including 20 agents on different hosts.
    Before starting the upgrade I was quite happy. My platform at that time was GC 10.2.0.4 and DB 10.1.0.4.
    I downloaded the patch and started applying it. It did not work. I raised an SR and after two days nothing was resolved.
    So I decided to de-install everything (yes, you read that right, I de-installed everything including the DB).
    Removed the registry entries.
    Then installed the following.
    NOTE: Shut down all agent services on the other hosts, otherwise during installation it searches for the other host machines and you have to wait, wait and wait.
    1. GC 10.2.0.2 Full Version
    2. Applied Patch 10.2.0.4
    3. Applied Patch 10.2.0.5
    During installation my CPU went to almost 100% for a couple of hours. But I kept waiting and it worked.
    Finally I finished my installation.
    You must restart the machine after the installation finishes, otherwise the CPU will always be at 100%.
    Now restart all agents and run the emctl secure agent command from each agent home.
    Then you need to re
    Right now my GC is fine, but one problem is still there: I am not able to start the iasconsole service, and for that I raised an SR and am still waiting for a resolution after 2 days.
    Thanks

  • Misconception about the LAG function's ... functionality?

    So I want to look at the value of a column on a current row, and the value of the same column on the most recently entered row, prior to the current. The problem I'm encountering is that for the current row, I'm only interested in those added last month. So, if the record that I'm seeking in the LAG function is prior to last month, it gets rejected from the resultset.
    create table check_lag
    (filenr number, code varchar2(2), create_date date);
    insert into check_lag values (1,'02',to_date('9/5/2008','MM/DD/YYYY')); -- current
    insert into check_lag values (1,'01',to_date('9/1/2008','MM/DD/YYYY')); --lag record, same month
    insert into check_lag values (2,'02',to_date('9/10/2008','MM/DD/YYYY'));-- current
    insert into check_lag values (2,'01',to_date('8/10/2008','MM/DD/YYYY')); -- lag record, prior month
    So this query's results made sense:
    SELECT FILENR, CODE,
           LAG( CODE ) OVER( PARTITION BY FILENR ORDER BY FILENR, CREATE_DATE ) AS PRIOR_CODE,
           CREATE_DATE
    FROM   CHECK_LAG;
    FILENR CODE PRIOR_CODE CREATE_DATE
    1      01              9/1/2008
    1      02   01         9/5/2008
    2      01              8/10/2008
    2      02   01         9/10/2008
    But as soon as I add a WHERE clause which sets a boundary around last month, I exclude a LAG record:
    SELECT FILENR, CODE,
           LAG( CODE ) OVER( PARTITION BY FILENR ORDER BY FILENR, CREATE_DATE ) AS PRIOR_CODE,
           CREATE_DATE
    FROM   CHECK_LAG
    WHERE  CREATE_DATE BETWEEN TO_DATE( '09/01/2008', 'MM/DD/YYYY' )
                           AND TO_DATE( '09/30/2008 23:59:59', 'MM/DD/YYYY HH24:MI:SS' );
    FILENR CODE PRIOR_CODE CREATE_DATE
    1      01              9/1/2008
    1      02   01         9/5/2008
    2      02              9/10/2008
    I know that I could push this into an inline view and provide the WHERE clause with the date range after the inline view is processed, but this is a huge table with an index on CREATE_DATE, and so the following forces a table scan:
    SELECT *
    FROM   ( SELECT FILENR, CODE,
                    LAG( CODE ) OVER( PARTITION BY FILENR ORDER BY FILENR, CREATE_DATE ) AS PRIOR_CODE,
                    CREATE_DATE
            FROM   CHECK_LAG )
    WHERE  CREATE_DATE BETWEEN TO_DATE( '09/01/2008', 'MM/DD/YYYY' )
                           AND TO_DATE( '09/30/2008 23:59:59', 'MM/DD/YYYY HH24:MI:SS' )
    AND    PRIOR_CODE IS NOT NULL;
    FILENR CODE PRIOR_CODE CREATE_DATE
    1      02   01         9/5/2008
    2      02   01         9/10/2008
    Is that just the way things are, or am I missing out on another approach?
    Thanks,
    Chuck

    Hi, Chuck,
    Thanks for including the CREATE TABLE and INSERT statements.
    When you use "ORDER BY <single_column>" in an analytic function, you can restrict it to a range of values relative to the ordering value on the current row.
    For example, you could say
    ORDER BY create_date
    RANGE BETWEEN 60 PRECEDING
              AND 30 PRECEDING
    if you only wanted to consider rows that were no more than 60 days earlier, but at least 30 days earlier.
    It's a little more complicated for your problem, because you can't just hard-code numbers like 60 and 30; you have to compute them for every row.
    SELECT filenr
    ,      code
    ,      LAST_VALUE (code) OVER
           ( PARTITION BY filenr
             ORDER BY create_date
             RANGE BETWEEN create_date - ADD_MONTHS( TRUNC( create_date, 'MM' ), -1 ) PRECEDING
                       AND create_date - TRUNC( create_date, 'MM' ) PRECEDING
           ) AS prior_code
    ,      create_date
    FROM   check_lag;
    You could probably get the same results using LAG, but this is exactly what LAST_VALUE was designed for.
    Windowing is described in the SQL Language Reference manual (http://download.oracle.com/docs/cd/B28359_01/server.111/b28286/functions001.htm#sthref972), in the section on analytic functions in general.
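    (Not from the original reply, just an alternative sketch under the assumption that the CREATE_DATE filter is what lets the index be used: read the target month plus the month before it inside the inline view, so an index range scan is still possible and LAG can still see the most recent prior row within that window, then keep only the target month in the outer query. Note it only looks back as far as the first of the previous month; an older prior row would still be missed.)
    SELECT *
    FROM  ( SELECT filenr, code,
                   LAG( code ) OVER ( PARTITION BY filenr ORDER BY create_date ) AS prior_code,
                   create_date
            FROM   check_lag
            WHERE  create_date >= ADD_MONTHS( TO_DATE( '09/01/2008', 'MM/DD/YYYY' ), -1 )   -- widened window
            AND    create_date <  TO_DATE( '10/01/2008', 'MM/DD/YYYY' ) )
    WHERE  create_date >= TO_DATE( '09/01/2008', 'MM/DD/YYYY' )
    AND    prior_code IS NOT NULL;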

  • Photoshop laggy, poor performance with GPU enabled

    I've been using Photoshop CS6 for some time now. I had enabled GPU usage earlier and I remember that Photoshop ran absolutely smoothly without any lag whatsoever. This morning, all of a sudden, it has become extremely laggy. I don't understand the reason behind this. This had happened to me before as well, but it became alright after a few days. This also happened to me once while using Photoshop CS5. I didn't install any plugin and didn't change any software. I also want to point out that I had driver version 13.4 when Photoshop was working fine, and now too I have the same driver version.
    CPU: AMD Phenom II X4 955
    Memory: 6 GB
    Free storage: 8 GB
    GPU: ATI Radeon HD 5850

    I've also noticed poor performance since installing the 10.1 plugin.  Not just in HD video etc, but in general use.  So much for "improved performance"...
    No idea on solutions at this stage - In the process of downgrading to previous version to see if that fixes my problems

  • Shared nothing live migration over SMB. Poor performance

    Hi,
    I'm experiencing really poor performance when migrating VMs between newly installed Server 2012 R2 Hyper-V hosts.
    Hardware:
    Dell M620 blades
    256Gb RAM
    2*8C Intel E5-2680 CPUs
    Samsung 840 Pro 512Gb SSD running in Raid1
    6* Intel X520 10G NICs connected to Force10 MXL enclosure switches
    The NICs are using the latest drivers from Intel (18.7) and firmware version 14.5.9
    The OS installation is pretty clean. Its Windows Server 2012 R2 + Dell OMSA + Dell HIT 4.6
    I have removed the NIC teams and vmSwitch/vNICs to simplify the problem solving process. Now there is one nic configured with one IP. RSS is enabled, no VMQ.
    The graphs are from 4 tests.
    Test 1 and 2 are nttcp-tests to establish that the network is working as expected.
    Test 3 is a shared nothing live migration of a live VM over SMB
    Test 4 is a storage migration of the same VM when shut down. The VM is transferred using BITS over HTTP.
    It's obvious that the network and NICs can push a lot of data. Test 2 had a throughput of 1130MB/s (9Gb/s) using 4 threads. The disks can handle a lot more than 74MB/s, as proven by test 4.
    While the graph above doesn't show the CPU load, I have verified that no CPU core was even close to running at 100% during tests 3 and 4.
    Any ideas?
    Test results (Vmswitch = No, RSS = Yes, VMQ = No for all four tests):
    Test 1 - NTtcp: Config = NTttcp.exe -r -m 1,*,172.17.20.45 -t 30; Live Migration Config = N/A; Throughput = 500 MB/s
    Test 2 - NTtcp: Config = NTttcp.exe -r -m 4,*,172.17.20.45 -t 30; Live Migration Config = N/A; Throughput = 1130 MB/s
    Test 3 - Shared nothing live migration: Config = Online VM, 8GB disk, 2GB RAM, migrated from host 1 -> host 2; Live Migration Config = Kerberos, Use SMB, any available net; Throughput = 74 MB/s
    Test 4 - Storage migration: Config = Offline VM, 8GB disk, migrated from host 1 -> host 2; Live Migration Config = Unencrypted BITS transfer; Throughput = 350 MB/s

    Hi Per Kjellkvist,
    Please try to change the "advanced features" settings of "live migrations" in "Hyper-V settings" and select "Compression" in the "performance options" area.
    Then test 3 and 4 .
    Best Regards
    Elton Ji
