LR 2.6 and Wacom Intuos4: Performance issues

Hi folks,
recently I added a Wacom Intuos4 graphics tablet to my PC (Windows 7 Ultimate 64-bit; Intel quad-core 3 GHz and 8 GB RAM) for more precise work in Lightroom and Photoshop. Before I added the Intuos4, Lightroom's performance with the adjustment brush and the gradient tool was fine,
with nothing to complain about. Now that I am using the graphics tablet, both tools are very slow and LR's CPU consumption rises to about 70-80%. In Photoshop, however, the Intuos4 causes no performance issues; everything works as well as before when I use the tablet.
Anybody else having this problem and/or even knowing why this is so?
Looking forward to your responses.
Best regards
Thomas

doc tee wrote:
I asked Wacom which issues had been resolved. They told me that it has nothing to do with the LR performance problem. As expected, Wacom does not see itself as responsible in this matter. So the driver update should not change anything here, and indeed it does not.
Which graphics adapter are you using (nVidia cards are also suspected of causing LR performance trouble)?
This machine has a GeForce GTX 285, so either the problem does not affect all nVidia cards, or there is some other problem trigger that is missing on my system. The machine is only about 1.5 months old, and I take care not to have crappy/unneeded processes running in the background. I also do not really use the tablet much with Lightroom, so it might be that I simply do not perform the kind of actions that trigger the problem. I have tried going really overboard with gradient filters and adjustment brushes for testing purposes though, and have not been able to reproduce the problem. CPU usage does go up noticeably while dragging a gradient filter, but this is to be expected due to the computational complexity of the task, and at no point does it ever make the operation feel sluggish.
System specs for comparison:
Intel Core i7 920 @ 2.66 GHz
6 GB DDR3-1600
GeForce GTX 285 1 GB (using the 196.21 drivers)
Windows 7 64-bit
The system disk (where Lightroom itself is stored) is an Intel SSD, but the catalog and the photos are on rotating media.

Similar Messages

  • Query performance and data-loading performance issues

    What are the query performance issues we need to take care of? Please explain and let me know the relevant T-codes. This is urgent.
    What are the data-loading performance issues we need to take care of? Please explain and let me know the T-codes.
    Will reward full points.
    Regards
    Guru

    BW Back end
    Some Tips -
    1)Identify long-running extraction processes on the source system. Extraction processes are performed by several extraction jobs running on the source system. The run-time of these jobs affects the performance. Use transaction code SM37 — Background Processing Job Management — to analyze the run-times of these jobs. If the run-time of data collection jobs lasts for several hours, schedule these jobs to run more frequently. This way, less data is written into update tables for each run and extraction performance increases.
    2)Identify high run-times for ABAP code, especially for user exits. The quality of any custom ABAP programs used in data extraction affects the extraction performance. Use transaction code SE30 — ABAP/4 Run-time Analysis — and then run the analysis for the transaction code RSA3 — Extractor Checker. The system then records the activities of the extraction program so you can review them to identify time-consuming activities. Eliminate those long-running activities or substitute them with alternative program logic.
    3)Identify expensive SQL statements. If database run-time is high for extraction jobs, use transaction code ST05 — Performance Trace. On this screen, select ALEREMOTE user and then select SQL trace to record the SQL statements. Identify the time-consuming sections from the results. If the data-selection times are high on a particular SQL statement, index the DataSource tables to increase the performance of selection (see no. 6 below). While using ST05, make sure that no other extraction job is running with ALEREMOTE user.
    4)Balance loads by distributing processes onto different servers if possible. If your site uses more than one BW application server, distribute the extraction processes to different servers using transaction code SM59 — Maintain RFC Destination. Load balancing is possible only if the extraction program allows this option.
    5)Set optimum parameters for data-packet size. Packet size affects the number of data requests to the database. Set the data-packet size to optimum values for an efficient data-extraction mechanism. To find the optimum value, start with a packet size in the range of 50,000 to 100,000 and gradually increase it. At some point, you will reach the threshold at which increasing packet size further does not provide any performance increase. To set the packet size, use transaction code SBIW — BW IMG Menu — on the source system. To set the data load parameters for flat-file uploads, use transaction code RSCUSTV6 in BW.
    6)Build indexes on DataSource tables based on selection criteria. Indexing DataSource tables improves the extraction performance, because it reduces the read times of those tables.
    7)Execute collection jobs in parallel. Like the Business Content extractors, generic extractors have a number of collection jobs to retrieve relevant data from DataSource tables. Scheduling these collection jobs to run in parallel reduces the total extraction time, and they can be scheduled via transaction code SM37 in the source system.
    8)Break up your data selections for InfoPackages and schedule the portions to run in parallel. This parallel upload mechanism sends different portions of the data to BW at the same time, and as a result the total upload time is reduced. You can schedule InfoPackages in the Administrator Workbench.
    You can upload data from a data target (InfoCube or ODS) to another data target within the BW system. While uploading, you can schedule more than one InfoPackage with different selection options in each one. For example, fiscal year or fiscal year period can be used as selection options. Avoid using parallel uploads for high volumes of data if hardware resources are constrained. Each InfoPackage uses one background process (if scheduled to run in the background) or dialog process (if scheduled to run online) of the application server, and too many processes could overwhelm a slow server.
    9)Building secondary indexes on the tables for the selection fields optimizes these tables for reading, reducing extraction time. If your selection fields are not key fields on the table, primary indexes are not much help when accessing data. In this case it is better to create secondary indexes with the selection fields on the associated table, using the ABAP Dictionary, to improve selection performance.
    10)Analyze upload times to the PSA and identify long-running uploads. When you extract the data using PSA method, data is written into PSA tables in the BW system. If your data is on the order of tens of millions, consider partitioning these PSA tables for better performance, but pay attention to the partition sizes. Partitioning PSA tables improves data-load performance because it's faster to insert data into smaller database tables. Partitioning also provides increased performance for maintenance of PSA tables — for example, you can delete a portion of data faster. You can set the size of each partition in the PSA parameters screen, in transaction code SPRO or RSCUSTV6, so that BW creates a new partition automatically when a threshold value is reached.
    11)Debug any routines in the transfer and update rules and eliminate single selects from the routines (see the ABAP sketch after this list). Using single selects in custom ABAP routines for selecting data from database tables reduces performance considerably. It is better to use buffers and array operations. When you use buffers or array operations, the system reads data from the database tables and stores it in memory for manipulation, improving performance. If you do not use buffers or array operations, the whole reading process is performed on the database with many table accesses, and performance deteriorates. Also, extensive use of library transformations in the ABAP code reduces performance; since these transformations are not compiled in advance, they are carried out at run-time.
    12)Before uploading a high volume of transaction data into InfoCubes, activate the number-range buffer for dimension IDs. The number-range buffer is a parameter that identifies the number of sequential dimension IDs stored in the memory. If you increase the number range before high-volume data upload, you reduce the number of reads from the dimension tables and hence increase the upload performance. Do not forget to set the number-range values back to their original values after the upload. Use transaction code SNRO to maintain the number range buffer values for InfoCubes.
    13)Drop the indexes before uploading high-volume data into InfoCubes. Regenerate them after the upload. Indexes on InfoCubes are optimized for reading data from the InfoCubes. If the indexes exist during the upload, BW reads the indexes and tries to insert the records according to the indexes, resulting in poor upload performance. You can automate the dropping and regeneration of the indexes through InfoPackage scheduling. You can drop indexes in the Manage InfoCube screen in the Administrator Workbench.
    14)IDoc (intermediate document) archiving improves the extraction and loading performance and can be applied on both BW and R/3 systems. In addition to IDoc archiving, data archiving is available for InfoCubes and ODS objects.
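    To make tip 11 concrete, here is a minimal ABAP sketch, assuming a hypothetical lookup of customer names from KNA1 inside a transfer routine (all table, field and variable names are illustrative, not from this thread):

        * Illustrative declarations for the sketch below.
        TYPES: BEGIN OF ty_rec,
                 kunnr TYPE kna1-kunnr,
                 name1 TYPE kna1-name1,
               END OF ty_rec.
        DATA: lt_source TYPE STANDARD TABLE OF ty_rec,
              ls_source TYPE ty_rec,
              lt_kna1   TYPE STANDARD TABLE OF ty_rec,
              ls_kna1   TYPE ty_rec.

        * Slow: one SELECT SINGLE (database round trip) per source record.
        LOOP AT lt_source INTO ls_source.
          SELECT SINGLE name1 FROM kna1 INTO ls_source-name1
            WHERE kunnr = ls_source-kunnr.
          MODIFY lt_source FROM ls_source.
        ENDLOOP.

        * Faster: one array fetch into an internal table, then in-memory reads.
        IF lt_source IS NOT INITIAL.
          SELECT kunnr name1 FROM kna1 INTO TABLE lt_kna1
            FOR ALL ENTRIES IN lt_source
            WHERE kunnr = lt_source-kunnr.
          SORT lt_kna1 BY kunnr.
        ENDIF.
        LOOP AT lt_source INTO ls_source.
          READ TABLE lt_kna1 INTO ls_kna1
            WITH KEY kunnr = ls_source-kunnr BINARY SEARCH.
          IF sy-subrc = 0.
            ls_source-name1 = ls_kna1-name1.
            MODIFY lt_source FROM ls_source.
          ENDIF.
        ENDLOOP.

    The second form touches the database once instead of once per record, which is the buffering/array-operation idea in tip 11.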
    Hope it Helps
    Chetan
    @CP..

  • 3TB Time Capsule and Apple TV performance issues

    I have just upgraded my 1st gen AirPort Extreme to a 3TB 4th gen Time Capsule. I did it because of growing performance issues with my 1st and 2nd gen Apple TVs. I thought the dual band and improved power would solve the issue. I've tried all sorts of setups with the Time Capsule and I'm getting just as erratic performance. Has anyone got any advice on the best setup for optimum performance, please?

    To successfully stream either audio or video, the key to uninterrupted performance is adequate bandwidth between the media source and the playback device. Your current network depends primarily on wireless connections. The extended network alone loses some bandwidth to maintaining the extended network itself. Also, any non-"n" clients (including the 802.11b/g AXs) will "slow down" the "n" network so that they can communicate at their top speed.
    Of your 802.11n gear, only the 4th generation TC (& I believe the ATV2) are full implementations of the 802.11n standard. Both 802.11n AXns are draft "n." This too would have an overall effect on the available bandwidth.
    Again since your network is primarily wireless, there are at least four areas to go after to look for improvements:
    You must have enough available bandwidth between the media source and destination. You can use utilities like iperf or jperf to measure this. You will want to make data transfer (throughput) measurements both between the media source and destination and on the segments between them, to find out where the actual bottlenecks occur.
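    For example, with the classic iperf2 command line (host name illustrative), run the server on the playback end and the client on the media source:

        iperf -s                          (on the destination, e.g. near the ATV)
        iperf -c mediaserver.local -t 30  (on the media source; 30-second test)

    The reported throughput for each segment tells you where the path drops below what the media needs.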
    You must have a clear wireless radio channel. Use a utility like iStumbler to determine which other Wi-Fi networks operating in the vicinity may be competing with yours. Specifically, look for those with the strongest signal value and note which channels they are running on. Then change yours to be at least 3-5 channels away to prevent Wi-Fi interference.
    Know the media's bandwidth requirements. Look at the worst-case scenario of what your ATVs can support, which would be 720p 30 fps HD. This equates to a minimum of around 6-7 MBps (48-56 Mbps) of bandwidth between the video file source and the video player; actually, I would recommend 20+ MBps as the minimum. 802.11g (at its best) would offer 6.75 MBps (54 Mbps), so 802.11n would be required for anything beyond SD video.
    Re-encode the media if possible. If the data transfer peaks exceed the available bandwidth, the audio/video will experience drop-outs. You can always try re-encoding the media source with a tighter compression scheme. However, as you can imagine, the greater the compression, the poorer the audio/video quality.

  • JavaScript and PS CS6: Performance issue

    Hi folks
    I have to admit we are stuck in our development: we have written a Photoshop plugin using extensive JavaScript and Flash panels / ActionScript.
    The JavaScript would, e.g., select a given layer. When running the JavaScript in PS CS5 or 5.1 everything is smooth and snappy, but we've noticed that the same JavaScript running in PS CS6 takes up to 300% more time.
    Has anyone observed the same performance issues?
    Would it be faster to address the specific layers by their native layer IDs rather than their names?
    Why is there such a performance slowdown with the same JavaScript / ActionScript Flash panel between CS5 and CS6?
    We have already contacted [email protected] (we are a Solution Partner Silver) but they will not act if you are using your own JavaScript....
    You are our last hope :-( 
    I can send you some of the code but I don't want it to be publicly exposed here.
    Thanks in advance,
    Andrash

    Hi, since nobody bothers to answer, we might have to find out ourselves.
    Maybe it is caused by the way we address layers through the script?
    Which method are you using?
    Are you addressing the layers directly, or are you just cycling through an array of layers?
    Are you referencing the layers by their native ID or by their layer names?
    How do you trigger the script: by another script? From a Flash panel (Flex / ActionScript)?
    We are using Flash panels to start the script. The script simply calls a layer by its name (a numerical ID that we apply to the layer). The script looks up that specific layer and checks whether there is some content on it. We created a logger to see where the heavy amount of time is consumed, and it seems to be while jumping to the layer.
    In CS5 that was all a matter of a split second. Now in CS6 it takes a couple of seconds (about 4). We asked Adobe tech support for help, but they didn't even bother to look at the problem since we are working with self-written code (as every developer does.....?!?!). I wonder what tech support is good for, if not answering technical problems like this one.
    I hope that, with your answers, we might circle in on the cause of the problem!
    Cheers,
    Andreas
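    For anyone weighing the name-versus-ID question above, a minimal ExtendScript sketch (untested against CS6; the layer name is illustrative). Looking a layer up by name walks the DOM layer collection, while the Action Manager route targets a layer by its native ID in one call:

        // DOM route: find a layer by its name (walks the layer collection).
        var doc = app.activeDocument;
        doc.activeLayer = doc.artLayers.getByName("1024"); // illustrative name

        // Action Manager route: select a layer directly by its native ID.
        function selectLayerByID(id) {
            var ref = new ActionReference();
            ref.putIdentifier(charIDToTypeID("Lyr "), id);
            var desc = new ActionDescriptor();
            desc.putReference(charIDToTypeID("null"), ref);
            executeAction(charIDToTypeID("slct"), desc, DialogModes.NO);
        }

        // Read the currently targeted layer's native ID once, then reuse it.
        function activeLayerID() {
            var ref = new ActionReference();
            ref.putProperty(charIDToTypeID("Prpr"), charIDToTypeID("LyrI"));
            ref.putEnumerated(charIDToTypeID("Lyr "),
                              charIDToTypeID("Ordn"),
                              charIDToTypeID("Trgt"));
            return executeActionGet(ref).getInteger(charIDToTypeID("LyrI"));
        }

    Caching the IDs up front and selecting via the Action Manager avoids repeated name lookups, which is one plausible place a CS6 slowdown could bite.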

  • iTunes 7.3.2.6 and Windows Vista Performance Issues

    I recently purchased an 80GB iPod, and downloaded iTunes promptly, disregarding all the negative attention about how "iTunes crashes your computer" or "iTunes kills Windows". So far, the only real problem I've had with iTunes is the amazingly poor performance it has on my computer. When iTunes is minimized to the system tray, or compressed into Mini Player mode, it runs better, but not well.
    Running in full mode, the player bogs down my computer substantially. Knowing my laptop isn't a powerhouse, I expected a slight bit of slowdown, since some is evident with QuickTime.
    UPDATE: I've just stumbled across something. Apparently, when I wiggle my mouse over the iTunes window, iTunes runs about two or three times faster. The moment I stop moving the mouse cursor over the window, the speed returns to slow. It seems to only do this during the "Processing Album Artwork" phase. Bizarre? Very. Any explanations?
    I find this to be very annoying, especially when syncing. Syncing my library (which is about five or six gigabytes) takes an amazingly long time. Are there any workarounds, registry hacks, or anything else that can be done? Thanks!

    Do you have ReadyBoost enabled on the PC? If so, by way of experiment, try disabling that as per the instructions from the following document:
    Troubleshooting iTunes for Windows Vista video playback performance issues
    ... any help at all with the performance issues?

  • Can't access root share sometimes and some strange performance issues

    Hi :)
    I'm sometimes getting error 0x80070043, "The network name cannot be found", when accessing \\dc01 (the root), but I can access shares via \\dc01\share.
    When I get that error, the network drive hosted on that server (set via Group Policy) also fails to map, with this error:
    The user 'W:' preference item in the 'GPO Name' Group Policy Object did not apply because it failed with error code '0x80070008 Not enough storage is available to process this command.' This error was suppressed.
    The client is a Windows Server 2012 Remote Desktop server, and the file server is 2012 too, on a VMware host.
    Then I log off and back on, and no issues.
    Maybe related, and maybe where the problem is: when I have the issue above, and sometimes when I don't (the network drive is added fine), I have some strange performance issues on the share/network drive: Word, Excel and PDF files open very slowly. Office says
    "Contacting \\dc01\share..." for 20-30 seconds and then the file opens. Text files don't have that problem.
    I have a DC02 server, also 2012, with no issues like this.
    Any tips how to troubleshoot?

    Hi,
    Based on your description, you can access shares on the DC via \\dc01\share, but you cannot access them via \\dc01.
    Please check the Network Path in the Properties of the shared folders first. If the network path is \\dc01\share, you should access the shared folder by using \\dc01\share.
    And when you configure Drive Maps via domain Group Policy, you should also type the Network Path of the shared folders in the Location field.
    About Office files opening very slowly, there are some possible reasons:
     File validation can slow down the opening of files.
     The problem may be caused by the issue mentioned above.
    Here is a similar thread about slow opening of Office files from a network share:
    http://answers.microsoft.com/en-us/office/forum/office_2010-word/office-2010-slow-opening-files-from-network-share/d69e8942-b773-4aea-a6fc-8577def6b06a
    For File Validation, please refer to the article below,
    Office 2010 File Validation
    http://blogs.technet.com/b/office2010/archive/2009/12/16/office-2010-file-validation.aspx
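    If File Validation turns out to be the cause, it can be switched off per application through the registry; a hypothetical .reg sketch for Office 2010 ("14.0"; the same FileValidation key exists under Excel and PowerPoint), to be tested on one machine first:

        Windows Registry Editor Version 5.00

        ; Disable Office 2010 File Validation for Word only.
        [HKEY_CURRENT_USER\Software\Microsoft\Office\14.0\Word\Security\FileValidation]
        "EnableOnLoad"=dword:00000000

    Note this trades away a security check for speed, so treat it as a diagnostic step rather than a fix.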
    Best Regards,
    Tina

  • Yosemite and iCloud Drive Performance Issue

    While on my home network, which is connected to the Internet over 50 Mbps FiOS, I upgraded my MacBook Pro Retina from Mavericks to Yosemite.  The upgrade went smoothly.
    I first noticed that when I boot my MacBook, it boots more slowly, like Windows...
    When I took it into my office and used a slow Wi-Fi hotspot, the system crawled.  Safari, Google, and other apps barely responded.
    During the upgrade, I "blindly" selected to use iCloud Drive.   Foolish me!   I tried to turn it off, but then it warned me it would delete all documents on my Mac that are also stored in iCloud.
    How do I turn iCloud Drive completely off when I don't want to use it and KEEP all my local files???    Later, I'd like to turn it back on when I'm on my faster network and THEN sync everything.
    Also, I started turning off specific apps and rebooted.  Performance has improved.
    Regards,
    Nick

    But is there an option to retain copies of the files that have been stored on iCloud?   I don't want to delete them.   Later, when I connect back to iCloud Drive, if I have made changes to any of these files on my Mac and they are newer than the versions up on iCloud Drive, it will automatically "sync up".....
    Thanks again,
    Nick

  • Is the Vista and Flash CS3 performance issue solved?

    Performance in Flash CS3 is very poor for me. When I try to scroll up-down/left-right I get a slideshow. When I move the frame cursor to view the frames I also get a slideshow. I searched Google and applied the compatibility settings, and the only better thing I got was smoother scrolling of the code. Nothing more. Is this solved?
    I'm running a Core 2 Duo with 4 GB of 800 MHz RAM on Vista x64.
    Should I go back to Flash 8?

    I have an old AlBook G4 1.67 with 2 GB, and I actually think CS3 works great. There is a huge thread about this very issue going on at the moment, so you might want to take a look there. I never had Flash 8, but MX04 also ran just fine on my laptop.
    As far as player performance goes, it has sped up a lot with AS3 and Flash plugin 9 (even if you publish AS2 for Flash 9!). I don't know if there is true parity between the systems, but it is certainly faster.
    I've always thought it a good idea to develop on a Mac. If you can make your animations look smooth and good playing back in the Mac version of the player, then you know they will look good across the board. I can't tell you how many sites (thankfully fewer these days) where I go and everything just crawls and chugs along. It is usually due to overly high frame rates and excess transparency or animated raster images. (And I'm talking about the kind that even if it did move smoothly it wouldn't be worth it!)

  • Photoshop, smart objects and dynamic filters performance issues

    Hello,
    I am quite new to Photoshop, after several years with Capture NX 2 processing thousands of NEF and RW2 files (RAW from Nikon and Panasonic).
    I use Photoshop to read RAW pictures, convert them to a smart object, then apply several dynamic filters, mainly from the Nik Collection (Dfine, Color Efex Pro, Sharpener Pro), sometimes Topaz DeNoise. I do that with actions, so I can batch process many pictures.
    But sometimes I have to manually adjust some settings, and this is where I do not really understand the way Photoshop works. If I have to adjust, let's say, the last filter on the stack, Photoshop reprocesses all the filters below it, which can be very tedious as this takes a lot of time.
    Is there a way to tell Photoshop to keep all intermediate data in memory, so that if you have to adjust one of the last filters the processing starts immediately?
    Any help would be greatly appreciated.
    Frederic.

    Thank you Chris.
    I am surprised, as for years there have been a lot of discussions about Capture NX2, which was supposed to be slow. In fact, when using the same filters (+ Nik Color Efex), NX2 is much, much faster than Photoshop, and when you have to make an adjustment to any of the settings, you can do so immediately.
    Of course, Photoshop is completely open and NX2 totally closed (and now not supported anymore).
    But I really don't know how to adapt my workflow, except by buying the most powerful PC possible (I already have two which are quite powerful), and this will still be far from comfortable. I am used to manually tuning many, many pictures (adjusting noise reduction, sharpening, light, colors...), and this was quite fast with NX2.
    I am probably not on the correct forum for this, and I will try to investigate elsewhere.
    Anyhow, thank you for your answer.
    Frédéric

  • Performance issues with large XML (1-1.5 MB) files

    Hi,
    I'm using an XML Schema-based object-relational storage for my XML documents, which are typically 1-1.5 MB in size, and I am having serious performance issues with XPath queries.
    When I do an XPath query against an element of SQL type VARCHAR2, I get good performance. But when I do a similar XPath query against an element of SQL type collection (VARRAY of VARCHAR2), performance is very ordinary.
    I have also created indexes on extract() and analyzed my XMLType table and indexes, but I have no performance gain. I have also tried all the storage options available for collections, i.e. VARRAYs, nested tables, IOTs, LOBs, inline, etc., and all of these gave me the same bad performance.
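    For reference, the scalar case that does perform well is typically indexed along these lines; a hedged sketch with illustrative table and XPath names (a collection element cannot use such a function-based index directly, which matches the behaviour described here):

        -- Illustrative only: function-based index on a scalar XPath target
        -- of an XMLType table (Oracle 10g/11g era syntax).
        CREATE INDEX doc_title_idx ON xml_documents
          (extractValue(OBJECT_VALUE, '/Document/Title'));

    For VARRAY/nested-table targets the index would have to go on the underlying collection storage table instead, which is where object-relational storage gets awkward.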
    I even tried creating XMLType views based on XPath queries but the performance didn't improve much.
    I guess I'm running out of options, and patience as well. ;)
    I would appreciate any ideas/suggestions, please help.....
    Thanks;
    Ramakrishna Chinta

    Are you having similar symptoms as I am? http://discussions.apple.com/thread.jspa?threadID=2234792&tstart=0

  • Performance issues with 0CO_OM_WBS_1

    We use BW 3.5 & R/3 4.7 and encounter huge performance issues with 0CO_OM_WBS_1. We always have to do a full load involving approx. 15M records, even though there are on average only 100k new records since the previous load. This takes a long time.
    Is there a way to delta-enable this datasource?

    Hi,
    This DS is not delta-enabled, and you can only do a full load.  For a delta-enabled one, you need to use 0CO_OM_WBS_6.  This works like the other Financials extractors, as it has a safety delta (configurable, default 2 hours, in table BWOM_SETTINGS).
    What you could do is use WBS_6 as a delta and only run full loads of WBS_1 for shorter durations.
    As you must have an ODS for WBS_1 at the first stage, I would suggest doing a full load only for posting periods that are open.  This will reduce the data load.
    You may also look at creating your own generic DataSource with delta, if you are clear on the tables and logic used.
    cheers...

  • Performance issue in Oracle version 11.1.0.7

    Hi,
    In our production environment some cron jobs are scheduled to run every Saturday. One of these cron jobs is taking a long time to finish.
    Our previous Oracle version was 10.2.0.4; at that time the job took 36 hours to complete. After upgrading to 11gR1, it now takes 47, sometimes 50, hours to finish.
    I asked my production DBA to take an AWR report after the cron job finished.
    He has now sent the AWR report, but I don't know how to read it. Can you please help me read AWR reports? I need to give some recommendations to reduce the overall running time.
    I don't know how to attach the AWR report here.
    Please help me on this.
    Thanks
    Shravan Kumar

    Hi,
    Aren't you a DBA? You should probably seek the help of your DBA to read the AWR. Meanwhile, you should also get an AWR report from 10g, where this job was running previously, so that you can compare the two.
    Did you do any testing before the upgrade? You SHOULD have done thorough testing of your applications/reports before the upgrade and resolved any performance issues before the production upgrade.
    While you investigate, you can set optimizer_features_enable='10.2.0.4' for the cron job's session only, to check whether the job returns to its earlier run time:
    alter session set optimizer_features_enable='10.2.0.4';
    Salman
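    For a sketch of how to scope that test: the hint form, also available in 11g, downgrades a single statement instead of the whole session (the table name below is illustrative):

        -- Statement-level alternative to the ALTER SESSION above.
        SELECT /*+ OPTIMIZER_FEATURES_ENABLE('10.2.0.4') */ COUNT(*)
          FROM batch_orders;  -- illustrative table

    If the job speeds back up under either form, the regression is in the 11g optimizer plans, which you can then compare between the two AWR reports.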

  • Accounts Payable Rapid Mart Performance Issue

    We are currently running the SAP AP Rapid Mart 3.2 and facing a performance issue on the SAP side for the data flows DF_VendFinDocFactOpenDelta_SAP and DF_VendFinDocFactClearDelta_SAP.  These data flows pull from the same tables, one of them being BSEG, which is a rather large table.
    Has anyone found a means to pull data for these two data flows within a reasonable amount of time?  I'm looking for a two-hour or better ABAP run time for five years' worth of data.

    Hi Chuck,
    I wish I could say that I have a solution. I ran into the exact same problem a few months ago. Unfortunately we did not have enough time to come to a resolution of the issue, and then I moved on.
    I would also love to hear anyone's suggestions regarding this issue.
    BSEG is not a transparent table but a cluster table, so there are issues with how DI is able to pull data from BSEG.
    Thanks

  • Performance issues logging on to Workspace 11.1.2.1

    Hi All,
    We have a distributed install of 11.1.2.1 (2 HSS, 3 Planning, 3 HFM, 3 Workspace, 2 FDM servers) and are facing significant performance issues:
    - Logging on to the Workspace as an external user takes 5 min
    - Logging on to the AAS console as an external user takes over 2 min
    - Opening HBRs in AAS takes a couple of minutes
    - Logging on to HSS as an external user takes over a min
    Logging on as a native user takes around 30 seconds.
    We have disabled components we do not use under the Workspace server properties and applied Patch Set Exception (PSE) 13327628, which addressed the following performance issues:
    • 12913216 – Intermittent login error is observed while attempting to log into EPM System Workspace. Workspace displays the error message “You must supply a valid User Name and Password to log onto the system.” However, user can log in by clicking OK.
    • 13341789 – Poor login performance (delay of 3-5 minutes) is observed at the first user login if Workspace web application has been inactive for an hour. Subsequent login performance is not impacted.
    • 13388864 – In deployments where a firewall is configured to time out idle applications (for example, after 30 minutes) users can login once, but subsequent login times out.
    In Shared Services we have also set Evict Interval and Allowed Idle Connection Time to 5 mins.
    Is there anything else we could try to improve performance?
    Thanks for your help.
    Regards
    Seb

    If you have already deactivated the unused services in Workspace, that is weird. As far as I can see, you may be using an external directory for authentication; check the following:
    * Have you checked the response time from the external directory? Sometimes, for example in Active Directory, the user hierarchy is too complex to navigate. You can use an LDAP tester to see.
    * Are you using SSL? If yes, try without SSL.
    * For Workspace, start the unused services again and reactivate them in Workspace to see if it improves with all the apps up (this will help you narrow the debugging).
    * Go directly to the Java-based servers; for instance, on the Shared Services server go directly to port 28080 for interop instead of going through the HTTP server, and check whether you can log in quickly or not. This will also help you isolate the issue (whether or not it is related to Workspace).
    Hope this helps you narrow your search.
    Motor

  • Inventory Cube performance Issue

    Hi All,
    This is not something new, but an issue traditionally associated with this cube. I have customized 0IC_C03 for my requirements and am having serious performance issues. It has 0CALDAY, 0MATERIAL and 0PLANT as non-cumulative value parameters. I have added movement types (temporarily, for validation purposes). But my query always times out unless I specify the material, and there are close to 40K materials being maintained. The values all reconcile between ECC and BI after data loads. So we are thinking maybe the snapshot approach would help us resolve the performance issues.
    Has anybody implemented the snapshot approach for inventory? I know it turns this into a loading issue, but we think we could deal with that rather than with the performance issue when users execute the query.
    If anybody has done it, could you provide the steps?
    Thanks,
    Alex.

    Hi Jameson - Thanks for your response.
    We thought that would be the case. We have raised an SR with Oracle and they are investigating it. We have also sent an EIFF file to Oracle for investigation.
    Both DBs are in the same environment (AIX 6.1), and the DBAs have confirmed both DBs have the same system parameters.
    Even leaving aside the comparison with 11.2.0.1, for some reason 11.2.0.3 seems to be very slow. Even a simple cube (2 dimensions and 2 measures) with 9K records takes around 15 minutes to refresh, and it takes ages to view the data.
    We haven't generated the AWR report yet; we will see if we can do that.
    rgds,
    Prakash S
