Optimizer factors

Hello,
DB version 11gR2
When we join multiple tables, on what factors does the optimizer decide the driving table?
Thanks.

784585 wrote:
Hello,
DB version 11gR2
When we join multiple tables, on what factors does the optimizer decide the driving table?
Thanks.

That's really not a question that can be answered in absolute black-and-white terms. The number of values in the join predicates, the number of distinct values, the data in the tables, the availability of indexes, and so on would all be involved in deciding the plan. Essentially, the focus is on finding the best plan available at that point in time.
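If you want to see which driving table the optimizer actually picked, a minimal sketch (table names are hypothetical) is to inspect the execution plan; the first row source under the join operation is the driving table:

```sql
-- Hypothetical two-table join; emp and dept stand in for your own tables.
EXPLAIN PLAN FOR
SELECT e.ename, d.dname
  FROM emp e
  JOIN dept d ON d.deptno = e.deptno;

-- In the output, the first child of the HASH JOIN / NESTED LOOPS step
-- is the driving (outer) table.
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```

To experiment, the LEADING (or ORDERED) hint lets you force a particular driving table and compare the cost the optimizer reports.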
Aman....

Similar Messages

  • Mac OS X Lion to OS X Mountain Lion w/ erasing all data

    Hi to all, I have a MacBook Pro with Mac OS X Lion 10.7.4, and when Mountain Lion is introduced I want to ERASE all information, including all partitions, from my HDD and install Mountain Lion on a fully cleared Mac. And there is a question. I created a Time Machine backup, and I think that this backup includes my previous (Lion) operating system. But on my NEW system (Mountain Lion) I want only to restore apps and photos from iPhoto. How can I restore from a Time Machine backup only some information, like apps and photos?
    Thanks in advance.

    TimeMachine doesn't have a highly selective restore option; it's really an all-or-nothing approach.
    I assume you want to do this for hard drive performance and privacy reasons; if you have a SSD there is no performance benefit, so only privacy remains.
    That's because if you choose to upgrade to 10.8, 10.7 will simply be removed and 10.8 put in its place, likely on the slower part of the hard drive; Applications and user accounts will be left alone.
    So if you wish to proceed, I advise having two external backups of your personal data. One on TimeMachine or bootable clone as well as your personal data backed up on a external drive. Once you have those then proceed.
    Most commonly used backup methods
    1: Hold command r keys while booting, use Disk Utility to Erase (hard drive use secure erase option #1, middle selection) the Macintosh HD partition.
    2: Reinstall 10.7 from Apple's servers using your AppleID and password.
    3: Reboot and setup using the same user name (so your iTunes playlists etc will work) different password is fine.
    4: Log into App Store and update to 10.8 and reboot, setup and don't use Filevault.
    5: Install 10.8 compatible programs as much as possible from original sources BEFORE returning user files.
    http://roaringapps.com/apps:table
    6: Choose to connect the external storage drive with only your files and return into same named account(s) (preferred)
    7: Keep your hard drive (SSD no need) below 50% filled for optimal performance, not more than 80% filled for any storage device.
    This method will give you a clean install of 10.8 that is optimized and will keep performing well for a long time if followed as instructed.
    If you choose to restore from TimeMachine or Filevault, your drive performance will most likely suffer because you're going to lose the optimization factor; you want your applications (and OS X) to be near the top of the hard drive for performance.

  • Free iWeb SEO Tool to Help Improve Search Engine Rankings

    Hi,
    I have just posted a free iWeb SEO Tool that will let you add and edit all your meta tags and header information, as well as a few other important search engine optimization factors. Here is a brief description of what it can do;
    iWeb SEO Tool is the only software that makes it easy to get your iWeb built website ready for search engines.
    Since Apple has often neglected key SEO strategies in their iWeb software, it is difficult for many iWeb based websites to rank high in search engines. This is why we have created this free utility to help you properly optimize your website.
    Features include;
    1) Easily add meta tags such as description and keywords
    2) Edit your title tags for each page
    3) Add robot rules and language meta tags
    4) Add alternative text for images
    5) All settings are saved in a private database so next time you publish your site you can take all the saved SEO copy and apply to your new site
    6) Edit sites locally or directly on your iDisk
    I am looking for feedback on this tool to see how useful it can be. If you find any problems, please let me know. You can download this tool by creating a free beta testers account at;
    http://www.trybeta.com/home/
    Please let me know if this is useful for anyone. It is completely free for all iWeb users.
    *I may receive some form of compensation, financial or otherwise, from my recommendation or link*
    <Edited by Moderator>

    1. To publish to .Mac just mount your iDisk and go to Web/Sites/ and you will see your old site there. Delete that site and just copy your new site to the location.
    2. Just press the load from iDisk button in the toolbar and enter your username and password. Your sites from your iDisk will load in iWeb SEO Tool.

  • WAAS Rjct Resources and conditions for asymmetric traffic

    Hello,
    I have a customer network of 30 WAE's connected to an MPLS cloud. Interception method is inline for all WAE, and WCCP for NM-WAE.
    Of those WAE's (running 4.1.1c), I have 3 that are connected in Datacenters, as such they are expected to receive most of the traffic and have been dimensioned as OE7341 appliances.
    It is my impression that this network's statistics are not as good as they should be: some of the optimization factors are at 1.2 or 1.3X and most are simply 1.0X.
    My impression is that there is a lot of passthrough traffic, and although some of it is configured as such in the application policies, when I check pass-through statistics on several WAE's on the network I see that Rjct Resources is very high on a particular WAE in a Datacenter - one that has a 7341 box (12 GB RAM!) - and I also get non-zero counters on other boxes.
    Is there any way to see, at a given moment, how many connections are going through the box, so that I can tell whether I'm really facing a box capacity issue? The initial show commands I ran didn't suggest there were that many connections running through the box, but when I checked live I saw about 65 Rjct Resources connections at a given time.
    Can anybody shed some light on this particular statistic?
    sghmansin--17w#
    sh statistics pass-through
    Outbound
    PT Client:
    Bytes 4081578138946
    Packets 11567591648
    PT Server:
    Bytes 8833662508567
    Packets 13797553929
    Active Completed
    Overall 0 0
    No Peer 7 141742513
    Rjct Capabilities 0 0
    Rjct Resources 65 273669865
    App Config 6 25610854
    Global Config 0 0
    Asymmetric 1 1597096
    In Progress 97 453847516
    Intermediate 0 0
    Overload 0 0
    Internal Error 0 478
    App Override 0 0
    Server Black List 0 150553
    AD Version Mismatch 0 0
    sghmansin--17w#
    One other observation is that pass-through due to asymmetric traffic is also very frequent. Given that the customer is mostly using inline interception, even if a connection comes in through one WAN/LAN interface pair and exits through another, the optimization should still be done.
    The datacenter designs are dual-homed active/passive, and traffic goes through the same (and only) WAE box. The customer assures me that there is no asymmetric traffic.
    Can anybody explain to me how the decision is made to mark a given flow as asymmetric (and then pass it through)?
    Thanks
    Gustavo Novais

    Hi Dan, Thank you for your reply.
    That show was just from one of the boxes, in this case on the Datacenter.
    For instance, I also see asymmetric connections on NM-WAE's configured for WCCP. But the number is not that substantial, which makes me believe the interception is well configured (unfortunately the routers are managed by a third party, and I am yet to get access to their config).
    All boxes on this network have Enterprise License activated.
    How can I check on a given moment all connections count on the box? is there any MIB oid pollable to check that?
    Do passthrough connections count to the overall limit?
    While doing the diagnostics on the WAAS devices there was indeed a WAAS device marked as having asymmetric traffic, but many others have PT Asym connections and have not been marked as such by the diagnostics.
    How does the diagnostic work? Is it an instantaneous diagnostic (i.e. does it check the connection table at time T to see if any of the current connections is PT Asym)?
    If on the far end of a connection we do have an asymmetric network topology, does the near end also mark the same connection as PT Asym, or will it simply say No Peer?
    Thanks

  • What should I expect in quality? Make video better? optimize?

    Hello
    I'm using iChat 3.1 for the first time. I have tried using both an AIM account and a .Mac account. The software works fine for me. My questions are:
    1) What should I expect in terms of video quality? Is there something out there I can compare it to? I must say it is not the resolution I was expecting. I guess mine looks good as a 3" x 3" window, but stretching it to 12" x 12" is pretty bad. Is this what it's supposed to look like? Is there a way to optimize it? I read somewhere that the bad video was a function of AIM, and that if you go through this "other way" it will be better (can't find that info I read!).
    2) Alternatively, could I theoretically network my computer and my mom's and then have a better video session using another video access method? (I don't know which one or how.)
    3) I have also tried using my wireless connection, and the video is much worse and the audio is delayed, so I switched back to Ethernet. Am I doing something wrong?
    At this point I would only be videoing with this one other computer.
    thanks for any help
    iMac G5   Mac OS X (10.4.7)   using iSight on one and a DV camera on the other

    Hello Gina,
    Welcome to Apple Discussions. The video quality in iChat AV depends on many factors:
    1. Bandwidth limitations of your ISP
    2. CPU speed (the faster the better)
    3. The size of the actual chat window (the bigger, the lower the quality) Full screen ichatav can be marginal at best.
    4. Lighting
    5. Unusually fast motion or panning will affect your video quality
    6. Camera quality ... Generally Mini DV cameras capture great quality. In fact they often get significantly better picture quality than most web cams. Firewire cameras are generally far better than most usb devices.
    7. iChat video preference settings will affect video quality (try setting video preferences to "None").
    8. The Connection Doctor is a useful tool within iChat AV. However, having it constantly open during a video chat can actually slow the application down a bit and hence may also affect video quality. Check it quickly and then close it after you see all is working as expected. Do not leave it constantly open during a video chat or while attempting to start one.
    9. Having several applications open while video chatting can also affect the quality of your video chats.
    10. Hosting a 3 way video conference will run significantly slower and at lower frame rate than video chatting with just one other party at a time.
    And according to Ian Bickerstaff, the following also affects quality:
    LIGHTS:
    In system preferences ( from the Blue Apple top left of your screen there )
    System preferences > Personal > Desktop&Screensaver > Desktop > Solid Colours.
    Select the pure white option ( Its invisible - Just click to the right of the last colour option )
    That increases the light thrown onto your face. It also prevents any odd discolouration that can come with using darker backgrounds.
    Use a local Spot light to light just you in a larger room. Very atmospheric
    Use the video preview before you start a video chat to make sure that your image is "up to scratch". Play around with the amount of light and where it is positioned.
    SOUND:
    Is the volume set correctly? Setting it at 75% seems to be the best option.
    CAMERA:
    If you wear glasses, try to get the camera in a position where the screen does not block out your eyes.
    Look at the camera: it sort of implies "making eye contact", instead of looking at the screen. Placing the video window as near as you can to the camera helps (thanks Sillydog).
    If it's a family chat, place the camera so that all people can be seen, and move the microphone (if you have one) so all people can be heard.
    ACTION: Not just yet....
    Are you sitting comfortably? Is it going to be a long chat? Do you have a drink to hand? Do you know what you want to say?
    If there is a phone in the room - Do you want to turn it off to avoid interruption?
    SDMacuser

  • Reg Query Optimization - doubts..

    Hi Experts,
    This is related to the blog by Mr Prakash Darji regarding "Query Optimization", posted on Jan 26, 2006. In it, generating the report is suggested as a way to optimize the query.
    I tried this, but I am not sure I am analyzing it correctly.
    I collected stats data before and after generating the report. But how can I be sure that this is helping me? Has anyone tried this?
    What should I look for in the stats data: duration?
    But duration would not be an absolute parameter, as there is the factor of "Wait Time, User", so duration may depend on that.
    Please help me in this.
    Thanks
    Gaurav
    Message was edited by: Gaurav

    Any ideas Experts?

  • Subquery Factoring and Materialized Hint

    WITH t AS
            (SELECT MAX (lDATE) tidate
               FROM rate_Master
              WHERE     Code = 'G'
                    AND orno > 0
                    AND TYPE = 'L'
                    AND lDATE <= ':entereddate')
    SELECT DECODE (:p1,  'B', RateB,  'S', RateS,  Rate)
      FROM rate_Master, t
    WHERE     Code = 'G'
           AND orno > 0
           AND TYPE = 'L'
           AND NVL (lDATE, SYSDATE) = tidate;

    In the given example the subquery returns just one row because of the aggregate function MAX. Will making this into a WITH clause be of any benefit? Also, I presume that subquery factoring is really useful only when the subquery placed in the WITH clause returns more rows. Is my interpretation right?
    Secondly, is adding the /*+ materialize */ hint to a WITH query mandatory, or will the optimizer do the temp table transformation by itself? In my example I am forced to give the hint in the query. Please discuss and help.
    Thanks in advance.

    ramarun wrote:
    WITH t AS
    (SELECT MAX (lDATE) tidate
    FROM rate_Master
    WHERE     Code = 'G'
    AND orno > 0
    AND TYPE = 'L'
    AND lDATE <= ':entereddate')
    SELECT DECODE (:p1,  'B', RateB,  'S', RateS,  Rate)
    FROM rate_Master, t
    WHERE     Code = 'G'
    AND orno > 0
    AND TYPE = 'L'
    AND NVL (lDATE, SYSDATE) = tidate;

    In the given example the subquery returns just one row because of the aggregate function MAX. Will making this into a WITH clause be of any benefit? Also, I presume that subquery factoring is really useful only when the subquery placed in the WITH clause returns more rows. Is my interpretation right?

    I am not aware of any performance benefit due to use of the WITH clause. IMO, it eases the job of writing the same subquery multiple times in a query.
    The solution you adopted has to hit the table twice and hence does not look very performant. I would advise you to opt for analytic functions (like the suggestion I provided in another thread). If the solution does not yield correct results, then provide a script that we can use to replicate it (create table, sample insert statements, and the expected output).
    select decode(:p1, 'B', RateB, 'S', RateS, Rate)
      from (
            select RateB, RateS, Rate, NVL(lDATE, SYSDATE) ldate,
                   dense_rank() over (order by case
                                                 when NVL(lDATE, SYSDATE) <= ':entereddate'
                                                 then NVL(lDATE, SYSDATE)
                                                 else to_date('01/01/1970', 'DD/MM/YYYY')
                                               end DESC) rn
              from rate_Master
             where Code = 'G'
               and orno > 0
               and type = 'L'
           ) a
     where a.rn = 1;
    Secondly adding the /*+ Materialize */ hint to a With query is mandatory or the optimizer by itself will do it and make a temp table transformation. In my example i am forced to give the hint in the query. Please discuss and help
    Usage of hints is only for debugging purposes; they are not meant to be used in production code. It is when you have to ascertain why the CBO chooses a plan you do not expect that you use hints to force your plan, find the cost, and analyze it. Hence, I do not support the idea of hints in production code.
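    To illustrate the second point using the query from this thread, the materialize hint goes inside the factored subquery. Note that the hint is undocumented, and as a rule of thumb the optimizer materializes a WITH subquery on its own only when it is referenced more than once, which is why a single-reference case like this one may need the hint to get the temp table transformation:

    ```sql
    -- Sketch only: the undocumented materialize hint forces the factored
    -- subquery into a global temporary table; look for the
    -- TEMP TABLE TRANSFORMATION step in the plan to confirm it took effect.
    WITH t AS
        (SELECT /*+ materialize */ MAX (lDATE) tidate
           FROM rate_Master
          WHERE Code = 'G'
            AND orno > 0
            AND TYPE = 'L'
            AND lDATE <= ':entereddate')
    SELECT DECODE (:p1, 'B', RateB, 'S', RateS, Rate)
      FROM rate_Master, t
     WHERE Code = 'G'
       AND orno > 0
       AND TYPE = 'L'
       AND NVL (lDATE, SYSDATE) = t.tidate;
    ```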

  • What is the best way to Optimize a SQL query : call a function or do a join?

    Hi, I want to know the best way to optimize a SQL query: call a function inside the SELECT statement, or do a simple join?

    Hi,
    If you're even considering a join, then it will probably be faster.  As Justin said, it depends on lots of factors.
    A user-defined function is only necessary when you can't figure out how to do something in pure SQL, using joins and built-in functions.
    You might choose to have a user-defined function even though you could get the same result with a join.  That is, you realize that the function is slow, but you believe that the convenience of using a function is more important than better performance in that particular case.
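    As an illustration of the difference, here is the same hypothetical lookup done both ways (the table, column and function names are made up for this sketch):

    ```sql
    -- 1) Scalar function in the SELECT list: get_dept_name() is called once
    --    per result row, with a SQL-to-PL/SQL context switch each time.
    SELECT e.emp_id, get_dept_name(e.dept_id) AS dept_name
      FROM employees e;

    -- 2) Plain join: the optimizer can pick a hash join and scan each table
    --    once, which usually scales much better on large row counts.
    SELECT e.emp_id, d.dept_name
      FROM employees e
      JOIN departments d ON d.dept_id = e.dept_id;
    ```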

  • SQL Developer Optimization

    I have a few questions about SQL Developer that I was wondering if someone can help with:
    1. Does SQL Developer have the sql optimization feature that is present in Toad?
    Within TOAD, there is a facility that will help the user to optimize the sql and I was wondering if this was present in SQL Developer
    2. When using the compare facility for showing the difference between 2 tables, does SQL Developer show the data that is different?
    Thanks

    K,
    There are a couple of projects in the works for this. Here's a quick excerpt of what we are planning. There is no confirmed release train for this at the minute, though. This is provided for information only.
    SQL Code Advisor*
    By leveraging database features like Automated Workload Repository (AWR) and Active Session History (ASH), the potential exists to evaluate any SQL statement within a package or worksheet that has been executed and flag any statements exceeding some performance threshold. The developer will then immediately know if the SQL in question merits any tuning effort or should be left "as is".
    The goal of SQL Code Advisor is to provide real-time feedback to developer within an editor or worksheet on factors which may impact performance. Without going into great detail in this overview, here are some:
    * Connected to database instance with missing system statistics
    * SQL references tables and indexes with missing or stale statistics, or indexes in an invalid state
    * Population and cardinality estimates of referenced tables
    * Type, compression status, cache status, degree of parallelism for referenced tables
    * Explain plan indicates Full Table Scan performed on a large table
    * Explicit datatype conversions of columns in predicates, preventing use of available indexes
    SQL Tuning Advisor*
    By leveraging existing database APIs in the DBMS_SQLTUNE and DBMS_ADVISOR packages, with appropriate UI enhancements, this SQL Tuning Advisor extension will allow a developer to generate a report to warn when SQL performance may be impaired by:
    * stale optimizer statistics
    * missing indexes
    * improper coding practices.
    These APIs are able to perform more in-depth analyses of SQL statements than the optimizer. As a consequence, in addition to offering advice on specific environment and coding issues, it can produce a SQL Profile. The Profile contains additional statistics which help the optimizer find a more efficient execution plan. The original execution plan can be presented side-by-side with the enhanced SQL Profile-assisted execution plan for comparison. The developer has control over which, if any, of the recommendations to accept and deploy.
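    Until such a feature ships, the underlying API can already be driven by hand. Here is a minimal sketch of the DBMS_SQLTUNE flow described above (the SQL_ID and task name are placeholders):

    ```sql
    DECLARE
      l_task VARCHAR2(64);
    BEGIN
      -- Create and run a tuning task for a statement in the cursor cache.
      l_task := DBMS_SQLTUNE.CREATE_TUNING_TASK(
                  sql_id    => 'abcd1234efgh5',   -- placeholder SQL_ID
                  task_name => 'demo_tuning_task');
      DBMS_SQLTUNE.EXECUTE_TUNING_TASK(task_name => l_task);
    END;
    /
    -- Read the findings: stale statistics, missing indexes, SQL Profiles.
    SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK('demo_tuning_task') FROM dual;
    ```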

  • Inventory optimization

    Hi all,
              I need your inputs for the following scenario.
    Critical materials are kept in warehouses as safety stocks. Each location has its own safety stock for its own spare parts. That leads to duplicate stocking of similar materials at different locations. To optimize the inventories and reduce their costs, it is suggested to consolidate similar materials at certain locations and allow all other locations to use it. By doing so, huge cost savings can be realized.
    Delays in delivery might have harmful effects on business. Assessment tool is required to find the best location to stock the consolidated safety stock considering cost of transportation, delivery time and criticalness of the location.
    It is required to have a system in which locations are prioritized by some cost factor.
    Another required functionality is to enable the decision of where to stock the safety stock, considering the transportation cost, the delay penalty (risk) and the delivery time. Comparison of different stocking options is also required, showing the cost and effect of each option. How can we use the available APO functionality to enable reporting, simulation and comparison?
    regards
    sankar.

    Here's a link to some details about SAP Enterprise Inventory and Service-Level Optimization (SAP EIS) on the SAP website: Inventory Management Software | Integrated Sales & Operations Planning | SAP
    There is a lot of material available explaining the mathematics, which looks at lead times, inventory costs, uncertainty, customer service level targets and more, to recommend optimal safety stock through your multi-echelon supply chain. EIS integrates with other SAP supply chain applications and has been leading the field for many years. I am an SAP Solutions Consultant implementing SAP EIS at several SAP customers now, and there are many references that have been successfully saving inventory dollars for years with this application.

  • SQL Server 2008R2 SP2 Query optimizer memory leak ?

    It looks like we are facing a SQL Server 2008 R2 query optimizer memory leak.
    We have below version of SQL Server
    Microsoft SQL Server 2008 R2 (SP2) - 10.50.4000.0 (X64)
     Jun 28 2012 08:36:30
     Copyright (c) Microsoft Corporation
     Standard Edition (64-bit) on Windows NT 6.1 <X64> (Build 7601: Service Pack 1)
    The instance has Maximum memory set to 20 GB.
    After executing a huge query (2,277 kB of SQL generated by IBM SPSS Clementine) with tons of CASE expressions, a lot of AND/OR conditions in the WHERE and CASE statements, and multiple subqueries, the server stops responding with an out-of-memory error in the internal pool,
    and the query optimizer has allocated all the memory.
    From Management Data Warehouse we can see that the query was executed at
    7.11.2014 22:40:57
    Then at 01:22:48 we receive FAIL_PAGE_ALLOCATION 1:
    2014-11-08 01:22:48.70 spid75       Failed allocate pages: FAIL_PAGE_ALLOCATION 1
    And then tons of below errors
    2014-11-08 01:24:02.22 spid87      There is insufficient system memory in resource pool 'internal' to run this query.
    2014-11-08 01:24:02.22 Server      Error: 17300, Severity: 16, State: 1. (Params:). The error is printed in terse mode because there was error during formatting. Tracing, ETW, notifications etc are skipped.
    2014-11-08 01:24:02.22 Server      Error: 17300, Severity: 16, State: 1. (Params:). The error is printed in terse mode because there was error during formatting. Tracing, ETW, notifications etc are skipped.
    2014-11-08 01:24:02.30 Server      Error: 17312, Severity: 16, State: 1.
    2014-11-08 01:24:02.30 Server      SQL Server is terminating a system or background task Fulltext Host Controller Timer Task due to errors in starting up the task (setup state 1).
    2014-11-08 01:24:02.22 spid74      Error: 701, Severity: 17, State: 123.
    2014-11-08 01:24:02.22 spid74      There is insufficient system memory in resource pool 'internal' to run this query.
    2014-11-08 01:24:13.22 Server      Error: 17312, Severity: 16, State: 1. (Params:). The error is printed in terse mode because there was error during formatting. Tracing, ETW, notifications etc are skipped.
    2014-11-08 01:24:13.22 spid87      Error: 701, Severity: 17, State: 123.
    2014-11-08 01:24:13.22 spid87      There is insufficient system memory in resource pool 'internal' to run this query.
    2014-11-08 01:24:13.22 spid63      Error: 701, Severity: 17, State: 130.
    2014-11-08 01:24:13.22 spid63      There is insufficient system memory in resource pool 'internal' to run this query.
    2014-11-08 01:24:13.22 spid57      Error: 701, Severity: 17, State: 123.
    2014-11-08 01:24:13.22 spid57      There is insufficient system memory in resource pool 'internal' to run this query.
    2014-11-08 01:24:13.22 Server      Error: 17300, Severity: 16, State: 1. (Params:). The error is printed in terse mode because there was error during formatting. Tracing, ETW, notifications etc are skipped.
    2014-11-08 01:24:18.26 Server      Error: 17300, Severity: 16, State: 1. (Params:). The error is printed in terse mode because there was error during formatting. Tracing, ETW, notifications etc are skipped.
    2014-11-08 01:24:24.43 spid81      Error: 701, Severity: 17, State: 123.
    2014-11-08 01:24:24.43 spid81      There is insufficient system memory in resource pool 'internal' to run this query.
    2014-11-08 01:24:18.25 Server      Error: 18052, Severity: -1, State: 0. (Params:). The error is printed in terse mode because there was error during formatting. Tracing, ETW, notifications etc are skipped.
    2014-11-08 01:24:18.25 Server      BRKR TASK: Operating system error Exception 0x1 encountered.
    2014-11-08 01:24:30.11 Server      Error: 17300, Severity: 16, State: 1. (Params:). The error is printed in terse mode because there was error during formatting. Tracing, ETW, notifications etc are skipped.
    2014-11-08 01:24:30.11 Server      Error: 17300, Severity: 16, State: 1. (Params:). The error is printed in terse mode because there was error during formatting. Tracing, ETW, notifications etc are skipped.
    2014-11-08 01:24:35.18 spid57      Error: 701, Severity: 17, State: 131.
    2014-11-08 01:24:35.18 spid57      There is insufficient system memory in resource pool 'internal' to run this query.
    2014-11-08 01:24:35.18 spid71      Error: 701, Severity: 17, State: 193.
    2014-11-08 01:24:35.18 spid71      There is insufficient system memory in resource pool 'internal' to run this query.
    2014-11-08 01:24:35.18 Server      Error: 17312, Severity: 16, State: 1. (Params:). The error is printed in terse mode because there was error during formatting. Tracing, ETW, notifications etc are skipped.
    2014-11-08 01:24:35.41 Server      Error: 17312, Severity: 16, State: 1.
    2014-11-08 01:24:35.41 Server      SQL Server is terminating a system or background task SSB Task due to errors in starting up the task (setup state 1).
    2014-11-08 01:24:35.71 Server      Error: 17053, Severity: 16, State: 1.
    2014-11-08 01:24:35.71 Server      BRKR TASK: Operating system error Exception 0x1 encountered.
    2014-11-08 01:24:35.71 spid73      Error: 701, Severity: 17, State: 123.
    2014-11-08 01:24:35.71 spid73      There is insufficient system memory in resource pool 'internal' to run this query.
    2014-11-08 01:24:46.30 Server      Error: 17312, Severity: 16, State: 1. (Params:). The error is printed in terse mode because there was error during formatting. Tracing, ETW, notifications etc are skipped.
    2014-11-08 01:24:51.31 Server      Error: 17053, Severity: 16, State: 1. (Params:). The error is printed in terse mode because there was error during formatting. Tracing, ETW, notifications etc are skipped.
    2014-11-08 01:24:51.31 Server      Error: 17300, Severity: 16, State: 1. (Params:). The error is printed in terse mode because there was error during formatting. Tracing, ETW, notifications etc are skipped.
    2014-11-08 01:24:51.31 Logon       Error: 18052, Severity: -1, State: 0. (Params:). The error is printed in terse mode because there was error during formatting. Tracing, ETW, notifications etc are skipped.
    The last error message comes half an hour after the initial out of memory, at 2014-11-08 01:52:54.03. Then the instance is completely shut down.
    From the memory information in the error log we can see that all the memory is consumed by the QUERY_OPTIMIZER:
    Buffer Pool                                   Value
    Committed                                   2621440
    Target                                      2621440
    Database                                     130726
    Dirty                                          3682
    In IO                                             0
    Latched                                           1
    Free                                            346
    Stolen                                      2490368
    Reserved                                          0
    Visible                                     2621440
    Stolen Potential                                  0
    Limiting Factor                                  17
    Last OOM Factor                                   0
    Last OS Error                                     0
    Page Life Expectancy                             28
    2014-11-08 01:22:48.90 spid75     
    Process/System Counts                         Value
    Available Physical Memory                29361627136
    Available Virtual Memory                 8691842715648
    Available Paging File                    51593969664
    Working Set                               628932608
    Percent of Committed Memory in WS               100
    Page Faults                                48955000
    System physical memory high                       1
    System physical memory low                        0
    Process physical memory low                       1
    Process virtual memory low                        0
    MEMORYCLERK_SQLOPTIMIZER (node 1)                KB
    VM Reserved                                       0
    VM Committed                                      0
    Locked Pages Allocated                            0
    SM Reserved                                       0
    SM Committed                                      0
    SinglePage Allocator                       19419712
    MultiPage Allocator                             128
    Memory Manager                                   KB
    VM Reserved                               100960236
    VM Committed                                 277664
    Locked Pages Allocated                     21483904
    Reserved Memory                                1024
    Reserved Memory In Use                            0
    On the other side, MDW reports that MEMORYCLERK_SQLOPTIMIZER keeps increasing from the execution of the query up to the point of out of memory, but the average value is 54.7 MB during that period, as can be seen on the attached graph.
    We have encountered this issue already two times (every time the critical query is executed).

    Hi,
    This does seem to me like a kind of memory leak, and it is indeed the SQL optimizer that leaked so much memory from the buffer pool that there was none left to allocate for a new page.
    MEMORYCLERK_SQLOPTIMIZER (node 1)                KB
    VM Reserved                                       0
    VM Committed                                      0
    Locked Pages Allocated                            0
    SM Reserved                                       0
    SM Committed                                      0
    SinglePage Allocator                       19419712
    MultiPage Allocator                             128
    Can you post the complete DBCC MEMORYSTATUS output that was written to the errorlog? Is this the only message in the errorlog, or are there more messages before and after it?
    select (SUM(single_pages_kb)*1024)/8192 as total_stolen_pages, type
    from sys.dm_os_memory_clerks
    group by type
    order by total_stolen_pages desc
    and
    select sum(pages_allocated_count * page_size_in_bytes)/1024 as total_kb, type
    from sys.dm_os_memory_objects
    group by type
    If you can post the output of the above two queries, together with the DBCC MEMORYSTATUS output, on some shared drive and share the location with us here, I will try to find out what is leaking memory.
    You could also apply SQL Server 2008 R2 SP3 and see whether the issue subsides, but I am not sure whether this is fixed there or whether it is actually a bug.
    Please mark this reply as answer if it solved your issue or vote as helpful if it helped so that other forum members can benefit from it
    My Technet Wiki Article
    MVP

  • How to maintain Proportional Factors in SNP (not in DP)

    Hi Gurus!
    I want to maintain proportional factors in SNP but do not know how. I read the SAP help and know that we have to set it up in the following steps:
    - In the Planning Area:
             1. Set the key figure calculation type for key figure KF1 to P or I
             2. Set the disaggregation key figure for KF1 to "APODPDANT"
    - When creating the Planning Book/Data View:
             3. Check "Manual Proportion Maintenance"
             4. Create a new data view that contains only key figure APODPDANT
    But I find that we can do step 3 only when we create a planning book with a planning area based on a planning object structure for Demand Planning (like 9ADPBAS).
    With a planning area based on an SNP planning object structure (like 9ASNPBAS), I could not check "Manual Proportion Maintenance" when creating the planning book, although I did setup steps 1 and 2.
    The questions are:
                     1.     Can we maintain proportional factors in SNP? If yes, how?
                     2.     Can we create a new key figure and treat it as the proportional factor (like key figure APODPDANT)? Would there be any problem with that?
    Thanks very much for your help.

    Hi Mr. M Manimaran
    This is my reason:
    We have:
    - one DC
    - and some locations that receive money from that DC.
    The storage capacity of each location is different and has its own limit. Each location also needs an amount of money that differs from the others.
    There is one thing to note here: money is a special kind of good. Let's say location L1 needs 1 million USD; the DC can supply it in several ways. For example:
    -     First choice: 100 USD notes: 90%; 10 USD notes: 10% (so the percentage of each denomination here is the proportional factor)
    -     Second choice: 100 USD notes: 10%; 10 USD notes: 90%
    So the number of notes in the first choice is smaller than in the second, although the total value of the two choices is the same. Location L1 can therefore store the money of the first choice; with the second choice, it could not.
    Here, we want to calculate and store proportional factors in SNP so that we can run the SNP Optimizer or heuristic to enable the DC to supply money that satisfies each location's requirement while respecting its storage capacity.
    Please show me the way!
    Thanks very much!
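To make the denomination example above concrete, here is a small sketch (plain Python, purely illustrative, not SAP code) that counts the physical notes each split requires:

```python
def note_count(total_value, split):
    """Number of physical notes needed when `split` maps
    denomination -> fraction of total value supplied in that denomination."""
    return sum(int(total_value * fraction) // denom
               for denom, fraction in split.items())

total = 1_000_000  # location L1 needs 1 million USD

first_choice = {100: 0.90, 10: 0.10}   # 90% of value in $100 notes
second_choice = {100: 0.10, 10: 0.90}  # 90% of value in $10 notes

print(note_count(total, first_choice))   # 9000 + 10000 = 19000 notes
print(note_count(total, second_choice))  # 1000 + 90000 = 91000 notes
```

This confirms the point made in the post: both splits deliver the same value, but the second requires almost five times as many notes to store.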
    Edited by: xuanduyen on Aug 12, 2011 9:41 AM

  • What are the Optimization Techniques?

    What are the optimization techniques? Can anyone send a sample program that uses good optimization techniques?
    Phani

    Hi Phani Kumar Durusoju,
    ABAP/4 programs can take a very long time to execute, and can make other processes have to wait before executing. Here are
    some tips to speed up your programs and reduce the load your programs put on the system:
    Use the GET RUN TIME command to help evaluate performance. It's hard to know whether that optimization technique REALLY helps
    unless you test it out. Using this tool can help you know what is effective, under what kinds of conditions. The GET RUN TIME
    has problems under multiple CPUs, so you should use it to test small pieces of your program, rather than the whole program.
    Generally, try to reduce I/O first, then memory, then CPU activity. I/O operations that read/write to hard disk are always the
    most expensive operations. Memory, if not controlled, may have to be written to swap space on the hard disk, which therefore
    increases your I/O read/writes to disk. CPU activity can be reduced by careful program design, and by using commands such as
    SUM (SQL) and COLLECT (ABAP/4).
    Avoid 'SELECT *', especially in tables that have a lot of fields. Use SELECT A B C INTO instead, so that fields are only read
    if they are used. This can make a very big difference.
    Field-groups can be useful for multi-level sorting and displaying. However, they write their data to the system's paging
    space, rather than to memory (internal tables use memory). For this reason, field-groups are only appropriate for processing
    large lists (e.g. over 50,000 records). If you have large lists, you should work with the systems administrator to decide the
    maximum amount of RAM your program should use, and from that, calculate how much space your lists will use. Then you can
    decide whether to write the data to memory or swap space. See the Fieldgroups ABAP example.
    Use as many table keys as possible in the WHERE part of your select statements.
    Whenever possible, design the program to access a relatively constant number of records (for instance, if you only access the
    transactions for one month, then there probably will be a reasonable range, like 1200-1800, for the number of transactions
    inputted within that month). Then use a SELECT A B C INTO TABLE ITAB statement.
    Get a good idea of how many records you will be accessing. Log into your productive system, and use SE80 -> Dictionary Objects
    (press Edit), enter the table name you want to see, and press Display. Go To Utilities -> Table Contents to query the table
    contents and see the number of records. This is extremely useful in optimizing a program's memory allocation.
    Try to make the user interface such that the program gradually unfolds more information to the user, rather than giving a huge
    list of information all at once to the user.
    Declare your internal tables using OCCURS NUM_RECS, where NUM_RECS is the number of records you expect to be accessing. If the
    number of records exceeds NUM_RECS, the data will be kept in swap space (not memory).
    Use SELECT A B C INTO TABLE ITAB whenever possible. This will read all of the records into the itab in one operation, rather
    than repeated operations that result from a SELECT A B C INTO ITAB... ENDSELECT statement. Make sure that ITAB is declared
    with OCCURS NUM_RECS, where NUM_RECS is the number of records you expect to access.
    If the number of records you are reading is constantly growing, you may be able to break it into chunks of relatively constant
    size. For instance, if you have to read all records from 1991 to present, you can break it into quarters, and read all records
    one quarter at a time. This will reduce I/O operations. Test extensively with GET RUN TIME when using this method.
    Know how to use the 'collect' command. It can be very efficient.
    Use the SELECT SINGLE command whenever possible.
    Many tables contain totals fields (such as monthly expense totals). Use these to avoid wasting resources by recalculating a
    total that has already been calculated and stored.
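    The "read in chunks of relatively constant size" tip above can be sketched generically. The snippet below is plain Python, used only to illustrate the pattern of splitting a date range into quarters; in ABAP you would issue one SELECT ... INTO TABLE per quarter and time each chunk with GET RUN TIME:

```python
from datetime import date, timedelta

def quarter_ranges(start_year, end_year):
    """Yield (from_date, to_date) pairs, one per calendar quarter, so a
    large read can be broken into chunks of roughly constant size."""
    for year in range(start_year, end_year + 1):
        for first_month in (1, 4, 7, 10):
            from_date = date(year, first_month, 1)
            if first_month == 10:
                to_date = date(year, 12, 31)
            else:
                # last day of the quarter = day before the next quarter starts
                to_date = date(year, first_month + 3, 1) - timedelta(days=1)
            yield from_date, to_date

# One fetch per quarter instead of one huge read:
for frm, to in quarter_ranges(1991, 1992):
    pass  # a hypothetical fetch_records(frm, to) database call would go here
```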
    These are good websites that will help you:
    Performance tuning
    http://www.sapbrainsonline.com/ARTICLES/TECHNICAL/optimization/optimization.html
    http://www.geocities.com/SiliconValley/Grid/4858/sap/ABAPCode/Optimize.htm
    http://www.abapmaster.com/cgi-bin/SAP-ABAP-performance-tuning.cgi
    http://abapcode.blogspot.com/2007/05/abap-performance-factor.html
    cheers!
    gyanaraj
    ****Please reward points if you find this helpful

  • LR4.3 w/ NIK filters - Image previews keep reloading Constantly until Catalog Optimization

    Just bought the NIK filters in Google's big sale. The problem is that every time I add a NIK filter - so far I've only tried Color Efex Pro - via the External Editor in LR4.3, save, and then go back into LR4.3, the image preview keeps reloading while I try to make further adjustments to the image. It is impossible to work with, because when I use the Adjust brush the image disappears for a second as I move the brush around to add my mask.
    Interestingly, if I exit LR4 and relaunch the catalog, that same photo with the NIK filter has no problem anymore. The catalog is set to "optimize" every time I close LR. So why, when NIK filters are added, can the catalog not work well with them until the optimization process? Do I have a setting wrong in preferences that affects this?
    I called NIK (Google) twice. They are useless. A so-called LR NIK expert named "Jessica" is not the expert she presents. She only knows how to have you re-install the software LOL. Then transfers you to oblivion of Google MUSAK. I don't think that NIK LR experts exist there or they are Product Managers who are not on support. Hoping some LR guru might know what is happening with my previews when those filters are applied.
    Win 7 64 bit environment  16 GB RAM (but same thing happens on my Mac Laptop)

    Thanks Rob. I opened a different LR4 catalog on the PC that has only two photos in it, and the problem did not occur. So it does seem to be an issue with catalogs that have a lot of raw Canon 5D Mark III images. The image "flickering/reloading" also occurs on the Mac laptop running a different catalog, but that one also has a ton of Canon raw images. The PC catalog (the one with the problem) also resides on an EHD, so I'm not sure if that is a factor or not - the other one, with only 2 photos total, is on my hard drive.
    I guess I could try moving the entire catalog (and raw images) to my HD and test again. If it is still a problem, then it is either the size of the catalog or some setting in that catalog. Does the NIK software have a recommended maximum size threshold? I realize it converts the raw to an RGB format when it opens the image - the last image started as a 25 MB raw and ended up at 160 MB after I added a few of your filters stacked and then did some touch-ups in LR. Do you recommend any compression? Can I set that as a preference for Color Efex in the External Editor settings?
    Joan Sides
    Email links removed by moderator
    Message was edited by: Geoff the kiwi as placing email addresses on an open public forum is an invitation to hackers and spammers.

  • Optimize JPG image size reduction by reduced compression quality vs. reduced pixels?

    I have many images of slides scanned at high resolution (4800 DPI, maximum pixels 5214x3592). Although I will be saving these as lossless TIFs, I also wish to make JPGs from them that are just under 5 MB in file size. Aside from cropping, I know I can reduce JPG file size through some combination of lower-quality JPG compression and a smaller image size. My question is: what is theoretically or practically better - achieving this mostly by reducing the image's total pixels, or by lowering the JPG compression quality? Thank you
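    One practical way to hit a hard cap like the 5 MB limit is to binary-search the JPEG quality setting. Below is a minimal sketch in plain Python; `encoded_size` is a stand-in for re-encoding the image at a given quality and measuring the bytes (here faked with a toy linear model purely so the example runs), since size grows monotonically with the quality setting:

```python
def best_quality_under(limit_bytes, encoded_size, lo=1, hi=100):
    """Highest integer quality in [lo, hi] whose encoded size fits the limit.
    Assumes encoded_size(q) grows (weakly) with q. Returns None if even
    the lowest quality is too large."""
    best = None
    while lo <= hi:
        mid = (lo + hi) // 2
        if encoded_size(mid) <= limit_bytes:
            best = mid          # fits: try a higher quality
            lo = mid + 1
        else:
            hi = mid - 1        # too big: try a lower quality
    return best

# Toy stand-in: pretend size grows linearly with quality (60 KB per step).
fake_size = lambda q: q * 60_000
print(best_quality_under(5_000_000, fake_size))  # 83  (83 * 60000 = 4.98 MB)
```

    In practice you would re-encode the full-resolution image a handful of times to find the best quality that fits; only if the result is unacceptably low would you then start reducing pixels instead.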

    Thank you Doug.  The comments on extensive uniform blue sky vs. marked variation in color seem well taken, I'll keep this method of choosing in mind.  My goal is to create a JPG family photo archive of the highest quality images that I can make for future use by non-technical descendants (thus it will supplement the TIF archive that holds the best quality versions of the same images but that may not be usable to novices).  As I cannot anticipate exactly how the JPGs will be used, I just want them to be the best possible, while still being of a size that can be uploaded to, say, Costco (5 MB size limit) for making enlargements. 
    In general, I am often left curious as to how exactly Photoshop carries out its algorithms and how different factors influence the outcome. So often, one reads "just try different techniques and see what looks best." But I am always left wondering: what is the theory behind this, and has it been systematically studied, worked out, and published? In so many disciplines, such as medicine, the methods of optimization have been evaluated, systematized, and fully described. I have not yet explored what may be found in technical journals, but I'm sure much of this good stuff must be available somewhere. It would be nice to have a "How Things Work" that actually explains what Photoshop is doing under the hood.
    Thanks again.
