Optimization Working

Hi,
As a student of operations research, I have come to understand Oracle's cost-based and rule-based (RBO) optimization as shown below (although I have also read several Oracle books/documents):
1     5     6
8     9     9
1     2     7
Here the RULE corresponds to the North-West Corner Method, according to which the cost would be
= 1 + 5 + 9 + 9 + 7 = 31
Taking the same example and applying the cost-based method, whose objective is least cost, the result would instead be = 1 + 8 + 1 = 10.
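For reference, the least-cost objective here is the standard transportation-problem formulation (the supplies and demands are not specified above, so this is just the general form):
\min \sum_{i}\sum_{j} c_{ij}\,x_{ij}
\quad \text{s.t.} \quad
\sum_{j} x_{ij} = s_i \;\forall i, \qquad
\sum_{i} x_{ij} = d_j \;\forall j, \qquad
x_{ij} \ge 0
where the c_ij are the cell costs in the matrix above and s_i, d_j are the supplies and demands. The North-West Corner rule fills cells in a fixed positional order regardless of c_ij, while the least-cost method always picks the cheapest remaining cell, which is the rough analogue of rule-based versus cost-based optimization.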
Is this a good way to understand rule-based and cost-based optimization?
Adith

I'm not sure what the North West Method is, and I'm not sure what the matrix you've posted means, so I'm not sure how to respond to that part of your question...
Understanding that the RBO has been deprecated and is no longer supported in 10g:
- the RBO uses a series of rules of thumb to figure out how to execute a query (i.e. if an index exists and can be used, use it)
- the CBO attempts to determine the most efficient access path by comparing the I/O and CPU required for each possible execution path. This requires information about the size and distribution of data (object statistics) and about the system's capabilities (system statistics), and it considers many, many more paths than the RBO.
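As a rough illustration of the statistics the CBO relies on, here is a minimal sketch (the SCOTT.EMP table and the sample query are placeholders used only for illustration, not anything from this thread) of gathering object statistics and then asking the CBO for its chosen plan:

-- Gather object statistics so the CBO knows the size and distribution of the data
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => 'SCOTT',
    tabname => 'EMP',
    cascade => TRUE);   -- also gathers statistics on the table's indexes
END;
/

-- Ask the CBO which access path it would pick and display the plan
EXPLAIN PLAN FOR
  SELECT * FROM scott.emp WHERE deptno = 10;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);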
Justin

Similar Messages

  • How can I learn the details of how the iBA video optimizer works?

Although I appreciate the work done by the video optimizer in iBooks Author, results vary depending upon the characteristics of the video I submit to it. Sometimes this is acceptable, sometimes not. I came to this conclusion by adding different versions of the same video clip to the iBA Media Widget, then deconstructing the resulting *.ibooks file and analyzing the video files it contains using MediaInfo. I think that if I could better understand how the optimizer works I could get more acceptable results.
    So where can I learn more about  how this optimizer works?

In the PO screen (ME21N), the delivery item tab contains the over-delivery & under-delivery %. While creating the PO you have to set it; goods receipt will then be done based on that.

  • Optimization work table.

    Hi,
I use some work tables in my PL/SQL code.
To optimize the load, is it possible to drop the indexes while inserting data into these tables,
and then, once they hold data, recreate the indexes to optimize the queries against them?
Do you think this is a good method?
Can you give me an example of code to drop and create the indexes?
    Thanks,

    >
I use some work tables in my PL/SQL code.
To optimize the load, is it possible to drop the indexes while inserting data into these tables,
and then, once they hold data, recreate the indexes to optimize the queries against them?
Do you think this is a good method?
Can you give me an example of code to drop and create the indexes?
    >
You don't need to drop and recreate the indexes. You can mark them unusable, do the load, and then rebuild the indexes.
That technique can be effective, especially when large numbers of records are involved.
There are examples in the documentation. See "Altering Indexes" in the DBA guide:
    http://docs.oracle.com/cd/E11882_01/server.112/e25494/indexes004.htm#CIHJCEAJ
    >
    Making an Index Unusable
    When you make an index unusable, it is ignored by the optimizer and is not maintained by DML. When you make one partition of a partitioned index unusable, the other partitions of the index remain valid.
    You must rebuild or drop and re-create an unusable index or index partition before using it.
    The following procedure illustrates how to make an index and index partition unusable, and how to query the object status.
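To answer the original request for example code, here is a minimal sketch of that unusable/rebuild pattern (the index name WORK_TAB_IDX is a placeholder, not from the thread):

-- Mark the index unusable so it is not maintained during the bulk insert
ALTER INDEX work_tab_idx UNUSABLE;

-- Optionally make sure the session skips unusable indexes (TRUE is the default in 10g and later)
ALTER SESSION SET skip_unusable_indexes = TRUE;

-- ... perform the bulk INSERTs into the work table here ...

-- Rebuild the index once the load is finished
ALTER INDEX work_tab_idx REBUILD;   -- add NOLOGGING and/or PARALLEL if appropriate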

  • Does CIFS optimization work without WAFS

    Hi everyone,
    Now I am testing WAE in my lab with the following environment.
    Windows 2003 Server
    |
    Cisco 2821 with NME-WAE-502-K9 as WAE Core
    |
    |
    NIST Net WAN Emulation Software
    |
    |
    Cisco 2821 with NME-WAE-502-K9 as WAE Edge
    |
    PC
    Cisco 2821:
    12.4(9)T3
    enabled CEF, WCCP v2 with TCP promiscuous mode service (WCCP services 61 and 62)
    NME-WAE-502-K9:
    4.0.3
    enabled WCCP v2
    NIST Net parameters:
    Bandwidth: 1.5M
    Latency: 300ms
    Packet loss: 0.5%
    Test Tools/Methods on PC:
    Cisco WAFS Benchmark Tool for Microsoft Office Application downloaded from CCO
    for CIFS/SMB application
    TCP Replay utility downloaded from CCO for HTTP application
    FTP for FTP application
In this environment I get good results for HTTP (using the TCP Replay utility) and FTP; that is, the speed for downloading files over HTTP and FTP is improved across the WAN (NIST Net).
However, I cannot see any effect of the WAE on CIFS/SMB traffic when using the WAFS Benchmark Tool.
I have not configured WAFS (Wide Area File Services), because I cannot configure it due to lack of DRAM on the WAE Core: 2 GB of DRAM is needed for the WAE Core to work as a WAFS core cluster, but both WAEs have just 1 GB of DRAM.
So I am testing how efficiently the WAE works with the WAFS Benchmark Tool without WAFS.
However, the test results do not show any WAE benefit for the WAFS Benchmark Tool. That is, the speed is not noticeably different in the following three cases.
Native WAN:
Remove the WCCP v2 configuration from the Cisco 2821,
then the PC accesses the Windows 2003 server using the WAFS Benchmark Tool.
Cache Miss:
Enable WCCP v2 with the TCP promiscuous mode service (WCCP services 61 and 62) on the Cisco 2821 and the WAE, and restart the WAE to clear any cached information,
then the PC accesses the Windows 2003 server (for the first time) using the WAFS Benchmark Tool.
Cache Hit:
The PC has already accessed the Windows server in the past,
and accesses the Windows 2003 server (for the second time) using the WAFS Benchmark Tool.
And unfortunately, whether or not the "wccp cifs-cache" command is configured on both WAEs, the result is the same; the speed does not change much.
I am confused about optimization for CIFS/SMB traffic.
    According to "Default Application Policies" in the Cisco Wide Area Application
    Services Configuration Guide,
    http://www.cisco.com/en/US/customer/docs/app_ntwk_services/waas/waas/v403/configuration/guide/apx_apps.html
    WAAS, by default, handles CIFS/SMB traffic with LZ, TFO and DRE.
    Classifier: CIFS-non-wafs
    WAAS Action: LZ+TFO+DRE
    Destination Ports: 139, 445
So I was thinking the WAE would optimize CIFS/SMB traffic even with no WAFS configured...
Do I need to configure WAFS, in other words must WAFS be configured to see the effect of the WAE on CIFS/SMB traffic?
or
Do I need to configure an additional command to get the WAE to work for CIFS/SMB traffic without WAFS?
or
Is my idea wrong? That is, is the WAFS Benchmark Tool not an appropriate tool in this environment (no WAFS)?
    Your assistance would be appreciated.
    Best regards,

    === Before Test 2 is executed ===
    WAE-SiteA#sh tfo conn sum
    WAE-SiteA#sh statistics dre
    Cache:
    Status: Usable, Oldest Data (age): 0s
    Total usable disk size: 47527 MB, Used: 0.00%
    Hash table RAM size: 189 MB, Used: 0.00%
    === While Test 1 is being executed ===
    WAE-SiteA#sh tfo conn sum
    Optimized Connection List
    Policy summary order: Our's, Peer's, Negotiated, Applied
    F: Full optimization, D: DRE only, L: LZ Compression, T: TCP Optimization
    Local-IP:Port Remote-IP:Port ConId PeerId Policy
    192.168.2.2:1031 192.168.1.1:445 2 00:16:9d:38:8a:5d F,F,F,F
WAE-SiteA#sh statistics dre
    Cache:
    Status: Usable, Oldest Data (age): 4m14s
    Total usable disk size: 47527 MB, Used: 0.00%
    Hash table RAM size: 189 MB, Used: 0.00%
    Connections: Total (cumulative): 1 Active: 1
    Encode:
    Overall: msg: 426, in: 2120 KB, out: 1558 KB, ratio: 26.47%
    DRE: msg: 426, in: 2120 KB, out: 1843 KB, ratio: 13.04%
    DRE Bypass: msg: 0, in: 0 B
    LZ: msg: 374, in: 749 KB, out: 464 KB, ratio: 38.00%
    LZ Bypass: msg: 52, in: 1094 KB
    Avg latency: 0.808 ms
    Message size distribution:
    0-1K=69% 1K-5K=8% 5K-15K=7% 15K-25K=6% 25K-40K=5% >40K=1%
    Decode:
    Overall: msg: 397, in: 101158 B, out: 2196 KB, ratio: 95.50%
    DRE: msg: 396, in: 162 KB, out: 2195 KB, ratio: 92.61%
    DRE Bypass: msg: 1, in: 89 B
    LZ: msg: 338, in: 67992 B, out: 130 KB, ratio: 48.95%
    LZ Bypass: msg: 59, in: 33166 B
    Avg latency: 0.045 ms
    Message size distribution:
    0-1K=65% 1K-5K=9% 5K-15K=8% 15K-25K=4% 25K-40K=11% >40K=0%
    === While Test 3 is being executed ===
    WAE-SiteA#sh tfo conn sum
    Optimized Connection List
    Policy summary order: Our's, Peer's, Negotiated, Applied
    F: Full optimization, D: DRE only, L: LZ Compression, T: TCP Optimization
    Local-IP:Port Remote-IP:Port ConId PeerId Policy
    192.168.2.2:1031 192.168.1.1:445 2 00:16:9d:38:8a:5d F,F,F,F
WAE-SiteA#sh statistics dre
    Cache:
    Status: Usable, Oldest Data (age): 12m5s
    Total usable disk size: 47527 MB, Used: 0.01%
    Hash table RAM size: 189 MB, Used: 0.00%
    Connections: Total (cumulative): 1 Active: 1
    Encode:
    Overall: msg: 862, in: 4223 KB, out: 1616 KB, ratio: 61.72%
    DRE: msg: 862, in: 4223 KB, out: 1948 KB, ratio: 53.88%
    DRE Bypass: msg: 0, in: 0 B
    LZ: msg: 726, in: 822 KB, out: 491 KB, ratio: 40.27%
    LZ Bypass: msg: 136, in: 1125 KB
    Avg latency: 0.584 ms
    Message size distribution:
    0-1K=70% 1K-5K=8% 5K-15K=7% 15K-25K=6% 25K-40K=6% >40K=1%
    Decode:
    Overall: msg: 803, in: 167 KB, out: 4392 KB, ratio: 96.18%
    DRE: msg: 802, in: 269 KB, out: 4392 KB, ratio: 93.86%
    DRE Bypass: msg: 1, in: 89 B
    LZ: msg: 674, in: 100 KB, out: 203 KB, ratio: 50.29%
    LZ Bypass: msg: 129, in: 68670 B
    Avg latency: 0.024 ms
    Message size distribution:
    0-1K=66% 1K-5K=9% 5K-15K=7% 15K-25K=4% 25K-40K=11% >40K=0%
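One hedged way to read these counters (my own interpretation, not something stated in the thread): the reported ratio appears to be
\text{ratio} \approx 1 - \frac{\text{bytes out}}{\text{bytes in}}, \qquad 1 - \tfrac{1558}{2120} \approx 26.5\%, \qquad 1 - \tfrac{1616}{4223} \approx 61.7\%,
which matches the 26.47% and 61.72% encode figures above. Looking at the increments between the two snapshots, encode input grew by 4223 - 2120 = 2103 KB while encode output grew by only 1616 - 1558 = 58 KB, so the second (cache-hit) pass sent almost nothing new across the WAN. If that reading is right, TFO/DRE/LZ is in fact optimizing the CIFS traffic even without WAFS, and the lack of a visible speed change would have to come from something other than raw bandwidth.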

  • How is Photo Optimization working on your IPad 3

I have 21,000-odd JPG photos on my iPad 3 64GB. Although I have managed to overcome most of the syncing problems without reducing photo size, they are taking up about 47GB of space (39GB showing in iTunes as photos + 8GB in "other"), which matches what is in the iPod Photo Cache. That cache isn't being used for any other Apple device, and the number of files in the photo cache equals the number on the iPad.
The typical picture is 3264x2448 pixels and averages 2.7MB on the computer and 2.3MB on the iPad, a reduction of only 15%. When I download great-looking hi-res wallpaper photos they take up only around 0.5MB each, and despite the fantastic screen I can't see that my photos look any sharper.
I know the additional compression from iTunes will depend on the photo, but things should average out over large numbers of photos. For those who have managed to sync a reasonable number of photos, how does this compare with your compression?
One reason I ask is that early on in my sync efforts I was achieving a much larger reduction for up to 16,000 photos, i.e. only about 1MB per photo; the size seems to have increased after reinstalling iOS, iTunes, etc. as part of the efforts to get the syncing to work.
    I have spoken to a number of people at Apple Support but no one knew what the optimization would/should do.
    Thanks

    Hello there unknown223,
    It sounds like you would like to connect to a 4g network with your iPad 3. A 4g connection for your iPad depends on your provider. This article outlines which providers have a 4g connection for iPads:
    iPad (3rd generation) Wi-Fi + Cellular: About 4G LTE connectivity
    http://support.apple.com/kb/ht5205
    iPad (3rd generation) Wi-Fi + Cellular supports 4G LTE connectivity when used with the following carriers:
Canada: Bell, Rogers, Telus
United States: AT&T, Verizon
    iPad (3rd generation) Wi-Fi + Cellular does not support LTE in any other countries, carriers, or networks.
    iPad (3rd generation) Wi-Fi + Cellular is compatible with all carriers that support iPad Wi-Fi + 3G, with additional support for fast 3G networks, including HSPA, HSPA+, and DC-HSDPA.
    Note: Data plan is sold separately. 4G LTE and fast 3G coverage is not available in all areas and varies by carrier. See your carrier for details.
    Thank you for using Apple Support Communities.
    Take care,
    Sterling

  • Generic GRE not working (ver 4.1.3.55)

    Hi everybody.
    I'm testing in Lab a configuration for one customer.
    It's a basic environment with :
    DATA CENTER (wccp)
    1 WAEs 7341 and 1 Cat6506 routers
    BRANCH (inline)
    1 WAE 574.
Optimization works with l2-redirect and GRE return in the DATA CENTER!!
It does not work with egress-method generic-gre and interception-method wccp.
This is the problem that I can see with "show wccp gre" on the 7341:
"Packets received on a disabled service: 667790".
I read some manuals but...
I don't understand... services 61 and 62 work!!
    So any idea ?
    Thanks a lot to everybody
    Vittorio

Hi, and thanks for taking an interest.
Here is the output you asked for:
    WAE-DC-01#sh egress-methods
    Intercept method : WCCP
    TCP Promiscuous 61 :
    WCCP negotiated return method : WCCP GRE
    Egress Method Egress Method
    Destination Configured Used
    any Generic GRE Generic GRE
    TCP Promiscuous 62 :
    WCCP negotiated return method : WCCP GRE
    Egress Method Egress Method
    Destination Configured Used
    any Generic GRE Generic GRE
    Intercept method : Generic L2
    Egress Method Egress Method
    Destination Configured Used
    any not configurable IP Forwarding
And here is another useful one:
    WAE-DC-01#sh wccp gre
    Transparent GRE packets received: 52082
    Transparent non-GRE packets received: 0
    Transparent non-GRE non-WCCP packets received: 0
    Total packets accepted: 0
    Invalid packets received: 0
    Packets received with invalid service: 0
    Packets received on a disabled service: 50118
    Packets received too small: 1964
    Packets dropped due to zero TTL: 0
    Packets dropped due to bad buckets: 0
    Packets dropped due to no redirect address: 0
    Packets dropped due to loopback redirect: 0
    Pass-through pkts dropped on assignment update:0
    Connections bypassed due to load: 0
    Packets sent back to router: 50118
    GRE packets sent to router (not bypass): 0
    Packets sent to another WAE: 0
    GRE fragments redirected: 28770
    GRE encapsulated fragments received: 0
    Packets failed encapsulated reassembly: 0
    Packets failed GRE encapsulation: 0
    Packets dropped due to invalid fwd method: 0
    Packets dropped due to insufficient memory: 0
    Packets bypassed, no pending connection: 0
    Packets due to clean wccp shutdown: 0
    Packets bypassed due to bypass-list lookup: 0
    Conditionally Accepted connections: 0
    Conditionally Bypassed connections: 0
    L2 Bypass packets destined for loopback: 0
    Packets w/WCCP GRE received too small: 0
    Packets dropped due to received on loopback: 0
    Packets dropped due to IP access-list deny: 0
    Packets fragmented for bypass: 28770
    Packets fragmented for egress: 0
    Packet pullups needed: 57543
    Packets dropped due to no route found: 0
    Any new idea ?
    Thanks
    Vittorio

  • Criticism of new data "optimization" techniques

    On February 3, Verizon announced two new network practices in an attempt to reduce bandwidth usage:
    Throttling data speeds for the top 5% of new users, and
    Employing "optimization" techniques on certain file types for all users, in certain parts of the 3G network.
    These were two separate changes, and this post only talks about (2), the "optimization" techniques.
    I would like to criticize the optimization techniques as being harmful to Internet users and contrary to long-standing principles of how the Internet operates. This optimization can lead to web sites appearing to contain incorrect data, web sites appearing to be out-of-date, and depending on how optimization is implemented, privacy and security issues. I'll explain below.
    I hope Verizon will consider reversing this decision, or if not, making some changes to reduce the scope and breadth of the optimization.
    First, I'd like to thank Verizon for posting an in-depth technical description of how optimization works, available here:
    http://support.vzw.com/terms/network_optimization.html
    This transparency helps increase confidence that Verizon is trying to make the best decisions for their users. However, I believe they have erred in those decisions.
    Optimization Contrary to Internet Operating Principles
    The Internet has long been built around the idea that two distant servers exchange data with each other by transmitting "packets" using the IP protocol. The headers of these packets contain the information required such that all the Internet routers located between these servers can deliver the packets. One of the Internet's operating principles is that when two servers set up an IP connection, the routers connecting them do not modify the data. They may route the data differently, modify the headers in some cases (like network address translation), or possibly, in some cases, even block the data--but not modify it.
    What these new optimization techniques do is intercept a device's connection to a distant server, inspect the data, determine that the device is downloading a file, and in some cases, to attempt to reduce bandwidth used, modify the packets so that when the file is received by the device, it is a file containing different (smaller) contents than what the web server sent.
I believe that modifying the contents of the file in this manner should be off-limits to any Internet service provider, regardless of whether they are trying to save bandwidth or achieve other goals. An Internet service provider should be a common carrier, billing for service and bandwidth used but not interfering in any way with the content served by a web server, the size or content of the files transferred, or the choices of how much data their customers are willing to use and pay for by way of the sites they choose to visit.
    Old or Incorrect Data
    Verizon's description of the optimization techniques explains that many common file types, including web pages, text files, images, and video files will be cached. This means that when a device visits a web page, it may be loading the cached copy from Verizon. This means that the user may be viewing a copy of the web site that is older than what the web site is currently serving. Additionally, if some files in the cache for a single web site were added at different times, such as CSS files or images relative to some of the web pages containing them, this may even cause web pages to render incorrectly.
    It is true that many users already experience caching because many devices and nearly all computer browsers have a personal cache. However, the user is in control of the browser cache. The user can click "reload" in the browser to bypass it, clear the cache at any time, or change the caching options. There is no indication with Verizon's optimization that the user will have any control over caching, or even knowledge as to whether a particular web page is cached.
    Potential Security and Privacy Violations
    The nature of the security or privacy violations that might occur depends on how carefully Verizon has implemented optimization. But as an example of the risk, look at what happened with Google Web Accelerator. Google Web Accelerator was a now-discontinued product that users installed as add-ons to their browsers which used centralized caches stored on Google's servers to speed up web requests. However, some users found that on web sites where they logged on, they were served personalized pages that actually belonged to different users, containing their private data. This is because Google's caching technology was initially unable to distinguish between public and private pages, and different people received pages that were cached by other users. This can be fixed or prevented with very careful engineering, but caching adds a big level of risk that these type of privacy problems will occur.
However, Verizon's explanation of how video caching works suggests that these problems with mixed-up files will indeed occur. Verizon says that their caching technology works by examining "the first few frames (8 KB) of the video". This means that if multiple videos are identical at the start, the cache will treat them the same, even if they differ later on in the file.
Although it may not happen very frequently, this could mean that if two videos are encoded in the same manner except for edits later in the file, some users may be viewing a completely different version of the video than what the web server transmitted. This could be true even if the differing videos are stored on completely separate servers, as Verizon's explanation states that the cataloguing process caches videos identically based on the 8KB analysis even if they are from different URLs.
    Questions about Tethering and Different Devices
    Verizon's explanation says near the beginning that "The form and extent of optimization [...] does not depend on [...] the user's device". However, elsewhere in the document, the explanation states that transcoding may be done differently depending on the capabilities of the user's device. Perhaps a clarification in this document is needed.
    The reason this is an important issue is that many people may wish to know if optimization happens when tethering on a laptop. I think some people would view optimization very differently depending on whether it is done on a phone, or on a laptop. For example, many people, for, say, business reasons, may have a strong requirement that a file they downloaded from a server is really the exact file they think they downloaded, and not one that has been optimized by Verizon.
    What I would Like Verizon To Do
    With respect to Verizon's need to limit bandwidth usage or provide incentives for users to limit their bandwidth usage, I hope Verizon reverses the decision to deploy optimization and chooses alternate, less intrusive means to achieve their bandwidth goals.
    However, if Verizon still decides to proceed with optimization, I hope they will consider:
    Allowing individual customers to disable optimization completely. (Some users may choose to keep it enabled, for faster Internet browsing on their devices, so this is a compromise that will achieve some bandwidth savings.)
    Only optimizing or caching video files, instead of more frequent file types such as web pages, text files, and image files.
    Disabling optimization when tethering or using a Wi-Fi personal hotspot.
    Finally, I hope Verizon publishes more information about any changes they may make to optimization to address these and other concerns, and commits to customers and potential customers about their future plans, because many customers are in 1- or 2-year contracts, or considering entering such contracts, and do not wish to be impacted by sudden changes that negatively impact them.
    Verizon, if you are reading, thank you for considering these concerns.

A very well written and thought-out article. And you're absolutely right - this "optimization" is exactly the reason Verizon is fighting the new net neutrality rules. Of course, Verizon itself (and its most ardent supporters on the forums) will fail to see the irony of requiring users to obtain an "unlimited" data plan, then complaining about data usage and trying to limit it artificially. It's like a hotel renting you a room for a week, then complaining you stayed 7 days.
Of course, it was all part of the plan to begin with - people weren't buying the data plans (because they were such a poor value), so the decision was made to start requiring them. To make it more palatable, they called the plans "unlimited" (even though at one point unlimited meant limited to 5GB, but this was later dropped). Then, once the idea of mandatory data settles in, implement data caps with overages, which is what they were shooting for all along. AT&T has already leapt; Verizon has said they will, too.

  • DB13 Check database not working

    Hi All,
Check DB and Update Stats from DB13 do not work. They come back with the following error:
Can't exec external program (No such file or directory)
Checked from the OS level and found that brconnect -u / -c -f check works only from the exe directory. If I execute the command from any other directory it comes back with an error saying command not found.
Upon further investigation I found that this is an issue with the PATH environment variable.
The current value of PATH in the environment of sidadm is
PATH=/oracle/SID/102_64/bin:/opt/IBMJava2-amd64-142/bin:.:/export/home/sidadm:/usr/sap/SID/DVEBMGS01/exe:/usr/kerberos/bin:/usr/local/bin:/bin:/usr/bin:/usr/X11R6/bin
I want to change the entry /usr/sap/SID/DVEBMGS01/exe to /usr/sap/SID/SYS/exe/run.
I checked all the csh and .sh profiles for the sidadm user but I did not find any PATH variable;
I can see DIR_LIBRARY and LD_LIBRARY_PATH but I cannot see a PATH variable.
I checked the shell and it is /bin/csh.
I checked .bash_profile and all I can see is this:
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi
# User specific environment and startup programs
    PATH=$PATH:$HOME/bin
    export PATH
    unset USERNAME
    Has any one faced this issue earlier??
Meanwhile I have found a temporary fix for the time being, i.e. I copied all the br* executables from /sapmnt/SID/exe to /usr/sap/SID/DVEBMGS01/exe, and after doing that the DB13 DB Check and Update Optimizer Statistics jobs work fine.
    Regards,
    Ershad Ahmed

    Hi Sunil and Eric
    Thanks for the reply
I checked all the profiles but no luck; I can't find the parameter.
As I mentioned in my earlier post, I have found a temporary fix by copying the br* executables from /sapmnt/SID/exe to /usr/sap/SID/DVEBMGS01/exe and running saproot.sh, and that resolved the issue.
If I can change my PATH variable to point to /usr/sap/SID/SYS/exe/run, that would be a permanent fix.
    Regards,
    Ershad Ahmed

  • BPC 4.2 Optimization error (object variable or with block variable not set)

    Hi All,
    I am getting the following error when I try to optimize application from the front end:
    Run-time error '91':
    Object variable or With block variable not set
    From the back end the optimization works just fine. This is the new application I created from the AppShell. As soon as I created this new application set, I tried to run optimization and I am getting this error. Optimization in the AppShell works just fine. I wonder what the problem is since this is a brand new application set. I tried a few things all day yesterday and day before but in vain.
    We are using BPC 4.2 (OutlookSoft CPM). Any help is greatly appreciated, the sooner the better.
    Thanks in advance!

Depending on your version of 4.2, here are two possible issues and remedies.
    Possible issue #1
    Do you only have 1 application in the appset? - Add another application.
    Possible issue #2
    This problem will occur if you have copied 4.2 SP2 Apshell or copied an existing appset.
    This happens when a table named tblAdminTaskMessage exists and a stored procedure named INPUTMESSAGE does not exist.
The table and stored procedure are created when you run optimize for the first time, and when you make a copy of an Apshell that has been optimized once, the copy can include the table but it cannot include the stored procedures.
The workaround is to delete the tblAdminTaskMessage table in SQL Enterprise Manager within the problem appset.
    Hope this helps.
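For what it's worth, a minimal T-SQL sketch of that workaround (run against the problem appset's database; it assumes the objects live in the dbo schema and is only an illustration of the steps described above):

-- If the table exists but its companion INPUTMESSAGE procedure does not,
-- drop the table so the next optimize run can recreate both objects together
IF OBJECT_ID('dbo.INPUTMESSAGE', 'P') IS NULL
   AND OBJECT_ID('dbo.tblAdminTaskMessage', 'U') IS NOT NULL
BEGIN
    DROP TABLE dbo.tblAdminTaskMessage;
END;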

  • The Full Optimization & Lite Optimization Data Manager packages are failing

    Hi,
    The Full Optimization and Lite Optimization Data Manager packages are failing with the following message "An Error occured while querying for the webfolders path".
Has anyone had a similar issue before? Please let me know how we can rectify it.
    Thanks,
    Vamshi Krishna

    Hi,
    Does the Full Optimize work from the Administration Console directly?
    If it's the case, delete the scheduled package for Full Optimize every night (in both eData -> Package Schedule Status and in the Scheduled Tasks on your server Control Panel -> Scheduled Tasks), and then try to reschedule it from scratch.
If that does not solve your problem, I would check whether there are some "wrong" records in the FACT and FAC2 tables.
After that, I would also check whether tblAppOptimize has values other than 0. For all applications, you should have a 0 there.
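A quick, hedged way to do that last check directly against the appset database (the column names in tblAppOptimize are not given in this thread, so this simply dumps the table; every application's optimize flag should show 0):

-- Inspect the optimize-status rows for all applications
SELECT *
FROM dbo.tblAppOptimize;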
    Hope this will help you..
    Best Regards,
    Patrick

  • Insert Into @temptable hangs but #temptable works

I have a simple temp table declared as a table variable. This is used on nearly 50 deployed databases and works fine. However, on one server (2008 R2, as are all the others), the INSERT statement hangs. If I change it to a #temptable, the exact same code works. Is there a configuration setting that would account for this behavior? This is what the code looks like:
On all my other servers this works fine. On one server it would hang until I modified it to use CREATE TABLE #Counts.
declare @Counts table
(
    CountOf varchar(10), StatusID bigint, StatusName varchar(30), TheCount bigint
)
insert into @Counts
    Select
    'Files' as CountOf,
    sc.StatusID as StatusID,
    sc.StatusName as StatusName,
    count_big(*) as TheCount
    from
    dfFiles f with (nolock)
    join dfFolders d with (nolock)
    on d.folderid = f.folderid
    join dfVolumes v with (nolock)
    on v.VolumeUID = d.VolumeUID
    and v.MachineName = @MachineName
    join dfStatusCodes sc with (nolock)
    on sc.StatusID = f.StatusID
    group by sc.StatusID, sc.StatusName
    union all
    Select
    'Folders' as CountOf,
    sc.StatusID as StatusID,
    sc.StatusName as StatusName,
    count_big(*) as TheCount
    from
    dfFolders d with (nolock)
    join dfVolumes v with (nolock)
    on v.VolumeUID = d.VolumeUID
    and v.MachineName = @MachineName
    join dfStatusCodes sc with (nolock)
    on sc.StatusID = d.StatusID
    group by sc.StatusID, sc.StatusName

Supposedly, this is a query-plan issue. As you may know, the optimizer works from the statistics sampled from the data, and from this it makes an estimate of what the best plan is. Since the data profile may be different in different databases, the query plans may be different in different databases.
And of course, the set of available indexes may be different in different databases too.
So what does this have to do with temp tables vs. table variables? The presence of a table variable precludes a parallel plan, whereas there is no such restriction with temp tables.
Thus, you need to look at the query plans to see what is going on. Make sure statistics are up to date, and also check that this server has the same indexes as the other server.
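A hedged sketch of those two suggestions (the table names come from the posted query; the dbo schema and FULLSCAN sampling are assumptions for illustration):

-- Refresh statistics on the tables the optimizer is estimating from
UPDATE STATISTICS dbo.dfFiles       WITH FULLSCAN;
UPDATE STATISTICS dbo.dfFolders     WITH FULLSCAN;
UPDATE STATISTICS dbo.dfVolumes     WITH FULLSCAN;
UPDATE STATISTICS dbo.dfStatusCodes WITH FULLSCAN;

-- Capture the actual execution plan on the slow server so it can be compared
-- with a plan from one of the servers where the insert runs quickly
SET STATISTICS XML ON;
-- ... run the INSERT ... SELECT shown above here ...
SET STATISTICS XML OFF;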
    Erland Sommarskog, SQL Server MVP, [email protected]

  • Dynamic Optimization and PRO tips

Is dynamic optimization a new feature in System Center 2012, or was it included in previous versions under a different name?
What are the main differences between dynamic optimization and PRO tips besides trigger customization?
    Can dynamic optimization work with Operations Manager Monitoring VMM only?
    Thanks!

    Hi
    Your filemaker pro database must be exported as sql in order for you to use it to create a database.
This may not be a simple create and import into MySQL, see - http://dev.mysql.com/tech-resources/articles/filemaker_mysql_whitepaper/filemaker_to_mysql_whitepaper01.htm.
    PZ

  • Fios Optimizer not saving settings

I noticed my internet speeds slowed down a bit the last few days, so I decided to run the optimizer and see if it said my settings were already optimized. It said my PC was not optimized, so I ran it; it was successful; I shut down the PC, booted it back up and ran it again to see if it said my settings were already optimized. It said they were not. I ran it a second time and it's still saying my PC is not optimized. What could the issue be?
I did a speed test and saw no difference. When I was fully successful with it a while back, things were better. For some reason it's no longer saving the settings.

    If it works properly for you the fios optimizer works great imo.
http://my.verizon.com/micro/speedoptimizer/fios/default.aspx
Open IE as an administrator and go to the above URL. Run it, shut down the PC, and boot it back up. Then go check your speeds somewhere like speedtest.net and see if it's faster. When it works for me I do get higher speeds, up to what I am paying for.
    There are other optimizers out there such as TCP Optimizer, but I honestly have no idea how to use it since you have to do things manually. 

  • Ways to optimize big product catalog with a lot of characteristics

    Hi,
B2B with SO. We have a product catalog with a lot of products,
and all products have a lot of characteristics.
What methods exist to optimize working with a big catalog?
For example, how can we deactivate some of the unused characteristics?
    Denis

I've been having hell with this as well, although I think my computer is to blame. It "should" work if I could install WMP, but I cannot, because my computer needs me to install some Update Rollup 2 which doesn't install, and I can't install SP3 either... I've been contacting Microsoft but they can't sort it. Maybe my PC is just reaching its old age and cannot cope with so many new things going on.
But from what you have said, I can't see why it shouldn't work with your current system.

  • Optimizer in simulation version

We are running SNP Optimization for a simulation version. While the optimizer job is running, the planners who are working in the active version are complaining about system slowness. Has anybody come across this situation? If so, what are your recommendations? Any help is appreciated. We are using SCM 7.2.
    thx
    Jeff

    Hello Jeff,
The optimizer works in three steps:
1. Read master data
2. Optimization
3. Write log
The time-consuming step is the optimization, and it runs on the optimizer server.
So I do not think the 1st and 2nd steps are what slows the system down.
If it is really killing performance, then try to break the optimizer job into multiple jobs by creating multiple variants. How to split it depends on the business case as well as the master data.
For example, you cannot run a shared resource in 2 different jobs.
    Regards
    Kishor
