10.1.2.0.2 Optimization Tips?

I have Oracle Application Server 10.1.2.0.2 on Windows 2003, with the Infrastructure and the Application Server on a single machine. I am seeing very poor performance for my clients, of which there are only 4 or 5; it takes minutes to open a single form.
Can someone help me optimize my server so that it starts working a little faster?

Hi Zahid,
Let me give you a list of useful notes that can go a long way toward helping you administer your AS better.
Please consult the following notes and follow the appropriate measures.
Diagnostics (portals)
Doc ID:      245368.1 Diagnosing Portal Export / Import Performance issues (9.0.2.x / 10.1.x.x)
Doc ID:      249172.1 For testing the speed and performance of the portal using Grinder
Doc ID:      578806.1 Master Note for Diagnosing Portal/Database Performance Issues
Performance Enhancement
Doc ID:      438794.1 How to increase performance of Portal 10.1.4
Doc ID:      249449.1 How to increase the performance of Portal 9.0.X (or even for 10.1.2 as that is closer in many respects to this system than 10.1.4)
If you are using Forms, then you can also follow these:
Doc ID:      390031.1 Performance Tuning Forms Listener Servlet In Oracle Applications
Doc ID:      221529.1 Few Basic Techniques to Improve Performance of Forms.
Also check the log files regularly, and write scripts to monitor them; look especially at the application logs and the Apache server logs.
If you share some details of your system topology (whether you are using a firewall, the type of your installation, whether you are using Web Cache, RAM size, hardware specs, etc.), then more specific advice can be given.
Hope that helps!
Naqvi

Similar Messages

  • MySQL optimization tips for IDM repo?

    I am running IDM 7.1 and MySQL 5.0.45 for the repository on RHEL 4. I understand that IDM stores large XML strings in the MySQL repository, so typical MySQL optimization may not apply. I have 2 CPUs and 4 GB of RAM, and when initially loading data from file into the repo, performance is slow. Are there any MySQL variables I should set to improve performance and load time?

    indie_idm pretty much hit the nail on the head :) We started with 4GB and then upgraded to 8GB in our server and it runs pretty well now.
    I've been working on some settings over the last couple of versions, and currently with 7.1.1 the following appear to work great. I'm not a DBA or a MySQL expert, but with a lot of trial and error I have significantly improved our performance. Once I dumped 15k AD accounts and 100k LDAP accounts into IDM, it was miserable; it took 3 days to sync LDAP! With these settings it now takes a handful of hours:
    [mysqld]
    port = 3306
    socket = /tmp/mysql.sock
    skip-locking
    key_buffer = 512M
    max_allowed_packet = 16M
    table_cache = 512
    sort_buffer_size = 512M
    read_buffer_size = 8M
    read_rnd_buffer_size = 8M
    myisam_sort_buffer_size = 512M
    thread_cache_size = 8
    query_cache_size = 64M
    # Try number of CPU's*2 for thread_concurrency
    thread_concurrency = 16
    # You can set .._buffer_pool_size up to 50 - 80 %
    # of RAM but beware of setting memory usage too high
    innodb_buffer_pool_size = 2G
    innodb_additional_mem_pool_size = 20M
    innodb_log_buffer_size = 8M
    innodb_flush_log_at_trx_commit = 1
    innodb_lock_wait_timeout = 120
    [mysqldump]
    quick
    max_allowed_packet = 16M
    [isamchk]
    key_buffer = 256M
    sort_buffer_size = 256M
    read_buffer = 2M
    write_buffer = 2M
    [myisamchk]
    key_buffer = 256M
    sort_buffer_size = 256M
    read_buffer = 2M
    write_buffer = 2M
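    One caution on the config above (a back-of-the-envelope sketch, assuming a hypothetical max_connections of 100, which is not shown in the original config): sort_buffer_size and the read buffers are allocated per connection, so a 512M sort_buffer_size can exhaust RAM under concurrency even if the global buffers fit comfortably.

```shell
#!/bin/sh
# Rough worst-case memory math for the my.cnf above.
# Assumption: max_connections=100 (hypothetical; not in the original config).
GLOBAL_MB=$(( 512 + 2048 + 20 + 8 + 64 ))   # key_buffer + innodb_buffer_pool_size
                                            # + additional_mem_pool + log_buffer + query_cache
PER_CONN_MB=$(( 512 + 8 + 8 ))              # sort_buffer + read_buffer + read_rnd_buffer
MAX_CONNECTIONS=100
echo "worst case: $(( GLOBAL_MB + MAX_CONNECTIONS * PER_CONN_MB )) MB"
```

    With these numbers the worst case is over 55 GB, far beyond a 4-8 GB server, which is why per-connection buffers are usually kept in the low megabytes while the InnoDB buffer pool gets the bulk of RAM.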

  • Cable 3.0 Optimization tips?

    Anyone got any? I would love some tips with my new router!

    It is delivered over a Hybrid Fibre Coax (HFC) cable, which is run into the house and terminates at a faceplate. You then run another coax cable from the faceplate to the modem, which connects it to the HFC network, and the internet is provided over it... at no point in your house does it connect to or rely on the copper network.
    It is significantly faster than ADSL, with the standard speed being 30/1 Mbps (compared to a maximum speed on ADSL2+ of 20/1 Mbps), with the option of increasing it to up to 100/2 Mbps if you have the need, the need for speed.

  • Needed: Optimization tips from a Pro for a very slow internet connection

    I am on a remote base in Iraq with about 150 people sharing a satellite connection with a 2MB pipe. It is slow. There is nothing I can do about the volume of people or QoS from the provider, and there is no competition.
    Given that, I would like to know what I can tune on my machine to make the most of a slow, frustrating experience.
    I have noticed that although I have browser caching turned on, my browsers (Safari, Omni, FF) don't seem to take advantage of it. They still spend 10 minutes trying to get in touch with the website and start from scratch. Is there a browser that visits its cache first and then looks for changes? Are there settings I am overlooking that can force the browser to load from cache first and then look for differences? Are there command-line parameters or configuration files I can edit to make a difference for the better?
    I also have a buddy running linux and he installed a DNS caching mechanism that reduced his repeat lookups from 5 seconds to 0. He also installed another website data caching mechanism that truly stores the pages of websites he frequently visits. His experience was enhanced tremendously.
    What do you recommend I do from the command line and within my browsers to maximize facilities like caching to make a slow situation a little better?
    I appreciate your time and advice.
    Dan
    US Army (AIRBORNE)
    Mesopotamia, Iraq
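    The DNS caching mechanism the poster's Linux buddy describes can be sketched with dnsmasq (a minimal sketch, assuming dnsmasq is installed; the upstream server address is a placeholder you should replace with your provider's resolver):

```shell
# /etc/dnsmasq.conf -- minimal caching-only DNS resolver
listen-address=127.0.0.1   # answer queries from this machine only
cache-size=10000           # number of answers to keep in memory
no-resolv                  # don't read upstream servers from /etc/resolv.conf
server=8.8.8.8             # upstream resolver (placeholder; use your provider's)
```

    After pointing /etc/resolv.conf at nameserver 127.0.0.1, repeat lookups are answered locally instead of traversing the satellite link each time.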

    Hi Sahar,
    1) To post your adsl line statistics you can be connected to the home hub or router via wireless.
    Below is some information on how to obtain your adsl line stats. You can also see this link here ADSL Statistics
    ADSL Line Statistic Help:
     If you have a BT Home Hub like the one below...
    Then:
     1) Go to http://192.168.1.254 or http://bthomehub.home
     2) click Settings
     3) Click Advanced Settings
     4) Click Broadband
     5) Click Connection or sometimes called ADSL (see picture Below)
    The direct Address is http://bthomehub.home/index.cgi?active_page=9116 (for bthomehub3.A firmware ending in 1.3)
    or http://bthomehub.home/index.cgi?active_page=9118 (for bthomehub3.A firmware ending in 94.1.11)
    You will need to copy and paste all the ADSL line statistics (including HEC, CRC and FEC errors). You may need to click "More Details".
    There are more useful links on Keith's website here: If you have an ADSL connection, please select this link
    2) To run and post the results of a BT Wholesale Speed Test, it is recommended to do this with a wired ethernet connection, as a WiFi connection can be affected by other electronic devices in your property.
    It is also best to have a landline phone of some sort, even if you don't use it, as it is a very handy tool to test your landline for any noise.
    Any noise on your landline can dramatically reduce your broadband performance.
    You can get a cheap corded landline phone from most electronic stores or online. 
    They are also good in emergencies when there is a power cut as a corded landline phone will still work.
    Hope that helps,
    Cheers
    I'm no expert, so please correct me if I'm wrong

  • 64-Bit Essbase Optimization Tips

    We are upgrading from 9.3.1 to 11.1.1.3 and have decided to try 64-bit Essbase on a Windows server with Xeon processors. All went well in initial testing, my ASO models all ran faster on the new 64-bit server. However, we have a planning model that of course is BSO. That model is performing very poorly on the new 64-bit server (most calculations are taking 5-6 times longer). Just to prove that the difference was 64-bit and not the upgrade to 11.1.1.3, I installed the 32-bit version of 11.1.1.3 on another server and ran the same test. It performed fine, very similarly to our 32-bit 9.3.1 installation.
    I guess I'm looking for any ideas on tuning BSO applications, or the server itself, in a 64-bit Windows environment. Also, has anyone else had similar issues?
    Any help is much appreciated!
    Jeff

    You're the first person I've heard say that performance is slower on 64 bit. Although BSO Essbase is always...interesting.
    Did you change platforms (Windows to *nix?)?
    What specifically is slower -- aggregations, specific allocations, something else?
    64 bit Essbase has made my Planning life much, much easier -- I don't have to wring every bit of performance out of Essbase to get more than acceptable speed. I am enjoying the laziness. :)
    If you go onto ODTUG's website, click on Tech Resources, and search for "64" you'll find two presentations on optimizing for 64 bit, one by Edward Roske and the other by Brian Suter. If you're not an ODTUG member, you can sign up for a free associate membership.
    Okay, if I mention ODTUG, I have to mention 2010's Kaleidoscope conference in Washington DC, 27 June to 1 July. There are many, many tuning sessions that are worth everyone's time. Many of the posters on this board will be presenting there. If you like us on the board, you'll (hopefully) like us more in person.
    Regards,
    Cameron Lackpour

  • How to optimize portal javascript and css - where is scriptset property ?

    Hi,
    I've downloaded the "How to Finetune Performance of Portal Platform" document (https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/d164c5ee-0901-0010-4fbf-d0856b5c8a84),
    which contains portal performance optimization tips.
    In this PDF file there is a section about optimizing JavaScript and CSS code by using the
    scriptset property, and there is also a reference to the corresponding help.sap.com portal page.
    The problem is that this page is no longer there, so I'm unable to configure this scriptset property.
    Any ideas where it can be found?
    Regards,
    Ladislav

    Yes - go into System Administration --> System Configuration and then Service Configuration (in the detailed nav).
    In there, find the application com.sap.portal.epcf.loader. In Services --> epcfloader, the property you seek is script.set; change it from standard to optimize. Then you need to restart the service.
    Hope that helps
    Haydn

  • Free resources that ensure the health and optimization of your site/s.

    I thought that I should share two resources that I found priceless.
    1. To check whether your site has any malware or has been hacked. These folks will check and clean your site for you for free:
         http://sitecheck.sucuri.net/scanner/
    2. To check for optimization tips and places where you can shed some weight off your site, visit:
         http://gtmetrix.com/dashboard.html
    Feel free to suggest anything else that you might know of.

    Check spelling and grammar.
    Validate CSS & HTML code.
    To ensure menus, links and widgets all work, fully test your site in the 5 major browsers. If you must support older IE, get IE Tester.
    http://www.my-debugbar.com/wiki/IETester/HomePage
    Use Firefox's web developer toolbar.
    https://addons.mozilla.org/en-US/firefox/addon/web-developer/
    Web Page Analyzer & Optimization report
    http://www.websiteoptimization.com/services/analyze/
    Set-up Google WebMaster Tools and Google Analytics.
    http://www.google.com/webmasters/
    http://www.google.com/analytics/
    Nancy O.
    Alt-Web Design & Publishing
    Web | Graphics | Print | Media  Specialists 
    http://alt-web.com/
    http://twitter.com/altweb

  • Java Optimization Book

    Hello, I'm wondering whether there are any good books I can read for Java optimization tips and tricks. What's more, how do I tell whether a section of code is optimized or not?
    Terribly sorry if I'm asking stupid questions.
    thanks in advance.

    getmizanur wrote:
    > i'm wondering is there any good books I can read for java optimization tips and tricks?
    You don't want tips and tricks, you want knowledge.
    And there is a book for that (though it may be outdated) from O'Reilly.
    If you have to ask whether such a thing exists though, you probably wouldn't benefit from it (nor need it).
    > what's more, how do I monitor if a section of code is optimized or not?
    You can't. What's good in some cases may be bad in others.
    You're looking at micro-optimisation, which is almost always bad.
    > terribly sorry if I'm asking stupid questions.
    We've seen worse, far worse.

  • Photo Gallery Speed in iWeb Site

    Hi, I'm looking for some advice on quicker photo galleries.
    I've made my organization's website using iWeb... It seems to work fine, except a lot of the site is really slow to load, especially large photo galleries. I realize a lot of this has to do with the code, but is there another solution (have the photos hosted by some other, faster service)?
    See for yourself: http://turtleconservancy.org/travel/brazil/
    Thanks for your help!
    Max

    Hi guys, thanks for taking a look. The site is made with iWeb. The gallery was made by drag n drop + linking to image files, then adding the shadow box code in Dreamweaver.
    When I said the site is slow to load, I am just looking to speed it up. It loads fine on every test machine I've used. I'm just looking to make it even faster! (Google Analytics is telling me it's slow...)
    -If I hosted the linked images on another server (amazon cloud) would this speed things up?
    -Is there another way to present a gallery of images with captions + shadow box (or similar, elegant) effect that would cut down the load time?
    Also, a bit off topic, anyone know of a good resource to read up on taking advantage of browser cache capabilities and how to make a section of the site (ie. navbar) an "include"?
    As you can clearly see, I am no pro... Haha! Any other streamlining/optimization tips are greatly appreciated!
    Again, the site I'm working with here is: http://turtleconservancy.org/
    Thank you for all help and advice!
    -Max

  • My dashboard takes a long time to load on iPad compared to desktop. Experts, I need suggestions to tune my dashboard.

    Hi Experts,
             I have developed a dashboard with a BEx query as the source (9 queries used in the dashboard). This dashboard opens in 40 secs in BI launch pad, but if I open the same dashboard through the SAP BI app it takes almost 3-4 min. Are there any ways to improve performance on iPad?
    Info of Dashboard
    * 20-25 components using in dashboard
    * 100-150 formulae are used(VLOOKUP)
    * SAP BO 4.0 SP6 I am using
    Please need some tips to improve performance.
    Regards
    MAHANTESH

    Hi MH,
    Opening the dashboard in 40 secs on the desktop is already a problem, before even getting to the iPad. At most it should take 5-10 secs to open the dashboard; only then will clients show interest in working with dashboards.
    * 20-25 components using in dashboard
    >>Check for unused components and try to remove them from the dashboard.
    * 100-150 formulae are used(VLOOKUP)
    >>Try to minimize the formulas in Excel, because Excel formulas add a lot of load at the time of processing the data. Instead of lookups, try to use the INDEX formula, which performs better.
    1. Use "Refresh after components are loaded", which makes your components load first, before the data comes in.
    2. File menu --> Document Properties --> "Show Loading Status", which shows where the dashboard takes time while loading.
    Xcelsius performance optimization - tips & tricks - Business Intelligence (BusinessObjects) - SCN Wiki
    1430976 - BICS remote & Xcelsius performance improvements
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/d0ab8cce-1851-2d10-d5be-b5147a651c58?QuickLink=index&…
    Hope this Helps!!!
    Revert for any clarifications.
    --SumanT

  • Lightroom 3 super slow on new Windows 7 64 bit...Why?

    I've recently switched over to Windows 7 64 bit Home Premium, running 8GB DDR3 RAM, a 1GB ATI Radeon 4800 series video card and a 1920px monitor, and have found the performance in Lightroom 3.4.1 much slower than on Windows XP 32 bit running 2GB of RAM with a 128MB motherboard card (a laptop, to boot).
    Why is this happening? The main issues are extremely slow imports (minimal previews set as default when importing), very delayed thumbnail (preview) rendering (the grid view sometimes sits with half the thumbnails rendered and half not, then all of a sudden, a minute or two later, all will be rendered...), and slow, inconsistent 1:1 rendering. I've tried all the different combos when it comes to preview rendering but nothing does the trick. I've even tried switching my 10GB LR3 (ACR?) cache to different drives but nothing does the trick.
    Could this be an issue with the Catalyst Control Center (the software controlling my video driver)? It's all I've got left to consider, but I would very much like a resolution.
    Thanks in advance.

    Try the performance optimization tips here first:
    http://kb2.adobe.com/cps/400/kb400808.html
    Optimize your catalog(s), render 1:1 previews on import, and make sure your Cache folder isn't near the size limit set in LR preferences. If necessary, increase your Cache size and see if that helps.
    If you regularly use Standby Mode to turn off your system, try shutting down Windows AND REMOVING AC POWER. If you have other connected devices (storage, camera, game, etc.) that use a separate power supply, disconnect them from AC power as well. Wait at least 10 seconds, then restore AC power to all devices and reboot your system. This is called a "cold boot," which will ensure that ALL hardware devices, the operating system, background applications and drivers are restored to a known good state.
    When I'm done for the day my OS is "Shut Down" and the main AC power strip is turned off. It is the ultimate surge protection: NO CONNECTION. When I reboot the next morning I know all my system memory and other resources are freed up and available for my designated usage.
    Following these guidelines I have absolutely no issues running LR 3.4.1 on my Windows 7 64 bit system.

  • Windows Server 2012 - Hyper-V - Cluster Shared Storage - VHDX unexpectedly gets copied to System Volume Information by "System", Virtual Machines stop responding

    We have a problem with one of our deployments of Windows Server 2012 Hyper-V with a 2 node cluster connected to a iSCSI SAN.
    Our setup:
    Hosts - Both run Windows Server 2012 Standard and are clustered.
    HP ProLiant G7, 24 GB RAM. This is the primary host, and normally all VMs run on this host.
    HP ProLiant G5, 20 GB RAM. This is the secondary host, intended to be used in case of failure of the primary host.
    We have no antivirus on the hosts and the scheduled ShadowCopy (previous version of files) is switched off.
    iSCSI SAN:
    QNAP NAS TS-869 Pro, 8 INTEL SSDSA2CW160G3 160 GB in a RAID 5 with a Hot Spare. 2 teamed NICs.
    Switch:
    DLINK DGS-1210-16 - Both the network cards of the Hosts that are dedicated to the Storage and the Storage itself are connected to the same switch and nothing else is connected to this switch.
    Virtual Machines:
    3 Windows Server 2012 Standard - 1 DC, 1 FileServer, 1 Application Server.
    1 Windows Server 2008 Standard Exchange Server.
    All VMs are using dynamic disks (as recommended by Microsoft).
    Updates
    We applied the most recent updates to the Hosts, VMs and iSCSI SAN about 3 weeks ago with no change in our problem, and we continually update the setup.
    Normal operation:
    Normally this setup works just fine, and we see no real difference in speed in startup, file copy and processing speed in LoB applications compared to a single host with two 10000 RPM disks. Normal network speed is 10-200 Mbit/s, but occasionally we see speeds up to 400 Mbit/s of combined read/write, for instance during file repair.
    Our Problem:
    Our problem is that, for some reason, a random VHDX gets copied by "System" to the System Volume Information folder of the Clustered Shared Storage (i.e. C:\ClusterStorage\Volume1\System Volume Information).
    All VMs stop responding, or respond very slowly, during this copy process; you cannot, for instance, send CTRL-ALT-DEL to a VM in the Hyper-V console, or start Task Manager when already logged in.
    This happens at random, not every day, and different VHDX files from different VMs get copied each time. Sometimes it happens during the daytime, which causes a lot of problems, especially when a 200 GB file gets copied (which takes a lot of time).
    What it is not:
    We thought that this was connected to the backup, but the backup had finished 3 hours before the last time this happened, and the backup never uses any of the files in System Volume Information, so it is not the backup.
    An observation:
    When this happened today, I switched on ShadowCopy (previous versions of files) and set it to use only 320 MB of storage, and then the copy process stopped and the virtual machines started responding again. This could be unrelated, since there is no way to see how much of the VHDX was left to be copied, so it might have finished at the same time as I enabled ShadowCopy (previous versions of files).
    Our question:
    Why is a VHDX copied to System Volume Information when scheduled ShadowCopy (previous versions of files) is switched off? As far as I know, nothing should be copied to this folder when this function is switched off.
    List of VSS Writers:
    vssadmin 1.1 - Volume Shadow Copy Service administrative command-line tool
    (C) Copyright 2001-2012 Microsoft Corp.
    Writer name: 'Task Scheduler Writer'
       Writer Id: {d61d61c8-d73a-4eee-8cdd-f6f9786b7124}
       Writer Instance Id: {1bddd48e-5052-49db-9b07-b96f96727e6b}
       State: [1] Stable
       Last error: No error
    Writer name: 'VSS Metadata Store Writer'
       Writer Id: {75dfb225-e2e4-4d39-9ac9-ffaff65ddf06}
       Writer Instance Id: {088e7a7d-09a8-4cc6-a609-ad90e75ddc93}
       State: [1] Stable
       Last error: No error
    Writer name: 'Performance Counters Writer'
       Writer Id: {0bada1de-01a9-4625-8278-69e735f39dd2}
       Writer Instance Id: {f0086dda-9efc-47c5-8eb6-a944c3d09381}
       State: [1] Stable
       Last error: No error
    Writer name: 'System Writer'
       Writer Id: {e8132975-6f93-4464-a53e-1050253ae220}
       Writer Instance Id: {7848396d-00b1-47cd-8ba9-769b7ce402d2}
       State: [1] Stable
       Last error: No error
    Writer name: 'Microsoft Hyper-V VSS Writer'
       Writer Id: {66841cd4-6ded-4f4b-8f17-fd23f8ddc3de}
       Writer Instance Id: {8b6c534a-18dd-4fff-b14e-1d4aebd1db74}
       State: [5] Waiting for completion
       Last error: No error
    Writer name: 'Cluster Shared Volume VSS Writer'
       Writer Id: {1072ae1c-e5a7-4ea1-9e4a-6f7964656570}
       Writer Instance Id: {d46c6a69-8b4a-4307-afcf-ca3611c7f680}
       State: [1] Stable
       Last error: No error
    Writer name: 'ASR Writer'
       Writer Id: {be000cbe-11fe-4426-9c58-531aa6355fc4}
       Writer Instance Id: {fc530484-71db-48c3-af5f-ef398070373e}
       State: [1] Stable
       Last error: No error
    Writer name: 'WMI Writer'
       Writer Id: {a6ad56c2-b509-4e6c-bb19-49d8f43532f0}
       Writer Instance Id: {3792e26e-c0d0-4901-b799-2e8d9ffe2085}
       State: [1] Stable
       Last error: No error
    Writer name: 'Registry Writer'
       Writer Id: {afbab4a2-367d-4d15-a586-71dbb18f8485}
       Writer Instance Id: {6ea65f92-e3fd-4a23-9e5f-b23de43bc756}
       State: [1] Stable
       Last error: No error
    Writer name: 'BITS Writer'
       Writer Id: {4969d978-be47-48b0-b100-f328f07ac1e0}
       Writer Instance Id: {71dc7876-2089-472c-8fed-4b8862037528}
       State: [1] Stable
       Last error: No error
    Writer name: 'Shadow Copy Optimization Writer'
       Writer Id: {4dc3bdd4-ab48-4d07-adb0-3bee2926fd7f}
       Writer Instance Id: {cb0c7fd8-1f5c-41bb-b2cc-82fabbdc466e}
       State: [1] Stable
       Last error: No error
    Writer name: 'Cluster Database'
       Writer Id: {41e12264-35d8-479b-8e5c-9b23d1dad37e}
       Writer Instance Id: {23320f7e-f165-409d-8456-5d7d8fbaefed}
       State: [1] Stable
       Last error: No error
    Writer name: 'COM+ REGDB Writer'
       Writer Id: {542da469-d3e1-473c-9f4f-7847f01fc64f}
       Writer Instance Id: {f23d0208-e569-48b0-ad30-1addb1a044af}
       State: [1] Stable
       Last error: No error
    Please note:
    Please only answer our question and do not offer any general optimization tips that do not directly address the issue! We want the problem to go away, not to finish a bit faster!

    Hello Lawrence!
    Thank you for your reply; some comments to help you and others who read this thread:
    First of all, we use Windows Server 2012 and the VHDX format, as I wrote in the headline and in the text of my post. We have not had this problem in similar setups with Windows Server 2008 R2, so the problem seems to have been introduced in Windows Server 2012.
    These posts that you refer to seem to be outdated and/or do not apply to our configuration:
    The post about Dynamic Disks:
    http://technet.microsoft.com/en-us/library/ee941151(v=WS.10).aspx is only a recommendation for Windows Server 2008 R2 and the VHD format. Dynamic VHDX is indeed recommended by Microsoft when using Windows Server 2012 (please look in the optimization guide for Windows Server 2012).
    In fact, if we used fixed VHDX we would have a bigger problem, since fixed VHDX files are generally larger than dynamic disks, i.e. more data would be copied and that would take longer, so the VMs would be unresponsive for a longer time.
    The post "What's the deal with the System Volume Information folder"
    http://blogs.msdn.com/b/oldnewthing/archive/2003/11/20/55764.aspx is for Windows XP / Windows Server 2003, and some things have changed since then. For instance, in Windows Server 2012, Shadow Copies cannot be controlled by going to Control Panel -> System.
    Instead, you right-click a drive (i.e. a volume, for instance the C drive/volume) in Computer and then click "Configure Shadow Copies".
    Windows Server 2008 R2 Backup problem
    http://social.technet.microsoft.com/Forums/en/windowsbackup/thread/0fc53adb-477d-425b-8c99-ad006e132336 - This post is about antivirus software trying to scan files used during backup that exist in the System Volume Information folder, and we do not have any antivirus software installed on our hosts, as I stated in my post.
    Comment that might help us:
    So according to “System Volume Information” definition, the operation you mentioned is Volume Shadow Copy. Check event viewer to find Volume Shadow Copy related event logs and post them.
    Why?
    Further investigation suggests that a volume shadow copy is somehow created even though the schedule for Shadow Copies is turned off for all drives. This happens at random and we have not found any pattern. Yesterday this operation took almost all available disk space (over 200 GB), but all the disk space was released when I turned on scheduled Shadow Copies for the CSV.
    I therefore draw these conclusions:
    The CSV volume has about 600 GB of disk space, and since Volume Shadow Copy used 200 GB, or about 33% of the disk space, while the default limit is 10%, I conclude that for some reason the unscheduled Volume Shadow Copy did not have any limit (or ignored the limit).
    When I turned on the schedule I also changed the limit to the minimum amount, which is 320 MB, and this is probably what released the disk space. That is, the unscheduled Volume Shadow Copy operation was aborted, and it then adhered to the limit and deleted the Volume Shadow Copy it had taken.
    I have also set the limit for Volume Shadow Copies for all other volumes to 320 MB by using the "Configure Shadow Copies" Window that you open by right clicking on a drive (volume) in Computer and then selecting "Configure Shadow Copies...".
    It is important to note that setting a limit for Shadow Copy storage and disabling the schedule are two different things! It is possible to have unlimited storage for Shadow Copies when the schedule is disabled; however, I do not know whether this was the case before I enabled Shadow Copies on the CSV, since I did not look for this.
    I have now defined a limit for Shadow Copy storage of 320 MB on all drives, so no VHDX should be copied to System Volume Information, since they are all larger than 320 MB.
    Does this sound about right or am I drawing the wrong conclusions?
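    For reference, the per-volume limit described above can also be set from an elevated command prompt with vssadmin (a sketch; the CSV mount point path is taken from this setup and should be adjusted to your own volumes):

```shell
REM Cap shadow copy storage for the CSV at the 320 MB minimum
vssadmin Resize ShadowStorage /For=C:\ClusterStorage\Volume1 /On=C:\ClusterStorage\Volume1 /MaxSize=320MB

REM Verify the new storage association limits
vssadmin List ShadowStorage
```

    Setting the limit from the command line makes it scriptable across both hosts, rather than relying on the per-drive "Configure Shadow Copies" dialog.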
    Limits for Shadow Copies:
    Below we list the limits for our two hosts:
    "Primary Host":
    C:\>vssadmin list shadowstorage
    vssadmin 1.1 - Volume Shadow Copy Service administrative command-line tool
    (C) Copyright 2001-2012 Microsoft Corp.
    Shadow Copy Storage association
       For volume: (\\?\Volume{e3ad7feb-178b-11e2-93e8-806e6f6e6963}\)\\?\Volume{e3ad7feb-178b-11e2-93e8-806e6f6e6963}\
       Shadow Copy Storage volume: (\\?\Volume{e3ad7feb-178b-11e2-93e8-806e6f6e6963}\)\\?\Volume{e3ad7feb-178b-11e2-93e8-806e6f6e6963}\
       Used Shadow Copy Storage space: 0 bytes (0%)
       Allocated Shadow Copy Storage space: 0 bytes (0%)
       Maximum Shadow Copy Storage space: 320 MB (91%)
    Shadow Copy Storage association
       For volume: (E:)\\?\Volume{dc0a177b-ab03-44c2-8ff6-499b29c3d5cc}\
       Shadow Copy Storage volume: (E:)\\?\Volume{dc0a177b-ab03-44c2-8ff6-499b29c3d5cc}\
       Used Shadow Copy Storage space: 0 bytes (0%)
       Allocated Shadow Copy Storage space: 0 bytes (0%)
       Maximum Shadow Copy Storage space: 320 MB (0%)
    Shadow Copy Storage association
       For volume: (G:)\\?\Volume{f58dc334-17be-11e2-93ee-9c8e991b7c20}\
       Shadow Copy Storage volume: (G:)\\?\Volume{f58dc334-17be-11e2-93ee-9c8e991b7c20}\
       Used Shadow Copy Storage space: 0 bytes (0%)
       Allocated Shadow Copy Storage space: 0 bytes (0%)
       Maximum Shadow Copy Storage space: 320 MB (3%)
    Shadow Copy Storage association
       For volume: (C:)\\?\Volume{e3ad7fec-178b-11e2-93e8-806e6f6e6963}\
       Shadow Copy Storage volume: (C:)\\?\Volume{e3ad7fec-178b-11e2-93e8-806e6f6e6963}\
       Used Shadow Copy Storage space: 0 bytes (0%)
       Allocated Shadow Copy Storage space: 0 bytes (0%)
       Maximum Shadow Copy Storage space: 320 MB (0%)
    Secondary host:
    C:\>vssadmin list shadowstorage
    vssadmin 1.1 - Volume Shadow Copy Service administrative command-line tool
    (C) Copyright 2001-2012 Microsoft Corp.
    Shadow Copy Storage association
       For volume: (\\?\Volume{b2951138-f01e-11e1-93e8-806e6f6e6963}\)\\?\Volume{b2951138-f01e-11e1-93e8-806e6f6e6963}\
       Shadow Copy Storage volume: (\\?\Volume{b2951138-f01e-11e1-93e8-806e6f6e6963}\)\\?\Volume{b2951138-f01e-11e1-93e8-806e6f6e6963}\
       Used Shadow Copy Storage space: 0 bytes (0%)
       Allocated Shadow Copy Storage space: 0 bytes (0%)
       Maximum Shadow Copy Storage space: 35,0 MB (10%)
    Shadow Copy Storage association
       For volume: (D:)\\?\Volume{5228437e-9a01-4690-bc40-1df85a0e6736}\
       Shadow Copy Storage volume: (D:)\\?\Volume{5228437e-9a01-4690-bc40-1df85a0e6736}\
       Used Shadow Copy Storage space: 0 bytes (0%)
       Allocated Shadow Copy Storage space: 0 bytes (0%)
       Maximum Shadow Copy Storage space: 27,3 GB (10%)
    Shadow Copy Storage association
       For volume: (C:)\\?\Volume{b2951139-f01e-11e1-93e8-806e6f6e6963}\
       Shadow Copy Storage volume: (C:)\\?\Volume{b2951139-f01e-11e1-93e8-806e6f6e6963}\
       Used Shadow Copy Storage space: 0 bytes (0%)
       Allocated Shadow Copy Storage space: 0 bytes (0%)
       Maximum Shadow Copy Storage space: 6,80 GB (10%)
    C:\>
     There is something strange about the limits on the secondary host!
     I have not changed the settings on the secondary host in any way, yet as you can see it has a maximum limit of only 35 MB of Shadow Copy storage on the CSV, while also reporting that this is 10% of the volume. That is clearly not the case, since 10% of 600 GB = 60 GB!
     The question is: why does it set such a small limit (i.e. < 320 MB) on the CSV by default, and is this the cause of the problem? In other words, is the limit ignored because it is smaller than the smallest amount you can set through the GUI?
     Is the default 35 MB maximum Shadow Copy limit a bug, or is there a logical reason for setting a limit that, according to the GUI, is too small?

  • Okay, I Have Kind Of A Basic Complex Question... How Do You Install Leopard On Tiger???  PowerPC Only!!!!  No Other Mac Computers...

     I was recently given a Power Mac G5 (PPC) by a friend of mine to use for our multimedia business. The PPC came loaded with Mac OS X Tiger 10.4.1. My first dilemma was that when I went to install Logic 8 on the system, I received an error message that "this version of Logic requires a minimum of Pro Application Update 4.0 and ProKit version whatever"; either way, it didn't work. I wasn't really bothered by this at first, until I realised that finding Pro Application Update 4.0 would be about as hard as finding Waldo. Eventually, after hours of looking, I located the update on, of all places, the Apple site I initially started searching on, only to find out that Tiger 10.4.1 does not meet the minimum requirements to install this update. So I came to the conclusion that updating my OS to Leopard 10.5.8 would be the best possible outcome and would hopefully meet the minimum requirements to run Logic 8. My friend just happened to have a .dmg file from his previous installation of OS X Leopard 10.5.8 on his other PPC, which I had to burn to a disc (using THIS PPC) in order to be able to boot. Which brings me to where I'm at right now: I can't get the disc to boot no matter what I try. Please help!

    The folks in this link indicate 4.5 will work. 
    https://discussions.apple.com/thread/2269512?start=0&tstart=0
     & notice there is a Logic Pro forum.
     & the link didn't work for me.
     I found this link on another site.
    Here is ProKit 4.5
     http://download.info.apple.com/Mac_OS_X/061-4098.20080204.bpYt5/ProKitUpdate4.5.dmg
    Apple has been purging older downloads.  It's a drag.
     You can sometimes find things if you know the exact name:
    ProKitUpdate4.5.dmg
     Restore Tiger 10.4 & Leopard 10.5 DVDs are available from Apple by calling 800-767-2775 as of January 20, 2013. Have your Mac's serial number ready. Prepare your credit card for a workout; there is a reasonable fee.
    https://discussions.apple.com/thread/4720126?tstart=0
    Requirements for Mac OS X v10.5
    http://support.apple.com/kb/HT3759
     You want the latest version of Firefox? Use the TenFourFox G5 clone.
     TenFourFox -- it's a port of the latest Firefox to run on older hardware and software.
    "World's most advanced web browser. Finely tuned for the Power PC."
        --  works for me on 10.4.  Supports 10.5
    http://www.floodgap.com/software/tenfourfox/
    alternative download site:
    http://www.macupdate.com/app/mac/37761/tenfourfox
     Turn on pipelining. This will allow Firefox to make simultaneous requests to the server. Chrome has pipelining turned on. Some sites could fail to load with pipelining set on; such a site will usually be an old one. See "Increase pipelining" in:
    http://www.hongkiat.com/blog/firefox-optimization-tips/
     OmniWeb uses the latest Safari framework, the open-source WebKit. Other browsers like Safari and iCab use the OS version of WebKit. The OmniWeb download dmg includes its own copy of the latest WebKit.
    http://www.omnigroup.com/products/omniweb/
    Safari 4.1.3 for Tiger
    http://support.apple.com/kb/DL1069

  • SSRS 2012 Excel Rendering Error

       
    Hi
    I need help
    Platform
     SharePoint 2013
     SSRS 2012 integrated in SharePoint 2013
     I have no problem when I export my report (25,000 records) to EXCELOPENXML format, but each time I try to export it with 140,000 records I always get the error described below; however, I can export it to CSV format.
    at Microsoft.ReportingServices.Rendering.ExcelRenderer.ExcelRenderer.Render(Report report, NameValueCollection reportServerParameters, NameValueCollection deviceInfo, NameValueCollection clientCapabilities, Hashtable& renderProperties, CreateAndRegisterStream
    createAndRegisterStream)
       at Microsoft.ReportingServices.ReportProcessing.Execution.RenderReport.Execute(IRenderingExtension newRenderer)
       --- End of inner exception stack trace ---;
    w3wp!library!1b!02/25/2014-12:18:44:: e ERROR: Microsoft.ReportingServices.ReportProcessing.ReportProcessingException: An unexpected error occurred in Report Processing. ---> System.Threading.ThreadAbortException: Thread was being aborted.
       at System.AppDomain.GetId()
       at System.Threading.Thread.GetCurrentCultureNoAppX()
       at Go17(RegexRunner )
       at System.Text.RegularExpressions.RegexRunner.Scan(Regex regex, String text, Int32 textbeg, Int32 textend, Int32 textstart, Int32 prevlen, Boolean quick, TimeSpan timeout)
       at System.Text.RegularExpressions.Regex.Run(Boolean quick, Int32 prevlen, String input, Int32 beginning, Int32 length, Int32 startat)
       at System.Text.RegularExpressions.Regex.Match(String input)
       at Microsoft.ReportingServices.ReportProcessing.Validator.ValidateColor(String color, Color& c, Boolean allowTransparency)
       at Microsoft.ReportingServices.OnDemandReportRendering.ReportColor..ctor(String color, Boolean allowTransparency)
       at Microsoft.ReportingServices.Rendering.ExcelOpenXmlRenderer.OpenXmlGenerator.AddColor(String colorString)
       at Microsoft.ReportingServices.Rendering.ExcelRenderer.Layout.LayoutEngine.ItemInfo.FillBorders(RPLStyleProps style, Boolean omitBorderTop, Boolean omitBorderBottom, IExcelGenerator excel)
       at Microsoft.ReportingServices.Rendering.ExcelRenderer.Layout.LayoutEngine.RenderNewItem(IRowItemStruct item, Int32 top, Int32 topRow, IExcelGenerator excel, String pageContentKey, Dictionary`2 sharedBorderCache, Dictionary`2 sharedImageCache,
    Boolean& autosizableGrow, Boolean& autosizableShrink)
       at Microsoft.ReportingServices.Rendering.ExcelRenderer.Layout.LayoutEngine.RenderPageToExcel(IExcelGenerator excel, String key, Dictionary`2 sharedBorderCache, Dictionary`2 sharedImageCache)
       at Microsoft.ReportingServices.Rendering.ExcelRenderer.MainEngine.RenderRPLPage(RPLReport report, Boolean headerInBody, Boolean suppressOutlines)
       at Microsoft.ReportingServices.Rendering.ExcelRenderer.ExcelRenderer.Render(Report report, NameValueCollection reportServerParameters, NameValueCollection deviceInfo, NameValueCollection clientCapabilities, Hashtable& renderProperties, CreateAndRegisterStream
    createAndRegisterStream)
       at Microsoft.ReportingServices.ReportProcessing.Execution.RenderReport.Execute(IRenderingExtension newRenderer)
       --- End of inner exception stack trace ---
       at Microsoft.ReportingServices.ReportProcessing.Execution.RenderReport.Execute(IRenderingExtension newRenderer)
       at Microsoft.ReportingServices.Library.RenderFromSnapshot.CallProcessingAndRendering(ProcessingContext pc, RenderingContext rc, OnDemandProcessingResult& result)
       at Microsoft.ReportingServices.Library.RenderStrategyBase.ExecuteStrategy(OnDemandProcessingResult& processingResult)
    w3wp!wcfruntime!1b!02/25/2014-12:18:44:: e ERROR: Reporting Services fault exception System.ServiceModel.FaultException`1[Microsoft.ReportingServices.ServiceContract.RsExceptionInfo]: An unexpected error occurred in Report Processing. ---> Microsoft.ReportingServices.ReportProcessing.ReportProcessingException:
    An unexpected error occurred in Report Processing. ---> System.Exception: For more information about this error navigate to the report server on the local server machine, or enable remote errors (Fault Detail is equal to Microsoft.ReportingServices.ServiceContract.RsExceptionInfo).
     Any solutions?

    Hi Vichu,
     According to your description, you encountered a timeout issue when rendering your report. Right?
     In Reporting Services, this issue sometimes happens when rendering a big report, because it exceeds the default Query Timeout. Generally, we increase the Query Timeout and Session Timeout so that there is more time for retrieving data.
     In this scenario, since you have already made this adjustment and the issue still occurs, we suggest you optimize the performance of your report. Please refer to the articles below:
    Processing large reports:
    http://technet.microsoft.com/en-us/library/ms159638.aspx
    Performance (Reporting Services):
    http://technet.microsoft.com/en-us/library/bb522786.aspx
    Improving report performance with caching:
    http://technet.microsoft.com/en-us/library/ms155927.aspx
    Report performance optimization tips:
    http://blogs.msdn.com/b/robertbruckner/archive/2009/01/08/report-performance-optimization-tips-subreports-drilldown.aspx
    In addition, we recommend you install the Updates for SSRS 2012:
    http://sqlserverbuilds.blogspot.com/
    If you have any question, please feel free to ask.
    Best Regards,
    Simon Hou

  • Best practice for optimizing processing of big XMLs?

    All,
     What is the best practice when dealing with large XML files (say, a couple of MBs)?
     Instead of having to read the file from the file system every time a static method is run, what would be the best way for the program to read the file once and then keep it in memory, so that the next time it would not have to read and parse it all over again?
     Currently my code just reads the file in the static method, like ...
     public static String doOperation(String path, ...) throws Exception {
         try {
             String masterFile = path + "configfile.xml";
             Document theFile = (Document) getDocument(masterFile);
             Element root = theFile.getDocumentElement();
             NodeList nl = root.getChildNodes();
             // ... operations on file
    Optimization tips and tricks most appreciated :-)
    Thanks,
    David

    The best practice for multi-megabyte XML files is not to have them at all.
    However if you must, presumably you don't need all of the information in your XML, repeatedly. Or do you? If you need a little bit of it here, then another little bit of it there, then yet another little bit of it later, then you shouldn't have stored your data in one big XML.
    Sorry if that sounds unhelpful, but I'm having trouble imagining a scenario when you need all the data in an XML document repeatedly. Perhaps you could expand on your design?
     PC²
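     If the file really must stay as one XML document, the read-once-and-keep-in-memory approach the question asks about can be sketched as below: parse the file on first use and cache the resulting DOM in a static field, so later calls reuse the in-memory tree instead of re-parsing. The class and method names, the double-checked locking layout, and the single-config-file assumption are all illustrative, not from the original post.

```java
import java.io.File;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

// Sketch: lazily parse an XML config file once and cache the DOM.
// ConfigCache/getConfig are hypothetical names for illustration.
public final class ConfigCache {
    // volatile so the fully-parsed Document is visible to all threads
    private static volatile Document cached;

    public static Document getConfig(String path) throws Exception {
        Document doc = cached;
        if (doc == null) {
            synchronized (ConfigCache.class) {
                doc = cached;
                if (doc == null) {
                    // First call only: read and parse from disk.
                    doc = DocumentBuilderFactory.newInstance()
                            .newDocumentBuilder()
                            .parse(new File(path));
                    cached = doc; // subsequent calls skip the parse
                }
            }
        }
        return doc;
    }
}
```

     One caveat: the W3C DOM API makes no thread-safety guarantees even for concurrent reads, so if several threads traverse the cached Document at once, it is safer to extract the values you need once into plain immutable objects and share those instead of the tree.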
