Extensive IO and CPU for simple queries

Hi,
I have a machine running Oracle 9.2.0 on Solaris 10.
Every simple query causes very heavy I/O and a lot of CPU. This happens only on one particular machine (we have the same database version and Solaris on another machine, and it works fine).
One example is when I use Enterprise Manager to get the "configuration" information of the instance; it uses 50% I/O. I got the trace file and ran it through tkprof, with the following result:
SELECT UNIQUE sp.name, sp.sid, DECODE(p.type, 1, 'Boolean', 2, 'String', 3,'Integer', 4, 'Filename', ' '), sp.value, p.issys_modifiable, p.description FROM v$spparameter sp, v$parameter p WHERE sp.name = p.name ORDER BY sp.name,sp.sid
call     count   cpu  elapsed  disk  query  current  rows
-------  -----  ----  -------  ----  -----  -------  ----
Parse        4  0.02     0.01     0      0        0     0
Execute      4  0.00     0.00     0      0        0     0
Fetch        9  4.36    34.12  7980      0        0   783
-------  -----  ----  -------  ----  -----  -------  ----
total       17  4.38    34.13  7980      0        0   783
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 5 (SYSTEM)
  Rows  Row Source Operation
------  -----------------------------------------------------
   261  SORT UNIQUE (cr=0 pr=0 pw=0 time=1214116 us)
   261    HASH JOIN (cr=0 pr=0 pw=0 time=1221296 us)
361485      MERGE JOIN CARTESIAN (cr=0 pr=0 pw=0 time=370609 us)
   261        FIXED TABLE FULL X$KSPSPFILE (cr=0 pr=0 pw=0 time=19777 us)
361485        BUFFER SORT (cr=0 pr=0 pw=0 time=6413 us)
  1385          FIXED TABLE FULL X$KSPPCV (cr=0 pr=0 pw=0 time=4180 us)
  1379      FIXED TABLE FULL X$KSPPI (cr=0 pr=0 pw=0 time=7001 us)
It seems Oracle is doing full table scans of X$KSPPCV and X$KSPPI.
Can anybody give me some suggestions to improve the performance?
thanks.

Is there a difference in the query plans on the two machines?
Did you analyze the SYS and SYSTEM schemas on one system and not the other?
Are there different initialization parameters on the two machines?
What do you mean by "it use 50% IO"? I'm not sure what that means, or how you're measuring it.
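One quick way to answer the third question is to spool the non-default parameters on each machine and diff the two output files. A minimal SQL*Plus sketch (run as a DBA on each machine; the spool file name param_dump.lst is just an example):

set pagesize 0 linesize 200 trimspool on
spool param_dump.lst
-- only parameters that have been changed from their defaults
select name || ' = ' || value
from v$parameter
where isdefault = 'FALSE'
order by name;
spool off

Anything that differs between the two files is a candidate explanation for the different behaviour.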
Justin
Distributed Database Consulting, Inc.
http://www.ddbcinc.com/askDDBC

Similar Messages

  • High TimeDataRetrieval in SSRS, for simple queries

    Hi there
    We have a SharePoint 2013 instance running PowerView, that is hitting an SSAS Tabular Model instance in the back-end.
    We are seeing very slow response times in PowerView. I dug into things and noted that if I use a Report Data Source connection that impersonates the current user, it runs very slowly; whereas if I set the Report Data Source in SharePoint to
    "Use the following credentials" and specify the SAME account I am logged in as (the one that gives the slow results), it all works lightning fast.
    SSAS doesn't seem to be the issue (it has all data in an in-memory SSAS Tabular Model implementation, CPU and RAM usage are very low throughout, and in reviewing Profiler results the query responses seem to be the same on both the SLOW and FAST runs).
    I checked Fiddler, which pointed at the SSRS calls being slow. On reviewing the SSRS logs, I see the following (the same 3 queries were run for each, returning the same number of records in each case (roughly 51); results here are in ms and are for
    the "TimeDataRetrieval" value):
    QUERIES WITHOUT EMBEDDED CREDENTIALS:
    Query 1 - 3074
    Query 2 - 3085
    Query 3 - 84
    QUERIES WITH EMBEDDED CREDENTIALS:
    Query 1 - 76
    Query 2 - 61
    Query 3 - 9
    I also noted that if I run the FAST connection query, then close the IE window, open a new IE window, and then use the OTHER, SLOW connection, it works fast for every request I make for the time being, as if that connection is cached somewhere
    for this user?
    Any thoughts would be greatly appreciated.
    Thanks
    David

    Hi Jude_44,
    Thank you for your question.
    I am trying to involve someone more familiar with this topic for a further look at this issue. Some delay might be expected while the job is transferred. Your patience is greatly appreciated.
    Thank you for your understanding and support.
    Thanks,
    Wendy Fu
    TechNet Community Support

  • Deploying Adobe Extension Manager and Toolkit for CreateJS

    Has anyone tried deploying Adobe Extension Manager and/or Toolkit for CreateJS through Group Policy? Any tips for getting it to work?

    We haven't tried to deploy Extension Manager through Group Policy. Maybe you can consider Adobe Application Manager Enterprise Edition (AAMEE): http://www.adobe.com/devnet/creativesuite/enterprisedeployment.html .

  • CS3 How to choose a Codec/extension wrapper and Preset for a project

    Goal 
    To create fanvideos from commercial DVDs for personal fair use.
    Potential to use to promote a series in a social media campaign with permission of owner.
    Tools on Hand
    Three seasons worth of the series ripped from commercial DVD --> .VOB format in TS Folders on my 2008 10.5.8 Mac using MacTheRipper
    FFMpegx CODEC pack.
    CS3 Premiere Pro with Mac updates that were posted on the Adobe site.
    So far I've been able to
    I managed to use ffmpegx to convert my .vobs into .mp4s, which I then renamed as .movs to get Premiere to recognize them.  I managed to sync my audio with the mp4 encoder (may have been an H264, but not sure) by changing the frame rate from NTSC film to NTSC.
    Problems I've encountered
    I had admittedly been experimenting with different codecs and extensions, trying to find the optimum quality:size:Premiere/iDVD compatibility, so I had a variety of source footage in my project.  For some, I had to readjust the pixel aspect ratio.  But when I rendered/exported the final product, everything was squished and incorrect.  I had sound-syncing issues for some of my trials, too.  Before, I was trying to make things compatible with iDVD.  I no longer care about this and have no intention of burning anything with this project.
    I saw things on the internet about using Encore's library of codecs, but couldn't find any .dll on my computer.  I also have no idea how to use any of the CS3 support software (Bridge, Extension Manager, After Effects).
    Current steps I'd like to take
    I'm starting over and converting my .vob files into something Adobe will recognize (a simple file name change didn't work in this instance).  I've read the different file types that Adobe PPro recognizes, but it's come to my attention that it prefers some file types to others.  Which extension wrappers are the least likely to cause headaches when trying to edit in Pro?
    Out of those possible codecs and extension wrappers that are compatible, which are the best for fitting a lot of quality into a small file size?  I heard good things about H264, but I don't know if the patch I downloaded will be enough to make it Adobe Premiere Pro compatible (and I know there are a few different kinds of H264; the only one I had any success with at all put it in an mp4 MPEG container, but didn't end up being any file-size savings compared to the other H264, which ended up being useless).  I have about 60 .vobs that are 2 GB big, and don't have an external hard drive yet, so this is a consideration.
    How do I choose a preset for my new project?  These DVDs were released over the course of 3 years, and they don't all have the same specs in their native format (when I grab the info from DVD Player, it's different).  Will I have better luck using different seasons if I convert them from .vobs to identical target file types?  I guess the better question is: when using 3 different types of files that are wrapped into .vobs, am I better off customizing the target file to match the file information DVD Player gives me for each different season, or trying to find specs that serve all three seasons best and have more uniform files imported into Premiere?  Either way, how do I know how to pick a preset?  I try keeping a 16:9 or 16:9 DVD in my target files, but does that make it a widescreen project to start?  Once I start the project, is it too late to change these settings?
    Thank you for helping a newbie.  I tried looking around for answers to each problem as I came across them, but eventually I got overwhelmed and threw my hands up, deleting everything.  I'm ready to ask for help and have a little patience.
    *It's important for me to use free or very affordable 3rd party software, as I am currently unemployed. 

    You MAY want to think about a product that will read the output from your DVD player and convert to DV AVI
    Matt with Grass Valley Canopus in their tech support department stated that the model 110 will suffice for most hobbyists. If a person has a lot of tapes that were played often, the tape stretches and the magnetic coding diminishes. If your goal is to encode tapes in good shape, buy the 110; if you will be encoding old tapes of poor quality, buy the model 300.
    Both the 110 and 300 are two-way devices, so you may output back to tape... if you don't need that, look at the model 55.
    http://www.grassvalley.com/products/advc55 One Way Only to Computer
    http://www.grassvalley.com/products/advc110 for good tapes, or
    http://www.grassvalley.com/products/advc300 better with OLD tapes

  • Let's talk motherboard and CPU in simple English please-

    I am a home enthusiast/student (signed up with a popular web site) that loves all things Adobe. I have the Master Suite. I love Premiere Pro (can’t wait to learn After Effects). I have a consumer-grade AVCHD video camera from a ‘big box store’.
    I don’t care about having a gaming rig or overclocking, or learning about RAID, or a huge/heavy computer case in my tiny office. So the minimum number of drives will work.
    I have been watching Newegg TV and the ASUS guy nearly put me in a coma with all the acronyms.
    I want a ‘middle of the road’ rig.
    Can someone enlighten me on the following:
    Sandy Bridge vs. Ivy Bridge?
    LGA 1155 vs. LGA 2011?
    What does this mean: “Onboard Video Chipset: Supported only by CPU with integrated graphic”? This is a common note on the motherboard description on the website.
    SSD caching?
    Thanks.

    >I want a ‘middle of the road’ rig
    What does that mean in dollars?
    Check current prices, but last month this was under $1,500 (excluding software)
    Intel i7 3770k CPU
    http://www.newegg.com/Product/Product.aspx?Item=N82E16819116501
    Motherboard
    http://www.newegg.com/Product/Product.aspx?Item=N82E16813121640
    16Gig Ram
    http://www.newegg.com/Product/Product.aspx?Item=N82E16820148600
    Mid-Tower Case
    http://www.newegg.com/Product/Product.aspx?Item=N82E16811129042
    850w Power Supply
    http://www.newegg.com/Product/Product.aspx?Item=N82E16817171061
    500Gig Drive
    http://www.newegg.com/Product/Product.aspx?Item=N82E16822136769
    500Gig Drive
    http://www.newegg.com/Product/Product.aspx?Item=N82E16822136769
    1Terabyte Drive
    http://www.newegg.com/Product/Product.aspx?Item=N82E16822236339
    GTX 660 Ti 2Gig
    http://www.newegg.com/Product/Product.aspx?Item=N82E16814130809
    120mm x2 Case Fan
    http://www.newegg.com/Product/Product.aspx?Item=N82E16835103060
    Keyboard & Mouse
    http://www.newegg.com/Product/Product.aspx?Item=N82E16823109232
    Sata DVD Writer
    http://www.newegg.com/Product/Product.aspx?Item=N82E16827135204
    Again check current... last month this was under $2,000 (excluding software)
    Intel i7 3930k CPU
    http://www.newegg.com/Product/Product.aspx?Item=N82E16819116492
    Motherboard
    http://www.newegg.com/Product/Product.aspx?Item=N82E16813121552
    32Gig Ram
    http://www.newegg.com/Product/Product.aspx?Item=N82E16820231507
    Full Tower Case
    http://www.newegg.com/Product/Product.aspx?Item=N82E16811119225
    1000w Power Supply
    http://www.newegg.com/Product/Product.aspx?Item=N82E16817171056
    500Gig Drive
    http://www.newegg.com/Product/Product.aspx?Item=N82E16822136769
    500Gig Drive
    http://www.newegg.com/Product/Product.aspx?Item=N82E16822136769
    1Terabyte Drive
    http://www.newegg.com/Product/Product.aspx?Item=N82E16822236339
    1Terabyte Drive
    http://www.newegg.com/Product/Product.aspx?Item=N82E16822236339
    GTX670 2Gig
    http://www.newegg.com/Product/Product.aspx?Item=N82E16814130782
    Keyboard & Mouse
    http://www.newegg.com/Product/Product.aspx?Item=N82E16823109232
    Sata DVD Writer
    http://www.newegg.com/Product/Product.aspx?Item=N82E16827135204
    CPU Cooler
    http://www.newegg.com/Product/Product.aspx?Item=N82E16835103099
    Premiere Pro drive configuration - NOTE that I do NOT do raid
    500Gig #1 for Windows and all software
    500Gig #2 for Projects and all temp files
    1T #1 for all source media (video and sound) & exports w/3 drives
    1T #2 for all exported media w/4 drives

  • Lightroom cc way slower than Lr5 and struggling for simple cropping and panning

    I wonder if there is something wrong here. I have a decent machine (2500k overclocked to almost 5ghz, GTX570, 8Gb ram, Lr installed on a SSD).
    LR5 served me well for a long time, and despite being slow at some tasks it was ok. On the other hand, LRcc offers very few new features and a big performance decrease.
    I'm editing some 20 MP Canon raw files, and the simple task of cropping/rotating/panning has become a nightmare. It is slow, and something appears to be very wrong when I try to recrop a virtual copy of an image. On top of moving very slowly, the crop rectangle sometimes keeps resetting its position and "refuses" to stay in the place I put it.
    When I zoom and pan in Library mode everything goes well; it only gets bad in Develop mode. If I zoom 1:1 and have some adjustments made with the spot removal brush, it feels like I'm panning a gigantic file on a 20-year-old computer. Even on a clean image with no adjustments at all, Develop mode struggles badly to pan when zoomed.
    My catalog has around 40,000 files but it shouldn't matter when developing a single photo. Again, the same catalog was on LR5 and didn't have any of these issues.
    I checked the computer's resource usage, and nothing is actually being pushed to the limit. Total memory usage never goes above 5 GB (out of 8 GB) and processor usage doesn't rise above 40%. It feels like LRcc is not making proper use of the computing power available.

    Just replying to myself here: I tried turning off the option under Preferences > Performance to use my graphics processor, and performance improved immediately. Now it's on par with Lr5.
    I am using the 350.12 (latest as of today) version of the Nvidia driver with a GTX570. Apparently there is an issue in the way that LRcc uses my video card. I hope this gets fixed, but at least now my LRcc is usable.

  • OK WHATS THE BEST MOTHERBOARD AND CPU FOR ONLINE GAMING

    Here I go again, thinking about stepping up on motherboards and CPUs! If you have an opinion on which would make the best online gaming setup, let me hear you holler! I will only go with Intel, so all AMD lovers don't bother!

    He's better off with a Q6600 over the E6850, because you can clock the Q6600 to QX6850 speeds and it's quad core, so future proof anyway.
    And the Q6600 costs less.
    However the CPU will be very limited by that 7900GT.
    A 8800GTX or HD2900XT would be a better choice.
    EDIT: Get a G0 stepping Q6600, they have a lower TDP of 95w and require less voltage.

  • Offset and Limit for sql queries

    Hello all,
    Can anybody help me please...
    I have used OFFSET and LIMIT in PostgreSQL when we want to get only a number of rows starting from a particular row of a table. Is there anything similar available in Oracle?
    Hope some one can help me soon...
    Thanks and regards,
    sreejesh Rajan,

    Use the following query to get records from the 10th row to the 14th row:
    select * from scott.emp where rownum < 15
    minus
    select * from scott.emp where rownum < 10
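    Note that the rownum/minus trick only gives a stable answer if the row order is deterministic. An analytic-function version is the more usual Oracle approach, and it reads the table only once. A sketch against the same scott.emp demo table (assuming you want rows 10-14 ordered by empno; swap in whatever ORDER BY defines "row 10" for you):

    select *
    from (select e.*,
                 row_number() over (order by empno) rn
          from scott.emp e)
    where rn between 10 and 14;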

  • "System Resource Exceeded" for simple select query in Access 2013

    Using Access 2013 32-bit on Windows Server 2008 R2 Enterprise. This computer has 8 GB of RAM.
    I am getting:
    "System Resource Exceeded"  errors in two different databases
    for simple queries like:
    SELECT FROM .... GROUP BY ...
    UPDATE... SET ... WHERE ...
    I compacted the databases several times, with no result. One database size is approx. 1 GB, the other one is approx. 600 MB.
    I didn't have any problems in Office 2010, so I had to revert to that version.
    Please advise.
    Regards,
    M.R.

    Hi Greg. I too am running Access on an RDP server. Checking Task Manager, I can see many copies of MSACCESS running in the process list, from all users on the server. We typically have 40-60 users on that server. I am only changing the Processor Affinity
    for MY copy, and only when I run into this problem. Restarting Access daily, I always get back to multi-processor mode soon thereafter.
    As this problem only seems to happen on very large Access table updates, and as there are only three of us performing those kind of updates, we have good control on who might want to change the affinity setting to solve this problem. However, I
    understand that in other environments this might not be a good solution. In my case, we have 16 processors on the server, so I always take #1, my co-worker here in the US always takes #2, etc. This works for us, and I am only describing it here in case it
    works for someone else.
    The big question in my mind is what multi-threading methods are employed by Microsoft for Access that would cause this problem for very large datasets. Processing time for an update query on, say, 2 million records is massively improved by going down
    to 1 processor. The problem is easily reproduced, and so far I have not seen it in Excel even when working with very large worksheets. Also have not seen it in MS SQL. It is just happening in Access.

  • Gtksql pkgbuild. Simple GUI for sql queries.

    This is a simple program for running SQL queries. I made this PKGBUILD some time ago but I'm not sure how good it is. Gtksql should support both MySQL and PostgreSQL, but PostgreSQL support is broken (probably too old). The developer promises a brand new GTK2-based application. We will see...
    Since I can't make this PKGBUILD any better, I'm just posting it.
    gtksql PKGBUILD
    pkgname=gtksql
    pkgver=0.4.2
    pkgrel=1
    pkgdesc="Gtk front-end for sql queries"
    url="http://gtksql.sourceforge.net/"
    depends=('lua' 'gtk')
    makedepends=('mysql')
    source=(http://dl.sourceforge.net/sourceforge/gtksql/$pkgname-$pkgver.tar.gz)
    md5sums=('a0ba598027cd49f69f951a31342b51fd')
    build() {
      cd $startdir/src/$pkgname-$pkgver
      ./configure --prefix=/usr \
        --with-mysql \
        --with-lua
      make || return 1
      make prefix=$startdir/pkg/usr install
    }

    Hello:
    I'm trying to add a calculated variable to my view that isn't an Entity Attribute. It's simple really, but I don't know if I'm leaving something out or not.
    I add the "new Attribute" , call it nCompleted, type - number
    I then check off everything that is required (as documented in Help) for an SQL Derived Attribute, and the following code is entered in the Expression Box:
    Select group_services.office, count(client_group_services.clnt_group_services_id) as nCompleted
    Where group_services_id=client_group_services_id(+)
    And client_group_services_id.result="Completed";
    It isn't working; what am I missing????
    Sheena:
    It probably isn't working because of missing 'group by' clause. I don't know the exact detail of your query statement, but suppose you want to count number of employees in a dept, then you're query should look like:
    select dept.deptno, dept.dname, count(emp.empno) as nCount
       from dept, emp where dept.deptno = emp.deptno(+)
       group by dept.deptno, dept.dname  -- This group-by is important
    OR if you want to use nested SELECT
    select dept.deptno, dept.dname,
       (select count(emp.empno) from emp where dept.deptno=emp.deptno(+)) as nCount
       from dept

  • Extensions like Ghostery, WOT or AdBlock stop working after two or three times. Restarting the webpage in a new tab the extensions will work again for several times and then stop again. Has anybody an explanation or a workaround for this bug in Safari 5?


    Remove the extensions, redownload Safari, reload the extensions.
    http://www.apple.com/safari/download/
    And if you really want a better experience, use Firefox, tons more choices and possibilities there.
    Firefox's "NoScript" will block the Trojan going around on websites. Best web security you can get.
    https://addons.mozilla.org/en-US/firefox/addon/noscript/
    Ghostery, Ad Block Plus and thousands of add-ons more have originated on Firefox.

  • Can I use two Time Capsules? one as an extension of my laptop (for music and video storage) and the other one to back up everything from the laptop and  Time Capsule (for music and videos)


    Not via Time Machine.   It cannot back up from a network location.
    The 3rd-party apps CarbonCopyCloner and ChronoSync may be workable alternatives.
    EDIT:  And, if you're going to do that, you could back up from the Time Capsule to a USB drive connected to the TC's USB port.  Second TC not required.
    Message was edited by: Pondini

  • Transport request for BW queries and roles.

    Hi All,
    We need to create 20 BW queries on 4 MultiProviders. We need to save 18 queries as workbooks in one role and the other 2 queries in another role.  Both the roles and queries do not exist yet and will be created in the Development environment.
    We just want to know how we can transport them to the Quality environment. What is the right method to transport them?
    Can we transport all the objects (queries, workbooks and roles) in a number of transport requests, so that if a few queries or workbooks need changes, we do not have to transport all objects, just the request that includes the changed objects?
    Thanks & Kind Regards,
    Hardeep

    Thanks a lot to all of you for your quick response. But I still have questions.
    If we create one transport for the roles and one transport for each query, then we will have 21 transport requests. But transport requests on the same MultiProvider can lock the calculated key figures and restricted key figures: if they are present in more than one query, they will be present in more than one transport request, so they can be locked and the transport request will fail.
    If I just create one transport request for all the objects (roles, queries & workbooks), it will not lock any object, and the transport request will not fail. But I would have to transport all the objects again if I need to change one of the queries.
    Please let me know if there is a method by which I can divide my queries per MultiProvider and create transport requests per MultiProvider, so that we do not lock calculated key figures and restricted key figures. Can workbooks be published to the role in the same transport request? If so, and if in more than one transport request we are publishing different workbooks to the same role, will it lock the role?

  • Hyper-V Resource Pools for Memory and CPU

    Hi all,
    I'm trying to understand the concepts and details of resource pools in Hyper-V in Windows Server 2012. It seems as if there is almost no documentation on all that. Perhaps somebody can support me here, maybe I've not seen some docs yet.
    So far, I learned that resource pools in their current implementation serve mainly for metering purposes. You can create pools per tenant and then group VM resources into those pools to facilitate resource metering per tenant. That is, you enable metering
    once per pool and get all the data necessary to bill that one customer for all their resources (without metering individual VMs). Is that correct?
    Furthermore, it seems to me that an ethernet pool goes one step further by providing an abstraction level for virtual switches. As far as I've understood you can add multiple vSwitches to a pool and then connect a VM to the pool. Hyper-V then decides which
    actual switch to use. This may be handy in a multi-host environment if vSwitches on different hosts use different names although they connect to the same network. Is that correct?
    So - talking about actually managing that stuff I've learned how to create a pool and how to add VHD locations and virtual switches to a pool. Enabling resource metering for a pool then collects usage data from all the resources inside that pool.
    But now: I can create a pool for memory and a pool for CPU. But I cannot add resources to those. Neither can I add a complete VM to a pool. Now I'm launching a VM that belongs to a customer whose resources I'm metering. How will Hyper-V know that it's
    supposed to collect data on CPU and memory usage for that VM?
    Am I missing something here? Or is pool-based metering only good for ethernet and VHD resources, and CPU and memory still need to be metered per VM?
    Thanks for clarification,
    Nils
    Nils Kaczenski
    MVP Directory Services
    Hannover, Germany

    Thank you for the links. I already knew those, and unfortunately they do not match my question. Two of them are about Windows Server 2008/R2, and one only lists a WMI interface. What I'm after is a new feature in Windows Server 2012, and I need conceptual information.
    Thanks for the research anyway. I appreciate that a lot!
    In the meantime I've gotten quite far in my own research. See my entry above of January 7th. Some additions:
    In Windows Server 2012, Hyper-V resource pools are mainly for metering purposes. You cannot compare them to resource pools in VMware.
    A resource pool in Hyper-V (2012) facilitates resource metering and billing for VM usage especially in hosting scenarios. You can either measure resource usage for single VMs, or you can group existing resources (such as CPU power, RAM, virtual hard disk
    storage, Ethernet traffic) into pools. Those pools will mostly be assigned to one customer each. That way you can bill the customer for their resource usage in a given time period by just querying the customer's pool.
    Metering only collects aggregated data with one value per resource (i.e. overall CPU usage, maximum VHD storage, summed Ethernet traffic and so on). You can control the time period by explicitly resetting the counter at any given time (a day, a week, a
    month or what you like).
    There is no detailed data. The aggregate values serve as a basis for billing, not as monitoring data. If you need detailed monitoring data use Performance Monitor.
    There is currently only one type of resource pool that adds an abstraction layer to a virtualization farm, and that is the Ethernet type. You can use that type for metering, but you can also use it to group a number of virtual switches (that connect to
    the same network segment) and then a VM connected to that pool will automatically use an appropriate virtual switch from the pool. You need no longer worry about virtual switch names across multiple hosts as long as all equivalent virtual switches are
    added to the pool.
    While you can manage two types of pool resources in the GUI (VHD pools and Ethernet pools) you should only manage resource pools via PowerShell. Only there will you be able to control what happens. And only PowerShell provides a means to start, stop, and
    reset metering and query metering data.
    The process to use resource pools in Hyper-V (2012) in short:
    First create a new pool via PowerShell (New-VMResourcePool). (In case of a VHD pool you must specify the VHD storage paths to add to the pool in the moment you create the pool.)
    In case of an Ethernet pool add existing virtual switches to the pool (Add-VMSwitch).
    Reconfigure existing VMs that you want to measure so that they use resources from the pool. The PowerShell
    Set-VM* commands accept a parameter -ResourcePoolName to do that. Example:
    Set-VMMemory -VMName APP-02 -ResourcePoolName MyPool1
    Start measuring with Enable-VMResourceMetering.
    Query collected data as often as you need with Measure-VMResourcePool.
    Note that you should specify the pool resource type in the command to get reliable data (see my post above, Jan 7th).
    When a metering period (such as a week or a month) has passed, reset the counter to zero with
    Reset-VMResourceMetering.
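    Pulling the steps above together, a minimal PowerShell sketch for a memory pool (the names MyPool1 and APP-02 are just the examples used in this thread):

    # 1. Create the pool
    New-VMResourcePool -Name MyPool1 -ResourcePoolType Memory
    # 2. Point the VM's memory at the pool
    Set-VMMemory -VMName APP-02 -ResourcePoolName MyPool1
    # 3. Start metering, query it as often as needed, reset per billing period
    Enable-VMResourceMetering -ResourcePoolName MyPool1 -ResourcePoolType Memory
    Measure-VMResourcePool -Name MyPool1 -ResourcePoolType Memory
    Reset-VMResourceMetering -ResourcePoolName MyPool1 -ResourcePoolType Memory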
    Hope that helps. I consider this the answer to my own question. ;)
    Here's some links I collected:
    http://itproctology.blogspot.ca/2012/12/hyper-v-resource-pool-introduction.html
    http://www.ms4u.info/2012/12/configure-ethernet-resource-pool-in.html
    http://blogs.technet.com/b/virtualization/archive/2012/08/16/introduction-to-resource-metering.aspx
    http://social.technet.microsoft.com/Forums/en-US/winserverhyperv/thread/1ce4e2b2-8fdd-4f16-8ab6-e1e1da6d07e3
    Best wishes, Nils
    Nils Kaczenski
    MVP Directory Services
    Hannover, Germany

  • How to kill Forms Runaway Process using 95% CPU and running for 2 hours.

    We had a situation at an E-Business Suite customer (using Oracle VM Server) where some Forms processes were not being cleared automatically by the form timeout settings.
    Also, when a user exited the form session from the front end, the Linux form process (PID) and DB session did not exit properly, so they hung.
    They were spiking CPU and memory usage, causing E-Business Suite to perform slowly, and ultimately causing the VM host to reboot the production VM guest (running on Linux).
    We could see the form processes (PIDs) using almost 100% CPU with the "top" command and running for a long time.
    We also verified those form sessions did not exist in the application itself,
    i.e. using Grid Control -> OAM -> Site Map -> Monitoring (tab) -> "Form Sessions".
    That meant we could safely kill the form process from Linux using the "kill -9 <PID>" command.
    But that required continuous monitoring and manual DBA intervention, as the customer is 24x7.
    So, I wrote a shell script that does the following:
    •     A cron job runs every half an hour, 7 days a week, and calls this shell script.
    •     The shell script runs and tries to find the "top two" f60webmx processes (form sessions) using over 95% CPU, sampled at a 2-minute interval.
    •     If no process is found or CPU% is less than 95%, it exits and does nothing.
    •     If a top process is found, it searches for its DB session using the apps login (with a hidden apps password file - /home/applmgr/.pwd).
    a.     If a DB session is NOT found (which means the form process is hung), it kills the process from Unix and emails the results to <[email protected]>
    b.     If a DB session is found, it waits for 2 hours so that the form process times out automatically via the form session timeout setting.
    It also emails the SQL to check the DB session for that form process.
    c.     If a DB session is found and it does not time out after 2 hours,
    it kills the process from Unix (which in turn kills the DB session). Output is emailed.
    These are the files required for this:
    1. The cron job which calls the shell script looks like this:
    # Kill form runaway process, using over 95% cpu having no DB session or DB session for > 2hrs
    00,30 * * * * /home/applmgr/forms_runaway.sh 2>&1
    2. The SQL that this script calls is /home/applmgr/frm_runaway.sql and looks like this:
    set head off
    set verify off
    set feedback off
    set pagesize 0
    define form_client_PID = &1
    select count(*) from v$session s , v$process p, FND_FORM_SESSIONS_V f where S.AUDSID=f.audsid and p.addr=s.paddr and s.process='&form_client_PID';
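With `head off`, `feedback off`, and `pagesize 0`, sqlplus prints just the bare count, which the shell script then compares numerically. A sketch of that handling (the `check_session` helper is hypothetical, for illustration; it only simulates the value sqlplus would return):

```shell
#!/bin/sh
# Hypothetical helper mimicking how the script interprets the count
# returned by frm_runaway.sql (a bare number thanks to the set commands above).
check_session() {
  count=$(echo "$1" | tr -d '[:space:]')   # strip any stray whitespace
  if [ "$count" -eq 0 ]; then
    echo "no DB session: form process is hung, safe to kill"
  else
    echo "$count DB session(s): wait for the form session timeout"
  fi
}

check_session "0"       # hung process
check_session " 2 "     # active sessions, whitespace-padded
```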
    3. The actual shell script is /home/applmgr/forms_runaway.sh and looks like this:
    #!/bin/bash
    # Author : Amandeep Singh
    # Description : Kills runaway form processes using more than 95% cpu
    # and Form Session with no DB session or DB session > 2hrs
    # Dated : 11-April-2012
    . /home/applmgr/.bash_profile
    PWD=`cat ~/.pwd`
    export PWD
    echo "`date`">/tmp/runaway_forms.log
    echo "----------------------------------">>/tmp/runaway_forms.log
    VAR1=`top -b -u applmgr -n 1|grep f60webmx|grep -v sh|grep -v awk|grep -v top|sort -nrk9|head -2|sed 's/^[ \t]*//;s/[ \t]*$//'| awk '{ if ($9 > 95 && $12 == "f60webmx") print $1 " "$9 " "$11 " "$12; }'`
    PID1=`echo $VAR1|awk '{print $1}'`
    CPU1=`echo $VAR1|awk '{print $2}'`
    TIME1=`echo $VAR1|awk '{print $3}'`
    PROG1=`echo $VAR1|awk '{print $4}'`
    PID_1=`echo $VAR1|awk '{print $5}'`
    CPU_1=`echo $VAR1|awk '{print $6}'`
    TIME_1=`echo $VAR1|awk '{print $7}'`
    PROG_1=`echo $VAR1|awk '{print $8}'`
    echo "PID1="$PID1", CPU%="$CPU1", Running Time="$TIME1", Program="$PROG1>>/tmp/runaway_forms.log
    echo "PID_1="$PID_1", CPU%="$CPU_1", Running Time="$TIME_1", Program="$PROG_1>>/tmp/runaway_forms.log
    echo " ">>/tmp/runaway_forms.log
    sleep 120
    echo "`date`">>/tmp/runaway_forms.log
    echo "----------------------------------">>/tmp/runaway_forms.log
    VAR2=`top -b -u applmgr -n 1|grep f60webmx|grep -v sh|grep -v awk|grep -v top|sort -nrk9|head -2|sed 's/^[ \t]*//;s/[ \t]*$//'| awk '{ if ($9 > 95 && $12 == "f60webmx") print $1 " "$9 " "$11 " "$12; }'`
    PID2=`echo $VAR2|awk '{print $1}'`
    CPU2=`echo $VAR2|awk '{print $2}'`
    TIME2=`echo $VAR2|awk '{print $3}'`
    PROG2=`echo $VAR2|awk '{print $4}'`
    PID_2=`echo $VAR2|awk '{print $5}'`
    CPU_2=`echo $VAR2|awk '{print $6}'`
    TIME_2=`echo $VAR2|awk '{print $7}'`
    PROG_2=`echo $VAR2|awk '{print $8}'`
    HRS=`echo $TIME1|cut -d: -f1`
    exprHRS=`expr "$HRS"`
    echo "PID2="$PID2", CPU%="$CPU2", Running Time="$TIME2", Program="$PROG2>>/tmp/runaway_forms.log
    echo "PID_2="$PID_2", CPU%="$CPU_2", Running Time="$TIME_2", Program="$PROG_2>>/tmp/runaway_forms.log
    echo " ">>/tmp/runaway_forms.log
    # If PID1 or PID2 is NULL
    if [ -z "${PID1}" ] || [ -z "${PID2}" ]
    then
    echo "no top processes found. Either PID is NULL OR CPU% is less than 95%. Exiting...">>/tmp/runaway_forms.log
    elif
    # If PID1 is equal to PID2 or PID1=PID_2 or PID_1=PID2 or PID_1=PID_2
    [ "${PID1}" = "${PID2}" ] || [ "${PID1}" = "${PID_2}" ] || [ "${PID_1}" = "${PID2}" ] || [ "${PID_1}" = "${PID_2}" ];
    then
    DB_SESSION=`$ORACLE_HOME/bin/sqlplus -S apps/$PWD @/home/applmgr/frm_runaway.sql $PID1 << EOF
    EOF`
    echo " ">>/tmp/runaway_forms.log
    echo "DB_SESSION ="$DB_SESSION >>/tmp/runaway_forms.log
    # if no DB session found for PID
    if [ "$DB_SESSION" -eq 0 ]; then
    echo " ">>/tmp/runaway_forms.log
    echo "Killed Following Runaway Forms Process:">>/tmp/runaway_forms.log
    echo "-------------------------------------------------------------------">>/tmp/runaway_forms.log
    echo "PID="$PID1", CPU%="$CPU1", Running Time="$TIME1", Program="$PROG1>>/tmp/runaway_forms.log
    kill -9 $PID1
    #Email the output
    mailx -s "Killed: `hostname -a` Runaway Form Processes" [email protected] </tmp/runaway_forms.log
    cat /tmp/runaway_forms.log
    else
    # If DB session exists for PID
    if [ ${exprHRS} -gt 120 ]; then
    echo $DB_SESSION " Database session(s) exist for this forms process (PID="$PID1"), but it has been running for more than 2 hours.">>/tmp/runaway_forms.log
    echo "Process running time is "$exprHRS" minutes.">>/tmp/runaway_forms.log
    echo "Killed Following Runaway Forms Process:">>/tmp/runaway_forms.log
    echo "-------------------------------------------------------------------">>/tmp/runaway_forms.log
    echo "PID="$PID1", CPU%="$CPU1", Running Time="$TIME1", Program="$PROG1>>/tmp/runaway_forms.log
    kill -9 $PID1
    #Email the output
    mailx -s "`hostname -a`: Runaway Form Processes" [email protected] </tmp/runaway_forms.log
    cat /tmp/runaway_forms.log
    else
    echo "Process running time is "$exprHRS" minutes.">>/tmp/runaway_forms.log
    echo $DB_SESSION " Database session(s) exist for PID="$PID1" and the process is less than 2 hours old. Not killing...">>/tmp/runaway_forms.log
    echo "For more details on this PID, run following SQL query;">>/tmp/runaway_forms.log
    echo "-----------------------------------------------------------------------">>/tmp/runaway_forms.log
    echo "set pages 9999 lines 150">>/tmp/runaway_forms.log
    echo "select f.user_form_name, f.user_name, p.spid DB_OS_ID , s.process client_os_id, s.audsid, f.PROCESS_SPID Forms_SPID,">>/tmp/runaway_forms.log
    echo "to_char(s.logon_time,'DD-Mon-YY hh:mi:ss'), s.seconds_in_wait">>/tmp/runaway_forms.log
    echo "from v\$session s , v\$process p, FND_FORM_SESSIONS_V f">>/tmp/runaway_forms.log
    echo "where S.AUDSID=f.audsid and p.addr=s.paddr and s.process='"$PID1"' order by p.spid;">>/tmp/runaway_forms.log
    mailx -s "`hostname -a`: Runaway Form Processes" [email protected] </tmp/runaway_forms.log
    cat /tmp/runaway_forms.log
    fi
    fi
    else
    #if PID1 and PID2 are not equal or CPU% is less than 95%.
    echo "No unique CPU hogging form processes found. Exiting...">>/tmp/runaway_forms.log
    cat /tmp/runaway_forms.log
    fi
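The `top | awk` pipeline inside the script can be exercised against canned top output to check the field positions it assumes ($1=PID, $9=%CPU, $11=TIME+, $12=COMMAND). The sample rows below are fabricated, and the awk comparison is written as `==`:

```shell
#!/bin/sh
# Feed fabricated top output through the script's awk filter to verify
# that only the >95% CPU f60webmx row survives.
sample="12345 applmgr 25 0 1200m 300m 10m R 98.7 3.9 123:45 f60webmx
23456 applmgr 25 0 1100m 280m 10m S 10.2 3.5 1:23 f60webmx"
echo "$sample" | awk '{ if ($9 > 95 && $12 == "f60webmx") print $1 " " $9 " " $11 " " $12 }'
# -> 12345 98.7 123:45 f60webmx
```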
    If you have the same problem with other Unix and DB processes, the script can easily be modified and reused.
    But use it only after thorough testing (for example, by commenting out the <kill -9 $PID1> lines).
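One low-risk way to do that testing, instead of editing the kill lines in place, is to route the kill through a small wrapper with a dry-run switch. This is a hypothetical pattern, not part of the original script; `kill_proc` and `DRY_RUN` are names I made up:

```shell
#!/bin/sh
# Hypothetical dry-run wrapper: logs what would be killed instead of
# sending the signal, until DRY_RUN is explicitly set to 0.
DRY_RUN=${DRY_RUN:-1}
kill_proc() {
  pid=$1
  if [ "$DRY_RUN" -eq 1 ]; then
    echo "DRY-RUN: would kill -9 $pid"
  else
    kill -9 "$pid"
  fi
}

kill_proc 12345    # -> DRY-RUN: would kill -9 12345
```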
    Good luck.
    Edited by: R12_AppsDBA on 19/04/2012 13:10

    Thanks for sharing the script!
    Hussein
