Multiple logical disks as custom objects

Hello All,
I have been working with PowerShell for a few months now and my goal is to help my colleagues perform the same tasks a little faster.
As such I took on the responsibility of creating a little GUI tool to find some details about a remote computer.
I have created an advanced function that outputs a custom object.
I am limited to PowerShell V2 as this is the version used in the company.
The function is as below:
<#
.Synopsis
   Short description
.DESCRIPTION
   Long description
.EXAMPLE
   Example of how to use this cmdlet
.EXAMPLE
   Another example of how to use this cmdlet
#>
function Get-PartitionInfo
{
    [CmdletBinding()]
    Param
    (
        # Computer Name
        [Parameter(Mandatory=$true,ValueFromPipelineByPropertyName=$true,Position=0)]
        [string[]]$ComputerName
    )
    Process
    {
        Write-Verbose "Testing Machine"
        if(Test-Connection $ComputerName -Count 1 -Quiet)
        {
            Try
            {
                Write-Verbose "Gathering Data"
                $partitions = Get-WmiObject -Class win32_logicalDisk -ComputerName $ComputerName -ErrorAction Stop
                $myobj = @{
                    'Device ID'   = $partitions.DeviceID
                    'Access'      = $partitions.Access
                    'Compressed'  = $partitions.Compressed
                    'Drive Type'  = $partitions.DriveType
                    'File System' = $partitions.FileSystem
                    'Free Space'  = $partitions.FreeSpace
                    'Media Type'  = $partitions.MediaType
                    'Size'        = $partitions.Size
                    'Name'        = $partitions.Name
                }
                Write-Output "Partition information"
                Write-Output (New-Object -TypeName PSObject -Property $myobj)
            }
            Catch
            {
                Write-Output "Machine not online or PowerShell is unable to read it"
                Write-Output "$($Error[0])"
            }
        }
        else
        {
            Write-Output "Machine not online"
        }
    }
}
Please excuse the fact that it is not complete; after I get it to work it will be polished.
My problem is as follows:
The output of this function is correct, but it comes out as below:
Partition information
Drive Type  : {3, 5, 3}
Compressed  : {False, $null, False}
Device ID   : {C:, D:, E:}
Size        : {167772155904, $null, 5356126208}
Name        : {C:, D:, E:}
Free Space  : {71496396800, $null, 5356109824}
Access      : {0, $null, 0}
Media Type  : {12, 11, 12}
File System : {NTFS, $null, FAT32}
What I am trying to do is to separate the partitions, but I have not had any success.
I have searched high and low for a solution to list custom objects one by one.
Anyone have any idea if this can be done?

You have to enumerate the results of the WMI call.
You are also somewhat off when it comes to how to produce output with PowerShell.
You are trying to do too many things that you do not understand, and they are all conflicting. You cannot mix Write-Output statements into a collection or it will not output correctly. Study the basics of how PowerShell does output.
Here is how to get the output objects.  Test it manually with a number of computers to see how the output changes.  Try to understand why.
$computer = $env:COMPUTERNAME
if(Test-Connection $computer -Count 1 -Quiet){
    Get-WmiObject -Class win32_logicalDisk -ComputerName $computer |
        Select-Object Access,Compressed,DriveType,FileSystem,FreeSpace,MediaType,Size,Name
}else{
    Write-Host 'Machine not online or PowerShell is unable to read it'
}
Once you understand why I did it this way and why it works, you will have the basics needed to understand how to generalize.
Try not to write or use code that you do not understand. Test and ask questions. Build the knowledge base from the ground up.
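If it helps tie the two together, here is a minimal V2-compatible sketch of the same idea folded back into your layout: enumerate the disks and emit one custom object per disk. The property names are copied from your function; the computer name is assumed local.
$computer = $env:COMPUTERNAME
foreach($partition in (Get-WmiObject -Class win32_logicalDisk -ComputerName $computer)){
    $myobj = @{
        'Device ID'   = $partition.DeviceID
        'Access'      = $partition.Access
        'Compressed'  = $partition.Compressed
        'Drive Type'  = $partition.DriveType
        'File System' = $partition.FileSystem
        'Free Space'  = $partition.FreeSpace
        'Media Type'  = $partition.MediaType
        'Size'        = $partition.Size
        'Name'        = $partition.Name
    }
    # Emit the object itself; do not mix string banners into the pipeline.
    New-Object -TypeName PSObject -Property $myobj
}
Because each object is written to the pipeline on its own, the partitions render one by one instead of as arrays of values.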
¯\_(ツ)_/¯

Similar Messages

  • Custom Logical Disk monitor incorrectly flapping between healthy and unhealthy

    One of the clients Ops Mgr 2012 SP1 UR8 environments I am supporting has had some custom logical disk monitoring setup; there are 5 groups dynamically populated by logical drives depending on their size (1st group has small drives up to the last group with
    very large drives). There is a 'Warning' and 'Critical' Monitor setup per server OS version, the Monitors are not Enabled. There are Overrides applied to each group to enable the Monitor and apply a threshold - different threshold for each group.
    During some BAU tuning I could see that some of the above Monitors were appearing as Top-Talking alerts. Further investigation showed that alerts were being triggered by drives that momentarily dropped below the applied threshold. I re-created the Monitors
    from 'Simple Threshold' to 'Consecutive Samples' and set the 'Number of Samples' to 6 @ 3 minute intervals.
    What I am seeing is that alerts from the above Monitors are still appearing as Top Talkers. When I check the Health Explorer of repeating alerts I can see the disk space is staying the same, below the applied threshold but the health is turning healthy then
    back to unhealthy. I have confirmed each noisy Object has the expected threshold as per its dynamic group allocation and have also confirmed the drives are not fluctuating above and below the threshold. One thing I have noticed is that some drives Performance
    View is patchy - lots of dotted lines between the coloured lines.
    It's almost like the Monitor moves a Logical Disk Object into an unhealthy state in the correct (and expected) manner, then it somehow picks up an incorrect threshold which is below the current usage level. This moves it into a healthy state only for the
    whole process to repeat. For example: Drive X: on a server is very large, the Group that it sits in has a threshold of 102400MB, its current usage is ~stable at 45500MB. Looking in Health Explorer I can see 3:01pm green state/ 45573 last sampled value/ # of
    samples 1 | 3:16pm yellow state/ 45573/ 6 samples | 3:34pm green state/ 45572/ 1 samples | 3:49pm yellow state/ 45571/ 6 samples | 4:01pm green state/ 45425/ 1 sample etc etc.
    I'm scratching my head on this one and would appreciate any suggestions or assistance.
    Thanks
    BT

    Thanks for the reply. It is not just one server / drive this is happening on. I am seeing it on everything; once they go into an unhealthy state they periodically go healthy and back again with no change in disk free space. Just to elaborate on how it is
    setup; a Monitor has been created for each OS version (2003, 2008 and 2012) and a separate Monitor for Warning and Critical so 6 Monitors in total. Looking at the Warning Monitors; they are created with a threshold of 5120MB for 6 samples and set to disabled.
    The following groups have been created and the following thresholds added:
    Group 1 (less than 60GB size): override added to enable. This group will then pick up the 5120MB threshold.
    Group 2 (60 – 250GB size): override added to enable and override added for 10240MB threshold
    Group 3 (250 – 500GB size): override added to enable and override added for 20480MB threshold
    Group 4 (500 – 1TB size): override added to enable and override added for 51200MB threshold
    Group 5 (>1TB size): override added to enable and override added for 102400MB threshold
    One drive I was looking at was in Group 2 (threshold of 10240MB), it was staying at approx. 8500MB but periodically going into healthy state then after 10mins (6 polls @ 2min intervals) back to unhealthy. This process repeats once or twice per day.
    I am wondering if the Object is somehow picking up the threshold of the Monitor (5120MB) then going back to its correct overridden threshold. I have setup some test groups and monitors in a lab and will review the results over the coming days.
    When the monitors were setup as 'Simple Threshold' this worked fine but were noisy due to drives spiking downwards. It was only when I re-wrote them as 'Consecutive Samples over Threshold' Monitors that this issue has started occurring.
    Thanks

  • Event Bubbling Custom Object not inheriting from control

    One of the new things Flash, Flex and XAML have are ways in which
    an event easily bubbles up to a parent that knows how to handle
    it, similar to how exceptions travel up until someone catches
    them.
    My goal is to use the frameworks event system on custom
    objects. My custom objects are:
    ApplicationConfiguration
    through composition contains:
    SecurityCollection which contains many or no SecurityElements
    and
    FileSystemCollection.cs which contains many or no
    FileSystemElement objects
    etc. etc., basically defining the following xml file with custom
    objects.
    [code]
    <ApplicationConfiguration>
    <communication>
    <hardwareinterface type="Ethernet">
    <ethernet localipaddress="192.168.1.2" localport="5555"
    remoteipaddress="192.168.1.1" remoteport="5555" />
    <serial baudrate="115200" port="COM1" />
    </hardwareinterface>
    <timing type="InternalClock" />
    </communication>
    <filesystem>
    <add id="location.scriptfiles" value="c:\\" />
    <add id="location.logfiles" value="c:\\" />
    <add id="location.configurationfiles" value="c:\\" />
    </filesystem>
    <security>
    <add id="name1" value="secret1" />
    <add id="name2" value="secret2" />
    </security>
    <logging EnableLogging="true"
    LogApplicationExceptions="true" LogInvalidMessages="true"
    CreateTranscript="true" />
    </ApplicationConfiguration>
    [/code]
    basically these custom objects abstract the xml details of
    accessing attributes, writing content out of the higher application
    layers.
    These custom objects hold the application configuration which
    contains the users options. The gui application uses these
    parameters across various windows forms, modal dialog boxes etc.
    The gui has a modal dialog that allows the user to modify these
    parameters during runtime.
    basically i manage: load, store, new, edit, delete of these
    configuration files using my custom objects.
    Where would event propagation help in custom objects like
    described above?
    ConfigurationSingleton.getInstance().ApplicationConfiguration.CommunicationElement.HardwareInterfaceElement.EthernetElement.RemoteIPAddress
    =
    System.Net.IPAddress.Parse(this.textBoxRemoteEthernetIpAddress.Text);
    The EthernetElement should propagate a changed event up to
    the parent ApplicationConfiguration which would persist this to the
    registry, db, file or whatever backend.
    currently this logic is maintained elsewhere. I serialize
    the root node, which compositely serializes the nested nodes, and I
    check if the serialization is different from that in the backend
    … This tells me if the dom was modified. It works but I would
    like an event driven system.
    how should i implement bubbling using custom objects?
    3 implementation ideas:
    1) A simple way is to implement a singleton event manager:
    EventManager.RegisterRoutedEvent
    http://msdn2.microsoft.com/en-us/library/ms742806.aspx
    I like this idea, but how can you tell which object is nested
    in which… that way the event can be stopped and propagation
    discontinued?
    2) If i use binders as discussed in Apress’s book:
    Event-Based
    Programming Taking Events to the Limit
    basically a binder connects the events between separate
    objects together… although it would work for my app, I would
    like a more generalized approach so I can reuse the event system on
    future projects.
    3) how does flash flex handle this..
    objectproxy.as?
    http://www.gamejd.com/resource/apollo_alpha1_docs/apiReference/combined/mx/utils/ObjectProxy.html#getComplexProperty()
    >Provides a place for subclasses to override how a complex
    property that needs to be either proxied or daisy chained for event
    bubbling is managed.
    how do these systems all work? Reflection?
    This way I can simulate this on my own custom classes.
    Thanks!

    I have a strong sensation that the OSMF project is quite dead.
    no new submits since 2010, the contact form on the official OSMF
    project website http://www.opensourcemediaframework.com/
    returns a PHP error.
    and many unanswered questions about OSMF in this forum.
    i think it would be wise to not use OSMF if possible, although
    I'm also stuck with it since we are utilizing HDS/PHDS
    protocols which are utilized in the framework.
    otherwise its quite a head-ache.
    I'm unable to get to a video element coming from a proxied element
    that is being produced via an HDS connection.
    and haven't found any solution that works.

  • Using a Custom Object to allow an Entity to link to itself

    Hey folks,
    Have a feeling the answer to this is No, but have to ask anyway.
    Is there any way that anyone is aware of to use a Custom Object to link an Entity to itself? For example... We have a CO (Custom Object) which is linked to Service Request. I want to link this CO to Service Request again, allowing one Service Request to be linked to another Service Request via the CO. I can obviously link Service Request to the CO, but the second link doesn't seem likely.
    Cheers,
    Mark

    You can link multiple Service Requests to a single Custom Object (1-3) very easily. Just expose SRs under related items on the Custom Object and adjust Access Profiles appropriately. Alternately, you can add Custom Objects to related items on the Service Request page layout so SRs can be linked to multiple COs.
    In the end, you can end up with one CO linked to multiple SRs. This should give you what you are looking for - there may be more elaborate solutions involving multiple advanced custom objects.
    The key, of course, is how you need this data to be viewed or reported on. You'll need to assess this solution to see if the output is what you require.

  • What is the difference between Logical Disk and Physical Disk?

    Hi.
    When I run Performance Monitor, I get the Logical Disk Avg. Disk sec/Write counter and the Physical Disk Avg. Disk sec/Write counter.
    But I see different Avg. values and Max. values,
    even though the Logical and Physical Disks map one-to-one.
    Why did I get this result?
    On the other hand, the Logical Disk Avg. Disk sec/Read counter and the Physical Disk Avg. Disk sec/Read counter give the same Avg. value and Max. value.

    Physical Disk refers to an actual physical HDD (or array in a hardware RAID setup), whereas Logical Disk refers to a Volume that has been created on that disk.
    So if you have one disk with one volume created on it then the values are likely to be 1 to 1, but if you have multiple volumes on the disk, for instance a physical disk with C:\ and D:\ volumes running on it, then the logical disks relate to c:\ and d:\
    rather than the disk they're running on.
    See
    http://blogs.technet.com/b/askcore/archive/2012/03/16/windows-performance-monitor-disk-counters-explained.aspx for a more in depth explanation.
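    As a quick local check (a hedged sketch; the counter paths assume an English locale), you can sample the same counter at both levels in PowerShell and compare the instances:
    # Sample write latency at both the physical and the logical level.
    Get-Counter -Counter '\PhysicalDisk(*)\Avg. Disk sec/Write','\LogicalDisk(*)\Avg. Disk sec/Write' |
        Select-Object -ExpandProperty CounterSamples |
        Select-Object Path, CookedValue
    A disk hosting several volumes will show one PhysicalDisk instance but several LogicalDisk instances, which is why the averages can differ.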

  • Logical Disk Free Space Monitor - Slow to detect low free space

    We are using the built in two trigger (MB and %) logical disk free space monitor in SCOM 2012 R2. We have setup overrides for MB warning and critical for both system and non-system drives and for a group containing disks we do not want monitored. The monitor
    actually works fine, triggering an alert when both the MB and % free criteria are met. The problem is that it takes almost an hour for the initial alert to fire. After the initial alert, if I further fill the disk to push it from warning to critical, the alert
    changes within the specified interval, which we have left at 15 minutes. The alert also clears using the 15 minute interval.
    Has anyone else seen this behavior with this monitor? A disk monitor that takes an hour to fire is not going to be very useful.

    I wanted to see for myself if there was anything else that I might be missing, so I opened up the Windows 2008 Logical Disk Free Space monitor XML and noticed that there is a NumSamples configuration that is set to 4. So, if the interval is 15 minutes, the
    disk would have to exceed both threshold types for 4 consecutive intervals in order to change state and generate alert. This would be a minimum of 1 hour before an alert is raised with the default 15 minutes interval.
    Unfortunately, NumSamples is not overrideable in the monitor type, which is too bad... The only way to get an alert sooner than one hour is to override interval. For example, if you want an alert within 20 minutes, override interval to 300 seconds (5 minutes).
    Here is the code - see for yourself:
    <UnitMonitor ID="Microsoft.Windows.Server.2008.LogicalDisk.FreeSpace" Accessibility="Public" Enabled="true" Target="Server2008!Microsoft.Windows.Server.2008.LogicalDisk" ParentMonitorID="SystemHealth!System.Health.AvailabilityState" Remotable="true" Priority="Normal" TypeID="Microsoft.Windows.Server.2008.FreeSpace.Monitortype" ConfirmDelivery="true">
    <Category>Custom</Category>
    <AlertSettings AlertMessage="Microsoft.Windows.Server.2008.LogicalDisk.FreeSpace.AlertMessage">
    <AlertOnState>Warning</AlertOnState>
    <AutoResolve>true</AutoResolve>
    <AlertPriority>Normal</AlertPriority>
    <AlertSeverity>MatchMonitorHealth</AlertSeverity>
    <AlertParameters>
    <AlertParameter1>$Target/Property[Type="Windows!Microsoft.Windows.LogicalDevice"]/DeviceID$</AlertParameter1>
    <AlertParameter2>$Target/Host/Property[Type="Windows!Microsoft.Windows.Computer"]/PrincipalName$</AlertParameter2>
    </AlertParameters>
    </AlertSettings>
    <OperationalStates>
    <OperationalState ID="UnderWarningThresholds" MonitorTypeStateID="UnderWarningThresholds" HealthState="Success" />
    <OperationalState ID="OverWarningUnderErrorThresholds" MonitorTypeStateID="OverWarningUnderErrorThresholds" HealthState="Warning" />
    <OperationalState ID="OverErrorThresholds" MonitorTypeStateID="OverErrorThresholds" HealthState="Error" />
    </OperationalStates>
    <Configuration>
    <ComputerName>$Target/Host/Property[Type="Windows!Microsoft.Windows.Computer"]/NetworkName$</ComputerName>
    <DiskLabel>$Target/Property[Type="Windows!Microsoft.Windows.LogicalDevice"]/DeviceID$</DiskLabel>
    <IntervalSeconds>900</IntervalSeconds>
    <SystemDriveWarningMBytesThreshold>500</SystemDriveWarningMBytesThreshold>
    <SystemDriveWarningPercentThreshold>10</SystemDriveWarningPercentThreshold>
    <SystemDriveErrorMBytesThreshold>300</SystemDriveErrorMBytesThreshold>
    <SystemDriveErrorPercentThreshold>5</SystemDriveErrorPercentThreshold>
    <NonSystemDriveWarningMBytesThreshold>2000</NonSystemDriveWarningMBytesThreshold>
    <NonSystemDriveWarningPercentThreshold>10</NonSystemDriveWarningPercentThreshold>
    <NonSystemDriveErrorMBytesThreshold>1000</NonSystemDriveErrorMBytesThreshold>
    <NonSystemDriveErrorPercentThreshold>5</NonSystemDriveErrorPercentThreshold>
    <NumSamples>4</NumSamples>
    </Configuration>
    </UnitMonitor>
    This proves 2 things:
    1. Your testing proved that the monitor is working as designed - you got an alert in about an hour
    2. This is a bad design at best, or a bug if you wish, as NumSamples should not be a hidden configuration - it should be exposed in override parameters in the console.
    This should be fixed by Microsoft.
    Jonathan Almquist | SCOMskills, LLC (http://scomskills.com)

  • Logical Disk space on a virtual Computer (not a Server) in SCOM 2012

    Hello all-
       I have a sort of unique situation for our SCOM environment.  We are already doing logical disk space monitoring on all of our SERVERS with no issues.  Now we have one user who requires monitoring on their computer.  This is really
    a virtual workstation, as opposed to a real computer.   Since it's virtual, it's really online all the time.    Whether it's right or wrong, they are using this virtual workstation sort of as a server.  Anyway, they want me to
    monitor the disk space for the C drive on this virtual computer.  I'm having difficulty setting up such a monitor on a computer like that...
    I got the SCOM agent installed onto the computer.  SCOM is reporting it as being "Healthy".   Now how can I define a logical disk space monitor for it?   I attempt to create a monitor:   Windows Performance Counters,
    Static Thresholds, Single Threshold.  Then how can I direct the monitor at the C drive on the computer?  I have to fill in the Object, Counter and Instance.  Normally I would click on the
    Select button, then I would type in the server name and hit Enter and it would bring up all the objects it will allow me to monitor.  Unfortunately it doesn't let me type in the computer name and hit enter in the same way.  When I
    do so, it says "The network path was not found".  If I click on the
    ... button next to the Computer field, it brings me to a Select Computer screen.  In here, I type in the computer name and then I click
    Check Name and it does appear to find it, and it underlines the computer's name.    So I then click
    OK to bring me back to the previous screen where it presents me with the same  "The network path was not found" error immediately. So I can't seem to navigate to the computer in SCOM and select the C drive, as I would be able
    to for a server.
    I tried to sort of force it in, instead of browsing for the computer and selecting the C drive.  I simply put
    LogicalDisk in for the Object then Free Megabytes for the Counter, then
    C: for the Instance (as shown in the attached screen print).
    Then I just applied my new rule to the computer via SCOM.  I don't get any error messages when setting it up that way, but it certainly doesn't work.
    So with all that said, can anyone help me out setting up a logical disk space monitoring for a COMPUTER in SCOM 2012 SP1 please?  Please let me know if you need any more information!

    Thanks, that would probably do it, I'm going to keep that as my backup plan.  However, my current feeling is that it might be a bit overkill to install a whole management pack, when all I want to do is monitor a single C drive on a single workstation. 
    The rest of the monitors in the management pack I wouldn't have any interest in.   Let me ask you a question maybe you will know the answer to... Shouldn't I be able to monitor that drive even without that management pack installed?   I
    mean some combination of settings for the Logical Disk Space rule in that management pack works for computers/workstations.  Shouldn't I be able to reproduce that one single rule without the management pack?  Does anyone have that management pack
    installed?  Could you possibly look at the Client OS Logical Disk Space rule and send me some screen shots or something?   Or am I missing something?  Is the management pack needed for other reasons in order to monitor this?

  • Unix layout question  single vs. multiple logical volumes

    Hello friends,
    I have a question on which I have seen various points of view. I'm hoping you might be able to give me a better insight so I can either confirm my own sanity, or accept a new paradigm shift in laying out the file system for best performance.
    Here are the givens:
    Unix systems (AIX, HP-UX, Solaris, and/or Linux).
    Hardware RAID system on large SAN (in this case, RAID-05 striped over more than 100 physical disks).
    (We are using AIX 6.1 with CIO turned on for the database files).
    Each Physical Volume is literally striped over at least 100 physical disks (spindles).
    Each Logical Volume is also striped over at least 100 spindles (all the same spindles for each lvol).
    Oracle software binaries are on their own separate physical volume.
    Oracle backups, exports, flash-back-query, etc., are on their own separate physical volume.
    Oracle database files, including all tablespaces, redo logs, undo ts, temp ts, and control files are in their own separate physical volume (that is made up of logical volumes that are each striped over at least 100 physical disks (spindles).
    The question is if it makes any sense (and WHY) to break up the physical volume that is used for the Oracle database files themselves, into multiple logical volumes? At what point does it make sense to create individual logical volumes for each datafile, or type, or put them all in a single logical volume?
    Does this do anything at all for performance? If the volumes are logical, then what difference would it make to put them into individual logical volumes that are striped across the same one-hundred (+) disks?
    Basically ALL database files are in a single physical volume (LUN), but does it help (and WHY) to break up the physical volume into several logical volumes for placing each of the individual data files (e.g., separating system ts, from sysaux, from temp, from undo, from data, from indexes, etc.) if the physical volume is created on a RAID-5 (or RAID-10) disk array on a SAN that literally spans across hundreds of high-speed disks?
    If this does makes sense, why?
    From a physical standpoint, there are only 4 hardware paths for each LUN, so what difference does it make to create multiple 'logical' volumes for each datafile, or for separating types of data files?
    From an I/O standpoint, the multi-threading of the operating system should only be able to use the number of pathways that are capable based on the various operating system options (e.g., multicore CPUs using SMT (simultaneous multipath threading). But I believe they are still based on physical paths, not based on logical volumes.
    I look forward to hearing back from you.
    Thanks.
    ji li

    Thanks for your reply damorgan.
    We have dual HBAs in our servers as standard equipment, along with dual controllers.
    I totally agree with the idea of getting rid of RAID-5, but that is not my choice.
    We have a very large (massive) data center and the decision to use RAID-5 was at the discretion of our unix team some time ago. Their idea is one-size-fits-all. When I questioned it, I was balked at. After all, what do I know? I've only been a sys admin for 10 years (but on HP-UX and Solaris, not on AIX), and I've only been an Oracle DBA for nearly 20 years.
    For whatever it is worth, they also mirror their RAID-5, so in essence, it is a RAID 5-1-0 (RAID-50).
    Anyway, as for the hardware paths, from my understanding, there are only 4 physical hardware paths going from the servers to the switches, to the SAN and back. The unix team's claim is that by using multiple logical volumes within a single physical volume, it increases the number of 'threads' to pull data from the stripe. This is the part I don't understand and may be specific to AIX.
    So if each logical volume is a stripe within a physical volume, and each physical volume is striped across more than one hundred disks, I still don't understand how multiple logical volumes can increase I/O throughput. From my understanding, if we only have four paths and there are 100+ spindles, even if it did increase I/O somehow by the way AIX uses multipathing (SMT) with its CPUs, how can it have any effect on the I/O? And if it did, it would still have to be negligible.
    Two years ago, I personally set up three LUNs on a pair of Sun V480s (RAC'd) connected to a Sun Storage 3510 SAN: one LUN for Oracle binaries, one for database datafiles, and one for backups and archivelogs. I then put all my datafiles in a single logical volume on one LUN, and had fantastic performance for a very intense database that literally had 12,000 to 16,000 simultaneous active* connections using WebSphere connection pools. While that was a Sun system, and now I'm dealing with an AIX P6 570 system, I can't imagine the concepts being that much different, especially when the servers are basically comparable.
    Any comments or feedback appreciated.
    ji li
    Edited by: ji li on Jan 28, 2013 7:51 AM

  • How to make a proactive view of the Logical Disk Free Space

    Hello,
    I was wondering how I could make a view (preferably within a dashboard) that monitors the state of the Logical Disk Free Space values for one or more predefined groups. I can only get this to work with line diagrams but that is pretty hard to read.
    I would like to make views like:
    1) A simple state view that shows the state of the servers (or disks) in three state form (1. Healthy: 80% or lower; 2. Warning: Between 80% and 90%; 3. Alert: 90% or higher).
    2) A view of actual percentages of the disk drives in a table form rather than the usual line diagram.
    I prefer the first one the most, and it seems to be the easiest as well, but I can't seem to get this to work.
    I hope that this is possible and would like to know how to achieve this.
    Thanks in advance,
    Bram

    Hi Bram,
    I think you need to create a new dashboard view for this.
    Make a new management pack for this.
    Once you create a new management pack,
    go to the Monitoring tab,
    locate the management pack there, right click, and select New Dashboard.
    Create a summary view dashboard. Once it is created, on the right hand side you will see something like
    Performance (which I edited as 'LDS report for last 24 hrs' as per the screenshot).
    Above that you will have a Configure option. Click on it and specify the Object, Counter and Instance of the LDS performance counter, and set the report duration (last 1 hr or 24 hrs); once you do this, the dashboard will start collecting the report
    for you.
    Once you scroll down the report you will get the list of servers in which space is low and how old is that alert
    Below is the screenshot for your reference.
    Gautam.75801

  • Windows Server 2012 Logical Disk Free Space (%) Low

    I enabled the monitor "Windows Server 2012 Logical Disk Free Space (%) Low" and configured a low threshold to test. I started to get a bunch of warnings from servers, for example:
    The disk \\?\Volume{ee0222ed-16de-40a5-af89-f95db3fdf5a4} on computer PC is running out of disk space. The value that exceeded the threshold is 11% free space.
    Now I checked on the server, and all the disks have more than 11% free space. Additionally, I don't see any disks with such a name/guid.
    When looking at the additional knowledge of the monitor, I see that it is using the following information:
    Object Name: Logical Disk
    Counter Name: PercentFree 
    My question is where this disk is coming from, and how can I avoid these disks creating false alarms?
    From analyzing the DB, I see that these are the partitions on the server without a volume letter. Is there any way to avoid getting these discovered and/or alerted on, without overriding each one?

    Hi,
    These "strange" disks are called mount points.
    They get discovered by the "Mount Point Discovery Rule".
    Go to your authoring => rules => search for the rule above and disable it.
    If you want to remove all the instances in your environment you need to use Remove-SCOMDisabledClassInstance
    powershell cmdlet.
    More info on the cmdlet can be found here:  http://technet.microsoft.com/en-us/library/hh920257%28v=sc.20%29.aspx
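    For reference, a minimal sketch of the PowerShell side (run on a management server; the cmdlet prompts for confirmation before deleting anything):
    # After disabling the Mount Point Discovery Rule, flush the now-undiscoverable instances.
    Import-Module OperationsManager
    Remove-SCOMDisabledClassInstance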
    If you have any more questions please do not hesitate to ask
    It's doing common things uncommonly well that brings success. Check out my SCOM link blog:
    SCOM link blog

  • Logical Disk Performance counter for cluster shared volume on Hyper-V

    Hello All,
    I am trying to collect counters like latency, queuelength from Win32_PerfFormattedData_PerfDisk_LogicalDisk WMI class.
    Output of "Name" attribute for logical disks in this class as below:
    Name: _Total
    Name: C:
    Name: E:
    Name: HarddiskVolume1
    Name: Q:
    Name here doesn't show the actual label so I queried the Win32_Volume class and wanted to join it with the performance WMI class. Output of Win32_Volume is as below:
    Caption: E:\
    Label: New Volume
    Name: E:\
    Caption: Q:\
    Label: Quorum
    Name: Q:\
    Caption: C:\
    Label: Voume C
    Name: C:\
    Caption: F:\
    Label: SAN
    Name: F:\
    Please note that "Name" attribute matches for all except one with label "SAN". This is cluster shared volume and "Name" attribute value is "HardDiskVolume1" in Win32_PerfFormattedData_PerfDisk_LogicalDisk class.
    Is this is a configuration issue or any other alternative to get volume label and corresponding performance counters.
    Thanks in advance
    Regards,
    Udupa

    Hi Udupa,
    I haven't found a better way; if you want to combine the two scripts, please refer to the script below:
    $output = @()
    $volumes = Get-WmiObject Win32_Volume
    foreach($volume in $volumes){
        # Win32_Volume names end in a backslash; the perf class names do not.
        $match = ($volume.Name).TrimEnd("\")
        $counter = Get-WmiObject Win32_PerfFormattedData_PerfDisk_LogicalDisk | Where-Object {$_.Name -eq $match}
        $Object = New-Object PSObject
        $Object | Add-Member NoteProperty Name $volume.Name
        $Object | Add-Member NoteProperty Label $volume.Label
        $Object | Add-Member NoteProperty AvgDiskQueueLength $counter.AvgDiskQueueLength
        $output += $Object
    }
    $output
    I hope this helps.

  • One of the SAN logical disk of windows 2008 not discovered as logical disk in SCOM 2007R2.

    Hi,
    Can anyone advise where I should check to make the discovery work? I have renamed/restarted the health service folder/SCOM agent on the problem Windows 2008 server. It still cannot discover the logical disk, which makes monitoring of that logical disk impossible.

    Hi,
    Microsoft introduced 'SAN Policy' in Win 2008 (Enterprise and Datacenter Edition only) which causes SAN disks to not be available on startup by default, as a protection mechanism. You have to use the Diskpart.exe utility to set the policy of the disks
    on that server to make them come up after a restart.
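    As an illustration (assuming you want SAN disks brought online automatically; the temp file path is arbitrary), the Diskpart step can be scripted from an elevated PowerShell prompt:
    # Write a one-line Diskpart script and run it non-interactively.
    Set-Content -Path "$env:TEMP\sanpolicy.txt" -Value 'san policy=OnlineAll'
    diskpart.exe /s "$env:TEMP\sanpolicy.txt"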
    More info on similar sort of thing here.

  • Problem building schema for return type as custom object

    Hi, I am invoking a web service through a partner link, but when I expand the invoke variables (in the Structure tab) it shows Exception: Problem building schema.
    Response is of type java:Customer which is a custom object having mobile_no, name, compname.
    Error is--
    Invalid reference: 'java:customobjectproject:Customer'
    wsdl file:
    <?xml version='1.0' encoding='UTF-8'?>
    <definitions name="CustomerDetailServiceDefinitions" targetNamespace="http://customobjectproject" xmlns="http://schemas.xmlsoap.org/wsdl/" xmlns:s0="http://customobjectproject" xmlns:s1="http://schemas.xmlsoap.org/wsdl/soap/">
    <types>
    <xs:schema attributeFormDefault="unqualified" elementFormDefault="qualified" targetNamespace="http://customobjectproject" xmlns:s0="http://customobjectproject" xmlns:s1="http://schemas.xmlsoap.org/wsdl/soap/" xmlns:xs="http://www.w3.org/2001/XMLSchema">
    <xs:element name="getCustomer">
    <xs:complexType>
    <xs:sequence>
    <xs:element name="mobileNo" type="xs:int"/>
    <xs:element name="name" type="xs:string"/>
    <xs:element name="compName" type="xs:string"/>
    </xs:sequence>
    </xs:complexType>
    </xs:element>
    <xs:element name="getCustomerResponse">
    <xs:complexType>
    <xs:sequence>
    <xs:element name="return" type="java:Customer" xmlns:java="java:customobjectproject"/>
    </xs:sequence>
    </xs:complexType>
    </xs:element>
    </xs:schema>
    <xs:schema attributeFormDefault="unqualified" elementFormDefault="qualified" targetNamespace="java:customobjectproject" xmlns:s0="http://customobjectproject" xmlns:s1="http://schemas.xmlsoap.org/wsdl/soap/" xmlns:xs="http://www.w3.org/2001/XMLSchema">
    <xs:complexType name="Customer">
    <xs:sequence>
    <xs:element minOccurs="1" name="Mobile_no" nillable="false" type="xs:int"/>
    <xs:element minOccurs="1" name="Name" nillable="true" type="xs:string"/>
    <xs:element minOccurs="1" name="CompName" nillable="true" type="xs:string"/>
    </xs:sequence>
    </xs:complexType>
    </xs:schema>
    </types>
    <message name="getCustomer">
    <part element="s0:getCustomer" name="parameters"/>
    </message>
    <message name="getCustomerResponse">
    <part element="s0:getCustomerResponse" name="parameters"/>
    </message>
    <portType name="CustomerDetail">
    <operation name="getCustomer" parameterOrder="parameters">
    <input message="s0:getCustomer"/>
    <output message="s0:getCustomerResponse"/>
    </operation>
    </portType>
    <binding name="CustomerDetailServiceSoapBinding" type="s0:CustomerDetail">
    <s1:binding style="document" transport="http://schemas.xmlsoap.org/soap/http"/>
    <operation name="getCustomer">
    <s1:operation soapAction="" style="document"/>
    <input>
    <s1:body parts="parameters" use="literal"/>
    </input>
    <output>
    <s1:body parts="parameters" use="literal"/>
    </output>
    </operation>
    </binding>
    <service name="CustomerDetailService">
    <port binding="s0:CustomerDetailServiceSoapBinding" name="CustomerDetailSoapPort">
    <s1:address location="http://localhost:7030/CustomerDetail/CustomerDetail"/>
    </port>
    </service>
    </definitions>
    I am using JDeveloper 10.1.3.3.0. Please help.
    Thanks.

    Well, the problem is caused by the fact that BPEL does not yet support soapenc:Array types. We've created a workaround by defining two Services, one that has only single returntypes and one that has the multiple returntypes (which are defined using soapenc:Array). There are other solutions, but those are very complicated. Hope this helps!

  • Creating SCOM 2012 Logical Disk Space Availability View

    Hi Everyone,
    I am looking to create a SCOM 2012 Logical Disk Space Availability View and having a little trouble finding documentation on how to do this. It would be really handy to see all of the servers being monitored and how much free space they have all in one view.
    Has anyone created a view like this before?
    Thanks in advance for all your help.
    JD

    Hi,
    Below are some blogs that will help you to understand more about disk space monitoring
    Writing monitors to target Logical or Physical Disks
    http://blogs.technet.com/b/kevinholman/archive/2009/11/24/writing-monitors-to-target-logical-or-physical-disks.aspx
    OpsMgr: Logical Disk free space alerts don’t show percent and MB free values in the alert description
    http://blogs.technet.com/b/kevinholman/archive/2011/11/17/opsmgr-logical-disk-free-space-alerts-don-t-show-percent-and-mb-free-values-in-the-alert-description.aspx 
    Thanks.

  • Logical Disk monitoring - Overrides galore?

    Hi all!
    We have a number of systems with differing needs for storage thresholds.  The defaults work the majority of time, but there are a number of 'buckets' our disks need.  For example, some disks might warrant an error at 5GB free, 10GB free, or even
    25GB free.  Let's pretend the percentages aren't important for now.
    Just to be clear, is it correct that I would need to create 2 (system / non-system) * 3 (2003, 2008, 2012) = 6 overrides for every single bucket I want (and make no mistake, we need way more than 3)?
    This seems absurd.  I can handle dumping the appropriate logical disks into buckets based on PowerShell or some other programmatic method.  Creating
    overrides seems a bit more complicated.  Is there not a simpler way to do this?
    I'm comfortable using PowerShell, but the 600 line example above seems like a pretty steep requirement for a very basic and common task (that is cumbersome and delay ridden when using the SCOM console).
    Please tell me there is an easier way to create overrides for logical disk space monitoring!

    I run into the same questions and problems with most customers. What I always suggest is to give the power to the server and/or application owners. This can be accomplished a few different ways, and I have done this by using a registry entry or by using
    a file on the root of each disk.
    For example, everyone gets default thresholds for disk monitoring. If the server owner wants different thresholds for a disk on their server(s), then they would create a csv file on the root of each disk in which they want different thresholds (or create
    a registry entry with similar threshold configuration).
    If you modify the built-in disk monitor to check whether this particular file is present, it can then read the file to get the custom thresholds and bypass the default thresholds for monitoring that particular disk. The built-in disk monitoring script is pretty
    long, but if you know a little VBScript then you should be able to figure out where to add more logic to retrieve custom thresholds based on csv file or registry information.
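    To make the shape of that logic concrete, here is a hypothetical PowerShell sketch (the file name, column names and default values are all assumptions; the vendor script itself is VBScript):
    # Per-disk threshold lookup: a csv on the disk root overrides the defaults.
    $disk = 'D:'
    $thresholdFile = Join-Path $disk 'thresholds.csv'   # assumed file name
    $warningMB = 2000   # assumed default warning threshold (MB)
    $errorMB   = 1000   # assumed default error threshold (MB)
    if (Test-Path $thresholdFile) {
        # Expected columns (assumed): WarningMB,ErrorMB
        $override  = Import-Csv $thresholdFile | Select-Object -First 1
        $warningMB = [int]$override.WarningMB
        $errorMB   = [int]$override.ErrorMB
    }
    "Thresholds for {0}: warning {1} MB, error {2} MB" -f $disk, $warningMB, $errorMB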
    You will need to put this monitor in a new pack (for example, Windows 2008 Operating System Extended), and disable the unit monitor in the vendor pack. This would end up being 2 new "extended" packs if you were to also do this for Windows Server
    2012.
    This will effectively remove the responsibility from the SCOM admin to manage hundreds (or more) disk monitoring thresholds, and place the responsibility into the hands of the server owner. This has worked quite well in any environment I've worked in.
    Jonathan Almquist | SCOMskills, LLC (http://scomskills.com)
