SCOM SP1 Groups, Classes and SNMP

I've been working on a management pack, using some of the examples on the net (Kristopher Bash of the Operating Quadrant blog has been a huge inspiration), to monitor an Isilon cluster.  This has led me to a number of interesting challenges to overcome, since I have to design it within the confines of SP1 and I have an unusual network-to-device configuration.
Device Overview:
The Isilon cluster itself is a number of FreeBSD systems (nodes) joined together via an InfiniBand backend to create a single NAS.  While normally this would not be an issue, network connectivity to the device and SNMP responses from the cluster have been a challenge.  In my configuration I have a total of 8 nodes, and each node has 2 network interfaces.  Of these 16 network interfaces, only 2 are accessible/on the same network as my RMS (em1 on node 2, em1 on node 4).
Device SNMP Design:
While the cluster is highly dynamic, the SNMP sub-systems are not.  The MIB created by Isilon does not join the whole of the cluster into a single index for SNMP polling; I can only poll a single node's OIDs.  To overcome this limitation, Isilon implemented an SNMP proxy (com2sec-style community mappings) within the system.  This lets me poll, for example, node 3 by changing the community name for the OID I am polling from the discovered name to <discoveredname>_node_3.
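To make that concrete, here is how the data source defined later in this MP (IsilonSNMP.DataSource.BasicSNMPProbe) could be pointed at node 3 just by swapping the community string - the IP address here is made up, and the OID is the disk-status OID the disk monitor uses further down:
<DataSource ID="Node3DiskProbe" TypeID="IsilonSNMP.DataSource.BasicSNMPProbe">
<Interval>300</Interval>
<IPAddress>10.1.1.10</IPAddress>
<CommStr>public_node_3</CommStr>
<OID>.1.3.6.1.4.1.12124.2.52.1.5.1</OID>
</DataSource>
Polling the same OID with public_node_1, public_node_2, etc. returns the corresponding node's values.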
MP Design:
Now, I'm not the best at MP design since I rarely work within SCOM, so don't laugh too hard...  I reused items from Kristopher's Cisco MP and created a number of classes for discovery and item hosting.  To address the limitation I found within SCOM for dynamic discovery (1 IP address, 1 community name), I created a class property on the root class called ConfiguredNodes; I can poll the Isilon and populate this value (8).  Then I created a sub-class property called NodeCommStr and fill it, in the data source for the sub-class discovery, with the custom community names (public_node_1, public_node_2, etc.) that I generate in a VB script using Base64 encode/decode, the discovered community name, and the ConfiguredNodes value.
All in all this is working well; however, I have run into a few design roadblocks and I have some questions.
1.  When I discover a set of items within the Isilon cluster, Health Explorer does not sort the information alphabetically.  Is there a value I can include in the dependency roll-up to correct this?
2.  I have run into an issue with the Isilon MIB and I'm looking for the best way to work around its design.  It includes a fan table with fan information and fan speeds, but no status value (success{0}, warning{1}, error{2}).  I created a monitor type to compensate for this and included overrides for the warning and critical thresholds.  This is where I found the curve ball: it seems the fans are not all the same...  There are 2 sets of fans - Chassis and Power Supply - and they have different thresholds *rolls eyes*.  So I'm asking for the best design advice: should I create 2 classes, discoveries, monitor types, etc.?  Or can I address this issue by creating 2 monitor types with a string filter?  (See the rough filter sketch after question 3.)
3.  I've been successful in creating this MP and displaying the information as a single device; however, I was wondering if there is a way to create dynamic groups with sub-groups.  This would have to be 100% dynamic, since I can add a 9th, 10th, or even 192nd (yes, 192) node to the cluster.
Cluster
-ClusterNode1
-ClusterNode1Power
-ClusterNode1Fans
-etc
The information is there in NodeCommStr; I'm just in brain lock on how to design it right now.
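For reference, the string filter I mention in question 2 would be something along these lines inside the fan monitor type - a rough sketch only, where FanDescription is a hypothetical new config element I would have to pass in from the fan discovery (it does not exist in the code below yet):
<ConditionDetection ID="CDIsPowerSupplyFan" TypeID="System!System.ExpressionFilter">
<Expression>
<RegExExpression>
<ValueExpression>
<Value>$Config/FanDescription$</Value>
</ValueExpression>
<Operator>ContainsSubstring</Operator>
<Pattern>Power Supply</Pattern>
</RegExExpression>
</Expression>
</ConditionDetection>
The idea would be to key one of the two monitor types off whether the fan description contains "Power Supply", each with its own threshold overrides.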
Again, the code is raw and a work in progress, so please no giggling.  Oh, and this is a multi-part post...  The code is too long for one.
<Manifest>
<Identity>
<ID>IsilonSNMP</ID>
<Version>1.0.1.2</Version>
</Identity>
<Name>IsilonSNMP</Name>
<References>
<Reference Alias="MicrosoftSystemCenterNetworkDeviceLibrary">
<ID>Microsoft.SystemCenter.NetworkDevice.Library</ID>
<Version>6.0.6278.0</Version>
<PublicKeyToken>31bf3856ad364e35</PublicKeyToken>
</Reference>
<Reference Alias="Snmp">
<ID>System.Snmp.Library</ID>
<Version>6.0.6278.0</Version>
<PublicKeyToken>31bf3856ad364e35</PublicKeyToken>
</Reference>
<Reference Alias="SystemHardwareLibrary">
<ID>System.Hardware.Library</ID>
<Version>6.0.6278.0</Version>
<PublicKeyToken>31bf3856ad364e35</PublicKeyToken>
</Reference>
<Reference Alias="Windows">
<ID>Microsoft.Windows.Library</ID>
<Version>6.0.6278.0</Version>
<PublicKeyToken>31bf3856ad364e35</PublicKeyToken>
</Reference>
<Reference Alias="SystemPerformanceLibrary">
<ID>System.Performance.Library</ID>
<Version>6.0.6278.0</Version>
<PublicKeyToken>31bf3856ad364e35</PublicKeyToken>
</Reference>
<Reference Alias="System">
<ID>System.Library</ID>
<Version>6.0.6278.0</Version>
<PublicKeyToken>31bf3856ad364e35</PublicKeyToken>
</Reference>
<Reference Alias="SC">
<ID>Microsoft.SystemCenter.Library</ID>
<Version>6.0.6278.0</Version>
<PublicKeyToken>31bf3856ad364e35</PublicKeyToken>
</Reference>
<Reference Alias="Health">
<ID>System.Health.Library</ID>
<Version>6.0.6278.0</Version>
<PublicKeyToken>31bf3856ad364e35</PublicKeyToken>
</Reference>
</References>
</Manifest>
<TypeDefinitions>
<EntityTypes>
<ClassTypes>
<ClassType ID="IsilonSNMP.Class.IsilonCluster" Accessibility="Public" Abstract="false" Base="MicrosoftSystemCenterNetworkDeviceLibrary!Microsoft.SystemCenter.NetworkDevice" Hosted="false" Singleton="false">
<Property ID="Hostname" Type="string" Key="false" CaseSensitive="false" Length="256" MinLength="0" />
<Property ID="ConfiguredNodes" Type="string" Key="false" CaseSensitive="false" Length="256" MinLength="0" />
<!-- <Property ID="NodeCommStr" Type="string" Key="false" CaseSensitive="false" Length="256" MinLength="0" /> -->
</ClassType>
<ClassType ID="IsilonSNMP.Class.IsilonCluster.Nodes" Accessibility="Public" Abstract="false" Base="SystemHardwareLibrary!System.Chassis" Hosted="true" Singleton="false">
<Property ID="Name" Type="string" Key="true" CaseSensitive="false" Length="256" MinLength="0" />
</ClassType>
<ClassType ID="IsilonSNMP.Class.IsilonCluster.PhysicalDisk" Accessibility="Public" Abstract="false" Base="SystemHardwareLibrary!System.PhysicalDisk" Hosted="true" Singleton="false">
<Property ID="Index" Type="string" Key="true" CaseSensitive="false" Length="256" MinLength="0" />
<!-- <Property ID="ConfiguredNodes" Type="string" Key="false" CaseSensitive="false" Length="256" MinLength="0" /> -->
<Property ID="NodeCommStr" Type="string" Key="false" CaseSensitive="false" Length="256" MinLength="0" />
<Property ID="BayIndex" Type="string" Key="false" CaseSensitive="false" Length="256" MinLength="0" />
</ClassType>
<ClassType ID="IsilonSNMP.Class.IsilonCluster.PhysicalFan" Accessibility="Public" Abstract="false" Base="SystemHardwareLibrary!System.Fan" Hosted="true" Singleton="false">
<Property ID="Index" Type="string" Key="true" CaseSensitive="false" Length="256" MinLength="0" />
<Property ID="NodeCommStr" Type="string" Key="false" CaseSensitive="false" Length="256" MinLength="0" />
<Property ID="FanNumber" Type="string" Key="false" CaseSensitive="false" Length="256" MinLength="0" />
</ClassType>
<ClassType ID="IsilonSNMP.Group.IsilonClusters" Accessibility="Public" Abstract="false" Base="System!System.Group" Hosted="false" Singleton="true" />
</ClassTypes>
<RelationshipTypes>
<RelationshipType ID="IsilonSNMP.Relationship.ClusterHostsNodes" Accessibility="Internal" Abstract="false" Base="System!System.Hosting">
<Source>IsilonSNMP.Class.IsilonCluster</Source>
<Target>IsilonSNMP.Class.IsilonCluster.Nodes</Target>
</RelationshipType>
<RelationshipType ID="IsilonSNMP.Relationship.IsilonClustersGroupContainsIsilonClusters" Accessibility="Public" Abstract="false" Base="System!System.Containment">
<Source>IsilonSNMP.Group.IsilonClusters</Source>
<Target>IsilonSNMP.Class.IsilonCluster</Target>
</RelationshipType>
<RelationshipType ID="IsilonSNMP.Relationship.NodesHostsPhysicalDisk" Accessibility="Public" Abstract="false" Base="System!System.Hosting">
<Source>IsilonSNMP.Class.IsilonCluster.Nodes</Source>
<Target>IsilonSNMP.Class.IsilonCluster.PhysicalDisk</Target>
</RelationshipType>
<RelationshipType ID="IsilonSNMP.Relationship.NodesHostsPhysicalFan" Accessibility="Public" Abstract="false" Base="System!System.Hosting">
<Source>IsilonSNMP.Class.IsilonCluster.Nodes</Source>
<Target>IsilonSNMP.Class.IsilonCluster.PhysicalFan</Target>
</RelationshipType>
</RelationshipTypes>
</EntityTypes>
<ModuleTypes>
<DataSourceModuleType ID="IsilonSNMP.DataSource.BasicSNMPProbe" Accessibility="Internal" Batching="false">
<Configuration>
<xsd:element minOccurs="1" name="Interval" type="xsd:integer" xmlns:xsd="http://www.w3.org/2001/XMLSchema" />
<xsd:element minOccurs="1" name="IPAddress" type="xsd:string" xmlns:xsd="http://www.w3.org/2001/XMLSchema" />
<xsd:element minOccurs="1" name="CommStr" type="xsd:string" xmlns:xsd="http://www.w3.org/2001/XMLSchema" />
<xsd:element minOccurs="1" name="OID" type="xsd:string" xmlns:xsd="http://www.w3.org/2001/XMLSchema" />
</Configuration>
<OverrideableParameters>
<OverrideableParameter ID="Interval" Selector="$Config/Interval$" ParameterType="int" />
</OverrideableParameters>
<ModuleImplementation Isolation="Any">
<Composite>
<MemberModules>
<DataSource ID="Scheduler" TypeID="System!System.Scheduler">
<Scheduler>
<SimpleReccuringSchedule>
<Interval>$Config/Interval$</Interval>
<SyncTime />
</SimpleReccuringSchedule>
<ExcludeDates />
</Scheduler>
</DataSource>
<ProbeAction ID="SNMPProbe" TypeID="Snmp!System.SnmpProbe">
<IsWriteAction>false</IsWriteAction>
<IP>$Config/IPAddress$</IP>
<CommunityString>$Config/CommStr$</CommunityString>
<SnmpVarBinds>
<SnmpVarBind>
<OID>$Config/OID$</OID>
<Syntax>0</Syntax>
<Value VariantType="8" />
</SnmpVarBind>
</SnmpVarBinds>
</ProbeAction>
<ConditionDetection ID="ValueFilter" TypeID="System!System.ExpressionFilter">
<Expression>
<SimpleExpression>
<ValueExpression>
<XPathQuery Type="String">/DataItem/SnmpVarBinds/SnmpVarBind[1]/Value</XPathQuery>
</ValueExpression>
<Operator>NotEqual</Operator>
<ValueExpression>
<Value Type="String" />
</ValueExpression>
</SimpleExpression>
</Expression>
</ConditionDetection>
</MemberModules>
<Composition>
<Node ID="ValueFilter">
<Node ID="SNMPProbe">
<Node ID="Scheduler" />
</Node>
</Node>
</Composition>
</Composite>
</ModuleImplementation>
<OutputType>Snmp!System.SnmpData</OutputType>
</DataSourceModuleType>
<DataSourceModuleType ID="IsilonSNMP.DataSource.DiscoverContainmentClasses" Accessibility="Internal" Batching="false">
<Configuration>
<IncludeSchemaTypes>
<SchemaType>System!System.ParamListSchema</SchemaType>
<SchemaType>System!System.Discovery.MapperSchema</SchemaType>
</IncludeSchemaTypes>
<xsd:element minOccurs="1" name="IPAddress" type="xsd:string" xmlns:xsd="http://www.w3.org/2001/XMLSchema" />
<xsd:element minOccurs="1" name="ClassID" type="xsd:string" xmlns:xsd="http://www.w3.org/2001/XMLSchema" />
<xsd:element minOccurs="1" name="InstanceSettings" type="SettingsType" xmlns:xsd="http://www.w3.org/2001/XMLSchema" />
</Configuration>
<ModuleImplementation Isolation="Any">
<Composite>
<MemberModules>
<DataSource ID="Scheduler" TypeID="System!System.Scheduler">
<Scheduler>
<SimpleReccuringSchedule>
<Interval>60</Interval>
<SyncTime />
</SimpleReccuringSchedule>
<ExcludeDates />
</Scheduler>
</DataSource>
<ConditionDetection ID="Mapper" TypeID="System!System.Discovery.ClassSnapshotDataMapper">
<ClassId>$Config/ClassID$</ClassId>
<InstanceSettings>$Config/InstanceSettings$</InstanceSettings>
</ConditionDetection>
</MemberModules>
<Composition>
<Node ID="Mapper">
<Node ID="Scheduler" />
</Node>
</Composition>
</Composite>
</ModuleImplementation>
<OutputType>System!System.Discovery.Data</OutputType>
</DataSourceModuleType>
<DataSourceModuleType ID="IsilonSNMP.DataSource.DiscoverCluster" Accessibility="Internal" Batching="false">
<Configuration>
<xsd:element minOccurs="1" name="Interval" type="xsd:integer" xmlns:xsd="http://www.w3.org/2001/XMLSchema" />
<xsd:element minOccurs="1" name="IP" type="xsd:string" xmlns:xsd="http://www.w3.org/2001/XMLSchema" />
<xsd:element minOccurs="1" name="CommunityString" type="xsd:string" xmlns:xsd="http://www.w3.org/2001/XMLSchema" />
<xsd:element minOccurs="1" name="SystemOID" type="xsd:string" xmlns:xsd="http://www.w3.org/2001/XMLSchema" />
</Configuration>
<OverrideableParameters>
<OverrideableParameter ID="Interval" Selector="$Config/Interval$" ParameterType="int" />
</OverrideableParameters>
<ModuleImplementation Isolation="Any">
<Composite>
<MemberModules>
<DataSource ID="Scheduler" TypeID="System!System.Scheduler">
<Scheduler>
<SimpleReccuringSchedule>
<Interval Unit="Seconds">$Config/Interval$</Interval>
</SimpleReccuringSchedule>
<ExcludeDates />
</Scheduler>
</DataSource>
<ProbeAction ID="Probe" TypeID="Snmp!System.SnmpProbe">
<IsWriteAction>false</IsWriteAction>
<IP>$Config/IP$</IP>
<CommunityString>$Config/CommunityString$</CommunityString>
<SnmpVarBinds>
<SnmpVarBind>
<OID>.1.3.6.1.4.1.12124.1.1.4.0</OID>
<Syntax>0</Syntax>
<Value VariantType="8" />
</SnmpVarBind>
<SnmpVarBind>
<OID>.1.3.6.1.4.1.12124.1.1.1.0</OID>
<Syntax>0</Syntax>
<Value VariantType="8" />
</SnmpVarBind>
<SnmpVarBind>
<OID>.1.3.6.1.2.1.1.5.0</OID>
<Syntax>0</Syntax>
<Value VariantType="8" />
</SnmpVarBind>
<SnmpVarBind>
<OID>.1.3.6.1.2.1.1.1.0</OID>
<Syntax>0</Syntax>
<Value VariantType="8" />
</SnmpVarBind>
<SnmpVarBind>
<OID>.1.3.6.1.2.1.1.4.0</OID>
<Syntax>0</Syntax>
<Value VariantType="8" />
</SnmpVarBind>
<SnmpVarBind>
<OID>.1.3.6.1.2.1.1.6.0</OID>
<Syntax>0</Syntax>
<Value VariantType="8" />
</SnmpVarBind>
<SnmpVarBind>
<OID>.1.3.6.1.2.1.1.2.0</OID>
<Syntax>0</Syntax>
<Value VariantType="8" />
</SnmpVarBind>
</SnmpVarBinds>
</ProbeAction>
<ConditionDetection ID="Mapper" TypeID="System!System.Discovery.FilteredClassSnapshotDataMapper">
<Expression>
<RegExExpression>
<ValueExpression>
<XPathQuery>/DataItem/SnmpVarBinds/SnmpVarBind[1]/Value</XPathQuery>
</ValueExpression>
<Operator>ContainsSubstring</Operator>
<Pattern>1.3.6.1.4.1.12124.</Pattern>
</RegExExpression>
</Expression>
<ClassId>$MPElement[Name="IsilonSNMP.Class.IsilonCluster"]$</ClassId>
<InstanceSettings>
<Settings>
<Setting>
<Name>$MPElement[Name="MicrosoftSystemCenterNetworkDeviceLibrary!Microsoft.SystemCenter.NetworkDevice"]/IPAddress$</Name>
<Value>$Data/Source$</Value>
</Setting>
<Setting>
<Name>$MPElement[Name="MicrosoftSystemCenterNetworkDeviceLibrary!Microsoft.SystemCenter.NetworkDevice"]/Name$</Name>
<Value>$Data/SnmpVarBinds/SnmpVarBind[5]/Value$</Value>
</Setting>
<Setting>
<Name>$MPElement[Name="MicrosoftSystemCenterNetworkDeviceLibrary!Microsoft.SystemCenter.NetworkDevice"]/SystemDescription$</Name>
<Value>$Data/SnmpVarBinds/SnmpVarBind[4]/Value$</Value>
</Setting>
<Setting>
<Name>$MPElement[Name="MicrosoftSystemCenterNetworkDeviceLibrary!Microsoft.SystemCenter.NetworkDevice"]/SystemContact$</Name>
<Value>$Data/SnmpVarBinds/SnmpVarBind[3]/Value$</Value>
</Setting>
<Setting>
<Name>$MPElement[Name="MicrosoftSystemCenterNetworkDeviceLibrary!Microsoft.SystemCenter.NetworkDevice"]/SystemLocation$</Name>
<Value>$Data/SnmpVarBinds/SnmpVarBind[2]/Value$</Value>
</Setting>
<Setting>
<Name>$MPElement[Name="MicrosoftSystemCenterNetworkDeviceLibrary!Microsoft.SystemCenter.NetworkDevice"]/SystemOID$</Name>
<Value>$Data/SnmpVarBinds/SnmpVarBind[1]/Value$</Value>
</Setting>
<Setting>
<Name>$MPElement[Name="System!System.Entity"]/DisplayName$</Name>
<Value>$Data/SnmpVarBinds/SnmpVarBind[5]/Value$</Value>
</Setting>
<Setting>
<Name>$MPElement[Name="IsilonSNMP.Class.IsilonCluster"]/Hostname$</Name>
<Value>$Data/SnmpVarBinds/SnmpVarBind[6]/Value$</Value>
</Setting>
<Setting>
<Name>$MPElement[Name="IsilonSNMP.Class.IsilonCluster"]/ConfiguredNodes$</Name>
<Value>$Data/SnmpVarBinds/SnmpVarBind[7]/Value$</Value>
</Setting>
</Settings>
</InstanceSettings>
</ConditionDetection>
<ConditionDetection ID="SystemOIDFilter" TypeID="System!System.ExpressionFilter">
<Expression>
<RegExExpression>
<ValueExpression>
<Value>$Config/SystemOID$</Value>
</ValueExpression>
<Operator>ContainsSubstring</Operator>
<Pattern>1.3.6.1.4.1.12124.</Pattern>
</RegExExpression>
</Expression>
</ConditionDetection>
</MemberModules>
<Composition>
<Node ID="Mapper">
<Node ID="Probe">
<Node ID="SystemOIDFilter">
<Node ID="Scheduler" />
</Node>
</Node>
</Node>
</Composition>
</Composite>
</ModuleImplementation>
<OutputType>System!System.Discovery.Data</OutputType>
</DataSourceModuleType>
<DataSourceModuleType ID="IsilonSNMP.DataSource.DiscoverPhysicalDisk" Accessibility="Internal" Batching="false">
<Configuration>
<xsd:element minOccurs="1" name="IPAddress" type="xsd:string" xmlns:xsd="http://www.w3.org/2001/XMLSchema" />
<xsd:element minOccurs="1" name="CommStr" type="xsd:string" xmlns:xsd="http://www.w3.org/2001/XMLSchema" />
<xsd:element minOccurs="1" name="Interval" type="xsd:integer" xmlns:xsd="http://www.w3.org/2001/XMLSchema" />
<xsd:element minOccurs="1" name="ConfiguredNodes" type="xsd:string" xmlns:xsd="http://www.w3.org/2001/XMLSchema" />
<xsd:element minOccurs="1" name="BayIndex" type="xsd:string" xmlns:xsd="http://www.w3.org/2001/XMLSchema" />
<xsd:element minOccurs="1" name="NodeCommStr" type="xsd:string" xmlns:xsd="http://www.w3.org/2001/XMLSchema" />
</Configuration>
<OverrideableParameters>
<OverrideableParameter ID="Interval" Selector="$Config/Interval$" ParameterType="int" />
</OverrideableParameters>
<ModuleImplementation Isolation="Any">
<Composite>
<MemberModules>
<DataSource ID="Scheduler" TypeID="System!System.Scheduler">
<Scheduler>
<SimpleReccuringSchedule>
<Interval>$Config/Interval$</Interval>
<SyncTime />
</SimpleReccuringSchedule>
<ExcludeDates />
</Scheduler>
</DataSource>
<ProbeAction ID="ScriptDiscovery" TypeID="Windows!Microsoft.Windows.ScriptDiscoveryProbe">
<ScriptName>DiscoverIsilonDisk.vbs</ScriptName>
<Arguments>$Config/IPAddress$ $Config/CommStr$ $MPElement$ $Target/Id$ $Config/ConfiguredNodes$</Arguments>
<ScriptBody>
<![CDATA['Discover PhysicalDisk
Dim oAPI, oDiscoveryData, oInst, objWMIServices, objWMILocator, oArgs
set oArgs = Wscript.Arguments
if oArgs.Count <5 Then
Wscript.Quit -1
End If
DeviceIP = oArgs(0)
CommStr = oArgs(1)
SourceID = oArgs(2)
ManagedEntityId = oArgs(3)
StrConfiguredNodes = oArgs(4)
CommStr = Decode(CommStr)
ConfiguredNodesCommStr = cstr(CommStr)
wscript.echo CommStr
Set oAPI = CreateObject("MOM.ScriptAPI")
set oDiscoveryData = oAPI.CreateDiscoveryData(0, SourceId, ManagedEntityId)
Set objWMILocator = CreateObject("WbemScripting.SWbemLocator")
Set objWMIServices = objWMiLocator.ConnectServer("","root\snmp\localhost")
'Name community name
GetPhysicalDisks
'Created community names
For i = 1 to StrConfiguredNodes
CommStr = ConfiguredNodesCommStr & "_node_" & i
GetPhysicalDisks
Next
'Return all data to SCOM
Call oAPI.Return(oDiscoveryData)
Sub GetPhysicalDisks
on error resume next
Set objWmiNamedValueSet = CreateObject("WbemScripting.SWbemNamedValueSet")
objWmiNamedValueSet.Add "AgentAddress", cstr(DeviceIP)
objWmiNamedValueSet.Add "AgentReadCommunityName", cstr(CommStr)
Set colPhysicalDisk = objWmiServices.InstancesOf("SNMP_ISILON_MIB_diskTable", , objWMINamedValueset)
For each objItem in colPhysicalDisk
nIndex = objItem.diskBay
sDesc = objItem.diskSerialNumber
if nIndex > 0 then
set oInst = oDiscoveryData.CreateClassInstance("$MPElement[Name='IsilonSNMP.Class.IsilonCluster.PhysicalDisk']$")
call oInst.AddProperty("$MPElement[Name='MicrosoftSystemCenterNetworkDeviceLibrary!Microsoft.SystemCenter.NetworkDevice']/IPAddress$", DeviceIP)
call oInst.AddProperty("$MPElement[Name='IsilonSNMP.Class.IsilonCluster.PhysicalDisk']/Index$", cstr(CommStr) & "-" & cdbl(nIndex))
call oInst.AddProperty("$MPElement[Name='IsilonSNMP.Class.IsilonCluster.PhysicalDisk']/NodeCommStr$", Encode(CommStr))
call oInst.AddProperty("$MPElement[Name='IsilonSNMP.Class.IsilonCluster.PhysicalDisk']/BayIndex$", cdbl(nIndex))
call oInst.AddProperty("$MPElement[Name='IsilonSNMP.Class.IsilonCluster.Nodes']/Name$", "Cluster")
call oInst.AddProperty("$MPElement[Name='System!System.Entity']/DisplayName$", "SNMP Host " & cstr(CommStr) & " - Bay " & nIndex & " - Serial Number " & HexToString(sDesc))
'call oInst.AddProperty("$MPElement[Name='System!System.Entity']/DisplayName$", HexToString(sDesc))
call oDiscoveryData.AddInstance(oInst)
'Test Section
'wscript.echo Base64Encode(CommStr)
'wscript.echo Base64Encoder(CommStr)
'wscript.Echo CommStr
'wscript.Echo nIndex
'Wscript.Echo HexToString(sDesc)
'wscript.echo cdbl(nIndex) & cstr(CommStr)
end if
Next
on error goto 0
End Sub
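' Decode/Encode: the community string argument arrives Base64-encoded, so decode it for polling and re-encode it when storing NodeCommStr on discovered instances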
Function Decode(strB64)
strXML = "<B64DECODE xmlns:dt=" & Chr(34) & _
"urn:schemas-microsoft-com:datatypes" & Chr(34) & " " & _
"dt:dt=" & Chr(34) & "bin.base64" & Chr(34) & ">" & _
strB64 & "</B64DECODE>"
Set oXMLDoc = CreateObject("MSXML2.DOMDocument.3.0")
oXMLDoc.LoadXML(strXML)
decode = oXMLDoc.selectsinglenode("B64DECODE").nodeTypedValue
set oXMLDoc = nothing
End Function
Function Encode(Str)
'Use ADODB.Stream to write Ansi string to Unicode stream
Set objStream = CreateObject("ADODB.Stream")
objStream.Type = 2
objStream.Open
objStream.Charset = "unicode"
objStream.WriteText Str
objstream.Flush
'Read the stream back as a byte array
objStream.Position = 0
objStream.Type = 1
temp = objstream.read(2) 'read two bytes of the stream to discard the byte order mark
bArray = objStream.Read
objStream.Close
'Convert byte array to Base64
set objXML = createobject("MSXML2.DOMDocument.3.0")
Set objNode = objXML.createElement("b64")
objNode.dataType = "bin.base64"
objNode.nodeTypedValue = bArray
Encode = objNode.Text
Set objStream = Nothing
set objNode = nothing
set objXML = nothing
End Function
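' HexToString: convert a hex-encoded SNMP octet string (e.g. the disk serial number) into readable ASCII, falling back to the raw value on error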
Function HexToString(str)
on error resume next
sOutput = ""
For x = 1 To len(str) Step 2
sChar = Chr(Clng("&h" & Mid(str,x,2)))
sOutput = sOutput & sChar
Next
if err.number = 0 then
HexToString = sOutput
Else
HexToString = str
end if
End Function
set oInst = nothing
set oDiscoveryData = nothing
set oArgs = nothing
set oAPI = nothing
set objWMILocator = nothing
set objWMIServices = nothing
set objWMINamedValueSet = nothing
]]>
</ScriptBody>
<TimeoutSeconds>120</TimeoutSeconds>
</ProbeAction>
</MemberModules>
<Composition>
<Node ID="ScriptDiscovery">
<Node ID="Scheduler" />
</Node>
</Composition>
</Composite>
</ModuleImplementation>
<OutputType>System!System.Discovery.Data</OutputType>
</DataSourceModuleType>

<DataSourceModuleType ID="IsilonSNMP.DataSource.DiscoverPhysicalFan" Accessibility="Internal" Batching="false">
<Configuration>
<xsd:element minOccurs="1" name="IPAddress" type="xsd:string" xmlns:xsd="http://www.w3.org/2001/XMLSchema" />
<xsd:element minOccurs="1" name="CommStr" type="xsd:string" xmlns:xsd="http://www.w3.org/2001/XMLSchema" />
<xsd:element minOccurs="1" name="Interval" type="xsd:integer" xmlns:xsd="http://www.w3.org/2001/XMLSchema" />
<xsd:element minOccurs="1" name="ConfiguredNodes" type="xsd:string" xmlns:xsd="http://www.w3.org/2001/XMLSchema" />
<xsd:element minOccurs="1" name="FanNumber" type="xsd:string" xmlns:xsd="http://www.w3.org/2001/XMLSchema" />
<xsd:element minOccurs="1" name="NodeCommStr" type="xsd:string" xmlns:xsd="http://www.w3.org/2001/XMLSchema" />
<xsd:element minOccurs="1" name="FanSpeedHighCritical" type="xsd:integer" />
<xsd:element minOccurs="1" name="FanSpeedLowWarn" type="xsd:integer" />
<xsd:element minOccurs="1" name="FanSpeedLowCritical" type="xsd:integer" />
</Configuration>
<OverrideableParameters>
<OverrideableParameter ID="Interval" Selector="$Config/Interval$" ParameterType="int" />
<OverrideableParameter ID="FanSpeedHighCritical" Selector="$Config/FanSpeedHighCritical$" ParameterType="int" />
<OverrideableParameter ID="FanSpeedLowWarn" Selector="$Config/FanSpeedLowWarn$" ParameterType="int" />
<OverrideableParameter ID="FanSpeedLowCritical" Selector="$Config/FanSpeedLowCritical$" ParameterType="int" />
</OverrideableParameters>
<ModuleImplementation Isolation="Any">
<Composite>
<MemberModules>
<DataSource ID="Scheduler" TypeID="System!System.Scheduler">
<Scheduler>
<SimpleReccuringSchedule>
<Interval>$Config/Interval$</Interval>
<SyncTime />
</SimpleReccuringSchedule>
<ExcludeDates />
</Scheduler>
</DataSource>
<ProbeAction ID="ScriptDiscovery" TypeID="Windows!Microsoft.Windows.ScriptDiscoveryProbe">
<ScriptName>DiscoverIsilonPhysicalFan.vbs</ScriptName>
<Arguments>$Config/IPAddress$ $Config/CommStr$ $MPElement$ $Target/Id$ $Config/ConfiguredNodes$</Arguments>
<ScriptBody>
<![CDATA['Discover PhysicalFan
Dim oAPI, oDiscoveryData, oInst, objWMIServices, objWMILocator, oArgs
set oArgs = Wscript.Arguments
if oArgs.Count <5 Then
Wscript.Quit -1
End If
DeviceIP = oArgs(0)
CommStr = oArgs(1)
SourceID = oArgs(2)
ManagedEntityId = oArgs(3)
StrConfiguredNodes = oArgs(4)
CommStr = Decode(CommStr)
ConfiguredNodesCommStr = cstr(CommStr)
wscript.echo CommStr
Set oAPI = CreateObject("MOM.ScriptAPI")
set oDiscoveryData = oAPI.CreateDiscoveryData(0, SourceId, ManagedEntityId)
Set objWMILocator = CreateObject("WbemScripting.SWbemLocator")
Set objWMIServices = objWMiLocator.ConnectServer("","root\snmp\localhost")
'Name community name
GetPhysicalFan
'Created community names
For i = 1 to StrConfiguredNodes
CommStr = ConfiguredNodesCommStr & "_node_" & i
GetPhysicalFan
Next
'Return all data to SCOM
Call oAPI.Return(oDiscoveryData)
Sub GetPhysicalFan
on error resume next
Set objWmiNamedValueSet = CreateObject("WbemScripting.SWbemNamedValueSet")
objWmiNamedValueSet.Add "AgentAddress", cstr(DeviceIP)
objWmiNamedValueSet.Add "AgentReadCommunityName", cstr(CommStr)
Set colPhysicalFan = objWmiServices.InstancesOf("SNMP_ISILON_MIB_FanTable", , objWMINamedValueset)
For each objItem in colPhysicalFan
nIndex = objItem.fanNumber
sDesc = objItem.fanDescription
if nIndex > 0 then
set oInst = oDiscoveryData.CreateClassInstance("$MPElement[Name='IsilonSNMP.Class.IsilonCluster.PhysicalFan']$")
call oInst.AddProperty("$MPElement[Name='MicrosoftSystemCenterNetworkDeviceLibrary!Microsoft.SystemCenter.NetworkDevice']/IPAddress$", DeviceIP)
call oInst.AddProperty("$MPElement[Name='IsilonSNMP.Class.IsilonCluster.PhysicalFan']/Index$", cstr(CommStr) & "-" & cdbl(nIndex))
call oInst.AddProperty("$MPElement[Name='IsilonSNMP.Class.IsilonCluster.PhysicalFan']/NodeCommStr$", Encode(CommStr))
call oInst.AddProperty("$MPElement[Name='IsilonSNMP.Class.IsilonCluster.PhysicalFan']/FanNumber$", cdbl(nIndex))
call oInst.AddProperty("$MPElement[Name='IsilonSNMP.Class.IsilonCluster.Nodes']/Name$", "Cluster")
'call oInst.AddProperty("$MPElement[Name='System!System.Entity']/DisplayName$", "SNMP Host " & cstr(CommStr) & " - Bay " & nIndex & " - Serial Number " & HexToString(sDesc))
call oInst.AddProperty("$MPElement[Name='System!System.Entity']/DisplayName$", "SNMP Host " & cstr(CommStr) & " - " & HexToString(sDesc))
call oDiscoveryData.AddInstance(oInst)
'Test Section
'wscript.echo Base64Encode(CommStr)
'wscript.echo Base64Encoder(CommStr)
'wscript.Echo CommStr
'wscript.Echo nIndex
'Wscript.Echo HexToString(sDesc)
'wscript.echo cdbl(nIndex) & cstr(CommStr)
end if
Next
on error goto 0
End Sub
Function Decode(strB64)
strXML = "<B64DECODE xmlns:dt=" & Chr(34) & _
"urn:schemas-microsoft-com:datatypes" & Chr(34) & " " & _
"dt:dt=" & Chr(34) & "bin.base64" & Chr(34) & ">" & _
strB64 & "</B64DECODE>"
Set oXMLDoc = CreateObject("MSXML2.DOMDocument.3.0")
oXMLDoc.LoadXML(strXML)
decode = oXMLDoc.selectsinglenode("B64DECODE").nodeTypedValue
set oXMLDoc = nothing
End Function
Function Encode(Str)
'Use ADODB.Stream to write Ansi string to Unicode stream
Set objStream = CreateObject("ADODB.Stream")
objStream.Type = 2
objStream.Open
objStream.Charset = "unicode"
objStream.WriteText Str
objstream.Flush
'Read the stream back as a byte array
objStream.Position = 0
objStream.Type = 1
temp = objstream.read(2) 'read two bytes of the stream to discard the byte order mark
bArray = objStream.Read
objStream.Close
'Convert byte array to Base64
set objXML = createobject("MSXML2.DOMDocument.3.0")
Set objNode = objXML.createElement("b64")
objNode.dataType = "bin.base64"
objNode.nodeTypedValue = bArray
Encode = objNode.Text
Set objStream = Nothing
set objNode = nothing
set objXML = nothing
End Function
Function HexToString(str)
on error resume next
sOutput = ""
For x = 1 To len(str) Step 2
sChar = Chr(Clng("&h" & Mid(str,x,2)))
sOutput = sOutput & sChar
Next
if err.number = 0 then
HexToString = sOutput
Else
HexToString = str
end if
End Function
set oInst = nothing
set oDiscoveryData = nothing
set oArgs = nothing
set oAPI = nothing
set objWMILocator = nothing
set objWMIServices = nothing
set objWMINamedValueSet = nothing
]]>
</ScriptBody>
<TimeoutSeconds>120</TimeoutSeconds>
</ProbeAction>
</MemberModules>
<Composition>
<Node ID="ScriptDiscovery">
<Node ID="Scheduler" />
</Node>
</Composition>
</Composite>
</ModuleImplementation>
<OutputType>System!System.Discovery.Data</OutputType>
</DataSourceModuleType>
</ModuleTypes>
<MonitorTypes>
<UnitMonitorType ID="IsilonSNMP.MonitorType.PhysicalDiskStatus" Accessibility="Internal">
<MonitorTypeStates>
<MonitorTypeState ID="PhysicalDiskOK" NoDetection="false" />
<MonitorTypeState ID="PhysicalDiskNotOK" NoDetection="false" />
</MonitorTypeStates>
<Configuration>
<xsd:element minOccurs="1" name="Interval" type="xsd:integer" xmlns:xsd="http://www.w3.org/2001/XMLSchema" />
<xsd:element minOccurs="1" name="IPAddress" type="xsd:string" xmlns:xsd="http://www.w3.org/2001/XMLSchema" />
<xsd:element minOccurs="1" name="OID" type="xsd:string" xmlns:xsd="http://www.w3.org/2001/XMLSchema" />
<xsd:element minOccurs="1" name="NodeCommStr" type="xsd:string" xmlns:xsd="http://www.w3.org/2001/XMLSchema" />
</Configuration>
<MonitorImplementation>
<MemberModules>
<DataSource ID="DS1" TypeID="IsilonSNMP.DataSource.BasicSNMPProbe">
<Interval>$Config/Interval$</Interval>
<IPAddress>$Config/IPAddress$</IPAddress>
<CommStr>$Config/NodeCommStr$</CommStr>
<OID>$Config/OID$</OID>
</DataSource>
<ConditionDetection ID="CDPhysicalDiskOK" TypeID="System!System.ExpressionFilter">
<Expression>
<RegExExpression>
<ValueExpression>
<XPathQuery Type="String">/DataItem/SnmpVarBinds/SnmpVarBind[1]/Value</XPathQuery>
</ValueExpression>
<Operator>ContainsSubstring</Operator>
<Pattern>HEALTHY</Pattern>
</RegExExpression>
</Expression>
</ConditionDetection>
<ConditionDetection ID="CDPhysicalDiskNotOK" TypeID="System!System.ExpressionFilter">
<Expression>
<RegExExpression>
<ValueExpression>
<XPathQuery Type="String">/DataItem/SnmpVarBinds/SnmpVarBind[1]/Value</XPathQuery>
</ValueExpression>
<Operator>DoesNotContainSubstring</Operator>
<Pattern>HEALTHY</Pattern>
</RegExExpression>
</Expression>
</ConditionDetection>
</MemberModules>
<RegularDetections>
<RegularDetection MonitorTypeStateID="PhysicalDiskOK">
<Node ID="CDPhysicalDiskOK">
<Node ID="DS1" />
</Node>
</RegularDetection>
<RegularDetection MonitorTypeStateID="PhysicalDiskNotOK">
<Node ID="CDPhysicalDiskNotOK">
<Node ID="DS1" />
</Node>
</RegularDetection>
</RegularDetections>
</MonitorImplementation>
</UnitMonitorType>
<UnitMonitorType ID="IsilonSNMP.MonitorType.PhysicalFanStatus" Accessibility="Internal">
<MonitorTypeStates>
<MonitorTypeState ID="PhysicalFanOK" NoDetection="false" />
<MonitorTypeState ID="PhysicalFanWarn" NoDetection="false" />
<MonitorTypeState ID="PhysicalFanCritical" NoDetection="false" />
</MonitorTypeStates>
<Configuration>
<xsd:element minOccurs="1" name="Interval" type="xsd:integer" xmlns:xsd="http://www.w3.org/2001/XMLSchema" />
<xsd:element minOccurs="1" name="IPAddress" type="xsd:string" xmlns:xsd="http://www.w3.org/2001/XMLSchema" />
<xsd:element minOccurs="1" name="OID" type="xsd:string" xmlns:xsd="http://www.w3.org/2001/XMLSchema" />
<xsd:element minOccurs="1" name="NodeCommStr" type="xsd:string" xmlns:xsd="http://www.w3.org/2001/XMLSchema" />
<xsd:element minOccurs="1" name="FanSpeedHighCritical" type="xsd:integer" />
<xsd:element minOccurs="1" name="FanSpeedLowWarn" type="xsd:integer" />
<xsd:element minOccurs="1" name="FanSpeedLowCritical" type="xsd:integer" />
</Configuration>
<OverrideableParameters>
<OverrideableParameter ID="Interval" Selector="$Config/Interval$" ParameterType="int" />
<OverrideableParameter ID="FanSpeedHighCritical" Selector="$Config/FanSpeedHighCritical$" ParameterType="int" />
<OverrideableParameter ID="FanSpeedLowWarn" Selector="$Config/FanSpeedLowWarn$" ParameterType="int" />
<OverrideableParameter ID="FanSpeedLowCritical" Selector="$Config/FanSpeedLowCritical$" ParameterType="int" />
</OverrideableParameters>
<MonitorImplementation>
<MemberModules>
<DataSource ID="DS1" TypeID="IsilonSNMP.DataSource.BasicSNMPProbe">
<Interval>$Config/Interval$</Interval>
<IPAddress>$Config/IPAddress$</IPAddress>
<CommStr>$Config/NodeCommStr$</CommStr>
<OID>$Config/OID$</OID>
</DataSource>
<ConditionDetection ID="CDPhysicalFanOK" TypeID="System!System.ExpressionFilter">
<Expression>
<And>
<Expression>
<SimpleExpression>
<ValueExpression>
<XPathQuery Type="Integer">/DataItem/SnmpVarBinds/SnmpVarBind[1]/Value</XPathQuery>
</ValueExpression>
<Operator>Less</Operator>
<ValueExpression>
<Value Type="Integer">$Config/FanSpeedHighCritical$</Value>
</ValueExpression>
</SimpleExpression>
</Expression>
<Expression>
<SimpleExpression>
<ValueExpression>
<XPathQuery Type="Integer">/DataItem/SnmpVarBinds/SnmpVarBind[1]/Value</XPathQuery>
</ValueExpression>
<Operator>Greater</Operator>
<ValueExpression>
<Value Type="Integer">$Config/FanSpeedLowWarn$</Value>
</ValueExpression>
</SimpleExpression>
</Expression>
</And>
</Expression>
</ConditionDetection>
<ConditionDetection ID="CDPhysicalFanWarn" TypeID="System!System.ExpressionFilter">
<Expression>
<And>
<Expression>
<SimpleExpression>
<ValueExpression>
<XPathQuery Type="Integer">/DataItem/SnmpVarBinds/SnmpVarBind[1]/Value</XPathQuery>
</ValueExpression>
<Operator>LessEqual</Operator>
<ValueExpression>
<Value Type="Integer">$Config/FanSpeedLowWarn$</Value>
</ValueExpression>
</SimpleExpression>
</Expression>
<Expression>
<SimpleExpression>
<ValueExpression>
<XPathQuery Type="Integer">/DataItem/SnmpVarBinds/SnmpVarBind[1]/Value</XPathQuery>
</ValueExpression>
<Operator>Greater</Operator>
<ValueExpression>
<Value Type="Integer">$Config/FanSpeedLowWarn$</Value>
</ValueExpression>
</SimpleExpression>
</Expression>
</And>
</Expression>
</ConditionDetection>
<ConditionDetection ID="CDPhysicalFanCritical" TypeID="System!System.ExpressionFilter">
<Expression>
<Or>
<Expression>
<SimpleExpression>
<ValueExpression>
<XPathQuery Type="Integer">/DataItem/SnmpVarBinds/SnmpVarBind[1]/Value</XPathQuery>
</ValueExpression>
<Operator>GreaterEqual</Operator>
<ValueExpression>
<Value Type="Integer">$Config/FanSpeedHighCritical$</Value>
</ValueExpression>
</SimpleExpression>
</Expression>
<Expression>
<SimpleExpression>
<ValueExpression>
<XPathQuery Type="Integer">/DataItem/SnmpVarBinds/SnmpVarBind[1]/Value</XPathQuery>
</ValueExpression>
<Operator>LessEqual</Operator>
<ValueExpression>
<Value Type="Integer">$Config/FanSpeedLowCritical$</Value>
</ValueExpression>
</SimpleExpression>
</Expression>
</Or>
</Expression>
</ConditionDetection>
</MemberModules>
<RegularDetections>
<RegularDetection MonitorTypeStateID="PhysicalFanOK">
<Node ID="CDPhysicalFanOK">
<Node ID="DS1" />
</Node>
</RegularDetection>
<RegularDetection MonitorTypeStateID="PhysicalFanWarn">
<Node ID="CDPhysicalFanWarn">
<Node ID="DS1" />
</Node>
</RegularDetection>
<RegularDetection MonitorTypeStateID="PhysicalFanCritical">
<Node ID="CDPhysicalFanCritical">
<Node ID="DS1" />
</Node>
</RegularDetection>
</RegularDetections>
</MonitorImplementation>
</UnitMonitorType>
</MonitorTypes>
</TypeDefinitions>
<Monitoring>
<Discoveries>
<Discovery ID="IsilonSNMP.Discovery.Cluster" Enabled="true" Target="IsilonSNMP.Class.IsilonCluster" ConfirmDelivery="true" Remotable="true" Priority="Normal">
<Category>Discovery</Category>
<DiscoveryTypes>
<DiscoveryClass TypeID="IsilonSNMP.Class.IsilonCluster.Nodes" />
<DiscoveryRelationship TypeID="IsilonSNMP.Relationship.ClusterHostsNodes" />
</DiscoveryTypes>
<DataSource ID="DS1" TypeID="IsilonSNMP.DataSource.DiscoverContainmentClasses">
<IPAddress>$Target/Property[Type="MicrosoftSystemCenterNetworkDeviceLibrary!Microsoft.SystemCenter.NetworkDevice"]/IPAddress$</IPAddress>
<ClassID>$MPElement[Name="IsilonSNMP.Class.IsilonCluster.Nodes"]$</ClassID>
<InstanceSettings>
<Settings>
<Setting>
<Name>$MPElement[Name="MicrosoftSystemCenterNetworkDeviceLibrary!Microsoft.SystemCenter.NetworkDevice"]/IPAddress$</Name>
<Value>$Target/Property[Type="MicrosoftSystemCenterNetworkDeviceLibrary!Microsoft.SystemCenter.NetworkDevice"]/IPAddress$</Value>
</Setting>
<Setting>
<Name>$MPElement[Name="System!System.Entity"]/DisplayName$</Name>
<Value>Cluster</Value>
</Setting>
<Setting>
<Name>$MPElement[Name="IsilonSNMP.Class.IsilonCluster.Nodes"]/Name$</Name>
<Value>Cluster</Value>
</Setting>
</Settings>
</InstanceSettings>
</DataSource>
</Discovery>
<Discovery ID="IsilonSNMP.Discovery.IsilonCluster" Enabled="true" Target="MicrosoftSystemCenterNetworkDeviceLibrary!Microsoft.SystemCenter.NetworkDevice" ConfirmDelivery="true" Remotable="true" Priority="Normal">
<Category>Discovery</Category>
<DiscoveryTypes>
<DiscoveryClass TypeID="IsilonSNMP.Class.IsilonCluster" />
</DiscoveryTypes>
<DataSource ID="DS1" TypeID="IsilonSNMP.DataSource.DiscoverCluster">
<Interval>600</Interval>
<IP>$Target/Property[Type="MicrosoftSystemCenterNetworkDeviceLibrary!Microsoft.SystemCenter.NetworkDevice"]/IPAddress$</IP>
<CommunityString>$Target/Property[Type="MicrosoftSystemCenterNetworkDeviceLibrary!Microsoft.SystemCenter.NetworkDevice"]/CommunityString$</CommunityString>
<SystemOID>$Target/Property[Type="MicrosoftSystemCenterNetworkDeviceLibrary!Microsoft.SystemCenter.NetworkDevice"]/SystemOID$</SystemOID>
</DataSource>
</Discovery>
<Discovery ID="IsilonSNMP.Discovery.IsilonClustersGroup" Enabled="true" Target="IsilonSNMP.Group.IsilonClusters" ConfirmDelivery="true" Remotable="true" Priority="Normal">
<Category>Discovery</Category>
<DiscoveryTypes>
<DiscoveryRelationship TypeID="IsilonSNMP.Relationship.IsilonClustersGroupContainsIsilonClusters" />
</DiscoveryTypes>
<DataSource ID="GP1" TypeID="SC!Microsoft.SystemCenter.GroupPopulator">
<RuleId>$MPElement$</RuleId>
<GroupInstanceId>$MPElement[Name="IsilonSNMP.Group.IsilonClusters"]$</GroupInstanceId>
<MembershipRules>
<MembershipRule>
<MonitoringClass>$MPElement[Name="IsilonSNMP.Class.IsilonCluster"]$</MonitoringClass>
<RelationshipClass>$MPElement[Name="IsilonSNMP.Relationship.IsilonClustersGroupContainsIsilonClusters"]$</RelationshipClass>
<Expression>
<RegExExpression>
<ValueExpression>
<Property>$MPElement[Name="MicrosoftSystemCenterNetworkDeviceLibrary!Microsoft.SystemCenter.NetworkDevice"]/IPAddress$</Property>
</ValueExpression>
<Operator>ContainsSubstring</Operator>
<Pattern>.</Pattern>
</RegExExpression>
</Expression>
</MembershipRule>
</MembershipRules>
</DataSource>
</Discovery>
<Discovery ID="IsilonSNMP.Discovery.PhysicalDisk" Enabled="true" Target="IsilonSNMP.Class.IsilonCluster.Nodes" ConfirmDelivery="true" Remotable="true" Priority="Normal">
<Category>Discovery</Category>
<DiscoveryTypes>
<DiscoveryClass TypeID="IsilonSNMP.Class.IsilonCluster.PhysicalDisk" />
<DiscoveryRelationship TypeID="IsilonSNMP.Relationship.NodesHostsPhysicalDisk" />
</DiscoveryTypes>
<DataSource ID="DS1" TypeID="IsilonSNMP.DataSource.DiscoverPhysicalDisk">
<IPAddress>$Target/Host/Property[Type="MicrosoftSystemCenterNetworkDeviceLibrary!Microsoft.SystemCenter.NetworkDevice"]/IPAddress$</IPAddress>
<CommStr>$Target/Host/Property[Type="MicrosoftSystemCenterNetworkDeviceLibrary!Microsoft.SystemCenter.NetworkDevice"]/CommunityString$</CommStr>
<Interval>7800</Interval>
<ConfiguredNodes>$Target/Host/Property[Type="IsilonSNMP.Class.IsilonCluster"]/ConfiguredNodes$</ConfiguredNodes>
<BayIndex>$MPElement[Name="IsilonSNMP.Class.IsilonCluster.PhysicalDisk"]/BayIndex$</BayIndex>
<NodeCommStr>$MPElement[Name="IsilonSNMP.Class.IsilonCluster.PhysicalDisk"]/NodeCommStr$</NodeCommStr>
</DataSource>
</Discovery>
<Discovery ID="IsilonSNMP.Discovery.PhysicalFan" Enabled="true" Target="IsilonSNMP.Class.IsilonCluster.Nodes" ConfirmDelivery="true" Remotable="true" Priority="Normal">
<Category>Discovery</Category>
<DiscoveryTypes>
<DiscoveryClass TypeID="IsilonSNMP.Class.IsilonCluster.PhysicalFan" />
<DiscoveryRelationship TypeID="IsilonSNMP.Relationship.NodesHostsPhysicalFan" />
</DiscoveryTypes>
<DataSource ID="DS1" TypeID="IsilonSNMP.DataSource.DiscoverPhysicalFan">
<IPAddress>$Target/Host/Property[Type="MicrosoftSystemCenterNetworkDeviceLibrary!Microsoft.SystemCenter.NetworkDevice"]/IPAddress$</IPAddress>
<CommStr>$Target/Host/Property[Type="MicrosoftSystemCenterNetworkDeviceLibrary!Microsoft.SystemCenter.NetworkDevice"]/CommunityString$</CommStr>
<Interval>8000</Interval>
<ConfiguredNodes>$Target/Host/Property[Type="IsilonSNMP.Class.IsilonCluster"]/ConfiguredNodes$</ConfiguredNodes>
<FanNumber>$MPElement[Name="IsilonSNMP.Class.IsilonCluster.PhysicalFan"]/FanNumber$</FanNumber>
<NodeCommStr>$MPElement[Name="IsilonSNMP.Class.IsilonCluster.PhysicalFan"]/NodeCommStr$</NodeCommStr>
<FanSpeedHighCritical>14500</FanSpeedHighCritical>
<FanSpeedLowWarn>3400</FanSpeedLowWarn>
<FanSpeedLowCritical>3000</FanSpeedLowCritical>
</DataSource>
</Discovery>
</Discoveries>
<Monitors>
<UnitMonitor ID="UIGeneratedMonitore2c4dd195da8497bb99c9711e4134d70" Accessibility="Public" Enabled="true" Target="IsilonSNMP.Class.IsilonCluster" ParentMonitorID="Health!System.Health.AvailabilityState" Remotable="true" Priority="Normal" TypeID="Snmp!System.SnmpTrapProvider.2SingleEvent2StateMonitorType" ConfirmDelivery="false">
<Category>Custom</Category>
<AlertSettings AlertMessage="UIGeneratedMonitore2c4dd195da8497bb99c9711e4134d70_AlertMessageResourceID">
<AlertOnState>Warning</AlertOnState>
<AutoResolve>true</AutoResolve>
<AlertPriority>Normal</AlertPriority>
<AlertSeverity>MatchMonitorHealth</AlertSeverity>
</AlertSettings>
<OperationalStates>
<OperationalState ID="UIGeneratedOpStateId843498792d7d4fbf80d83f3939255dd9" MonitorTypeStateID="SecondEventRaised" HealthState="Success" />
<OperationalState ID="UIGeneratedOpStateId9fd776cd21a747c5994738e863b31fb9" MonitorTypeStateID="FirstEventRaised" HealthState="Warning" />
</OperationalStates>
<Configuration>
<FirstIP>$Target/Property[Type="MicrosoftSystemCenterNetworkDeviceLibrary!Microsoft.SystemCenter.NetworkDevice"]/IPAddress$</FirstIP>
<FirstCommunityString>$Target/Property[Type="MicrosoftSystemCenterNetworkDeviceLibrary!Microsoft.SystemCenter.NetworkDevice"]/CommunityString$</FirstCommunityString>
<FirstAllTraps>false</FirstAllTraps>
<FirstVersion>$Target/Property[Type="MicrosoftSystemCenterNetworkDeviceLibrary!Microsoft.SystemCenter.NetworkDevice"]/Version$</FirstVersion>
<FirstOIDProps>
<OIDProp>.1.3.6.1.4.1.12124.1.1.2.0</OIDProp>
</FirstOIDProps>
<FirstExpression>
<SimpleExpression>
<ValueExpression>
<XPathQuery Type="String">/DataItem/SnmpVarBinds/SnmpVarBind[1]/Value</XPathQuery>
</ValueExpression>
<Operator>Equal</Operator>
<ValueExpression>
<Value Type="String">1</Value>
</ValueExpression>
</SimpleExpression>
</FirstExpression>
<SecondIP>$Target/Property[Type="MicrosoftSystemCenterNetworkDeviceLibrary!Microsoft.SystemCenter.NetworkDevice"]/IPAddress$</SecondIP>
<SecondCommunityString>$Target/Property[Type="MicrosoftSystemCenterNetworkDeviceLibrary!Microsoft.SystemCenter.NetworkDevice"]/CommunityString$</SecondCommunityString>
<SecondAllTraps>false</SecondAllTraps>
<SecondVersion>$Target/Property[Type="MicrosoftSystemCenterNetworkDeviceLibrary!Microsoft.SystemCenter.NetworkDevice"]/Version$</SecondVersion>
<SecondOIDProps>
<OIDProp>.1.3.6.1.4.1.12124.1.1.2.0</OIDProp>
</SecondOIDProps>
<SecondExpression>
<SimpleExpression>
<ValueExpression>
<XPathQuery Type="String">/DataItem/SnmpVarBinds/SnmpVarBind[1]/Value</XPathQuery>
</ValueExpression>
<Operator>NotEqual</Operator>
<ValueExpression>
<Value Type="String">1</Value>
</ValueExpression>
</SimpleExpression>
</SecondExpression>
</Configuration>
</UnitMonitor>
<UnitMonitor ID="IsilonSNMP.Monitor.PhysicalDiskStatus" Accessibility="Internal" Enabled="true" Target="IsilonSNMP.Class.IsilonCluster.PhysicalDisk" ParentMonitorID="Health!System.Health.AvailabilityState" Remotable="true" Priority="Normal" TypeID="IsilonSNMP.MonitorType.PhysicalDiskStatus" ConfirmDelivery="true">
<Category>AvailabilityHealth</Category>
<AlertSettings AlertMessage="IsilonSNMP.Monitor.PhysicalDiskStatus_AlertMessageResourceID">
<AlertOnState>Warning</AlertOnState>
<AutoResolve>true</AutoResolve>
<AlertPriority>Normal</AlertPriority>
<AlertSeverity>MatchMonitorHealth</AlertSeverity>
<AlertParameters>
<AlertParameter1>$Target/Property[Type="System!System.Entity"]/DisplayName$</AlertParameter1>
<AlertParameter2>$Data/Context/SnmpVarBinds/SnmpVarBind[1]/Value$</AlertParameter2>
</AlertParameters>
</AlertSettings>
<OperationalStates>
<OperationalState ID="IsilonSNMP.Monitor.PhysicalDiskStatus_PhysicalDiskOK" MonitorTypeStateID="PhysicalDiskOK" HealthState="Success" />
<OperationalState ID="IsilonSNMP.Monitor.PhysicalDiskStatus_PhysicalDiskNotOK" MonitorTypeStateID="PhysicalDiskNotOK" HealthState="Warning" />
</OperationalStates>
<Configuration>
<Interval>120</Interval>
<IPAddress>$Target/Host/Host/Property[Type="MicrosoftSystemCenterNetworkDeviceLibrary!Microsoft.SystemCenter.NetworkDevice"]/IPAddress$</IPAddress>
<OID>.1.3.6.1.4.1.12124.2.52.1.5.$Target/Property[Type="IsilonSNMP.Class.IsilonCluster.PhysicalDisk"]/BayIndex$</OID>
<NodeCommStr>$Target/Property[Type="IsilonSNMP.Class.IsilonCluster.PhysicalDisk"]/NodeCommStr$</NodeCommStr>
</Configuration>
</UnitMonitor>
<UnitMonitor ID="IsilonSNMP.Monitor.PhysicalFanStatus" Accessibility="Internal" Enabled="true" Target="IsilonSNMP.Class.IsilonCluster.PhysicalFan" ParentMonitorID="Health!System.Health.AvailabilityState" Remotable="true" Priority="Normal" TypeID="IsilonSNMP.MonitorType.PhysicalFanStatus" ConfirmDelivery="true">
<Category>AvailabilityHealth</Category>
<AlertSettings AlertMessage="IsilonSNMP.Monitor.PhysicalFanStatus_AlertMessageResourceID">
<AlertOnState>Warning</AlertOnState>
<AutoResolve>true</AutoResolve>
<AlertPriority>Normal</AlertPriority>
<AlertSeverity>MatchMonitorHealth</AlertSeverity>
<AlertParameters>
<AlertParameter1>$Target/Property[Type="System!System.Entity"]/DisplayName$</AlertParameter1>
<AlertParameter2>$Data/Context/SnmpVarBinds/SnmpVarBind[1]/Value$</AlertParameter2>
</AlertParameters>
</AlertSettings>
<OperationalStates>
<OperationalState ID="IsilonSNMP.Monitor.PhysicalFanStatus_PhysicalFanOK" MonitorTypeStateID="PhysicalFanOK" HealthState="Success" />
<OperationalState ID="IsilonSNMP.Monitor.PhysicalFanStatus_PhysicalFanWarn" MonitorTypeStateID="PhysicalFanWarn" HealthState="Warning" />
<OperationalState ID="IsilonSNMP.Monitor.PhysicalFanStatus_PhysicalFanCritical" MonitorTypeStateID="PhysicalFanCritical" HealthState="Error" />
</OperationalStates>
<Configuration>
<Interval>120</Interval>
<IPAddress>$Target/Host/Host/Property[Type="MicrosoftSystemCenterNetworkDeviceLibrary!Microsoft.SystemCenter.NetworkDevice"]/IPAddress$</IPAddress>
<OID>.1.3.6.1.4.1.12124.2.53.1.4.$Target/Property[Type="IsilonSNMP.Class.IsilonCluster.PhysicalFan"]/FanNumber$</OID>
<NodeCommStr>$Target/Property[Type="IsilonSNMP.Class.IsilonCluster.PhysicalFan"]/NodeCommStr$</NodeCommStr>
<FanSpeedHighCritical>14500</FanSpeedHighCritical>
<FanSpeedLowWarn>3400</FanSpeedLowWarn>
<FanSpeedLowCritical>3000</FanSpeedLowCritical>
</Configuration>
</UnitMonitor>
<DependencyMonitor ID="IsilonSNMP.Monitor.ClusterPhysicalDiskAvailabilityDependency" Accessibility="Internal" Enabled="true" Target="IsilonSNMP.Class.IsilonCluster.Nodes" ParentMonitorID="Health!System.Health.AvailabilityState" Remotable="true" Priority="Normal" RelationshipType="IsilonSNMP.Relationship.NodesHostsPhysicalDisk" MemberMonitor="IsilonSNMP.Monitor.PhysicalDiskStatus">
<Category>AvailabilityHealth</Category>
<Algorithm>WorstOf</Algorithm>
<MemberUnAvailable>Error</MemberUnAvailable>
</DependencyMonitor>
<DependencyMonitor ID="IsilonSNMP.Monitor.ClusterPhysicalFanAvailabilityDependency" Accessibility="Internal" Enabled="true" Target="IsilonSNMP.Class.IsilonCluster.Nodes" ParentMonitorID="Health!System.Health.AvailabilityState" Remotable="true" Priority="Normal" RelationshipType="IsilonSNMP.Relationship.NodesHostsPhysicalFan" MemberMonitor="IsilonSNMP.Monitor.PhysicalFanStatus">
<Category>AvailabilityHealth</Category>
<Algorithm>WorstOf</Algorithm>
<MemberUnAvailable>Error</MemberUnAvailable>
</DependencyMonitor>
<DependencyMonitor ID="IsilonSNMP.Monitor.ClusterClusterAvailabilityDependency" Accessibility="Internal" Enabled="true" Target="IsilonSNMP.Class.IsilonCluster" ParentMonitorID="Health!System.Health.AvailabilityState" Remotable="true" Priority="Normal" RelationshipType="IsilonSNMP.Relationship.ClusterHostsNodes" MemberMonitor="Health!System.Health.AvailabilityState">
<Category>AvailabilityHealth</Category>
<Algorithm>WorstOf</Algorithm>
<MemberUnAvailable>Error</MemberUnAvailable>
</DependencyMonitor>
</Monitors>
</Monitoring>
<Presentation>
<StringResources>
<StringResource ID="UIGeneratedMonitore2c4dd195da8497bb99c9711e4134d70_AlertMessageResourceID" />
<StringResource ID="AlertMessageIDb1a3848769824949889fcc4c159cf462" />
<StringResource ID="IsilonSNMP.Monitor.PhysicalDiskStatus_AlertMessageResourceID" />
<StringResource ID="IsilonSNMP.Monitor.PhysicalFanStatus_AlertMessageResourceID" />
</StringResources>
</Presentation>
<LanguagePacks>
<LanguagePack ID="ENU" IsDefault="true">
<DisplayStrings>
<DisplayString ElementID="IsilonSNMP">
<Name>Isilon SNMP MP</Name>
<Description>Management pack to discover an Isilon cluster running OneFS 5.5</Description>
</DisplayString>
<DisplayString ElementID="IsilonSNMP.Class.IsilonCluster">
<Name>Isilon Cluster</Name>
<Description>Isilon SNMP Device</Description>
</DisplayString>
<DisplayString ElementID="IsilonSNMP.Class.IsilonCluster" SubElementID="Hostname">
<Name>Hostname</Name>
</DisplayString>
<DisplayString ElementID="IsilonSNMP.Class.IsilonCluster" SubElementID="ConfiguredNodes">
<Name>ConfiguredNodes</Name>
</DisplayString>
<DisplayString ElementID="IsilonSNMP.Class.IsilonCluster.Nodes" SubElementID="Name">
<Name>Name</Name>
</DisplayString>
<DisplayString ElementID="IsilonSNMP.Class.IsilonCluster.Nodes">
<Name>Isilon Cluster Hosts Nodes</Name>
<Description>Containment class for the Isilon cluster with components such as hard disks, fans, power supplies, etc.</Description>
</DisplayString>
<DisplayString ElementID="IsilonSNMP.Relationship.ClusterHostsNodes">
<Name>Isilon Cluster Hosts Nodes</Name>
</DisplayString>
<DisplayString ElementID="IsilonSNMP.Discovery.IsilonCluster">
<Name>Discover Isilon Cluster</Name>
<Description>Discovery of the Isilon Cluster using OID strings from RFC1213.</Description>
</DisplayString>
<DisplayString ElementID="IsilonSNMP.DataSource.BasicSNMPProbe">
<Name>Isilon Basic Probe Data Source</Name>
</DisplayString>
<DisplayString ElementID="IsilonSNMP.DataSource.DiscoverCluster">
<Name>Discover Isilon Cluster</Name>
</DisplayString>
<DisplayString ElementID="IsilonSNMP.Relationship.IsilonClustersGroupContainsIsilonClusters">
<Name>Isilon Devices Group Contains Isilon Cluster</Name>
</DisplayString>
<DisplayString ElementID="IsilonSNMP.Discovery.Cluster">
<Name>Discover Isilon Cluster Containment Class</Name>
<Description>Discovers the Cluster containment class, which hosts managed objects such as fans</Description>
</DisplayString>
<DisplayString ElementID="IsilonSNMP.Discovery.IsilonClustersGroup">
<Name>Isilon Device Group Populator</Name>
</DisplayString>
<DisplayString ElementID="UIGeneratedMonitore2c4dd195da8497bb99c9711e4134d70">
<Name>PlaceHolder</Name>
<Description>Placeholder to detect the Isilon cluster</Description>
</DisplayString>
<DisplayString ElementID="UIGeneratedMonitore2c4dd195da8497bb99c9711e4134d70" SubElementID="UIGeneratedOpStateId843498792d7d4fbf80d83f3939255dd9">
<Name>Second Event Raised</Name>
</DisplayString>
<DisplayString ElementID="UIGeneratedMonitore2c4dd195da8497bb99c9711e4134d70" SubElementID="UIGeneratedOpStateId9fd776cd21a747c5994738e863b31fb9">
<Name>First Event Raised</Name>
</DisplayString>
<DisplayString ElementID="UIGeneratedMonitore2c4dd195da8497bb99c9711e4134d70_AlertMessageResourceID">
<Name>PlaceHolder</Name>
<Description>Placeholder alert to detect the Isilon cluster</Description>
</DisplayString>
<DisplayString ElementID="IsilonSNMP.Monitor.PhysicalDiskStatus">
<Name>Isilon PhysicalDisk Status Monitor</Name>
<Description>Monitor that generates an alert when the PhysicalDisk status is not OK.</Description>
</DisplayString>
<DisplayString ElementID="IsilonSNMP.Monitor.PhysicalDiskStatus" SubElementID="IsilonSNMP.Monitor.PhysicalDiskStatus_PhysicalDiskOK">
<Name>PhysicalDiskOK</Name>
</DisplayString>
<DisplayString ElementID="IsilonSNMP.Monitor.PhysicalDiskStatus" SubElementID="IsilonSNMP.Monitor.PhysicalDiskStatus_PhysicalDiskNotOK">
<Name>PhysicalDiskNotOK</Name>
</DisplayString>
<DisplayString ElementID="IsilonSNMP.Monitor.PhysicalDiskStatus_AlertMessageResourceID">
<Name>Isilon PhysicalDisk Status</Name>
<Description>The Disk ({0}) is in a warning or error state. The Disk state is: {1}.</Description>
</DisplayString>
<DisplayString ElementID="IsilonSNMP.Class.IsilonCluster.PhysicalDisk">
<Name>Isilon Disk</Name>
<Description>Disk</Description>
</DisplayString>
<DisplayString ElementID="IsilonSNMP.Class.IsilonCluster.PhysicalDisk" SubElementID="Index">
<Name>Index</Name>
</DisplayString>
<DisplayString ElementID="IsilonSNMP.Relationship.NodesHostsPhysicalDisk">
<Name>Isilon Cluster Hosts PhysicalDisk</Name>
</DisplayString>
<DisplayString ElementID="IsilonSNMP.Discovery.PhysicalDisk">
<Name>Discover Isilon Cluster Physical Disks</Name>
</DisplayString>
<DisplayString ElementID="IsilonSNMP.DataSource.DiscoverPhysicalDisk">
<Name>Discover Isilon Cluster PhysicalDisk</Name>
</DisplayString>
<DisplayString ElementID="IsilonSNMP.Monitor.ClusterPhysicalDiskAvailabilityDependency">
<Name>Isilon Disk</Name>
</DisplayString>
<DisplayString ElementID="IsilonSNMP.Monitor.PhysicalFanStatus">
<Name>Isilon PhysicalFan Status Monitor</Name>
<Description>Monitor that generates an alert when the PhysicalFan speed is not within the permitted range.</Description>
</DisplayString>
<DisplayString ElementID="IsilonSNMP.Monitor.PhysicalFanStatus" SubElementID="IsilonSNMP.Monitor.PhysicalFanStatus_PhysicalFanOK">
<Name>PhysicalFanOK</Name>
</DisplayString>
<DisplayString ElementID="IsilonSNMP.Monitor.PhysicalFanStatus" SubElementID="IsilonSNMP.Monitor.PhysicalFanStatus_PhysicalFanWarn">
<Name>PhysicalFanWarn</Name>
</DisplayString>
<DisplayString ElementID="IsilonSNMP.Monitor.PhysicalFanStatus" SubElementID="IsilonSNMP.Monitor.PhysicalFanStatus_PhysicalFanCritical">
<Name>PhysicalFanCritical</Name>
</DisplayString>
<DisplayString ElementID="IsilonSNMP.Monitor.PhysicalFanStatus_AlertMessageResourceID">
<Name>Isilon PhysicalFan Status</Name>
<Description>The Fan ({0}) is in a warning or error state. Current Fan speed is: {1}.</Description>
</DisplayString>
<DisplayString ElementID="IsilonSNMP.Class.IsilonCluster.PhysicalFan">
<Name>Isilon Fan</Name>
<Description>Fan</Description>
</DisplayString>
<DisplayString ElementID="IsilonSNMP.Class.IsilonCluster.PhysicalFan" SubElementID="Index">
<Name>Index</Name>
</DisplayString>
<DisplayString ElementID="IsilonSNMP.Relationship.NodesHostsPhysicalFan">
<Name>Isilon Cluster Hosts PhysicalFan</Name>
</DisplayString>
<DisplayString ElementID="IsilonSNMP.Discovery.PhysicalFan">
<Name>Discover Isilon Cluster Physical Fans</Name>
</DisplayString>
<DisplayString ElementID="IsilonSNMP.DataSource.DiscoverPhysicalFan">
<Name>Discover Isilon Cluster PhysicalFan</Name>
</DisplayString>
<DisplayString ElementID="IsilonSNMP.Monitor.ClusterPhysicalFanAvailabilityDependency">
<Name>Isilon Fan</Name>
</DisplayString>
<DisplayString ElementID="IsilonSNMP.Monitor.ClusterClusterAvailabilityDependency">
<Name>Cluster</Name>
</DisplayString>
</DisplayStrings>
</LanguagePack>
</LanguagePacks>

Similar Messages

  • SCOM - How internally classes and properties, discovered and monitored

    I am trying to understand how internally classes created instances, and how it runs in the agent, discovered and monitored in the state view. 
    Need to understand this for Create my own design and develop complete management pack, to display some hard coded value to be displayed in the state view. Both target and source will be my scom server itself.
    Thanks and Regards,
    Boopalan

    Adding more info:
    How to Create a State View
    http://technet.microsoft.com/en-us/library/ff832979.aspx
    We
    are trying to better understand customer views on social support experience, so your participation in this
    interview project would be greatly appreciated if you have time.
    Thanks for helping make community forums a great place.

  • Material groups, valuation classes and GL acccounts

    Hi Gurus,
    I need to set up new material groups with descriptions, valuation classes and their related GL accounts. What steps do I need to follow?
    Please explain.
    Thanks
    Anusha

    Go to transaction OMSF to define the material group.
    Go to transaction OMSK to define the valuation class.
    Here you can define or use an existing A/C category reference, then create the valuation class and assign the A/C category to it; in the next step you can assign the material type.
    G/L accounts you can define in transaction FS00, and you can assign them to the valuation class in transaction OBYC.

  • Custom User and Group classes

    Hi,
    I have a login custom module which does the authentication for my application.
    Till now I was using WLSUserImpl and WLSGroupImpl and everything was working fine.
    Now, to make the LoginModule WebLogic independent, I replaced the User and Group
    classes with my own classes which extend from java.security.Principal.
    But for some reason this isn't working. Am I missing something obvious?
    This is the exception stack trace which I get:
    java.lang.SecurityException: [Security:090398]Invalid Subject: principals=[com.isone.security.providers.authentication.ISOUser@1698cbe,
    com.isone.security.providers.authentication.ISOGroup@9719f4, com.isone.security.providers.authentication.ISOGroup@28ebb4,
    com.isone.security.providers.authentication.ISOGroup@8ab721, com.isone.security.providers.authentication.ISOGroup@fcf06c,
    com.isone.security.providers.authentication.ISOGroup@c7539, com.isone.security.providers.authentication.ISOGroup@1e41830,
    com.isone.security.providers.authentication.ISOGroup@1f01b29, com.isone.security.providers.authentication.ISOGroup@8721bd,
    com.isone.security.providers.authentication.ISOGroup@1b81d4f, com.isone.security.providers.authentication.ISOGroup@8c6e04,
    com.isone.security.providers.authentication.ISOGroup@18aeabe, com.isone.security.providers.authentication.ISOGroup@13968f1,
    com.isone.security.providers.authentication.ISOGroup@18c28a, com.isone.security.providers.authentication.ISOGroup@18bff68,
    com.isone.security.providers.authentication.ISOGroup@2d2da4]
         at weblogic.security.service.SecurityServiceManager.seal(SecurityServiceManager.java:682)
         at weblogic.security.service.RoleManager.getRoles(RoleManager.java:279)
         at weblogic.security.service.AuthorizationManager.isAccessAllowed(AuthorizationManager.java:694)
         at weblogic.servlet.security.internal.WebAppSecurity.hasPermission(WebAppSecurity.java:567)
         at weblogic.servlet.security.internal.SecurityModule.checkPerm(SecurityModule.java:134)
         at weblogic.servlet.security.internal.FormSecurityModule.checkUserPerm(FormSecurityModule.java:327)
         at weblogic.servlet.security.internal.SecurityModule.beginCheck(SecurityModule.java:182)
         at weblogic.servlet.security.internal.FormSecurityModule.checkA(FormSecurityModule.java:181)
         at weblogic.servlet.security.internal.ServletSecurityManager.checkAccess(ServletSecurityManager.java:145)
         at weblogic.servlet.internal.WebAppServletContext.invokeServlet(WebAppServletContext.java:3539)
         at weblogic.servlet.internal.ServletRequestImpl.execute(ServletRequestImpl.java:2585)
         at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:197)
         at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:170)

    And this will explain why there is no way to do this right now:
    (CR125681 -- although it says 7.0SP1 it is not fixed even in 8.1 SP2 and
    there is no time frame for the fix)
    http://support.bea.com/application?namespace=askbea&origin=ask_bea_answer.jsp&event=link.view_answer_page_clfydoc&answerpage=solution&page=wls/S-21705.htm
    We've had the same issue and even have an open support case; for now the only way to work around the bug is to use the WLSUserImpl and WLSGroupImpl classes.
    HTH,
    Dejan
    Pavel wrote:
    See if this will help:
    http://edocs.bea.com/wls/docs81/dvspisec/pv.html
    Pavel.
    "Anil" <[email protected]> wrote:
    I actually extended PrincipalValidatorImpl and returned java.security.Principal
    as the base class.
    But still I got the same exception.
    PaulF <paulf@reply_in_newsgroup.com> wrote (replying to Anil's original question and stack trace, quoted above):
    I think that you need to extend WLSAbstractPrincipal instead of WLSPrincipal if you aren't going to implement your own PrincipalValidator. The default PrincipalValidator is going to expect a principal that extends WLSAbstractPrincipal.
    PaulF
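    For what it's worth, a custom principal in plain Java looks roughly like the sketch below. This is only a minimal illustration under assumptions: the class name SimpleUserPrincipal is invented, and it is not WebLogic-specific code. As the answers above indicate, on WebLogic you would either keep using WLSUserImpl/WLSGroupImpl or extend WLSAbstractPrincipal so the default PrincipalValidator can sign and validate the principal.
    import java.security.Principal;
    import java.util.Objects;
    // Minimal sketch of a custom principal (hypothetical class name).
    // On WebLogic, extending weblogic.security.principal.WLSAbstractPrincipal
    // (as suggested above) is what lets the default PrincipalValidator accept it.
    public class SimpleUserPrincipal implements Principal {
        private final String name;
        public SimpleUserPrincipal(String name) {
            this.name = Objects.requireNonNull(name, "principal name");
        }
        @Override
        public String getName() {
            return name;
        }
        // equals() and hashCode() matter: authorization code compares principals,
        // so two instances carrying the same name must compare as equal.
        @Override
        public boolean equals(Object other) {
            if (this == other) return true;
            if (!(other instanceof SimpleUserPrincipal)) return false;
            return name.equals(((SimpleUserPrincipal) other).name);
        }
        @Override
        public int hashCode() {
            return name.hashCode();
        }
        @Override
        public String toString() {
            return "SimpleUserPrincipal[" + name + "]";
        }
    }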

  • SCOM Monitoring for Azure and its Licensing

    Hi All,
    In a hybrid environment, if I have my SCOM management group set up on-premises and want to monitor boxes in Azure, does this require additional licensing? If yes, can anyone help me with the information?
    Regards,
    Hemanshu Kadam

    Hi,
    To monitor Azure, here is an article for your reference:
    Walkthrough to Configure System Center Management Pack for Windows Azure Fabric Preview for SCOM 2012 SP1 (with a MetricsHub Bonus)
    http://blogs.msdn.com/b/walterm/archive/2013/04/13/first-impressions-on-system-center-management-pack-for-windows-azure-fabric-preview-for-scom-2012-sp1.aspx
    Regards,
    Yan Li

  • Need help with Group Headings and showing them on drill down.

    Hi all:
    I think this is a simple question and feel that I should already know the answer but for some reason it eludes me.  What I would like to know is the proper way to hide/display group headings and have them show up on drill down BUT not repeat on each group header.
    What I have for grouping is as follows:
    Group Header #1 - Sales Rep
      Group Header #2 - Customer
        Group Header #3 - Product Class
          Group Header #4 - Stock Code
            Details
          Group Footer #4 - summary calculated on this line
        Group Footer #3 - summary calculated on this line
      Group Footer #2 - summary calculated on this line
    Group Footer #1 - summary calculated on this line
    My goal is to have my report show only the lines listed below by default and allow the user to drill down group by group to the details section:
    Group Header #1 - Sales Rep
      Group Footer #2 - summary calculated on this line
    Group Footer #1 - summary calculated on this line
    If anyone could offer any help it would be greatly appreciated!
    Thanks,
    FatMan

    Click Section Expert, then, for GH3 for example, highlight it, click the formula box for Suppress, and put in the code:
    Drilldowngrouplevel < 3
    The result will be that this GH3 is suppressed whenever the report is showing GH1 and 2 but not 3.
    In addition to this, you must Hide(Drill-down ok) GH3. This is also done in the section expert.
    Then just do the same for GF3, and for GH4/GF4 make sure you use...
    Drilldowngrouplevel < 4

  • Transport classes and programs to another server

    Apart from transport routes and the ChaRM copy function, do we have other ways to move classes and programs to another server?
    For example, a copy or something similar.
    best regards,
    Blake Le

    Hi Blake
    SAP R/3 Correction and Transport System
    Operating system level files in the transport process:
    The SAP C program TP requires a special file structure for the transport process. The file system is operating system dependent. TP uses a transport directory or file system, which is called /usr/sap/trans.
    The /usr/sap/trans file system is generally NFS mounted from the development system to the other systems unless a system is defined as a single system in the CTS pipeline. All the subdirectories should have <SID>adm as the owner and sapsys as the group, and proper read, write and execute access should be given to the owner and the group. The TP imports are always performed by <SID>adm.
    The following are the subdirectories in /usr/sap/trans:
    /data
    /cofiles
    /bin
    /log
    /actlog
    /buffer
    /sapnames
    /tmp
    /usr/sap/trans/data: holds the data of transport objects after they are released . The example of a data file is R904073.DEV. The extension DEV means the data file was released from the DEV or development system.
    /usr/sap/trans/cofiles: The cofiles directory holds the command files for all change requests. These files are like a command or control files used to import the data files. The common directory for CTS system is /usr/sap/trans. After a change request is released from the source system , the data is exported immediately to the file system of the operating system. The SAP transport utility TP uses the cofile to transport a data file. The example of a file in cofiles directory is K904073.DEV.
    /usr/sap/trans/bin: holds the most important file in the CTS system, TPPARAM. The TPPARAM file has all the information about the CTS systems in the CTS pipeline. It is the parameter file for the transport program TP and the common file for all the systems in the CTS pipeline. Since /usr/sap/trans should be NFS mounted to all the systems in a CTS pipeline, the TP program has access to the TPPARAM file from all the systems. The following is an example of a typical TPPARAM file for five SAP systems in the CTS pipeline:
    #@(#) TPPARAM.sap 20.6 SAP 95/03/28
    # Template of TPPARAM for UNIX
    # First we specify global values for some parameters,
    # later the system specific incarnation of special parameters
    # Global parameters
    transdir = /usr/sap/trans/
    dbname = $(system)
    alllog = ALOG$(syear)$(yweek)
    syslog = SLOG$(syear)$(yweek).$(system)
    # System specific parameters
    # Example: T11
    DEV/dbname = DEV
    DEV/dbhost = sap9f
    DEV/r3transpath = /usr/sap/DEV/SYS/exe/run/R3trans
    QAS/dbname = QAS
    QAS/dbhost = sap8f
    QAS/r3transpath = /usr/sap/QAS/SYS/exe/run/R3trans
    TRN/dbname = TRN
    TRN/dbhost = sap17
    TRN/r3transpath = /usr/sap/TRN/SYS/exe/run/R3trans
    PRE/dbname = PRE
    PRE/dbhost = sap19f
    PRE/r3transpath = /usr/sap/PRE/SYS/exe/run/R3trans
    PRD/dbname = PRD
    PRD/dbhost = sap18f
    PRD/r3transpath = /usr/sap/PRD/SYS/exe/run/R3trans
    /usr/sap/trans/log: holds all the log files, trace files and statistics for the CTS system. When the user goes to transaction SE09 (Workbench Organizer) or SE10 (Customizing Organizer) and opens the log for a transport, the log file for that transport will be read from the /usr/sap/trans/log directory. Each change request should have a log file. Examples of log files are DEVG904073.QAS, DEVI904073.QAS and DEVV904073.QAS. The name of a log file consists of the names of the change request, the executed step, and the system in which the step was executed:
    <source system><action><6 digits>.<target system>
    Now we can analyze the above example, DEVG904073.QAS. The <source system> = DEV, <action> = G or report and screen generation, <6 digits> = 904073 (this six-digit number is exactly the same as the six digits of the transport) and the <target system> = QAS.
    Possible values for <action> are:
    A: Dictionary activation
    D: Import of application-defined objects
    E: R3trans export
    G: Report and screen generation
    H: R3trans dictionary import
    I: R3trans main import
    L: R3trans import of the command files
    M: Activation of the enqueue modules
    P: Test import
    R: Execution of reports after put (XPRA)
    T: R3trans import of table entries
    V: Set version flag
    X: Export of application-defined objects.
    /usr/sap/trans/actlog: holds action log files. The example of an action file is DEVZ902690.DEV. The following are the contents of the file:
    1 ETK220 "==================================================" "==============================================
    1 ETK191 "04/30/1998" Action log for request/task: "DEVK902690"
    1 ETK220 "==================================================" "==============================================
    1 ETK185 "04/30/1998 18:02:32" "MOHASX01" has reincluded the request/task
    4 EPU120 Time... "18:02:32" Run time... "00:00:00"
    1 ETK193 "04/30/1998 18:02:33" "MOHASX01" owner, linked by "MOHASX01" to "DEVK902691"
    4 EPU120 Time... "18:02:33" Run time... "00:00:00"
    1 ETK190 "05/04/1998 11:02:40" "MOHASX01" has locked and released the request/task
    1 ETK194 "05/04/1998 11:02:40" **************** End of log *******************
    4 EPU120 Time... "11:02:40" Run time... "00:00:09"
    ~
    ~"DEVZ902690.DEV" 10 lines, 783 characters
    /usr/sap/trans/buffer: transport buffer of the target systems; contains control information on which requests are to be imported into which systems and in what order the imports must occur. The /usr/sap/trans/buffer will have a directory for each system in the CTS pipeline. For example the buffer file for DEV system is /usr/sap/trans/buffer/DEV.
    /usr/sap/trans/sapnames: holds information pertaining to transport requests for each system user. There are files for each user who released change requests from the system.
    /usr/sap/trans/tmp: holds information about temporary data and log files. While the transport is occurring, the Basis administrator can find a file related to the transport in the tmp directory; that file shows the exact status of the transport (what objects are being imported at that time).
    Important SAP delivery class and table types and tables in the CTS process:
    Delivery class
    The delivery class defines who (i.e. the SAP system itself or the customer) is responsible for maintaining the table contents. In addition, the delivery class controls how the table behaves in a client copy and an upgrade. For example, when you select an SAP-defined profile to perform a client copy, certain tables are selected according to their delivery class. The DD02L table can show which delivery class a table belongs to.
    The following delivery classes exist:
    A: Application table.
    C: Customizing table, maintenance by customer only.
    L: Table for storing temporary data.
    G: Customizing table, entries protected against overwriting.
    E: Control table.
    S: System table, maintenance only by SAP.
    W: System table, contents can be transported via own TR objects.
    Table type
    The table type defines whether a physical table exists for the logical table description defined in the ABAP/4 Dictionary and how the table is stored on the database.
    The following are different table types in SAP:
    Transparent Tables
    There is a physical table on the database for each transparent table. The names of the physical table and the logical table definition in the ABAP/4 Dictionary are same. For every transparent table in SAP, there is a table in database. The business and application data are stored in transparent tables.
    Structure
    No data records exist on the database for a structure. Structures are used for the interface definition between programs or between screens and programs.
    Append Structure
    An Append structure defines a subset of fields which belong to another table or structure but which are treated as a separate object in the correction management. Append structures are used to support modifications.
    The following table types are used for internal purposes, for example to store control data or for continuous texts:
    Pooled table
    Pooled tables can be used to store control data (e.g. screen sequences, program parameters or temporary data). Several pooled tables can be combined to form a table pool. The table pool corresponds to a physical table on the database in which all the records of the allocated-pooled tables are stored.
    Cluster table
    Cluster tables contain continuous text, for example documentation. Several cluster tables can be combined to form a table cluster. Several logical lines of different tables are combined to form a physical record in this table type. This permits object-by-object storage or object-by-object access. In order to combine tables in clusters, at least part of the keys must agree. Several cluster tables are stored in one corresponding table on the database.
    Tables in CTS process:
    TRBAT and TRJOB:
    TRJOB and TRBAT are the major tables in the CTS process. After the TP program has sent the event to the R/3 system, RDDIMPDP checks table TRBAT in the target system to find out if there is an action to be performed. Mass activation, distribution, or table conversions are examples of such actions. If there is an action to be performed, RDDIMPDP starts the appropriate program as a background task. RDDIMPDP then reschedules itself.
    By checking table TRJOB, RDDIMPDP automatically recognizes if a previous step was aborted, and restarts this step. For each transport request, the TP program inserts an entry into table TRBAT. If the return code in this table is 9999, the step is waiting to be performed. Return code 8888 indicates that the step is active and currently being processed. A return code of 12 or less indicates that the step is finished. In addition, TP inserts a header entry to let the RDDIMPDP program know to start processing. The return code column will therefore contain a B for begin. When RDDIMPDP is started, it sets the header entry to R(un), and starts the required program. When all the necessary actions are performed for all the transport requests, the return code column contains all the return codes received, and the TIMESTAMP column contains the finishing time. The header entry is set to F(inished). TP monitors the entries in the TRBAT and TRJOB tables. When the header entry in TRBAT is set to finished, the entry in TRJOB is deleted.
    Transport Tables SE06
    TDEVC - Development classes
    TASYS - Details of the delivery. Systems in the group that should automatically receive requests, have to be specified in table TASYS.
    TSYST - The transport layers will be assigned to the integration systems. ( Define all systems)
    TWSYS - Consolidation routes ( define consolidation path)
    DEVL - Transport layers are defined here
    In the "Configuring the CTS system" section, we will learn more about the transport tables in the SE06 transaction.
    Programs in the CTS process:
    In the CTS table section we learned about the RDDIMPDP program. RDDIMPDP program needs to be scheduled in all the clients in an instance. It is recommended to schedule the RDDIMPDP as event driven.
    RDDPUTPP and RDDNEWPP programs can be used to schedule RDDIMPDP program in the background.
    The ABAP/4 programs that RDDIMPDP starts are determined by the transport step to be executed that is entered in the function field of table TRBAT.
    Function Job Name Description of transport Steps
    J RDDMASGL Activation of ABAP/4 dictionary objects
    M RDDMASGL Activation of match codes and lock objects
    S RDDDISOL Analysis of database objects to be converted
    N RDDGENOL Conversion of database objects
    Y RDDGENOL Conversion of matchcode tables
    X RDDDICOL Export of AD0 objects
    D RDDDIC1L Import of AD0 objects
    E RDDVERSE Version management update during export
    V RDDVERSL Version management update during import
    R RDDEXECL Execution of programs for post - import processing
    G RDDDIC3L Generation of ABAP/4 programs and screens
    Version Management:
    One of the important features of Workbench Organizer is Version Management. This feature works for all the development objects. Using the version management feature the users can compare and retrieve previous versions of objects.
    Version management provides for comparisons, restore of previous versions, documentation of changes and assistance in the adjustment of data after upgrading to a new release. With the release of a change request, version maintenance is automatically recorded for each object. If an object in the system has been changed N times, it will have N delta versions and one active version. To display version management, for ABAPs use transaction SE38 and for tables, domains and data elements use SE11. The path to follow is Utilities -> Display version. Using version management, users can view existing versions of previously created ABAP code, make changes to the code, compare code versions and restore the original version of the code. Users can now restore previous versions without the cut-and-paste steps of the past.
    TP and R3trans program:
    The basis administrator uses the TP program to transport SAP objects from one system to another. TP is a C program delivered by SAP that runs independently of the R/3 system. The TP program uses the appropriate files located in the common transport directory /usr/sap/trans. TP starts C programs, ABAP/4 programs and special operating system commands to do its job. R3trans is one of the most important utility programs called by TP. Before using the TP program, the basis administrator needs to make sure that the CTS system is set up properly and the right version of TP is running in the system. The TP program is located in the runtime directory /usr/sap/<SID>/SYS/exe/run. It is automatically copied in the install process. A global parameter file, TPPARAM, which contains the databases of the different target systems and other information for the transport process, controls TP. The global parameter file determines which R3trans is used for each system. If the parameter r3transpath is not defined properly then no export and import can be done. The basis administrator should make sure that the default value of "r3transpath" is properly defined. Later in this chapter we will learn more about TP and R3trans; we are also going to see how they are used.
    Configuring the TPPARAM file:
    Each time TP is started, it must know the location of the global parameter file. As we have seen before, the TPPARAM file should be in the directory /usr/sap/trans/bin. The parameters in TPPARAM can be either global (valid for each and every system in the CTS pipeline) or local to one system. The parameters are either operating system dependent (these parameters are preceded by a keyword corresponding to the specific operating system) or database dependent (they contain a keyword corresponding to a specific database system).
    The global parameter file provides variables that can be used for defining parameters. The variables can be defined in the format $(xyz). The brackets can be substituted with the "\" character if required.
    The following pre-defined variables are available for the global parameter file:
    $(cpu1): The CPU name can be sun or as4 for example. In heterogeneous networks this variable is very important.
    $(cpu2): Acronym for the name of the operating system. The example for this variable can be
    hp-ux, or sunos . This is an operating system specific variable.
    $(dname): Used for the day of the week (SUN, MON, ...).
    $(mday): Used for the day of the current month (01-31).
    $(mname): Used for the name of the month (JAN...DEC).
    $(mon): Used for the Month (01-12).
    $(system): R/3 System name.
    $(wday): Day of the week (00-06, Sunday=00, Monday=01, Tuesday=02 and so on).
    $(yday): Day of the current year (001-366). Using the number any day of the year can be chosen.
    $(year): Year (Example:1998 or 1999).
    $(syear): Short form of the year (two positions).
    $(yweek): Calendar week (00-53). The first week begins with the first Sunday of the year.
    For the database connection:
    The transport environment also needs parameters to connect to the R/3 System database. As we already know, every instance in the R/3 CTS pipeline has its own database, so specific parameters should be defined for each database system. From the dbtype parameter of the RSPARAM file, the TP program identifies the database system.
    The two parameters "dbname" and "dbhost" are required for ORACLE databases.
    DBHOST: is the name of the computer on which the database processes execute. TCP/IP name of the host if NT is being used.
    DBNAME: is the name of the database instance.
    As of Release 3.0E, two new parameters have been introduced.
    DBLOGICALNAME: The default value is $(system). The logical name that was used to install the database.
    DBCONFPATH: The default value is $(transdir).
    The parameters "dbname" and "dbhost" are also used for INFORMIX databases in an installation:
    DBHOST: Same as Oracle.
    DBNAME: Name of the database instance; uppercase and lowercase are distinguished here.
    INFORMIXDIR: "/informix/<SAPSID>" is the default value. Defines the directory name where the database software can be found.
    INFORMIXSQLHOSTS: "$(informixdir)/etc/sqlhosts[.tli|.soc]" is the default value under Unix. The name of the SQLhosts file with its complete path is defined with this parameter.
    INFORMIX_SERVER: "$(dbhost)$(dbname)shm" is the default value. The name of the database server may be specified for a local connect.
    INFORMIX_SERVERALIAS: "$(dbhost)$(dbname)tcp" is the default value. The name of the database server can be specified for a remote connect.
    For a Microsoft SQL Server database the two parameters "dbname" and "dbhost" are also required.
    DBHOST: The TCP/IP name of the host on which the database is running.
    DBNAME: The database instance name.
    For DB2 in AS/400 only u201Cdbhostu201D is required.
    DBHOST: System name of the host on which the database is running.
    If "OptiConnect" is used, the following line should be specified:
    OPTICONNECT 1
    For DB2/ AIX
    The two parameters "dbname" and "dbhost" are required:
    DBHOST: The host on which the database processes are running. It is the TCP/IP name of the host for Windows NT (As we have seen in the earlier examples).
    DBNAME: Database instance name.
    The DB2 for AIX Client Application Enabler Software must also be installed on the host on which tp is running.
    ALLLOG: "ALOG$(syear)$(yweek)" is the default value. This variable can be used in the TPPARAM file to specify the name of a file in which tp stores information about every transport step carried out for a change request anywhere in the transport process. The file always resides in the log directory.
    SYSLOG: "SLOG$(syear)$(yweek).$(system)" is the default value. This variable can be used to name a file in which tp stores information about the progress of import actions in a certain R/3 System. The file does not store information for any particular change request. The file always resides in the log directory.
    tp_VERSION: Zero is the default value. If this parameter is set to a value other than zero, a lower version of tp may not work with this TPPARAM file. If the default value (zero) is set, the parameter has no effect.
    STOPONERROR (numeric value): The default value is 9. When STOPONERROR is set to zero, tp is never stopped in the middle of an "import" or "put" call. When STOPONERROR is set to a value greater than zero, tp stops as soon as a change request generates a return code that is equal to or greater than this value (the numeric value of the STOPONERROR parameter is stored in the variable BADRC). Change requests which still have to be processed for the current step are first completed. A "SYNCMARK" in the buffer of the R/3 System involved sets a limit here. tp divides the value of this parameter between two internal variables. STOPONERROR itself is treated as a boolean variable that determines whether tp should be stopped if the return code is too high.
    REPEATONERROR (also a numeric value): The default value is 9. The REPEATONERROR parameter is similar to STOPONERROR. The difference is that REPEATONERROR specifies the return code up to which a change request is considered to be successfully processed. Return codes less than REPEATONERROR are accepted as "in order". Change requests that were not processed successfully stay in the buffer.
    NEW_SAPNAMES: The default value is "FALSE". A file is created for each user of the R/3 System group in the "sapnames" subdirectory of the transport directory. Except on some operating systems, the name of the user is the name of the file. It is very important to remember that special characters or the length of the file name could cause problems. If all the R/3 Systems in the transport group have at least Release level 3.0, the TP program is able to handle this problem: the user names are modified to create file names that are valid in all operating systems, and the real user names are stored in a corresponding file.
    Though we have seen so many parameters, for the minimum configuration the following two parameters are very important.
    TRANSDIR: specifies the name of the common transport directory. The following is a typical example from TPPARAM of Unix as we have seen before.
    transdir = /usr/sap/trans/
    DBHOST: contains the name of the database host. In Windows NT environment, this is the TCP/IP host name. The following is an example in Unix:
    DEV/dbname = DEV
    DEV/dbhost = sap9f
    DEV/r3transpath = /usr/sap/DEV/SYS/exe/run/R3trans
    For TP to control 'start and stop' command files and the database in R/3, the following important parameters are specified in TPPARAM:
    Parameters for the tp function "PUT": LOCK_EU (boolean): the default value is "TRUE". Though from version 3.1 onward the tp put command is seldom used in the CTS process, it is still important to know how this parameter works. When "tp put" is used, it changes the system change option. If the parameter is set to "FALSE", nothing gets changed. If the parameter is set to "TRUE", the system change option is set to "Objects cannot be changed" at the beginning of the call, and gets changed back to its previous value at the end of the call. The "tp put" command will give the exact status of the locking mechanism.
    LOCKUSER (used as a boolean value): The default value is "TRUE". This parameter is about user login while the tp put call is executed. If this parameter is set to "FALSE", no locking mechanism for the users takes effect. If this parameter is defined as "TRUE", then a flag is set at the database level so that only DDIC and SAP* can log on to the system. Users that have already logged on are not affected (this is a reason for activating the parameters STARTSAP and STOPSAP). The flag is removed at the end of the call, and all the users can log on to the SAP R/3 System again.
    STARTSAP: The default value is " ", or "PROMPT" for Windows NT. This parameter is used by TP to start an R/3 System. It is not necessary for the clients to have tp start and stop the R/3 system.
    STOPSAP: The default value is " ", or "PROMPT" for Windows NT. TP uses this parameter to stop an R/3 System.
    STARTDB: The default value is " ". TP uses the value of this parameter to start the database of an R/3 System.
    The parameter is not active under Windows NT.
    STOPDB: The default value is " ". TP uses the value of this parameter to stop the database of an R/3 System.
    This parameter is not active under Windows NT.
    The above parameters in UNIX can be used as following:
    STARTSAP = startsap R3
    STOPSAP = stopsap R3
    STARTDB = startsap db
    STOPDB = stopsap db
    In Windows NT:
    STARTSAP =
    $(SAPGLOBALHOST)\sapmnt\$(system)\sys\exe\run\startsap.exe
    R3 <SID> <HOST NAME> <START PROFILE>
    STOPSAP =
    $(SAPGLOBALHOST)\sapmnt\$(system)\sys\exe\run\stopsap.exe
    R3 <SID> <HOST NAME> <INSTANCE> <PROFILE PATH + Instance profile>
    The parameters STARTDB and STOPDB are not active under Windows NT.
    Parameters for the tp function "CLEAROLD":
    DATALIFETIME (numeric): The default value is "200". When a data file has reached a minimum age, it is moved to the olddata subdirectory by the calls tp check all and tp clearold all. The life span of the data files in the data subdirectory can be set in days with this parameter.
    OLDDATALIFETIME (numeric): The default value is "365". When a file located in the olddata subdirectory is no longer needed for further actions of the transport system and has reached a minimum age, it is removed by the calls tp check all and tp clearold all. The minimum age in days can be set with this parameter.
    COFILELIFETIME (numeric): The default value is "365". This parameter is used just like the DATALIFETIME parameter.
    LOGLIFETIME (numeric): The default value is "200". This parameter applies to the life span of the log files. When a log file in the log subdirectory is no longer needed for the transport system and has reached a minimum age, it is deleted by the calls tp check all and tp clearold all. The minimum age in days can be defined with this parameter.
    The Three Key Utilities of the CTS system (TP, R3trans and R3chop):
    TP: Earlier in this chapter we have seen the objectives of TP. The TP transport control program is a utility program that helps the user transport objects from one system to another. The TP program is the front end for the utility R3trans. TP stands for "Transports and Puts". To make TP work successfully, the CTS system needs to be correctly configured. The following steps are very important for TP to run properly.
    The transport directory /usr/sap/trans must be installed and NFS mounted to all the systems in the CTS pipe line.
    RDDIMPDP program must be running (event driven is recommended) in each client. RDDIMPDP can be scheduled in the background by executing RDDNEWPP or RDDPUTPP. Use the tp checkimpdp <sap sid> command in /usr/sap/trans/bin directory as <sid>adm user to check RDDIMPDP program.
    Use the tp connect <sap sid> command in /usr/sap/trans/bin directory to see whether the tp program is connecting to the database successfully or not. To run TP command the user has to logon as <sid>adm in source or target system.
    The R/3 Systems in the CTS pipeline must have different names.
    The Global CTS Parameter File TPPARAM must be correctly configured.
    The source system (for the export) and the target system (for the import) must have at least two background work processes. TP always schedules C class jobs, so if all the background jobs are defined as A class jobs then there will be problems in the transport steps.
    Important tips: It is always better to have an up-to-date TP version installed in your system. A user can FTP a current version of TP from SAP's SAPSERV4. Though R3trans and other utility programs can be used to do the transport, it is recommended to use TP whenever possible for the following reasons:
    The exports and imports are done separately using TP program. For example: when a transport is released from the system, the objects are exported from the source database to the operating system and then the import phase starts to transport those objects to the target system.
    TP takes care of the order of the objects. The order, that was followed to export the objects; the same order will be followed to import them to the target database.
    The TP command processes all change requests or transports in the SAP system buffer that have not yet been imported successfully. All the import steps are executed automatically after TP calls R3trans program to execute the following necessary steps:
    Dictionary Import: ABAP/4 dictionary objects will be imported in this step.
    Dictionary Activation: Name tabs or runtime descriptions will be written inactively. The R/3 system keeps running until the activation phase is complete. The enqueue modules are the exceptions in the running phase. After the activation of new dictionary structure the new actions are decided to get the runtime objects to the target system.
    Structure conversion: If necessary the table structure is changed in this phase.
    Move Nametabs: The new ABAP/4 Dictionary runtime objects which were inactive up to now are moved into the active runtime environment in this process. The database structures are adjusted accordingly. From the first step to the main import step, inconsistencies can occur in the R/3 system. After the main import phase all the inconsistencies can be resolved.
    Main import with R3trans: All the data are imported completely and the system comes to a consistent state.
    Activation of enqueue-objects: The enqueue-objects cannot be activated in the same way as the objects of the ABAP/4 Dictionary, so they have to be activated after the main import in this step. They are then used directly in the running system.
    Structure Conversion of match codes, Import application defined objects, versioning and execution of user defined activities are some of the steps after activation of enqueue-objects. The next step is generation of ABAP/4 programs and screens, where all the programs and screens associated with the change request are generated. When all the import steps are completed successfully, the transport request is removed from the import buffer.
    It is recommended by SAP to schedule regular periods for imports into the target system (e.g. daily, weekly or monthly). Shorter periods between imports are not advisable. The transport to production should be done in off hours, when the users are not working.
    TP can be started with different parameters. The "tp help" command can generate a short description of how to use the command.
    The following are the some important commands of TP:
    For export:
    tp export <change request>: The complete objects in the request from the source system will be transported. This command is also used by the SAP system when it releases a request.
    tp r3e <change request>: R3trans export of one transport request.
    tp sde <change request>: Application defined objects in one transport request can be exported.
    tp tst <change request> <SAP system >: The test import for transport request can be done using this command.
    tp createinfo <change request>: This command creates an information file; this is done automatically during the export.
    tp verse <request>: This command creates versions of the objects in the specified request.
    To Check the transport buffer, global parameter file and change requests:
    tp showbuffer <sid>: Shows all the change requests ready to be imported to the target system.
    tp count <sid>: Using this command users can find out the number of requests in the buffer waiting for import.
    tp go <sid>: This command shows the environment variables needed for the connection to the database of the <sid> or target system.
    tp showparams <sid>: Shows all the values of modifiable tp parameters in the global parameter file. The default value is shown for parameters that have not been set explicitly.
    To import the change requests or transports:
    tp addtobuffer <request>.<sid>: If a change request is not in the buffer then this command is used to add it to the buffer, before the import step starts.
    tp import all <sid>: This command imports all the change requests from the buffer to the target system.
    tp put <sid>: The objective of this command is the same as "tp import all <sid>", but this command locks the system. This command also starts and stops the SAP system, if the startsap and stopsap parameters are not set to " ".
    tp import <change request> <sid>: To import a single request from the source system to target system.
    tp r3h <change request>| all <sid>: Using this command user can import the dictionary structures of one transport or all the transport from the buffer.
    tp act <change request>|all <sid>: This command activates all the dictionary objects in the change request.
    tp r3i <change request> | all <sid>: This command imports everything except the dictionary structures of one transport, or of all the transports from the buffer.
    tp sdi <change request>|all <sid>: Import application-defined objects.
    tp gen <change request>|all <sid>: Screen and reports are generated using this command.
    tp mvntabs <sid>: All inactive nametabs will be activated with this command.
    tp mea <change request>|all <sid>: This command will activate the enqueue modules in the change request.
    When you call this command, note the resulting changes to the import sequence.
    Additional tp utility options:
    tp check <sid>|all (data|cofiles|log|sapnames|verbose): This command is used to find all the files in the transport directory that are not waiting for imports and have exceeded the minimum time specified by the COFILELIFETIME, LOGFILELIFETIME, OLDDATALIFETIME and DATALIFETIME parameters of the TPPARAM file.
    tp delfrombuffer <request>.<sid>: This command removes a single change request from the buffer. In case of TMS, the request will be deleted from the import queue.
    tp setstopmark <sid>: A flag is set to the list of requests ready for import into the target system. When the user uses the command tp import all <sapsid> and tp put <sapsid>, the requests in front of this mark are only processed. After all the requests in front of the mark have been imported successfully, the mark is deleted.
    tp delstopmark <sid>: This command deletes the stop mark from the buffer if it exists.
    tp cleanbuffer <sapsid>: Removes all the change requests from the buffer that are ready for the import into the target system.
    tp locksys <sid>: This command locks the system for all the users except SAP* and DDIC. The users that have already logged on are not affected by the call.
    tp unlocksys <sid>: This command unlocks the system for all the users.
    tp lock_eu <sid>: This command temporarily sets the system change option to "system cannot be changed".
    tp unlock_eu <sid>: This command unlocks the system for all the changes.
    tp backupall <sid>: This command starts a complete backup using R3trans command. It uses /usr/sap/trans/backup directory for the backup.
    tp backup delta <sid>: Uses R3trans for a delta backup into /usr/sap/trans/backup directory.
    tp sapstart <sid>: To start the R/3 system.
    tp stopsap <sid>: To stop the R/3 system.
    tp dbstart <sid>: To start the database.
    tp dbstop <sid>: To stop the database.
    Unconditional modes for TP: Unconditional modes are used with the TP program and are intended for the special actions needed in the transport steps. Using an unconditional mode, the user can override the rules defined by the Workbench Organizer. Unconditional modes should only be used when needed, otherwise they might create problems for the R/3 system database. An unconditional mode is specified after the letter "U" in the TP command. An unconditional mode can be a digit between 0 and 9, and each has its own meaning. The following is an example of an import using unconditional modes:
    tp import devk903456 qas client100 U12468
    0: Called an overtaker; a change request can be imported from the buffer without deleting it, and then unconditional mode 1 is used to allow another import in the correct location.
    1: If U1 is used with the export then it ignores the correct status of the command file; and if it is used with import then it lets the user import the same change request again.
    2: When used with tp export, it instructs the program not to expand the selection with TRDIR brackets. If used in the tp import phase, it overwrites the originals.
    3: When used with tp import, it overwrites the system-dependant objects.
    5: During the import to the consolidation system it permits the source systems other than the integration system.
    6: When used in import phase, it helps to overwrite objects in unconfirmed repairs.
    8: During import phase it ignores the limitations caused by the table classification.
    9: During import it ignores that the system is locked for this kind of transport.
    R3trans: TP uses the R3trans program to transport data from one system to another in the CTS pipeline. An efficient basis administrator can use R3trans directly to export and import data from and into any SAP system. Using this utility, transports between different databases and operating systems can be done without any problems. Different versions of R3trans are fully compatible with each other and can be used for export and import. The basis administrator has to be careful when using R3trans with different release levels of the R/3 software; logical inconsistencies might occur if an up-to-date R3trans is not used for the current version of the R/3 system.
    The syntax for using the control file is following:
    R3trans [<options>] <control file> (several options used at the same time; at least one option must be there)
    For example: R3trans -u 1 -w test.log test
    In the above example an unconditional mode is used, a log file "test.log" is used to get the log result, and a control file "test" gives the instructions for R3trans to follow. The user needs to log on as <sid>adm to execute R3trans.
    The following options are available for the R3trans program:
    R3trans -d : This command is used to check the database connection .
    R3trans -u <int>: Unconditional mode can be used as we have seen in the above example.
    R3trans -v : This is used for verbose mode. It writes additional details to the log file
    R3trans -i <file>: This command directly imports data from data file without a control file.
    R3trans -l <file>: This provides output of a table of contents to the log file.
    R3trans -n : This option provides a brief information about new features of R3trans.
    R3trans -t: This option is used for the test mode. All modifications in the database are rolled back.
    R3trans -c <f1> [<f2>]: This command is used for conversion. The <f1> file will be copied to <f2> file after executing a character set conversion to the local character set.
    Important tips: Do not confuse a backup taken using R3trans with a database backup. The backups taken using R3trans are logical backups of objects. In case something happens to the SAP system, these backups cannot be used for recovery. R3trans backups can only be used to restore a copy of a particular object that has been damaged or lost by the user.
    R3trans -w <file>: As we have seen in the above example this option can be used to write to a log file. If no file is mentioned then trans.log is default directory for the log.
    R3trans also can be used for the database backup.
    R3trans -ba: This command is used for a complete backup. We will see in the next paragraph how to use
    the control file for the backup.
    R3trans -bd: This command is used for a delta backup if the user does not want a complete backup.
    R3trans -bi: This option will display backup information.
    The following are some of the examples of control files:
    We have already learned how to use a command for the logical backup of the objects in the database. To get a complete backup the following example control file can be used.
    backup all
    file = /usr/sap/trans/backall
    The option "file = ..." is the name of the directory into which the data files are to be written. If you are taking a complete backup of the DEV system then the backup file is going to look like "DEV.A000.bck" the next complete
    Regards
    divya

  • User group [$CLASS] not an Org level field in IA, whereas it is in DA

    Hi All,
    We have an authorization problem that we faced while SAP Upgrade. In the development system while we upgraded all the roles, we did not face any issue. User group field [$CLASS] was actually an org level field in that system and the roles were upgraded based on that condition.
    When the Integration system was up and the upgraded roles were transported to IA, we noticed that they ended with a warning. On checking the logs we found out that User group [CLASS] actually was not an org level value in the Integration system, whereas it was an org level field in the development system.
    Can someone tell me the reason why it is different? Are there any settings we have to change to make User group an org level field in IA? Thanks a lot for your help.
    Vijith

    Hello, I ran into this also and found these notes to explain why this is suddenly an org value and how to fix it:
    http://search.sap.com/notes?id=0001580048
    http://search.sap.com/notes?id=0001739055
    Basically, GRC 10 add-on makes the user group an org value and the note instructs how to undo this manually, but there is a required pre-requisite because you cannot modify this for SAP delivered fields normally.
    You know what else would be nice.... maybe there's a note that explains why Account Type is an org value.  It REALLY should not be, IMO.

  • Classes and Objects

    I am new to object-oriented programming. I think I am a little bit confused about the basic building blocks of object-oriented programming. My questions are:
    1) "What actually is a class? Is it a user-defined data type?"
    2) "What actually is an object? Is an object another form of a class, or just a piece of a class? Is a group of objects going to construct a class?"
    3) "Is an object going to contain references to the data members only, or to both the data members and the functions as well? If it contains both, then what is the difference between a class and an object?"
    4) "Is a constructor simply a function, or something else?"

    929663 wrote:
    I am new to object-oriented programming and a little bit confused about its basic building blocks.
    1) "What actually is a class? Is it a user-defined data type?"
    More or less. You can find a more complete definition in any text or tutorial, or with a simple google search.
    2) "What actually is an object? Is an object another form of a class, or just a piece of a class? Is a group of objects going to construct a class?"
    Again: text, tutorial, google, or any combination.
    3) "Is an object going to contain references to the data members only, or to both the data members and the functions as well? If it contains both, then what is the difference between a class and an object?"
    Data only. But the object also knows what class it is and where to find its class definition that tells it what methods it has.
    4) "Is a constructor simply a function, or something else?"
    It's not a method. It's similar to a method, in that it's a named grouping of operations, but it doesn't have a return type, and it can only be called at certain points in your code. Its job is specifically to get a newly created object into a valid initial state.
    http://docs.oracle.com/javase/tutorial/ --> http://docs.oracle.com/javase/tutorial/java/index.html --> http://docs.oracle.com/javase/tutorial/java/javaOO/index.html
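    To make the distinction concrete, here is a small, self-contained Java sketch; the class name Point and its fields are invented purely for illustration. The class is the blueprint, each object created with new is a separate instance holding its own data, and the constructor runs once to put a newly created object into a valid initial state.
    // Point is a class: a blueprint describing data (fields) and behavior (methods).
    public class Point {
        private final int x; // data members: each object holds its own copy
        private final int y;
        // Constructor: not an ordinary method; it initializes a newly created object.
        public Point(int x, int y) {
            this.x = x;
            this.y = y;
        }
        // A method: behavior defined once in the class, shared by all objects.
        public double distanceTo(Point other) {
            int dx = x - other.x;
            int dy = y - other.y;
            return Math.sqrt((double) (dx * dx + dy * dy));
        }
        public static void main(String[] args) {
            // p1 and p2 are objects: two separate instances of the same class.
            Point p1 = new Point(0, 0);
            Point p2 = new Point(3, 4);
            System.out.println(p1.distanceTo(p2)); // prints 5.0
        }
    }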

  • Xs:group refs and "ORA-30936: Maximum number (1) of XML nodes exceeded'

    We registered a subset of the IBM DITA schemas in Oracle XDB. These schemas contain a lot of xs:group definitions where references to these groups contain "minOccurs" and "maxOccurs" attributes, for instance:
    <xs:group name="category">
    <xs:sequence>
    <xs:element ref="category"/>
    </xs:sequence>
    </xs:group>
    <xs:complexType name="metadata.class">
    <xs:sequence>
    <xs:group ref="audience" minOccurs="0" maxOccurs="unbounded"/>
    <xs:group ref="category" minOccurs="0" maxOccurs="unbounded"/>
    <xs:group ref="keywords" minOccurs="0" maxOccurs="unbounded"/>
    <xs:group ref="prodinfo" minOccurs="0" maxOccurs="unbounded"/>
    <xs:group ref="othermeta" minOccurs="0" maxOccurs="unbounded"/>
    </xs:sequence>
    <xs:attribute name="mapkeyref" type="xs:string"/>
    <xs:attributeGroup ref="global-atts"/>
    <xs:attribute ref="class" default="- topic/metadata "/>
    </xs:complexType>
    When we create an instance document that contains more than one "category" element under the "metadata" element, like:
    <metadata>
    <audience type="purchaser" othertype="" job="using" otherjob="" experiencelevel="general"/>
    <category/>
    <category/>
    </metadata>
    we get the error: ORA-30936: Maximum number (1) of 'category' XML node elements exceeded
    This error is not consistent with the schema and XML parsers like Xerces and the Oracle "XDK parser" think the instance is fine.
    When we skip the reference to the group and reference the element definition "category" directly the problem is gone:
    <xs:complexType name="metadata.class">
    <xs:sequence>
    <xs:group ref="audience" minOccurs="0" maxOccurs="unbounded"/>
    <xs:element ref="category" minOccurs="0" maxOccurs="unbounded"/>
    <xs:group ref="keywords" minOccurs="0" maxOccurs="unbounded"/>
    <xs:group ref="prodinfo" minOccurs="0" maxOccurs="unbounded"/>
    <xs:group ref="othermeta" minOccurs="0" maxOccurs="unbounded"/>
    </xs:sequence>
    <xs:attribute name="mapkeyref" type="xs:string"/>
    <xs:attributeGroup ref="global-atts"/>
    <xs:attribute ref="class" default="- topic/metadata "/>
    </xs:complexType>
    In this case the problems for the other group references like "audience" remain.
    It looks like Oracle XDB doesn't honor the cardinality attributes "minOccurs" and "maxOccurs" on xs:group elements that are references. Has anyone experienced similar problems?
    Thanks,
    Maarten

    This sounds like bug 5204107. You'll need to open a TAR with Oracle Support if you need a fix for this.
    It happens when a group is used once without a maxOccurs and then later with a maxOccurs.
    The workaround is to 'in-line' the group before registering the XML Schema. This can be done as follows (assuming the group definition and reference are in the same XML Schema):
    procedure expandGroup(xmlSchema in out xmltype, groupName varchar2, xsdDirectory varchar2)
    is
      -- NAMESPACES and xdb_namespaces.XMLSCHEMA_PREFIX_XSD are assumed to be defined
      -- elsewhere in the package; they supply the xmlns:xsd namespace mapping used by
      -- the XPath expressions below.
      xsdSchemaPath varchar2(512);
      groupModel    xmltype;
      sequenceModel xmltype := xmltype('<xsd:sequence ' || xdb_namespaces.XMLSCHEMA_PREFIX_XSD || '><xsd:sequence/></xsd:sequence>');
      maxOccursValue binary_integer;
    begin
      -- Only do anything if the schema actually references the group
      if xmlSchema.existsNode('/xsd:schema//xsd:group[@ref="' || groupName || '"]', NAMESPACES) = 1 then
        -- Find the group definition, either in this schema or in another schema under xsdDirectory
        if xmlSchema.existsNode('/xsd:schema/xsd:group[@name="' || groupName || '"]', NAMESPACES) = 1 then
          select extract(xmlSchema, '/xsd:schema/xsd:group[@name="' || groupName || '"]', NAMESPACES)
            into groupModel
            from dual;
        else
          dbms_output.put_line('xsdDirectory = ' || xsdDirectory);
          select PATH,
                 extract(CONTENTS, '/xsd:schema/xsd:group[@name="' || groupName || '"]', NAMESPACES)
            into xsdSchemaPath, groupModel
            from (select PATH, xdburitype(PATH).getXML() CONTENTS
                    from PATH_VIEW
                   where under_path(RES, xsdDirectory) = 1)
           where existsNode(CONTENTS, '/xsd:schema/xsd:group[@name="' || groupName || '"]', NAMESPACES) = 1;
          dbms_output.put_line('Resolved ' || groupModel.extract('/xsd:group/@name', NAMESPACES).getStringVal() || ' in Schema ' || xsdSchemaPath);
        end if;
        -- Drop any annotation so only the content model is in-lined
        if groupModel.existsNode('/xsd:group/xsd:annotation', NAMESPACES) = 1 then
          select deleteXML(groupModel, '/xsd:group/xsd:annotation', NAMESPACES)
            into groupModel
            from dual;
        end if;
        -- Create a sequence that can be placed in-line in the XML Schema to replace the <group ref=""/>
        groupModel := groupModel.extract('/xsd:group/*', NAMESPACES);
        select updateXML(sequenceModel, '/xsd:sequence/xsd:sequence', groupModel, NAMESPACES)
          into sequenceModel
          from dual;
        -- Replace references that carry no maxOccurs
        if xmlSchema.existsNode('/xsd:schema//xsd:group[@ref="' || groupName || '" and not(@maxOccurs)]', NAMESPACES) = 1 then
          select updateXML(xmlSchema,
                           '/xsd:schema//xsd:group[@ref="' || groupName || '" and not(@maxOccurs)]',
                           sequenceModel,
                           NAMESPACES)
            into xmlSchema
            from dual;
        end if;
        -- Replace references with maxOccurs="unbounded"
        select insertChildXML(sequenceModel, '/xsd:sequence', '@maxOccurs', 'unbounded', NAMESPACES)
          into sequenceModel
          from dual;
        if xmlSchema.existsNode('/xsd:schema//xsd:group[@ref="' || groupName || '" and @maxOccurs="unbounded"]', NAMESPACES) = 1 then
          select updateXML(xmlSchema,
                           '/xsd:schema//xsd:group[@ref="' || groupName || '" and @maxOccurs="unbounded"]',
                           sequenceModel,
                           NAMESPACES)
            into xmlSchema
            from dual;
        end if;
        -- Replace any remaining references, carrying over their numeric maxOccurs values
        while xmlSchema.existsNode('/xsd:schema//xsd:group[@ref="' || groupName || '"]', NAMESPACES) = 1 loop
          maxOccursValue := xmlSchema.extract('/xsd:schema//xsd:group[@ref="' || groupName || '"]/@maxOccurs', NAMESPACES).getNumberVal();
          select updateXML(sequenceModel, '/xsd:sequence/@maxOccurs', maxOccursValue, NAMESPACES)
            into sequenceModel
            from dual;
          select updateXML(xmlSchema,
                           '/xsd:schema//xsd:group[@ref="' || groupName || '" and @maxOccurs="' || maxOccursValue || '"]',
                           sequenceModel,
                           NAMESPACES)
            into xmlSchema
            from dual;
        end loop;
      end if;
    end;

  • Running SCOM ACS in multiple customer environments from the same SCOM management group?

    We are currently monitoring multiple customer environments from one SCOM management group and are looking at the possibility of using ACS for auditing.
    Is this technically possible? Does the ACS collector service need to sit on the customer side? Could it be installed on the same server acting as the SCOM gateway server?

    Hi,
    The number of ACS forwarders that can be supported by a single ACS collector and ACS database can vary, depending on many factors, such as the number of events that your audit policy generates and the role of the computers that the ACS forwarders monitor.
    If your environment contains too many ACS forwarders for a single ACS collector, you can install more than one ACS collector. Each ACS collector must have its own ACS database.
    An ACS collector must be installed on a computer running the Operations Manager management server role, and for security reasons it must also be a member of an Active Directory domain.
    For more details, please refer to the article below:
    http://technet.microsoft.com/en-us/library/hh212908.aspx
    Regards,
    Yan Li

  • Need the default configuration of class-maps and policies for WAAS version 5.3.1.20

    Does someone have the default configuration CLI of class-maps and policies for WAAS version 5.3.1.20? I need it to compare with the configuration that I have in the network, to verify what has been changed.

    I'm sure you figured this out already since it's been 9 months. But just in case someone needs this - you can restore the default policies if they don't show up. Go to Device Groups -> AllWAASGroup and edit; there you will see a "Restore default Optimization Policies" option.

  • Startup Classes and JMS - Suggestions Please!

    I'm in serious need of having several resources initialized before beans start handling requests.
    I tried implementing a WebLogic Startup Class, and it works fine - as long as it's the first thing to run! The problem is, when my Message Driven Beans deploy, if there are messages waiting for them in their durable subscriptions, they immediately start processing... then about 30 seconds later WebLogic (6.0sp1) gets around to starting my startup class. If I put code in each MDB that kicks off the initialization when they are invoked, I still run into problems, because my initialization takes a LONG time (more than 2 minutes) - so I end up with lots of transaction rollbacks... which are very annoying, clutter up the log files, and scare customers of the product.
    Is there any way to make a startup class/servlet/something that runs and completes before any other processing occurs?
    Thanks,
    James

    Yes, the startup servlet has the same problem - it doesn't 'start up' until after JMS messages are already being delivered. :( Aside from this, there are class loader issues - servlet space and EJB space are not the same...
    Thanks though,
    James
    "minjiang" <[email protected]> wrote in message news:[email protected]...
    Hi, did you ever try a startup servlet? Not a startup class?
    mj
    James House wrote:
    The only problem with creating a base class to extend is the fact that Java only supports single inheritance -- and I'm already inheriting...
    I've been involved with many projects that use WLServer, and in almost every one of them there has been a need for a startup class that fires before the server starts handling requests... strange that I'd be the only one to need this, when the need has recurred so often.
    James
    "Raja Mukherjee" <[email protected]> wrote in message news:[email protected]...
    James,
    If you have common initialization tasks to be shared by multiple MDBs, I would create an abstract class (a.k.a. a BeanAdapter class) where you can have all your initialization logic, and have your MDBs extend from it.
    I am not convinced that the Startup class needs to run first. In fact, I have the same view that the Startup class should run last. My only wishlist item for startup classes was that I should be able to specify order, which is addressed in 6.1.
    I am also getting the feeling from different posts that MDB deployment will have retry logic in 6.1, which I am beginning to look into. Check (or post) in the JMS newsgroup.
    .raja
    "James House" <[email protected]> wrote in message news:[email protected]...
    Thanks for the help... I like the pattern you pointed me to better than anything else... but in all cases (your method, Gene's, and what I'm currently doing) I still have to put some code in every MDB that I deploy... :(
    Put in a good word for me there at BEA and convince the appropriate developer that startup classes should run first!
    James
    "Raja Mukherjee" <[email protected]> wrote in message news:[email protected]...
    James,
    There are several ways to solve your problem. I normally use setMessageDrivenContext to do all my initialization. There are two types of initialization that I have performed here: first, reading the configuration file, and then loading some utility classes in a specific order. The problem with the second was that you have to use a synchronized block with HotSpot 2.0 to keep the order, which is ok. I don't use a static block to do the initialization; instead I use an init() method. Hopefully you get the idea.
    Recently, Gene Chuang created a pattern which essentially does the same, and I liked it because it was a nicer way of doing what I needed to do. I have changed all my examples to customers to use the new pattern. You can find it at http://theserverside.com/patterns/thread.jsp?thread_id=7270. The only thing I do not use from this pattern is the initializeEveryContextSwap() method. I am not convinced yet that I would need it (of course that might change over time).
    Hope this helps, and thanks Gene.
    .raja
    "James House" <[email protected]> wrote in message news:[email protected]...
    Ok... here's some more detail:
    The application is largely JMS based, and most of my Session EJBs are invoked only by Message Driven Beans.
    I have a large set of properties that need to be read from a config file and stored somewhere "globally". I also have a number of utilities that need to get "warmed up" before I start doing any real processing (before I start receiving messages from the JMS Topics). These utilities take a long time to warm up (a long time being about 45-60 seconds) because they are loading hundreds of classes and creating various connections to external resources.
    Currently I'm creating a Singleton object that reads the configuration file name from an environment property; it then parses the file and starts configuring all of these utilities. Since the "Startup Class" didn't work (WebLogic invokes it after I'm already receiving messages), I put code at the beginning of all of my MDBs' onMessage() methods that calls the singleton's getInstance() method - which synchronizes on a lock object and does all of its work.
    I don't like this solution because:
    1- I have to put code in EVERY message-driven bean that I create - if I forget one, everything is broken.
    2- I have to increase the transaction timeout of the entire server to be over 60 seconds, since the beans hang that long while the configuration is happening.
    It seems very obvious that a "Startup Class" should be invoked after the server has come completely up, but before it starts listening for requests -- isn't the whole point of a "startup class" to get things ready that need to be done as soon as the server comes up? But alas, the person who designed this at BEA apparently didn't agree with me on this point!
    Any suggestion on better solutions would be greatly appreciated.
    James
    "Raja Mukherjee" <[email protected]> wrote in message news:[email protected]...
    You can do it this way, but I would not recommend it unless that's the only way to attack the problem at hand. But that's just me.
    I have seen this problem with multiple clients and in most cases there is a better way to handle it. If James gives us a little more information on what type of configuration he is talking about and some background on his application, we as a group can think and may be able to come up with some ideas.
    .raja
    "Joel Nylund" <[email protected]> wrote in message news:[email protected]...
    You could wrap the starting of WebLogic in your own class and do initialization there. You have to be careful because of the way WebLogic classloaders work, but you may be able to do what you want. WebLogic is just a Java class, so you can start your class, then once you're done initializing, just call weblogic.Server.main.
    -Joel
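    A minimal sketch of Joel's wrapper idea follows. It assumes the expensive setup is collected in a hypothetical MyAppBootstrap.initialize() helper and that weblogic.jar is on the classpath; whether the class-loader caveats he mentions apply would still need to be tested in the target environment.
    // Hypothetical wrapper entry point: run the application's one-time
    // initialization first, then hand control to the stock WebLogic entry point.
    public class ServerWrapper {
        public static void main(String[] args) throws Exception {
            // Expensive setup happens before the server (and its MDBs) start.
            MyAppBootstrap.initialize();   // hypothetical helper holding the init logic
            // Start WebLogic exactly as the standard startup scripts would.
            weblogic.Server.main(args);
        }
    }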
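    For comparison, here is a rough sketch of the guard James describes placing at the top of each onMessage(), with Raja's setMessageDrivenContext() variant shown as well. The AppConfig and ExampleMDB names are invented for illustration; the sketch assumes the EJB 2.0 javax.ejb and javax.jms APIs on the classpath and is not the thread's actual code.
    import javax.ejb.MessageDrivenBean;
    import javax.ejb.MessageDrivenContext;
    import javax.jms.Message;
    import javax.jms.MessageListener;

    // Illustrative singleton that performs the expensive one-time setup.
    class AppConfig {
        private static AppConfig instance;

        private AppConfig() {
            // read the config file, warm up utilities, open external connections...
        }

        // Synchronized so the first caller does the work and later callers wait for it.
        static synchronized AppConfig getInstance() {
            if (instance == null) {
                instance = new AppConfig();
            }
            return instance;
        }
    }

    // The guard every MDB has to repeat under this approach, which is
    // exactly the duplication James objects to.
    public class ExampleMDB implements MessageDrivenBean, MessageListener {
        public void setMessageDrivenContext(MessageDrivenContext ctx) {
            AppConfig.getInstance();   // Raja's variant: initialize here rather than in onMessage()
        }

        public void ejbCreate() {}
        public void ejbRemove() {}

        public void onMessage(Message msg) {
            AppConfig.getInstance();   // blocks until initialization has completed
            // ... actual message handling ...
        }
    }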

  • Extending Image class and addChild errors

    I am attempting to extend the Image class to allow us to place highlights over an image that are stored in an XML file. 
    I'm extending the class and adding the highlights to a sprite.  I use the addChild() method to add the sprite to the Image class, but I get the error "addChild() is not available in this class. Instead, use addElement() or modify the skin, if you have one."
    I tried to use addElement(), but that gives me compile errors.
    Anyone have any idea what to do? I'm trying to copy/paste the class code here, but it's not working (no idea how to make it work).

    Update to my last posting:
    Since the Primitive class extends Group, I just added a TransformGroup to it. To this TransformGroup object I add 4 TransformGroup objects, and it all works fine. I was able to rotate all 4 triangles at different rates.
    This is how it looks now:
    http://www.tzi.de/~fayyaz/2.bmp
    Now I just want to shift the centre of rotation of all the triangles to the center and rotate the top and bottom triangles along the x-axis.
    Assume I just have to add these operations to a TransformGroup above a single shape, which is a single triangle. How do I do it? I know the answer is really simple, but it beats me.
    Thanks in advance.
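    The usual way to rotate a shape about a point other than the origin is to compose translate, rotate, and translate-back into a single Transform3D and put that in a TransformGroup above the shape. Here is a hedged Java 3D sketch along those lines; the helper class and method names are made up, and it assumes the javax.media.j3d / javax.vecmath packages are available.
    import javax.media.j3d.Node;
    import javax.media.j3d.Transform3D;
    import javax.media.j3d.TransformGroup;
    import javax.vecmath.Vector3d;

    public class RotationHelper {

        // Wraps 'child' in a TransformGroup whose transform rotates it around
        // the x-axis about 'center' rather than about the scene origin.
        public static TransformGroup rotateAboutCenter(Node child, Vector3d center, double angleRad) {
            Transform3D toCenter = new Transform3D();
            toCenter.setTranslation(center);                       // T: move pivot to 'center'

            Transform3D rotation = new Transform3D();
            rotation.rotX(angleRad);                               // R: rotate about the x-axis

            Transform3D back = new Transform3D();
            back.setTranslation(new Vector3d(-center.x, -center.y, -center.z)); // inverse of T

            // Composite = T * R * T-inverse, so the rotation pivots around 'center'.
            Transform3D composite = new Transform3D(toCenter);
            composite.mul(rotation);
            composite.mul(back);

            TransformGroup tg = new TransformGroup(composite);
            tg.addChild(child);
            return tg;
        }
    }
    To animate the rotation rather than apply it once, the TransformGroup would also need ALLOW_TRANSFORM_WRITE set and the composite transform recomputed each frame.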

  • Link Recipe Group counter and Operation

    Hi,
    I need to develop a report where the selection screen will have "recipe group", "group counter" and "plant", and I want to extract operation details for that group and counter.
    Please tell me the table / field name where I can see Group, Group Counter and Operation details together. (In PLPO the group counter is not available, and other fields like "Group of the referenced task list" and "Refer. group counter" appear blank; the value in the field "node" is not the value of "counter".)
    Also let me know the table where the operation classification (KLAKZ) indicator is maintained.
    I am trying to extract operation classification (class type 019) values for a recipe group and counter.
    Regards,
    Abir.

    Hi,
    Please find the following tables related to routing:
    MAPL - Allocation of task lists to materials
    PLAS - Task list - selection of operations/activities
    PLFH - Task list - production resources/tools
    PLFL - Task list - sequences
    PLKO - Task list - header
    PLKZ - Task list: main header
    PLPH - Phases / suboperations
    PLPO - Task list operation / activity
    PLPR - Log collector for task lists
    PLMZ - Allocation of BOM items to operations
    Regards
    SANIL
