PRI Utilization
I have a total of three PRIs connected to my voice gateways. Two of them are configured for MGCP and one for H.323. As we aren't doing any video that I am aware of, I question the need for this PRI. Are there any handy commands that show the utilization of a PRI? Is there any other reason to need an H.323 PRI?
For MGCP PRIs you will need to pull PRI utilization from RTMT. For H.323 PRIs, you can issue "show isdn status" to see the number of active Layer 3 calls. As for a reason for one to be H.323 and the others to be MGCP, it is difficult to comment without knowing your network.
http://www.cisco.com/c/en/us/td/docs/voice_ip_comm/cucm/service/7_1_2/rtmt/RTMT/rtpmcm.html#wp1012748
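On the H.323 gateway itself, a quick check could look like the following (the hostname and call count here are illustrative only, not from your network):

```
VG#show isdn status
...
Layer 3 Status:
    4 Active Layer 3 Call(s)
VG#show isdn active
```

"show isdn active" lists each active ISDN call with the calling/called numbers and elapsed time, which gives a point-in-time view; for utilization trending over time, RTMT (or SNMP polling of the gateway) is the better tool.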
Similar Messages
-
Unable to generate CUSSM Trunk/PRI Utilization Report
Dear All,
We want to generate a voice gateway E1 utilization report from CUSSM 8.5. I have configured CUOM 8.5 and CUSM 8.5, but I am unable to get the data for the number of active calls and % utilization per PRI.
Thanks in Advance.
Hassan

Well, I have done a little reading and found out that CUSSM 8.5 gives trunk utilization, but only for MGCP-controlled gateways. Reverting back to version 1.3.
Hassan -
Hi All - Is there any way to achieve utilization of multiple PRIs in a single gateway with a CUSP deployment?
Let's say I have 5 PRIs in the same VG. If one of the PRIs goes down, is there any option to utilize the remaining 4 PRIs in the same VG?
I checked the SRND, where I can see that if a VG is overloaded or loses its connectivity to the PSTN, it will return error code 503.
In CUSP we can configure the failure response code as 503, so that CUSP will take the alternate element and route the call accordingly.
If that is the case, we are not utilizing the remaining 4 PRIs in the first VG.
Is there any way to use the remaining 4 PRIs without jumping to the alternate element in CUSP?
SIVANESAN R

Hello Sivanesan,
If you have 5 PRIs, do you have 5 different dial-peers in the voice gateway for the same destination pattern? Do you have the preference command configured on the dial-peers? Basically, if a call matches more than one outbound dial-peer, the router itself hunts through them in order, as per your configuration. If the call setup fails for some reason, the next dial-peer is attempted. It is based on the hunt order you specify.
In the voice gateway's global configuration mode, use the command below; you should see eight options (0-7). Based on your configuration, the dial-peers will be hunted accordingly. This happens within the gateway/router before any response is sent to CUSP.
VG3845(config)#dial-peer hunt ?
<0-7> Dial-peer hunting choices, listed in hunting order within each choice:
0 - Longest match in phone number, explicit preference, random selection.
1 - Longest match in phone number, explicit preference, least recent use.
2 - Explicit preference, longest match in phone number, random selection.
3 - Explicit preference, longest match in phone number, least recent use.
4 - Least recent use, longest match in phone number, explicit preference.
5 - Least recent use, explicit preference, longest match in phone number.
6 - Random selection.
7 - Least recent use.
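For illustration only (the destination pattern, port numbers, and dial-peer tags below are hypothetical), two POTS dial-peers pointing at two PRIs with explicit preference might look like:

```
dial-peer voice 101 pots
 description PRI 1 - tried first
 destination-pattern 9T
 preference 0
 port 0/0/0:23
!
dial-peer voice 102 pots
 description PRI 2 - tried if 101 fails
 destination-pattern 9T
 preference 1
 port 0/0/1:23
```

With "dial-peer hunt 2" or "3" (explicit preference first), the gateway exhausts dial-peer 101 before hunting to 102, all before any failure response is returned to CUSP.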
Regards,
Senthil -
Hi,
I have a UC560 with a single T1 PRI, and I want to know if the UC series has any monitoring tools so that I can actively monitor PRI utilization. We believe that their call flow is more than they expected, but I am unable to find a way to monitor this on my side. Any help would be appreciated.

Hi James,
The only thing I can suggest is to use SNMP and capture the call traps; you can use a free and open-source application such as FAN ("Fully Automated Nagios"), which can show you the call rate per minute/hour/day.
Consider looking at that
Cheers,
David Trad. -
Considerations on migrating from TDM PRI's to SIP Trunking?
Hello,
We are planning a migration from traditional TDM PRIs to SIP trunking for inbound toll-free and outbound LD telephone traffic. Currently, we have 7 PRIs connected to a 3945 router feeding our CUCM 8.6 system. This works fine. We are planning on making use of a 100 Mb data circuit to bring in SIP service from our carrier. This would terminate on our 3945 voice gateway. We would have roughly the same number of concurrent SIP call paths as we have PRI channels (162).
I know that the 3945 makes heavy use of DSP resources to handle the PRI traffic. Are DSP resources needed for SIP traffic as well?
Will the SIP usage (162 SIP concurrent call paths) cause similar router utilization as the PRI's (router CPU, memory, etc.)?
What are other things we should look out for in this migration?
Thank you!
Brian

1. Just to add to the excellent tip provided by George (+5), here is the capacity matrix for the 3900 gateways. Your 3945 can support 950 concurrent calls, so in terms of capacity you are well taken care of.
Number of IP-to-IP Calls per Platform (Maximum Number of Simultaneous Flow-Through Calls)
Cisco 3945E - 2500
Cisco 3925E - 2100
Cisco 3945 - 950
Cisco 3925 - 800
Cisco 2951 - 500
Cisco 2921 - 400
Cisco 2911 - 200
Cisco 2901 - 100
Cisco ASR 1004 and Cisco ASR 1006 Router Processor 2 (RP2) - 5000; 16000*
Cisco ASR 1002, ASR 1004, and ASR 1006 RP1 - 1750
Cisco AS5350XM and AS5400XM - 600
Cisco 3845 - 500
Cisco 3825 - 400
Cisco 2851 - 225
Cisco 2821 - 200
Cisco 2811 - 110
Cisco 2801 - 55
2. You should definitely make provisions for DSPs. You may need DSPs for MTP, transcoding, etc. Especially with SIP, MTP may be needed for DTMF mismatch, supplementary services, and so on.
3. One of the most important things to consider is the codec you will use for your calls. Your users are used to the PSTN (TDM). Even G.711 on your SIP calls is not quite the same as the traditional PSTN, and the difference will be noticeable; G.729 is distinctly worse. I have seen users complain about this during a deployment; G.729 can be a rude shock and they may not like it.
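For rough bandwidth sizing, here is a small sketch comparing the two codecs for the 162 concurrent calls mentioned above. It assumes 20 ms packetization over untagged Ethernet with no cRTP or VAD; the constants are common rules of thumb, not measured values:

```python
# Rough per-call bandwidth for G.711 vs G.729 over Ethernet, and totals for
# 162 concurrent calls. Assumes 20 ms packetization (50 packets/s), IPv4,
# no cRTP, no VAD.
IP_UDP_RTP = 40   # bytes of IPv4 + UDP + RTP headers per packet
ETHERNET = 18     # bytes of Ethernet header + CRC per frame (no 802.1Q tag)
PPS = 50          # packets per second at 20 ms packetization

def call_kbps(payload_bytes):
    return (payload_bytes + IP_UDP_RTP + ETHERNET) * 8 * PPS / 1000

g711 = call_kbps(160)  # 160 bytes = 20 ms of G.711 -> 87.2 kbps per call
g729 = call_kbps(20)   #  20 bytes = 20 ms of G.729 -> 31.2 kbps per call

print(round(g711 * 162 / 1000, 1))  # ~14.1 Mbps for 162 G.711 calls
print(round(g729 * 162 / 1000, 1))  # ~5.1 Mbps for 162 G.729 calls
```

Either codec fits comfortably in a 100 Mb circuit, so the codec decision can be driven by quality rather than bandwidth.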
4. You also need to consider your analogue endpoints, e.g. fax. What fax protocol does your ITSP support? We have been sued by a customer because their fax machines didn't work: the ITSP said it supported T.38 when in reality it didn't.
5. In terms of memory and CPU utilization, I would expect it to be lower for an IP-to-IP call. In an IP-to-TDM call, your router is constantly encoding the IP payload (codec frames) to TDM via your DSPs for transmission to the PSTN; that won't be happening any more. -
Syslogd Consuming more CPU utilization in Solaris 10
Hi All,
The syslogd process is consuming high CPU on Solaris 10. Kindly help with how to reduce this CPU utilization.
Regards
Siva

Hi Robert,
Both are the same architecture: x86.
The following is the prstat output from the affected server. Please note that one of the mount points on this server is on ZFS.
prstat -L
   PID USERNAME  SIZE   RSS STATE  PRI NICE      TIME  CPU PROCESS/LWPID
 26092 root     3933M 3930M cpu1    60    0  29:00:43  22% syslogd/56
 26092 root     3933M 3930M run     30    0 289:47:33  12% syslogd/18
 26092 root     3933M 3930M run     40    0 272:31:05  11% syslogd/22
 26092 root     3933M 3930M run     22    0  14:47:16 9.7% syslogd/65
 26092 root     3933M 3930M run     42    0  14:43:46 9.7% syslogd/63
 26092 root     3933M 3930M run     31    0  14:40:42 9.6% syslogd/66
 26092 root     3933M 3930M sleep   40    0 152:45:42 5.9% syslogd/21
 26092 root     3933M 3930M cpu0    53    0   6:41:58 4.1% syslogd/58
 26092 root     3933M 3930M run     52    0   6:23:13 4.0% syslogd/57
 26092 root     3933M 3930M sleep   43    0   6:29:21 3.9% syslogd/59
 26092 root     3933M 3930M sleep   52    0   5:55:14 3.6% syslogd/71
Moreover, we are continuously receiving the error message below in /var/adm/messages; we don't know where in the syslog server the error arises.
syslogd: malloc failed: dropping message from remote: Not enough space
PRIVILEGE :[4] 'NONE'
-
Sawserver memory utilization in OBIEE 10g
We recently merged two of our OBI production environments into a single production environment, and as expected we can see a significant increase in the memory utilization of the sawserver.
The virtual bytes of sawserver hit around 2.7 GB and the working set hits around 2.55 GB. As 3 GB is the maximum limit for sawserver utilization, we are worried this could lead to a crash, though we have not had a crash yet.
The OBIEE version is 10.1.3.4.1 and it is running on Windows 2k3 Enterprise Edition SP2 and 16 GB RAM.
I would need to know if there is any possibility of decreasing the sawserver memory utilization, just to avoid any crashes.

At the OS level it is showing the following result:
# swapinfo -mat
             Mb     Mb     Mb  PCT  START/     Mb
TYPE      AVAIL   USED   FREE USED   LIMIT RESERVE PRI NAME
dev        4096     52   4044   1%       0       -   1 /dev/vg00/lvol2
reserve       -   4044  -4044
memory     8172   3458   4714  42%
total     12268   7554   4714  62%       -       0   -
SQL> select * from v$sga_target_advice;

  SGA_SIZE SGA_SIZE_FACTOR ESTD_DB_TIME ESTD_DB_TIME_FACTOR ESTD_PHYSICAL_READS
      3552               1       103504                   1             3296335
       888             .25       111463              1.0769             4525868
      1776              .5       107178              1.0355             3873853
      7104               2        95907               .9266             2099436
      4440            1.25       100668               .9726             2765295
      5328             1.5        98401               .9507             2442914
      6216            1.75        96166               .9291             2099436
      2664             .75       105284              1.0172             3587072

8 rows selected.
We currently have 3552 MB of SGA allocated.
Using the above query, we can say that if the SGA size were 7104 MB, we would get better performance for the current load.
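Reading the advisory output programmatically makes the conclusion explicit; here is a small sketch with the (size, DB-time-factor) pairs hard-coded from the query output above:

```python
# (sga_size_mb, estd_db_time_factor) pairs copied from v$sga_target_advice above.
rows = [
    (3552, 1.0),    (888, 1.0769),  (1776, 1.0355), (7104, 0.9266),
    (4440, 0.9726), (5328, 0.9507), (6216, 0.9291), (2664, 1.0172),
]

# A factor below 1.0 means estimated DB time would drop at that SGA size;
# the smallest factor marks the best candidate among the sampled sizes.
best_size, best_factor = min(rows, key=lambda r: r[1])
print(best_size, best_factor)  # 7104 0.9266
```

Note the advisory only samples a few candidate sizes; 7104 MB is the best of the sampled points, roughly a 7% estimated reduction in DB time versus the current 3552 MB.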
Please suggest... -
Memory utilization alway at 100%
I frequently receive alerts via EM 12c that I have reached nearly 100% of memory on my host servers. My servers have lots of RAM and have been running fine for a long time. I have tried to increase the metric thresholds so it does not warn me until it hits 99.5%, but it still sends me alerts. I am not sure what I have to configure differently for EM to not treat this as a problem.
Thanks
Andy

Just to clarify a few things: my Oracle database instances are running on two HP-UX servers. Enterprise Manager consistently shows that the memory utilization is near 100%; it never really drops, it just stays flat-lined near 100%, and every once in a while there will be a spike where it drops to 90 or 95%. I am letting AMM auto-tune the memory between PGA and SGA. I only have Enterprise Manager agents running on HP-UX.
I have Enterprise Manager 12.1.0.1 set up on a Windows 2008 R2 server. I have only had EM set up for a few months, but I believe the issue of memory showing near 100% is an old one. Again, the environment and the servers are working fine.
After doing a bit more research, to me the real problem is how Enterprise Manager looks at the memory. If I use a utility such as the tool that comes with HP-UX called "Glance", it shows the following: I have 32 GB of physical memory, System is using 7.5 GB, User is using 10.7 GB, and there is 15.1 GB of free memory. So the "Glance" utility shows my memory utilization is just fine.
FYI, memory is also shown to be healthy when looking at it with the command below.
# swapinfo -m
             Mb     Mb     Mb  PCT  START/     Mb
TYPE      AVAIL   USED   FREE USED   LIMIT RESERVE PRI NAME
dev       34816      0  34816   0%       0       -   1 /dev/vg00/lvol2
dev       34816     35  34781   0%       0       -   0 /dev/vg02/lvolsw1
reserve       -   6626  -6626
memory    31147  14588  16559  47%
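The arithmetic behind that suspicion, using the Glance figures quoted earlier (32 GB physical, 7.5 GB system, 10.7 GB user, 15.1 GB free), as a quick sketch:

```python
physical = 32.0    # GB of physical RAM (from Glance)
system_mem = 7.5   # GB used by the system
user_mem = 10.7    # GB used by user processes
free_mem = 15.1    # GB free

# Counting only system + user memory, as suggested below:
util = (system_mem + user_mem) / physical * 100
print(util)  # roughly 56.9 (percent)

# A monitor that also counts buffer/file cache as "used" memory would
# instead report something close to 100%, which may explain the EM readings.
```

So by the "system + user" definition the host is barely past half full, which matches what Glance and swapinfo report.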
It seems to me that Enterprise Manager is adding up System Memory + User Memory + Free Memory and comparing that to physical memory, when it should be taking only System Memory + User Memory and comparing it to physical memory. So the metrics it is looking at need to be modified somehow. -
Report for calculating capacity utilization and Efficency
Hi,
We are following REM (repetitive manufacturing) in our company. The production line is defined in the production version. While backflushing, the production line is determined automatically and backflushing is carried out against it.
We calculate the capacity utilization using the formula:
Capacity Utilization = (Backflushed Qty / Available Capacity) * 100.
My queries are:
1. Is there any standard report to determine the capacity utilization of a production line?
2. Is there any standard report to calculate the efficiency of a production line?
Waiting for your reply.
With regards,
Afzal

Hi Afzal,
1. You have mentioned: Available capacity = std. time per piece * no. of working hrs.
Let me explain with an example. Suppose one piece takes 10 minutes. According to your formula,
A.C. = 10 * 24 * 60 = 14,400 per day, which is not correct.
Normally, 10 min/piece means 6 pieces/hr, and for 24 hrs, 24 * 6 = 144.
So it must be: A.C. = no. of working hrs / std. time per piece.
2. You have mentioned: capacity utilized = total backflushed qty per day, which means you are calculating capacity utilization based on input material.
3. Utilization = (Available capacity / Capacity utilized) * 100.
Suppose available capacity per day = 100 and capacity utilized = 50. Then
Utilization = (100/50) * 100 = 200%, which is not correct; it should be only 50%. The ratio must be capacity utilized divided by available capacity.
My main doubt here is why you are calculating capacity based on input material.
Please explain your business process and the exact requirement so that I can help you out.
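Putting the corrected formulas together, a quick sketch using the example figures above (10 minutes per piece, 24 working hours; the backflushed quantity of 72 is made up for illustration):

```python
# Capacity figures from the example above: 10 min per piece, 24 working hours.
std_time_min_per_piece = 10
working_hours = 24

# Correct available capacity: total working time divided by time per piece.
available = working_hours * 60 / std_time_min_per_piece  # 144 pieces/day

def utilization_pct(backflushed_qty, available_capacity):
    # Utilization = produced / available, not the other way round.
    return backflushed_qty / available_capacity * 100

print(available)                       # 144.0
print(utilization_pct(72, available))  # 50.0
```

With the ratio the right way up, producing 72 pieces against a capacity of 144 correctly yields 50% utilization rather than 200%.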
Please check the formulae -
Follow up on an old thread about memory utilization
This thread was active a few months ago; unfortunately, it has taken me until now to have enough spare time to craft a response.
From: SMTP%"[email protected]" 3-SEP-1996 16:52:00.72
To: [email protected]
CC:
Subj: Re: memory utilization
As a general rule, I would agree that memory utilization problems tend to be developer-induced. I believe that is generally true for most development environments. However, this developer was having a little trouble finding out how NOT to induce them. After scouring the documentation for any references to object destructors, or clearing memory, or garbage collection, or freeing objects, or anything else we could think of, all we found was how to clear the rows from an Array object. We did find some reference to setting the object to NIL, but no indication that this was necessary for the memory to be freed.
I believe the documentation, and probably some Tech-Notes, address the issue of freeing memory.
Automatic memory management frees a memory object when no references to the memory object exist. Since references are the reason that a memory object lives, removing the references is the only way that memory objects can be freed. This is why the manuals and Tech-Notes talk about setting references to NIL (i.e. freeing memory in an automatic system is done by NILing references, not by calling freeing routines). This is not an absolute requirement (as you have probably noticed, most things are freed even without setting references to NIL), but it accelerates the freeing of 'dead' objects and reduces memory utilization because fewer 'dead' objects are carried around.
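The NIL-reference rule is the same one modern garbage-collected runtimes follow; as an illustration in Python (not Forte TOOL), an object becomes collectable once the last strong reference to it is dropped:

```python
import gc
import weakref

class Record:
    """Stand-in for any heap-allocated object."""
    pass

obj = Record()
probe = weakref.ref(obj)    # weak reference: does not keep obj alive

assert probe() is not None  # a strong reference still exists, so obj lives

obj = None    # the equivalent of setting the reference to NIL
gc.collect()  # force a collection pass (CPython usually frees immediately)

print(probe() is None)  # True: no references remain, so the object was freed
```

As in Forte, nothing explicitly "frees" the object; dropping the reference is what makes it reclaimable.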
It is my understanding that in this environment, the development tool (Forte) claims to handle memory utilization and garbage collection for you. If that is the case, then it is my opinion that it should be nearly impossible for the developer to create memory-leakage problems without going outside the tool and allocating the memory directly. If that is not the case, then we should have destructor methods available to us so that we can handle them correctly. I know when I am finished with an object, and I would have no problem calling a "destroy" or "cleanup" method. In fact, I would prefer that to just wondering whether Forte will take care of it for me.
It is actually quite easy to create memory leaks. Here are some examples:
Have a heap attribute in a service object. Keep inserting things into the heap and never take them out (i.e. forget to take them out). Since service objects are always live, everything in the heap is also live.
Have an exception handler that catches exceptions and doesn't do anything with the error manager stack (i.e. it doesn't call task.ErrMgr.Clear). If the handler is activated repeatedly in the same task, the stack of exceptions will grow until you run out of memory or the task terminates (task termination empties the error manager stack).
It seems to me that this is a weakness in the tool that should be addressed. Does anyone else have any opinions on this subject?
Actually, the implementation of the advanced features supported by the Forte product results in some complications in areas that can be hard to explain. Memory management happens to be one of the areas most affected. A precise explanation of a non-deterministic process is not possible, but the following attempts to explain the source of the non-determinism.
o The ability to call from compiled C++ to interpreted TOOL and back to compiled C++.
This single ability causes most of the strange effects mentioned in this thread. For C++ code, the location of all variables local to a method is not known (i.e. C++ compilers can't tell you at run-time what is a variable and what isn't). We use the pessimistic assumption that anything that looks like a reference to a memory object is a reference to a memory object. For interpreted TOOL code, the interpreter has exact knowledge of what is a reference and what isn't. But the TOOL interpreter is itself a C++ method. This means that any memory objects referenced by the interpreter during the execution of TOOL code could be stored in local variables in the interpreter. The TOOL interpreter runs until the TOOL code returns or the TOOL code calls into C++. This means that many levels of nested TOOL code can be the source of values assigned to local variables in the TOOL interpreter.
This is the complicated reason that answers the question: why doesn't a variable that is created and only used in a TOOL method that has returned get freed? It is likely that the variable is referenced by local variables in the TOOL interpreter method. This is also why setting the variable to NIL before returning doesn't seem to help. If the variable in question is an Array, then invoking Clear() on the Array seems to help, because even though the Array is still live, the objects referenced by the Array have fewer references. The other common occurrence of this effect is in a TextData that contains a large string. In this case, invoking SetAllocatedSize(0) can be used to NIL the reference to the memory object that actually holds the sequence of characters. Compositions of Arrays and TextDatas (i.e. an Array of TextDatas that all have large TextDatas) can lead to even more problems.
When the TOOL code is turned into a compiled partition, this effect is not noticed because the TOOL interpreter doesn't come into play and things execute the way most people expect. This is one area that we try to improve upon, but it is complicated by the 15 different platforms, and thus C++ compilers, that we support. Changes that work on some machines behave differently on other machines. At this point in time, it occasionally still requires that a TOOL programmer actively address problems. Obviously we try to reduce this need over time.
o Automatic memory management for C++ with support for multi-processor threads.
Supporting automatic memory management for C++ is not a very common feature. It requires a coding standard that defines what is acceptable and what isn't. Additionally, supporting multi-processor threads adds its own set of complications. Luckily, TOOL users are insulated from this because the TOOL-to-C++ code generator knows the coding standard. In the end, you are impacted by the C++ compiler and possibly the differences that occur between different compilers and/or different processors (i.e. Intel x86 versus Alpha). We have seen applications that had memory utilization differences of up to 2:1.
There are two primary sources of differences.
The first source is how compilers deal with dead assignments. The typical TOOL fragment that is being memory-manager friendly might perform the following:
temp : SomeObject = new;
... // Use someObject
temp = NIL;
return;
When this is translated to C++ it looks very similar, in that temp will be assigned the value NULL. Most compilers are smart enough to notice that 'temp' is never used again because the method is going to return immediately, so they skip setting 'temp' to NULL. In this case it should be harmless that the statement was ignored (see the next example for a different variation). In more complicated examples that involve loops (especially long-lived event loops), a missed NIL assignment can lead to leaking the memory object whose reference didn't get set to NIL (incidentally, this is the type of problem that causes the TOOL interpreter to leak references).
The second source is a complicated interaction caused by the history of method invocations. Consider the following:
Method A() invokes method B(), which invokes method C().
Method C() allocates a temporary TextData, invokes SetAllocatedSize(1000000), does some more work, and then returns.
Method B() returns.
Method A() now invokes method D().
Method D() allocates something that causes the memory manager to look for memory objects to free.
Now, even though we have returned out of method C(), we have started invoking methods again. This causes us to re-use portions of the C++ stack used to maintain the history of method invocation and space for local variables. There is some probability that the reference to the 'temporary' TextData will now be visible to the memory manager, because it was not overwritten by the invocation of D() or anything invoked by method D().
This example answers questions of the form: why does setting a local variable to NIL, returning, and then invoking task.Part.Os.RecoverMemory not cause the object referenced by the local variable to be freed?
In most cases these effects cause memory utilization to be slightly higher than expected (in well-behaved cases it's less than 5%). This is a small price to pay for the advantages of automatic memory management.
An object-oriented programming style supported by automatic memory management makes it easy to extend existing objects or sets of objects by composition. For example:
Method A() calls method B() to get the next record from the database. Method B() is used because we always get records (objects) of a certain type from method B(), so that we can reuse code.
Method A() enters each row into a hash table so that it can implement a cache of the last N records seen.
Method A() returns the record to its caller.
With manual memory management, there would have to be some interface that allows method A() and/or the caller of A() to free the record. This requires that the programmer have a lot more knowledge about the various projects and classes that make up the application. If freeing doesn't happen, you have a memory leak; if you free something while it's still being used, the results are unpredictable and most often fatal.
With automatic memory management, method A() can 'free' its reference by removing the reference from the hash table. The caller can 'free' its reference either by setting the reference to NIL or by getting another record and referring to the new record instead of the old one.
Unfortunately, this convenience and power doesn't come for free. Consider the following, which comes from the Forte run-time system:
A Window-class object is a very complex beast. It is composed of two primary parts: the UserWindow object, which contains the variables declared by the user, and the Window object, which contains the object representation of the window created in the Window Workshop. The UserWindow and the Window reference each other. The Window references the Menu and each Widget placed on the Window directly. A compound Window object, like a Panel, can also have objects placed in itself; these are typically called the children. Each of the children also has to know the identity of its Mom, so they refer to their parent object. It should be reasonably obvious that, starting from any object that makes up the window, any other object can be found. This means that if the memory manager finds a reference to any object in the Window, it can also find all other objects in the window. Now, if a reference to any object in the Window can be found on the program stack, all objects in the window can also be found. Since there are so many objects, and the work involved in displaying a window can be very complicated (i.e. the automatic geometry management that lays out the window when it is first opened or resized), there are potentially many different references that would cause the same problem. This leads to a higher than normal probability that a reference exists that causes the whole set of Window objects not to be freed.
We solved this problem in the following fashion:
Added a new method called RecycleMemory() on UserWindow.
Documented that when a window is not going to be used again, it is preferable that RecycleMemory() is invoked instead of Close().
The RecycleMemory() method basically sets all references from parent to child to NIL, and all references from child to parent to NIL. Thus all objects are isolated from the other objects that make up the window.
Changed a few methods on UserWindow, like Open(), to check whether the caller is trying to open a recycled window, and throw an exception if so.
This was feasible because the code to traverse the parent/child relationship already existed and was being used at close time to perform other bookkeeping operations on each of the Widgets.
To summarize:
Automatic memory management is less error-prone and more productive, but doesn't come totally for free. There are things that the programmer can do to assist the memory manager:
o Set object references to NIL when known to be correct (this is the way memory is deallocated in an automatic system).
o Use methods like Clear() on Array and SetAllocatedSize() on TextData, which allow these objects to set their internal references to NIL when known to be correct.
o Use the RecycleMemory() method on windows, especially very complicated windows.
o Build similar types of methods into your own objects when needed.
o If you build highly connected structures that are very large in the number of objects involved, think about how they might be broken apart gracefully (it defeats some of the purpose of automatic management to go to great lengths to deal with the problem).
o Since program stacks are the source of the 'noise' references, try to do things with fewer tasks (this was one of the reasons we implemented event handlers, so that a single task can control many different windows).
Even after doing all this, it's easy to still have a problem. Internally we have access to special tools that can help point at the problem so that it can be solved. We are attempting to give users UNSUPPORTED access to these tools for Release 3. This should allow users to diagnose problems more easily. It also tends to enlighten one about how things are structured and/or point out inconsistencies that are the source of known/unknown bugs.
Derek
Derek Frankforth [email protected]
Forte Software Inc. [email protected]
1800 Harrison St. +510.869.3407
Oakland CA, 94612

I believe he means to reformat it like a floppy disk.
Go into My Computer and locate the drive letter associated with your iPod (it normally says iPod in it, and shows under removable storage).
Right-click on it and choose Format; make sure the "quick format" option is not checked. Then let it format.
If that doesn't work, there are steps somewhere in the 5th gen forum (I don't have the link on hand) to try to use usbstor.sys to update the USB drivers for the Nano/5th gen. -
How to set up notification email for Full CPU utilization on OEM12c?
I have found an Oracle doc. Is that the way email notifications are set up? And how can I verify it after setting up the notifications?
4.1.2.3 Subscribe to Receive E-mail for Incident Rules
An incident rule is a user-defined rule that specifies the criteria by which notifications should be sent for specific events that make up the incident. An incident rule set, as the name implies, consists of one or more rules associated with the same incident.
When creating an incident rule, you specify criteria such as the targets you are interested in, the types of events to which you want the rule to apply. Specifically, for a given rule, you can specify the criteria you are interested in and the notification methods (such as e-mail) that should be used for sending these notifications. For example, you can set up a rule that when any database goes down or any database backup job fails, e-mail should be sent and the "log trouble ticket" notification method should be called. Or you can define another rule such that when the CPU or Memory Utilization of any host reach critical severities, SNMP traps should be sent to another management console. Notification flexibility is further enhanced by the fact that with a single rule, you can perform multiple actions based on specific conditions. Example: When monitoring a condition such as machine memory utilization, for an incident severity of 'warning' (memory utilization at 80%), send the administrator an e-mail, if the severity is 'critical' (memory utilization at 99%), page the administrator immediately.
You can subscribe to a rule you have already created.
From the Setup menu, select Incidents, then select Incident Rules.
On the Incident Rules - All Enterprise Rules page, click on the rule set containing incident escalation rule in question and click Edit... Rules are created in the context of a rule set.
Note: In the case where there is no existing rule set, create a rule set by clicking Create Rule Set... You then create the rule as part of creating the rule set.
In the Rules section of the Edit Rule Set page, highlight the escalation rule and click Edit....
Navigate to the Add Actions page.
Select the action that escalates the incident and click Edit...
http://docs.oracle.com/cd/E24628_01/doc.121/e24473/notification.htm#CACHDCAD

Make sure you have correct thresholds...
from target home>monitoring>"Metric and Collection Settings"
Check the incident rule for warning and critical events for host targets
Setup>Incident>Incident Rules -
Material with multiple usages - CFOP determination
Hello experts!
I would like to validate here the solution we arrived at for automatic CFOP determination for a sales material with multiple usages.
In the company where I work, the same product can be sold:
1) for final consumption and for resale
2) for industrialization and for resale.
In the material master, this setting is on the Accounting (2) view and allows identifying only one usage. It gives the impression that SAP expects us to create multiple materials, but for planning, pricing and other reasons this is not possible.
If this parameter were on the sales view, things would be easier and more logical in our opinion.
Another interesting point is that in J1BTAX > SD > SD view, the customer usage only offers Consumption and Industrialization. It does not include Resale, although from the MM view resale is considered.
Question: is the concept at this point the same as in the material master?
Since we could not configure CFOP determination through these points, and still staying within the standard, we did the following:
1) We configured a new material usage: ZL01 - Resale exception
2) We created a new item category ZBN and assigned it to usage ZL01
3) In Localization > CFOP Determination, we included an exception for this new item category
4) After all this, it was possible in Customer/Material (VD51) to link the material to this usage.
The problem is that this will require replicating a large volume of data for all customers that buy under the exception, and we fear that maintaining this in the future would not be 100% reliable.
Can you suggest another solution?
Thanks in advance
Emerson Zanini

CFOP determination with multiple usages.
Good afternoon everyone,
I have a similar problem in MM.
Sometimes I buy a material for consumption, and sometimes I buy the same material as an asset.
I would like to know if anyone has a solution to this problem.
Creating two material master records is not feasible.
Regards,
Bruno Viol -
VPN IPSEC - Accounting for usage
Would you be able to tell me how to account for the usage period of an IPSEC VPN between two Cisco routers?
Thanks in advance,
Aline -
SAP usage report per user
Good morning everyone,
I am new here on the forum and I have a question I cannot find the answer to anywhere: I need to survey SAP usage per user in order to do a cleanup, i.e. identify which users barely use the system.
Is there a way to find out how many hours per month each user uses the system, or how many times per day they log on? I need a way to see who really uses the system.
Does anyone know how I can get this information?
Thanks for the help.

My friend, in the specific form you want, I don't know of one. But perhaps a Basis colleague will show up here and offer another solution.
For now, without researching the subject in depth and without considering custom (Z) developments, I can recommend as a stopgap the transactions RSUSR200 and S_BCE_68002311.
Use RSUSR200 to see each user's last logon.
For S_BCE_68002311, I would recommend using it as follows: set all users to renew their passwords at an interval that suits your purpose; then in S_BCE_68002311 you can see who is logging on at the frequency that interests you, since it records password changes.
Also consider taking a careful look at transaction SUIM.
Material usage code - 1 - Industrialization - 2 - Consumption
Good afternoon all,
Could anyone tell me a transaction in SAP that shows the usage of a material? For example:
PO          Item  Material usage
4507001234  10    1
Thank you very much for your attention
Davi Paraguai

Hello Davi,
Would a direct table query work? Via SE16 or SE16N you can look this information up in table EKPO, as in the image below.
Or directly in the purchase order, on the Brazil tab (per item).
If that is not enough, explain your objective in more detail.
Regards,
Eduardo Hartmann