SMF and monitored processes

Hi Folks,
Please look at the following description.
I have implemented a manifest and method for the JES Messaging Server. The JES Messaging Server uses a script called start-msg to bring imap / pop / http /smtp services. I am running in to problems after implementing the manifest and the method.
Question 1: The ./start-msg script (which we use to start the messaging services) starts a number of processes, e.g. imap / pop / http / etc., as shown below. Since my manifest uses start-msg, I would assume that all the services (processes) started by start-msg should be monitored under SMF?
# svcs -p svc:/network/messaging_server:default
STATE STIME FMRI
online 11:20:01 svc:/network/messaging_server:default
11:19:54 3245 watcher
11:19:54 3247 stored
11:19:56 3248 imapd
11:19:58 3252 mshttpd
11:19:59 3254 imsched
11:19:59 3256 dispatcher
11:20:00 3259 job_controller
11:20:00 3260 tcp_smtp_server
11:20:00 3262 tcp_smtp_server
11:20:01 3267 AService
11:55:00 3293 msprobe
Question 2: I tried killing one of the processes with kill -9, and this restarts the entire messaging server. Well and good, but say we don't want this to happen: we want only the process that died to be restarted. In other words, is it possible to have finer granularity, so that only imap gets restarted?
Question 3: It seems like SMF doesn't recognize pkill; if I repeat the same experiment as above with pkill imapd, SMF does not seem to be notified.
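For reference, one way to see what the restarter is actually reacting to is to watch the service's process contract with the stock contract tools; a minimal sketch against the instance above:
# look up the id of the process contract backing the service instance
CTID=`svcprop -p restarter/contract svc:/network/messaging_server:default`
# show the contract's status and member processes
ctstat -v -i $CTID
# watch contract events (process exits, fatal signals, core dumps) as they happen
ctwatch $CTID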
Question 4: Looking at the svcadm usage below, my method script (msg) implements start / stop / refresh; will this suffice? (See the sketch after the usage list below.)
svcadm enable [-rst] ... - enable and online service(s)
svcadm disable [-st] ... - disable and offline service(s)
svcadm restart ... - restart specified service(s)
svcadm refresh ... - re-read service configuration
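For what it's worth, svcadm restart does not require a separate 'restart' method: the restarter simply runs the stop method and then the start method, so start / stop / refresh should be enough. A minimal sketch against the instance above:
# restart is synthesized from the stop method followed by the start method
svcadm restart svc:/network/messaging_server:default
# refresh re-reads the service configuration and runs the refresh method, if one is defined
svcadm refresh svc:/network/messaging_server:default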
The following is a very detailed description:
# svcs -xv svc:/network/messaging_server:default
svc:/network/messaging_server:default (?)
State: online since Wed Feb 28 11:20:01 2007
See: /var/svc/log/network-messaging_server:default.log
Impact: None.
MANIFEST
# cat /var/svc/manifest/network/messaging.xml
<?xml version="1.0"?>
<!--
Sun Java Enterprise System Messaging Server
-->
<service_bundle type='manifest' name='messaging_server'>
<service
    name='network/messaging_server'
    type='service'
    version='1'>
    <create_default_instance enabled='false' />
    <single_instance />
    <dependency
        name='network-service'
        grouping='require_all'
        restart_on='none'
        type='service'>
        <service_fmri value='svc:/network/service' />
    </dependency>
    <dependency
        name='fs-local'
        grouping='require_all'
        restart_on='none'
        type='service'>
        <service_fmri value='svc:/system/filesystem/local' />
    </dependency>
    <dependency
        name='name-services'
        grouping='require_all'
        restart_on='refresh'
        type='service'>
        <service_fmri value='svc:/milestone/name-services' />
    </dependency>
    <dependency
        name='identity'
        grouping='optional_all'
        restart_on='refresh'
        type='service'>
        <service_fmri value='svc:/system/identity:domain' />
    </dependency>
    <exec_method
        type='method'
        name='start'
        exec='/opt/SUNWmsgsr/lib/svc/method/msg start'
        timeout_seconds='180' >
    </exec_method>
    <exec_method
        type='method'
        name='stop'
        exec='/opt/SUNWmsgsr/lib/svc/method/msg stop'
        timeout_seconds='180' >
    </exec_method>
    <exec_method
        type='method'
        name='refresh'
        exec='/opt/SUNWmsgsr/lib/svc/method/msg refresh'
        timeout_seconds='180' >
    </exec_method>
    <property_group name='application' type='framework'>
        <propval name='messaging_server_base' type='astring'
            value='/opt/SUNWmsgsr' />
    </property_group>
    <stability value='Unstable' />
</service>
</service_bundle>
METHOD
# cat /opt/SUNWmsgsr/lib/svc/method/msg
#!/sbin/sh -x
#
# SMF method script for the JES Messaging Server: start | stop | refresh
#
. /lib/svc/share/smf_include.sh

# Look up a property of this service instance and echo its value
# (an empty string "" in the repository is normalized to an empty value).
parser() {
    PROPERTY=$1
    VALUE=`svcprop -p ${PROPERTY} ${SMF_FMRI}`
    # VALUE=`svcprop -p ${PROPERTY} test_server`   # earlier test against a fixed FMRI
    if [ "${VALUE}" = "\"\"" ] ; then
        VALUE=
    fi
    echo $VALUE
}

MESSAGING_BASE=`parser application/messaging_server_base`
if [ -z "${MESSAGING_BASE}" ] ; then
    exit $SMF_EXIT_ERR_CONFIG
fi

case "$1" in
'start')
    ${MESSAGING_BASE}/sbin/start-msg
    if [ $? -ne 0 ] ; then
        exit $SMF_EXIT_ERR_FATAL
    fi
    exit $SMF_EXIT_OK
    ;;
'stop')
    ${MESSAGING_BASE}/sbin/stop-msg
    if [ $? -ne 0 ] ; then
        exit $SMF_EXIT_ERR_FATAL
    fi
    exit $SMF_EXIT_OK
    ;;
'refresh')
    ${MESSAGING_BASE}/lib/msstart -x
    sleep 5
    ${MESSAGING_BASE}/sbin/start-msg
    if [ $? -ne 0 ] ; then
        exit $SMF_EXIT_ERR_FATAL
    fi
    exit $SMF_EXIT_OK
    ;;
*)
    echo "Usage: $0 { start | stop | refresh }"
    exit $SMF_EXIT_ERR_FATAL
    ;;
esac
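For reference, a minimal sketch of how the manifest and method above are loaded and exercised (assuming the paths shown in this post):
# import the manifest into the repository
svccfg import /var/svc/manifest/network/messaging.xml
# enable the instance (the manifest creates it disabled)
svcadm enable svc:/network/messaging_server:default
# confirm it is online and list the processes in its contract
svcs -p svc:/network/messaging_server:default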
Thanks in Advance
-DD

Your problem is not due to a deadlock in SMF code. Your problem is that
not all of the processes which implement the service have been stopped, so
SMF assumes that the service hasn't been stopped yet.
By "simple modifications" I mean that if the processes that you need to be
in separate contracts are launched by a script, then you can just change
the script to launch them with "ctrun", which launches the processes in a
new process contract. This would usually be considered an unsupportable
modification of the software, though, so the vendor may choose not to
help you fix any problems you may encounter.
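As an illustration, a minimal sketch of such a change (the daemon path here is hypothetical; the real start-msg may launch its children quite differently):
# inside the startup script: launch imapd under its own process contract,
# so its death empties only that contract instead of the service's contract
ctrun -l child /opt/SUNWmsgsr/lib/imapd &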
David

Similar Messages

  • Difference between Integration Process and Monitoring Process

    Hi Experts,
    What is the difference between Integration Process and Monitoring Process available in PI7.1?
    SAP says that Monitoring process is a special kind of integration process that receives the event messages.
    My doubt is that even an integration process can receive event messages.
    Why are these two different types of entities created for the same purpose?
    And what is the technical difference between the two from a PI perspective?
    Regards,
    Sami.

    My question is now answered.
    [https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/70a25d3a-e4fa-2a10-43b5-b4169ba3eb17]
    On page 17 of this PDF the following sentence is mentioned:
    From a technical perspective, there is no difference between a monitoring process and an integration process, though logically they are two different things.
    Monitoring processes are used to receive only event messages, which comprise event data only.
    For example, purchase order creation is an event, and its event message will carry event data such as Order Id, Created on, Created by, Quantity, etc., instead of the whole purchase order.
    Whereas an integration process is a way to provide a solution in specific circumstances, such as where we have to automate our process or where we need something in between for the course of communication.
    Guys thanks for your precious time.
    Regards,
    Sami.

  • Step Group , Monitoring Process and Integration Process.

    Hi Experts,
    I would like to know when we use a Step Group or a Monitoring Process, and what the difference between these two and an Integration Process is, given that we can use all the steps of an Integration Process in a Step Group and a Monitoring Process.
    Regards,
    Syed

    For the Step Group:
    You can consider it as a BPM which can be used in another BPM... a concept similar to a reusable class :). So if you have a constant pattern of Receive -> Transformation -> Send in some BPMs, then create a Step Group and include it in the required BPMs...
    http://help.sap.com/saphelp_nwpi71/helpdata/en/42/ef868be2753268e10000000a1553f6/frameset.htm
    Regards,
    Abhishek.

  • Separate Distribution Monitor Export and Import Processes on Multiple Machines

    Hi,
    Would you kindly let me know whether it is possible (meaning, in an officially supported way) to run Distribution Monitor export and import processes on different machines?
    As per SAP note 0001595840 "Using DISTMON and MIGMON on source and target systems", it says as below.
    > 1. DISTMON expects the export and import to be carried out on the same server
    I think it means that export and import processes for the same tables must be run on the same machine; is that correct? If yes, exporting on machine A and then importing the exported data on another machine B is not the officially supported way... (However, I know it is technically possible.)
    Kind regards,
    Yutaka

    Hi Yutaka,
    Point no. 2 & 3 clarify the confusion. However let me explain it briefly:
    Distribution Monitor is used basically in case of migration of large SAP systems (database). It provides the feature to increase parallelism of export and import, distributing the process across available systems.
    You have to prepare the system for using DistMon. A common directory needs to be created as "commDir", and if you use multiple systems to execute a larger number of export and import processes, that "commDir" must be shared across all those systems. This is what point no. 1 in KBA 1595840 refers to. Distribution Monitor runs both the export and the import process from the machine that was prepared for DistMon, and DistMon itself controls the other processes, i.e. MigMon; there is no need to start a separate MigMon.
    For example: you are performing a migration of an SAP system from OS: AIX and DB: DB2 to OS: HP-UX and DB: Oracle. You need to perform the export using DistMon and you have 4 Windows servers that can be used for parallel export/import. Once you have prepared the system that hosts the "commDir" for DistMon, you provide the information about the involved host machines in the "distribution_monitor_cmd.properties" file. When DistMon is executed it will automatically distribute the export and import processes across the systems defined in the "distribution_monitor_cmd.properties" file.
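    As a purely illustrative sketch (Solaris-style commands; the host name and paths are invented for the example), sharing "commDir" across the participating machines could look like:
    # on the host that owns the common directory (hypothetical path):
    share -F nfs -o rw /export/commDir
    # on each additional export/import host:
    mkdir -p /commDir
    mount -F nfs disthost:/export/commDir /commDir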
    Best regards,
    SUJIT

  • Difference between Monitoring Process and Integration Process

    What is the difference between the Monitoring Process object for BAM and Integration Process object? Both seem to have exactly the same design environment in the Enterprise Services Builder.
    Is it so that Monitoring Process alarms only appear in the UWL of the specified user?
    Also, suppose I want to create a receive step to wait for the arrival of a message instead of an event (say I want to check that a specific message arrives before a certain time, correlating several fields of the message, which is something I believe I cannot do with alert monitoring). Am I able to do this? I cannot see a reason why not, but I'd like confirmation.
    BR,
    Tony.

    Hi,
    Thanks for the link! I read through the replies, but it still leaves a couple of basic questions unsolved:
    1) Why did SAP distinguish these two types of PI objects at design time - the Monitoring Process and the Integration Process?
    2) new capabilities of PI 7.1 are touted as:
    Event provisioning and consumption for BAM:
    - Local container
    - Subscription and handling of business process events
    - Milestone Monitoring
    So can I only employ event provisioning, subscription of business process events, and milestone monitoring with a Monitoring Process, or can I do that with an Integration Process as well?
    BR,
    Tony.

  • SAP XI and Business Process Modelling and Monitoring

    hi All,
    Could anybody tell me whether SAP XI has any Business Process Modelling tools or any Process Monitoring tools?
    If so, are these analogous to IBM WBI's Modeller and Monitor? Also, does SAP XI provide any human interaction?
    Regards,
    Shree Norway

    Hi,
    In the XI Repository you can build business processes with the Process Builder.
    For monitoring you can use the workflow monitoring in XI.
    See also transaction "SXMB_MONI_BPE".
    Regards,
    Robin

  • Error handling, logging and monitoring business process

    I would like to know more about error handling, logging and monitoring in business processes. Can someone give more information on these topics?

    Chandran
    Please refer to following tutorials to understand each of these topics in detail:
    Validations:
    http://www.orafmwschool.com/validations/
    Exception Handling:
    http://www.orafmwschool.com/exception-handling/
    Fault Management Tutorial:
    http://www.orafmwschool.com/fault-management-tutorial/
    Business Activity Monitoring Tutorial:
    http://www.orafmwschool.com/bam-tutorial/
    You'll have to refer to the Oracle documentation to understand the finer details.
    -Amjad.

  • How to save business processes structure and monitoring parameters?

    Hi dears!
    I've got a little question: is it possible to save the business process structure and monitoring parameters as a local file?
    Thanks in advance!

    One way to resolve this problem is to use the method commit_and_refresh as shown below.
    data: lv_dest type rfcdest.
    cl_hrrcf_m_rfc_services=>commit_and_refresh( lv_dest ).

  • Screen Sharing and WindowServer processes using 35-45% of CPU!!

    I have recently relocated and reconfigured my Apple network and computers in the house: 1) a MacBook Air (MBA) with a 20" Cinema Display as a dual-display setup in the office, and 2) a Mac Mini connected to a large LCD TV in the living room.
    The plan now is to use the MBA as a primary computer and connect via wireless screen sharing to the MM so that I can monitor that system, run more intensive apps, watch EyeTV, file server tasks, etc.
    But now this has increased CPU and fan activity on the MBA beyond what I would expect. Activity Monitor indicates that the Screen Sharing and WindowServer processes are using a combined 35-45% of the CPU (the screen sharing process is a constant 25%!!).
    This is not good, and I wonder if it will be fixed in a future release. Any ideas or suggestions on how to limit the impact of screen sharing on CPU?

    It seems to be running better after a system update and with a faster computer (now using the higher-end MacBook Air).

  • Hello, I have a Mac computer with NVIDIA 750M dedicated graphics card and monitor EIZO but the problem was there when I was working with Windows and Acer monitor. When I open a file from Camera Raw in PS this is smaller than the screen so I double-click w

    Hello, I have a Mac computer with an NVIDIA 750M dedicated graphics card and an EIZO monitor, but the problem was already there when I was working with Windows and an Acer monitor. When I open a file from Camera Raw in PS it is smaller than the screen, so I double-click with the "hand" tool to fit it to the screen, but the picture loses sharpness and becomes blurry. If I magnify the image even slightly with the "zoom" tool, the picture becomes clear again. In Camera Raw, instead, it is always sharp. I can work around the problem by turning off the graphics card in PS, but I often use plugins that need the graphics card; otherwise the processing time is much longer. I ask for help.
    Thanks.


  • Performance analysis and monitoring of a Forte application

    Hello,
    It would be good if one could do some performance analysis and monitoring of
    a Forte application at production time.
    By performance analysis I mean measure the time some selected methods take
    to return. In a CS application such a method would be some selected method
    of a key remote service representative of the application's activity.
    One would like to measure the min, max, and average time, and also give a
    threshold which, when reached, will automatically generate an alarm or some
    pre-defined processing.
    The most powerful way would be to use an SNMP application through a
    Forte-SNMP gateway (see the G. Puterbaugh paper "Building a Forte-SNMP gateway"
    in the '96 Forte Forum proceedings).
    But before going that far some simple means accessible through EConsole
    would already be great.
    A colleague of mine went to that Forum and reported that Puterbaugh said that
    such an agent is currently missing, but that its implementation is not
    difficult.
    I looked at all the agents and their instruments. I came to the following
    conclusions :
    1) Instrumented data are available at the granularity of a partition,
    not at a smaller granularity. For example, the DistObjectMgr agent gives you
    very useful information: the number of events (sent/received) and the
    number of (remote) methods (called/invoked), but only for the entire
    partition. This prevents making fine-grained observations (unless you
    partition your application in a special way, putting the thing you want
    to observe, and only that thing, in a dedicated partition).
    2) there is no instrumented data related to processing time.
    This leads me to the conclusion that no information observed by the standard
    agents helps me figure out my performance. Thus I have to add, at
    development time, some lines of code to the methods I may want to
    observe later at production time, to generate the appropriate information that a
    custom agent will then display (process) with the appropriate instruments.
    Does someone share this position?
    Has someone implemented such an agent and associated means?
    PS: I will probably implement my own if there is no other way around.
    best regards,
    Pierre Gelli
    ADP GSI
    Payroll and Human Resources Management
    72-78, Grande Rue, F-92310 SEVRES
    phone : +33 1 41 14 86 42 (direct) +33 1 41 14 85 00 (reception desk)
    fax : +33 1 41 14 85 99


  • Autonomy, Governance and Monitoring Inter Organisational BPM

    Hi All,
    Which technological approach is suitable for implementing Collaborating BPM across the value chain from the point of view of Autonomy, Governance, Security, and Monitoring? What are the benefits of selecting the one approach over the other?
    These approaches are Centralise, Decentralise, and Peer-to-Peer.
    Explanation of these approaches:
    Centralise CBPM: Ownership of the collaborative system is with one central organisation, and the other partners participate in the collaboration through various UIs such as a portal. Example: collaboration between an automobile manufacturer and a dealer, where the automobile manufacturer provides a portal for the dealer to place orders, manage its customers, and handle warranty and recalls.
    Decentralise CBPM: In this approach every partner provides communication technology, such as Web Services, for its business partners for collaboration. The ownership is decentralised and more flexible.
    Peer-to-Peer CBPM: Ideally, in this approach every partner has the same technology, developed for handling collaborative business processes. In this case the same modelling, monitoring, and implementation technology is available across the value chain.
    I kindly request your views and appreciate any questions and guidance.
    Regards,
    Ganesh Sawant

    PROS:
    Using web services you can trigger the BPM process and pass values from the Web Dynpro component, which is available to the end user, on to the BPM process.
    We can use an EJB as a web service, where the automated activity in the BPM can output a value based on the logic in the EJB function. We can manage the automated activity, which runs in the background of the process as per our logic and returns the value to the next task.
    CONS
    Using web services in the various activities with little data is not advisable, as it takes a longer time to deploy the process. And any change in the data of the web services requires regularly re-importing the web services.


  • OWB mappings to skip rows that are in error and continue processing

    OWB mappings to skip rows that are in error and continue processing.
    1) Enter a record into an error log
    2) Skip rows that are in error
    3) and continue processing
    Type of information could be needed in the error log:
    SY_LOG_ERROR_KEY
    ERROR_TIMESTAMP
    MAP_NAME
    SOURCE_RECORD
    ERROR_CODE
    ERROR_MESSAGE
    ERROR_NOTES
    Example:
    Suppose the source table has five records, of which 3 records have some error.
    When I run the OWB mapping to load the source data into the target table, OWB should skip the 3 error records and load all the remaining records. This is our requirement.
    Another thing: I want to store the error record details in an error log table.
    Can you please tell me whether this is possible in OWB? If not, please give some suggestion on how to do this.

    Hi,
    thanks for your help. As our OWB version is 10.2.0, the set-based approach is not working; with your idea I created a POST PROCESSING MAPPING and it is now working fine.
    Step 1:
    Create a table MAP_ERROR_LOG.
    Script:
    CREATE TABLE MAP_ERROR_LOG
    (
    ERROR_SEQ NUMBER,
    MAPPING_NAME VARCHAR2(32 BYTE),
    TARGET_TABLE VARCHAR2(35 BYTE),
    TARGET_COLUMN VARCHAR2(35 BYTE),
    TARGET_VALUE VARCHAR2(100 BYTE),
    PRIMARY_TABLE VARCHAR2(100 BYTE),
    ERROR_ROWKEY NUMBER,
    ERROR_CODE VARCHAR2(12 BYTE),
    ERROR_MESSAGE VARCHAR2(2000 BYTE),
    ERROR_TIMESTAMP DATE
    )
    TABLESPACE ODS_D1_AA
    PCTUSED 0
    PCTFREE 10
    INITRANS 1
    MAXTRANS 255
    STORAGE (
    INITIAL 80K
    MINEXTENTS 1
    MAXEXTENTS 2147483645
    PCTINCREASE 0
    BUFFER_POOL DEFAULT
    )
    LOGGING
    NOCOMPRESS
    NOCACHE
    NOPARALLEL
    MONITORING;
    Step 2:
    Create a sequence MAP_ERROR_LOG_SEQ
    CREATE SEQUENCE MAP_ERROR_LOG_SEQ START WITH 1 INCREMENT BY 1;
    Step 3:
    Create a procedure PROC_MAP_ERROR_LOG through OWB.
    In this I have used 3 cursors: the first cursor is used to check the count of error messages for the corresponding table (WB_RT_ERROR_SOURCES).
    The second cursor is used to get the Oracle error and the primary key values.
    The third cursor is used to get Oracle DBA errors, such as "UNABLE TO EXTEND THE TABLESPACE".
    CREATE OR REPLACE PROCEDURE PROC_MAP_ERROR_LOG(MAP_ID VARCHAR2) IS
    --initialize variables here
    CURSOR C1 IS
    SELECT COUNT(RTA_IID) FROM OWBREPO.WB_RT_ERROR_SOURCES
    WHERE RTA_IID =( SELECT MAX(RTA_IID) FROM OWBREPO.WB_RT_AUDIT WHERE RTA_PRIMARY_TARGET ='"'||MAP_ID||'"');
    V_COUNT NUMBER;
    CURSOR C2 IS
    SELECT A.RTE_ROWKEY ERR_ROWKEY,SUBSTR(A.RTE_SQLERRM,1,INSTR(A.RTE_SQLERRM,':')-1) ERROR_CODE,
    SUBSTR(A.RTE_SQLERRM,INSTR(A.RTE_SQLERRM,':')+1) ERROR_MESSAGE,
    C.RTA_LOB_NAME MAPPING_NAME,SUBSTR(B.RTS_SOURCE_COLUMN,(INSTR(B.RTS_SOURCE_COLUMN,'.')+1)) TARGET_COLUMN,
    B.RTS_VALUE TARGET_VALUE,C.RTA_PRIMARY_SOURCE PRIMARY_SOURCE,C.RTA_PRIMARY_TARGET TARGET_TABLE,
    C.RTA_DATE ERROR_TIMESTAMP
    FROM OWBREPO.WB_RT_ERRORS A,OWBREPO.WB_RT_ERROR_SOURCES B, OWBREPO.WB_RT_AUDIT C
    WHERE C.RTA_IID = A.RTA_IID
    AND C.RTA_IID = B.RTA_IID
    AND A.RTA_IID = B.RTA_IID
    AND A.RTE_ROWKEY =B.RTE_ROWKEY
    --AND RTS_SEQ =1  
    AND B.RTS_SEQ IN (SELECT POSITION FROM ALL_CONS_COLUMNS A,ALL_CONSTRAINTS B
    WHERE A.TABLE_NAME = B.TABLE_NAME
    AND A.CONSTRAINT_NAME = B.CONSTRAINT_NAME
    AND A.TABLE_NAME =MAP_ID
    AND CONSTRAINT_TYPE ='P')
    AND A.RTA_IID =(
    SELECT MAX(RTA_IID) FROM OWBREPO.WB_RT_AUDIT WHERE RTA_PRIMARY_TARGET ='"'||MAP_ID||'"');
    CURSOR C3 IS
    SELECT A.RTE_ROWKEY ERR_ROWKEY,SUBSTR(A.RTE_SQLERRM,1,INSTR(A.RTE_SQLERRM,':')-1) ERROR_CODE,
    SUBSTR(A.RTE_SQLERRM,INSTR(A.RTE_SQLERRM,':')+1) ERROR_MESSAGE,
    C.RTA_LOB_NAME MAPPING_NAME,SUBSTR(B.RTS_SOURCE_COLUMN,(INSTR(B.RTS_SOURCE_COLUMN,'.')+1)) TARGET_COLUMN,
    B.RTS_VALUE TARGET_VALUE,C.RTA_PRIMARY_SOURCE PRIMARY_SOURCE,C.RTA_PRIMARY_TARGET TARGET_TABLE,
    C.RTA_DATE ERROR_TIMESTAMP
    FROM OWBREPO.WB_RT_ERRORS A,OWBREPO.WB_RT_ERROR_SOURCES B, OWBREPO.WB_RT_AUDIT C
    WHERE C.RTA_IID = A.RTA_IID
    AND A.RTA_IID = B.RTA_IID (+)
    AND A.RTE_ROWKEY =B.RTE_ROWKEY (+)
    AND A.RTA_IID =(
    SELECT MAX(RTA_IID) FROM OWBREPO.WB_RT_AUDIT WHERE RTA_PRIMARY_TARGET ='"'||MAP_ID||'"');
    -- main body
    BEGIN
    DELETE ED_ODS.MAP_ERROR_LOG WHERE TARGET_TABLE ='"'||MAP_ID||'"';
    COMMIT;
    OPEN C1;
    FETCH C1 INTO V_COUNT;
    IF V_COUNT >0 THEN
    FOR REC IN C2
    LOOP
    INSERT INTO ED_ODS.MAP_ERROR_LOG
    (Error_seq ,
    Mapping_name,
    Target_table,
    Target_column ,
    Target_value ,
    Primary_table ,
    Error_rowkey ,
    Error_code ,
    Error_message ,
    Error_timestamp)
    VALUES(
    ED_ODS.MAP_ERROR_LOG_SEQ.NEXTVAL,
    REC.MAPPING_NAME,
    REC.TARGET_TABLE,
    REC.TARGET_COLUMN,
    REC.TARGET_VALUE,
    REC.PRIMARY_SOURCE,
    REC.ERR_ROWKEY,
    REC.ERROR_CODE,
    REC.ERROR_MESSAGE,
    REC.ERROR_TIMESTAMP);
    END LOOP;
    ELSE
    FOR REC IN C3
    LOOP
    INSERT INTO ED_ODS.MAP_ERROR_LOG
    (Error_seq ,
    Mapping_name,
    Target_table,
    Target_column ,
    Target_value ,
    Primary_table ,
    Error_rowkey ,
    Error_code ,
    Error_message ,
    Error_timestamp)
    VALUES(
    ED_ODS.MAP_ERROR_LOG_SEQ.NEXTVAL,
    REC.MAPPING_NAME,
    REC.TARGET_TABLE,
    REC.TARGET_COLUMN,
    REC.TARGET_VALUE,
    REC.PRIMARY_SOURCE,
    REC.ERR_ROWKEY,
    REC.ERROR_CODE,
    REC.ERROR_MESSAGE,
    REC.ERROR_TIMESTAMP);
    END LOOP;
    END IF;
    CLOSE C1;
    COMMIT;
    -- NULL; -- allow compilation
    EXCEPTION
    WHEN OTHERS THEN
    NULL; -- enter any exception code here
    END;

  • Unit tests and QA process

    Hello,
    (disclaimer : if you agree that this topic does not really belong to this forum please vote for a new Development Process forum there:
    http://forum.java.sun.com/thread.jspa?forumID=53&threadID=504658 ;-)
    My current organization has a dedicated QA team.
    They handle end-user functional testing but also run and monitor "technical" tests as well.
    In particular they would want to run developer-written junit tests as sanity tests before the functional tests.
    I'm wondering whether this is such a good idea, and how to handle failed unit tests:
    1) Well, indeed, I think this is a good idea: even if developers all abide by the practice of ensuring 100% of their tests pass before promoting their code (which is unfortunately not the case), integration of independent developments may cause regressions or interactions that make some tests fail.
    Any reason against QA running junit tests at this stage?
    However the next question is: what do they do with failed tests? QA has no clue how important a given unit test is with regard to the whole application.
    Maybe a single failed unit test out of 3500 means a complete outage of a 24x7 application. Or maybe 20% failed tests only means a few misaligned icons...
    2) The developer of the failed package may know, but how can he communicate this to QA?
    Javadocing their unit testing code ("This test is mandatory before entering user acceptance") seems a bit fragile.
    Are there recommended methods?
    3) Even the developer of the failed package may not realize the importance of the failure. So what should be the process when unit tests fail in QA?
    Block the process until 100% of tests pass? Or start acceptance anyway but notify the developer through the bug tracking system?
    4) Does your acceptance process require 100% pass before user acceptance starts?
    Indeed I have ruled out requiring 100% pass, but is this a widespread practice?
    I rule it out because maybe the failed test indeed points out a bad test, or a temporary unavailability of a dependent or simulated resource.
    This has to be analyzed of course, as tests have to be maintained as well, but this can be a parallel process to the user acceptance (accepting that the software may have to be patched at some point during the acceptance).
    Thank you for your inputs.
    J.

    > Any reason against QA running junit tests at this stage?
    Actually running them seems pointless to me.
    QA could be interested in the following:
    - That unit tests do actually exist
    - That the unit tests are actually being run
    - That the unit tests pass.
    This can all be achieved as part of the build process, however. It can either be done for every CM build (like an automated nightly build) or just for release builds.
    This would require that the following information is logged:
    - An id unique to each test
    - Pass/fail
    - A collection system.
    Obviously doing this is going to require more work, and probably more code, than if QA was not tracking it.
    > However the next question is: what do they do with failed tests? QA has no clue how important a given unit test is with regard to the whole application. Maybe a single failed unit test out of 3500 means a complete outage of a 24x7 application. Or maybe 20% failed tests only means a few misaligned icons...
    To me that question is like asking what happens if one class fails to build for a release build.
    To my mind any unit test failure is logged as a severe exception (the entire system is unusable).
    > 2) The developer of the failed package may know, but how can he communicate this to QA? Javadocing their unit testing code ("This test is mandatory before entering user acceptance") seems a bit fragile. Are there recommended methods?
    Automatic collection, obviously. This has to be planned for.
    One way is to just log success and failure for each test, gathered in one or more files. Then a separate process munges the result files to collect the data.
    I know that there is a Java build engine (an add-on to ant, or a wrapper around ant) which will do periodic builds and email reports to developers. I think it even allows for categorization so the correct developer gets the correct error.
    > 3) Even the developer of the failed package may not realize the importance of the failure. So what should be the process when unit tests fail in QA? Block the process until 100% of tests pass? Or start acceptance anyway but notify the developer through the bug tracking system?
    I would block it.
    > 4) Does your acceptance process require 100% pass before user acceptance starts?
    No. But I am not sure what that has to do with what you were discussing above. I consider unit tests and acceptance testing to be two separate things.
    > Indeed I have ruled out requiring 100% pass, but is this a widespread practice? I rule it out because maybe the failed test indeed points out a bad test, or a temporary unavailability of a dependent or simulated resource.
    Then something is wrong with your process.
    When you create a release build you should include the things that should be in the release. If they are not done, are missing, or have build errors, then they shouldn't be in the release.
    If some dependent piece is missing for the release build then the build process has failed, and so it should not be released to QA.
    I have used version control/debug systems which made this relatively easy by allowing control over which bugs/enhancements are included in a release, and of course requiring that anything checked in must be done under a bug/enhancement. They will even control dependencies as well (the files for bug 7 might require that bug 8 is added as well).
