Queue alerts container variables are not getting filled
Hi All,
I am referring to this blog to create alerts when messages are stuck in queues:
/people/santhosh.kumarv/blog/2009/05/19/sap-xipi-alerts-for-queue-errors
I am able to get the alert when a message is stuck in an XI queue, but the container variables are not filled with actual runtime values.
I used the report as is and the container variables as described in the blog.
I have implemented alerts for IE & AE; those look good, as expected.
Please let me know if you have any clue.
Regards
Hi,
After investigating everything, I found the following:
1) We already have Integration Engine & Adapter Engine alerts configured in the landscape with separate alert categories, and they work fine.
2) I then configured one more alert category for queue errors per the above-mentioned blog.
The important thing: even if an error occurs on the Integration Engine, I get two alerts, one with the Integration Engine alert category description and one with the queue alert category description.
The reason is that for the queue alert category I selected the "No Restriction" radio button; because of this, whenever an IE error occurs, the queue alert category triggers along with the Integration Engine alert category.
Please let me know whether it is possible to implement queue alerts alongside the IE & AE alert categories.
Has anyone implemented IE, AE & queue alerts in a single landscape?
Regards
vamsi
Edited by: Vamsi Krishna on Jul 19, 2010 4:53 PM
Similar Messages
-
Material Number and Serial Number are not getting filled...Very very urgent
I have my code in the BAdI CRM_EQUI_LOAD, method PERFORM_LOAD. In it I try to create a component for the IBase using the function module CRM_ICSS_CREATE_COMPONENT. The component gets created, but the material and serial number are not getting updated. It is very urgent; please help.
I am not sure what information you are passing to the given FM.
But in my case I have used FM 'CRM_CREATE_IBASE_FROM_EQUI' to create the IBase and component. It creates the IBase and component along with other details such as material number and serial number.
You may refer to the code below:
* Call the standard handling for CRM equi load.
call function 'CRM_CREATE_IBASE_FROM_EQUI'
exporting
is_header = is_header
it_equi_dmbdoc = it_equi_dmbdoc
changing
et_error_segments = ct_error_segment
exceptions
technical_error = 1
others = 2.
if sy-subrc <> 0.
raise equi_load_badi_error.
endif.
Cheers,
Ashish -
Variables are not getting changed for rs_pres_plan report
Hi Guys,
I'm trying to use the RS_prec_plan report for broadcasting, but the variables that I select as a variant in the workbook are not getting replaced.
It is taking them from the saved view of the workbook. I read Note 1154928, which describes the same issue, but no luck. Any help would be appreciated.
Thanks
R
Can anybody please provide suggestions?
Thanks in Advance! -
How can I get past the error message "Variables are not compatible DMB00"?
Hello,
I'm working in Desktop Intelligence XI and I'm using an Excel data provider and a universe data provider. I've linked the two providers on the common field "Cost_Code". The Excel data provider also has a "Cost_Code_Descr_xls" field so I've created a new variable that makes this description field a detail object associated with the "Cost_Code" dimension. This allows me to use both objects in the report.
Some of the cost codes in my universe data provider are not found in my cost_code Excel provider so I'm trying to create a simple formula to deal with these null values:
= if isnull(<Cost_Code_Descr_xls>) then "Unclassified" else <Cost_Code_Descr_xls>
This is where I get the "Variables are not compatible" error.
Any ideas on how to get around this error?
Thanks!
David
Hi David,
I might have been a bit too quick in saying that the only thing you needed to do was replace the object with the variable you created.
The variable is only compatible with the 'Cost Code' dimension, not with any of the other dimensions in the report. Your header probably contains only the 'Cost Code' dimension, so the formula doesn't cause any problems there. But your detail rows contain other, incompatible dimensions.
What you need to do is also create detail variables for the other dimensions and associate them with the 'Cost Code' dimension. Use those newly created detail variables in your report.
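Harry's fix aside, the intent of David's formula is just a keyed lookup with a default. The Python sketch below (all names and data hypothetical, nothing Desktop Intelligence specific) shows the same logic of linking two providers on Cost_Code and substituting "Unclassified" where the Excel side has no match:

```python
# Toy model of linking two data providers on Cost_Code and
# defaulting missing descriptions to "Unclassified".
universe_rows = ["CC01", "CC02", "CC03"]          # cost codes from the universe provider
excel_descr = {"CC01": "Labor", "CC02": "Parts"}  # Cost_Code -> Cost_Code_Descr_xls

def describe(cost_code):
    # Equivalent of: If IsNull(<Cost_Code_Descr_xls>) Then "Unclassified" Else <Cost_Code_Descr_xls>
    return excel_descr.get(cost_code, "Unclassified")

report = {cc: describe(cc) for cc in universe_rows}
print(report)  # CC03 has no Excel match, so it falls back to "Unclassified"
```

The "Variables are not compatible" error is not about this logic; it is about which dimensions the variable can project against, which is why the detail-variable step above is still needed.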
Regards,
Harry -
As of yesterday, calendar subscribers are no longer getting event alerts. We are not exceeding the limits on the number of calendars and have not installed new hardware or software. What else could be causing this, and how do I fix it? This is critical!
The warranty entitles you to complimentary phone support for the first 90 days of ownership.
-
Some Out-Variables are not filled when one is missing (Oracle-Client 10)
Hello everybody,
we have a problem in our applications, written in C++ using OCI.
Everything works fine with Oracle Client 8 and 9; the problem occurs when using Client 10.
A simple example:
select 1, 2, 3, 4, 5 from dual;
I define 4 out-variables; the 3rd one is missing:
OCIDefineByPos(..., 1, &out1, ...);
OCIDefineByPos(..., 2, &out2, ...);
OCIDefineByPos(..., 4, &out4, ...);
OCIDefineByPos(..., 5, &out5, ...);
When executing with Oracle Client 8 and 9, the result is:
out1 = 1
out2 = 2
out4 = 4
out5 = 5
Executing the same with Oracle Client 10, the result is:
out1 = 1
out2 = 2
out4 = 0
out5 = 0
When a selected column has no out-variable defined for it, none of the following out-variables are filled. Can someone reproduce and/or explain this? I read the Oracle docs for OCI 10, but there is no hint about differences or changes in this behaviour. I know that when I select a column I should provide an out-variable for it, but nobody is perfect.
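To make the observed difference concrete, here is a toy model in Python. It has nothing to do with real OCI internals; it merely contrasts the two behaviours reported above: the older clients fill every defined position regardless of gaps, while the Client 10 results look as if copying stops at the first undefined position:

```python
# Toy model of define-by-position fetching. 'defines' maps column
# position -> named output slot, mimicking OCIDefineByPos calls.
row = {1: 1, 2: 2, 3: 3, 4: 4, 5: 5}   # SELECT 1, 2, 3, 4, 5 FROM dual

def fetch_skip_gaps(defines):
    # Fill every defined position, ignoring gaps (what Client 8/9 showed).
    out = {}
    for pos, slot in defines.items():
        out[slot] = row[pos]
    return out

def fetch_stop_at_gap(defines):
    # Stop copying at the first undefined position (what Client 10 showed).
    out = {slot: 0 for slot in defines.values()}
    for pos in sorted(row):
        if pos not in defines:
            break                      # gap: later slots keep their initial 0
        out[defines[pos]] = row[pos]
    return out

defines = {1: "out1", 2: "out2", 4: "out4", 5: "out5"}  # position 3 skipped
```

Under this model the skipped position 3 leaves out4 and out5 at 0 only in the stop-at-gap variant, matching the Client 10 output in the post. The safe workaround either way is to define every selected column, or to drop the unused column from the SELECT list.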
Here are some details:
Oracle Client 10.2.0.1.0
Client OS Windows XP SP1
Oracle Database 10g Release 10.2.0.1.0
Application developed with Visual Studio C++ 7.1
Thanks for any help.
Torsten
Here's the code; I adapted the simple OCI example from the Oracle homepage:
void ocitest()
{
static text *username = (text *) "xxx";
static text *password = (text *) "yyy";
static OCIEnv *envhp;
static OCIError *errhp;
static sword status;
sword out1, out2, out3, out4, out5;
sb2 ind1, ind2, ind3, ind4, ind5; /* indicators */
static text *maxemp = (text *) "SELECT 1, 2, 3, 4, 5 FROM dual ";
OCISession *authp = (OCISession *) 0;
OCIServer *srvhp;
OCISvcCtx *svchp;
OCIStmt *stmthp, *stmthp1;
OCIDefine *defnp1 = (OCIDefine *) 0;
OCIDefine *defnp2 = (OCIDefine *) 0;
OCIDefine *defnp3 = (OCIDefine *) 0;
OCIDefine *defnp4 = (OCIDefine *) 0;
OCIDefine *defnp5 = (OCIDefine *) 0;
(void) OCIInitialize((ub4) OCI_DEFAULT, (dvoid *)0,
(dvoid * (*)(dvoid *, size_t)) 0,
(dvoid * (*)(dvoid *, dvoid *, size_t))0,
(void (*)(dvoid *, dvoid *)) 0 );
(void) OCIEnvInit( (OCIEnv **) &envhp, OCI_DEFAULT, (size_t) 0,
(dvoid **) 0 );
(void) OCIHandleAlloc( (dvoid *) envhp, (dvoid **) &errhp, OCI_HTYPE_ERROR,
(size_t) 0, (dvoid **) 0);
/* server contexts */
(void) OCIHandleAlloc( (dvoid *) envhp, (dvoid **) &srvhp, OCI_HTYPE_SERVER,
(size_t) 0, (dvoid **) 0);
(void) OCIHandleAlloc( (dvoid *) envhp, (dvoid **) &svchp, OCI_HTYPE_SVCCTX,
(size_t) 0, (dvoid **) 0);
(void) OCIServerAttach( srvhp, errhp, (text *)"", strlen(""), 0);
/* set attribute server context in the service context */
(void) OCIAttrSet( (dvoid *) svchp, OCI_HTYPE_SVCCTX, (dvoid *)srvhp,
(ub4) 0, OCI_ATTR_SERVER, (OCIError *) errhp);
(void) OCIHandleAlloc((dvoid *) envhp, (dvoid **)&authp,
(ub4) OCI_HTYPE_SESSION, (size_t) 0, (dvoid **) 0);
(void) OCIAttrSet((dvoid *) authp, (ub4) OCI_HTYPE_SESSION,
(dvoid *) username, (ub4) strlen((char *)username),
(ub4) OCI_ATTR_USERNAME, errhp);
(void) OCIAttrSet((dvoid *) authp, (ub4) OCI_HTYPE_SESSION,
(dvoid *) password, (ub4) strlen((char *)password),
(ub4) OCI_ATTR_PASSWORD, errhp);
checkerr(errhp, OCISessionBegin ( svchp, errhp, authp, OCI_CRED_RDBMS,
(ub4) OCI_DEFAULT));
(void) OCIAttrSet((dvoid *) svchp, (ub4) OCI_HTYPE_SVCCTX,
(dvoid *) authp, (ub4) 0,
(ub4) OCI_ATTR_SESSION, errhp);
checkerr(errhp, OCIHandleAlloc( (dvoid *) envhp, (dvoid **) &stmthp,
OCI_HTYPE_STMT, (size_t) 0, (dvoid **) 0));
checkerr(errhp, OCIHandleAlloc( (dvoid *) envhp, (dvoid **) &stmthp1,
OCI_HTYPE_STMT, (size_t) 0, (dvoid **) 0));
checkerr(errhp, OCIStmtPrepare(stmthp, errhp, maxemp,
(ub4) strlen((char *) maxemp),
(ub4) OCI_NTV_SYNTAX, (ub4) OCI_DEFAULT));
checkerr(errhp, OCIDefineByPos(stmthp, &defnp1, errhp, 1, (dvoid *) &out1,
(sword) sizeof(sword), SQLT_INT, (dvoid *) &ind1, (ub2 *)0,
(ub2 *)0, OCI_DEFAULT));
checkerr(errhp, OCIDefineByPos(stmthp, &defnp2, errhp, 2, (dvoid *) &out2,
(sword) sizeof(sword), SQLT_INT, (dvoid *) &ind2, (ub2 *)0,
(ub2 *)0, OCI_DEFAULT));
/* checkerr(errhp, OCIDefineByPos(stmthp, &defnp3, errhp, 3, (dvoid *) &out3,
(sword) sizeof(sword), SQLT_INT, (dvoid *) &ind3, (ub2 *)0,
(ub2 *)0, OCI_DEFAULT));*/
checkerr(errhp, OCIDefineByPos(stmthp, &defnp4, errhp, 4, (dvoid *) &out4,
(sword) sizeof(sword), SQLT_INT, (dvoid *) &ind4, (ub2 *)0,
(ub2 *)0, OCI_DEFAULT));
checkerr(errhp, OCIDefineByPos(stmthp, &defnp5, errhp, 5, (dvoid *) &out5,
(sword) sizeof(sword), SQLT_INT, (dvoid *) &ind5, (ub2 *)0,
(ub2 *)0, OCI_DEFAULT));
/* execute and fetch */
if ((status = OCIStmtExecute(svchp, stmthp, errhp, (ub4) 1, (ub4) 0,
(CONST OCISnapshot *) NULL, (OCISnapshot *) NULL, OCI_DEFAULT)))
{
if (status != OCI_NO_DATA)
checkerr(errhp, status);
}
}
void checkerr(OCIError *errhp, sword status)
{
text errbuf[512];
sb4 errcode = 0;
switch (status)
{
case OCI_SUCCESS:
break;
case OCI_SUCCESS_WITH_INFO:
(void) printf("Error - OCI_SUCCESS_WITH_INFO\n");
break;
case OCI_NEED_DATA:
(void) printf("Error - OCI_NEED_DATA\n");
break;
case OCI_NO_DATA:
(void) printf("Error - OCI_NODATA\n");
break;
case OCI_ERROR:
(void) OCIErrorGet((dvoid *)errhp, (ub4) 1, (text *) NULL, &errcode,
errbuf, (ub4) sizeof(errbuf), OCI_HTYPE_ERROR);
(void) printf("Error - %.*s\n", 512, errbuf);
break;
case OCI_INVALID_HANDLE:
(void) printf("Error - OCI_INVALID_HANDLE\n");
break;
case OCI_STILL_EXECUTING:
(void) printf("Error - OCI_STILL_EXECUTE\n");
break;
case OCI_CONTINUE:
(void) printf("Error - OCI_CONTINUE\n");
break;
default:
break;
}
}
Use int or long instead of sword for the output variables.
-
IT_FIELDCATALOG IS NOT GETTING FILLED
Hi friends,
I have written a program for workflow tracking. I am getting only the LED light in the output; no data is displayed. I found that IT_FIELDCATALOG = GT_FCAT is not getting filled. How do I fill it? I am pasting my program here; kindly help me out.
TYPE-POOLS: ABAP.
TABLES : PTREQ_ATTABSDATA,PTREQ_HEADER,PTREQ_ITEMS.
TYPES: BEGIN OF TY_S_OUTTAB,
EXCEPTION TYPE LVC_EXLED,
PERNR TYPE P0001-PERNR,
BEGDA TYPE PTREQ_ATTABSDATA-BEGDA,
ENDDA TYPE PTREQ_ATTABSDATA-ENDDA,
SUBTY TYPE SUBTY,
STATUS TYPE PTREQ_HEADER-STATUS,
END OF TY_S_OUTTAB.
TYPES: TY_T_OUTTAB TYPE STANDARD TABLE OF TY_S_OUTTAB
WITH DEFAULT KEY.
DATA : REQUEST_ID TYPE PTREQ_HEADER-REQUEST_ID.
DATA:
GD_REPID TYPE SYREPID,
GD_OKCODE TYPE UI_FUNC,
GT_FCAT TYPE LVC_T_FCAT,
GS_LAYOUT TYPE LVC_S_LAYO,
GS_VARIANT TYPE DISVARIANT,
GO_DOCKING TYPE REF TO CL_GUI_DOCKING_CONTAINER,
GO_GRID TYPE REF TO CL_GUI_ALV_GRID.
DATA: FIELDCATALOG TYPE SLIS_T_FIELDCAT_ALV WITH HEADER LINE.
DATA: GT_OUTTAB TYPE TY_T_OUTTAB.
PARAMETERS:
PERNR TYPE PA0001-PERNR,
REQ_ID TYPE PTREQ_HEADER-REQUEST_ID,
LEA_TY TYPE PA0001-SUBTY,
BEGDA TYPE PA0001-BEGDA,
ENDDA TYPE PA0001-ENDDA.
REQ_ID = '52A08D487A9B5807E10000000A170133'.
START-OF-SELECTION.
BREAK-POINT.
* SELECT * FROM PTREQ_ATTABSDATA INTO CORRESPONDING FIELDS OF TABLE gt_outtab
* WHERE PERNR = PERNR AND SUBTY = LEA_TY.
SELECT C~PERNR C~BEGDA C~ENDDA C~SUBTY A~STATUS INTO CORRESPONDING FIELDS OF TABLE GT_OUTTAB
FROM ( ( PTREQ_HEADER AS A INNER JOIN
PTREQ_ITEMS AS B ON A~ITEM_LIST_ID = B~ITEM_LIST_ID ) INNER JOIN
PTREQ_ATTABSDATA AS C ON B~ITEM_INS = C~ITEM_ID )
WHERE REQUEST_ID = REQ_ID AND REQUEST_TYPE = 'ABSREQ'
AND VERSION_NO = ( SELECT MAX( VERSION_NO ) FROM PTREQ_HEADER
WHERE REQUEST_ID = REQ_ID ) AND
ITEM_LIST_NO = ( SELECT MAX( ITEM_LIST_NO )
FROM PTREQ_ITEMS WHERE ITEM_LIST_ID = A~ITEM_LIST_ID ) .
PERFORM INIT_CONTROLS.
PERFORM CHECK_CONDITION.
* Display data
CALL METHOD GO_GRID->SET_TABLE_FOR_FIRST_DISPLAY
EXPORTING
IS_LAYOUT = GS_LAYOUT
IS_VARIANT = GS_VARIANT
I_SAVE = 'A'
CHANGING
IT_OUTTAB = GT_OUTTAB
IT_FIELDCATALOG = GT_FCAT
EXCEPTIONS
OTHERS = 4.
IF SY-SUBRC = 0.
* MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
* WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
* MESSAGE ID mid TYPE mtype NUMBER num.
ENDIF.
CALL SCREEN '0100'.
END-OF-SELECTION.
*& Module STATUS_0100 OUTPUT
* text
MODULE STATUS_0100 OUTPUT.
SET PF-STATUS 'STATUS_0100'.
* SET TITLEBAR 'xxx'.
** CALL METHOD go_grid1->refresh_table_display
*** EXPORTING
*** IS_STABLE =
*** I_SOFT_REFRESH =
** EXCEPTIONS
** FINISHED = 1
** others = 2
** IF sy-subrc 0.
*** MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
*** WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
** ENDIF.
*OK-CODE->GD_OKCODE.
ENDMODULE. " STATUS_0100 OUTPUT
*& Module USER_COMMAND_0100 INPUT
* text
MODULE USER_COMMAND_0100 INPUT.
CASE GD_OKCODE.
WHEN 'BACK' OR
'END' OR
'CANC'.
SET SCREEN 0.
LEAVE SCREEN.
WHEN OTHERS.
CALL METHOD GO_GRID->REFRESH_TABLE_DISPLAY
EXCEPTIONS
FINISHED = 1
OTHERS = 2.
IF SY-SUBRC = 0.
* MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
* WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
ENDIF.
ENDCASE.
CLEAR: GD_OKCODE.
ENDMODULE. " USER_COMMAND_0100 INPUT
*& Form INIT_CONTROLS
* text
* --> p1 text
* <-- p2 text
FORM INIT_CONTROLS .
* Create ALV grid
CREATE OBJECT GO_GRID
EXPORTING
I_PARENT = GO_DOCKING
EXCEPTIONS
OTHERS = 5.
IF SY-SUBRC = 0.
* MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
* WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
ENDIF.
PERFORM BUILD_FIELDCATALOG.
PERFORM SET_LAYOUT_AND_VARIANT.
ENDFORM. " INIT_CONTROLS
*& Form BUILD_FIELDCATALOG
* text
* --> p1 text
* <-- p2 text
FORM BUILD_FIELDCATALOG.
*fieldcatalog-fieldname = 'EXPECTION'.
* fieldcatalog-seltext_m = 'LIGHT'.
* fieldcatalog-col_pos = 1.
* fieldcatalog-outputlen = 3.
** fieldcatalog-emphasize = 'X'.
* APPEND fieldcatalog TO fieldcatalog.
* CLEAR fieldcatalog.
FIELDCATALOG-FIELDNAME = 'PERNR'.
FIELDCATALOG-SELTEXT_M = 'EMPLOYEE NO'.
FIELDCATALOG-COL_POS = 1.
FIELDCATALOG-OUTPUTLEN = 8.
FIELDCATALOG-EMPHASIZE = 'X'.
APPEND FIELDCATALOG TO FIELDCATALOG.
CLEAR FIELDCATALOG.
FIELDCATALOG-FIELDNAME = 'REQ_ID'.
FIELDCATALOG-SELTEXT_M = 'REQUEST_ID'.
FIELDCATALOG-COL_POS = 2.
APPEND FIELDCATALOG TO FIELDCATALOG.
CLEAR FIELDCATALOG.
FIELDCATALOG-FIELDNAME = 'LEA_TY'.
FIELDCATALOG-SELTEXT_M = 'LEAVE_TYPE'.
FIELDCATALOG-COL_POS = 3.
APPEND FIELDCATALOG TO FIELDCATALOG.
CLEAR FIELDCATALOG.
FIELDCATALOG-FIELDNAME = 'BEGDA'.
FIELDCATALOG-SELTEXT_M = 'BEGIN_DATE'.
FIELDCATALOG-COL_POS = 4.
APPEND FIELDCATALOG TO FIELDCATALOG.
CLEAR FIELDCATALOG.
FIELDCATALOG-FIELDNAME = 'ENDDA'.
FIELDCATALOG-SELTEXT_M = 'END_DATE'.
FIELDCATALOG-COL_POS = 5.
APPEND FIELDCATALOG TO FIELDCATALOG.
CLEAR FIELDCATALOG.
FIELDCATALOG-FIELDNAME = 'STATUS'.
FIELDCATALOG-SELTEXT_M = 'STATUS'.
FIELDCATALOG-COL_POS = 6.
APPEND FIELDCATALOG TO FIELDCATALOG.
CLEAR FIELDCATALOG.
* define local data
DATA:
LS_FCAT TYPE LVC_S_FCAT.
CALL FUNCTION 'LVC_FIELDCATALOG_MERGE'
* EXPORTING
* I_BUFFER_ACTIVE =
* I_STRUCTURE_NAME = 'TY_S_OUTTAB'
* I_CLIENT_NEVER_DISPLAY = 'X'
* I_BYPASSING_BUFFER =
* I_INTERNAL_TABNAME =
CHANGING
CT_FIELDCAT = GT_FCAT
EXCEPTIONS
INCONSISTENT_INTERFACE = 1
PROGRAM_ERROR = 2
OTHERS = 3.
IF SY-SUBRC = 0.
* MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
* WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
ENDIF.
ENDFORM. " BUILD_FIELDCATALOG_KNB1
*& Form SET_LAYOUT_AND_VARIANT
* text
* --> p1 text
* <-- p2 text
FORM SET_LAYOUT_AND_VARIANT .
CLEAR: GS_LAYOUT,
GS_VARIANT.
* GS_LAYOUT-CWIDTH_OPT = ABAP_TRUE.
GS_LAYOUT-ZEBRA = ABAP_TRUE.
GS_LAYOUT-EXCP_FNAME = 'EXCEPTION'. " define column for LED
GS_LAYOUT-EXCP_LED = ABAP_TRUE.
GS_VARIANT-REPORT = SYST-REPID.
GS_VARIANT-HANDLE = 'GRID'.
ENDFORM. " SET_LAYOUT_AND_VARIANT
*& Form CHECK_CONDITION
* text
* --> p1 text
* <-- p2 text
FORM CHECK_CONDITION .
* define local data
DATA: LS_OUTTAB TYPE TY_S_OUTTAB.
LOOP AT GT_OUTTAB INTO LS_OUTTAB.
IF ( LS_OUTTAB-STATUS = 'APPROVED' ).
LS_OUTTAB-EXCEPTION = '3'. " GREEN LED/traffic light
ELSE.
LS_OUTTAB-EXCEPTION = '1'. " RED LED / traffic light
ENDIF.
MODIFY GT_OUTTAB FROM LS_OUTTAB INDEX SYST-TABIX.
ENDLOOP.
ENDFORM. " CHECK_CONDITION
Here, in the function module 'LVC_FIELDCATALOG_MERGE', GT_FCAT is not getting filled. How do I do that?
Regards
vijay
Something is missing: LVC_FIELDCATALOG_MERGE builds the catalog from the Data Dictionary structure you pass in I_STRUCTURE_NAME, and in your call that parameter is commented out (a local type such as TY_S_OUTTAB would not work there anyway), so GT_FCAT stays empty. Alternatively, you can build the catalog manually.
Change the definition of the field catalog:
DATA: FIELDCATALOG TYPE lvc_t_fcat WITH HEADER LINE.
Change the population:
FIELDCATALOG-FIELDNAME = 'PERNR'.
FIELDCATALOG-COLTEXT = 'EMPLOYEE NO'.
FIELDCATALOG-COL_POS = 1.
FIELDCATALOG-OUTPUTLEN = 8.
FIELDCATALOG-EMPHASIZE = 'X'.
APPEND FIELDCATALOG TO FIELDCATALOG.
CLEAR FIELDCATALOG.
FIELDCATALOG-FIELDNAME = 'REQ_ID'.
FIELDCATALOG-COLTEXT = 'REQUEST_ID'.
FIELDCATALOG-COL_POS = 2.
APPEND FIELDCATALOG TO FIELDCATALOG.
CLEAR FIELDCATALOG.
FIELDCATALOG-FIELDNAME = 'LEA_TY'.
FIELDCATALOG-COLTEXT = 'LEAVE_TYPE'.
FIELDCATALOG-COL_POS = 3.
APPEND FIELDCATALOG TO FIELDCATALOG.
CLEAR FIELDCATALOG.
FIELDCATALOG-FIELDNAME = 'BEGDA'.
FIELDCATALOG-COLTEXT = 'BEGIN_DATE'.
FIELDCATALOG-COL_POS = 4.
APPEND FIELDCATALOG TO FIELDCATALOG.
CLEAR FIELDCATALOG.
FIELDCATALOG-FIELDNAME = 'ENDDA'.
FIELDCATALOG-COLTEXT = 'END_DATE'.
FIELDCATALOG-COL_POS = 5.
APPEND FIELDCATALOG TO FIELDCATALOG.
CLEAR FIELDCATALOG.
FIELDCATALOG-FIELDNAME = 'STATUS'.
FIELDCATALOG-COLTEXT = 'STATUS'.
FIELDCATALOG-COL_POS = 6.
APPEND FIELDCATALOG TO FIELDCATALOG.
CLEAR FIELDCATALOG.
Change the display method call:
* Display data
CALL METHOD GO_GRID->SET_TABLE_FOR_FIRST_DISPLAY
EXPORTING
IS_LAYOUT = GS_LAYOUT
IS_VARIANT = GS_VARIANT
I_SAVE = 'A'
CHANGING
IT_OUTTAB = GT_OUTTAB
IT_FIELDCATALOG = FIELDCATALOG[]
EXCEPTIONS
OTHERS = 4.
IF SY-SUBRC EQ 0.
ENDIF.
apply all the changes and see... -
The agents are not getting determined
Hi Gurus,
In the standard workflow WS00300022 with BUS 2104 for Approval of Appropriate request, the agents are not getting determined.
Below are the steps followed:
We have created a subtype of BUS2104 and redefined the Display and Edit methods to add some new functionality. We delegated the business object and followed all the standard steps. The present situation is that the workflow is not able to determine agents.
We have even removed the subtype and checked with the standard functionality, but it is still not determining the agents.
Hi,
Kindly diagnose your WF using the following transactions:
1> SWE4/SWELS -> EVENT TRACE ON/OFF (Switch On the Event Trace)
2> SWEL -> DISPLAY EVENT TRACE
3> SWUD -> WF DIAGNOSTICS
4> SWPR -> WORKFLOW RESTART AFTER ERROR
Enter the WF Number and Click on the LOG Icon.
There you can see the graphical log of the WF to find where it is stuck.
You can also check the technical details of the WF there, to see which WF container has what value.
In the log you can determine the step at which it is unable to determine the agent. That will give you a picture of how to proceed further and what the WF status is at that point.
Kindly check and revert, so that a solution can be provided.
Regards,
Kanika -
Stock transfer PRs are not getting created in APO
Hi,
I have a plant-to-plant transfer configured using a special procurement type. When APO creates stock transfer PRs, it does not assign any numbers to them, and they are not getting transferred to R/3. The CIF models for PIRs & POs are running fine.
In CIF post-processing these PRs are stuck, and I see "Vendor master data does not exist for plant A". Vendor master data has been created for plant A in R/3, and a material info record also exists.
How do I debug this post-processing queue to see what is missing in the vendor master data?
regards,
shan
Hi
Can you explain your scenario in detail? You say plant A, so I suppose there is also a plant named B in your scenario. Which one is the source location and which the destination?
In the plant-to-plant stock transfer scenario, the key is to have the correct settings for the system to do source determination.
Assuming plant A is the destination and plant B the source: first, the material should be maintained at both plants.
You need a purchasing info record (ME11) linking your material and the source plant B.
I assume you are using external procurement in your material master. Special procurement type 40 (stock transfer) should be maintained in the material master.
Then the system creates a stock transfer requisition / purchase requisition using the source plant B. For the system to do this, source determination must happen. To verify that, you can check the following:
When you transfer the purchasing info record, the system automatically generates the external procurement relationship with source location plant B, destination location plant A, and the info record reference.
When you transfer the material master data, the system automatically generates transportation lanes between plant A and plant B. (Keep in mind transportation lanes should be manually created here.)
So in essence you have to check whether all the above things have been done by the system.
Hope I'm clear on this.
regards
Mohan V R Chunchu -
Set type is not getting filled
Hi ,
I have created a set type ZTLS_ORG, which is organization-dependent. This set type has fields from table MVKE. I am filling the fields of this set type in an enhancement of BAdI definition CUSTOMER_PRODUCT2. But the values are not getting populated when I check the product master through COMMPR01. Kindly suggest.
Regards
Shweta
Hello Shweta,
You mean to say that the attributes of your set type are nothing but fields from table MVKE?
Have you generated the set type before using it in your program?
Best Regards,
Shanthala Kudva. -
Logs are not getting applied in the standby database
Hello,
I have created a physical standby database, and the logs are not getting applied.
Following is an extract of the standby alert log:
Wed Sep 05 07:53:59 2012
Media Recovery Log /u01/oracle/oradata/ABC/archives/1_37638_765704228.arc
Error opening /u01/oracle/oradata/ABC/archives/1_37638_765704228.arc
Attempting refetch
Media Recovery Waiting for thread 1 sequence 37638
Fetching gap sequence in thread 1, gap sequence 37638-37643
Wed Sep 05 07:53:59 2012
RFS[46]: Assigned to RFS process 3081
RFS[46]: Allowing overwrite of partial archivelog for thread 1 sequence 37638
RFS[46]: Opened log for thread 1 sequence *37638* dbid 1723205832 branch 765704228
Wed Sep 05 07:55:34 2012
RFS[42]: Possible network disconnect with primary database
However, the archived files are getting copied to the standby server.
I tried registering and recovering the logs, but that also failed.
Here is some of the information:
Primary
Oracle 11gR2 EE
SQL> select max(sequence#) from v$log where archived='YES';
MAX(SEQUENCE#)
37668
SQL> select DEST_NAME, RECOVERY_MODE,DESTINATION,ARCHIVED_SEQ#,APPLIED_SEQ#, SYNCHRONIZATION_STATUS,SYNCHRONIZED,GAP_STATUS from v$archive_dest_status where DEST_NAME = 'LOG_ARCHIVE_DEST_3';
DEST_NAME RECOVERY_MODE DESTINATION ARCHIVED_SEQ# APPLIED_SEQ# SYNCHRONIZATION_STATUS SYNCHRONIZED GAP_STATUS
LOG_ARCHIVE_DEST_3 MANAGED REAL TIME APPLY ABC 37356 0 CHECK CONFIGURATION NO RESOLVABLE GAP
Standby
Oracle 11gR2 EE
SQL> select max(sequence#) from v$archived_log where applied='YES';
MAX(SEQUENCE#)
37637
SQL> select * from v$archive_gap;
no rows selected
SQL> select open_mode, database_role from v$database;
OPEN_MODE DATABASE_ROLE
READ ONLY WITH APPLY PHYSICAL STANDBY
Please help me troubleshoot this and get the standby in sync.
Thanks a lot.
The results are as follows:
SQL> select process, status, group#, thread#, sequence# from v$managed_standby order by process, group#, thread#, sequence#;
PROCESS STATUS GROUP# THREAD# SEQUENCE#
ARCH CLOSING 1 1 37644
ARCH CLOSING 1 1 37659
ARCH CONNECTED N/A 0 0
ARCH CONNECTED N/A 0 0
ARCH CONNECTED N/A 0 0
ARCH CONNECTED N/A 0 0
ARCH CONNECTED N/A 0 0
ARCH CONNECTED N/A 0 0
ARCH CONNECTED N/A 0 0
ARCH CONNECTED N/A 0 0
ARCH CONNECTED N/A 0 0
ARCH CONNECTED N/A 0 0
ARCH CONNECTED N/A 0 0
ARCH CONNECTED N/A 0 0
ARCH CONNECTED N/A 0 0
ARCH CONNECTED N/A 0 0
ARCH CONNECTED N/A 0 0
ARCH CONNECTED N/A 0 0
ARCH CONNECTED N/A 0 0
ARCH CONNECTED N/A 0 0
ARCH CONNECTED N/A 0 0
ARCH CONNECTED N/A 0 0
ARCH CONNECTED N/A 0 0
ARCH CONNECTED N/A 0 0
ARCH CONNECTED N/A 0 0
ARCH CONNECTED N/A 0 0
ARCH CONNECTED N/A 0 0
ARCH CONNECTED N/A 0 0
ARCH CONNECTED N/A 0 0
ARCH CONNECTED N/A 0 0
MRP0 WAIT_FOR_GAP N/A 1 37638
RFS IDLE N/A 0 0
RFS IDLE N/A 0 0
RFS IDLE N/A 0 0
RFS RECEIVING N/A 1 37638
RFS RECEIVING N/A 1 37639
RFS RECEIVING N/A 1 37640
RFS RECEIVING N/A 1 37641
RFS RECEIVING N/A 1 37642
RFS RECEIVING N/A 1 37655
RFS RECEIVING N/A 1 37673
RFS RECEIVING N/A 1 37675
42 rows selected.
SQL>
SQL> select name,value, time_computed from v$dataguard_stats;
NAME VALUE TIME_COMPUTED
transport lag +00 02:44:33 09/05/2012 09:25:58
apply lag +00 03:14:30 09/05/2012 09:25:58
apply finish time +00 00:01:09.974 09/05/2012 09:25:58
estimated startup time 12 09/05/2012 09:25:58
SQL> select timestamp , facility, dest_id, message_num, error_code, message from v$dataguard_status order by timestamp
TIMESTAMP FACILITY DEST_ID MESSAGE_NUM ERROR_CODE MESSAGE
05-SEP-12 Remote File Server 0 60 0 RFS[13]: Assigned to RFS process 2792
05-SEP-12 Remote File Server 0 59 0 RFS[12]: Assigned to RFS process 2790
05-SEP-12 Log Apply Services 0 61 16037 MRP0: Background Media Recovery cancelled with status 16037
05-SEP-12 Log Apply Services 0 62 0 MRP0: Background Media Recovery process shutdown
05-SEP-12 Log Apply Services 0 63 0 Managed Standby Recovery Canceled
05-SEP-12 Log Apply Services 0 64 0 Managed Standby Recovery not using Real Time Apply
05-SEP-12 Log Apply Services 0 65 0 Attempt to start background Managed Standby Recovery process
05-SEP-12 Log Apply Services 0 66 0 MRP0: Background Managed Standby Recovery process started
05-SEP-12 Log Apply Services 0 67 0 Managed Standby Recovery not using Real Time Apply
05-SEP-12 Log Apply Services 0 68 0 Media Recovery Waiting for thread 1 sequence 37638 (in transit)
05-SEP-12 Network Services 0 69 0 RFS[5]: Possible network disconnect with primary database
05-SEP-12 Network Services 0 70 0 RFS[6]: Possible network disconnect with primary database
05-SEP-12 Remote File Server 0 71 0 RFS[14]: Assigned to RFS process 2829
05-SEP-12 Remote File Server 0 72 0 RFS[15]: Assigned to RFS process 2831
05-SEP-12 Network Services 0 73 0 RFS[9]: Possible network disconnect with primary database
05-SEP-12 Remote File Server 0 74 0 RFS[16]: Assigned to RFS process 2833
05-SEP-12 Network Services 0 75 0 RFS[1]: Possible network disconnect with primary database
05-SEP-12 Remote File Server 0 76 0 RFS[17]: Assigned to RFS process 2837
05-SEP-12 Network Services 0 77 0 RFS[3]: Possible network disconnect with primary database
05-SEP-12 Network Services 0 78 0 RFS[2]: Possible network disconnect with primary database
05-SEP-12 Network Services 0 79 0 RFS[7]: Possible network disconnect with primary database
05-SEP-12 Remote File Server 0 80 0 RFS[18]: Assigned to RFS process 2848
05-SEP-12 Network Services 0 81 0 RFS[16]: Possible network disconnect with primary database
05-SEP-12 Remote File Server 0 82 0 RFS[19]: Assigned to RFS process 2886
05-SEP-12 Network Services 0 83 0 RFS[19]: Possible network disconnect with primary database
05-SEP-12 Remote File Server 0 84 0 RFS[20]: Assigned to RFS process 2894
05-SEP-12 Log Apply Services 0 85 16037 MRP0: Background Media Recovery cancelled with status 16037
05-SEP-12 Log Apply Services 0 86 0 MRP0: Background Media Recovery process shutdown
05-SEP-12 Log Apply Services 0 87 0 Managed Standby Recovery Canceled
05-SEP-12 Remote File Server 0 89 0 RFS[22]: Assigned to RFS process 2900
05-SEP-12 Remote File Server 0 88 0 RFS[21]: Assigned to RFS process 2898
05-SEP-12 Remote File Server 0 90 0 RFS[23]: Assigned to RFS process 2902
05-SEP-12 Remote File Server 0 91 0 Primary database is in MAXIMUM PERFORMANCE mode
05-SEP-12 Remote File Server 0 92 0 RFS[24]: Assigned to RFS process 2904
05-SEP-12 Remote File Server 0 93 0 RFS[25]: Assigned to RFS process 2906
05-SEP-12 Log Apply Services 0 94 0 Attempt to start background Managed Standby Recovery process
05-SEP-12 Log Apply Services 0 95 0 MRP0: Background Managed Standby Recovery process started
05-SEP-12 Log Apply Services 0 96 0 Managed Standby Recovery starting Real Time Apply
05-SEP-12 Log Apply Services 0 97 0 Media Recovery Waiting for thread 1 sequence 37638 (in transit)
05-SEP-12 Log Transport Services 0 98 0 ARCa: Beginning to archive thread 1 sequence 37644 (7911979302-7912040568)
05-SEP-12 Log Transport Services 0 99 0 ARCa: Completed archiving thread 1 sequence 37644 (0-0)
05-SEP-12 Network Services 0 100 0 RFS[8]: Possible network disconnect with primary database
05-SEP-12 Log Apply Services 0 101 16037 MRP0: Background Media Recovery cancelled with status 16037
05-SEP-12 Log Apply Services 0 102 0 Managed Standby Recovery not using Real Time Apply
05-SEP-12 Log Apply Services 0 103 0 MRP0: Background Media Recovery process shutdown
05-SEP-12 Log Apply Services 0 104 0 Managed Standby Recovery Canceled
05-SEP-12 Network Services 0 105 0 RFS[20]: Possible network disconnect with primary database
05-SEP-12 Remote File Server 0 106 0 RFS[26]: Assigned to RFS process 2930
05-SEP-12 Network Services 0 107 0 RFS[24]: Possible network disconnect with primary database
05-SEP-12 Remote File Server 0 108 0 RFS[27]: Assigned to RFS process 2938
05-SEP-12 Network Services 0 109 0 RFS[14]: Possible network disconnect with primary database
05-SEP-12 Remote File Server 0 110 0 RFS[28]: Assigned to RFS process 2942
05-SEP-12 Network Services 0 111 0 RFS[15]: Possible network disconnect with primary database
05-SEP-12 Remote File Server 0 112 0 RFS[29]: Assigned to RFS process 2986
05-SEP-12 Network Services 0 113 0 RFS[17]: Possible network disconnect with primary database
05-SEP-12 Remote File Server 0 114 0 RFS[30]: Assigned to RFS process 2988
05-SEP-12 Log Apply Services 0 115 0 Attempt to start background Managed Standby Recovery process
05-SEP-12 Log Apply Services 0 116 0 MRP0: Background Managed Standby Recovery process started
05-SEP-12 Log Apply Services 0 117 0 Managed Standby Recovery starting Real Time Apply
05-SEP-12 Log Apply Services 0 118 0 Media Recovery Waiting for thread 1 sequence 37638 (in transit)
05-SEP-12 Network Services 0 119 0 RFS[18]: Possible network disconnect with primary database
05-SEP-12 Remote File Server 0 120 0 RFS[31]: Assigned to RFS process 3003
05-SEP-12 Network Services 0 121 0 RFS[26]: Possible network disconnect with primary database
05-SEP-12 Remote File Server 0 122 0 RFS[32]: Assigned to RFS process 3005
05-SEP-12 Network Services 0 123 0 RFS[27]: Possible network disconnect with primary database
05-SEP-12 Remote File Server 0 124 0 RFS[33]: Assigned to RFS process 3009
05-SEP-12 Remote File Server 0 125 0 RFS[34]: Assigned to RFS process 3012
05-SEP-12 Log Apply Services 0 127 0 Managed Standby Recovery not using Real Time Apply
05-SEP-12 Log Apply Services 0 126 16037 MRP0: Background Media Recovery cancelled with status 16037
05-SEP-12 Log Apply Services 0 128 0 MRP0: Background Media Recovery process shutdown
05-SEP-12 Log Apply Services 0 129 0 Managed Standby Recovery Canceled
05-SEP-12 Network Services 0 130 0 RFS[32]: Possible network disconnect with primary database
05-SEP-12 Log Apply Services 0 131 0 Managed Standby Recovery not using Real Time Apply
05-SEP-12 Log Apply Services 0 132 0 Attempt to start background Managed Standby Recovery process
05-SEP-12 Log Apply Services 0 133 0 MRP0: Background Managed Standby Recovery process started
05-SEP-12 Log Apply Services 0 134 0 Managed Standby Recovery not using Real Time Apply
05-SEP-12 Log Apply Services 0 135 0 Media Recovery Waiting for thread 1 sequence 37638 (in transit)
05-SEP-12 Network Services 0 136 0 RFS[33]: Possible network disconnect with primary database
05-SEP-12 Remote File Server 0 137 0 RFS[35]: Assigned to RFS process 3033
05-SEP-12 Log Apply Services 0 138 16037 MRP0: Background Media Recovery cancelled with status 16037
05-SEP-12 Log Apply Services 0 139 0 MRP0: Background Media Recovery process shutdown
05-SEP-12 Log Apply Services 0 140 0 Managed Standby Recovery Canceled
05-SEP-12 Remote File Server 0 141 0 RFS[36]: Assigned to RFS process 3047
05-SEP-12 Log Apply Services 0 142 0 Attempt to start background Managed Standby Recovery process
05-SEP-12 Log Apply Services 0 143 0 MRP0: Background Managed Standby Recovery process started
05-SEP-12 Log Apply Services 0 144 0 Managed Standby Recovery starting Real Time Apply
05-SEP-12 Log Apply Services 0 145 0 Media Recovery Waiting for thread 1 sequence 37638 (in transit)
05-SEP-12 Network Services 0 146 0 RFS[35]: Possible network disconnect with primary database
05-SEP-12 Remote File Server 0 147 0 RFS[37]: Assigned to RFS process 3061
05-SEP-12 Network Services 0 148 0 RFS[36]: Possible network disconnect with primary database
05-SEP-12 Remote File Server 0 149 0 RFS[38]: Assigned to RFS process 3063
05-SEP-12 Remote File Server 0 150 0 RFS[39]: Assigned to RFS process 3065
05-SEP-12 Network Services 0 151 0 RFS[25]: Possible network disconnect with primary database
05-SEP-12 Network Services 0 152 0 RFS[21]: Possible network disconnect with primary database
05-SEP-12 Remote File Server 0 153 0 Archivelog record exists, but no file is found
05-SEP-12 Remote File Server 0 154 0 RFS[40]: Assigned to RFS process 3067
05-SEP-12 Network Services 0 155 0 RFS[37]: Possible network disconnect with primary database -
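The repeated "cancelled with status 16037" entries correspond to ORA-16037 (user requested cancel of managed recovery), after which MRP0 is restarted, sometimes with and sometimes without real-time apply. A sketch of the SQL*Plus statements typically behind these log entries on the standby (syntax assumed for a 10g/11g-era Data Guard setup; verify against your version's documentation):

```sql
-- Stop managed recovery; the standby logs this as MRP0 cancelled
-- with status 16037 (ORA-16037: user requested cancel).
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

-- Restart it with real-time apply ("USING CURRENT LOGFILE"), running in
-- the background; this matches "Managed Standby Recovery starting Real Time Apply".
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
  USING CURRENT LOGFILE DISCONNECT FROM SESSION;

-- Confirm MRP0 is back and see which log sequence it is waiting on.
SELECT process, status, thread#, sequence#
  FROM v$managed_standby
 WHERE process LIKE 'MRP%';
```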
IPA(phonetics) chars are not getting displayed in ...
Hi,
I am using a Nokia E63 and am facing a character display problem in the browser; not all characters, only those related to IPA.
IPA (phonetics) characters are not getting displayed in the internet browser (even in Opera Mini). How do I solve this?
Ex: www.dictionary.com shows IPA characters, but the E63 shows a plain box.
Is it possible to fix this problem, or should I go for an Android OS based phone?
Thank you...
Samson -
Workitems are not getting displayed in UWL
Hi All,
We have deployed the ESS/MSS business packages in our system and configured the UWL to display work items in the portal. But when testing a leave request, the leave request work items are not getting displayed in the portal UWL. Any help on this will be appreciated.
Thanks,
Krish.
Hello,
"I have created some new users and completed the leave request; it comes to the backend inbox, but is not visible in the portal. I re-configured the UWL, but the task (leave request) is still not coming. For other users it is coming."
Where have you created these users? Often it is not enough to create new users in a backend system. You have to create them in the data source used by the UME of your portal, or map this data source to the system where you created your new users.
You can see which data source your portal is using by going to
Portal --> System Administration --> System Configuration --> UME Configuration --> Data Source
If this data source doesn't contain your new users, the UWL will not be able to show their items.
Here you can check whether your user is contained in the data source:
Portal --> User Administration --> choose the right data source --> put in the username --> search
Please check that your users are correctly created.
Regards,
Iris -
Portal events are not getting loaded into the Analytics database tables
Analytics database ASFACT tables (ASFACT_PAGEVIEWS, ASFACT_PORLETVIEW) are not getting populated with data.
Possible diagnosis/workarounds tried:
-Checked the analytics configuration in configuration manager, Enable Analytics Communication option checked
-Registered Portal Events during analytics installation
-Verified that UDP events are sent out from the portal: Test: OK
-Reinstalled Interaction analytics component
Any inputs highly appreciated.
Cheers,
Sandeep
In collector.log, found the exception:
08 Jul 2010 07:12:54,613 ERROR PageViewHandler - could not retrieve user: com.plumtree.analytics.collector.exception.DimensionManagerException: Could not insert dimension in the database
com.plumtree.analytics.collector.exception.DimensionManagerException: Could not insert dimension in the database
at com.plumtree.analytics.collector.cache.DimensionManager.insertDB(DimensionManager.java:271)
at com.plumtree.analytics.collector.cache.DimensionManager.manageDBImage(DimensionManager.java:139)
at com.plumtree.analytics.collector.cache.DimensionManager.handleNewDimension(DimensionManager.java:85)
at com.plumtree.analytics.collector.eventhandler.BaseEventHandler.insertDimension(BaseEventHandler.java:63)
at com.plumtree.analytics.collector.eventhandler.BaseEventHandler.getUser(BaseEventHandler.java:198)
at com.plumtree.analytics.collector.eventhandler.PageViewHandler.handle(PageViewHandler.java:71)
at com.plumtree.analytics.collector.DataResolver.handleEvent(DataResolver.java:165)
at com.plumtree.analytics.collector.DataResolver.run(DataResolver.java:126)
Caused by: org.hibernate.MappingException: Unknown entity: com.plumtree.analytics.core.persist.BaseCustomEventDimension$$BeanGeneratorByCGLIB$$6a0493c4
at org.hibernate.impl.SessionFactoryImpl.getEntityPersister(SessionFactoryImpl.java:569)
at org.hibernate.impl.SessionImpl.getEntityPersister(SessionImpl.java:1086)
at org.hibernate.event.def.AbstractSaveEventListener.saveWithGeneratedId(AbstractSaveEventListener.java:83)
at org.hibernate.event.def.DefaultSaveOrUpdateEventListener.saveWithGeneratedOrRequestedId(DefaultSaveOrUpdateEventListener.java:184)
at org.hibernate.event.def.DefaultSaveEventListener.saveWithGeneratedOrRequestedId(DefaultSaveEventListener.java:33)
at org.hibernate.event.def.DefaultSaveOrUpdateEventListener.entityIsTransient(DefaultSaveOrUpdateEventListener.java:173)
at org.hibernate.event.def.DefaultSaveEventListener.performSaveOrUpdate(DefaultSaveEventListener.java:27)
at org.hibernate.event.def.DefaultSaveOrUpdateEventListener.onSaveOrUpdate(DefaultSaveOrUpdateEventListener.java:69)
at org.hibernate.impl.SessionImpl.save(SessionImpl.java:481)
at org.hibernate.impl.SessionImpl.save(SessionImpl.java:476)
at com.plumtree.analytics.collector.cache.DimensionManager.insertDB(DimensionManager.java:266)
... 7 more
In analyticsui.log, found the exception below:
08 Jul 2010 06:50:25,910 ERROR Configuration - Could not compile the mapping document
org.hibernate.MappingException: duplicate import: com.plumtree.analytics.core.persist.BaseCustomEventFact$$BeanGeneratorByCGLIB$$6a896b0d
at org.hibernate.cfg.Mappings.addImport(Mappings.java:105)
at org.hibernate.cfg.HbmBinder.bindPersistentClassCommonValues(HbmBinder.java:541)
at org.hibernate.cfg.HbmBinder.bindClass(HbmBinder.java:488)
at org.hibernate.cfg.HbmBinder.bindRootClass(HbmBinder.java:234)
at org.hibernate.cfg.HbmBinder.bindRoot(HbmBinder.java:152)
at org.hibernate.cfg.Configuration.add(Configuration.java:362)
at org.hibernate.cfg.Configuration.addXML(Configuration.java:317)
at com.plumtree.analytics.core.HibernateUtil.loadEventMappings(HibernateUtil.java:796)
at com.plumtree.analytics.core.HibernateUtil.loadEventMappings(HibernateUtil.java:652)
at com.plumtree.analytics.core.HibernateUtil.refreshCustomEvents(HibernateUtil.java:496)
at com.plumtree.analytics.ui.common.AnalyticsInitServlet.init(AnalyticsInitServlet.java:104)
at org.apache.catalina.core.StandardWrapper.loadServlet(StandardWrapper.java:1161)
at org.apache.catalina.core.StandardWrapper.load(StandardWrapper.java:981)
at org.apache.catalina.core.StandardContext.loadOnStartup(StandardContext.java:4045)
at org.apache.catalina.core.StandardContext.start(StandardContext.java:4351)
at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:791)
at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:771)
at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:525)
at org.apache.catalina.startup.HostConfig.deployDirectory(HostConfig.java:920)
at org.apache.catalina.startup.HostConfig.deployDirectories(HostConfig.java:883)
at org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:492)
at org.apache.catalina.startup.HostConfig.start(HostConfig.java:1138)
at org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:311)
at org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:117)
at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1053)
at org.apache.catalina.core.StandardHost.start(StandardHost.java:719)
at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1045)
at org.apache.catalina.core.StandardEngine.start(StandardEngine.java:443)
at org.apache.catalina.core.StandardService.start(StandardService.java:516)
at org.apache.catalina.core.StandardServer.start(StandardServer.java:710)
at org.apache.catalina.startup.Catalina.start(Catalina.java:566)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:585)
at com.plumtree.container.Bootstrap.start(Bootstrap.java:531)
at com.plumtree.container.Bootstrap.main(Bootstrap.java:254)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:585)
at org.tanukisoftware.wrapper.WrapperStartStopApp.run(WrapperStartStopApp.java:238)
at java.lang.Thread.run(Thread.java:595)
08 Jul 2010 06:50:25,915 ERROR Configuration - Could not configure datastore from XML
org.hibernate.MappingException: duplicate import: com.plumtree.analytics.core.persist.BaseCustomEventFact$$BeanGeneratorByCGLIB$$6a896b0d
at org.hibernate.cfg.Mappings.addImport(Mappings.java:105)
at org.hibernate.cfg.HbmBinder.bindPersistentClassCommonValues(HbmBinder.java:541)
at org.hibernate.cfg.HbmBinder.bindClass(HbmBinder.java:488)
at org.hibernate.cfg.HbmBinder.bindRootClass(HbmBinder.java:234)
at org.hibernate.cfg.HbmBinder.bindRoot(HbmBinder.java:152)
at org.hibernate.cfg.Configuration.add(Configuration.java:362)
at org.hibernate.cfg.Configuration.addXML(Configuration.java:317)
at com.plumtree.analytics.core.HibernateUtil.loadEventMappings(HibernateUtil.java:796)
at com.plumtree.analytics.core.HibernateUtil.loadEventMappings(HibernateUtil.java:652)
at com.plumtree.analytics.core.HibernateUtil.refreshCustomEvents(HibernateUtil.java:496)
at com.plumtree.analytics.ui.common.AnalyticsInitServlet.init(AnalyticsInitServlet.java:104)
at org.apache.catalina.core.StandardWrapper.loadServlet(StandardWrapper.java:1161)
at org.apache.catalina.core.StandardWrapper.load(StandardWrapper.java:981)
at org.apache.catalina.core.StandardContext.loadOnStartup(StandardContext.java:4045)
at org.apache.catalina.core.StandardContext.start(StandardContext.java:4351)
at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:791)
at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:771)
at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:525)
at org.apache.catalina.startup.HostConfig.deployDirectory(HostConfig.java:920)
at org.apache.catalina.startup.HostConfig.deployDirectories(HostConfig.java:883)
at org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:492)
at org.apache.catalina.startup.HostConfig.start(HostConfig.java:1138)
at org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:311)
at org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:117)
at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1053)
at org.apache.catalina.core.StandardHost.start(StandardHost.java:719)
at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1045)
at org.apache.catalina.core.StandardEngine.start(StandardEngine.java:443)
at org.apache.catalina.core.StandardService.start(StandardService.java:516)
at org.apache.catalina.core.StandardServer.start(StandardServer.java:710)
at org.apache.catalina.startup.Catalina.start(Catalina.java:566)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:585)
at com.plumtree.container.Bootstrap.start(Bootstrap.java:531)
at com.plumtree.container.Bootstrap.main(Bootstrap.java:254)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:585)
at org.tanukisoftware.wrapper.WrapperStartStopApp.run(WrapperStartStopApp.java:238)
at java.lang.Thread.run(Thread.java:595)
wrapper_collector.log
INFO | jvm 1 | 2009/11/10 17:25:22 | at com.plumtree.analytics.collector.eventhandler.PortletViewHandler.handle(PortletViewHandler.java:46)
INFO | jvm 1 | 2009/11/10 17:25:22 | at com.plumtree.analytics.collector.DataResolver.handleEvent(DataResolver.java:165)
INFO | jvm 1 | 2009/11/10 17:25:22 | at com.plumtree.analytics.collector.DataResolver.run(DataResolver.java:126)
INFO | jvm 1 | 2009/11/10 17:25:22 | Caused by: java.sql.SQLException: [plumtree][Oracle JDBC Driver][Oracle]ORA-00001: unique constraint (ANALYTICSDBUSER.IX_USERBYUSERID) violated
INFO | jvm 1 | 2009/11/10 17:25:22 |
INFO | jvm 1 | 2009/11/10 17:25:22 | at com.plumtree.jdbc.base.BaseExceptions.createException(Unknown Source)
Key words from the error messages suggest that a reinstallation of Analytics is needed to resolve this. The Analytics database is failing to get updated with the correct event mappings, which is why no data is being inserted.
"Could not insert dimension in the database",
"ERROR Configuration - Could not configure datastore from XML
org.hibernate.MappingException: duplicate import: com.plumtree.analytics.core.persist.BaseCustomEventFact$$BeanGeneratorByCGLIB$$6a896b0d"
"ORA-00001: unique constraint (ANALYTICSDBUSER.IX_USERBYUSERID) violated",
"ERROR Configuration - Could not compile the mapping document -
ORA-01008 All variables are not bound
Hi, I am running this query and getting the exception ORA-01008: all variables are not bound.
Could anyone please provide some insight?
SELECT EQMT_INGT_LOG_ID, EQMT_ID,
XMLSerialize(DOCUMENT XMLType(ingLog.BUCK_SLIP_XML) AS CLOB) BUCK_SLIP_XML
FROM TOS_EQMT_INGT_LOG ingLog
where BUCK_SLIP_XML is not null and ingt_date between to_date(:fromDate, 'MM/DD/YYYY HH24:MI')
and to_date(:toDate,'MM/DD/YYYY HH24:MI' )
and eqmt_id in (select eqmt_id from tos_eqmt
where eqmt_init = :eqmtInit and eqmt_nbr = :eqmtNbr
and orig_loca_id in (select loca_id from tos_loca where altn_src_sys_stn_id = :circ7 ))
and SCAC = :scac
and STCC = :stcc
and SHPR_NAME = :shprName
and CNSE_NAME = :cnseName
and driv_id in (select driv_id from tos_driv where lcns = :lcns )
and driv_id in (select driv_id from tos_driv where sabv = :sabv )
and ingt_stat_ind = :ingtStatInd
and BUCK_SLIP_XML like :inspectedBy
ORDER BY INGT_DATE
Slightly off-topic, but what do you think this does:
XMLSerialize(DOCUMENT XMLType(ingLog.BUCK_SLIP_XML) AS CLOB) BUCK_SLIP_XML?
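ORA-01008 means at least one placeholder in the statement was never given a value, e.g. a bind name misspelled on the binding side, or a value concatenated into the string instead of being bound. A minimal sketch of the rule in SQL*Plus, using a hypothetical cut-down version of the query above:

```sql
-- Every distinct placeholder must be bound before execution;
-- binding :fromDate but not :toDate would raise ORA-01008.
VARIABLE fromDate VARCHAR2(16)
VARIABLE toDate   VARCHAR2(16)
EXEC :fromDate := '01/01/2010 00:00'
EXEC :toDate   := '12/31/2010 23:59'

SELECT COUNT(*)
  FROM tos_eqmt_ingt_log
 WHERE ingt_date BETWEEN TO_DATE(:fromDate, 'MM/DD/YYYY HH24:MI')
                     AND TO_DATE(:toDate,   'MM/DD/YYYY HH24:MI');
```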
Maybe you are looking for
-
Unzip file from froms then load
i have form (basically called loader) which read text file and then load it in data base. Now,there is a user requirement that if text file is in zip format then he can unzip file using form. Any idea how to do this can some unzip software used at ba
-
Turn repeat off on iPod classic
I have just got a new car and it has an iPod connection. Since using it, when I play podcasts at home it plays the next one in sequence despite repeat being "off"! Although off, the iPod says play current playlist once! the other options are "on
-
Create a procedure which will pass table name as an argument
Please let me know, I want to create a procedure which will take table name as an argument and will display total no. of rows of that table.
-
BAPI to read all projects in which a user (bp) is staffed?
Is there a BAPI in cPro 4.5 that returns all projects in which a certain user (business partner) is staffed? Something like "get 'my' projects"?
-
OS is Solaris 9 - I just downloaded the latest SQL Developer (1.2>), and when I try to run it using the sqldeveloper.sh script, I get: Unrecognized VM option 'JavaPriority10_To_OSPriority=10' Could not create the Java virtual machine. Any ideas what'