RSTSODSPART contains wrong PARTNO value

We are having trouble deleting PSA/changelog data. The requests/changelogs we wish to delete are visible in the PSA tree (directory) and exist physically in the PSA tables. But when we run the deletion job, we receive the following type of job log entry (no runtime error is logged):
DDL time(___1): .........2 milliseconds
Delete request REQU_4DDT1PRMY41NHYNPT2B872PHK from PSA 8BOF_O41_OA : Error - subrc: 2
Checks using RSRV, RSAR_PSA_PARTITION_CHECK, and SAP_PSA_PARTNO_CORRECT all come back clean.
The log of RSAR_PSA_CLEANUP_DIRECTORY contains the following entries:
Request : REQU_4DDT1PRMY41NHYNPT2B872PHK deletion flag inconsistent in RSTSODSPART
Orphaned RSTSODSPART entries detected
Partition 0001 not dropped
Also, I have noticed that the PARTNO values in RSTSODSPART for the affected requests do not correspond to the PARTNO values in the actual PSA tables.  In every case, the PARTNO value in RSTSODSPART is 2, whereas the PARTNO value in the PSA table is 1.
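For reference, the comparison looked roughly like this (the PSA table name /BIC/B0001234000 is only an illustration, since the real table name for 8BOF_O41_OA comes from the RSTSODS directory, and the RSTSODSPART column names should be double-checked in SE11):
-- PARTNO recorded in the PSA directory table for the stuck request
SELECT partno
FROM   rstsodspart
WHERE  request = 'REQU_4DDT1PRMY41NHYNPT2B872PHK';   -- returns 2
-- PARTNO values physically present in the PSA table (illustrative name)
SELECT DISTINCT partno
FROM   "/BIC/B0001234000"
WHERE  request = 'REQU_4DDT1PRMY41NHYNPT2B872PHK';   -- returns 1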
I have not attempted running RSAR_PSA_CLEANUP_DIRECTORY in repair mode because the logs indicate no auto correction is available for inconsistent RSTSODSPART entries.  I'm tempted to try, however.
Is there any risk to running RSAR_PSA_CLEANUP_DIRECTORY in repair mode in this circumstance?  Any other thoughts on how to resolve this problem?
Thanks,
Bob
P.S.  We are running BI 7.0, SP15, on Oracle 10.2.
P.P.S. I have already consulted the following OSS Notes:  733371, 1044023, 1063105, 1102626, 1150724.

Hi Robert,
Yes, please run the report RSAR_PSA_CLEANUP_DIRECTORY.
There are also some other options:
- Transaction RSRV: All Elementary Tests -> PSA Tables -> Consistency Between PSA Partitions
- the ABAP report SAP_PSA_PARTNO_CORRECT
Execute all three routines, so that:
- the partition numbers are correct,
- all requests are consistent (when you delete a request in BI, it is also deleted in the database),
- the data packages are saved in the correct partition.
Sven

Similar Messages

  • How to create an array containing shared variable values

    Hi
    I am trying to programmatically create an array containing shared variable values and their names. I can get the variable names by supplying the process name to the Get Shared Variable List function. How do I then read the values of all the shared variable items returned?
    I have used a DataSocket Open to open a connection to all the variables when my program starts. I then use DataSocket Read on the opened connections to write to an array. This works fine until I try to write to one of the variables using a shared variable node: the variable writes can then take from 4 seconds to 2 minutes. When I remove the shared variable node, all is fine again. Also, when I stop using the DataSockets, all is fine.
    Is there a conflict between shared variable nodes and data socket writes to the shared variables?
    Can anyone help?  I cannot easily post example code because I am reading the variables from a Wago PFC (PLC) using OPC.

    Hi
    Sorry, I forgot to mention the LabVIEW version: it's 8.20. I have tried saving the shared variable node as a sub VI and it makes no difference.
    Attached is a stripped-down version of the software. You will not be able to connect to the I/O server because it requires some Wago hardware and software. You may spot something I have done wrong with the I/O servers, variables or sub VIs.
    The main program that runs is called 'HMI Engine' in the 'Framework' folder.  There may be some other things in the project that aren't used in this example.  I have removed all but the variable connection part of the code.
    I hope someone can help!?
    Thanks
    Mark.
    Attachments:
    HMI Test.zip (144 KB)

  • ORA-14314: resulting List partition(s) must contain atleast 1 value

    Hi,
    Using: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production, Windows 7 Platform.
    I'm trying to understand Exchange Partition and Split (List) partitioning. Below is the code I'm trying to work on:
    CREATE TABLE big_table (
      id            NUMBER(10),
      created_date  DATE,
      lookup_id     NUMBER(10),
      data          VARCHAR2(50)
    );
    declare
      l_lookup_id big_table.lookup_id%type;
      l_create_date date;
    begin
      for i in 1 .. 1000000 loop
        if mod(i,3) = 0 then
           l_create_date := to_date('19-mar-2011','dd-mon-yyyy');
           l_lookup_id := 2;
        elsif mod(i,2) = 0 then
           l_create_date := to_date('19-mar-2012','dd-mon-yyyy');
           l_lookup_id := 1;
        else
           l_create_date := to_date('19-mar-2013','dd-mon-yyyy');
           l_lookup_id := 3;
        end if;
        insert into big_table(id, created_date, lookup_id, data)
           values (i, l_create_date, l_lookup_id, 'This is some data for '||i);
      end loop;
      commit;
    end;
    alter table big_table add (
    constraint big_table_pk primary key (id));
    exec dbms_stats.gather_table_stats(user, 'BIG_TABLE', cascade => true);
    create table big_table2 (
    id number(10),
    created_date date,
    lookup_id number(10),
    data varchar2(50)
    )
    partition by list (created_date)
    (partition p20991231 values (TO_DATE(' 2099-12-31 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')));
    alter table big_table2 add (
    constraint big_table_pk2 primary key(id));
    alter table big_table2 exchange partition p20991231
    with table big_table
    without validation
    update global indexes;
    drop table big_table;
    rename big_table2 to big_table;
    alter table big_table rename constraint big_table_pk2 to big_table_pk;
    alter index big_table_pk2 rename to big_table_pk;
    exec dbms_stats.gather_table_stats(USER, 'BIG_TABLE', cascade => TRUE);
    I'm trying to split the data by moving created_date = 19-mar-2013 to a new partition p20130319. I tried to run the query below, but it failed with the following error. Where am I going wrong?
    Thanks.
    alter table big_table
    split partition p20991231 values (to_date('19-mar-2013','dd-mon-yyyy'))
    into (partition p20130319
         ,partition p20991231);
    Error report:
    SQL Error: ORA-14314: resulting List partition(s) must contain atleast 1 value
    14314. 00000 -  "resulting List partition(s) must contain atleast 1 value"
    *Cause:    After a SPLIT/DROP VALUE of a list partition, each resulting
               partition(as applicable) must contain at least 1 value
    *Action:   Ensure that each of the resulting partitions contains atleast
               1 value
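    For what it's worth, the cause here: when you split a list partition, each resulting partition must receive at least one value from the value list being split. p20991231's list holds only 2099-12-31, and 19-mar-2013 is not in it, so the new partition would get no values, which is exactly what ORA-14314 complains about. One possible fix (an untested sketch) is to add the value to the partition's value list first and then split:
    -- Sketch: put 19-mar-2013 into p20991231's value list, then split it out.
    alter table big_table
      modify partition p20991231
      add values (to_date('19-mar-2013','dd-mon-yyyy'));
    alter table big_table
      split partition p20991231 values (to_date('19-mar-2013','dd-mon-yyyy'))
      into (partition p20130319, partition p20991231);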

    I stand corrected.
    Below are the steps I have gone through to understand:
    1. How to partition a table with data in it.
    2. Exchange partition.
    3. Split partition (List).
    4. Split data to more than 2 partitions.
    Please correct me if I'm missing anything.
    CREATE TABLE big_table (
        id           NUMBER(10),
        created_date DATE,
        lookup_id    NUMBER(10),
        data         VARCHAR2(50)
    );
    DECLARE
      l_lookup_id big_table.lookup_id%type;
      l_create_date DATE;
    BEGIN
      FOR i IN 1 .. 1000000
      LOOP
        IF mod(i,3)= 0 THEN
          l_create_date := to_date('19-mar-2011','dd-mon-yyyy');
          l_lookup_id   := 2;
        ELSIF mod(i,2) = 0 THEN
          l_create_date := to_date('19-mar-2012','dd-mon-yyyy');
          l_lookup_id   := 1;
        ELSE
          l_create_date := to_date('19-mar-2013','dd-mon-yyyy');
          l_lookup_id   := 3;
        END IF;
        INSERT INTO big_table(id, created_date, lookup_id, data)
          VALUES(i, l_create_date, l_lookup_id, 'This is some data for '||i);
      END LOOP;
      COMMIT;
    END;
    ALTER TABLE big_table ADD
    (CONSTRAINT big_table_pk PRIMARY KEY (id));
    EXEC dbms_stats.gather_table_stats(USER, 'BIG_TABLE', CASCADE => true);
    CREATE TABLE big_table2
      ( id           NUMBER(10),
        created_date DATE,
        lookup_id    NUMBER(10),
        data         VARCHAR2(50)
      )
      partition BY list(created_date)
      (partition p0319 VALUES
        (TO_DATE(' 2013-03-19 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN') ,TO_DATE(' 2012-03-19 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN') ,TO_DATE(' 2011-03-19 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
      );
    ALTER TABLE big_table2 ADD
    (CONSTRAINT big_table_pk2 PRIMARY KEY(id));
    ALTER TABLE big_table2 exchange partition p0319
    WITH TABLE big_table without validation
    UPDATE global indexes;
    DROP TABLE big_table;
    RENAME big_table2 TO big_table;
    ALTER TABLE big_table RENAME CONSTRAINT big_table_pk2 TO big_table_pk;
    ALTER INDEX big_table_pk2 RENAME TO big_table_pk;
    EXEC dbms_stats.gather_table_stats(USER, 'BIG_TABLE', CASCADE => TRUE);
    SELECT p.partition_name, p.num_rows
    FROM user_tab_partitions p
    WHERE p.table_name = 'BIG_TABLE';
    ALTER TABLE big_table split partition p0319 VALUES
    (TO_DATE(' 2013-03-19 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    INTO (partition p20130319, partition p0319);
    ALTER INDEX big_table_pk rebuild;
    EXEC dbms_stats.gather_table_stats(USER, 'BIG_TABLE', CASCADE => TRUE);
    SELECT p.partition_name, p.num_rows
    FROM user_tab_partitions p
    WHERE table_name = 'BIG_TABLE';
    SELECT DISTINCT created_date FROM big_table partition(p20130319);
    SELECT DISTINCT created_date FROM big_table partition(p0319);
    ALTER TABLE big_table split partition p0319 VALUES
    (TO_DATE(' 2012-03-19 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    INTO (partition p20120319, partition p20110319);
    ALTER INDEX big_table_pk rebuild;
    EXEC dbms_stats.gather_table_stats(USER, 'BIG_TABLE', CASCADE => TRUE);
    SELECT p.partition_name, p.num_rows
    FROM user_tab_partitions p
    WHERE table_name = 'BIG_TABLE';
    SELECT DISTINCT created_date FROM big_table partition(p20130319);
    SELECT DISTINCT created_date FROM big_table partition(p20120319);
    SELECT DISTINCT created_date FROM big_table partition(p20110319);

  • Selected model does not contain any target value prior

    Hi ODM experts,
    I have tried to apply the SVM algorithm in order to find anomalous records. The source table has rows like this:
    unq_rec  ID          NAME         A1        A2    A3        A4    A5     data
    577      2052956018  NAMEHDRCP8   2.27      0.4   85.46     0.01  14.54  24-JAN-13
    578      1250914484  NAMEDJDRVP3  11.45     1.24  56.24     0.01  43.77  24-JAN-13
    579      1968689283  NAMEDKEND12  0.000011  6.78  0.000029  0.01  0.091  24-JAN-13
    580      2063389130  NAMEDNMXG14  0.000011  0.65  36.65     0.02  0.091  24-JAN-13
    unq_rec is the PK, ID is the id for the generic name, A1..A5 are attributes, and data is the collection date.
    I'm trying to execute the following code:
    drop table ALG_SET;
    exec dbms_data_mining.drop_model('SVMODEL');
    create table ALG_SET (setting_name varchar2(30), setting_value varchar2(4000));
    insert into ALG_SET values ('ALGO_NAME','ALGO_SUPPORT_VECTOR_MACHINES');
    insert into ALG_SET values ('PREP_AUTO','ON');
    commit;
    Begin
    dbms_data_mining.create_model('SVMODEL', 'CLASSIFICATION', 'ODM_PAR_FIN_HIST', 'UNQ_CRT', null, 'ALG_SET');
    end;
    The result is the following error: ORA-40104: invalid training data for model build (if I run the code). If I run it from the graphical interface I get the error "Selected model does not contain any target value prior" (using the similar model, SVM for anomaly detection, plus the same source table).
    Please advise what is missing or wrong and, if possible, how to bypass this issue.
    Thanks in advance for support.
    Best Regards,
    Bogdan

    Here is also a newer example of creating an SVM anomaly model from the ODM sample code (the 12.1 version, but this applies to 11.2 as well):
    Rem
    Rem $Header: rdbms/demo/dmsvodem.sql /main/6 2012/04/15 16:31:56 xbarr Exp $
    Rem
    Rem dmsvodem.sql
    Rem
    Rem Copyright (c) 2004, 2012, Oracle and/or its affiliates.
    Rem All rights reserved.
    Rem
    Rem    NAME
    Rem      dmsvodem.sql - Sample program for the DBMS_DATA_MINING package.
    Rem
    Rem    DESCRIPTION
    Rem      This script creates an anomaly detection model
    Rem      for data analysis and outlier identification using the
    Rem      one-class SVM algorithm
    Rem      and data in the SH (Sales History)schema in the RDBMS.
    Rem
    Rem    NOTES
    Rem   
    Rem
    Rem    MODIFIED   (MM/DD/YY)
    Rem    amozes      01/23/12 - updates for 12c
    Rem    xbarr       01/10/12 - add prediction_details demo
    Rem    ramkrish    06/14/07 - remove commit after settings
    Rem    ramkrish    10/25/07 - replace deprecated get_model calls with catalog
    Rem                           queries
    Rem    ktaylor     07/11/05 - minor edits to comments
    Rem    jcjeon      01/18/05 - add column format
    Rem    bmilenov    10/28/04 - bmilenov_oneclass_demo
    Rem    bmilenov    10/25/04 - Remove dbms_output statements
    Rem    bmilenov    10/22/04 - Comment revision
    Rem    bmilenov    10/20/04 - Created
    Rem
    SET serveroutput ON
    SET trimspool ON 
    SET pages 10000
    SET echo ON
    --                            SAMPLE PROBLEM
    -- Given demographics about a set of customers that are known to have
    -- an affinity card, 1) find the most atypical members of this group
    -- (outlier identification), 2) discover the common demographic
    -- characteristics of the most typical customers with affinity card,
    -- and 3) compute how typical a given new/hypothetical customer is.
    -- DATA
    -- The data for this sample is composed from base tables in the SH schema
    -- (See Sample Schema Documentation) and presented through a view:
    -- mining_data_one_class_v
    -- (See dmsh.sql for view definition).
    --                            BUILD THE MODEL
    -- Cleanup old model with the same name (if any)
    BEGIN DBMS_DATA_MINING.DROP_MODEL('SVMO_SH_Clas_sample');
    EXCEPTION WHEN OTHERS THEN NULL; END;
    -- PREPARE DATA
    -- Automatic data preparation is used.
    -- SPECIFY SETTINGS
    -- Cleanup old settings table (if any)
    BEGIN
      EXECUTE IMMEDIATE 'DROP TABLE svmo_sh_sample_settings';
    EXCEPTION WHEN OTHERS THEN
      NULL;
    END;
    -- CREATE AND POPULATE A SETTINGS TABLE
    set echo off
    CREATE TABLE svmo_sh_sample_settings (
      setting_name  VARCHAR2(30),
      setting_value VARCHAR2(4000));
    set echo on
    BEGIN      
      -- Populate settings table
      -- SVM needs to be selected explicitly (default classifier: Naive Bayes)
      -- Examples of other possible overrides are:
      -- select a different rate of outliers in the data (default 0.1)
      -- (dbms_data_mining.svms_outlier_rate, ,0.05);
      -- select a kernel type (default kernel: selected by the algorithm)
      -- (dbms_data_mining.svms_kernel_function, dbms_data_mining.svms_linear);
      -- (dbms_data_mining.svms_kernel_function, dbms_data_mining.svms_gaussian);
      -- turn off active learning (enabled by default)
      -- (dbms_data_mining.svms_active_learning, dbms_data_mining.svms_al_disable);
      INSERT INTO svmo_sh_sample_settings (setting_name, setting_value) VALUES
      (dbms_data_mining.algo_name, dbms_data_mining.algo_support_vector_machines); 
      INSERT INTO svmo_sh_sample_settings (setting_name, setting_value) VALUES
      (dbms_data_mining.prep_auto, dbms_data_mining.prep_auto_on);
    END;
    -- CREATE A MODEL
    -- Build a new one-class SVM Model
    -- Note the NULL specification for target column name
    BEGIN
      DBMS_DATA_MINING.CREATE_MODEL(
        model_name          => 'SVMO_SH_Clas_sample',
        mining_function     => dbms_data_mining.classification,
        data_table_name     => 'mining_data_one_class_v',
        case_id_column_name => 'cust_id',
        target_column_name  => NULL,
        settings_table_name => 'svmo_sh_sample_settings');
    END;
    -- DISPLAY MODEL SETTINGS
    column setting_name format a30
    column setting_value format a30
    SELECT setting_name, setting_value
      FROM user_mining_model_settings
    WHERE model_name = 'SVMO_SH_CLAS_SAMPLE'
    ORDER BY setting_name;
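    Once the model is built, a scoring query along these lines lists the most atypical records (a minimal sketch, not part of the demo text above; for a one-class SVM, PREDICTION returns 0 for records the model flags as outliers):
    -- Rank the 10 most atypical customers by outlier probability
    SELECT *
    FROM  (SELECT cust_id,
                  PREDICTION(SVMO_SH_Clas_sample USING *)                AS is_typical,
                  PREDICTION_PROBABILITY(SVMO_SH_Clas_sample, 0 USING *) AS outlier_prob
           FROM   mining_data_one_class_v
           ORDER BY outlier_prob DESC)
    WHERE rownum <= 10;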

  • InfoObject does not contain alpha-conforming values

    Hi,
    I am trying to load data into an InfoObject called ZAML_CODE; its type is CHAR and its length is 3.
    The characteristic has the ALPHA conversion exit enabled.
    We have been loading data into this object successfully for 3 years.
    But now it says that InfoObject ZAML_CODE does not contain alpha-conforming values.
    We get the error only for numeric values like 1, 2, 3, etc.
    Previously, numeric values were also loaded successfully.
    The source system is now ECC 6.0; previously it was 4.6C. This is the only difference.
    Please respond with your suggestions.
    Regards
    Srinivas

    Hello,
    Just check if there is a space in front of these numeric values, or some garbage value. If there is, the system may not be able to pad the value with zeros, hence the issue. Check in R/3 how these values are stored. You can remove the leading space and change the value in the PSA and then load, but first analyse the source system and see why the values are coming in wrong.
    Regards
    Ajeeet
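    For background: the ALPHA input conversion simply left-pads purely numeric values with zeros to the field length, so '4' in a CHAR 3 object becomes '004', while a value with a leading space or other garbage is not purely numeric and is rejected. In Oracle SQL terms the check is roughly this (illustrative only, not SAP code):
    -- ALPHA-style padding for a CHAR(3) field: '4' -> '004', ' 4' -> rejected
    SELECT val,
           CASE
             WHEN REGEXP_LIKE(val, '^[0-9]+$') THEN LPAD(val, 3, '0')
             ELSE '<non-conforming>'
           END AS alpha_result
    FROM   (SELECT '4' AS val FROM dual
            UNION ALL
            SELECT ' 4' FROM dual);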

  • Wrong Expiration Value of GBlink Token

    Hello,
    We added 7 days (604800 sec) to the distributor link expiration value via the admin console. Now, when we open the URLLink.acsm file in Notepad as an XML file, it shows a wrong <expiration> value. The expiration value should be seven days out, but it is before the GBlink generation date.
    Please help us with how to extend the link expiration time.
    With Regards,
    Mangal Varshney

    sn72 wrote:
    Do you mean that the last line in my code above should be removed? Meaning, that you get a cookie from the request and set its expiration age, and then that will automatically update it (with no need to add it to the response)?
    No, I didn't mean that. A cookie will also be set with a domain and a path. If you didn't set them manually, then the container will do it for you. If either the domain or the path differs in another request, then you will get duplicate cookies with the same name, but on a different path. To update exactly the same cookie, you should also make sure that the domain and the path are the same. In other words: obtain the cookie with the same name from the request and add it as a cookie to the response, or set the domain and the path manually with a fixed value.
    sn72 also wrote:
    In my testing I found that the cookie expiration date in the browser did not get updated. Maybe my testing was wrong; for example, I should close the cookie window and open it again in order to refresh it. BTW, I was using Mozilla Firefox 2. I just upgraded to Firefox 3 and will try again.
    That shouldn't make any difference.

  • Error: InfoObject does not contain alpha-conforming value 20030729

    Hi everyone,
    I've got this problem that I tried to fix yesterday, but all I achieved was getting really frustrated...
    I created an InfoObject which is a copy of 0DOC_NUMBER and has the ALPHA conversion routine.
    The mapped field in the DataSource is also of type character 10 (VBELN), like my InfoObject.
    But there are some values in this field that I am trying to load which are less than 10 characters long (20030729, 20051116, ...) and shouldn't be there, because the values are dates and not document numbers.
    But anyway, on the R/3 side these values are in a field which is defined as character 10 (VBELN), and when I try to load the InfoPackage (17,500 records) I get 52 errors: InfoObject does not contain alpha-conforming value 20030729.
    I don't get it: since both fields are of type character 10 and the values are only digits, why do I get this error message and how can I fix it?
    PS: One solution is to remove the alpha conversion routine from the InfoObject, but I don't want that.
    Regards,
    Uroš

    hi,
    sorry... correction: it's 'Exclusive attributes' that cannot be used for a navigational attribute; you can set lowercase to differentiate... check the impact of the lowercase setting here:
    http://help.sap.com/saphelp_bw33/helpdata/en/b7/f470375fbf307ee10000009b38f8cf/frameset.htm
    Lowercase letters allowed / not allowed
    If this indicator is set, the system differentiates between lowercase letters and capital letters when you use a screen template to input values. If this indicator is not set, the system converts all the letters into capital letters when you use a screen template to input values.
    If you choose to allow the use of lowercase letters, you must be aware of what happens when you input variables:
    If you want to use the characteristic in variables, the system is only able to find the values for the characteristic if the lowercase letters and the capital letters are typed in accurately on the input screen for variables. If, on the other hand, you do not allow the use of lowercase letters, any characters that you type onto the input screen, are converted automatically into capital letters.
    Exclusive attributes:
    If you select Exclusively Attributes, the created characteristic can be used only as a display attribute for another characteristic, and not as a dedicated characteristic in the InfoCube. Furthermore, you then cannot transfer the characteristic into InfoCubes. However, you can use it in ODS objects or InfoSets.

  • 11g-[nQSError: 42029] Subquery contains too many values for the IN predicat

    Hi,
    I have 2 reports; one is a subquery which returns inputs to the main report. The report was working fine in 10g, but in 11g we are getting the following error:
    View Display Error
    Odbc driver returned an error (SQLExecDirectW).
    Error Details
    Error Codes: OPR4ONWY:U9IM8TAC:OI2DL65P
    State: HY000. Code: 10058. [NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error has occurred. [nQSError: 43113] Message returned from OBIS. [nQSError: 43119] Query Failed: [nQSError: 42029] Subquery contains too many values for the IN predicate.Please have your System Administrator look at the log for more details on this error. (HY000)
    Please have your System Administrator look at the log for more details on this error.
    We get the same error after modifying the parameter MAX_EXPANDED_SUBQUERY_PREDICATES to 12000.
    Please suggest what else could make it fail, or any other settings we need to check.
    Regards,
    ckeng

    ckeng,
    Normally the IN clause has a restriction of 10000 values; in general SQL/PLSQL we would go with inline queries. I think you should model your RPD to generate inner queries:
    select * from emp where dept_id in (select distinct dept_id from dept);
    Alternatively, put a condition/filter on the sub-report and make one more inner report with a sub-filter, but that will definitely cause performance issues.
    thanks,
    Saichand.v
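    An equivalent way to express the same inline query is with EXISTS, which likewise avoids expanding thousands of literals into an IN list (a sketch using the emp/dept example above):
    -- Same result as the IN form; the optimizer can merge the subquery
    -- instead of expanding a long list of literal values.
    select e.*
    from   emp e
    where  exists (select 1
                   from   dept d
                   where  d.dept_id = e.dept_id);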

  • The InfoCube contains non-cumulative values

    Hi,
    While creating a MultiCube for inventory on two cubes, it asks the following: "The InfoCube contains non-cumulative values. A validity table is created for these non-cumulative values, in which the time interval is stored for which the non-cumulative values are valid.
    The validity table automatically contains the "most detailed" of the selected time characteristics (if such a one does not exist, it must be defined by the user, meaning transferred into the InfoCube)." What is this and how do I resolve it?
    Please throw some light on this; it's urgent.
    Chandan

    Hi,
    Your MultiCube is probably based on the InfoCube 0IC_C03, which contains non-cumulative key figures; that's why you get this message.
    You generally don't have to maintain the validity area unless you are in a special configuration (for example, loading data from two source systems).
    The following link should give more information about validity areas with non-cumulatives:
    http://help.sap.com/saphelp_nw04/helpdata/en/02/9a6a1c244411d5b2e30050da4c74dc/frameset.htm
    Hope this helps.
    Cyril

  • Which table contains net book value for Assets created with AS91.

    I have a problem locating where the net book value is stored in SAP. Is it simply calculated and not stored in any one place? I am trying to predict how SAP will calculate the net book value for some assets we plan on converting, but my formula doesn't always work consistently and I have no idea what is going on. If it is stored in a table somewhere, can anyone please let me know!
    Thank you all

    Hi anar.samadzade & Michael Stewart
    It is not possible to get the net book value directly from any table. You must migrate the gross book value (acquisition cost) and the accumulated depreciation; SAP will then calculate the NBV.
    Gross block and accumulated depreciation you will get from table ANLC.
    http://fixedassetsaccounting.net/migrating-fixed-assets-into-sap-a-harlex-guide/
    Dear anar.samadzade, please ask questions politely.
    Regards
    Viswa
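    To make the ANLC pointer concrete: net book value is derived, not stored, so any query has to compute it from the cumulative fields. A rough sketch follows; the field names (KANSW = cumulative acquisition value, KNAFA = accumulated ordinary depreciation) are from the common ANLC layout and should be verified in SE11, as should the sign convention of the depreciation fields in your system:
    -- Derived NBV per asset and fiscal year from ANLC (illustrative only;
    -- some systems store depreciation with a negative sign, in which case
    -- the expression becomes kansw + knafa).
    SELECT bukrs, anln1, anln2, gjahr,
           kansw         AS acquisition_value,
           knafa         AS accum_ord_depreciation,
           kansw - knafa AS net_book_value
    FROM   anlc
    WHERE  gjahr = '2013';   -- hypothetical fiscal year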

  • Error  Parameter WI_ID contains an invalid value  in webdynpro abap

    Hi Experts,
    I am working with Web Dynpro ABAP and using workflow in my component. I am getting the following error:
    Parameter WI_ID contains an invalid value.
    It reports the error for the window parameter wi_id; wi_id is of type SWW_WIID,
    and I am using the same parameter as an application parameter.
    Can you suggest how to use this wi_id parameter in the window parameter and the application parameter, what type wi_id should have, and what the value of the application parameter wi_id should be?
    Here are the complete error details.
    Please give me the required information; a screenshot would be very useful if you can provide one.
    The following error text was processed in the system RD1: Parameter WI_ID contains an invalid value.
    The error occurred on the application server S0164SAPDEV2_RD1_00 and in work process 0.
    The termination type was: RABAX_STATE
    The ABAP call stack was:
    Method: HANDLESTART of program /1BCWDY/BP6P95H8W6B71I7VVRPR==CP
    Method: IF_WDR_VIEW_DELEGATE~WD_INVOKE_EVENT_HANDLER of program /1BCWDY/BP6P95H8W6B71I7VVRPR==CP
    Method: INVOKE_EVENTHANDLER of program CL_WDR_DELEGATING_IF_VIEW=====CP
    Method: DISPLAY_TOPLEVEL_COMPONENT of program CL_WDR_CLIENT_COMPONENT=======CP
    Method: INIT of program CL_WDR_CLIENT_APPLICATION=====CP
    Method: IF_WDR_RUNTIME~CREATE of program CL_WDR_MAIN_TASK==============CP
    Method: HANDLE_REQUEST of program CL_WDR_CLIENT_ABSTRACT_HTTP===CP
    Method: IF_HTTP_EXTENSION~HANDLE_REQUEST of program CL_WDR_MAIN_TASK==============CP
    Method: EXECUTE_REQUEST_FROM_MEMORY of program CL_HTTP_SERVER================CP
    Function: HTTP_DISPATCH_REQUEST of program SAPLHTTP_RUNTIME
    Thanks & Regards.
    khanna

    Hi Nawal Kishor Mittal,
    thanks for the reply,
    I have given the application parameter wi_id the type string, and in client 120 (development) it works fine. When I test in client 130 (testing), it goes to a dump.
    What should be given for the application parameter wi_id, and what should its value be?
    Waiting for your reply.
    Thanks & Regards.
    Khanna.

  • Xpath Debatching in Orchestration -The part 'part' of message 'Message_In_Copy' contained a null value at the end of the construct block

    Hi ,
    I am facing a strange issue with XPath debatching in an orchestration.
    I am getting the following error in a construct shape:
    The part 'part' of message 'Message_In_Copy' contained a null value at the end of the construct block
    Code inside the construct block:
    sXpath = System.String.Format("/*[local-name()='Customers' and namespace-uri()='http://Debatch.Customer']/*[local-name()='Customer' and namespace-uri()='http://Debatch.Customer' and position()={0}]", nLoopCount);
    System.Diagnostics.Debug.WriteLine(sXpath);
    Message_In_Copy= xpath(Message_In, sXpath);
    Schema used:
    <?xml version="1.0" encoding="utf-16"?>
    <xs:schema xmlns="http://Debatch.Customer" xmlns:b="http://schemas.microsoft.com/BizTalk/2003" targetNamespace="http://Debatch.Customer" xmlns:xs="http://www.w3.org/2001/XMLSchema">
    <xs:element name="Customers">
    <xs:complexType>
    <xs:sequence>
    <xs:element minOccurs="0" maxOccurs="unbounded" name="Customer">
    <xs:complexType>
    <xs:sequence>
    <xs:element name="name" type="xs:string" />
    <xs:element name="id" type="xs:string" />
    </xs:sequence>
    </xs:complexType>
    </xs:element>
    </xs:sequence>
    </xs:complexType>
    </xs:element>
    </xs:schema>
    Can anyone help me identify the root cause of the above issue?
    Thanks,
    Kind Regards,
    girsh
    girishkumar.a

    I agree with Shankycheil here: the xpath() query returns an XmlNode and thus can't be assigned to the message directly.
    But debatching in an orchestration using XPath is not a very good idea,
    because using XPath loads the complete message into memory (as an XML structure) and then performs the processing.
    This approach is always prone to throwing out-of-memory exceptions and is also low in performance.
    Therefore I would suggest you perform the debatching by calling the XML disassembler (XMLReceive) pipeline.
    As a pipeline works with streams, it will have better performance and you will also get complete control over the messages.
    Refer the below samples for debatching using XML Receive pipeline within Orchestration.
    Comparison between XPath and a receive pipeline for debatching:
    De-batching within an orchestration using XPath or calling a pipeline
    Debatching within Orchestration using Pipeline-
    http://tech-findings.blogspot.in/2013/07/debatchingsplitting-xml-message-in.html 
    https://jeremyronk.wordpress.com/2011/10/03/how-to-debatch-into-an-orchestration-with-a-pipeline/
    Thanks,
    Prashant
    Please mark this post accordingly if it answers your query or is helpful.

  • SXPG_COMMAND_EXECUTE  return wrong parameter value

    Dear all.
    We have an Abap program that pulls an encrypted FTP file and saves it to our network.
    After that we activate an external command via transaction SM69 by calling FM SXPG_COMMAND_EXECUTE.
    This command is an execution of a batch file that executes a decryption method via PGP decryption software.
    The problem is that we get an output parameter of this FM (STATUS) as u201CEu201D (error) although the decryption is being executed successfully.
    We have the same process being activated same way successfully with another folders (rest is exactly the same).
    Why does SXPG_COMMAND_EXECUTE return wrong status value ?
    Regards,
    Rebeka

    SXPG_COMMAND_EXECUTE runs under certain operating system user account. Looks like that account does not have enough privileges to do what you want it to do. Look at the operating system for privileges (read,write,execute) of the user account SAPServiceuser or equivalent.
    /Simo

  • Return the rows of the table where a column contains a specific value first

    I want my query to return the rows of the table where a column contains specific values first, in a certain order, and then return the rest of the rows alphabetized.
    For Example:
    Country
    ALBANIA
    ARGENTINA
    AUSTRALIA
    CANADA
    USA
    Now I want USA and CANADA on top, in that order, and then the others in alphabetized order.

    986155 wrote:
    If it is 2 then I can write a case... I want a generalised one where maybe around 5 or 6 mentioned countries should be in descending order at the top and the remaining in ascending order thereafter.
    Computers tend not to work in 'generalized' ways... they require specifics.
    If you store your "top" countries in a table you can then simply do something like...
    SQL> ed
    Wrote file afiedt.buf
      1  with c as (select 'USA' country from dual union
      2             select 'Germany' from dual union
      3             select 'India' from dual union
      4             select 'Australia' from dual union
      5             select 'Japan' from dual union
      6             select 'Canada' from dual union
      7             select 'United Kingdom' from dual union
      8             select 'France' from dual union
      9             select 'Spain' from dual union
    10             select 'Italy' from dual
    11           )
    12      ,t as (-- top countries
    13             select 'USA' country from dual union
    14             select 'United Kingdom' from dual union
    15             select 'Canada' from dual
    16            )
    17  select c.country
    18  from   c left outer join t on (t.country = c.country)
    19* ORDER BY t.country, c.country
    SQL> /
    COUNTRY
    Canada
    USA
    United Kingdom
    Australia
    France
    Germany
    India
    Italy
    Japan
    Spain
    10 rows selected.
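    For completeness, the CASE route the original poster mentioned also scales to 5 or 6 pinned countries (a sketch; your_table stands in for the real table name):
    select country
    from   your_table
    order by case country
               when 'USA'    then 1
               when 'CANADA' then 2
               else 3
             end,
             country;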

  • Load ODS - InfoObject: InfoObject does not contain alpha-conforming value

    Hello everybody,
    I get following error while uploading from ODS to InfoObject.
    InfoObject /BIC/ZHOUSENUM does not contain alpha-conforming value 0000000000000000004.
    The data flow is as follows: transactional InfoSource -> ODS -> InfoObject.
    In the transfer rules before the ODS I have marked the Conversion check box. The data is populated into the ODS without any problems.
    I can even activate the ODS, which is reporting-enabled.
    When I browse the ODS table with the option 'check conversion exits' unmarked, I can see the value '0000000000000000004'.
    But the upload into the InfoObject master data fails.
    Any help appreciated.
    TIA
    pawel

    sap_all onboard.
    I assume this InfoSource is locked, due to the fact that it was system-generated.
    On the other hand, there is SAP Note 559763, which says:
    If an InfoObject is filled with ALPHA exits from an R/3 System, the BW assumes that the data is to arrive in the internal ALPHA format and therefore does not convert the data.
    I know it is about R/3 as the source system, but I assume the same would apply to BW.
    p.
