ALE/IDOC TRANSPORT ERROR FOR Z TABLE

Dear Experts,
                    I am new to ALE/IDoc. I am trying a simple scenario: transporting Z table data between two clients within the same server.
The steps I have done are:
1. Created logical systems at both sender and receiver.
2. Assigned the logical systems to the clients on both sides.
3. Created segments for the Z table fields and created a new message type.
4. Created an RFC destination and a port at both ends.
5. Created the distribution model at the sender.
6. When I generated the partner profile at the sender, I got the error below:
  "Port could not be created
RFC destination LOGSTM_210 not specified for system LOGSTM_210
Enter the RFC destination and restart the generation"
Can anyone please help me to solve this error?
Thanks In Advance,
Sujay.

Hello Sujay,
Have you added the message type SYNCH to the sender partner profile (WE20)?
If not, you have to manually add this message type to the partner profile and give the relevant port details. SYNCH is required for RFC (i.e. port) determination, and it must be in place before the partner profile is generated!
Hope I am clear.
BR,
Suhas

Similar Messages

  • ALE/IDOC Change Pointers for Custom Table

    Hello all,
    There is a requirement from my client to trigger an IDoc based on changes to a custom table. The custom table has a maintenance view and will be updated/modified/deleted randomly by users. I need to track the changes in that table and trigger an IDoc for the changes. The message type I am using for this is MATMAS, as I need to incorporate the changes into the same IDoc.
    Is it OK to modify the BDCP/BDCPS tables to record the changes from the custom table? The custom table changes can be tracked through DBTABLOG, and I have my own logic to trigger the IDoc.
    The question is: if I modify BDCP and BDCPS, what will the impact be? Will this affect future SAP upgrades/enhancements in any way? Can anyone share your experience regarding this? Thanks.

    Hi Raja,
    You can send the IDoc based on a table maintenance event.
    I think you can code the IDoc generation in event 02 (after saving the data in the database); a rough sketch follows below.
    Check the link below:
    http://help.sap.com/saphelp_nw04/helpdata/en/91/ca9f0ea9d111d1a5690000e82deaaa/frameset.htm
    Regards,
    Nisha Vengal.
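    For illustration only (not part of the original reply): a rough ABAP sketch of such an event-02 routine. The segment Z1MYSEG, basic type ZMYIDOC, message type ZMYMSG, receiver ZRCVCLNT210 and the form name are assumptions, not names from this thread. The IDoc itself is created with the standard function module MASTER_IDOC_DISTRIBUTE:

    FORM idoc_after_save.                      " registered as event 02 in the TMG
      DATA: ls_control TYPE edidc,             " IDoc control record
            lt_control TYPE STANDARD TABLE OF edidc,
            ls_data    TYPE edidd,             " IDoc data record
            lt_data    TYPE STANDARD TABLE OF edidd,
            ls_seg     TYPE z1myseg.           " hypothetical segment structure

      " Control record: message type, basic type, receiver partner
      ls_control-mestyp = 'ZMYMSG'.            " hypothetical message type
      ls_control-idoctp = 'ZMYIDOC'.           " hypothetical basic IDoc type
      ls_control-rcvprt = 'LS'.                " receiver partner type: logical system
      ls_control-rcvprn = 'ZRCVCLNT210'.       " hypothetical receiver logical system

      " One data record per saved table row (fill ls_seg from the saved entry)
      ls_data-segnam = 'Z1MYSEG'.
      ls_data-sdata  = ls_seg.
      APPEND ls_data TO lt_data.

      CALL FUNCTION 'MASTER_IDOC_DISTRIBUTE'
        EXPORTING
          master_idoc_control            = ls_control
        TABLES
          communication_idoc_control     = lt_control
          master_idoc_data               = lt_data
        EXCEPTIONS
          error_in_idoc_control          = 1
          error_writing_idoc_status      = 2
          error_in_idoc_data             = 3
          sending_logical_system_unknown = 4
          OTHERS                         = 5.
      IF sy-subrc = 0.
        COMMIT WORK.                           " the IDoc is only written on commit
      ENDIF.
    ENDFORM.

    If the receivers should instead come from the distribution model (BD64), you can leave the receiver fields empty and let the function module determine the partners from the model.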

  • Error: ORA-16778: redo transport error for one or more databases

    Hi all
    I have two database servers, "primary database" and "physical standby", in a test environment (before going to production).
    Before the Data Guard broker configuration, the DG setup was running fine: redo was being applied and archived on the physical standby.
    But while enabling the configuration I got "Warning: ORA-16607: one or more databases have failed". listener.ora and tnsnames.ora are updated with global_name_DGMGRL.
    Please help me resolve this issue. Thanks in advance.
    [oracle@PRIM ~]$ dgmgrl
    DGMGRL for Linux: Version 10.2.0.1.0 - Production
    Copyright (c) 2000, 2005, Oracle. All rights reserved.
    Welcome to DGMGRL, type "help" for information.
    DGMGRL> connect sys
    Password:
    Connected.
    DGMGRL> show configuration
    Configuration
    Name: test
    Enabled: YES
    Protection Mode: MaxPerformance
    Fast-Start Failover: DISABLED
    Databases:
    prim - Primary database
    stan - Physical standby database
    Current status for "test":
    Warning: ORA-16607: one or more databases have failed
    DGMGRL> show database
    show database
    ^
    Syntax error before or at "end-of-line"
    DGMGRL> remove configuration
    Warning: ORA-16620: one or more databases could not be contacted for a delete operation
    Removed configuration
    DGMGRL> exit
    [oracle@PRIM ~]$ connect sys/sys@prim as sysdba
    bash: connect: command not found
    [oracle@PRIM ~]$ lsnrctl stop
    LSNRCTL for Linux: Version 10.2.0.1.0 - Production on 08-OCT-2006 19:52:30
    Copyright (c) 1991, 2005, Oracle. All rights reserved.
    Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC1)))
    The command completed successfully
    [oracle@PRIM ~]$ lsnrctl start
    LSNRCTL for Linux: Version 10.2.0.1.0 - Production on 08-OCT-2006 19:52:48
    Copyright (c) 1991, 2005, Oracle. All rights reserved.
    Starting /u01/app/oracle/product/10.2.0/db_1/bin/tnslsnr: please wait...
    TNSLSNR for Linux: Version 10.2.0.1.0 - Production
    System parameter file is /u01/app/oracle/product/10.2.0/db_1/network/admin/listener.ora
    Log messages written to /u01/app/oracle/product/10.2.0/db_1/network/log/listener.log
    Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1)))
    Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=PRIM)(PORT=1521)))
    Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC1)))
    STATUS of the LISTENER
    Alias LISTENER
    Version TNSLSNR for Linux: Version 10.2.0.1.0 - Production
    Start Date 08-OCT-2006 19:52:48
    Uptime 0 days 0 hr. 0 min. 0 sec
    Trace Level off
    Security ON: Local OS Authentication
    SNMP OFF
    Listener Parameter File /u01/app/oracle/product/10.2.0/db_1/network/admin/listener.ora
    Listener Log File /u01/app/oracle/product/10.2.0/db_1/network/log/listener.log
    Listening Endpoints Summary...
    (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1)))
    (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=PRIM)(PORT=1521)))
    Services Summary...
    Service "PLSExtProc" has 1 instance(s).
    Instance "PLSExtProc", status UNKNOWN, has 1 handler(s) for this service...
    Service "PRIM_DGMGRLL" has 1 instance(s).
    Instance "PRIM", status UNKNOWN, has 1 handler(s) for this service...
    The command completed successfully
    [oracle@PRIM ~]$ lsnrctl stop
    LSNRCTL for Linux: Version 10.2.0.1.0 - Production on 08-OCT-2006 19:54:46
    Copyright (c) 1991, 2005, Oracle. All rights reserved.
    Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC1)))
    The command completed successfully
    [oracle@PRIM ~]$ lsnrctl start
    LSNRCTL for Linux: Version 10.2.0.1.0 - Production on 08-OCT-2006 19:54:59
    Copyright (c) 1991, 2005, Oracle. All rights reserved.
    Starting /u01/app/oracle/product/10.2.0/db_1/bin/tnslsnr: please wait...
    TNSLSNR for Linux: Version 10.2.0.1.0 - Production
    System parameter file is /u01/app/oracle/product/10.2.0/db_1/network/admin/listener.ora
    Log messages written to /u01/app/oracle/product/10.2.0/db_1/network/log/listener.log
    Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1)))
    Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=PRIM)(PORT=1521)))
    Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC1)))
    [oracle@PRIM ~]$ dgmgrl
    DGMGRL for Linux: Version 10.2.0.1.0 - Production
    Copyright (c) 2000, 2005, Oracle. All rights reserved.
    Welcome to DGMGRL, type "help" for information.
    DGMGRL> connect /
    Connected.
    DGMGRL> create configuration test as
    primary database is PRIM
    connect identifier is PRIM
    ;Configuration "test" created with primary database "prim"
    DGMGRL> add database STAN as
    connect identifier is STAN
    maintained as physical;Database "stan" added
    DGMGRL> show configuration
    Configuration
    Name: test
    Enabled: NO
    Protection Mode: MaxPerformance
    Fast-Start Failover: DISABLED
    Databases:
    prim - Primary database
    stan - Physical standby database
    Current status for "test":
    DISABLED
    DGMGRL> enable configuration
    Enabled.
    DGMGRL> show configuration
    Configuration
    Name: test
    Enabled: YES
    Protection Mode: MaxPerformance
    Fast-Start Failover: DISABLED
    Databases:
    prim - Primary database
    stan - Physical standby database
    Current status for "test":
    Warning: ORA-16607: one or more databases have failed
    DGMGRL> show database verbose prim
    Database
    Name: prim
    Role: PRIMARY
    Enabled: YES
    Intended State: ONLINE
    Instance(s):
    PRIM
    Properties:
    InitialConnectIdentifier = 'prim'
    LogXptMode = 'ASYNC'
    Dependency = ''
    DelayMins = '0'
    Binding = 'OPTIONAL'
    MaxFailure = '0'
    MaxConnections = '1'
    ReopenSecs = '300'
    NetTimeout = '180'
    LogShipping = 'ON'
    PreferredApplyInstance = ''
    ApplyInstanceTimeout = '0'
    ApplyParallel = 'AUTO'
    StandbyFileManagement = 'AUTO'
    ArchiveLagTarget = '0'
    LogArchiveMaxProcesses = '30'
    LogArchiveMinSucceedDest = '1'
    DbFileNameConvert = '/u01/app/oracle/oradata/STAN, /u01/app/oracle/oradata/PRIM'
    LogFileNameConvert = '/u01/app/oracle/oradata/STAN, /u01/app/oracle/oradata/PRIM'
    FastStartFailoverTarget = ''
    StatusReport = '(monitor)'
    InconsistentProperties = '(monitor)'
    InconsistentLogXptProps = '(monitor)'
    SendQEntries = '(monitor)'
    LogXptStatus = '(monitor)'
    RecvQEntries = '(monitor)'
    HostName = 'PRIM'
    SidName = 'PRIM'
    LocalListenerAddress = '(ADDRESS=(PROTOCOL=tcp)(HOST=PRIM)(PORT=1521))'
    StandbyArchiveLocation = '/u01/app/oracle/flash_recovery_area/PRIM/archivelog/'
    AlternateLocation = ''
    LogArchiveTrace = '0'
    LogArchiveFormat = '%t_%s_%r.arc'
    LatestLog = '(monitor)'
    TopWaitEvents = '(monitor)'
    Current status for "prim":
    Error: ORA-16778: redo transport error for one or more databases
    DGMGRL> show database verbose stan
    Database
    Name: stan
    Role: PHYSICAL STANDBY
    Enabled: YES
    Intended State: ONLINE
    Instance(s):
    STAN
    Properties:
    InitialConnectIdentifier = 'stan'
    LogXptMode = 'ASYNC'
    Dependency = ''
    DelayMins = '0'
    Binding = 'OPTIONAL'
    MaxFailure = '0'
    MaxConnections = '1'
    ReopenSecs = '300'
    NetTimeout = '180'
    LogShipping = 'ON'
    PreferredApplyInstance = ''
    ApplyInstanceTimeout = '0'
    ApplyParallel = 'AUTO'
    StandbyFileManagement = 'AUTO'
    ArchiveLagTarget = '0'
    LogArchiveMaxProcesses = '30'
    LogArchiveMinSucceedDest = '1'
    DbFileNameConvert = '/u01/app/oracle/oradata/PRIM, /u01/app/oracle/oradata/STAN'
    LogFileNameConvert = '/u01/app/oracle/oradata/PRIM, /u01/app/oracle/oradata/STAN'
    FastStartFailoverTarget = ''
    StatusReport = '(monitor)'
    InconsistentProperties = '(monitor)'
    InconsistentLogXptProps = '(monitor)'
    SendQEntries = '(monitor)'
    LogXptStatus = '(monitor)'
    RecvQEntries = '(monitor)'
    HostName = 'STAND'
    SidName = 'STAN'
    LocalListenerAddress = '(ADDRESS=(PROTOCOL=tcp)(HOST=STAND)(PORT=1521))'
    StandbyArchiveLocation = '/u01/app/oracle/flash_recovery_area/STAN/archivelog/'
    AlternateLocation = ''
    LogArchiveTrace = '0'
    LogArchiveFormat = '%t_%s_%r.arc'
    LatestLog = '(monitor)'
    TopWaitEvents = '(monitor)'
    Current status for "stan":
    Error: ORA-12545: Connect failed because target host or object does not exist
    DGMGRL>

    This:
    Current status for "stan":
    Error: ORA-12545: Connect failed because target host or object does not exist
    says that your network setup is not correct. You need to resolve that first.
    As for the Broker setup steps, how about the documentation or our Data Guard 11g Handbook?
    It's three DGMGRL commands, so I am not sure what 'steps' you need.
    Larry

  • Error: ORA-16778: redo transport error for one or more databases. Please help.

    Hello everyone :
    I can't switch over to the primary. The error and information follow.
    RHEL 6.3 x86-64
    Oracle database 11.2.0.3.0 Enterprise edition
    Primary database = orclprmy
    Standby database = orclstby1
    ##### /etc/hosts on orclstby1
    [root@orclstby1 admin]# cat /etc/hosts
    127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
    ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
    192.168.50.211    ttprmy
    192.168.50.212    orclstby1
    ### DG broker error
    DGMGRL for Linux: Version 11.2.0.3.0 - 64bit Production
    Copyright (c) 2000, 2009, Oracle. All rights reserved.
    Welcome to DGMGRL, type "help" for information.
    Connected.
    DGMGRL> show configuration;
    Configuration - TTDGConfig1
      Protection Mode: MaxPerformance
      Databases:
        orclstby1 - Primary database
          Error: ORA-16778: redo transport error for one or more databases
        orclprmy  - Physical standby database
    Fast-Start Failover: DISABLED
    Configuration Status:
    ERROR
    DGMGRL>
    ########### listener.ora on orclstby1
    [root@orclstby1 admin]# cat listener.ora
    # listener.ora Network Configuration File: /u2/oracle/product/11.2.0/dbhome_1/network/admin/listener.ora
    # Generated by Oracle configuration tools.
    LISTENER =
      (DESCRIPTION_LIST =
        (DESCRIPTION =
          (ADDRESS = (PROTOCOL = TCP)(HOST = orclstby1)(PORT = 1521))
          (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
    SID_LIST_LISTENER =
      (SID_LIST =
        (SID_DESC =
          (GLOBAL_DBNAME=orcl)
          (SID_NAME = orclstby1)
          (ORACLE_HOME = /u2/oracle/product/11.2.0/dbhome_1)
        (SID_DESC =
          (GLOBAL_DBNAME=orclstby1)
          (SID_NAME = orclstby1)
          (ORACLE_HOME = /u2/oracle/product/11.2.0/dbhome_1)
        (SID_DESC =
          (GLOBAL_DBNAME=orclstby1_DGMGRL)
          (SID_NAME = orclstby1)
          (ORACLE_HOME = /u2/oracle/product/11.2.0/dbhome_1)
    ADR_BASE_LISTENER = /u2/oracle
    ############## tnsnames.ora on orclstby1
    ORCL =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.50.212)(PORT = 1521))
        (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = orcl))
    orclprmy =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.50.211)(PORT = 1521))
        (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = orclprmy))
    orclprmy_DGMGRL =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.50.211)(PORT = 1521))
        (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = orclprmy_DGMGRL))
    orclstby1 =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.50.212)(PORT = 1521))
        (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = orclstby1))
    orclstby1_DGMGRL =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.50.212)(PORT = 1521))
        (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = orclstby1_DGMGRL))
    ##### alert log on orclstby1.
    Fatal NI connect error 12504, connecting to:
    (DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=)(CID=(PROGRAM=oracle)(HOST=orclstby1)(USER=oracle)))(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.50.211)(PORT=1521)))
      VERSION INFORMATION:
            TNS for Linux: Version 11.2.0.3.0 - Production
            TCP/IP NT Protocol Adapter for Linux: Version 11.2.0.3.0 - Production
      Time: 06-SEP-2013 13:19:55
      Tracing not turned on.
      Tns error struct:
        ns main err code: 12564
    TNS-12564: TNS:connection refused
        ns secondary err code: 0
        nt main err code: 0
        nt secondary err code: 0
        nt OS err code: 0
    There is a problem in the alert log.
    In the /etc/hosts file, the standby server (orclstby1) IP is 192.168.50.212, but the alert log shows 192.168.50.211.
    Any idea?
    Thanks for the help.
    Message was edited by: user4914135

    #### on primary database
    SQL>  select dest_name,status,target,archiver,schedule, valid_type,valid_role,db_unique_name,error from v$archive_dest;
    DEST_NAME            STATUS    TARGET  ARCHIVER   SCHEDULE VALID_TYPE      VALID_ROLE   DB_UNIQUE_NAME
    ERROR
    LOG_ARCHIVE_DEST_1   VALID     LOCAL   ARCH       ACTIVE   ALL_LOGFILES    ALL_ROLES    orclprmy
    LOG_ARCHIVE_DEST_2   VALID     REMOTE  LGWR       PENDING  ALL_LOGFILES    PRIMARY_ROLE orclstby1
    LOG_ARCHIVE_DEST_3   INACTIVE  LOCAL   ARCH       INACTIVE ALL_LOGFILES    ALL_ROLES    NONE
    #### on standby database
    SQL> select dest_name,status,target,archiver,schedule, valid_type,valid_role,db_unique_name,error from v$archive_dest;
    DEST_NAME            STATUS    TARGET  ARCHIVER   SCHEDULE VALID_TYPE      VALID_ROLE   DB_UNIQUE_NAME
    ERROR
    LOG_ARCHIVE_DEST_1   VALID     PRIMARY ARCH       ACTIVE   ALL_LOGFILES    ALL_ROLES    orclstby1
    LOG_ARCHIVE_DEST_2   ERROR     STANDBY LGWR       PENDING  ONLINE_LOGFILE  PRIMARY_ROLE orclprmy
    ORA-12504: TNS:listener was not given the SERVICE_NAME in
    CONNECT_DATA
    LOG_ARCHIVE_DEST_3   INACTIVE  PRIMARY ARCH       INACTIVE ALL_LOGFILES    ALL_ROLES    NONE 
    ####  log_archive_dest on primary database  
    SQL> show parameter log_archive_dest
    NAME                                 TYPE        VALUE
    log_archive_dest                     string
    log_archive_dest_1                   string      location=/u3/arch/orcl vali
                                                     d_for=(ALL_LOGFILES,ALL_ROLES)
                                                      db_unique_name=orclprmy
    log_archive_dest_10                  string
    log_archive_dest_11                  string
    log_archive_dest_12                  string
    log_archive_dest_13                  string
    log_archive_dest_14                  string
    log_archive_dest_15                  string
    log_archive_dest_16                  string
    log_archive_dest_17                  string
    log_archive_dest_18                  string
    log_archive_dest_19                  string
    log_archive_dest_2                   string      service="orclstby1", LGWR ASYNC
                                                     NOAFFIRM delay=0 optional comp
                                                     ression=disable max_failure=0
                                                     max_connections=1 reopen=300 d
                                                     b_unique_name="orclstby1" net_ti
                                                     meout=30, valid_for=(all_logfi
                                                     les,primary_role)
    log_archive_dest_20                  string
    log_archive_dest_21                  string
    log_archive_dest_22                  string
    ####  log_archive_dest on standby database
    SQL> show parameter log_archive_dest
    NAME                                 TYPE        VALUE
    log_archive_dest                     string
    log_archive_dest_1                   string      location=/u3/arch/orclstby1 vali
                                                     d_for=(ALL_LOGFILES,ALL_ROLES)
                                                      db_unique_name=orclstby1
    log_archive_dest_10                  string
    log_archive_dest_11                  string
    log_archive_dest_12                  string
    log_archive_dest_13                  string
    log_archive_dest_14                  string
    log_archive_dest_15                  string
    log_archive_dest_16                  string
    log_archive_dest_17                  string
    log_archive_dest_18                  string
    log_archive_dest_19                  string
    log_archive_dest_2                   string      service=orclprmy ASYNC valid_for
                                                     =(ONLINE_LOGFILE,PRIMARY_ROLE)
                                                      db_unique_name=orclprmy
    log_archive_dest_20                  string
    log_archive_dest_21                  string
    log_archive_dest_22                  string
    log_archive_dest_23                  string
    log_archive_dest_24                  string
    log_archive_dest_25                  string
    #### spfile on standby database
    </u2/oracle/product/11.2.0/dbhome_1/dbs> strings spfileorclstby1.ora
    orcl.__db_cache_size=1040187392
    orclstby1.__db_cache_size=1090519040
    orcl.__java_pool_size=16777216
    orclstby1.__java_pool_size=16777216
    orcl.__large_pool_size=16777216
    orclstby1.__large_pool_size=16777216
    orcl.__oracle_base='/u2/oracle'#ORACLE_BASE set from environment
    orclstby1.__oracle_base='/u2/oracle'#ORACLE_BASE set from environment
    orcl.__pga_aggregate_target=536870912
    orclstby1.__pga_aggregate_target=536870912
    orcl.__sga_target=1610612736
    orclstby1.__sga_target=161061273
    orcl.__shared_io_pool_size=0
    orclstby1.__shared_io_pool_size=0
    orcl.__shared_pool_size=503316480
    orclstby1.__shared_pool_size=469762048
    orcl.__streams_pool_size=16777216
    orclstby1.__streams_pool_size=0
    *.archive_lag_target=0
    *.audit_file_dest='/u2/oracle/admin/orclstby1/adump'
    *.audit_trail='db'
    *.compatible='11.2.0.0.0'
    *.control_files='/u2/oracle/oradata/orclstby1/control01.ctl','/u2/oracle/fast_recovery_area/orclstby1/control02.ctl'
    *.db_block_size=8192
    *.db_domain=''
    *.db_file_nam
    e_convert='orcl','orclstby1'
    *.db_name='orcl'
    *.db_recovery_file_dest='/u2/oracle/fast_recovery_area'
    *.db_recovery_file_dest_size=5218762752
    *.db_unique_name='orclstby1'
    *.deferred_segment_creation=FALSE
    *.dg_broker_start=TRUE
    *.diagnostic_dest='/u2/oracle'
    *.fal_client='orclstby1'
    *.fal_server='orclprmy'
    *.log_archive_config='dg_config=(orclprmy,orclstby1)'
    *.log_archive_dest_1='location=/u3/arch/orclstby1 valid_for=(ALL_LOGFILES,ALL_ROLES) db_unique_name=orclstby1'
    *.log_archive_dest_2='ser
    vice=orclprmy ASYNC valid_for=(ONLINE_LOGFILE,PRIMARY_ROLE) db_unique_name=orclprmy'
    *.log_archive_dest_state_2='ENABLE'
    orcl.log_archive_format='orcl_%t_%s_%r.arc'
    *.log_archive_format='orclstby1_%t_%s_%r.arc'
    orclstby1.log_archive_format='orclstby1_%t_%s_%r.arc'
    *.log_archive_max_processes=4
    *.log_archive_min_succeed_dest=1
    orcl.log_archive_trace=0
    orclstby1.log_archive_trace=0
    *.log_file_name_convert='orcl','orclstby1'
    *.open_cursors=300
    *.pga_aggregate_target=536870912
    *.processes=
    1500
    *.remote_login_passwordfile='EXCLUSIVE'
    *.sessions=1655
    *.sga_target=1610612736
    *.standby_file_management='AUTO'
    *.undo_tablespace='UNDOTBS1'
    </u2/oracle/product/11.2.0/dbhome_1/dbs>
    Thank you for your help.

  • How is partner profile for ALE/IDOC transported?

    Hi experts,
    How is a request created for transporting ALE/IDoc configurations?
    Valuable answers will be generously rewarded.
    regards,
    Shrita.

    Hi,
    Some configurations are client-dependent and some are client-independent.
    Client-dependent configurations are transported to different clients using transaction SCC1,
    and client-independent configurations are reflected automatically.
    Regards
    vijay

  • Transport Error - Deletion of Table Maintenance

    Hi Experts,
    I have been searching SDN, but I cannot find a specific problem relating to mine.
    My problem is: I am deleting a table for which a table maintenance dialog was generated. An error occurred during transport; it says 'Screen SAPLXXXXX 00XX: Generation error'. This is the screen of the generated table maintenance. Looking further, I found that there is a syntax error, because the screen refers to the table and its fields, which were already deleted in the same transport.
    The question is: is the procedure wrong? Should I have deleted the table maintenance first, transported that, and then created a transport for the deletion of the table? How can I correct the above error when the table is already deleted?
    Thank you in advance for your help.
    Regards,
    Joy

    Hi,
    Try to delete the table maintenance first, and then delete the objects for the table maintenance in the transport.
    Then create a fresh table maintenance with proposed screens and it will work properly.
    Prabhduas

  • How to configure ALE/IDOC in  nace for V3.

    Hi,
    Please let me know how to configure an ALE/IDoc output type in NACE and how to configure the message outputs.
    I require the steps to configure the above.
    Screenshots will also be helpful.
    Regards,

    Hi Somya,
    I have a little manual that explains how to create an outbound IDoc from a message class (for invoices).
    But it is in Spanish; if you want it, give me an email address and I'll send it to you.
    Rgrds,
    Francisco Castillo

  • Transport Request for Custom Table??

    Hi all,
    I got a requirement for a custom table.
    "The table should have attributes such that the table contents can be transported from DEV to QA to PROD. Also, the table can be maintained individually in QA or PROD."
    I have understood that I need delivery class "C", but I want to know which transport request type should be used (Workbench or Customizing).
    Thanks in advance.

    You say: "the table can be maintained individually in QA or PROD too. I have understood that I need to have Delivery Class as 'C'."
    If the table must be maintained in QA and PRD, you need delivery class "A". With delivery class "C" you can only maintain the table in DEV and transport the entry changes to the other systems.
    Regards.

  • Transport Req for Customizing Table

    Hi All,
    I created one customizing table and a table maintenance generator for that table.
    Now my requirement is: whenever the user adds/deletes/modifies any values in the table, a transport request has to be created in the development client and transported to the QAS and PRD systems.
    Please help me understand how this can be achieved.
    Many Thanks in Advance.
    Regards,
    Anbalagan.

    Hi Anbalagan,
    When you generate the table maintenance dialog for this table, there is a radio button for the recording routine (at the bottom of the table maintenance generator screen). Select the 'Standard Recording Routine' radio button. Now, whenever a user tries to use SM30 to add/change/delete values in the table, the system will ask for a transport request to record the changes. You can then use this transport request to move the table values to the other systems.
    Hope this helps!
    Regards,
    Saurabh

  • BD87 idoc inbound error for message type HRMD_A

    Hello there,
    I am getting the error below while processing an inbound IDoc in transaction BD87; see the log below from ST22.
    Short text
        An SQL error occurred when executing Native SQL.
    What happened?
        The error "-10328" occurred in the current database connection "LCA".
    What can you do?
        Note down which actions and inputs caused the error.
        To process the problem further, contact you SAP system
        administrator.
        Using Transaction ST22 for ABAP Dump Analysis, you can look
        at and manage termination messages, and you can also
        keep them for a long time.
    How to correct the error
        Database error text........: "Mismatch of number of stream members for
         parameter (3) (application 6, database 9)."
        Database error code........: "-10328"
        Triggering SQL statement...: "EXECUTE PROCEDURE "SIM_SIMSESSION_CONTROL""
        Internal call code.........: "[DBDS/NEW DSQL]"
        Please check the entries in the system log (Transaction SM21).
        If the error occures in a non-modified SAP program, you may be able to
        find an interim solution in an SAP Note.
        If you have access to SAP Notes, carry out a search with the following
        keywords:
        "DBIF_DSQL2_SQL_ERROR" "CX_SY_NATIVE_SQL_ERROR"
        "/SAPAPO/SAPLOM_CORE" or "/SAPAPO/LOM_COREU07"
        "SIMSCTRL_EXEC_COM"
        If you cannot solve the problem yourself and want to send an error
        notification to SAP, include the following information:
        1. The description of the current problem (short dump)
           To save the description, choose "System->List->Save->Local File
        (Unconverted)".
        2. Corresponding system log
       Display the system log by calling transaction SM21.
       Restrict the time interval to 10 minutes before and five minutes
    after the short dump. Then choose "System->List->Save->Local File
    (Unconverted)".
    3. If the problem occurs in a problem of your own or a modified SAP
    program: The source code of the program
       In the editor, choose "Utilities->More
    Utilities->Upload/Download->Download".
    4. Details about the conditions under which the error occurred or which
    actions and input led to the error.
    The exception must either be prevented, caught within proedure
    "SIMSCTRL_EXEC_COM" "(FORM)", or its possible occurrence must be declared in
    the
    Do we need SAP_APO in order to install LiveCache?
    We have the below landscape:
    - SAP ECC 6 EHP4 with SQL Server 2005 as backend
    - LCAPPS component release 2005_700 with SP SAPKIBHD05
    - We don't have the SAP_APO component
    - The SAP liveCache system has MaxDB 7.7.04.29 as backend
    - The liveCache client is installed on the same SAP ECC EHP4 system
    Any suggestions on how to resolve it?
    Mani

    >
    Mani wrote:
    > I installed a standalone server and i need it only for SAP HR (specifically Idoc processing in SAP HR Human capital management module). But i am getting error while running ibound idoc process.
    > So Do i still need SAP_APO component?
    I never heard of this specific scenario for liveCache usage in the past 7 years working in SAP support on the liveCache component.
    Might be useful if you provide some more details (documentation, notes, links) that describe this usage scenario.
    In any case you will need to have a liveCache version that is compatible to your application ABAP coding.
    The error message you posted indicates that your liveCache version does not fit your application version (or vice versa).
    > I installed SAP MAxDB and LiveCache buid using Installation master DVD for SAP ERP. So i think i followed the right method as specified in installation guide.
    Be precise, please. Which installation guide? There are hundreds of them - not THE installation guide.
    > Secondly can you help me something specific to my problem in coding form?
    Already did.
    > i tried in LC10, using LCA monitoring i am not able to run SQLDBC trace under
    > LCA >> LiveCache Monitoring >> Tools
    > but i am not able to execute the same, is it a error or we need to configure something?
    Well, the liveCache needs to be integrated into the LC10. You'd do this in transaction SM59 - but as you've followed an installation guide, this would have been covered in it...
    > Operational status is active green light, below is file status.
    >
    >
    KNLMSG     KnlMsg     1.098.697     28.01.2011     06:59:55     Database Messages     ASCII
    > KNLMSGARC     KnlMsgArchive     8.192     28.01.2011     06:59:50     Database Errors     ASCII
    > KNLMSGOLD     KnlMsg.old     342.663     28.01.2011     06:59:50     Database Messages (OLD)     ASCII
    > KNLTRC     knltrace     6.209.536     28.01.2011     06:59:56     Database Trace (Raw/Binary)     BINARY
    > BACKHIST     dbm.knl     570     10.01.2011     06:18:41     Backup History     ASCII
    > DBMPRT     dbm.prt     133.843     29.01.2011     11:41:27     Database Manager Log File     ASCII
    > DBMPAHI     L00.pah     214.225     10.01.2011     06:18:16     Database Parameter History     ASCII
    > LCINIT     lcinit.log     15.299     28.01.2011     07:00:00     LiveCache Initialisation     ASCII
    > LCINITCMD     lcinit.bat     3.047     28.10.2008     14:45:51     LiveCache Initialisation Script     ASCII
    > LCINITHIS     lcinit.his     46.742     28.01.2011     07:00:00     LiveCache Initialisation History     ASCII
    > INSTPRT     dbm.ins     863.981     10.01.2011     06:18:47     Installation Log File     ASCII
    > DIAGDIR     File     0     10.01.2011     06:18:18     Diagnose History     DIRECTORY
    > ANALYZER     analyzer     0     29.01.2011     00:00:36     DB Analyzer File     DIRECTORY
    > LCTRC#init.his     lcinit.his     46.742     28.01.2011     07:00:00     LiveCache Trace (ASCII)     ASCII
    > LCTRC#init.log     lcinit.log     15.299     28.01.2011     07:00:00     LiveCache Trace (ASCII)     ASCII
    > LCTRC#_apo_version.txt     lc_apo_version.txt     164     28.01.2011     07:00:00     LiveCache Trace (ASCII)     ASCII
    >
    > any clue whats wrong?
    > Mani
    The only thing this tells us is: the liveCache instance itself apparently comes up and writes out the standard log files.
    That's it.
    As I already tried to explain, the problem seems to be the dependence between your application and the liveCache version.
    Usually I'd recommend to open a support message for this, but the colleagues would (hopefully) ask the same questions as I did.
    regards,
    Lars

  • ALE / IDOC Outbound Error Handling

    Hi Experts,
    I got an error in the outbound process.
    An IDoc gets status 36 - Timed out.
    Can anyone tell me how I can resolve this error?
    Thanks in advance,
    Sudhakar

    Hi,
    The best option is to run an SQL trace on the outbound IDoc processing, which will tell you at which FORM or function module the processing is taking too long.
    Also check with Basis regarding the timeout; they would have set a timeout parameter in minutes or hours.

  • Transport Error For Bex Query  to BW Production

    Dear Experts,
    We are facing an issue during transport of a BEx report.
    An RC 16 error has occurred. After searching on Google, I found the information below, but not the exact steps to solve it:
    Return code (16) indicates the import was cancelled.
    Examples of RC 16 errors:
    1. Import cancelled because the system went down while importing.
    2. Import cancelled because the user expired while importing.
    3. Import cancelled due to insufficient roles.
    Would you please suggest how to deal with this error, solve it, and successfully transport the BEx query?
    Please help.
    Regards, Aparajit

    Hi all,
    The issue was resolved with a re-transport.
    The query is in production now.
    Thank you very much for your guidance, Rama, Ganesh and Nikhil.
    Regards- Aparajit

  • Incoming idocs giving error for  Serialization

    Hi  Friends
    I am getting inbound IDocs with this error: "Serialization error for object 01,S,34343434, expected counter 000001"
    and the error details say:
    The expected serialization counter has the value 000001. However, the serialization counter in the IDoc has the value 000002 and is therefore too big. There are therefore older IDocs with this HR object with the serialization counter values in between. These IDocs have either not yet been posted, or have been posted incorrectly.
    What should I do about this? A few experts said I need to do something for serialization so that the system takes care of this automatically.
    Please help me with this.
    Regards
    Meeta

    Hello,
              Refer to the SAP Note [65954|https://websmp230.sap-ag.de/sap(bD1lbiZjPTAwMQ==)/bc/bsp/spn/sapnotes/index2.htm?numm=65954]. It might be of use for you.
    Thanks and Regards,
    Venkat Phani Prasad Konduri

  • How to Create Transport Request For custom table

    Hello All,
    I have a requirement to create a transport request automatically when a record is inserted into a custom table.
    Actually, when the FI people create or delete a bank account in the DEV client, a transport request is created automatically, and they transport that request to the quality and production systems to create or delete that account in all systems. But when they delete the account, all the information related to that bank account gets deleted from all the standard customizing tables. To keep track of the deleted account, I have created one Z table (a customizing table), which is filled automatically when any account gets deleted in the DEV system. Now I want to pass that record to the quality and production systems using a transport request.
    How can I create a transport request automatically when a record gets inserted into my Z table?
    Thanks in advance.
    Vinod

    Hello All,
    This is a FAQ and has been answered many times before in the forum.
    1. "Maintain delivery class 'C' and it will start asking for a TR": false. The delivery class has nothing to do with the TR prompt; it determines how the data is to be transported. Delivery class 'C' will prompt for a customizing request, whereas delivery class 'A' will prompt for a workbench request.
    2. For the system to automatically prompt for a TR, you have to select the "Standard Recording Routine" in the table maintenance generator. This will prompt you for a TR every time you try to create or modify a record and save it to the DB.
    Hope I am clear in my explanation.
    If your table is filled automatically through some coding, you have to add the entry to the TR programmatically; a minimal sketch follows below. Check this link for details: [http://wiki.sdn.sap.com/wiki/display/ABAP/TransportingTableEntriesinABAPprogrammatically|http://wiki.sdn.sap.com/wiki/display/ABAP/TransportingTableEntriesinABAPprogrammatically]
    BR,
    Suhas
    Edited by: Suhas Saha on Mar 31, 2010 11:22 AM
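    For illustration (not part of the original reply): a minimal ABAP sketch of that programmatic approach, recalled from the wiki linked above. The table name ZBANK_DEL_LOG and the key value are assumptions, and the exact function module signatures should be verified in SE37 before use.

    DATA: lt_ko200 TYPE STANDARD TABLE OF ko200,   " object entries
          ls_ko200 TYPE ko200,
          lt_e071k TYPE STANDARD TABLE OF e071k,   " key entries
          ls_e071k TYPE e071k.

    " Object entry: transport table content with explicit keys (R3TR TABU)
    ls_ko200-pgmid    = 'R3TR'.
    ls_ko200-object   = 'TABU'.
    ls_ko200-obj_name = 'ZBANK_DEL_LOG'.           " hypothetical Z table
    ls_ko200-objfunc  = 'K'.                       " transport selected keys only
    APPEND ls_ko200 TO lt_ko200.

    " Key entry: client plus the primary key of the record just inserted
    ls_e071k-pgmid      = 'R3TR'.
    ls_e071k-object     = 'TABU'.
    ls_e071k-objname    = 'ZBANK_DEL_LOG'.
    ls_e071k-mastertype = 'TABU'.
    ls_e071k-mastername = 'ZBANK_DEL_LOG'.
    ls_e071k-tabkey     = '100DE1234567890'.       " illustrative key value
    APPEND ls_e071k TO lt_e071k.

    " Check the entries and add them (with their keys) to a transport request;
    " the standard dialog asks for the request. See the wiki above for details.
    CALL FUNCTION 'TR_OBJECTS_CHECK'
      TABLES
        wt_ko200 = lt_ko200
      EXCEPTIONS
        OTHERS   = 1.
    IF sy-subrc = 0.
      CALL FUNCTION 'TR_OBJECTS_INSERT'
        TABLES
          wt_ko200 = lt_ko200
          wt_e071k = lt_e071k
        EXCEPTIONS
          OTHERS   = 1.
    ENDIF.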

  • Get Message when ALE Idoc-Import cancelled

    Hello,
    I would like to be informed when our regular ALE IDoc import has been cancelled. Today we receive IDocs from an SAP R/3 system of our parent company overnight with the current currency rates. Sometimes the import via ALE IDocs is cancelled, for example if some currency codes are missing in the target system BW. We only find out about the import status by using transaction BD87 and looking into the status monitor. Alternatively we can check via CCMS.
    Does anybody have an idea how to automatically generate a message in the SAP workplace or something similar (ideally an e-mail) when the import has been cancelled? We designed such a workaround for the case where a BI process chain job is cancelled, but that workaround was built by an external consultant who is not here any more.
    Maybe someone here has an idea...
    Best regards
    Daniel

    What version of iPhoto are you using and from where (and how) are you trying to import the photos? 
    As a first fix attempt try this:  launch iPhoto with the Option key held down and create a new, test library.  Import some photos and  see if you get the same message.  If you don't then the problem lies with your current library.
    OT
