DataSource / Structures mismatched between Dev and Test systems..!!

Hi,
We are working on a scenario where XI updates data into the PSA through an ABAP proxy.
The scenario worked perfectly in the development system.
We transported the objects from Dev to Test, and now the structures in SE11 are mismatched between the development and test systems, as shown below.
The DataSources (ZDS_RECIHDR and ZDS_RECTPALL) look OK, but the generated structures in SE11 are not correct - they are swapped.
Development:
/BIC/CQZDS_REC00001000 - Header (ZDS_RECIHDR)
/BIC/CQZDS_REC00003000 - Allocation (ZDS_RECTPALL)
Test:
/BIC/CQZDS_REC00001000 - Allocation (ZDS_RECTPALL)
/BIC/CQZDS_REC00003000 - Header (ZDS_RECIHDR)
Kindly let me know where it might have gone wrong?
Thanks
Deepthi

We have done that already, and it is still failing.
The transport fails with the following errors:
Program ZPI_CL_IA_PAYMENT_ALLOCATION1=CP, Include ZPI_CL_IA_PAYMENT_ALLOCATION1=CM001: Syntax error in line 000016
The data object 'L_S_DATA' has no component called '/BIC/ZSALENUM', but there is a component called
Program ZPI_CL_IA_PAYMENT_HEADER======CP, Include ZPI_CL_IA_PAYMENT_HEADER======CM001: Syntax error in line 000016
The data object 'L_S_DATA' has no component called '/BIC/ZTRANDATE', but there are the following com
The structures are swapped in SE11 between the header and allocation DataSources; that is why the syntax check fails.
Any more ideas, please?
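For what it is worth, the generated /BIC/CQ* transfer structures are normally created locally in each system when the DataSource is replicated and activated, so their numbers and contents do not have to match across systems; re-activating the DataSources in Test is usually a better fix than comparing the generated names. Below is a minimal sketch of how custom code can read such a field without hard-coding it. Only the structure and field names quoted above are real; the report itself is an assumption, not the original proxy class.

REPORT zsketch_dynamic_component.

" Generated structure: the name and content can differ per system
DATA l_s_data TYPE /bic/cqzds_rec00001000.

FIELD-SYMBOLS <l_trandate> TYPE any.

" Access the InfoObject field by name instead of writing
" l_s_data-/bic/ztrandate, which no longer compiles after the swap
ASSIGN COMPONENT '/BIC/ZTRANDATE' OF STRUCTURE l_s_data TO <l_trandate>.
IF sy-subrc = 0.
  <l_trandate> = sy-datum.
ELSE.
  " Component missing: this system's structure belongs to the other
  " DataSource, so replicate and activate the DataSource again
  MESSAGE 'Component /BIC/ZTRANDATE not found' TYPE 'I'.
ENDIF.

The hard-coded reference l_s_data-/bic/ztrandate in the class's method include is what breaks the syntax check after transport; the dynamic ASSIGN above removes that dependency.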

Similar Messages

  • Consistency between dev and prod systems!!!!

    Hi All,
    I would like to check the consistency between the Dev and Prod systems for the entire data model and the other existing objects. I would like to do this sanity check to see whether the two systems are in sync. Are there any tools or transaction codes within the SAP framework which can guide me in this direction?
    Kind Regards,
    Surya Tamada.

    Hi Surya,
    First build the whole data flow:
    1. Create the InfoArea, InfoObjects and master data, InfoProviders, and the InfoObject catalogs under the InfoArea (metadata for characteristics and key figures).
    2. Create the DSOs, InfoCubes, InfoSets, MultiProviders and views in the development system, plus the DataSources for the BW system, the flat-file DataSources and the transformations.
    Then read the metadata from the SE11 tables as per your requirement, for example RSDVCHA, RSDBCHATR, RSDCUBEIOBJ, RSDICMULTIIOBJ etc., and build views on them as required.
    Then build flat-file DataSources and transformations.
    Download all the data to flat files on your PC, and do the same from the other system.
    Build a flow PC file DataSource -> DSO -> InfoCube, create queries for the different objects, load the downloaded files and run the queries.
    For example, a query to compare cubes would take the cube name and system names as input and compare the objects.
    Regards,
    Jaya
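    As a concrete starting point, a minimal sketch of the download step Jaya describes. Table RSDCUBEIOBJ (InfoObjects per InfoCube) and the file path are just examples; swap in whichever RSD* tables you need:

    REPORT zbw_dump_cube_iobj.

    DATA lt_cube_iobj TYPE STANDARD TABLE OF rsdcubeiobj.

    " Active versions only, to avoid comparing old or modified versions
    SELECT * FROM rsdcubeiobj
      INTO TABLE lt_cube_iobj
      WHERE objvers = 'A'.

    " Example path on the frontend PC
    CALL METHOD cl_gui_frontend_services=>gui_download
      EXPORTING
        filename              = 'C:\temp\dev_cube_iobj.txt'
        filetype              = 'ASC'
        write_field_separator = 'X'
      CHANGING
        data_tab              = lt_cube_iobj
      EXCEPTIONS
        OTHERS                = 1.
    IF sy-subrc <> 0.
      MESSAGE 'Download failed' TYPE 'E'.
    ENDIF.

    Run the same report in both systems and compare the two files; that already catches most cube/InfoObject assignment differences before building the flat-file flow.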

  • TOBJ entries missing from upgrade dev and test systems

    Hi,
    We are in the process of an ECC upgrade from 4.7 to ECC 6.0 and have upgraded our dev and test systems. We are missing some SAP-defined object entries in the TOBJ table in dev and test. Because of this, some roles get transport errors if they contain the missing objects. We tried to transport the missing entries, but we don't see any provision for that. Does anyone have an idea how to move the missing entries from dev to test? We are not sure whether we will get the same error in production. This table is updated during the upgrade, and maybe we missed something in test, although we followed the same upgrade process for both systems.
    Thank you
    Venkat

    Possibly you are no longer licensed for the components from which these objects are called in their concepts.
    In that case the ability to start the component objects should be removed as well.
    Of course, if some Z-programs checked these objects, or SAP's own "package concept" extended syntax checks did not respect them, then you might have problems.
    This should normally only apply to manual authorization instances; if you had upgraded the roles and regenerated them, you would not have this problem (in the roles).
    It tells us something about how you, or someone else without any training, built these roles (or subsequently mucked them up...).
    When reading stuff like this, I am always of two minds...
    1) SAP should make the concepts simpler and more intuitive to use, without "cowboy" activities allowed by users with SAP_ALL etc.
    2) Customers should bleed for their own sins of not training people appropriately (here, faking CVs is also a major pest!).
    It is always a bit of a cat-and-mouse game and ends up in forums such as SDN in the long run.
    Some of it might also have become "available" to be used by SAP (I believe it was the 6.20 upgrade which installed everything...).
    Real bugger. I would accept it and test as best you can. You can also compare where-used lists and code scans before and after the upgrade to see whether the objects are used by foreign repository objects.
    The scan is more reliable here, as sometimes the object name itself comes from a variable in a data declaration or a condition which you cannot see in the "procedural" code.
    Cheers,
    Julius
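    To see how big the gap actually is before accepting it, here is one possible check as a sketch (not an official SAP report; the fields AGR_1250-OBJECT and TOBJ-OBJCT are the usual ones, but verify them in your release). It lists authorization objects still referenced in role data but missing from TOBJ:

    REPORT ztobj_orphan_check.

    TYPES: BEGIN OF ty_hit,
             agr_name TYPE agr_1250-agr_name,
             object   TYPE agr_1250-object,
           END OF ty_hit.

    DATA: lt_hits TYPE STANDARD TABLE OF ty_hit,
          ls_hit  TYPE ty_hit.

    " Objects used in role authorization data without a TOBJ entry
    SELECT DISTINCT a~agr_name a~object
      FROM agr_1250 AS a
      INTO TABLE lt_hits
      WHERE NOT EXISTS ( SELECT * FROM tobj
                           WHERE objct = a~object ).

    LOOP AT lt_hits INTO ls_hit.
      WRITE: / ls_hit-agr_name, ls_hit-object.
    ENDLOOP.

    The same list, taken before and after the upgrade, also tells you which roles to look at first.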

  • Copy data between productive and test system

    Hi everybody,
    I need to copy data from one cube in the productive system into the same cube in the test system.
    How can I do it?
    Regards
    Erwin

    If you need to copy data from one system to another, maybe you can do a data refresh or an SAP_DATA system copy.
    If you only want to copy data from one cube to the same cube in another system, you could extract the data from the cube into a report or an ODS, download it to a CSV file, and then build a transformation from a PC file DataSource to the ODS and cube (or from the PC file directly to the cube).
    Hope it works.
    Greetings,
    Ignacio.

  • System Properties user.timezone different between dev and prod systems

    Hi all,
    I am facing the following issue:
    In our development system, http://<server:port>/sap/monitoring/SystemProperties -> dispatcher -> system properties shows Europe/Berlin for user.timezone.
    In our productive system, the same page shows GMT for user.timezone.
    Can anybody explain which system property is requested by the J2EE-engine when calling this property?
    I ask because I wrote a small program requesting user.timezone via Java (expecting the J2EE engine to do the same) and got back an empty string (on both systems).
    So, how and where can I change the parameter so that the development system uses GMT too?
    I have already checked the SPRO settings in SAP; they are identical. System -> Status shows the same for both systems, too.
    I hope for your help.
    Thanks and regards
    Christian

    Hello Chris,
    I checked the system variables with transaction SM51 -> Goto -> Server -> Information -> Environment.
    There is nothing like a TZ variable on either system (we use Linux).
    Do you have another idea?
    How can I change the user.timezone shown in the Java system properties?
    Regards
    Christian

  • OSS Patch Comparison between DEV and PRD systems

    Hi All,
    Can you provide the steps to compare the SAP Notes applied via SNOTE in BI Development and BI Production?
    Thanks & Regards,
    Jelina

    Check this:
    OSS Note Patch Implemented?
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/472443f2-0c01-0010-20ab-fbd380d45881?QuickLink=index&overridelayout=true
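    If you would rather compare the two systems directly instead of note by note, here is a small sketch under the assumption that table CWBNTCUST (Note Assistant status) with fields NUMM and PRSTATUS exists in your release - please check the names in SE11 first:

    REPORT zsnote_note_list.

    DATA: lt_notes TYPE STANDARD TABLE OF cwbntcust,
          ls_note  TYPE cwbntcust.

    SELECT * FROM cwbntcust INTO TABLE lt_notes.

    LOOP AT lt_notes INTO ls_note.
      " NUMM = note number, PRSTATUS = processing status (assumed names)
      WRITE: / ls_note-numm, ls_note-prstatus.
    ENDLOOP.

    Run it in both systems and diff the output; the document linked above explains how to check an individual note.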

  • Org.structure differs on dev and production

    Hi,
    I'm developing a workflow, and in one of the steps (an activity step) I assign the agent using the organizational structure (a position from HR). The problem is that the org structure differs between the dev system and the production system; mainly the IDs of the positions differ. How can I solve this? Thanks in advance.
    Best regards,
    Tomas

    >
    Tomas Linhart wrote:
    > thanks for the answer, I guess that's the best way of handling such a situation. I'm going to create a custom table holding the position used in the workflow mapped to the real position ID from the org structure. Or, more generally, holding the WF post mapped to the org structure type (position, job, ...) with the corresponding ID in the org structure.
    I don't see why you need a custom table.
    The org issues aside, all that is needed is to assign your org unit to the task. Then leave the agent assignment blank in your WF. Job done.
    The other option is to create a dummy rule, which you thought was to be avoided. I don't agree: it takes five minutes, is less effort than a table, and means less non-standard stuff ==> less explaining to people how to maintain it and easier to troubleshoot (a sketch of such a rule follows at the end of this reply).
    Edit: Forgot to answer your other question
    > I'm not sure if you consider org replication the same as a transport of the org structure. I was thinking of a transport myself, I just didn't know how to do it. I've found transaction RE_RHMOVE30 (also accessible from customizing) that should do that, but I'm not sure if it's the right one and/or which options to select when transporting. Also, I'm not sure if this would ensure the position/job IDs are the same on both systems.
    Replication would be carried out via transports. The RHMOVE* reports can also be used, or you could even set up an ALE connection between DEV and PRD. However I still would not recommend it. The development system is where you would need to create test users and/or org units. The other alternative is to have a master data client. All of these are a great deal of work which only make sense in certain types of environments - certainly not for the sake of a workflow.
    Edited by: Mike Pokraka on Sep 27, 2009 9:52 PM
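    For reference, a hypothetical sketch of such a dummy rule: a function module with the standard rule interface that simply returns one position as the agent. The function name and position ID are invented; the AC_CONTAINER/ACTOR_TAB interface is the standard one you get for a rule with a function to be executed:

    FUNCTION z_wf_dummy_rule_position.
    " Interface defined in SE37 (standard for agent-determination rules):
    "   TABLES     ac_container STRUCTURE swcont
    "              actor_tab    STRUCTURE swhactor
    "   EXCEPTIONS nobody_found

      " Example only: in a real rule the ID would come from the rule
      " container or a small customizing table, not a constant
      CONSTANTS lc_position TYPE hrobjid VALUE '50001234'.

      CLEAR actor_tab.
      actor_tab-otype = 'S'.              "S = position
      actor_tab-objid = lc_position.
      APPEND actor_tab.

      " The workflow expects NOBODY_FOUND when no agent is returned
      IF actor_tab[] IS INITIAL.
        RAISE nobody_found.
      ENDIF.
    ENDFUNCTION.

    Create the rule in PFAC pointing to this function module and enter it in the step's agent assignment; moving a position later is then a change in one place instead of in every workflow.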

  • FB60 screen change layout different between DEV and PRD

    Hi,
    I'm having a problem with transaction FB60. When I try to change the layout in PRD, on the Edit System Settings screen some of the fields show a length of 0. For example, for the WBS element field DEV shows length 24 but PRD shows length 0. I tried comparing the versions, but they are the same (program and table structure ACGL_ITEM). When I debugged the program, I found that it gets the structure from the SAP internal C routine 'AB_GET_CX_DATA'. So maybe the version of this program differs between DEV and PRD.
    Does anybody know what causes this problem? Or can somebody tell me how to check an SAP internal C program?
    Thank you very much.

    Yes. FB60 is Enter Vendor Invoice. In this screen you can see a table with columns such as G/L Acct, Debit/Credit, Amount in DC, Cost Center and others. You can add or remove some of the fields using the table settings and create a screen variant. My problem is that the length of the WBS Element and Profit Center fields is 0. How do I change the length?
    I have already searched the forum, but I only found how to create a screen variant.
    Thanks.
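    One way to narrow this down is to compare what the dictionary runtime reports for ACGL_ITEM in both systems, since the screen layout is built from that field information. A minimal sketch (the report name is made up; DDIF_FIELDINFO_GET is a standard function module):

    REPORT zacgl_item_fieldinfo.

    DATA: lt_dfies TYPE STANDARD TABLE OF dfies,
          ls_dfies TYPE dfies.

    CALL FUNCTION 'DDIF_FIELDINFO_GET'
      EXPORTING
        tabname   = 'ACGL_ITEM'
      TABLES
        dfies_tab = lt_dfies
      EXCEPTIONS
        not_found = 1
        OTHERS    = 2.
    IF sy-subrc <> 0.
      MESSAGE 'Structure ACGL_ITEM not found' TYPE 'E'.
    ENDIF.

    " Internal and output length per field - compare DEV vs PRD
    LOOP AT lt_dfies INTO ls_dfies.
      WRITE: / ls_dfies-fieldname, ls_dfies-leng, ls_dfies-outputlen.
    ENDLOOP.

    If the lengths already differ here, the problem is in the DDIC runtime objects of ACGL_ITEM or an included structure rather than in the screen variant itself.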

  • XML schema different between Dev and QA

    The XML schemas are different between our DEV and QA systems after applying XI stack 13 / SRM stack 10. It may be that they were different before applying the stacks, but it has only been spotted in testing. We need to find out why before we go live with the SP stack in production.
    We have re-applied the XI content .tpz files to both the DEV and QA servers and can't see any differences in the design time repository apart from the generated XML schemas.
    A simple example of the difference is included below. The difference
    only seems to be in the SRM Server 5.5 content and seems to be related to the hierarchies in the header declaration, with xmlns:p0="http://sap.com/xi/SRM/Basis/Global" being in a different position in Dev than in QA. I have raised an OSS message but have been waiting for two and a half days now for a response, and we are due to go live with the patching in a week's time. Any help steering me in the right direction would be much appreciated.
    Dev:
    <?xml version="1.0" encoding="ISO-8859-1"?>
    <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
    xmlns:p0="http://sap.com/xi/SRM/Basis/Global"
    xmlns="http://sap.com/xi/SAPGlobal/Global"
    targetNamespace="http://sap.com/xi/SAPGlobal/Global">
    <xsd:import namespace="http://sap.com/xi/SRM/Basis/Global" />
    <xsd:element name="PurchaseOrderRequest"
    type="p0:PurchaseOrderMessage" />
    </xsd:schema>
    QA:
    <?xml version="1.0" encoding="ISO-8859-1"?>
    <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
    xmlns="http://sap.com/xi/SAPGlobal/Global"
    targetNamespace="http://sap.com/xi/SAPGlobal/Global">
    <xsd:import namespace="http://sap.com/xi/SRM/Basis/Global" />
    <xsd:element xmlns:p0="http://sap.com/xi/SRM/Basis/Global"
    name="PurchaseOrderRequest" type="p0:PurchaseOrderMessage" />
    </xsd:schema>
    Many thanks
    Ian

    As I mentioned in my reply yesterday to your other post, I am very positive that the role you are looking at in your QA system is the role "Accessible Content Administration".
    Please verify your role assignment and make sure that you have the correct roles assigned.
    Thanks,
    Shanti

  • Performance problems between dev and prod

    I run the same query with identical data and indexes, but one system takes 0.01 seconds to run while the production system takes 1.0 seconds. TKPROF for dev is:
    Rows Row Source Operation
    1 TABLE ACCESS BY INDEX ROWID VAP_BANDVALUE
    3 NESTED LOOPS
    1 NESTED LOOPS
    41 NESTED LOOPS
    41 NESTED LOOPS
    1 TABLE ACCESS BY INDEX ROWID VAP_PACKAGE
    1 INDEX UNIQUE SCAN SYS_C0032600 (object id 51356)
    41 TABLE ACCESS BY INDEX ROWID VAP_BANDELEMENT
    41 AND-EQUAL
    82 INDEX RANGE SCAN IDX_BE2 (object id 53559)
    41 INDEX RANGE SCAN IDX_BE1 (object id 53558)
    41 TABLE ACCESS BY INDEX ROWID VAP_BAND
    41 INDEX UNIQUE SCAN SYS_C0034599 (object id 53556)
    1 INDEX UNIQUE SCAN SYS_C0032549 (object id 51335)
    1 INDEX RANGE SCAN IDX_BV1 (object id 53557)
    TKPROF for Prod is:
    Rows Execution Plan
    0 SELECT STATEMENT MODE: ALL_ROWS
    1 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF 'VAP_BANDVALUE' (TABLE)
    52001 NESTED LOOPS
    26000 NESTED LOOPS
    26000 NESTED LOOPS
    26000 NESTED LOOPS
    1 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF 'VAP_PACKAGE' (TABLE)
    1 INDEX MODE: ANALYZED (UNIQUE SCAN) OF 'SYS_C0018725' (INDEX (UNIQUE))
    26000 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF 'VAP_BANDELEMENT' (TABLE)
    26000 INDEX MODE: ANALYZED (RANGE SCAN) OF 'IDX_BE2' (INDEX)
    26000 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF 'VAP_BAND' (TABLE)
    26000 INDEX MODE: ANALYZED (UNIQUE SCAN) OF 'SYS_C0030648' (INDEX (UNIQUE))
    26000 INDEX MODE: ANALYZED (UNIQUE SCAN) OF 'SYS_C0018674' (INDEX (UNIQUE))
    26000 INDEX MODE: ANALYZED (RANGE SCAN) OF 'IDX_BV1' (INDEX).
    The row count varies greatly. But it shouldn't, as the data is the same.
    Any ideas?

    From DEV you show the Row Source Operations for the query. The column named "Rows" signifies the actual number of rows processed with each step.
    From PROD you show the Execution Plan for the query; that is, tkprof was executed with the EXPLAIN option which generates the execution plan as of the time when tkprof was run. The "Rows" column in the Explain Plan output comes from the PLAN_TABLE.CARDINALITY, which represents an estimate by the CBO for the number of rows [expected to be] processed with each step.
    So, if by <quote>The row count varies greatly</quote> you meant these "Rows" column outputs, then you are comparing actuals from one database with estimates from another. Get the Row Source Operations from both.
    "Identical data and indexes":
    1. The data may be the same, but it is not necessarily stored physically in the same way.
    2. The indexes being the same means their definitions are the same; again, physically they are not necessarily identical.
    In other words, the data in PROD (the way it is stored on disk) may have evolved as a result of discrete deletes/updates/inserts, while in DEV it could be stored more compactly, for example if you took a copy of PROD and moved it into DEV. So the number of blocks for your segments will likely differ between PROD and DEV, the clustering factor for your indexes will likely differ, etc. - things which could [and do] influence the CBO. The statistics may be different.
    I guess what I'm saying is ... it is quite hard, if not outright impossible, to get two identical databases/instances/load ... hence, don't expect the executions to be 100% identical, even if you have "identical data and indexes". By all means compare between DEV and PROD (make sure you compare the same thing though) and use the observed differences as an indicator for further investigation ... don't chase the goal of 100% identical behavior.
    Now, by all means look at that query taking 1 second in PROD ... I have only addressed <quote>The row count varies greatly. But it shouldn't as the data is the same.</quote>

  • One machine for dev and test

    Hello,
    I have one machine for dev and test. I know that it is possible to do this with two instances, and SAP recommends separating dev and test onto separate machines, but it is possible. Does anyone have experience with that? If I don't want two instances, is it possible to have one instance and separate the dev and test systems only in the SLD? Do I have to copy the business systems, change the servers and make the configurations in the Integration Directory?
    Thanks in advance...
    Frank Schmitt

    Hi Frank,
    One reason for having two systems is that you won't be able to import the directory content into your TEST system easily, because both (dev and test) will have the same Integration Server - you won't be able to add them to two different transport groups.
    This means that you will have to create almost everything in the directory twice, at least without doing some tricks.
    Make the developers' life easier and create two servers; it may cost less than using developers to create many things twice.
    Regards,
    Michal

  • What is CTS? What is the role of CTS between DEV and QAS?

    Dear
    What is CTS? What is the role of CTS between DEV and QAS?
    Thanx & Regards
    Mohamamd Nabi
    [email protected]

    Hello, friend.
    Actually, you can search for subjects like this in the thread archives. You will be pleasantly surprised to find that there are already many threads with similar issues, and that you can find answers to even more otherwise unrealized questions - and you don't have to wait for anyone to reply.
    It looks like the issue is urgent for you, so I am attaching this feed from Sadhu Kishore, which was posted some time ago (if he replies to your thread, you may choose to award him points).
    "Hi,
    The transport workflow provides a framework for transporting enhancements or new developments of existing business functions in a system landscape. It provides a direct connection between development and transport administration. The transport workflow manages the transport process, determines the user for each individual step automatically, and then displays an interface which they can use to perform the task directly.
    It is an efficient method of transporting a selected number of requests into a group of transport targets, and uses clearly defined approval steps to ensure the quality of your target systems. The requests can be transportable change requests, Customizing requests, relocation transports or transports of copies. The transport targets do not need to be located on defined transport routes. However, the transport workflow can involve some risks, caused by the dependencies between transport requests:
    Import sequence
    It is important that you import requests in the correct order, so that development work is up-to-date in the target system.
    Incompleteness
    It is important that the functions transported in the transport proposal are complete; otherwise errors may occur in the import system.
    For example: a request is not imported, but it contains an important data element. You use another request to transport a table that references this data element. Since the referenced data element does not exist in the target system, activation errors will occur when you import the second request.
    The transport workflow is a generic workflow. Its ability to process the transport route configuration in TMS enables it to adapt itself to any system landscape. This means you can transport multiple requests into multiple targets, even if these targets are not located on the transport routes.
    This reduces the amount of work for the transport administrator significantly. The automated nature of the workflow also reduces the likelihood of errors during transports.
    You can use the transport workflow in two different ways.
    Transport workflow as a transport strategy
    If you have production systems in your landscape that can only accept approved transports, we recommend that you use the transport workflow to organize and coordinate the transport process.
    To do this, set Workflow-controlled transports as your transport strategy and configure the transport workflow.
    When you release a transport request, the transport workflow starts automatically and the screen Create Transport Proposal appears. The requests are then released implicitly when the transport proposal is sent to the transport administrator.
    Special transport workflow (mass transports)
    You can use the special transport workflow to make transports that do not follow the defined transport routes or that take place outside the normal transport schedule (part of the mass transport strategy). These transports may be corrections made in the development system that have to be transported into the production system without delay.
    To use the special transport workflow, set Mass transports as your transport strategy and configure the transport workflow."
    Thanks. You have been most generous.
    Regards.

  • Compare SPRO between DEV and QAS

    Hi there,
    We are on the eve of go-live and all the consultants are anxious about potential problems.
    Is there a way that I can compare the Customizing (SPRO) between DEV and QAS, or another system?
    Thanks,
    Dany Anderson
    Edited by: Dany Anderson Alves on Mar 4, 2008 9:03 PM

    Hi,
    Use the comparison tool in transaction SCU0.
    Select the SAP Reference IMG and choose Compare.
    This will show the differences, if any.
    You can also compare at component level.
    Cross-system visibility must be maintained in the SCC4 settings for the client.
    Regards,
    Revathi

  • PPDS production order dates mismatch between APO and R3

    Hi Friends,
    We are facing a problem with a production order date mismatch between the R3 and APO systems for a specific product.
    The product is planned in PPDS, and the orders are transferred to R3 automatically through the online CIF.
    The PPM has two operations, 0010 and 0020, and the activity relationships are as follows:
    P(0010) - P(0020): start-start relationship
    S(0010) - P(0010): end-start relationship
    In PPDS dates are shown as :
    on operation 0010 the start/end  dates are shown as  08.15.09 to 08.22.09
    on operation 0020 the start/end  dates are shown as 08.15.09 to 08.22.09
    Overall order start date is  08.15.09
    Overall order finish date is 08.22.09
    and in R3 dates are shown as:
    on operation 0010 the start/end  date are shown as  08.15.09 to 08.22.09
    on operation 0020 the start/end dates are shown as 08.22.09 to 08.29.09
    Overall order start date is  08.15.09
    Overall order finish date is 08.29.09
    The order is off by one week (APO vs R3) in its start and finish dates.
    If we manually change the DS planning board settings to ignore internal relationships, the dates match exactly in APO and R3.
    We want the production order dates to match without manual intervention.
    Could some one please provide some hints on what is happening here and how to correct it.
    Thanks.
    Krish

    Hi Friends,
    Thanks a lot for your valuable replies in this regard.
    Actually this problem is in the production environment, and it took some time to test it with the master data modifications you suggested.
    As DB and Siddharth mentioned, the problem was with the routing definition: there is no parallel sequence maintained in the routing, but a start-start relationship is maintained in the APO PPM.
    We corrected the routing definition and checked the order dates. Now the dates match in R3 and APO.
    I am awarding DB and Siddharth each five points in this regard.
    Once again thank you all for your time and valuable replies.
    -Regards
    Krish.

  • Setup of Customer Dev and Test environment

    A customer wants to set up Dev and Test environments for APEX on the same machine. I've been trying to wrap my head around the way to do this. These seem to be the options:
    1) Separate databases, Two APEX homes, Two HTTP Servers
    2) One database, different Workspaces, One APEX home, One HTTP Server
    #1 is the most 'pristine' from the perspective of separating everything (and patching), but #2 seems like a more manageable environment. The only downside I see to #2 is that DEV can never have a newer release of APEX than TEST.
    BTW, the customer wants TEST to look like PROD except during code pushes. They also want the TEST environment refreshed regularly from PROD between code releases.
    Suggestions? Other options?
    Thanks,
    Dwight

    Hello Dwight,
    Check your dads.conf file. There is something like this:
    <Location /pls/apex>
    Order deny,allow
    PlsqlDocumentPath              docs
    AllowOverride                  None
    PlsqlDocumentProcedure         wwv_flow_file_mgr.process_download
    PlsqlDatabaseConnectString     localhost:1521:XE ServiceNameFormat
    PlsqlNLSLanguage               AMERICAN_AMERICA.AL32UTF8
    PlsqlAuthenticationMode        Basic
    SetHandler                     pls_handler
    PlsqlDocumentTablename         wwv_flow_file_objects$
    PlsqlDatabaseUsername          APEX_PUBLIC_USER
    PlsqlDefaultPage               apex
    PlsqlDatabasePassword          apex_public_user
    PlsqlRequestValidationFunction wwv_flow_epg_include_modules.authorize
    Allow from all
    </Location>
    You can easily change the Location - /pls/apex - to (e.g.) /dev, add another Location (/test), and provide it with a different connect string.
    Then you can access your Dev environment using http://localhost:7778/dev and Test with http://localhost:7778/test.
    You can also change the (default) setting of your /i/ (virtual) directory to point to different dev and test image directories.
    Example:
    Alias /dev/ "c:\Oracle\OAS\Apache/dev/images/"
    Alias /tst/ "c:\Oracle\OAS\Apache/test/images/"
    and change the value of the Image Prefix in your Application Definition accordingly.
    Greetings,
    Roel
    http://roelhartman.blogspot.com/
    You can reward this reply by marking it as either Helpful or Correct ;-)
