Failed to transport dictionary changes through JDI

I'm using a Dictionary DC, and when I modify it and check in the activities, an old version of the dictionary tables gets transported. If I deploy the DC manually from the Developer Studio it comes out right. Naturally, the same thing happens when I transport to test.
Even worse, the new changes aren't found at runtime when I try to access the table columns using JDO; I get errors like:
javax.jdo.JDOFatalUserException: column PSCIA_EMAIL in table ZRNT_TM_POLIZACIA not found in catalog at com.sap.jdo.sql.mapping.impl.MappingModelImpl.lookupXMLMappingData(MappingModelImpl.java:423)
I looked in the DTR and the dictionary files there are the same ones I'm using locally, so where is the server looking for that information? And how can I update it?
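For context, the failing access looks roughly like the sketch below. This is a hedged illustration only: the persistent class, field and method names are invented; only the table and column names come from the error message above.

    import java.util.Collection;
    import javax.jdo.PersistenceManager;
    import javax.jdo.PersistenceManagerFactory;
    import javax.jdo.Query;

    // Hypothetical persistent class; assume it is mapped to table ZRNT_TM_POLIZACIA
    // and that the 'email' field is mapped to column PSCIA_EMAIL.
    class Poliza {
        private String email;
        public String getEmail() { return email; }
    }

    public class PolizaDao {
        private final PersistenceManagerFactory pmf;

        public PolizaDao(PersistenceManagerFactory pmf) {
            this.pmf = pmf;
        }

        public Collection findByEmail(String email) {
            PersistenceManager pm = pmf.getPersistenceManager();
            try {
                // The O/R mapping lookup resolves Poliza.email to
                // ZRNT_TM_POLIZACIA.PSCIA_EMAIL against the dictionary version that
                // is actually deployed on the server; if that deployment is stale,
                // the lookup fails with "column ... not found in catalog" even
                // though the DTR and local files are up to date.
                Query q = pm.newQuery(Poliza.class, "email == e");
                q.declareParameters("java.lang.String e");
                return (Collection) q.execute(email);
            } finally {
                pm.close();
            }
        }
    }

In other words, the mapping check runs against whatever dictionary version was last deployed to the server, not against the DTR, which would also explain why a manual deploy from the Developer Studio fixes it.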
Thanks in advance.

Hi,
It seems that some of your changes are local and were never added to an activity.
Open your Dictionary project in the Navigator view.
Open the context menu for the Dictionary project and choose DTR > Add > Subtree.
It will ask for an activity and add all the files to DTR.
Now, try checking in this activity.
Regards,
Murtuza

Similar Messages

  • Transport of PCD and Content objects through JDI.

    Hi all,
Please share your views about the transport procedure for PCD and Content objects.
    Is it possible to transport PCD and Content objects through JDI?
If it is possible, could you please share how to configure the JDI setup?
    Thanks in advance,
    Kishore.

Hi Kishore,
First, there is no supported, out-of-the-box way to handle content objects in the JDI (now NWDI, the NetWeaver Development Infrastructure). That said, since it is a repository, you could export your content package, make it a zip file or something like that, and add it to the repository.
    Again, I believe this is theoretically possible. I'm sure some of the other SDN members can provide an automated approach to do this.
    Good luck,
    John

  • How to transport generated reports through report painter

    Hi All,
We are upgrading from 4.6 to ECC 6.0. Previously somebody created reports for Finance through Report Painter; now in the new system there are two or three system-generated reports, produced by Report Painter (I guess), that are retrieving wrong data.
So I debugged the generated reports and found two issues:
1. The generated report has a syntax error.
2. At selection it has an additional filter on company code for each table (COSS, COSP, COSPP) which I don't find in the old 4.6 generated report. I modified the generated report, but the problem is that it doesn't give me any transport number.
These reports are client-specific, so can anyone suggest how to transport these changes to the quality and production environments?
I tried transaction GCTR, but it fails when the request is released.
Can anyone advise?
    Thanks
    Tangudu

    No better manual than SAP help!
    Below is the link to Report Painter
    http://help.sap.com/saphelp_erp2005/helpdata/en/5b/d22cee43c611d182b30000e829fbfe/frameset.htm
    Regards
    Sreenivas

  • How to transport Parameter changes in a crystal report

    Hi All,
    Very Good morning!!!
I have designed a Crystal Report with static parameters. Earlier I had a dropdown-style input selection for my parameters.
Now I have a new requirement for direct input in the field, i.e. no dropdown; a single date field is to be entered directly.
Accordingly, I removed the dropdown and changed it to a single direct date field. I saved these changes to a request and transported it to quality, but I'm not sure whether the parameter changes were collected into the request.
In any case, I couldn't find any of my parameter changes in quality. The parameters still behave as a dropdown there, whereas I need them to be a direct date field entry; the transport did not affect the quality server.
Could someone please let me know how to get these parameter changes of a Crystal Report for BW reflected in the quality server?
    Thanks in Advance.
    Jitendra

Please re-post if this is still an issue, or purchase a case and have a dedicated support engineer work with you directly.

Purchase order (STO) schedule line change through BAPI_PO_CHANGE

    Hi Gurus,
We have a requirement in a user exit of MIGO: to change the purchase order (STO) schedule line through BAPI_PO_CHANGE.
Current process:
STO -> outbound delivery through VL10B -> PGI -> MIGO for the goods receipt
MIGO works fine if we enter the fully issued quantity in the quantity field.
But if we enter less than the quantity that was PGI'ed earlier, then inside the MIGO user exit we get error message 06 089 ("Quantity is smaller than the quantity issued") while changing the STO schedule line through BAPI_PO_CHANGE.
If anybody has any idea about error 06 089 ("Quantity is smaller than the quantity issued"), please help me.
    Best Regards,
    Radhakrishna.

Check the following link; it will help you:
    http://www.sap-img.com/abap/sample-abap-code-on-bapi-po-change.htm
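For reference (in case the link goes stale), here is a minimal, hedged sketch of the parameters a schedule-line quantity change through BAPI_PO_CHANGE typically involves. It is written as a standalone SAP JCo (Java) call; inside a MIGO user exit it would of course be a direct ABAP CALL FUNCTION instead. The destination name, PO number, item and quantity values are placeholders, not taken from this thread.

    import java.math.BigDecimal;
    import com.sap.conn.jco.JCoDestination;
    import com.sap.conn.jco.JCoDestinationManager;
    import com.sap.conn.jco.JCoException;
    import com.sap.conn.jco.JCoFunction;
    import com.sap.conn.jco.JCoTable;

    public class StoScheduleLineChange {
        public static void main(String[] args) throws JCoException {
            // "ECC_DEST" is a hypothetical JCo destination name.
            JCoDestination dest = JCoDestinationManager.getDestination("ECC_DEST");
            JCoFunction bapi = dest.getRepository().getFunction("BAPI_PO_CHANGE");

            bapi.getImportParameterList().setValue("PURCHASEORDER", "4500000123");

            // Schedule line values to change ...
            JCoTable sched = bapi.getTableParameterList().getTable("POSCHEDULE");
            sched.appendRow();
            sched.setValue("PO_ITEM", "00010");
            sched.setValue("SCHED_LINE", "0001");
            sched.setValue("QUANTITY", new BigDecimal("5"));

            // ... and the matching change flags in the X-structure.
            JCoTable schedX = bapi.getTableParameterList().getTable("POSCHEDULEX");
            schedX.appendRow();
            schedX.setValue("PO_ITEM", "00010");
            schedX.setValue("SCHED_LINE", "0001");
            schedX.setValue("QUANTITY", "X");

            bapi.execute(dest);

            // Check RETURN for messages (error 06 089 would show up here) before committing.
            JCoTable ret = bapi.getTableParameterList().getTable("RETURN");
            for (int i = 0; i < ret.getNumRows(); i++) {
                ret.setRow(i);
                System.out.println(ret.getString("TYPE") + " " + ret.getString("MESSAGE"));
            }

            JCoFunction commit = dest.getRepository().getFunction("BAPI_TRANSACTION_COMMIT");
            commit.getImportParameterList().setValue("WAIT", "X");
            commit.execute(dest);
        }
    }

A likely reason for the 06 089 error is that the BAPI runs the same consistency checks as the dialog transaction, so reducing the schedule-line quantity below what has already been goods-issued is rejected in RETURN no matter where the call comes from.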

  • Issue with printing production order after changes through CO02

    Hi,
Currently I have an issue with printing production order changes through CO02. If I go to print, all the operations are captured in the printout. But after that print, if I add any new operations to this production order and then print again, it does not capture the newly added operations. Also, an information popup appears: "Copies will be printed for order. Original list already printed". This message effectively means that even after the changes, the print function just prints copies of the originally printed list.
Can anyone help me get the newly added operations reflected in the printout through CO02?
    Regards,
    Shiva

    Hi Shiva,
Refer to SAP Note 28887.
Source: http://www.sapfans.com/forums/viewtopic.php?f=9&t=308577
Please search/Google before posting queries.
    Regards,
    SuryaD.

  • Photoshop CS6  mac OS X 10.9.5 - cannot update 13.0.6  - error "Update failed - unable to write at location." Tried manual update also failed. Tried to change permissions also failed.

I tried the write-permission steps in the thread below; same error message:
Re: CS6 updates and Mac write permissions
I need Adobe's help to solve this. My own searches in the forums and Google have all failed.

It is not an Adobe problem; you have an OS permissions problem. You can try to correct this by using Disk Utility to repair permissions. It is in the First Aid area of Disk Utility.

  • I have iWeb '08 and just switched from Mobileme to GoDaddy and using Filezilla.  Now when I make changes through iWeb it doesn't actually publish to my desktop folder.  In fact, it doesn't publish at all but says that it did.

I have iWeb '08 and just switched from MobileMe to GoDaddy, and I'm using FileZilla. I also just upgraded to Lion. Now when I make changes through iWeb it doesn't actually publish to my desktop folder. In fact, it doesn't publish at all but says that it did. How can I make changes in iWeb and publish to a folder?

    Choose the destination in the publish settings page as shown in the second example on this page...
    http://www.iwebformusicians.com/iWeb/Publish-Website.html

  • ABAP Query - Not showing the required changes through Z T.Code

    Hi,
I have an ABAP Query for OPEN DELIVERY QUANTITY (transaction code ZABC). I changed something in the query and activated it. After that I executed it and saw the required changes.
But when I execute it with transaction code ZABC, it does not show the changes, even though I saved and tested it with transaction SQ01.
Please tell me what the problem is. Why are the required changes not showing through transaction code ZABC?
    Regards...

Hi,
It seems that you changed the query in SQ02 and then just saved it without generating it.
Or maybe after generating you clicked the Save button again.
Unless you generate the query, the changes won't take effect.
Secondly, I don't think changing the query changes the name of the main program.
    Regds,
    Anil

  • Change detector failed while scanning for changes to type User

    I'm receiving lots of system log warnings in IdM.
These warnings are not bound to any particular action in IdM; they just appear while IdM is running.
Almost one warning per second.
    Change detector failed while scanning for changes to type Server
    Change detector failed while scanning for changes to type UserForm
    Change detector failed while scanning for changes to type User
    Change detector failed while scanning for changes to type Resource
    SysLog detail:
    Timestamp 123
    Event
    Server server11
    Severity Warning
    Component Repository
    Error Code OCDT00
    Message Change detector failed while scanning for changes to type User
    Reported cause java.lang.NullPointerException
    java.lang.NullPointerException
    at com.waveset.repository.ObjectChange.equals(ObjectChange.java:112)
    at java.util.HashMap.eq(HashMap.java:299)
    at java.util.HashMap.containsKey(HashMap.java:381)
    at java.util.HashSet.contains(HashSet.java:182)
    at com.waveset.repository.ObjectChangeManager$RemoteChangeDetector.dispatchChanges(ObjectChangeManager.java:398)
    at com.waveset.repository.ObjectChangeManager$RemoteChangeDetector.run(ObjectChangeManager.java:314)
    at java.util.TimerThread.mainLoop(Timer.java:512)
    at java.util.TimerThread.run(Timer.java:462)

Same here; I just deployed 8.1 with Oracle as the DB. It results in the syslog table growing to 10+ GB and still growing.
I wonder if there are others who are seeing this.
My "syslog -d 1" output looks like this...
2009-12-02 04:52:55.972 null W xxx RP OCDT00 Change detector failed while scanning for changes to type Account
2009-12-02 04:53:06.400 null W xxx RP OCDT00 Change detector failed while scanning for changes to type Server
2009-12-02 04:53:06.578 null W xxx RP OCDT00 Change detector failed while scanning for changes to type User
2009-12-02 04:53:06.914 null W xxx RP OCDT00 Change detector failed while scanning for changes to type Account
2009-12-02 04:53:17.479 null W xxx RP OCDT00 Change detector failed while scanning for changes to type Server
    C:\Program Files\Apache Software Foundation\Apache Tomcat 6.0.18\webapps\ims\bin
    lh syslog -d 1

  • Standby Database fails to read dictionary from redo log

    hi,
I am attempting to create a logical standby database on the same machine as the primary database. I have executed the steps outlined in the Oracle documentation several times, but end up with the same error. Details of the setup and error are provided below. Please help. Thanks.
    ==========
OS: Red Hat 8 (2.4.18-14)
    RDBMS: Oracle EE Server 9.2.0.3.0
    primary db init details:
    *.log_archive_dest_1='LOCATION=/usr3/oracle/admin/lbsp/archive/ MANDATORY'
    *.log_archive_dest_2='SERVICE=STDBY'
    standby db init details:
    log_archive_dest_1='LOCATION=/usr3/oracle/admin/stdby/archive/'
    standby_archive_dest='/usr3/oracle/admin/lbsp/archive_pdb/'
    Standby alert log file (tail)
    LOGSTDBY event: ORA-01332: internal Logminer Dictionary error
    Sun Jul 13 11:37:20 2003
    Errors in file /usr3/oracle/admin/stdby/bdump/stdby_lsp0_13691.trc:
    ORA-01332: internal Logminer Dictionary error
    LSP process trace file:
    Instance name: stdby
    Redo thread mounted by this instance: 1
    Oracle process number: 18
    Unix process pid: 13691, image: oracle@prabhu (LSP0)
    *** 2003-07-13 11:37:19.972
    *** SESSION ID:(12.165) 2003-07-13 11:37:19.970
<krvrd.c:krvrdfdm>: DDL or Dict mine error exit. 384
<krvrd.c:krvrdids>: Failed to mine dictionary. flgs 180
    knahcapplymain: encountered error=1332
    *** 2003-07-13 11:37:20.217
    ksedmp: internal or fatal error
    . (memory dump)
    KNACDMP: Unassigned txns = { }
    KNACDMP: *******************************************************
    error 1332 detected in background process
    OPIRIP: Uncaught error 447. Error stack:
    ORA-00447: fatal error in background process
    ORA-01332: internal Logminer Dictionary error
Another trace file created by the error is stdby_p001_13695.trc:
    Instance name: stdby
    Redo thread mounted by this instance: 1
    Oracle process number: 20
    Unix process pid: 13695, image: oracle@prabhu (P001)
    *** 2003-07-13 11:37:19.961
    *** SESSION ID:(22.8) 2003-07-13 11:37:19.908
    krvxmrs: Leaving by exception: 604
    ORA-00604: error occurred at recursive SQL level 1
    ORA-01031: insufficient privileges
    ORA-06512: at "SYS.LOGMNR_KRVRDREPDICT3", line 68
    ORA-06512: at line 1
There are no errors anywhere during the creation, mounting or opening of the standby database. After the initial log registration, any log switch on the primary is communicated to the standby and is visible in DBA_LOGSTDBY_LOG. Also, archived logs from the primary are successfully copied by the system to the directory pointed to by the standby DB's standby_archive_dest parameter.
I noticed that somehow, every time I issue the "ALTER DATABASE START LOGICAL STANDBY APPLY" command, the procedures and packages related to LogMiner become invalid. I recompile them, and after "APPLY" they become invalid again.
    Invalid object list:
    OBJECT_TYPE OBJECT_NAME
    VIEW DBA_LOGSTDBY_PROGRESS
    PACKAGE BODY DBMS_INTERNAL_LOGSTDBY
    PACKAGE BODY DBMS_STREAMS_ADM_UTL
    VIEW LOGMNR_DICT
    PACKAGE BODY LOGMNR_DICT_CACHE
    PROCEDURE LOGMNR_GTLO3
    PROCEDURE LOGMNR_KRVRDA_TEST_APPLY
Can anybody point out what I am doing wrong? Thanks for the help.

ORA-15001: diskgroup "ORAREDO3" does not exist or is not mounted
Have you set the parameter LOG_FILE_NAME_CONVERT on the standby when the online redo log locations are different?
Post from the standby:
SQL> select name, state from v$asm_diskgroup;
FAL[client, MRP0]: Error 1031 connecting to MKS01P_PRD for fetching gap sequence
ORA-01031: insufficient privileges
Post from the primary and the standby:
SQL> select * from v$pwfile_users;
If OTN failed 100% to help you, then why did you post another question?
First close all your old answered threads, and then continue your updates in your existing thread.
    Edited by: CKPT on Jul 9, 2012 11:45 AM

  • Transport of changes in 3KEI

    Hi All,
I made changes to the 3KEI table. When I generate a transport, the system shows objects in the request. The system allows me to change directly in Quality and Development, but how can I transport the new changes to Production? Is there any way or program to transport them to production?
It's urgent; please help.
    Thanks

If you have captured the changes made in transaction 3KEI, release the transport request to QA, check, and if you are happy, move it to production.
It does not matter whether QA is directly changeable or not. But in order to ensure the QA test was valid, make sure that QA is not changed while you are testing. Then move to PRD.

  • Transport and Change Management

    Hello,
We need to modify our MDM structure (add fields and tables). We would like to avoid unloading the repository. Can we do that in our test environment and move only the structure to Production while keeping the data in Prod?
    What is the Transport and Change Management patch? Is this what we need?
    Thanks for your help!

    Hello,
    This is not possible, as there is no support for Transport and Change management. You will always need some down-time in production. But there are some strategies to reduce the down-time.
    (1) You could use the Java ADMIN API to create a batch-script that performs the required repository modifications. Once you validate it works, unload your repository, run your program and load the repository again.
    (2) Create a "slave" repository and re-direct your production users to that repository. Unload your production repository and make all changes. Once done, reload that repository and switch back. Remember that this will only help to keep a "look-up" only version of the production repository...
    Regards
    Dirk

  • Failed ABAP Transports metric has grey status

    Dear colleagues,
In System Monitoring I found that the 'Failed ABAP Transports' metric has grey status. Then I checked the Data Collection Status and found:
Log for mainextractor E2E_DPC_PULL_CORE has 14 warning(s) for the last 180 minutes.
In the log I found:
Metric 005056B62E9D1ED1A7D246512DA45A0D/506A27FD986B4161E1008000AC100BB8 - Status: Data Provider E2E_CONFIG_VAL : E2E_CONFIG_VAL: ABAP_TRANSPORTS;16.04.2014 05:03:45 -16.04.2014 06:04
Then I tried to use E2E_DPC_GET_DATA_CHECK and found:
    Could you please help me to resolve this issue?
    Thanks a lot ,
    Alexander

    Hello Aleksandr,
Please check the following notes according to your Solution Manager version:
    1975717 - ST710, SP10,11 System Monitoring 'Failed ABAP Transports' gray status
    1792776 - ST710, SP06 System Monitoring 'Failed ABAP Transports'
    BR,
    K.

  • Transporting Workflow changes to QA

    Hi Experts,
I have some questions regarding transporting changes made to a workflow.
My question is this: a custom workflow has a lot of requests associated with it (both customizing and workbench).
My workflow has already been transported to QA. Now I have to make some changes to the send-mail step.
I have made the change and linked it to a transportable workbench request.
Will releasing this request alone move the change, or do I have to go to the basic data, activate the start condition again, and transport that too, so that the change works fine in QA?
I hope my question is clear.
    Regards,
    Radhika.

    I have made the change and linked it to a transportable workbench request.
Will releasing this request alone move the change?
Yes, there is no need to transport the linkage request again. Only the changes are reflected in the other system whenever you export a request to it.
And make sure that each time you make a change and the request is transported to the other system, you run transaction SWU_OBUF and refresh the organizational assignment; this keeps the workflow environment running smoothly.
