Problem in Transporting ZTABLE and SCRIPT

Hi Experts,
I am an ABAP developer. We have developed a custom script, ZSCRIPT, which fetches data from a ZTABLE. The problem is that when I try to transport the objects from the development server to quality, the ZTABLE and ZSCRIPT are not transported, i.e. the changes we made in development do not take effect in quality.
Could anyone suggest a solution to the above problem?
Thanks
AK

Hi Kumar,
First, take the transport request number.
Go to SE01 and enter the request number; you will see both the main request and its subtask.
Release the subtask first, then release the main (top-level) request.
It is then released from development and you can check it in quality.
Cheers,
Madhu
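For completeness: after the request is released in development, it still has to be imported on the quality system, usually from the STMS import queue there. A minimal command-line sketch of that import step (not from the original replies; the request number, client, and profile path are placeholders):

    tp addtobuffer DEVK900123 QAS pf=/usr/sap/trans/bin/TP_DOMAIN_DEV.PFL
    tp import DEVK900123 QAS client=100 pf=/usr/sap/trans/bin/TP_DOMAIN_DEV.PFL

If the import finishes without errors, the changed ZTABLE and ZSCRIPT should be visible in quality.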

Similar Messages

  • Problems With Transporting Interaction Center Scripts

    I have a problem when I select a script and click on the Transport button in the Web UI for the Interaction Center, IC_Manager role. After clicking the Transport button I need to check in the back end (transaction SE01) whether a customizing request has been created for the script I selected.
    So the problem is: how do we find out whether there is a customizing request for that script or not? Is there a pop-up window after clicking the Transport button that would give me the customizing request number, which could later be used to find the request in the back end?

    Hi,
    Try checking this in the BSP Application CRM_IC_ISE.
    If you open the script on the third tab (Script Transport) you can see the transport details.
    Hope this helps.
    Rgds,
    Rajiv
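    If no pop-up with the request number appears, a further option (my suggestion, not from the original reply) is to look the script up in the transport object table directly. A minimal ABAP sketch, where the script name is a placeholder:

    DATA lv_script_id TYPE e071-obj_name VALUE 'Z_MY_SCRIPT'.  " placeholder script ID
    DATA lt_e071      TYPE STANDARD TABLE OF e071.
    " E071 holds the object entries of all transport requests;
    " TRKORR in each result row is the request/task number.
    SELECT * FROM e071 INTO TABLE lt_e071
      WHERE obj_name = lv_script_id.

    The same lookup can be done interactively with SE16 on table E071, or by filtering modifiable customizing requests in SE10.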

  • Problem with shell commands and scripts from an Applescript Application

    Hi-
    I am fairly new to OSX software development. I am trying to build an application that creates a reverse SSH tunnel to my computer and starts OSXvnc. The idea is that my Mom or anyone else who needs help can connect to me without having to tinker with their firewalls.
    There are plenty of resources on how to do this, and I have found them. What I have a problem with is the following.
    I am building this application in Xcode as an Applescript application, because Applescript and shell scripting are the only forms of programming I know. I use an expect script to connect through SSH, and a "do shell script" for the raw OSXvnc-server application to allow screen sharing.
    The problem is that when I click on the button to launch OSXvnc-server or the button to launch SSH, the application freezes until the process it spawns is killed or finishes. For example, I can set SSH to timeout after 60 seconds of no connection, and then the Applescript application responds again. I even tried using the ssh -f command to fork the process, but that doesn't seem to help.
    I am also using "try" around each of the items.
    What am I doing wrong? How can I make the buttons in my app launch SSH and OSXvnc-server without hanging the application while it waits for them to finish?
    Thanks so much!

    See here for an explanation of the syntax.
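    A common workaround for the hang (a sketch under my own assumptions, not taken from the linked explanation; the host, ports, and the OSXvnc path are placeholders) is to redirect the command's output and background it inside the shell line, so "do shell script" returns immediately instead of waiting for the spawned process to exit:

    -- hypothetical reverse tunnel; replace helper@example.com and the ports
    do shell script "/usr/bin/ssh -f -N -R 5900:localhost:5900 helper@example.com > /dev/null 2>&1 &"
    -- hypothetical path to the bundled OSXvnc-server binary
    do shell script "/Applications/OSXvnc.app/Contents/MacOS/OSXvnc-server > /dev/null 2>&1 &"

    Without the redirection, ssh -f alone usually does not help, because the inherited stdout/stderr keep "do shell script" waiting.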

  • Transport of SAPscript form, printer definition and device type is not enough

    I transported the SAPscript form, the printer definition, and the device type of a thermal printer.
    On the original system the printout is OK, but on the target system it is not. What should I do?

    Thank you for your fast answer.
    As a matter of fact, I am a technical person. I think the output type belongs to the application. However, the difference is visible in SE71 -> Utilities -> Printing test -> Output device -> Print preview.
    As I wrote, I transported the corresponding SAPscript form, printer definition and device type.

  • Firefox 31.0 - slowness and script problems when using websites

    Since Firefox 31.0 was released, I have noticed slowness and script problems when using websites.
    Sometimes it takes up to several minutes to load a website, and sometimes there are error messages
    about an unresponsive script.
    pemigabo123

    Hello FredMcD,
    thank you for your message.
    But your answer to my question at support.mozilla.org is not helpful,
    although it is a professional technical answer from a Top 10 Contributor
    at Mozilla.
    There seems to be a recurring, unfixed problem whenever a new version of
    Firefox is released. Once all the problems of the previous version have
    been fixed, there is no problem at all. But while I am working with that
    now stable earlier version, the error messages, slowness and script
    errors suddenly reappear; when I then check the version, I find that a
    new version of Firefox has been released, and each new version needs
    some time until its errors are fixed.
    Maybe it is related to plugins such as Flash Player or Shockwave, but a
    professional browser should not raise error messages because of plugins
    while working with them.
    Another answer from a support.mozilla member suggested an overload
    problem caused by opening several Firefox windows. But opening many
    windows was no problem with version 30.0 or earlier once its errors had
    been fixed.
    The same unfixed problems have appeared in every version from Firefox
    25.0 to 30.0, and I wonder how long version 30.0 was actually current in
    practical use. Since version 31.0 was released, the same error messages,
    slowness and script errors have appeared again.
    At the moment Firefox 31.0 seems to work all right, but only time will
    tell whether it works without error messages, slowness and script
    problems.
    I hope this is a helpful answer to yours.
    Kind regards
    pemigabo123

  • Transport ERP and BW DataSource problem

    Hi,
    I'm trying to transport from the development system to quality. During this transport I found out that the quality BW has a connection problem with the quality ERP, so I decided to transport to the productive system instead.
    The ERP transports work fine (Dev > Quality > Productive). But every time, the BW transports end in error in Quality and Productive: they say that the DataSource and DTP are not available in BW (in both Quality and Productive), and I then activate the DataSource and DTP manually afterwards.
    Is this the normal way?
    I am asking because in Productive I can't activate anything afterwards.

    First check whether there is any change in the R/3 DataSource when you are transporting the changes in the BW system.
    As a practice, you need to transport the source-system-related transport first and then replicate the DataSource before moving the transport in the BW system. After the replication you can also activate the DataSources using the standard program available.
    Also check the connection between the R/3 and BI systems before moving the BW-related transport; the source system must be active in order to check whether the DataSource is present in R/3 or not, so always make sure of this point.
    About the DTPs: we are facing this issue in BI 7.0. You need to find out what is causing the DTPs to be deactivated; any change in the transformation can also cause a DTP to become deactivated.
    Hope this helps.
    Murali
    Edited by: Murali M on Aug 24, 2010 1:09 PM
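    As a concrete example of the "standard program" mentioned above (the program names are my assumption and should be verified on your release), replicated DataSources can usually be mass-activated from SE38 in the BW system:

    RSDS_DATASOURCE_ACTIVATE_ALL   " activates replicated 7.x DataSources for a source system
    RS_TRANSTRU_ACTIVATE_ALL       " activates 3.x transfer structures

    Running the relevant program after replication, and before re-importing the failed BW transport, avoids the manual activation step.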

  • Portal Transport Set(EXPORT) script not running.

    Hi,
    I have created an Instant Portal (10 Appli Rel 2) running on
    SUSE LINUX Enterprise Server 9 (i586).
    Now I want to create a transport set and export that Instant Portal to another
    SUSE LINUX server.
    The problem I am facing is that the script generated by the EXPORT WIZARD is
    for UNIX and has a .CSH extension.
    Since Linux uses .SH, I am not able to run it. Can anyone help me run it?
    Thanks

    Did you run the script that makes the transport set ready to be included?
    When you create a transport set, you automatically get the dump from it, but when you import it you need to run the script that performs the import at database level before you can import it in the portal. It is like doing two imports, one in the portal and one at the command line using the script, which you run like this:
    script.cmd -mode import -s portalschema -c orcl -pu orcladmin -d filename.dmp
    where -mode indicates that it is an import, -c is the database SID, -pu is the portal user, -d is the file created during the export, and -s is the database schema for the portal.
    Hope this helps.
    Greetings
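    On the .CSH extension itself (my suggestion, not part of the original reply): the extension does not determine the interpreter on Linux, so the exported script can normally be run by invoking a C shell explicitly, assuming the tcsh package is installed on the SUSE server. The file name below is a placeholder for the script generated by the export wizard:

    chmod +x mytransportset_export.csh
    tcsh ./mytransportset_export.csh

    Alternatively, the csh command (usually a link to tcsh on SUSE) can be used in the same way.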

  • Problem with FMIS 4 and streaming of live events

    We have a problem on our platform and it's driving us nuts... no, seriously... NUTS.
    We have triple-checked every possible component, from the hardware level up to the software configuration level.
    The problem: our platform consists of 2 origin servers with 6 edges talking to them (really beefy hardware). Once we inject a live stream into our two origins, we can successfully get the stream out via the edges and stream it via our player. Once we hit around 2200 concurrent connections, the FMIS servers drop all the connections busy with streams. From the logs, the only thing we can see is the following: tons of disconnects with status code 103, which according to the online documentation means "Client disconnected due to server shutdown (or application unloaded)".
    We simulated the scenario with the FMS load simulator utility... and we start seeing errors + all connections dropped around the 2200 mark.
    The machines are Dell blades with dual CPU Xeons (quad cores) with around 50 gigs of ram per server... The edges are all on 10 Gb/s ethernet interfaces as well. 
    We managed to generate a nice big fat coredump on the one origin and the only thing visible from inspecting the core dumps + logs is the following :
    2011-10-05 15:44:10    22353    (e)2641112    JavaScript runtime is out of memory; server shutting down instance (Adaptor: _defaultRoot_, VHost: _defaultVHost_, App: livestreamcast_origin/_definst_). Check the JavaScript runtime size for this application in the configuration file.
    And from the core dump :
    warning: no loadable sections found in added symbol-file system-supplied DSO at 0x7fff9ddfc000
    Core was generated by `/opt/adobe/fms/fmscore -adaptor _defaultRoot_ -vhost _defaultVHost_ -app -inst'.
    Program terminated with signal 11, Segmentation fault.
    #0  0x00002aaaab19ab22 in js_MarkGCThing () from /opt/adobe/fms/modules/scriptengines/libasc.so
    (gdb) bt
    #0  0x00002aaaab19ab22 in js_MarkGCThing () from /opt/adobe/fms/modules/scriptengines/libasc.so
    #1  0x00002aaaab196b63 in ?? () from /opt/adobe/fms/modules/scriptengines/libasc.so
    #2  0x00002aaaab1b316f in js_Mark () from /opt/adobe/fms/modules/scriptengines/libasc.so
    #3  0x00002aaaab19a673 in ?? () from /opt/adobe/fms/modules/scriptengines/libasc.so
    #4  0x00002aaaab19a6f7 in ?? () from /opt/adobe/fms/modules/scriptengines/libasc.so
    #5  0x00002aaaab19ab3d in js_MarkGCThing () from /opt/adobe/fms/modules/scriptengines/libasc.so
    #6  0x00002aaaab19abbe in ?? () from /opt/adobe/fms/modules/scriptengines/libasc.so
    #7  0x00002aaaab185bbe in JS_DHashTableEnumerate () from /opt/adobe/fms/modules/scriptengines/libasc.so
    #8  0x00002aaaab19b39d in js_GC () from /opt/adobe/fms/modules/scriptengines/libasc.so
    #9  0x00002aaaab17e6d7 in js_DestroyContext () from /opt/adobe/fms/modules/scriptengines/libasc.so
    #10 0x00002aaaab176bf4 in JS_DestroyContext () from /opt/adobe/fms/modules/scriptengines/libasc.so
    #11 0x00002aaaab14f5e3 in ?? () from /opt/adobe/fms/modules/scriptengines/libasc.so
    #12 0x00002aaaab14fabd in JScriptVMImpl::resetContext() () from /opt/adobe/fms/modules/scriptengines/libasc.so
    #13 0x00002aaaab1527b4 in JScriptVMImpl::postProcessCbk(unsigned int, bool, int) ()
       from /opt/adobe/fms/modules/scriptengines/libasc.so
    #14 0x00002aaaab1035c7 in boost::detail::function::void_function_obj_invoker3<boost::_bi::bind_t<void, boost::_mfi::mf3<void, IJScriptVM, unsigned int, bool, int>, boost::_bi::list4<boost::_bi::value<IJScriptVM*>, boost::arg<1>, boost::arg<2>, boost::arg<3> > >, void, unsigned int, bool, int>::invoke(boost::detail::function::function_buffer&, unsigned int, bool, int) ()
       from /opt/adobe/fms/modules/scriptengines/libasc.so
    #15 0x00002aaaab0fddf6 in boost::function3<void, unsigned int, bool, int>::operator()(unsigned int, bool, int) const ()
       from /opt/adobe/fms/modules/scriptengines/libasc.so
    #16 0x00002aaaab0fbd9d in fms::script::AscRequestQ::run() () from /opt/adobe/fms/modules/scriptengines/libasc.so
    #17 0x00002aaaab0fd0eb in boost::detail::function::function_obj_invoker0<boost::_bi::bind_t<bool, boost::_mfi::mf0<bool, fms::script::AscRequestQ>, boost::_bi::list1<boost::_bi::value<fms::script::IntrusivePtr<fms::script::AscRequestQ> > > >, bool>::invoke(boost::detail::function::function_buffer&) () from /opt/adobe/fms/modules/scriptengines/libasc.so
    #18 0x00000000009c7327 in boost::function0<bool>::operator()() const ()
    #19 0x00000000009c7529 in fms::script::QueueRequest::run() ()
    #20 0x00000000008b868a in TCThreadPool::launchThreadRun(void*) ()
    #21 0x00000000008b8bd6 in TCThreadPool::__ThreadStaticPoolEntry(void*) ()
    #22 0x00000000008ba496 in launchThreadRun(void*) ()
    #23 0x00000000008bb44f in __TCThreadEntry(void*) ()
    #24 0x000000390ca0673d in start_thread () from /lib64/libpthread.so.0
    #25 0x000000390bed44bd in clone () from /lib64/libc.so.6
    From what it looks like above, FMS is hard-crashing when it tries to use clone(2) (basically, when it is trying to spawn another process).
    I am really hoping there is someone out there who can guide us in the right direction on how to pinpoint why our platform cannot cope with a pathetic 2200 connections before the FMIS daemon drops all connected streams.
    There has to be someone out there who has run into this or a similar problem... HELP!
    Any feedback / ideas would be greatly appreciated.

    Thank you very much for the reply :-)
    We have been fiddling with the platform on many levels yesterday, and one thing we did was bump that value up from 1024 to 8192. This made a HUGE improvement: the platform now holds the live streaming connections (up to 8000 per edge).
    For future reference, and to aid other people who might run into this problem, it is a good idea to increase this value. From what we have seen, read and heard, the default value is fairly conservative. It is supposed to grow when the load demands it; however, if you have a large number of connections coming in at once from multiple locations, it can grow too quickly, which can result in the application being reloaded (which disconnects all users, i.e. all edge servers connected to this origin).
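    For other readers: the value being discussed appears to be the JavaScript runtime size flagged in the log message above. In FMS/FMIS it is normally set per application in Application.xml (the exact element nesting shown here is my assumption and should be checked against your server's documentation; the value is in KB):

    <Application>
        <JSEngine>
            <RuntimeSize>8192</RuntimeSize>
        </JSEngine>
    </Application>

    After changing it, the application (or the server) has to be restarted for the new runtime size to take effect.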
    Another option we were recommended to modify was the following :
    In adaptor.xml you currently have this line:
    <HostPort name="edge1" ctl_channel="localhost:19350" rtmfp="${ADAPTOR.HOSTPORT}">${ADAPTOR.HOSTPORT}</HostPort>
    You can set this to:
    <HostPort name="edge1" ctl_channel="localhost:19350" rtmfp=":80">:80</HostPort>
    <HostPort name="edge2" ctl_channel="localhost:19351" rtmfp=":1935">:1935</HostPort>
    This will create two edge processes for both ports 80 and 1935. Currently both ports are served from the same fmsedge process.
    Of course this is not a huge performance improvement, but should further distribute load over more processes which is a good thing. Especially when there are that many connections.
    This setting can be made on all machines (origin + edge)
    Hopefully this could help other people also running into the same problems we have seen ...

  • Problem while transporting transfer rules in BW 3.5

    Hi All,
    I have a problem while transporting transfer rules in BW 3.5.
    I have just checked the box for conversion of the transfer structure / transfer rules of an InfoObject and tried to transport it to quality. I got the error message below:
    The selections   T ZBW_ZBILLRATP_TEXT_BA specify more than one DataSource
    Diagnosis
    You need to select an individual DataSource based on the selection conditions
    InfoSource =
    Source System =
    Object Version = T
    Transfer Structure = ZBW_ZBILLRATP_TEXT_BA
    although the select event is not unique.
    Procedure
    Check the objects to which the selection applies.
    Reference to transfer structure ZBW_ZBILLRATP_TEXT_BA not available. No activation possible.
    Message no. RSAR436
    Diagnosis
    Transfer structure ZBW_ZBILLRATP_TEXT_BA should be transported into this system.
    However, no DataSource mapping refers to this transfer structure.
    System response
    The transfer structure was not activated or deleted.
    Procedure
    Ensure that DataSource mapping, with a reference to the transfer structure ZBW_ZBILLRATP_TEXT_BA is on the same transport request. Use the transport connection to create a consistent request.
    Can anyone provide details on how to overcome this issue? I have tried transporting all related objects.
    Regards
    Lakshmi

    Hi Lakshmi,
    Before transporting the transfer rules, make sure that you are transporting the active versions of the DataSource and data target between which your transfer rules exist.
    Also make sure that the transfer rules are active before they are transported.
    Hope it helps!
    Regards,
    Pavan
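    One way to build the consistent request that the error message asks for (my suggestion; the menu wording may differ slightly by release) is to collect the objects through the BW transport connection rather than picking them individually:

    RSA1 -> Transport Connection -> select the InfoSource / transfer rules
    Grouping = "In Data Flow Before" (or "Only Necessary Objects")
    drag the objects into the collection pane and write them all to one transport request

    This normally pulls the DataSource mapping onto the same request as transfer structure ZBW_ZBILLRATP_TEXT_BA, which is what message RSAR436 is complaining about.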

  • Problem in transport ID

    Hi gurus,
    We want to transport our PI 7.1 SP06 scenarios from DEV to QA. We have taken the following actions:
    - We've registered our TS in SLD from back-end's Tx. RZ70.
    - We've created our BS
    - We've imported the SWC to the ESB.
    - We want to import the scenarios to ID, but we get the following error:
    com.sap.aii.utilxi.misc.api.BaseRuntimeException: Import failed because of business system transfer of object Receiver Determination  | BS_xxxx | Interface | * | *: Obligatory transport target for business system BS_xxxx not found in System Landscape Directory.
    We have 3 SLDs (one for each environment). All our BSs have the same names in DEV and QA (with the exception of "INTEGRATION_SERVER_<SID>"). Every BS in the DEV environment belongs to the group "GROUP_PI", and every BS in QA also belongs to "GROUP_PI".
    We have found this thread: "About ABAP Technical Systems and Business Systems" and we are trying to apply it, but have these problems:
    - Exporting all the SLD content from DEV to QA, we get errors about nonexistent referenced objects.
    We have these questions:
    - Considering that the BSs have the same names between environments, why do we get that error when importing ID objects?
    - Can we use the procedure from that thread to solve our problem? Why do we get that reference error after having exported/imported all of the SLD's content?
    If we follow the procedure of that thread, QA's SLD will end up holding all the objects from DEV's SLD plus its own; i.e. if we have 50 BSs in DEV (ABAP, Java + third party) we would have 100 in QA!? We don't see this as an optimal solution. Which would be the best way?
    Thanks in advance.

    Hi,
    Whatever is mentioned in that thread is correct and you can follow it. But in that thread, they have asked you to:
    1. Create transport groups in your QA SLD - one transport group for all DEV business systems and one transport group for all QA business systems.
    2. Then create transport targets.
    But I think you have done steps 1 and 2 in the DEV SLD itself, so when you transport the business system from the DEV SLD to the QA SLD, it expects the transport target defined in the DEV SLD to be present in the QA SLD; since it is not, it throws the error below:
    "Import failed because of business system transfer of object Receiver Determination  | BS_xxxx | Interface | * | *: Obligatory transport target for business system BS_xxxx not found in System Landscape Directory."
    And I don't know why you are using the same business system group name "GROUP_PI" for both the BSs in QA and the BSs in DEV.
    If you create the transport groups and transport targets correctly, your import will be successful.
    Thanks,
    Laawanya Danasekaran

  • Problem during transport

    Dear All,
    I have run into a problem while transporting my ABAP objects from the DEV to the PROD server. I am not sure whether it is a Basis issue or my problem.
    The error is "Request after end mark".
    One more question about transaction STMS: should the status for the production server be LOCK or IMPORT QUEUE IS OPEN? If it should be IMPORT QUEUE IS OPEN, how do I get there, since our status is currently LOCK?
    Thanks for your help.
    regards;
    jee

    Hi,
    The "Request after end mark" will be displayed due to the following:
    Requests after the end mark are not imported during the next import. The end mark is automatically deleted only when the next import has completely ended. Afterwards, the requests can be imported. You can see the legend information in the following website:
    http://help.sap.com/saphelp_nw04/helpdata/en/44/b4a3847acc11d1899e0000e829fbbd/content.htm
    However, as per your questions I understand that the TMS configuration is not proper. Can you check the r3trans and the transport routes/layers again and see what is missing?
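    If the end mark itself has to be cleared so that the waiting requests can be imported, one option (my assumption; STMS also offers a delete-end-mark function in the import queue, and normally Basis would handle this) is the transport control program on the target host:

    # run as <sid>adm on the PROD host; SID and profile path are placeholders
    tp delstopmark PRD pf=/usr/sap/trans/bin/TP_DOMAIN_<DOMAIN>.PFL

    Check the import queue in STMS afterwards to confirm that the remaining requests are importable.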

  • Problems when transporting from DEV to PROD

    Hi experts, I have a problem during transport from DEV to production.
    We have DEV, QAS and PRODUCTION systems, and transports from DEV to QAS work fine. But when we transport from DEV to PROD, we receive an error like:
    Start of after-import method RS_IOBC_AFTER_IMPORT for object type IOBC (activation mode)
      Error when checking InfoObject catalog ZCM_UNIMET_CHA01
      InfoObject ZCM_RENC does not exist in active version
      InfoObject ZCM_RENM does not exist in active version
    1. The question is: why, if the transport went OK from DEV to QAS, did the transport from DEV to PROD end in error?
    2. Are there any items to check in production before transporting?
    PS: All of the objects in DEV are in ACTIVE status.
    Tanks,
    Ramon Sulvaran

    The transport ended in QAS with error code (4) and in Production with error code (8). The detail of the error in Production begins with:
    Start of after-import method RS_ODSO_AFTER_IMPORT for object type ODSO (activation mode)
    Activation of the DataStore objects of type Object
    Check of the DataStore objects of type Object
    Checking DataStore object ZCM_DS01
    DataStore object ZCM_DS01 is consistent
    Saving the DataStore objects of type Object
    Internal activation (DataStore object)
    Preprocessing / creation of DDIC objects for DataStore object ZCM_DS01
    Table/view /BIC/AZCM_DS0100 (type 0) of DataStore object ZCM_DS01 recorded
    Create/delete the indexes for the active table
    Table/view /BIC/AZCM_DS0140 (type 4) of DataStore object ZCM_DS01 recorded
    Table type /BIC/WAZCM_DS0100 recorded
    Table/view /BIC/VZCM_DS012 (type VIEW) of DataStore object ZCM_DS01 recorded
    Change log of DataStore object ZCM_DS01 recorded successfully
    Writing object catalog entries (TADIR)
    Object TABL /BIC/AZCM_DS0100 written to the object catalog (TADIR)
    Object TABL /BIC/AZCM_DS0140 written to the object catalog (TADIR)
    Object TABL /BIC/VZCM_DS012 written to the object catalog (TADIR)
    Object TTYP /BIC/WAZCM_DS0100 written to the object catalog (TADIR)
    Activate all ABAP Dictionary objects (5):
    Error/warning in Dict. activation program, detailed log -> Detail
    Activate table /BIC/AZCM_DS0100
    Enhancement category for table missing
    Enhancement category for include or subtype missing .......
    When I go to BW production to check the object, I see that the object was created but is in inactive status (grey). When I try to activate it, I see the error: Error/warning in Dict. activation program, detailed log -> Detail.

  • Problem in Transporting Function module

    Hi,
    I have created a BAPI (RFC-enabled function module).
    For testing I created several copies and a BAPI object, and the last version was successful.
    But now I have a problem with the transport: I have included the function module and the function group in the new request number, but the include program is not picking up the new request number, and when I go to version management there is no number there.
    Can anybody help me with this?
    Thanks and best regards,
    Kusum.

    Hi,
    Please post only [SAP Business One] forum questions here.
    Please close the thread.
    Jeyakanthan
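    As a general pointer on the underlying transport question (my suggestion, not part of the original replies): transporting the function group as a whole also carries its includes, so one option is to add the function group to the request as a complete object in SE09/SE10 (Request/Task -> Object List -> Include Objects), with an entry of the form below; the function group name is a placeholder:

    PgmID  Obj   Object name
    R3TR   FUGR  Z_MY_FUNCTION_GROUP

    After releasing and importing the request, the include should arrive together with the rest of the function group.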

  • Problem in transporting change document

    Hi guys,
    I have created a change document for a customized field.
    I am having a problem while transporting this change document:
    the view is not generated in the production system, and the
    function group is blank for the change document object in production.
    Is there another way of generating the change document?
    Kindly reply.
    Regards,
    Siya

    Check the following:
    1. Is the deletion flag set?
    2. Execute MCDOKDEL and check for the document.
    3. Check the archiving path / storage path location.
    Thanks
    S.N.
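    If the generated function group really is empty in production, another thing worth checking (my assumption; this is the standard change-document workbench, not something confirmed in this thread) is the generation of the change document object itself:

    Transaction SCDO -> select the change document object -> Generate update pgm.
    Make sure the generated include and function group are assigned to a transportable
    package and are on the same transport request as the change document object.

    Re-importing that request should then bring the generated objects into production.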

  • Problem in transporting WPC Content

    Hi,
    I am facing a problem in transporting WPC content from server1 to server2. After the transport, the web pages are not displaying their contents; they show empty layouts. I have done the layout configurations. The web pages contain iViews and HTML content from KM, and they cannot be edited. It is very urgent; please help me get through this.
    Regards,
    Mallini.V

    Hi MaliniV,
    Before you transfer the WPC content from one server to another, you need to be very careful about the settings and configurations. Did you deploy WPC on the other server? Did you create any custom page layouts on the development server? If yes, you need to do the same on the production server as well; if not, it won't work.
    I had the same problem before, but after checking all of these I corrected it.
    I hope this helps.
    Thanks
    Suresh
