Large transports a problem?

Hello Folks,
In previous projects we always collected objects into transports by type, bottom-up, e.g.:
1. Infoobjects
2. Data Sources
However, we have now enabled the standard transport system and get prompts for every object creation. This is convenient for developers, as you save the collection process and don't forget to collect any objects.
The general idea now is to group objects into process deliveries, e.g. shipment load -> this would then contain all DSOs, cubes, process chains and even the queries associated with this process.
With this approach the transports can get quite packed with objects.
Are large transports a problem at all? My assumption is that a large transport will simply run a bit longer on the target, and yes, there might be some exercise in fixing problems (a dependent object sitting in another transport). But overall, will large transports work, or has anyone had negative experiences?
Thanks for all replies in advance,
Axel

>
Axel wrote:
>
> However, we have now enabled the standard transport system and get prompts for every object creation. This is convenient for developers, as you save the collection process and don't forget to collect any objects.
Actually, we don't have the standard transport switched on in our system, as we don't want to collect everything we create; it has its own pros and cons.
Anyway, regarding large transports: we once tried to transport 150 queries in one transport along with the data flow (in a landscape where several transports are moved per day across tracks). We were not able to release the transport (the reason may be as suggested by Shailesh Patil); it used to get stuck in red.
Finally we managed to get help from Basis and imported it in splits.
On the other hand, a large transport may sometimes go through as well.

Similar Messages

  • Transport Stream Problems

    Compressor keeps locking up when I set it to export with the "MPEG-2/Transport Stream" setting. I would like to export with this setting because I believe this is how Toast encodes to Blu-ray.
    Any ideas/thoughts/solutions?
    Go Hornets!

    I'm not sure about your transport stream problem, but Toast doesn't need it; it will use its AVCHD encoder on a regular QT movie; I exported a self contained QT movie from an FCP timeline, dragged it into the Toast window, and let it encode; it worked well.
    If you send Toast a previously compressed file, it will encode it again in the AVCHD format, probably at a loss of quality.
    Toast can burn the Blu-ray format to a standard DVD, giving you about 30 minutes of HD footage using your standard DVD burner.

  • Importing large .m2v files problem

    I am currently using the CS3 version of Encore and have come across a problem when trying to import a large (3.1 GB) m2v file that was created in PP CS3.
    When I try to import, Encore starts the import but then suddenly closes down completely. I can import smaller m2v files without a problem.
    Does anyone have any suggestions?
    My system is an Intel Core 2 Quad QX6800, 4 GB RAM, GeForce 8800 GTX graphics card, and I'm using the XP Pro operating system.
    Ian

    I had similar problems with CS3. This is my research and solution (so far...).
    When outputting MPG files for HD, if you don't do multiplexing in Premiere, it generates an M2V file and a WAV file. Encore basically crashed on every import of the m2v (irrespective of importing as timeline or as asset) if it was larger than some value (experimentally around 8 GB).
    Someone suggested that it had to do with the XMPSES files, so put the m2v and wav file in a separate directory and import from there (or delete the XMP and XMPSES files). I tried this and it does work, but the chapter-point information is lost, so in the end I tried to avoid creating >7 GB files for import into Encore CS3. However, later, with another project, Encore did not even accept 4 GB files.
    In the end I found the solution to be to have Premiere generate a transport stream (i.e. a multiplexed m2t file instead of two separate files for audio and video (wav and m2v)). Get a license for the AC3 encoder, switch on multiplexing, and Encore happily imports the resulting files without any problem (so far).
    Hope this helps
    Lucien

  • Transport request problem

    Hi all,
    When releasing the transport request from the development client, the subrequest is released properly, but when releasing the main request we get the error:
    " Test call of transport control program (tp) ended with return code 0232
        Message no. TK094
    Diagnosis
        Your transport request could not be exported, since all requirements
        were not fulfilled.
        Calling the transport control program tp
           "tp EXPCHK DEVK904964 pf=/usr/sap/trans/bin/TP_DOMAIN_PRD.PFL
        -Dtransdir=/usr/sap/tr"
        which checks the export requirements, returned the following
        information:
        Return code from tp:    0232
        Error text from tp:     ERROR: Connect to DEV failed (20071030084950,
        prob "
    Can anyone suggest what to check?
    Thanks in advance.
    sk.

    Hi SK,
    As the transport log itself says, the connection to DEV failed. There may be a connection issue in STMS. Ask your Basis team whether there is any problem.
    Regards,
    Atish

  • Transport request problem with a ztable

    Hello,
    I have some problems releasing a customizing transport request. It contains "table contents" of a Z table. This Z table is marked as a "customizing table" (option C).
    Error message is "task desk900213 belongs to a different category"
    Thanks.

    Hi Alberto,
    What type of request are you using? Also let me know the classification of the task.
    I mean, if a workbench request is giving this error, then try using a customizing one, and if a customizing one is giving this error, try a workbench transport.
    I guess this is related to the thread:
    Re: change object transport order
    Also I hope you have assigned a development class to the Z table.
    Please award points accordingly.
    Regards.
    Ruchit.

  • Stock transport order problem

    Hi guys,
    <b>Requirement:</b> The supplying plant's goods-issue valuation type should be automatically updated in the receiving plant while posting the goods receipt.
    1. A stock transport order is created in the receiving plant with valuation type V1.
    2. The supplying plant posts the goods issue with respect to the stock transport order with valuation type V2.
    3. But the receiving plant is showing valuation type V1 only. I want this to be automatically updated to valuation type V2 from the supplying plant while posting the goods receipt.
    <b>NOTE:</b> Both valuation types V1 and V2 are configured in both the supplying and receiving plant.
    How can the above problem be resolved?

    Use transaction MBSU to receive the material in the receiving plant against the supplying plant's document.
    If you do a goods receipt against the STO, it will pick up the data from the STO only, i.e. V1 and not V2.
    Reward if useful.

  • Large Data file problem in Oracle 8.1.7 and RedHat 6.2EE

    I've installed the RedHat 6.2EE (Enterprise
    Edition Optimized for Oracle8i) and Oracle
    EE 8.1.7. I am able to create very large file
    ( > 2GB) using standard commands, such as
    'cat', 'dd', .... However, when I create a
    large data file in Oracle, I get the
    following error messages:
    create tablespace ts datafile '/data/u1/db1/data1.dbf' size 10000M autoextend off
    extent management local autoallocate;
    create tablespace ts datafile '/data/u1/db1/data1.dbf' size 10000M autoextend off
    ERROR at line 1:
    ORA-19502: write error on file "/data/u1/db1/data1.dbf", blockno 231425
    (blocksize=8192)
    ORA-27069: skgfdisp: attempt to do I/O beyond the range of the file
    Additional information: 231425
    Additional information: 64
    Additional information: 231425
    Does anyone know what's wrong?
    Thanks
    david

    I've finally solved it!
    I downloaded the following jre from blackdown:
    jre118_v3-glibc-2.1.3-DYNMOTIF.tar.bz2
    It's the only one that seems to work (and god, have I tried them all!)
    I've no idea what the DYNMOTIF means (apart from being something to do with Motif - but you don't have to be a linux guru to work that out ;)) - but, hell, it works.
    And after sitting in front of this machine for 3 days trying to deal with Oracle's, frankly PATHETIC install, that's so full of holes and bugs, that's all I care about..
    The one bundled with Oracle 8.1.7 doesn't work with Linux redhat 6.2EE.
    Doesn't Oracle test their software?
    Anyway I'm happy now, and I'm leaving this in case anybody else has the same problem.
    Thanks for everyone's help.

  • KM transport related problem

    Hello guys,
    I have done the offline transport of KM data, and as the SAP documentation says, only ACLs get transported. The problem is that we are not getting the Collaboration tab in the folder or document Details property.
    We are only getting View, Actions and Settings.
    I guess this problem is due to ACLs, so I tried giving FULL CONTROL to a user, but it's not working. Also, the Settings tab only shows a few options, like properties, permissions, versioning and templates.

    I assume that the 'virtual' system is QA.
    So you have set up TMS using the real DEV and a virtual QA. You set up the transport route, create a transport and release it.
    At the time you release the transport, it is already applied in DEV. Why should it be in the DEV import queue then? It would be shown in the virtual system's import queue (second system/QA), but not in the DEV import queue.
    In any case, you need not 'apply/import' that transport 'again' in DEV, when it was already created and released from DEV.
    Thanks

  • Large numbers calculation problem (determinant calculation)

    Hello experts,
    I have a really interesting problem. I am calculating a determinant in ABAP with large numbers (in a CRM 5.0 system).
    My formula for the determinant is:
    FORM calculate_determinant USING    det      TYPE zsppo_determinant
                               CHANGING value    TYPE f .
      value =
        (  1 * det-a11 * det-a22 * det-a33 * det-a44 ) + ( -1 * det-a11 * det-a22 * det-a34 * det-a43 ) +
        ( -1 * det-a11 * det-a23 * det-a32 * det-a44 ) + (  1 * det-a11 * det-a23 * det-a34 * det-a42 ) +
        ( -1 * det-a11 * det-a24 * det-a33 * det-a42 ) + (  1 * det-a11 * det-a24 * det-a32 * det-a43 ) +
        ( -1 * det-a12 * det-a21 * det-a33 * det-a44 ) + (  1 * det-a12 * det-a21 * det-a34 * det-a43 ) +
        (  1 * det-a12 * det-a23 * det-a31 * det-a44 ) + ( -1 * det-a12 * det-a23 * det-a34 * det-a41 ) +
        ( -1 * det-a12 * det-a24 * det-a31 * det-a43 ) + (  1 * det-a12 * det-a24 * det-a33 * det-a41 ) +
        (  1 * det-a13 * det-a21 * det-a32 * det-a44 ) + ( -1 * det-a13 * det-a21 * det-a34 * det-a42 ) +
        ( -1 * det-a13 * det-a22 * det-a31 * det-a44 ) + (  1 * det-a13 * det-a22 * det-a34 * det-a41 ) +
        (  1 * det-a13 * det-a24 * det-a31 * det-a42 ) + ( -1 * det-a13 * det-a24 * det-a32 * det-a41 ) +
        ( -1 * det-a14 * det-a21 * det-a32 * det-a43 ) + (  1 * det-a14 * det-a21 * det-a33 * det-a42 ) +
        (  1 * det-a14 * det-a22 * det-a31 * det-a43 ) + ( -1 * det-a14 * det-a22 * det-a33 * det-a41 ) +
        ( -1 * det-a14 * det-a23 * det-a31 * det-a42 ) + (  1 * det-a14 * det-a23 * det-a32 * det-a41 ).
    ENDFORM.
    The det values are also of type f. The problem is that for some det values I get the right result and for others I get wrong values... I also tried retyping the value variable as type p, but without success. Maybe I used the wrong types, or there is some ABAP rounding of numbers which causes the wrong result.
    Any good ideas for a solution? <text removed>. Thanks for your time.
    Edited by: Matt on Sep 14, 2010 9:17 AM

    Hi Lubos,
    phew! that sounds far from SAP scope, but from Maths' numerical methods. Let's see if I can remember something about my lessons at University...
    - One issue can arise when adding and subtracting terms which are very similar, because the rounding error tends to grow quite fast. Try to add the positive terms on one hand and the negative terms on the other, then subtract one from the other.
    - Please take into account that the determinant can be significantly close to zero when the matrix is ill-conditioned, that is, when the rank is 4 but the whole determinant is close to 0. Instead, try a [Singular Value Decomposition|http://en.wikipedia.org/wiki/SVD_(mathematics)] or an [LU decomposition|http://en.wikipedia.org/wiki/LU_decomposition].
    I hope this helps. Kind regards,
    Alvaro
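    Alvaro's two suggestions can be sketched as follows. This is a minimal illustration in Python rather than ABAP (the thread's language), so the function names and the sample matrix here are hypothetical, not from the original code:

```python
import itertools
import math

def det_cofactor_split(m):
    """4x4 determinant via the same permutation expansion as the ABAP
    FORM above, but accumulating positive and negative terms separately
    (Alvaro's first suggestion) to limit cancellation error."""
    pos, neg = 0.0, 0.0
    for perm in itertools.permutations(range(4)):
        term = math.prod(m[i][perm[i]] for i in range(4))
        # parity of the permutation decides the sign of the term
        inversions = sum(1 for i in range(4) for j in range(i + 1, 4)
                         if perm[i] > perm[j])
        if inversions % 2 == 0:
            pos += term
        else:
            neg += term
    return pos - neg

def det_lu(m):
    """Determinant via LU decomposition with partial pivoting
    (Alvaro's second suggestion; numerically much better behaved)."""
    a = [list(row) for row in m]
    n = len(a)
    det = 1.0
    for k in range(n):
        # choose the largest pivot in column k for stability
        p = max(range(k, n), key=lambda i: abs(a[i][k]))
        if a[p][k] == 0.0:
            return 0.0
        if p != k:
            a[k], a[p] = a[p], a[k]
            det = -det  # each row swap flips the sign
        det *= a[k][k]
        for i in range(k + 1, n):
            f = a[i][k] / a[k][k]
            for j in range(k, n):
                a[i][j] -= f * a[k][j]
    return det

sample = [[4.0, 2.0, 1.0, 3.0],
          [0.0, 5.0, 2.0, 1.0],
          [1.0, 0.0, 3.0, 2.0],
          [2.0, 1.0, 0.0, 4.0]]
print(det_cofactor_split(sample))
print(det_lu(sample))
```

    For well-conditioned matrices the two agree closely; when the expansion terms nearly cancel, the LU route is the one to trust.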

  • Vsftpd and large downloads - cache problem

    Today I set up vsftpd to transfer a lot of files (~30 GB) to my friend's machine. At the start all was OK: the full download speed was 2 MB/s, and iotop showed these MBs being read from the FS. When about 4 GB had been downloaded from me, strange things happened. Despite the config file (max_clients and max_per_ip were 10; my friend was using 10 simultaneous connections in FileZilla), many of the following messages appeared in the log (xxx.xxx.xxx.xxx is my friend's address):
    Tue Aug 24 16:30:05 2010 [pid 2] CONNECT: Client "xxx.xxx.xxx.xxx", "Connection refused: too many sessions."
    Also the speed fell dramatically: at the start there was ~220 KB/s per connection, now it is lower than 10 KB/s. iotop shows only occasional 124 KB reads from vsftpd. The cache has filled all my memory (600 MB of data; the machine is running KDE4 plus some documents and Opera):
    $ free -m
    total used free shared buffers cached
    Mem: 3862 3806 55 0 441 2766
    -/+ buffers/cache: 599 3263
    Swap: 2047 0 2047
    Also, the responsiveness of the system changed strangely: operations with the disk are as fast as usual, but operations with RAM became much slower. E.g., I use pacman-cage, and now pacman -Syu works many times slower. I know that a reboot would take away this problem, but that's only a partial solution.
    Is this the intended caching behavior? (IMO, it's rather not.)
    How can I/O be optimized so as not to overload the cache?
    P.S. My vsftpd.conf:
    listen=YES
    # User access parameters
    anonymous_enable=YES
    anon_upload_enable=NO
    local_enable=YES
    write_enable=NO
    download_enable=YES
    connect_from_port_20=YES
    # Userlist
    #userlist_deny=NO
    userlist_enable=NO
    #userlist_file=/etc/vsftpd/vsftpd_users
    force_dot_files=YES
    hide_ids=YES
    deny_file={lost+found}
    ls_recurse_enable=YES
    local_umask=026
    ascii_download_enable=NO
    # Limits
    max_clients=10
    max_per_ip=10
    #connect_timeout=60
    data_connection_timeout=3000
    idle_session_timeout=6000
    # Messages
    dirlist_enable=YES
    dirmessage_enable=YES
    ftpd_banner=Welcome to Stonehenge-III FTP archive
    # Encryption
    ssl_enable=NO
    allow_anon_ssl=NO
    force_local_data_ssl=NO
    force_local_logins_ssl=NO
    ssl_tlsv1=YES
    ssl_sslv2=NO
    ssl_sslv3=NO
    rsa_cert_file=/etc/vsftpd/vsftpd.pem
    pam_service_name=vsftpd
    # Chroot everyone
    chroot_local_user=YES
    passwd_chroot_enable=YES
    chroot_list_enable=NO
    #local_root=/srv/ftp/
    # Logging
    xferlog_enable=YES
    P.P.S. I tried to find the appropriate forum. If I was wrong, please move this to a better one.

    Since you're on a fast network, lower your timeout values; they are much too high. Also, increase your connections to 15, but tell your friend to only download using 10 or fewer connections.
    Are the files being downloaded small (many files) or large ones? Check /var/log/everything.log for related error messages, e.g. I/O, file descriptors.
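    For example, the suggestion above would translate into something like the following vsftpd.conf fragment; the exact timeout values here are illustrative guesses, not tested settings:

```
# Limits: a little headroom above the client's 10 parallel connections
max_clients=15
max_per_ip=15
# Timeouts: on a fast link these can be far lower than 3000/6000 seconds
connect_timeout=60
data_connection_timeout=300
idle_session_timeout=600
```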

  • Are large dimensions a problem?

    Hello, I am looking to possibly purchase Premiere for some video editing.  I have been using Camtasia as a hack method, but I've learned that Camtasia does not deal well with large dimensions.  I'm looking to request authorization to purchase Premiere, but I want to ensure that Premiere is the tool I need for what I'm doing.
    In short, I am taking rather large screen motion captures for instructional purposes.  I need to blur out confidential information and silence portions of the audio.  The dimensions of these captures are 1920 x 1200.  The duration of these videos range from about 40 seconds to 14 minutes (which is 1.45 GB in size).
    Has anyone worked with movies of these dimensions in Premiere?  I'm hoping to find some anecdotes that I can bring to my manager before making this request.  I'd appreciate any input folks have on this.
    Kevin

    Kevin,
    If you are looking to purchase, I would assume that you are looking at PrPro CS4. Is that correct? Unfortunately, you have posted to the Premiere (precursor to PrPro) forum. Maybe one of our tireless MODs will move the post over to the PrPro forum, where you will get a lot more traffic.
    As to the dimensions: yes, PrPro can handle those easily. Your satisfaction will be tied to two things: your computer, and the full specs of your source footage. With a good, stout editing machine and appropriate source footage, you will have no problems.
    In the Hardware sub-forum, Harm Millaard has written several worthwhile articles on building/buying an editing rig. In the PrPro forum, there is much discussion of cameras and their footage, and what works best in PrPro.
    When this post gets moved, you will receive a lot of worthwhile comments that will steer you in the right direction.
    Good luck, and do not be surprised, when Curt or Jeff moves the post.
    Hunt

  • Transporting request problem

    Hi experts,
    I have a peculiar problem. I created a database table A and modified a table B in DEV. Table A was cleared in testing and QA, and the transport was released to production. Unfortunately, that request also had the modifications made to another table B, which were not finalised, and the function group of the maintenance generator of table B was deleted. Now we are not able to maintain table B in production. Does anyone have ideas regarding this? Please help.

    Dear Anirvesh,
    Try regenerating the table maintenance generator in the DEV server, then check whether the object is complete in SE10. If not, open the related function group in SE80 and edit all included objects, then transport the whole set of table maintenance objects to production.
    It is suggested to always check the transport request's objects in SE10 before transporting it to production.
    Rgds,
    TSWINEDYA

  • Transport landscape problem (cannot change the object)

    Hi All,
    I have a problem changing my objects in the Integration Repository.
    Initially, I had only 1 XI server (dev and prod), until at a certain point I had to add an additional server for development. So I installed a fresh new XI in a new box, exported the repository objects from production and imported them into this new box.
    But the problem is that I can't change any of the objects. Is there any workaround to change the configuration so I can use my new box as development and later transport all the changes back to the original production box?
    I would appreciate any advice you can give.
    Thank you and Best Regards
    Fernand

    Hi,
    If you need to change anything in the repository, you need to click on the software component in the IR; at the bottom of the screen you will find two checkboxes that allow (or not) changing objects.
    If you want to change anything in a communication channel, you just need to go to change mode.
    You can change anything in the ID.
    Thanks
    Swarup

  • Large list memory problem

    Hi,
    I am using MX2004 (10.1), and I am finding that when I am working with large lists, there seems to be a memory leak: the memory is not returned when the list is finished with. The projector is being run on Windows XP.
    This only seems to happen in a projector; it does not happen in authoring.
    Please see the attached code for a test sample I made.
    My actual application creates a list of property lists that is about 1500 elements x 10 properties, but the attached example has the same effect.
    The sample basically repopulates an array with 30000 text elements every 5 seconds.
    If you make this into a projector, you will see (if the problem is not limited to the 5 machines I am using) that every 5 seconds the memory in use will increment and never go down, despite the fact that the list is VOID at the end of each calculation.
    Has anybody got any explanation for this, a workaround, or can anybody at least replicate this so I know I am not going mad?
    I have tested creating the projector on several machines in our office, and on several flavours of Windows XP, all with the same result.
    Thanks for any ideas.

    Tested, and I can verify that you've found a bug. And it's probably worse than it seems.
    First of all, it has nothing to do with using globals. You can replicate the leak by publishing a movie containing the following frame script:
    on beginsprite me
      the debugplaybackenabled = true
      repeat while not the shiftdown
        nArray = []
        repeat with i = 1 to 30000
          nArray.add( "text " & i )
        end repeat
        put the milliseconds
      end repeat
    end
    Second, it's not a list issue (not releasing elements on cleanup). I tried appending lists instead of strings, nArray.add( [] ), and the issue remained.
    Then I tried using Xtrema's strings, _a("myString"&i), and other non-Director-native values, and everything was OK: no leaks.
    And finally, I tried using xLists containing Director strings. And the leak occurred again.
    Based on the above, I'd say that the leak is caused by Director's failure to release the allocated memory of native values that require allocated buffers for storing their data.
    And now to the 'probably worse' part. When adding 'just' 20000 instead of 30000 strings, there was no leak. So I guessed that the problem occurred when a large, yet fixed, number of allocations was involved. But then I tried using a larger string ("textAAAAAAAAAA" & i), and there was the leak again.
    So, the leak depends not only on the number of unique allocations, but on the size of the allocated buffers as well.
    It seems that the issue is fixed in Dir 11. However, this bug, along with the also-fixed-in-11 legacy memory allocation issue (a pre-v11 Director Windows projector allocates approx. 10% of the physical RAM!!), is something that I strongly believe justifies a Dir 10.x update. I bet it won't happen, but, in my book, bug fixes = update, new features = upgrade/new version.
    "TJW-dev" <[email protected]> wrote in
    message
    news:[email protected]...
    > Hi,
    >
    > I am using MX2004 (10.1), I am finding that when I am
    working with large
    > lists, there seems to be a memory leak when the memory
    should be returned
    > when
    > the list is finished with. Projector is being run on
    Windows XP.
    >
    > This only seems to happen in a projector, it does not
    happen in authoring.
    > Please see the attached code for a test sample I made.
    > My actual application creates a list of property lists
    that is about 1500
    > evements x 10 properties, but the attached example has
    the same effect.
    >
    > The sample basically repopulates an array with 30000
    text elements, every
    > 5
    > seconds.
    >
    > If you make this into a projector, you will see (if the
    problem is not
    > limited
    > to the 5 machines I am using) that every 5 seconds, the
    memory in use will
    > increment, and never go down, despite the fact that the
    list is VOID at
    > the end
    > of each calculation.
    >
    > Has anybody got any explanation for this, a work around
    or can at least
    > replicate this so I know I am not going mad.
    > I have tested creating the projector on several machines
    in our office,
    > andon
    > several flavours of windows XP - all with the same
    result.
    >
    > Thanks for any ideas.
    >
    >
    >
    > global nArray
    >
    > on prepareMovie
    > nArray = []
    > updateTimer = timeout("restartTimer").new(5000,
    #popArray)
    >
    > end prepareMovie
    >
    > on popArray
    > nArray = []
    > repeat with i=1 to 30000
    > nArray.add("Text " & i)
    > end repeat
    > nArray = VOID
    > end popArray
    >

  • Large group mailout problem...

    I am trying to send a mail to a large group from Mail v2.0.5, but it will not send. I have tried various accounts, but it will not go from any. I constantly get a message saying that it cannot be sent with that server, after a very long wait.
    Can anyone help with this?
    Is there a maximum number of people that you can send a group message to?

    I had already tried that. My ISP says it has a limit of 200 recipients for any single mail; I have therefore changed one big group into blocks of 150 recipients per group.
    This still does not work! I have tried every account I own and still no joy; I have also tried every other fix I can find on this forum, but none seem to work.
    I have uncovered another, much longer thread than this one, with a few other people reporting similar problems, so this leads me to believe it is a Mail/OS X issue that needs resolving.
    Come on Apple, listen to your users here, this is ridiculous! No other mail app has this sort of problem!
    Any other workarounds that people may have would be gladly tried!
