Odd discovery regarding throughput to/thru TC

Here is a puzzle: I wonder if anyone can figure it out, and also whether the behavior can be replicated.
In a previous post I complained about the 300KB/s throughput I was getting when copying files to my TC (using disk sharing) or through the TC between two computers. I got this poor performance even with an N-only network or wired Ethernet connections, and without encryption.
So I went to the store and exchanged my TC for a new one, after replicating my tests for one of the Geniuses. The new TC seemed to perform better in another test (more on this below), so I left the store happy. (BTW, I love Apple customer service!)
I have now figured out that the new TC behaves just like the previous one! I now know what is happening, but it is a curious thing. I wonder if anyone here can figure it out exactly. Here is the deal:
Basically, the "file" I was using originally with the old TC for my tests was the DVD Player app from the Applications folder. I chose it just based on its intermediate size of around 50MB. (Yes, I know an app is actually a bundle that can contain many files, but that may or may not be the issue.)
However, when they gave me the new TC at the store we ran a different test, using the CandyBar download (also an app and a bundle, by the way), which is around 35MB. Transfer speeds were much better with that (around 1.5MB/s).
Back home I checked by copying the same DVD Player app onto the new TC... surprise, surprise, I got the same terrible ~300KB/s transfer rate I was getting originally. I then quickly copied over to the TC (using N only on 5GHz) a large folder with 300MB and about 1000 files. It transferred at rates fluctuating mostly in the range of 1-3MB/s. Much better again. So this confirmed that my new TC transfers the DVD Player app very slowly but other apps and other large folders at better speeds. I suspect the old one would have had the same behavior (but I only ever tested it by copying the DVD Player app).
I find this bizarre.
Does anyone have a theory about what is going on here? Why do some files transfer so much more slowly than others?
Also, can anyone replicate this behavior and confirm that transfer rates vary by an order of magnitude in this way?
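If anyone wants to check the bundle theory on their own machine, here is a minimal sketch (plain Java; the bundle path is a placeholder) that counts the files inside an .app bundle and reports the average file size. Comparing DVD Player.app against the CandyBar bundle should show whether the slow one is simply made up of far more tiny files, which would each pay their own per-file overhead over the network.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.concurrent.atomic.AtomicLong;
import java.util.stream.Stream;

public class BundleStats {
    public static void main(String[] args) throws IOException {
        // Placeholder path -- point this at whichever .app bundle you want to inspect.
        Path bundle = Paths.get("/Applications/DVD Player.app");
        AtomicLong fileCount = new AtomicLong();
        AtomicLong totalBytes = new AtomicLong();
        try (Stream<Path> walk = Files.walk(bundle)) {
            walk.filter(Files::isRegularFile).forEach(p -> {
                fileCount.incrementAndGet();
                try {
                    totalBytes.addAndGet(Files.size(p));
                } catch (IOException ignored) {
                    // unreadable file: skip it
                }
            });
        }
        double avgKb = fileCount.get() == 0 ? 0 : totalBytes.get() / 1024.0 / fileCount.get();
        System.out.printf("%s: %d files, %.1f MB total, %.1f KB per file on average%n",
                bundle, fileCount.get(), totalBytes.get() / 1e6, avgKb);
    }
}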

Hi,
use the program RSBDCLOG if you're on 4.6C.
Cheers,
Abdul Hakim

Similar Messages

  • Very odd behavior regarding variable naming with DataService.fill()

    I'm experiencing odd behavior in LCDS 2.5 which pertains
    to variable naming for the objects returned by a DataService.fill()
    call. I created a simple destination with a Java adapter. The server-
    side Java data object is defined as follows:
    package test;
    public class RepositoryObject {
        private String m_strObjectId;
        private boolean m_bIsValid;
        private long m_lSize;
        // [public setters and getters here]
    }
    On the Flex side, the ActionScript value object is defined as follows:
    package test
    {
        [Bindable]
        [RemoteClass(alias="test.RepositoryObject")]
        public class RepositoryObject
        {
            public var m_strObjectId:String;
            public var m_bIsValid:Boolean;
            public var m_lSize:Number;
            public function RepositoryObject() {}
        }
    }
    The FDS destination definition (in
    data-management-config.xml):
    <destination id="testDs">
        <adapter ref="java-dao" />
        <properties>
            <source>test.TestDS</source>
            <scope>application</scope>
            <metadata>
                <identity property="m_strObjectId"/>
            </metadata>
            <network>
                <session-timeout>20</session-timeout>
                <paging enabled="false" pageSize="10" />
                <throttle-inbound policy="ERROR" max-frequency="500"/>
                <throttle-outbound policy="REPLACE" max-frequency="500"/>
            </network>
            <server>
                <fill-method>
                    <name>getObjects</name>
                </fill-method>
            </server>
        </properties>
    </destination>
    What I figured out while debugging the client is that the
    data returned by the service (I'm using the mx:DataService call)
    consists of objects whose variable names do not match my definition:
    m_strObjectId becomes objectId
    m_lSize becomes size
    m_bIsValid becomes isValid
    Why the renaming? Basically the "m_str" prefix was
    stripped and the capital "O" became a lowercase "o" for my first
    variable, and so on.
    For instance, in a datagrid cell renderer I could not
    reference the returned value as {data.m_strObjectId}, but if I use
    {data.objectId}, it works.
    Any ideas about why this happens (I found the behavior confusing and odd)
    would be greatly appreciated.
    Robert

    The latter, as "getM_strObjectId" is an ugly name... don't
    tell me that causes the issue... OK, kidding, I got you. The
    setter/getter names are what matter in this context.
    Thanks a lot.
    Robert
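    For anyone hitting the same thing: the renaming is consistent with JavaBean introspection. AMF serialization between LCDS and Flex goes by the bean property names derived from the public getters/setters, not by the private field names, so a getter named getObjectId() produces a property called objectId no matter what the backing field is called. A small sketch (plain Java; the getter names here are an assumption inferred from the renaming described above) shows the property names the introspector derives:
    import java.beans.BeanInfo;
    import java.beans.Introspector;
    import java.beans.PropertyDescriptor;

    public class PropertyNameDemo {

        // Same shape as the posted class; the getter/setter names are an assumption
        // inferred from the renaming described in the post.
        public static class RepositoryObject {
            private String m_strObjectId;
            private boolean m_bIsValid;
            private long m_lSize;

            public String getObjectId() { return m_strObjectId; }
            public void setObjectId(String value) { m_strObjectId = value; }

            public boolean getIsValid() { return m_bIsValid; }
            public void setIsValid(boolean value) { m_bIsValid = value; }

            public long getSize() { return m_lSize; }
            public void setSize(long value) { m_lSize = value; }
        }

        public static void main(String[] args) throws Exception {
            BeanInfo info = Introspector.getBeanInfo(RepositoryObject.class, Object.class);
            for (PropertyDescriptor pd : info.getPropertyDescriptors()) {
                // Prints isValid, objectId and size -- the names the Flex client ends up
                // seeing, regardless of the m_* field names behind them.
                System.out.println(pd.getName());
            }
        }
    }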

  • Odd circumstances regarding an 'unexpected error' in Project Server 2010

    A couple of days ago, a client of mine started experiencing an odd error upon trying to save their project files in Project Professional 2010 while connected to Project Server 2010. They'd receive the following error:
    An unexpected error occurred during command execution.
    Try the following:
    Verify that all argument names and values are correct and are of the correct type.
    You may have run out of memory. To free up available memory, close programs, projects, or windows that you are not using.
    I am familiar with troubleshooting both of the cases as listed above in this error message and quickly determined that the client was not experiencing memory issues. I also could not find any instances where there were incorrect values in any project schedule
    fields.
    We eventually were able to replicate the issue by following these steps:
    A user creates a new project via PWA using an Enterprise Project Type that has been pre-defined to have a specific project template applied to it. Please note that this Enterprise Project Type did not create a workflow... only created a Project Details
    page, Project Information page, and Project Schedule based upon this template.
    This new project is given some basic Project level custom field information and set to the PM owner and then saved and published without any issues.
    Now, when the project schedule is opened immediately afterwards in Project Professional 2010, changes cannot be saved or published. Even the most basic change at this point, such as adding a new task line or renaming a task, will result in the error message
    given at the top of this post. The user is then forced to close the project file and check it back in. Note that this applies to anyone performing some or all of these steps.
    Note: If a project file was created from this same template through Project Professional instead of through PWA, it was fine and no errors would occur upon further editing.
    Because of the above, I decided to try and open the existing project templates used in the Enterprise Project Types that were being affected and simply re-save them with new names. I made no other changes to these templates than just giving them a new name.
    I then created new Enterprise Project Types that would then use these new templates. Other than this change, these Enterprise Project Types were exactly the same as the previous ones which caused errors.
    Upon trying the new Enterprise Project Types, the project schedules behaved normally and new projects created from them would not cause the error listed.
    This is extremely puzzling obviously and now my client is concerned that this is a symptom of a deeper problem in the database that we might not yet know about.
    Is this an issue that anyone here has experienced before, and if so, knows how to correct and prevent from occurring again in the future?
    Thank you.
    Daniel E Crabtree

    Daniel,
    Could the issue be due to corruption? Please provide us with information about the current server and client patch levels for Project. ULS log information would also be helpful for investigating this issue.
    Hrishi Deshpande Senior Consultant

  • Disturbing Discovery regarding MPEG-LA

    If there is a forum page for this subject please let me know.
    If you are not aware of the copyright agreement for using any MPEG-2 and/or H.264 codec, please read the article at this link.
    http://www.osnews.com/story/23236/Why_Our_Civilization_s_Video_Art_and_Culture_is_Threatened_by_the_MPEG-LA
    Questions:
    Is this article correct and accurate?
    If so, are big budget movies, which shoot with video, already paying MPEG-LA for their copyright to use MPEG-2 or H.264?
    What is going to happen to the little pro/consumer that is trying to make money shooting in MPEG-2 and H.264?

    Well, it could well be laughable, but then it could be legit. I do not know the author, or how much validity we should place in the article. However, considering the mood of companies like Sony, Microsoft and others, I would not be surprised. Sony has taken many steps to curtail the individual production of BD. MS pulled playback of burned DVDs (BDs too?) from WMP, though some indicate that they have relented and added that capability back. I still have an older version of WMP on my machines, so cannot comment. I can see the need for protection from piracy, but by a monster margin, the biggest source of pirated titles is duplicate glass masters from replication houses in Asia. Millions of pirated titles come from these, and are often for sale on the streets well before the first shipment of the legit discs ever hits US or European shores. Burned discs are probably a minuscule part of the equation.
    I tend to take articles such as this with a grain of salt, but I do try to pay attention to them. Just because I have never heard this before does not mean the claims aren't valid. Of course, I am a "conspiracy theorist," so am not above imagining how this could be totally legit. I still recall when CompuServe was planning on going after everyone who was using GIF. Other similar attempts have been made over the decades.
    With the proliferation of various intellectual property sharing sites, I can see where various organizations would go after anyone breaching ©. Remember when the RIAA went after all retailers who had a radio playing in their stores as background music? They wanted royalties for every song played.
    Unfortunately for me, the last media law class I took was in the '70s, and was long before the big © code revisions, and way before electronic media.
    Just some thoughts,
    Hunt
    PS - A Google search for "MPEG-LA" has plenty of hits, regarding licensing.
    Message was edited by: Bill Hunt - Added PS

  • Odd discovery on my iMac 20" w/ iSight

    I've taken apart my 20" G5 w/iSight (initially to swap out a hard drive); I know, the warranty is shot. I discovered that there is a power reset button on the backside of the logic board. I suppose, since mine has power issues (and is out of warranty), I could drill an access hole so I don't have to take it apart to do a hard reset?
    Thanks.
    JB

    Unfortunately, the iSight models are _*not user repairable*_. Except for RAM installation, you must take the computer to your local AASP for all repairs and/or installation of parts.
    MGW may have some posted links for HD installation for the iSight models. Search these fora if she does not respond to your posting.

  • Odd question regarding files on a mac-formatted external drive...

    I have recently purchased a Western Digital external HDD (1TB) and have set it up with my Mac as a Time Machine backup.
    I have also, however, moved a bunch of files from my MacBook onto it, including family movies, DVDs I've ripped, a lot of my music collection, work files and some games.
    This has worked brilliantly thus far.
    BUT! I have my MBP Boot Camped for some boring programs for work etc., but I want to be able to listen to music/play games/access files through the WD HDD... This may be obvious to many people, but it wasn't immediately obvious to me: because the external drive is formatted for Mac, as well as 'Time Machine-d', it refuses to be recognised by the PC...
    Thus, my question is:
    Is there a way to move files from Mac to PC (Boot Camped) through an external drive that is formatted for the Mac?
    If not, is there a way to move files from Mac to PC through the Boot Camp disc on the Macintosh desktop? I understand that files smaller than 4 gigs can be moved across NTFS (?), but some of the files for my work stretch as far as 11 gigs...
    Any help with this would be greatly appreciated! I've been looking around the net for the past week... It's time to be proactive...
    If any more info is required, please let me know.
    Ollie.

    Hi Ollie,
    with the Boot Camp 3.0 drivers Apple introduced read access from Boot Camp Windows to Mac OS X partitions, but this seems to work only with internal hard disks, not with external ones.
    MacDrive is the choice to get full read and write access to Mac OS X partitions/hard disks while in Windows http://www.mediafour.com/products/macdrive/
    For simple read-only access, HFSExplorer can be used http://hem.bredband.net/catacombae/index2.html
    The 4GB size limitation applies only to the FAT32 file system, not to NTFS, but Mac OS X itself can only read from NTFS partitions, not write to them, without an additional helper like NTFS-3G http://macntfs-3g.blogspot.com/
    So, with your scenario (external HD = Mac OS file system), it seems that MacDrive would be the way to go.
    Hope it helps
    Stefan

  • Have a rather odd question regarding isight...

    I currently host an adult webcam chat room and would like to know if I can use my built-in iSight cam. I host on a popular cam website and use something called "easycam". I have a PC as well and can host from there, but I would much rather use my brand new iMac, since the quality of the camera is fantastic and the microphone is wonderful!
    I am a first-time Mac owner and am not familiar with the ins and outs of any of these programs, and am still trying to figure out the basics of my new computer. I have learned that I cannot use the built-in iSight for much unless it's through iChat or AIM (neither of which I use).

    Welcome to Apple Discussions, ifoundloveatcommutel
    You may need to configure your "Flash" settings as described here:
      http://discussions.apple.com/thread.jspa?messageID=4761707&#4761707
    You can make the settings in any Flash-based video web site. Substitute your site name for Stickam if you like.
    EZ Jim
    PowerBook 1.67 GHz w/Mac OS X (10.4.11) G5 DP 1.8 w/Mac OS X (10.5.2)  External iSight

  • ODD problem regarding certain clips at 0% speed.?

    I have recently updated to Final Cut Pro 6 (and now 6.0.2), and I opened a project that I created with Final Cut Pro 5. I'm having a really weird problem where only certain clips are changed to have a speed of 0% in the timeline. I think most of these are shorter clips that may have had their speed adjusted before, but not to 0%!
    Because they are at 0%, only one frame is showing up. If I downgrade to Final Cut Pro 5, it will open and play correctly, but I do not want to do that because I have new projects created with fcp6.
    I have a power mac g5 dual 1.8 ghz running leopard 10.5.1
    Any ideas or help would be appreciated! Thanks!

    This is great, although my question is not at all about backwards compatibility, and more about "forwards" compatibility.
    My project file that was created with fcp5 is not playing back correctly in fcp6. It is opening, but certain clips are being affected by having their speed changed to 0%, so I only am seeing one frame during playback...
    Any ideas?

  • Doubt regarding SQL execution

    Hi Friends,
    I am using Oracle 10g DB - 10.2.0.3.0
    I have some basic doubts regarding SQL query execution by Oracle.
    Say I am executing a query from a Toad/SQL*Plus session: the first time it takes 10 seconds, then 1 second, and so on.
    The same thing happens again after about 15 minutes. (Any specific reason for this?)
    It takes more time on the first execution because of parsing and related work, but from then on it is picked up from the
    shared pool, right? How long will it stay in the shared pool? Does Oracle maintain any specific time period after which it clears that query from shared pool memory? How does Oracle handle this?
    Another thing: say I have a report query that I run monthly. What will the execution time be each month? Will Oracle parse this query every time I run it? How do I improve performance in this situation? (May sound odd :))
    Regards,
    Marlon

    Regarding the first question (why the same query takes 10 seconds the first time and about 1 second afterwards): the shared pool caches the SQL statement, so when you execute the same SQL a second time it only needs a soft parse. But that is not the main reason the query runs faster the second time; the time difference between a soft parse and a hard parse is minimal, so it only really matters if you execute the same query a very large number of times.
    What really matters is the data buffer cache. The rows selected by your query are cached in the buffer cache in the SGA, so the next time you run the same query the I/O is reduced: the data is already in memory and does not have to be read from disk.
    The data in the buffer cache is not persistent, though. It is aged out on a least-recently-used (LRU) basis: when the buffer cache is full, the least recently used blocks are removed to make room.
    Regarding the monthly report query: like the buffer cache, the shared pool is also managed on an LRU basis. So if the query is still in the shared pool it will be soft parsed, otherwise it will be hard parsed. But it is very rare for a query to stay in the shared pool for a month.
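    A rough way to see this effect for yourself is to time the same statement twice from a JDBC client: the first run pays for the hard parse and the physical reads, while the second is typically served from the shared pool and the buffer cache. A minimal sketch (the connection details below are placeholders, and the Oracle JDBC driver needs to be on the classpath):
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class RepeatQueryTiming {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details -- substitute your own host, SID and credentials.
            try (Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@dbhost:1521:ORCL", "scott", "tiger")) {
                String sql = "SELECT COUNT(*) FROM all_objects";  // any reasonably heavy query
                for (int run = 1; run <= 2; run++) {
                    long start = System.nanoTime();
                    try (PreparedStatement ps = con.prepareStatement(sql);
                         ResultSet rs = ps.executeQuery()) {
                        rs.next();
                    }
                    // Run 1 pays for the hard parse and the physical reads; run 2 is usually
                    // much faster because the cursor is in the shared pool and the blocks
                    // are in the buffer cache.
                    System.out.printf("run %d: %.2f s%n", run, (System.nanoTime() - start) / 1e9);
                }
            }
        }
    }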

  • Identification of date Even or odd

    Hi All,
    Could you please tell me how I can identify whether a given date is even or odd?
    Please let me know if there is a function module or logic for this.
    Thanks in Advance.
    Sudhakar

    Hi,
    I don't think there is an FM for this.
    You can use the following logic to find odd/even values. Put the date in one variable and use:
    data:
          w_date type d value '20240101',  " the date to check (placeholder value)
          w_val  type i,
          w_rem  type i.
    " Assumption: "even or odd" refers to the day of the month (offset 6, length 2 in YYYYMMDD).
    w_val = w_date+6(2).
    w_rem = w_val mod 2.
    if w_rem = 0.
      write:/ 'Even'.
    else.
      write:/ 'Odd'.
    endif.
    regards
    twinkal

  • Question regarding DocumentDB RU consumption when inserting documents & write performance

    Hi guys,
    I do have some questions regarding the DocumentDB Public Preview capacity and performance quotas:
    My use case is the following:
    I need to store about 200,000,000 documents per day with a maximum of about 5000 inserts per second. Each document has a size of about 200 bytes.
    According to the documentation (http://azure.microsoft.com/en-us/documentation/articles/documentdb-manage/) I understand that I should be able to store about 500 documents per second with single inserts and about 1000 per second with a batch insert using a stored procedure. This would result in the need of at least 5 CUs just to handle the inserts.
    Since one CU consists of 2000 RUs, I would expect the RU usage to be about 4 RUs per single document insert or 100 RUs for a single SP execution with 50 documents.
    When I look at the actual RU consumption I get values I don't really understand:
    Batch insert of 50 documents: about 770 RUs
    Single insert: about 17 RUs
    Example document:
    {"id":"5ac00fa102634297ac7ae897207980ce","Type":0,"h":"13F40E809EF7E64A8B7A164E67657C1940464723","aid":4655,"pid":203506,"sf":202641580,"sfx":5662192,"t":"2014-10-22T02:10:34+02:00","qg":3}
    The consistency level is set to “Session”.
    I am using the SP from the example C# project for batch inserts and the following code snippet for single inserts:
    await client.CreateDocumentAsync(documentCollection.DocumentsLink, record);
    Is there any flaw in my assumption (OK… obviously) regarding the throughput calculation, or could you give me some advice on how to achieve the throughput stated in the documentation?
    With the current performance I would need to buy at least 40 CUs, which wouldn't be an option at all.
    I have another question regarding document retention:
    Since I would need to store a lot of data per day, I also would need to delete as much data per day as I insert.
    The data is valid for at least 7 days (it actually should be 30 days, depending on my options with DocumentDB).
    I guess there is nothing like a retention policy for documents (this document is valid for X days and will automatically be deleted after that period)?
    Since I guess deleting data on a single-document basis is not an option at all, I would like to create a document collection per day and delete the collection after a specified retention period.
    Those historic collections would never change and would only receive queries. The only problem I see with creating collections per day is the missing throughput:
    As I understand it, the throughput is split equally across the available collections, which would result in "missing" throughput on the actual hot collection (hot meaning the only collection I would actually insert documents into).
    Is there any (better) way to handle this use case than to buy enough CUs so that the actual hot collection gets the needed throughput?
    Example:
    1 CU -> 2000 RUs
    7 collections -> 2000 / 7 = 286 RUs per collection (per CU)
    Needed throughput for the hot collection (values from documentation): 20,000 RUs
    => 70 CUs (20,000 / 286)
    vs. 10 CUs when using one collection and batch inserts, or 20 CUs when using one collection and single inserts.
    I know that DocumentDB is currently in preview and that it is not possible to handle this use case as-is because of the current limit of 10 GB per collection. I am just trying to do a POC so I can switch to DocumentDB when it is publicly available.
    Could you give me any advice on whether this kind of use case can or should be handled with DocumentDB? I currently use Table Storage for this case (currently with a maximum of about 2500 inserts per second) but would like to switch to DocumentDB, since I had to optimize for writes per second with Table Storage and I get horrible query execution times with it because of full table scans.
    Once again my desired setup:
    200,000,000 inserts per day / maximum of 5000 writes per second
    Collection 1.2 -> Hot Collection: All writes (max 5000 p/s) will go to this collection. Will also be queried.
    Collection 2.2 -> Historic data, will only be queried; no inserts
    Collection 3.2 -> Historic data, will only be queried; no inserts
    Collection 4.2 -> Historic data, will only be queried; no inserts
    Collection 5.2 -> Historic data, will only be queried; no inserts
    Collection 6.2 -> Historic data, will only be queried; no inserts
    Collection 7.2 -> Historic data, will only be queried; no inserts
    Collection 1.1 -> Old, so delete whole collection
    As a matter of fact, the perfect setup would be to have only one (huge) collection with automatic document retention… but I guess this won't be an option at all?
    I hope you understand my problem and can give me some advice on whether this is possible at all, or will be possible in the future with DocumentDB.
    Best regards and thanks for your help
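    For reference, the capacity arithmetic above can be written out as a small helper. This is just a sketch in plain Java using the RU figures quoted in this post, not anything DocumentDB-specific:
    public class CapacityMath {
        // 1 CU = 2000 RU/s, per the preview documentation cited in the post.
        static final double RUS_PER_CU = 2000.0;

        // CUs needed for a given per-operation RU cost and rate, when throughput is
        // split evenly across N collections and only the hot collection takes the writes.
        static int capacityUnitsNeeded(double ruPerOperation, double operationsPerSecond,
                                       int collectionsSharingThroughput) {
            // Scale up by the collection count because each CU's RUs are divided evenly
            // across all collections, so only 1/N of each CU lands on the hot collection.
            double ruPerSecond = ruPerOperation * operationsPerSecond * collectionsSharingThroughput;
            return (int) Math.ceil(ruPerSecond / RUS_PER_CU);
        }

        public static void main(String[] args) {
            // Observed: ~17 RU per single insert at 5000 inserts/s, one collection -> ~43 CUs.
            System.out.println(capacityUnitsNeeded(17, 5000, 1));
            // Observed: ~770 RU per 50-document batch -> 100 batches/s for 5000 docs/s -> ~39 CUs.
            System.out.println(capacityUnitsNeeded(770, 100, 1));
            // Documentation figure of ~4 RU per insert, with 7 collections sharing throughput -> 70 CUs.
            System.out.println(capacityUnitsNeeded(4, 5000, 7));
        }
    }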

    Hi Aravind,
    first of all, thanks for your reply regarding my questions.
    I sent you a mail a few days ago, but since I did not receive a response I am not sure it got through.
    My main question regarding the actual usage of RUs when inserting documents is still my main concern, since I cannot insert nearly as many documents per second and CU as expected.
    According to the documentation (http://azure.microsoft.com/en-us/documentation/articles/documentdb-manage/) I understand that I should be able to store about 500 documents per second with single inserts and about 1000 per second with a batch insert using a stored procedure (20 batches per second containing 50 documents each).
    As described in my post, the actual usage is multiple (actually 6-7) times higher than expected… even when running the C# examples provided at:
    https://code.msdn.microsoft.com/windowsazure/Azure-DocumentDB-NET-Code-6b3da8af/view/SourceCode
    I tried all the ideas Steve posted (manual indexing & lazy indexing mode) but was not able to reduce RU consumption to a point where 500 inserts per second were anywhere near possible.
    Here again my findings regarding RU consumption for batch inserts:
    Automatic indexing on: 777 RUs for 50 documents
    Automatic indexing off & mandatory path only: 655 RUs for 50 documents
    Automatic indexing off & IndexingMode Lazy & mandatory path only: 645 RUs for 50 documents
    Expected result: approximately 100 RUs (2000 RUs => 20x batch insert of 50 => 100 RUs per batch)
    Since DocumentDB is still in preview, I understand that it is not yet capable of handling my use case with regard to throughput, collection size, number of collections and available CUs, and I am fine with that.
    If I am able to (at least nearly) reach the stated performance of 500 inserts per second per CU, I am totally fine for now. If not, I have to move on and look for other options… which would also be "fine". ;-)
    Is there actually any working example code that manages to do 500 single inserts per second with one CU's 2000 RUs, or is this a totally theoretical value? Or is it just because this is a preview and the stated values are planned to work later?
    Regarding your feedback:
    ...another thing to consider is if you can amortize the request rate over the average of 200 M requests/day = 2000 requests/second, then you'll need to provision 16 capacity units instead of 40 capacity units. You can do this by catching "RequestRateTooLargeExceptions" and retrying after the server-specified retry interval…
    Sadly this is not possible for me, because I have to query the data in near real time for my use case… so queuing is not an option.
    We don't support a way to distribute throughput differently across hot and cold
    collections. We are evaluating a few solutions to enable this scenario, so please do propose as a feature at http://feedback.azure.com/forums/263030-documentdb as this helps us prioritize
    feature work. Currently, the best way to achieve this is to create multiple collections for hot data, and shard across them, so that you get more proportionate throughput allocated to it. 
    I guess I could circumvent this by clustering not in "hot" and "cold" collections but in "hot" and "cold" databases, each with one or multiple collections (if 10GB remains the limit per collection), if there were a way to (automatically?) scale the CUs via an API. Otherwise I would have to manually scale down the DBs holding the historic data. I also added a feature request as you proposed.
    Sorry for the long post, but I am planning the future architecture for one of our core systems and want to be sure I am on the right track.
    So if you are able to answer just one question, it would be this:
    How can the stated throughput of 500 single inserts per second with one CU's 2000 RUs be achieved in reality? ;-)
    Best regards and thanks again

  • Loading data thru SQL

    I am working on our very first ESSBASE database, and we are currently loading data for the previous month. Our measures dimension has around 18 children right now. When we loaded the data, we noticed that the numbers were a bit off, even though the load was 'successful'. When we tried to load June data one measure at a time, the numbers for some measures were corrected. Still, some of the measures were off. Does anyone know of any limitations regarding loading data thru SQL? Thank you for your help!

    'Successful' only means that the SQL source was accessed without generating any SQL errors. You may still have had dataload errors (rows which Essbase was unable to load into the database). Check the dataload error file (default location in the %Arborpath%\client directory).
    David

  • Question regarding Hard Drive Disk Space

    I have a MacBook; I've only had it about 6 months (a new Apple convert).... I upgraded to a 250 GB hard drive and keep my iTunes library on it. I noticed a performance change recently and discovered I was almost out of disk space. I have managed to free up approx. 7 GB, but I continue to get the rotating beach ball icon much more often than ever before. I'm also noticing odd behaviors regarding my video... for example, if I click my desktop and do a click/drag, everything suddenly gets outlined and highlighted. This also occurs randomly on websites when I select something.
    I thought perhaps the video issue could be caused by apps I've downloaded, but the only thing I've added recently was a program called iSale ( for eBay listings ) and a few iPod Touch applications from the Apps store.
    Is 7 GB of free disk space insufficient, or would freeing up more space help ??
    Thanks !
    Chuck

    7GB is way too low for a drive that size. You should have at least 25% of drive space free. You are undoubtedly suffering from disk fragmentation, which can cause all sorts of issues. Also, you should check the integrity of your file system, as it can easily be damaged by low disk space: run "Verify Disk" in Disk Utility and, if it reports any errors, start from the install DVD and repair the drive (you can't do it while booted from the same drive). Also, you need to free up some space and defrag the drive. One easy way to defrag is to clone your drive to an external, then boot from the external and clone it back. Ultimately you need either a bigger internal drive or an external one to keep a lot of your data.

  • Archive file with errors in sender file adapter not working! please help!

    Hi Experts,
    I have a file-to-RFC scenario. The input is an XML file. I have set up the flag in the sender file adapter channel for archiving input files with errors, but it is not working.
    For testing I have used an invalid XML file, for example one without the main XML tag. I have also tested with an MS Word file saved with a .xml extension. But in both cases the files are not getting archived.
    My archive location permissions are fine, and in fact the normal archive operation is working. That is, if I select the processing mode "Archive" and give the archive directory, then the files are archived. The problem is only with the "Archive faulty source files" option.
    What am I missing? Do I need to do some more configuration?
    What are the prerequisites, if any, for this option?
    How can I test this?
    Please help me! I will be grateful to you all!
    Thanks & Regards
    Gopal

    and go through these links:
    Creating a Single Archive of the Version Files
    http://help.sap.com/saphelp_nw04/helpdata/en/79/1e7aecc315004fb4966d1548447675/frameset.htm
    http://help.sap.com/saphelp_nw04/helpdata/en/31/8aed3ea86d3d67e10000000a114084/frameset.htm
    Note: reward points if the solution is found helpful
    Regards
    Chandrakanth.k

  • Need to display column headers dynamically in ALV Grid

    Hello,
    I need to display column headers dynamically in an ALV grid display, together with their corresponding values.
    The column headers should be picked from a field in the final internal table, and the corresponding values also need to be picked from the same table.
    T_final... Suppose field STCTS - (to pick the column headers)
                                      CCNGN - (to pick the appropriate value for that column)
    Can anybody explain how I can pass these values to the ALV grid using
    CALL METHOD CL_ALV_TABLE_CREATE=>CREATE_DYNAMIC_TABLE
      EXPORTING
        IT_FIELDCATALOG           = Y_I_FCAT
      IMPORTING
        EP_TABLE                  = DY_TABLE.
    IF SY-SUBRC <> 0.
    MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
               WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
    ENDIF.
    Any suggestions will be appreciated....
    Regards,
    Kittu

    Hi,
    Go through this link, and the code from Mr. Dev Parbutteea:
    Re: Probelm with Using Field Symbol in FM
    thanks
    Mahesh

Maybe you are looking for

  • Adobe Reader XI can't open MP3 files in a PDF document - says the required decoder is not installed

    Adobe Reader XI can't open MP3 file links contained in a PDF - it says the required decoder is not installed. Please help!

  • Launch CRM 7.0 from SAP Solution Manager

    We are building the BPH in SAP Solution Manager and would like to store the CRM business processes in it as well. Therefore we'll also store the TCodes under the "Transaction" tab. Now, I understand that CRM 7.0 uses a web GUI for configuring TCodes.

  • Search help in Modulepool

    Hi All, I have a screen in Module pool which has 3 input fields, each input field is attached to the same search help. These fields are from ztable . I have created a search help USING these 3 fields. All the 3 fields are marked for IMPORT and EXPORT

  • Connecting to internet after installing snow leopard software

    I just installed the new Snow Leopard Mac OS X v10.6 software, and after my computer restarted it wouldn't connect to my Wi-Fi like it usually does. I'm able to get the connection and it shows full bars, but my webpage says I am not connected

  • Java import problem

    Hi, we are writing a class in package util, which needs to import a class from the dataAccess package. Our code: import dataAccess.*; import dataAccess.Abc; Error message: C:\jakarta-tomcat-4.1.24\webapps\oppTrack\WEB-INF\classes\util\ImportToExcel.java:93: c