Small NCP packets for large file read

I am running the latest 4.91 SP4 client and am getting really slow
performance through an application called bimap, which reads and processes
large map files. When these files are on a NetWare server (OES or 6.5 NSS
volume), Ethereal shows that NCP read requests are for only 512 bytes, and
then an NCP read OK is returned with 512 bytes' worth of data. It takes
the program several minutes to read and process the file. When the same
file sits on a Microsoft server, Ethereal shows bursts of TCP packets all
carrying 1500 bytes of data, resulting in a much faster load.
LIP is turned on and seems to work fine when copying files around the NetWare
server; I see the bursts of 1500 bytes of data, but it doesn't seem to work
when using this bimap application. Any ideas?

Hi,
Sasha wrote:
>
> They're not on the same segments, and the routers are all correct according to
> our Networks team. Ethereal definitely shows the NCP read requests as
> 512 bytes. Other applications, such as Word opening large files, are OK from the
> same PC; Ethereal shows NCP requests of 4096. Once again, using the bimap
> app and reading the same files stored on a Microsoft server works OK as
> well. My understanding of LIP is that the large packet size will try to
> be negotiated; if that fails, it reverts to its default of 512. I assume it
> is failing and I don't know why?
It sounds like the application is deliberately reading the file in
512-byte fragments. The fact that you see bigger reads on Windows is likely
because oplocking (caching) is enabled on Windows but isn't on NetWare.
CU,
Massimo Rosen
Novell Product Support Forum Sysop
No emails please!
http://www.cfc-it.de
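
To make Massimo's point concrete: if the application itself asks for the file
512 bytes at a time, every one of those calls becomes a separate 512-byte NCP
read when there is no client-side caching to coalesce them, and LIP has nothing
to negotiate. Below is a minimal sketch of the two read patterns in plain Java
(illustrative only; the bimap source is not available and the buffer sizes here
are assumptions, not taken from the application):

    // Illustrative only, not bimap code: the size of the buffer passed to
    // read() is what ends up on the wire as the NCP read request size when
    // the client cannot cache or coalesce the reads.
    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.InputStream;

    public class ReadPattern {
        public static void main(String[] args) throws IOException {
            byte[] small = new byte[512];    // one round trip per 512 bytes (slow over the network)
            byte[] large = new byte[4096];   // eight times fewer round trips for the same data

            try (InputStream in = new FileInputStream(args[0])) {
                while (in.read(small) != -1) { /* process 512 bytes at a time */ }
            }
            try (InputStream in = new FileInputStream(args[0])) {
                while (in.read(large) != -1) { /* process 4 KB at a time */ }
            }
        }
    }

If bimap really does issue 512-byte reads, the fix lies in the application (or
in enabling client-side caching), not in LIP, which only raises the maximum
packet size the client and server are allowed to use.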

Similar Messages

  • Faster alternative to cfcontent for large file downloads?

    I am using cfcontent to securely download files so the user
    cannot see the path where the file is stored. With small files this
    is fine, but with large 100 MB+ files it is much, much slower than
    a straight HTML anchor tag. Does anyone have a fast alternative to
    using cfheader/cfcontent for large files?
    Thanks

    You should be able to use Java to handle this, either through
    a custom tag, or you might be able to call the Java classes directly
    in ColdFusion. I don't know much about Java, but I found this
    example on the Web and got it to work for uploading files. Here's
    an example of the code. Hope it gives you some direction:
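
    The code sample itself did not survive in this archive. Purely as an
    illustration of the streaming idea (this is not the original poster's code;
    the class, method and header names below are assumptions), the Java side of
    such an approach, handed an HttpServletResponse by the surrounding
    ColdFusion page or custom tag, could look roughly like this:

        // Sketch only: stream a file to the browser in fixed-size chunks
        // instead of buffering the whole thing the way cfcontent does.
        import java.io.*;
        import javax.servlet.http.HttpServletResponse;

        public class FileStreamer {
            public static void stream(File file, HttpServletResponse response) throws IOException {
                response.setContentType("application/octet-stream");
                response.setHeader("Content-Disposition",
                        "attachment; filename=\"" + file.getName() + "\"");
                response.setHeader("Content-Length", String.valueOf(file.length()));

                byte[] buffer = new byte[64 * 1024];          // 64 KB chunks
                try (InputStream in = new FileInputStream(file);
                     OutputStream out = response.getOutputStream()) {
                    int read;
                    while ((read = in.read(buffer)) != -1) {  // copy until EOF
                        out.write(buffer, 0, read);
                    }
                    out.flush();
                }
            }
        }

    Because the file never has to fit in memory in one piece, the download cost
    stays close to that of a plain anchor tag while the real path stays hidden.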

  • Need PDF Information for large files

    Hi experts,
    I need some information from you regarding PDFs.
    Is there a newer Adobe version that supports large files (over 15 MB, over 30 pages)?
    Is batch processing possible with PDF?
    If not, please suggest possible Adobe replacements.
    Thanks
    Kishore

    Thanks so far.
    acroread renders the pages very fast. That's what I want - but it's proprietary :/
    For the moment it's OK, but I'd like a free alternative to acroread that shows the pages as quickly as acroread or preloads more than one page.
    lucke wrote: Perhaps, just perhaps it'd work better if you copied it to tmpfs and read from there?
    I've tried it: no improvement.
    Edit: tried Sumatra: an improvement compared to Okular etc. Thanks.
    Last edited by oneway (2009-05-12 21:50:10)

  • Thumbnails for large files

    OK... so now I've learned that for my large panorama files I won't get a thumbnail in the Organizer, just some blue triangle warning with an exclamation point. I do a lot of golf course work where I stitch photos and print 38 x 11 inches @ 300 dpi.
    So what's the workaround? Is this a problem in the big Photoshop program? Sure I can... copy the file... reduce the resolution... save it with a different name... etc.
    What the heck! You can't tell me that the program writers at Adobe can't make this work for large files! Why the limit? By the way... what is the limit?
    Thanks

    By the way...what is the limit?
    http://kb2.adobe.com/cps/402/kb402760.html
    Juergen

  • File Splitting for Large File processing in XI using EOIO QoS.

    Hi
    I am currently working on a scenario to split a large file (700 MB) using the sender file adapter's "Recordset Structure" property (e.g., Row,5000). As the files are split and mapped, they are appended to a destination file. In an example scenario, a 700 MB file comes in (say with 20000 records) and the destination file should end up with 20000 records.
    To ensure no records are missed while passing through XI, the EOIO QoS is used. A trigger record is appended to the incoming file (the trigger record structure is the same as the main payload recordset) by a UNIX shell script before it is read by the sender file adapter.
    XPATH conditions are evaluated in the receiver determination to either append the record to the main destination file or create a trigger file containing only the trigger record.
    The problem we are facing is that the "Recordset Structure" (e.g., Row,5000) splits the payload into chunks of 5000, and when the remaining records of the main payload are fewer than 5000 (say 1300), those remaining 1300 lines get grouped with the trigger record and written to the trigger file instead of the actual destination file.
    For the sake of this forum I have listed a sample scenario XML file below, representing the inbound file, with the last record (Duns = "9999") as the trigger record that marks the end of the file after splitting and appending.
    <?xml version="1.0" encoding="utf-8"?>
    <ns:File xmlns:ns="somenamespace">
    <Data>
         <Row>
              <Duns>"001001924"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Cage_Code>"3NQN1"</Cage_Code>
              <Extract_Code>"A"</Extract_Code>
         </Row>
         <Row>
              <Duns>"001001925"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Cage_Code>"3NQN1"</Cage_Code>
              <Extract_Code>"A"</Extract_Code>
         </Row>
         <Row>
              <Duns>"001001926"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Cage_Code>"3NQN1"</Cage_Code>
              <Extract_Code>"A"</Extract_Code>
         </Row>
         <Row>
              <Duns>"001001927"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Cage_Code>"3NQN1"</Cage_Code>
              <Extract_Code>"A"</Extract_Code>
         </Row>
         <Row>
              <Duns>"001001928"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Cage_Code>"3NQN1"</Cage_Code>
              <Extract_Code>"A"</Extract_Code>
         </Row>
         <Row>
              <Duns>"001001929"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Cage_Code>"3NQN1"</Cage_Code>
              <Extract_Code>"A"</Extract_Code>
         </Row>
         <Row>
              <Duns>"9999"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Cage_Code>"3NQN1"</Cage_Code>
              <Extract_Code>"A"</Extract_Code>
         </Row>
    </Data>
    </ns:File>
    In the sender file adapter I have, for test purposes, changed the "Recordset Structure" to "Row,5" for the sample inbound XML file above.
    I have two XPATH expressions in the receiver determination to take the last recordset (the one with Duns = "9999") and send it to the receiver (communication channel) that creates the trigger file.
    In my test case the first 5 records get appended to the correct destination file, but the last two records (the 6th and 7th) get sent to the receiver channel that is only supposed to take the trigger record (the last record with Duns = "9999").
    Destination file (this is where all the records with Duns NE "9999" are supposed to get appended):
    <?xml version="1.0" encoding="UTF-8"?>
    <R3File>
         <R3Row>
              <Duns>"001001924"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Extract_Code>"A"</Extract_Code>
         </R3Row>
         <R3Row>
              <Duns>"001001925"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Extract_Code>"A"</Extract_Code>
         </R3Row>
         <R3Row>
              <Duns>"001001926"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Extract_Code>"A"</xtract_Code>
         </R3Row>
              <R3Row>
              <Duns>"001001927"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Extract_Code>"A"</Extract_Code>
         </R3Row>
              <R3Row>
              <Duns>"001001928"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Extract_Code>"A"</Extract_Code>
         </R3Row>
    </R3File>
    Trigger File:
    <?xml version="1.0" encoding="UTF-8"?>
    <R3File>
          <R3Row>
               <Duns>"001001929"</Duns>
               <Duns_Plus_4>""</Duns_Plus_4>
               <Ccr_Extract_Code>"A"</Ccr_Extract_Code>
          </R3Row>
          <R3Row>
               <Duns>"9999"</Duns>
               <Duns_Plus_4>""</Duns_Plus_4>
               <Ccr_Extract_Code>"A"</Ccr_Extract_Code>
          </R3Row>
    </R3File>
    I've tested the XPATH condition in XML Spy and it works fine. My doubts are about the "Recordset Structure" property set to "Row,5".
    Any suggestions on this will be very helpful.
    Thanks,
    Mujtaba
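
    For reference, the grouping described above is exactly what plain chunked
    splitting produces: with "Row,5" and seven records in the sample file, the
    second chunk holds records 6 and 7, i.e. the last data record plus the
    trigger record, and any routing condition that matches Duns = "9999" sees
    that whole chunk. A small standalone sketch of the arithmetic (plain Java,
    not XI adapter code; the record values are taken from the sample above):

        // Rough illustration of how a 7-record payload splits with chunk size 5.
        import java.util.Arrays;
        import java.util.List;

        public class ChunkSketch {
            public static void main(String[] args) {
                List<String> duns = Arrays.asList(
                        "001001924", "001001925", "001001926",
                        "001001927", "001001928", "001001929",
                        "9999");                                  // trigger record appended last
                int chunkSize = 5;                                // Recordset Structure "Row,5"
                for (int start = 0; start < duns.size(); start += chunkSize) {
                    List<String> chunk =
                            duns.subList(start, Math.min(start + chunkSize, duns.size()));
                    // The receiver determination evaluates its XPATH against each
                    // chunk as a whole, so the leftover record rides along with
                    // the trigger record in the second message.
                    System.out.println("Message " + (start / chunkSize + 1) + ": " + chunk);
                }
            }
        }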

    Hi Debnilay,
    We do have a 64-bit architecture and we still have the file processing problem. Currently we are splitting the file into smaller chunks and processing them, but we want to process the file as a whole.
    Thanks
    Steve

  • Are the brushes in Photoshop CC faster than CS6 - still need to use CS5 for large files

    Hey,
    Are the brushes in Photoshop CC any faster than in Photoshop CS6?
    Here's my standard large file, which makes the CS6 brushes crawl:
    iPad 3 size - 2048 x 1536
    About 20-100 layers
    A combination of vector and bitmap layers
    Many of the layers use layer styles
    On a file like this there is a hesitation to every brush stroke in CS6. Even a basic round brush has the same hesitation; it doesn't have to be a brush as elaborate as the mixer brush.
    This hesitation happens on both the Mac and the PC, on systems with 16 GB of RAM. Many of my coworkers have the same issue.
    So, for a complicated file, such as a map with many parts, I ask my coworkers to please work in CS5. If they work in CS6 I ask them to not use any CS6 only features, such as group layer styles. The only reason why one of them might want to use CS6 is because they're working on only a small portion of the map, such as a building. The rest of the layers are flattened in their file.
    Just wondering if there has ever been a resolution to this problem... or if this is just the way it is.
    Thanks for your help!

    BOILERPLATE TEXT:
    Note that this is boilerplate text.
    If you give complete and detailed information about your setup and the issue at hand,
    such as your platform (Mac or Win),
    exact versions of your OS, of Photoshop (not just "CS6", but something like CS6v.13.0.6) and of Bridge,
    your settings in Photoshop > Preference > Performance
    the type of file you were working on,
    machine specs, such as total installed RAM, scratch file HDs, total available HD space, video card specs, including total VRAM installed,
    what troubleshooting steps you have taken so far,
    what error message(s) you receive,
    if having issues opening raw files also the exact camera make and model that generated them,
    if you're having printing issues, indicate the exact make and model of your printer, paper size, image dimensions in pixels (so many pixels wide by so many pixels high). if going through a RIP, specify that too.
    etc.,
    someone may be able to help you (not necessarily this poster, who is not a Windows user).
    A screenshot of your settings or of the image could be very helpful too.
    Please read this FAQ for advice on how to ask your questions correctly for quicker and better answers:
    http://forums.adobe.com/thread/419981?tstart=0
    Thanks!

  • SharePoint Foundation 2013 Optimization For Large File Transfer?

    We are considering upgrading from  WSS 3.0 to SharePoint Foundation 2013.
    One of the improvements we want to see after the upgrade is a better user experience when downloading large files.  It can be done now, but it is not reliable.
    Our document library consists of mostly average sized Office documents, but it also includes some audio and video files and software installer package zip files ranging from 100MB to 2GB in size.
    I know we can change the settings to "allow" larger-than-default file downloads, but how do we optimize the server setup to make these large file transfers work as seamlessly as possible? More RAM on the SharePoint Foundation server? Other Windows,
    SharePoint or IIS optimizations? The files will often be downloaded over the Internet, so we will not have control over the download speed.

    SharePoint is capable of sending large files; it is a stateless HTTP system like any other website in that regard. Given that your server is sized appropriately for the amount of concurrent traffic you expect, I don't see any special optimizations required.
    Trevor Seward
    Follow or contact me at...
    This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs.
    I see information like this posted warning against doing it as if large files are going to cause your SharePoint server and SQL to crash. 
    http://blogs.technet.com/b/praveenh/archive/2012/11/16/issues-with-uploading-large-documents-on-document-library-wss-3-0-amp-moss-2007.aspx
    "Though SharePoint is meant to handle files that are up to 2 gigs in size, it is not practically feasible and not recommended as well."
    "Not practically feasible" sounds like a pretty dire warning to stay away from large files.
    I had seen some other links warning that large files in the SharePoint database cause problems with fragmentation and large amounts of wasted space that doesn't go away when files are removed, or that the server may run out of memory because downloaded files are held in RAM.

  • Recommended Structure for Large Files

    I am working at re-familiarizing myself with Oracle development and management, so please forgive my ignorance on some of these topics or questions. I will be working with a client who is planning a large-scale database for what is called "Flow Cytometry" data, which will be linked to research publications. The actual data files (FCS) and various text, tab-delimited and XML files will all be provided by researchers in a wrapper or zip container, which will be parsed by some as-yet-to-be-developed tools. For the most part, the data will consist of a large FCS file containing the actual Flow Cytometry data, along with various accompanying text/XML files containing the metadata (experiment details, equipment, reagents, etc.). What is most important is the metadata, which will be used to search for experiments and so on. For the most part the actual FCS data files (up to 100-300 MB) will only need to be linked (stored as BLOBs?) to the metadata, and their content will be used at a later time for actual analysis.
    1: Since the actual FCS files are large and may not initially be parsed and imported into the DB for later analysis, how can/should Oracle best be configured/partitioned so that a larger, direct-attached storage drive/partition can be used for the large files, so as not to take up space where the running Oracle instance is installed? We are expecting around 1 TB of data files initially.
    2: Are there any on-line resources which might be of value to such an implementation?

    Large files can be stored using the BFILE datatype. The data need not be transferred to Oracle tablespaces; the files reside in the OS file system.
    It is also possible to index BFILEs using Oracle Text.
    http://www.oracle-base.com/articles/9i/FullTextIndexingUsingOracleText9i.php
    http://www.stanford.edu/dept/itss/docs/oracle/10g/text.101/b10730/cdatadic.htm
    http://www.idevelopment.info/data/Oracle/DBA_tips/Oracle_Text/TEXT_3.shtml
    Mohan
    http://www.myoracleguide.com/
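
    As a concrete illustration of the BFILE suggestion above (the table and
    column names are invented for this sketch, and it assumes the Oracle JDBC
    driver on the classpath), reading one of the large FCS files back through a
    BFILE locator, without ever copying it into a tablespace, could look roughly
    like this:

        // Sketch only: read a BFILE column via Oracle's JDBC extensions.
        import java.io.InputStream;
        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import oracle.jdbc.OracleResultSet;
        import oracle.sql.BFILE;

        public class BfileReadSketch {
            public static void main(String[] args) throws Exception {
                try (Connection conn = DriverManager.getConnection(
                             "jdbc:oracle:thin:@//dbhost:1521/ORCL", "user", "password");
                     PreparedStatement ps = conn.prepareStatement(
                             "SELECT fcs_data FROM experiments WHERE experiment_id = ?")) {
                    ps.setLong(1, 42L);
                    try (ResultSet rs = ps.executeQuery()) {
                        if (rs.next()) {
                            BFILE fcs = ((OracleResultSet) rs).getBFILE(1);
                            fcs.openFile();                        // the file stays on the OS volume
                            try (InputStream in = fcs.getBinaryStream()) {
                                byte[] buf = new byte[1 << 20];    // stream in 1 MB chunks
                                while (in.read(buf) != -1) { /* hand chunks to the analysis code */ }
                            } finally {
                                fcs.closeFile();
                            }
                        }
                    }
                }
            }
        }

    The metadata that actually drives the searches can then live in ordinary
    relational columns, while the 100-300 MB FCS payloads sit on the
    direct-attached storage referenced by the BFILE's directory object.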

  • Please advise a novice computer artist on system requirements for large files in PS ?

    I am researching my next computer.
    I am an artist currently using CS2 to rework my paintings into prints.
    http://s719.photobucket.com/albums/ww198/blueridernz/Rachel%20Thompson%20Portfolio/?albumview=slideshow
    The paintings have been scanned at a high resolution and are 500MB.
    The image files I work with are large but the process simple, I am extracting and collaging.
    Nothing too technical here.
    As I will be printing large (9' x 4'), the end-result file could be 2 GB, which I understand is the limit for PSD files.
    I am not gaming or editing video files, just working with large images.
    From what I have read I believe I need
    i7 Core
    12 GB RAM set of 3
    GeForce GTX 460
    Any advice truly appreciated,
    Rachel.

    The graphics card is perhaps a bit of overkill if you really just paint and touch up in PS; a less power-hungry and cheaper model would do just fine. The rest should also do just fine, but keep in mind that all this is not going to do much good if you don't get a 64-bit operating system and possibly also upgrade to CS5 to make use of all these resources. You might not even be able to install CS2 under Windows 7...
    Mylenium

  • FTP Sender Adapter with EOIO for Large files

    Hi Experts,
    I am using XI 3.0.
    My scenario is File -> XI -> File. I need to pick the files up from an FTP server; there are around 50 files, each about 10 MB in size.
    I have to pick the files from the FTP folder in the same order as they were put into the folder, i.e., FIFO.
    So in the sender FTP communication channel I am specifying
    QoS = EOIO
    Queue name = ACCOUNT
    Will the files be picked up in FIFO order into the queue XBTO_ACCOUNT?
    So my questions are:
    What is the procedure for specifying the parameters so that the files are processed in FIFO order?
    What would be the best practice to achieve this from a performance point of view?
    Thanks
    Sai.

    Hi spantaleoni ,
    I want to process the files using the FTP protocol in FIFO order,
    i.e., files placed first in the folder should be picked up first, the next one after that, and so on.
    So if I use FTP,
    then to process the files in sequence, i.e., FIFO,
    would the processing parameters
    QoS = EOIO
    Queue name = ACCOUNT
    process the files in FIFO order?
    And to process large files (10 MB in size), what would be the best polling interval in seconds?
    Thanks,
    Sai

  • Seeburger As2 Receiver channel exception for large files

    Hello Folks,
    We have a JMS-to-Seeburger-AS2 interface and are facing the following issue in the AS2 receiver channel for files larger than 20 MB. In production it is working fine even for 40 MB.
    Delivering the message to the application using connection AS2_http://seeburger.com/xi failed, due to:
    com.sap.engine.interfaces.messaging.api.exception.MessagingException: javax.resource.ResourceException: Fatal exception: javax.resource.ResourceException: SEEBURGER AS2: org.apache.commons.httpclient.ProtocolException: Unbuffered entity enclosing request can not be repeated. # , SEEBURGER AS2: org.apache.commons.httpclient.ProtocolException: Unbuffered entity enclosing request can not be repeated. # .
    Please throw some light on the issue.
    Regards
    Praveen Reddy

    Hi Praveen,
    The problem is probably related to server sizing. Generally, a test system does not have the same sizing as the production server, which is why you cannot process large files in the test system.
    Check the sizing of the system.
    regards,
    Harish

  • Does Airport time capsule work well as a storage for large files, such as RAW Photos and HD Video?

    If Airport time capsule can work as a storage device, how reliable is it?  Are downloads into Airport dependent on internet speed, or on the time capsule wifi?

    If Airport time capsule can work as a storage device, how reliable is it?
    The Time Capsule was designed to be used for Time Machine backups, but can equally be used for storage. Like any network storage device it is relatively reliable. However, you should always have a good backup strategy for important files.
    Are downloads into Airport dependent on internet speed, or on the time capsule wifi?
    Data transfer rates "into" the Time Capsule are dependent on a number of factors. Which factors depend on the original source of the files. Regardless, the common factors will be the TC's internal hard drive interface and data write speeds of the drive itself.

  • Which format on a flash drive for large files for use by Mac and PC

    I need to copy large files (a 9 GB movie file exported from iMovie of a school graduation) onto 16 GB flash drives so they can be used by school parents who may have a Mac, a PC or even a TV.
    My first attempt says the files are too large.
    A lot of googling tells me this is a recognised problem, but I am confused about which advice to take.
    I am on a 2012 iMac running OS X version 10.7.4.
    Do I need to download some software to run NTFS? It all sounds so confusing!
    I ended up in this predicament because the quality of the photo slideshows I copied to my DVD-R was so bad. Thought a flash drive would be easy. Ha, not at all.
    Please answer in layman's terms - I could barely follow some of the advice I found.....

    Format the flash drives with Disk Utility in ExFAT format.

  • Looking for Word file reader

    I have tried in vain to find a text reader app that will open Microsoft Word files and wrap text at all degrees of text magnification. There's a terrific one for Android. Is there anything similar in the iOS world?

    " However, why not export the word files as PDF's and use iBooks? "
    That would be the easiest way....
    That was my 1st thought but he´s looking for an app to do that with word files.

  • IPlanet 6.0 SP4 crashes for large file uploads

    Environment:
    iPlanet 6.0 SP4, iPlanet App Server 6.5 with maintenance update 3 installed, running on the same machine on Solaris 8.0 (SPARC)
    When I try to upload a large file (> 10 MB) using an HTTP POST with multipart/form-data, the web server crashes with the following errors.
    [18/Oct/2002:16:52:02] catastrophe ( 600): Server crash detected (signal SIGSEGV)
    [18/Oct/2002:16:52:02] info ( 600): Crash occurred in function memmove from module /export/home/iplanet/web60sp4/bin/https/lib/liblibdbm.so
    [18/Oct/2002:16:52:02] failure ( 376): Child process admin thread is shutting down
    [18/Oct/2002:16:52:05] warning ( 624): On group ls2_default, servername oberon does not match subject "192.168.0.35" of certificate Server-Cert.
    [18/Oct/2002:16:52:05] info ( 624): Installing a new configuration
    [18/Oct/2002:16:52:05] info ( 624): [LS ls1] http://oberon, port 80 ready to accept requests
    [18/Oct/2002:16:52:05] info ( 624): [LS ls2] https://oberon, port 443 ready to accept requests
    [18/Oct/2002:16:52:05] info ( 624): A new configuration was successfully installed
    [18/Oct/2002:16:52:05] info ( 624): log.cpp:openPluginLog() reports: Environment variable IAS_PLUGIN_LOG_FILE is not set. All plugin messages will be logged in the web server log file
    [21/Oct/2002:10:40:02] catastrophe ( 1210): Server crash detected (signal SIGSEGV)
    [21/Oct/2002:10:40:02] info ( 1210): Crash occurred in NSAPI SAF gxrequest
    [21/Oct/2002:10:40:02] info ( 1210): Crash occurred in function __1cIGXBufferLAllocMemMap6ML_L_ from module /export/home/iplanet/app65mu3/ias/gxlib/libgxnsapi6.so
    [21/Oct/2002:10:40:02] failure ( 715): Child process admin thread is shutting down
    [21/Oct/2002:10:40:05] warning ( 1230): On group ls2_default, servername oberon does not match subject "192.168.0.35" of certificate Server-Cert.
    [21/Oct/2002:10:40:05] info ( 1230): Installing a new configuration
    [21/Oct/2002:10:40:05] info ( 1230): [LS ls1] http://oberon, port 80 ready to accept requests
    [21/Oct/2002:10:40:05] info ( 1230): [LS ls2] https://oberon, port 443 ready to accept requests
    [21/Oct/2002:10:40:05] info ( 1230): A new configuration was successfully installed
    Do I need to set anything in the web server or app server? Any help is appreciated.

    Be sure to roll back to 16.0.1; the initial v16 update had a serious security flaw.
