Large file processing in XI 3.0

Hi,
We are trying to process a large file of about 280 MB and we are getting timeout errors. I have applied all the recommended tuning for memory and heap sizes, but the problem persists. I want to know whether installing a decentral adapter engine just for this file processing might solve the problem, which I doubt.
Based on my personal experience there may be a practical limit on the file size XI can process, perhaps around 100 MB, even with minimal mapping and no BPM.
Any comments on this would be appreciated.
Thanks
Steve

Hi Debnilay,
We do have a 64-bit architecture and we still have the file processing problem. Currently we are splitting the file into smaller chunks and processing them, but we want to process the file as a whole.
Thanks
Steve

Similar Messages

  • Large file processing in file adapter

    Hi,
    We are trying to process a large file of about 280 MB and we are getting timeout errors. I have applied all the recommended tuning for memory and heap sizes, but the problem persists. I want to know whether installing a decentral adapter engine just for this large file processing might solve the problem, which I doubt.
    Based on my personal experience there may be a practical limit on the file size XI can process, perhaps around 100 MB, even with minimal mapping and no BPM.
    Any comments on this would be appreciated.
    Thanks
    Steve

    Dear Steve,
    This might help you,
    Topic #3.42
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/70ada5ef-0201-0010-1f8b-c935e444b0ad#search=%22XI%20sizing%20guide%22
    /people/sap.user72/blog/2004/11/28/how-robust-is-sap-exchange-infrastructure-xi
    This sizing guide and its memory calculations will be useful for dealing further with this issue.
    http://help.sap.com/bp_bpmv130/Documentation/Planning/XISizingGuide.pdf#search=%22Message%20size%20in%20SAP%20XI%22
    File Adapter: size of your processed messages
    Regards
    Agasthuri Doss

  • File Splitting for Large File processing in XI using EOIO QoS.

    Hi
    I am currently working on a scenario to split a large file (700 MB) using the sender file adapter "Recordset Structure" property (e.g. Row,5000). As the file is split and mapped, the resulting chunks are appended to a destination file. For example, if a 700 MB file with 20,000 records comes in, the destination file should end up with 20,000 records.
    To ensure no records are missed while passing through XI, the EOIO quality of service is used. A trigger record is appended to the incoming file (the trigger record structure is the same as the main payload recordset) using a UNIX shell script before the file is read by the sender file adapter.
    XPath conditions are evaluated in the receiver determination to either append the record to the main destination file or create a trigger file containing only the trigger record.
    The problem we are facing is that the "Recordset Structure" setting (e.g. Row,5000) splits the file into chunks of 5000 records, and when the remaining records of the main payload are fewer than 5000 (say 1300), those remaining 1300 lines get grouped with the trigger record and written to the trigger file instead of the actual destination file.
    For the sake of this forum I have listed a sample XML file below, representing the inbound file, with the last record (Duns = "9999") as the trigger record that marks the end of the file after splitting and appending.
    <?xml version="1.0" encoding="utf-8"?>
    <ns:File xmlns:ns="somenamespace">
    <Data>
         <Row>
              <Duns>"001001924"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Cage_Code>"3NQN1"</Cage_Code>
              <Extract_Code>"A"</Extract_Code>
         </Row>
         <Row>
              <Duns>"001001925"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Cage_Code>"3NQN1"</Cage_Code>
              <Extract_Code>"A"</Extract_Code>
         </Row>
         <Row>
              <Duns>"001001926"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Cage_Code>"3NQN1"</Cage_Code>
              <Extract_Code>"A"</Extract_Code>
         </Row>
         <Row>
              <Duns>"001001927"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Cage_Code>"3NQN1"</Cage_Code>
              <Extract_Code>"A"</Extract_Code>
         </Row>
         <Row>
              <Duns>"001001928"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Cage_Code>"3NQN1"</Cage_Code>
              <Extract_Code>"A"</Extract_Code>
         </Row>
         <Row>
              <Duns>"001001929"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Cage_Code>"3NQN1"</Cage_Code>
              <Extract_Code>"A"</Extract_Code>
         </Row>
         <Row>
              <Duns>"9999"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Cage_Code>"3NQN1"</Cage_Code>
              <Extract_Code>"A"</Extract_Code>
         </Row>
    </Data>
    </ns:File>
    In the sender file adapter I have, for test purposes, set the "Recordset Structure" to "Row,5" for the sample inbound XML file above.
    I have two XPath expressions in the receiver determination so that the last recordset, with Duns = "9999", is sent to the receiver (communication channel) that creates the trigger file.
    In my test case the first 5 records get appended to the correct destination file, but the last two records (the 6th and 7th) get sent to the receiver channel that is only supposed to take the trigger record (the last record with Duns = "9999").
    Destination file (this is where all the records with Duns NE "9999" are supposed to get appended):
    <?xml version="1.0" encoding="UTF-8"?>
    <R3File>
         <R3Row>
              <Duns>"001001924"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Extract_Code>"A"</Extract_Code>
         </R3Row>
         <R3Row>
              <Duns>"001001925"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Extract_Code>"A"</Extract_Code>
         </R3Row>
         <R3Row>
              <Duns>"001001926"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
               <Extract_Code>"A"</Extract_Code>
         </R3Row>
          <R3Row>
              <Duns>"001001927"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Extract_Code>"A"</Extract_Code>
         </R3Row>
          <R3Row>
              <Duns>"001001928"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Extract_Code>"A"</Extract_Code>
         </R3Row>
    </R3File>
    Trigger File:
    <?xml version="1.0" encoding="UTF-8"?>
    <R3File>
          <R3Row>
              <Duns>"001001929"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Ccr_Extract_Code>"A"</Ccr_Extract_Code>
         </R3Row>
          <R3Row>
              <Duns>"9999"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Ccr_Extract_Code>"A"</Ccr_Extract_Code>
         </R3Row>
    </R3File>
    I have tested the XPath conditions in XML Spy and they work fine. My doubt is about the "Recordset Structure" property set to "Row,5".
    Any suggestions on this will be very helpful.
    Thanks,
    Mujtaba


  • Java.io.IOException during large file processing on PI 7.1

    Hello Colleagues,
    For a large file scenario on our PI 7.1 system we have to verify how big a file we are able to process through PI.
    While handing over a large file (200 MB XML) from the Adapter Framework (file adapter) to the Integration Engine we receive the following error:
    Transmitting the message to endpoint http://<host>:<port>/sap/xi/engine?type=entry using connection File_http://sap.com/xi/XI/System failed, due to: com.sap.engine.interfaces.messaging.api.exception.MessagingException: Error transmitting the message over HTTP. Reason: java.io.IOException: Error writing to server.
    Message processing stops and the message remains in the Adapter Framework. Large files up to 100 MB we are able to process successfully.
    Could you please tell me why this happens and how we can solve it?
    Since it is an IOException rather than a java.lang.OutOfMemoryError, could it still be a memory issue?
    Many thanks in advance!
    Regards,
    Jochen

    Hi Jochen,
    Indeed the error is an IO error, and it occurs because the Adapter Engine was not able to send the message to the Integration Server. It typically happens due to memory/heap size issues.
    Have a look at these threads; they deal with the same problem. Please try the remedies suggested in them:
    Mail to Proxy scenario with attachment. AF channel error.
    Error with huge file
    problem with big file in file-to-proxy scenario
    Is there any additional information in the adapter messaging tool?
    Regards
    Suraj
    Edited by: S.R.Suraj on Oct 1, 2009 8:55 AM

  • Bottleneck in Large file processing

    Hi,
    We are experiencing timeout and memory issues when processing large files. I want to know whether the J2EE Adapter Engine or the Integration Engine is the bottleneck when processing large messages, e.g. files of over 300 MB, without splitting them.
    Thanks
    Steve

    Hi Mario,
    We are testing a scenario to find out the maximum file size that XI can handle, based on the blog
    ( /people/william.li/blog/2006/09/08/how-to-send-any-data-even-binary-through-xi-without-using-the-integration-repository) without any mapping. Up to 20 MB it works OK; beyond that we get a timeout error.
    Data from Moni:
    com.sap.engine.services.httpserver.exceptions.HttpIOException: Read timeout. The client has disconnected or a synchronization error has occurred. Read [1704371] bytes. Expected [33353075]. at com.sap.engine.services.httpserver.server.io.HttpInputStream.read(HttpInputStream.java:186) at com.sap.aii.af.service.util.ChunkedByteArrayOutputStream.write(ChunkedByteArrayOutputStream.java:181) at com.sap.aii.af.ra.ms.transport.TransportBody.<init>(TransportBody.java:99) at com.sap.aii.af.ra.ms.impl.core.transport.http.MessagingServlet.doPost
    This could be due to the ICM timeout settings, which we are planning to increase.
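    For illustration only, the ICM processing timeout is usually raised via instance profile parameters along these lines (the port number and values here are just examples, not our actual settings):
         icm/server_port_0 = PROT=HTTP, PORT=8000, PROCTIMEOUT=600, TIMEOUT=600
         icm/keep_alive_timeout = 600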
    I would like to hear about others' experience with the maximum file size they could process. Of course I know that it depends on the environment.
    Thanks
    Steve

  • Large file processing issue

    Hi,
    A 2 MB source file is generating an output file of over 180 MB, causing it to fail in pre-production and production. It processes successfully in the development box, where there is no web dispatcher or size restriction.
    The recommendation from SAP is that we try to reduce the output file size.
    Kindly look into the issue ASAP.
    Appreciate your help.
    Thanks,
    Satya Kumar

    Hi Satya,
    There are several approaches available; check the links below:
    /people/stefan.grube/blog/2007/02/20/working-with-the-payloadzipbean-module-of-the-xi-adapter-framework
    /people/aayush.dubey2/blog/2007/10/10/zip-transfer-unzip-increase-the-performance-of-your-java-abap-applications
    /people/pooja.pandey/blog/2005/10/17/number-formatting-to-handle-large-numbers
    /people/alessandro.guarneri/blog/2007/02/21/sap-xi-acting-as-a-huge-file-mover
    /people/alessandro.guarneri/blog/2006/03/05/managing-bulky-flat-messages-with-sap-xi-tunneling-once-again--updated
    One more option is to develop a ZIP adapter module: send the zipped file, and after processing unzip it again.
    Regards
    Ramesh

  • Need advice on large file processing with good performance

    Hi All,
    I am working on a program in which I have to read millions of records from an application server file. For this, I read 1 million records at a time and upload them into a DB table.
    What is the best approach to process these millions of records? What I currently do is read the 1 million records one by one, modify each record based on some conditions, collect them in an internal table, and update the DB table.
    The alternative approach I am considering is to read 1 million records into one internal table and then, within a loop, modify each record for the given conditions and update the DB table.
    Which approach is better?
    Please also advise on any other approaches with good performance.
    Regards,
    Nivas
    Edited by: Nivas4081 on Jul 24, 2008 2:55 PM

    Hi Joshi,
    Thanks for your reply. I have tested both approaches mentioned in my query; reading record by record and updating in data packets takes less time than reading into an internal table, then modifying and updating the DB table.
    Hi Ralph,
    Thanks for the reply.
    The modifications are similar in all the lines. I get related data from another class/method, do some calculation, and modify each record.
    Are there any performance tricks to follow when processing large amounts of data? By the way, I am reading a certain number of records at a time, say 400K, and updating the DB table using parallel processing.
    Apart from this, any suggestions on this?
    Regards,
    Nivas

  • OutOfMemory error on large file processing in sender file adapter

    Hi Experts,
    I have a file-to-IDoc scenario; on the sender side we are using a remote adapter engine. Files process fine when the size is below 5 MB, but if the file size is more than 5 MB we get java.lang.OutOfMemoryError: Java heap space. Can anyone suggest which parameter I need to change in order to process files larger than 5 MB?

    Hi Praveen,
    The suggestion from SAP is not to process huge files in one go. Instead, you can split the file into chunks and process those.
    To increase the heap memory:
    For the Java heap size and other memory requirements, refer to the following OSS note for details:
    Note 723909 - Java VM settings for J2EE 6.40/7.0
    Also check out OSS Note 862405 - XI Configuration Options for Lowering Memory Requirements.
    There are also OSS notes for Java heap size settings specific to your OS environment.
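    Purely as an illustration (the actual values depend on your OS, your JVM, and the notes above), the parameters those notes deal with are the standard JVM heap switches, typically maintained via the J2EE Engine Config Tool, e.g.:
         -Xms2048m
         -Xmx2048m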
    Thanks,

  • Large File Processing Problem

    Hi Group,
    I am facing a problem in XI while processing a 48 MB file through the file adapter; I have used content conversion in the design.
    I am using a normal 64-bit operating system with a maximum of 2 GB heap size and I am still facing the problem. Can anybody tell me how much heap size is required to process a 48 MB file through XI?

    Hi,
    Refer to the following SAP notes (go to www.service.sap.com/notes):
    File adapter FAQ - 821267
    Java heap - 862405
    Java settings - 722787
    These blogs may give some insight: /people/sap.user72/blog/2004/11/28/how-robust-is-sap-exchange-infrastructure-xi
    /people/sravya.talanki2/blog/2005/11/29/night-mare-processing-huge-files-in-sap-xi
    By the way, if the error says that a trailer is missing, where exactly are you getting this error?
    Regards,
    Moorthy

  • Another Large file processing question.

    I have a file that's about 500,000 lines long and I need to comma-delimit it. I can't do it in Excel for obvious reasons, so my last resort is Linux. It contains fixed-width columns and I need to put a comma between the columns. I know how wide each column is and its position. How can I add commas to this file in the specific places I need? Do I need a script? Can I use 'sed'?

    sed or awk may be the tools you want to try for this task. Documentation can be found through the man command or using your favorite search engine.
    C.
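    If you would rather use a small program than sed/awk, a minimal Java sketch of the same idea is below; the column widths in WIDTHS are purely illustrative and have to be replaced with the real layout of your file:
    import java.io.*;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.*;
    import java.util.*;
    public class FixedWidthToCsv {
        // Illustrative column widths; replace with the actual fixed-width layout.
        private static final int[] WIDTHS = {9, 4, 5, 1};
        public static void main(String[] args) throws IOException {
            try (BufferedReader in = Files.newBufferedReader(Paths.get(args[0]), StandardCharsets.UTF_8);
                 BufferedWriter out = Files.newBufferedWriter(Paths.get(args[1]), StandardCharsets.UTF_8)) {
                String line;
                while ((line = in.readLine()) != null) {
                    List<String> fields = new ArrayList<>();
                    int pos = 0;
                    for (int w : WIDTHS) {
                        int end = Math.min(pos + w, line.length());
                        // Cut the column out of the fixed-width line and trim its padding.
                        fields.add(pos < line.length() ? line.substring(pos, end).trim() : "");
                        pos = end;
                    }
                    out.write(String.join(",", fields));
                    out.newLine();
                }
            }
        }
    }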

  • Copying large file sets to external drives hangs copy process

    Hi all,
    Goal: to move large media file libraries for iTunes, iPhoto, and iMovie to external drives. This drive will then serve as the media drive for a new 2013 iMac. I am attempting to consolidate many old drives accumulated over the years onto newer and larger drives.
    Hardware: moving from a 2010 Mac Pro to a variety of USB and other drives for use with a 2013 iMac. The example below is from the boot drive of the Mac Pro. Today the target drive was a 3 TB Seagate GoFlex USB 3 drive formatted as HFS+ Journaled. All drives are this format. I was using the Seagate drive on both the Mac Pro (USB 2) and the iMac (USB 3). I also use a NitroAV FireWire and USB hub to connect 3-4 USB and FW drives to the Mac Pro.
    OS: Mac OS X 10.9.1 on Mac Pro 2010
    Problem: today, trying to copy large file sets such as the iTunes and iPhoto libraries or iMovie events from internal Mac drives to external drive(s) hangs the copy process (forever). This seems to happen mostly with very large batches of files: for example, an entire folder of iMovie events, the iTunes library, or the iPhoto library. The symptom is that the process starts and then hangs at a variety of different points, never completing the copy, and it requires a force quit of Finder and then a hard power reboot of the Mac. Recent examples today were (a) a hang at 3 GB for a 72 GB iTunes file; (b) a hang at 13 GB for the same 72 GB iTunes file; (c) a hang at 61 GB for a 290 GB iPhoto file. In the past I have had similar drive-copying issues with a variety of USB 2, USB 3, and FW drives (old and new), mostly on the 2010 Mac Pro. The libraries and programs seem to run fine with no errors. Copying small folders is rarely an issue. The drives are not making strange noises. The drives were checked for permissions and repaired. An early trip to the Genius Bar did not find any hardware issues on the internal drives.
    I seem to get these "drop-offs" of hard drives unmounting themselves, and other drive-copy hangs, more often than I should. The drives seem to be OK much of the time, but they do drop off here and there.
    Attempted solutions today: (1) Turned off all networking on the Mac (Ethernet and WiFi). This appeared to work and allowed the 72 GB iTunes file to copy fully without an issue. However, on the next several attempts to copy the iPhoto library the hangs returned (at 16 and then 61 GB) with no additional workarounds. (2) A restart changes the amount copied per attempt, but it still hangs. (3) The last line of a crash report said "Thunderbolt", but the Mac Pro has no Thunderbolt or Mini DisplayPort. I did format the Seagate drive on the new iMac, which does have Thunderbolt. ???
    Related threads were slightly different. Any thoughts or solutions would be appreciated. Better copy software than Apple's Finder? I want the new Mac to be clean and thus did not do data migration. Should I do that only for the iPhoto library? I'm stumped.
    It seems like more and more people will need to move large media file sets to external drives as they record more and more iPhone movies (my thing) and buy new Macs with smaller flash storage. Why can't the copy process just skip the items it can't copy and continue, putting an X on the photos/movies that didn't make it?
    Thanks -- John

    I'm having a similar problem. I'm using a 2012 MacBook Pro with a 500 GB SSD as the main drive, a 1 TB internal drive (I removed the optical drive), and I have also tried running from a SanDisk Ultra 64 GB Micro SDXC card with the beta version of Mavericks.
    I have a HUGE 1 TB Final Cut Pro library that I need to get off my LaCie Thunderbolt drive and moved to a 3 TB WD USB 3.0 drive. Every time I've tried to copy it, the process would hang at some point, roughly 20% of the way through, and then my MacBook would eventually restart on its own. No luck getting the file copied. Now I'm trying to create a disk image using Disk Utility to get the file off the Thunderbolt drive and saved to the 3 TB WD drive. It's been running for half an hour so far, and it appears it could take as long as 5 hours to complete.
    Doing the copy via a disk image was a shot in the dark, and I'm not sure how well it will work if I need to actually use the files again. I'll post my results after I see what happens.

  • Upload and Process large files

    We have a SharePoint 2013 on-premises installation and a business application that offers an option to copy local files to a UNC path, with some processing logic applied before copying them into a SharePoint library. The current implementation is:
    1. The user opens the application and clicks the “Web Upload” link in the left navigation. This opens a custom \Layouts page to select the upload file and its properties.
    2. The user specifies the file details and chooses a ZIP file from their local machine.
    3. The Web Upload page submit action will:
         a. Call a WCF service to copy the ZIP file from the local machine to a preconfigured UNC path
         b. Create a list item to store its properties along with the UNC path details
    4. A timer job executes at a periodic interval to:
         a. Query the list for items that are NOT yet processed and find the path of the ZIP file folder
         b. Unzip the selected file
         c. Loop over the unzipped file content and push it into the SharePoint library
         d. Update the list item in the “Manual Upload List”
    Can someone suggest a different design approach that manages the large file outside of the SharePoint context? Something like:
       1. Some option to initiate the file copy from the user's local machine to the UNC path when the layouts page is submitted
       2. Instead of timer jobs, external services that grab data from the UNC path at periodic intervals and push it into SharePoint

    Hi,
    According to your post, my understanding is that you want to upload and process files for SharePoint 2013 server.
    The following suggestions are for your reference:
    1. We can create a service to process the uploaded file and copy the files to the UNC folder.
    2. Create a file upload Visual Web Part and call the file processing service.
    Thanks,
    Dennis Guo
    TechNet Community Support

  • Processing Large Files using Chunk Mode with ICO

    Hi All,
    I am trying to process large files using an ICO. I am on PI 7.3 and am using the new PI 7.3 feature that splits the input file into chunks.
    I know that we cannot use mapping while using chunk mode.
    While trying this I noticed the following:
    1) I created the data type, message type, and interfaces in the ESR and used them in my scenario (no mapping was defined); sender and receiver data types were the same.
    Result: the scenario did not work. It created only one chunk file (.tmp file) and terminated.
    2) I used a dummy interface in my scenario and it worked fine.
    So please confirm whether we should always use dummy interfaces in the scenario when using chunk mode in PI 7.3, or whether there is something I am missing.
    Thanks in Advance,
    - Pooja.

    Hello,
    According to this blog:
    File/FTP Adapter - Large File Transfer (Chunk Mode)
    The following limitations apply to chunk mode in the file adapter. As per the screenshots in that blog, the split never considers the payload; it is just a binary split. So the following limitations apply:
    Only for File Sender to File Receiver
    No Mapping
    No Content Based Routing
    No Content Conversion
    No Custom Modules
    You are probably using content conversion, and that is why it is not working.
    Hope this helps,
    Mark
    Edited by: Mark Dihiansan on Mar 5, 2012 12:58 PM

  • Processing large file in multiple threads

    I have a large plain text file (~1 GB) that contains lots (~500,000) of chunks of text, separated by the record separator "\n//\n". In my application, I have a class that reads the file and turns the records into objects, which is fairly time-consuming (the bottleneck is definitely in the processing, not the IO). The app is running on a multi-core machine, so I want to split the processing up over several threads.
    My plan is to write a "Dispatcher" class, which reads the file from disk and maintains a queue of records, and a "Processor" class, which requests records from the Dispatcher object, turns the records into objects, and add()s them to an array. Is this the correct way to do it? Can anyone point me in the direction of a tutorial or example of this type of multi-threaded file processing?

    import java.io.*;
    import java.util.*;
    import java.util.concurrent.*;
    public class TaskExecutor {
        // Roughly one worker thread per core; availableProcessors() + 1 is a common choice.
        private static final int NTHREADS = Runtime.getRuntime().availableProcessors() + 1;
        private static final ExecutorService exec = Executors.newFixedThreadPool(NTHREADS);
        private static final Collection<Object> result = new ConcurrentLinkedQueue<Object>();
        public static void main(String[] args) throws IOException {
            BufferedReader reader = new BufferedReader(new FileReader(args[0]));
            StringBuilder buffer = new StringBuilder();
            String line;
            while ((line = reader.readLine()) != null) {
                if (line.equals("//")) {             // record separator "\n//\n"
                    final String text = buffer.toString();
                    exec.execute(new Runnable() {    // hand the record to the thread pool
                        public void run() {
                            handle(text);
                        }
                    });
                    buffer.setLength(0);
                } else {
                    buffer.append(line).append('\n');
                }
            }
            reader.close();
            exec.shutdown();
        }
        private static void handle(String text) {
            // turn the record text into your object here and collect it
            result.add(text);
        }
    }
    http://www.javaconcurrencyinpractice.com/listings/TaskExecutionWebServer.java
    Edited by: Fireblaze-II on Jun 19, 2008 5:48 AM

  • Process large file using BPEL

    My project has a requirement to process a large file (10 MB) all at once. In the project, the file adapter reads the file and then calls 5 other BPEL processes to perform 10 different validations before delivering to an Oracle database. I can't use the adapter's debatching feature because of the header and detail record validation requirements. I did some performance tuning (e.g. audit level to minimum, logging level to error, JVM size to 2 GB, etc.) as described in the Oracle BPEL user guide. We are using a 4-CPU, 4 GB RAM IBM AIX 5L server. I observed that the Receive activity at the beginning of each process takes a lot of time, while the other transient processing performs as expected.
    Following are statistics for the Receive activity per BPEL process:
    500 KB: 40 sec
    3 MB: 1 hour
    Because we have 5 BPEL processes, a lot of time is wasted in the Receive activity.
    I haven't tried 10 MB so far because of the poor performance figures for the 3 MB file.
    Does anyone have an idea how to improve the performance of the initial Receive activity of a BPEL process?
    Thanks
    -Simanchal

    I believe the limit in SOA Suite is 7 MB if you want to use the full payload and perform some kind of orchestration. Otherwise you need to do some kind of debatching, which you stated will not work.
    SOA Suite is not really designed for your kind of use case, as it needs to process this file in memory, and any transformation can increase the message size by 3 to 10 times. If you are writing to a database, why can't you read the rows one by one?
    If you want to perform this kind of action, have a look at ODI (Oracle Data Integrator). I also believe that OSB (AquaLogic) can handle files up to 200 MB, so this can be an option as well, but it may require debatching.
    cheers
    James
