Slow "File Processing"

For some reason, any app that I download from the iTunes Store takes FOREVER to "process" before it can be used.  I just downloaded Disney's Tangled: Storybook Deluxe for my kids.  It took only 15 minutes to download but has been "processing" for more than 30 minutes on a dual-processor, 4-core Mac Pro.
I have had the same problem with other apps: relatively fast downloads but painfully slow "processing" times.
Searching the forums, it looks like others have had this issue, but few have found a solution.  Any suggestions?

Yes, I'm having the same problem, and from what I've read it is Apple's server signing the app... I don't think it's your Mac.
http://apple.stackexchange.com/questions/44072/extremely-slow-processing-file-times-for-apps-in-itunes-10-6

Similar Messages

  • COMPSUMM.BOX Slow to Process Larger SUM Files

    Hey Everyone,
    We have a large SCCM 2007 environment (over 2,000 sites, 80,000+ clients), and we are finding that the Central Site doesn't appear to be able to keep up with the influx of messages coming into COMPSUMM.BOX.  We can move all of the files out and feed them back in slowly, but even if we do nothing but clear out the files, the inbox gets backlogged again.
    In this inbox there is a combination of SVF and SUM files.  The SVF files are small and process without any issues; the problem is the SUM files.  SUM files up to about 500KB appear to process OK and don't hold up the overall inbox processing too much, but when the 2MB, 3MB, 4MB, and 5MB SUM files come in, the inbox comes to a crawl.  It takes a significant amount of time to process each of these larger files (15 minutes+ each), and while one is processing, more come in; then it gets to the next one, more come in, and by the end of the day we can end up with as many as 300,000 files in this folder.  Eventually it processes a lot of them and occasionally even gets caught up (down to only 10,000 files), but in busier periods it needs help from us moving some of the files out.  Also, the large SUM files seem to process at a higher priority, leaving other items sitting there not moving while the system works on the larger files.  What process generates the larger files flowing through here, and how often?
    My question is: is there anything we can do to reduce the amount of data flowing through this specific inbox so it can better keep up with the load?  Can we change the processing priority so that the larger SUM files don't automatically jump to the front of the queue?  I know we can change the schedule for the "Site System Status Summarizer", but there doesn't seem to be such a schedule for the "Component Status Summarizer".  I don't want to turn off replicating these messages from the child primary sites, but we also don't want to be dealing with a constant backlog either.
    Any suggestions are much appreciated.
    Thanks!
    -Jeff

    Hi Garth,
    I wasn't saying that the larger files process because they are larger, but rather that SUM files appear to process at a higher priority than SVF files in this inbox.  Once one of the large SUM files hits its turn in the queue, compsumm.box starts growing.  There were 88,000 files in here when I came in this morning, and it still had files from the 17th that hadn't processed.  I moved out all the large files (1,000+ of them) and all of the other files in the inbox processed; move the 1,000 back in, and the backlog begins again.  We can have 300 SVF files come in along with one SUM file, and the SUM file will begin processing immediately while the SVF files don't appear to process until the SUM file is done.  I was thinking more along the lines that the content of the SUM files may be prioritized higher than the others.
    By 2,000 sites, yes: CM07 primary and secondary sites, 19 primary and 2,286 secondary to be exact.
    No errors in COMPSUMM.LOG.  Just very busy processing.
    HINV = Every 4 Days
    SINV = Every 7 Days
    SWM = Every 7 Days
    Heartbeat = Every 7 Days
    Discovery/inventory isn't an issue: no DDR or MIF backlogs.  We don't run AD System Discovery, only Group Discovery, and it runs on the tier-two primary sites once a month, staggered by region.
    CPU is steady at about 50% with SMSEXEC using about 30%.
    Memory is steady at around 10GB of the 36GB total.
    SQL is interesting... SQL is a cluster on a dedicated server with 48GB physical memory, running SQL Server 2008 SP3 64-bit.  SQL is configured to use up to 42GB of memory, leaving 6GB for the OS.  In Task Manager, sqlservr.exe shows it is using about 800-900MB most of the time.  However, the status bar in Task Manager shows Physical Memory: 98%, consistently.  I ran TASKLIST and exported the results to view in Excel, but when I total everything there, it is only about 2.1GB.  Hmm...  Running RAMMap.exe shows 43GB of memory allocated as AWE, yet AWE is NOT enabled in SQL.  From another Google search, this appears to be something others have seen as well, but I'm not finding any good solution, or any really clear indication that this is actually a problem as opposed to just how Win2K8 manages memory.  I don't like seeing the server showing 98% memory usage, though.  Going to continue to look at this further and feed the info back to the team.
    I have not performed the DBCC commands on the SQL database (yet) but will do so.
    Thanks!
    -Jeff
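
    The move-out/feed-back workaround described above is easy to script. Below is a minimal sketch of that idea in plain Java (not an SCCM tool; the inbox path, holding path, 500KB threshold, and 60-second pacing are all placeholders to adapt):

    import java.io.IOException;
    import java.nio.file.*;

    // Sketch: park SUM files larger than a threshold outside COMPSUMM.BOX so the
    // small SVF files can drain, then feed the big ones back in one at a time.
    public class CompSummSweep {
        public static void main(String[] args) throws IOException, InterruptedException {
            Path inbox = Path.of("D:\\SMS\\inboxes\\compsumm.box");   // placeholder
            Path holding = Path.of("D:\\SMS\\compsumm.holding");      // placeholder
            Files.createDirectories(holding);
            // Phase 1: move the large SUM files aside.
            try (DirectoryStream<Path> files = Files.newDirectoryStream(inbox, "*.SUM")) {
                for (Path f : files)
                    if (Files.size(f) > 500 * 1024)
                        Files.move(f, holding.resolve(f.getFileName()));
            }
            // Phase 2: feed them back slowly so the summarizer can keep up.
            try (DirectoryStream<Path> parked = Files.newDirectoryStream(holding, "*.SUM")) {
                for (Path f : parked) {
                    Files.move(f, inbox.resolve(f.getFileName()));
                    Thread.sleep(60_000);
                }
            }
        }
    }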

  • Slow File I/O with newer compilers (SPARCompiler 5 and gcc 3)

    Well, it's more like slow File I, as I'm only really testing input, but anyway, here goes...
    I have a simple program that takes a file of about 250MB and simply loops through it, reading in chunks of 8,192 bytes.
    Basically I'm timing the read speed to try to figure out why there is such a large discrepancy in reading a file between the older SPARCompiler 3.0.1 : C++ 4.0.1 and the newer 5.0, as well as gcc 3.0. I realize that the old compiler doesn't support the STL and that SPARCompiler 5's .h headers basically point to the STL headers, so there is a difference there, but the numbers I get are around 35s with SC 3.0.1 : C++ 4.0.1, about 90s with SC 5.0, and 76s with gcc 3.0. I understand the numbers will vary depending on the test input file size and the machine you're running on, but the ratios should stay relatively similar if you run your own tests. That slowdown is pretty much unacceptable in my opinion, and I really want the benefits of a new compiler. Am I doing something wrong? Is this a known issue?
    Command Lines (I did my best to try with different versions of the OS, but Sun licensing makes it hard!):
    With C++ 4.0.1 (tried on Solaris 2.6 and 2.7)
    CC -fast -o IOtest.CC io_read_tst.cpp
    With C++ 5 (used on Solaris 2.5.1 and 2.8)
    CC -O3 -o IOtest.CC5 io_read_tst.cpp
    With gcc 3.0 (tried on Solaris 2.5.1, 2.6, 2.7, and 2.8)
    gcc -O3 -o IOtest.gcc3 io_read_tst.cpp
    I've got the latest patches for the OS and compilers, and I've tried compiling on multiple versions of the OS (Solaris 2.5.1, 2.6, 2.7, and 2.8); I always get similar results. I know I haven't tested every OS possibility, but you have to re-install the OS to test, since the compiler is licensed to only one machine, at least for the Sun compilers. Any help would be greatly, greatly appreciated; the code is as follows:
    #include <iostream.h>   // pre-standard headers, kept deliberately for the old compilers under test
    #include <fstream.h>
    #include <time.h>
    #include <stdlib.h>     // EXIT_SUCCESS / EXIT_FAILURE
    int main( )
    {
        char sBuffer[8192];
        time_t startTime, endTime;
        ifstream TestFile;
        TestFile.open( "/data4/tax101/taxtab" ); // Change this to a large file local to your system (~250MB)
        if( TestFile.fail() )
            return( EXIT_FAILURE );
        time( &startTime );
        // Time the whole file, read in 8,192-byte chunks.
        while( !TestFile.eof() )
            TestFile.read( sBuffer, 8192 );
        time( &endTime );
        cout << endl << "Read Time: " << endTime - startTime << "s\n";
        return( EXIT_SUCCESS );
    }
    Thanks,
    Lance Beddawi
    [email protected]

    Alright, here is my code reposted:
    // Warning.java
    // Reads student data from a text file and writes data to another text file.
    import java.util.*;
    import java.io.*;
    public class Warning
    {
        // Reads student data (name, semester hours, quality points) from a
        // text file, computes the GPA, then writes the data to another file
        // if the student is placed on academic warning.
        public static void main (String[] args)
        {
            int creditHrs;      // number of semester hours earned
            double qualityPts;  // number of quality points earned
            double gpa;         // grade point (quality point) average
            Scanner scan = null;
            PrintWriter outFile = null;
            String name, inputName = "students.txt";
            String outputName = "warning.txt";
            try
            {
                // Set up Scanner to the input file
                scan = new Scanner(new FileInputStream(inputName));
                // Set up the output file stream
                outFile = new PrintWriter(new FileWriter(outputName));
                // Print a header to the output file
                outFile.println();
                outFile.println("Students on Academic Warning");
                outFile.println();
                // Process the input file, one token at a time
                while (scan.hasNext())
                {
                    // Get the credit hours and quality points and
                    // determine if the student is on warning. If so,
                    // write the student data to the output file.
                    name = scan.next();
                    creditHrs = scan.nextInt();
                    qualityPts = scan.nextDouble();
                    gpa = qualityPts / creditHrs;
                    if ((gpa < 1.5 && creditHrs < 30) || (gpa < 1.75 && creditHrs < 60) || (gpa < 2.0 && creditHrs >= 60))
                    {
                        outFile.print(name + " ");
                        outFile.print(creditHrs + " ");
                        outFile.print(qualityPts + " ");
                        outFile.println(gpa);
                    }
                }
                outFile.close(); // flush and close so the output actually reaches disk
            }
            // Add a catch for each of the specified exceptions, and in each case
            // give as specific a message as you can
            catch (FileNotFoundException e)
            {
                System.out.println("The file " + inputName + " was not found.");
            }
            catch (IOException e)
            {
                System.out.println("The I/O operation failed and " + outputName + " could not be created.");
            }
            catch (InputMismatchException e)
            {
                System.out.println("The input information was not of the right type.");
            }
        }
    }

  • Error message by periodic weekly: No output from the 1 file processed

    Hi there,
    For the last four weeks, I have had a problem with the maintenance script periodic weekly. Up to December 22nd, the script did what it should do: rebuilding the locate and whatis databases and rotating log files. Since one week later, I have gotten the error message: No output from the 1 file processed.
    Normally, I use Anacron to do the job. When I noticed the problem, I tried to start the script with TinkerTool System, getting the same result. Another try using the Terminal (sudo periodic weekly) also failed. The commands locate and whatis are working; locate.updatedb and makewhatis also. I'm running 10.4.8; in the past, I did not have such problems. Anyone with an idea or solution?
    Thanks
    Klaus

    Hi Gary,
    here is the output you were asking for:
    Last login: Thu Jan 25 20:03:55 on console
    Welcome to Darwin!
    DeepThought:~ dirk$ sudo /private/etc/periodic/weekly/500.weekly; echo $?
    Password:
    Sorry, try again.
    Password:
    Rebuilding locate database:
    Rebuilding whatis database:
    find: /usr/local/man: No such file or directory
    makewhatis: /usr/share/man/man1/fetchmailconf.1.gz: No such file or directory
    Rotating log files: ftp.log lpr.log mail.log netinfo.log ipfw.log ppp.log secure.log
    access_log error_log
    Running weekly.local:
    Rotating psync log files:/etc/weekly.local: line 17: syntax error near unexpected token `)'
    /etc/weekly.local: line 17: `if [ -f /var/run/syslog.pid ]; then kill -HUP 0 80 79 81 0cat /var/run/syslog.pid | head -1); fi'
    2
    DeepThought:~ dirk$ ls -loe /private/etc/periodic/weekly/500.weekly
    -r-xr-xr-x 1 root wheel - 2532 Jan 13 2006 /private/etc/periodic/weekly/500.weekly
    DeepThought:~ dirk$
    It seems Roger's idea is correct: PsyncX, or rather my incomplete uninstall of it, is responsible for my problems. Should I remove the whole weekly.local file, or only its contents? I prefer removing the whole file, because it was created when PsyncX was installed; its creation date matches the date I installed the app (December 25).
    Klaus
    By the way: it seems the solution to my problem is in sight, so I want to thank you all for the amazing help I got from you!
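
    For what it's worth, judging from the garbled error output above, line 17 of weekly.local was presumably meant to read: if [ -f /var/run/syslog.pid ]; then kill -HUP $(cat /var/run/syslog.pid | head -1); fi. The $( of the command substitution appears to have been mangled, which would explain the "syntax error near unexpected token `)'". This reconstruction is a guess, not something stated in the thread.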

  • A DNG File Processed by an Earlier Version of ACR--How to Re-Process in ACR 8.4

    I have thousands of DNG files processed by the ACR versions that came with CS4 and CS5. I was hoping to re-process some of them with the later ACR versions that came with CS6. But when I open a previously edited DNG file in ACR 8.4, it seems to revert to the earlier version in which the file was originally edited/processed. I made a duplicate of a file and reverted it to Default (sort of like restoring its virginity), but ACR 8.4 was not fooled; it still opened the earlier version. What to do?

    The earlier version of ACR was using a previous Process Version. 
    If you want the newer adjustment sliders, set your Process Version to 2012 on the Camera Calibration tab.

  • Slow Files Copy File Server DFS Namespace

    I have two file servers running as VMs; the servers are on different physical hosts.
    Both are connected with a DFS namespace.
    The problem is that the two servers never have the same copy speed.
    Sometimes file copies are very slow (about 1MBps) on FS01 and fast (12MBps) on FS02.
    Sometimes it's fast on FS01 and slow on FS02.
    Sometimes both of them are slow.
    So, as usual, I rebooted the servers. That didn't work.
    Then I rebooted DC01; that also didn't work. There is another, sibling domain controller, DC02.
    After I rebooted DC02, one of the file servers became normal and the other was still slow.
    It's FS01 or FS02 at random; they never get the faster speed together.
    Users never complain about a slow FS, because 1MBps is acceptable for them to open Word, Excel, etc.
    The HUGE problem is that I don't have a backup on the days an FS is slow.
    The problem has lasted two weeks; I'm giving up on fixing it myself and need help from you experts.
    Thanks!
    DC01, DC02, FS01, FS02 (Win 2012 and All VMs)

    Hi,
    Since the slow copy also occurred when you tried a direct copy from both shared folders, you could enable the disk write cache on the destination server and check the results.
    HOW TO: Manually Enable/Disable Disk Write Caching
    http://support.microsoft.com/kb/259716
    Windows 2008 R2 - large file copy uses all available memory and then tranfer rate decreases dramatically (20x)
    http://social.technet.microsoft.com/Forums/windowsserver/en-US/3f8a80fd-914b-4fe7-8c93-b06787b03662/windows-2008-r2-large-file-copy-uses-all-available-memory-and-then-tranfer-rate-decreases?forum=winservergen
    You could also refer to the FAQ article to troubleshoot the slow copy issue:
    [Forum FAQ] Troubleshooting Network File Copy Slowness
    http://social.technet.microsoft.com/Forums/windowsserver/en-US/7bd9978c-69b4-42bf-90cd-fc7541ccb663/forum-faq-troubleshooting-network-file-copy-slowness?forum=winserverPN
    Best Regards,
    Mandy 

  • Sender File adapter and duplicate file processing

    If I set the sender file adapter to delete or archive, then when a file gets picked up and processed, the file will not be deleted/archived unless it was successfully processed.  However, if it errors out during processing, the file remains, but its message gets persisted in the Integration Engine or Adapter Engine.  Since there is an automatic retry, we have the potential for duplicate processing: in addition to the retry, the adapter is still continuously polling for this file.  In other words, how do we stop this duplicate file processing?
    Thanks.

    Hi Bevan,
    However, if it errors out during processing, the file remains, but its message gets persisted in the Integration Engine or Adapter Engine.
    Your file won't get deleted unless the Adapter Engine picks it up successfully. If it is not picked up by the Adapter Engine, it is not stored in the Adapter Engine; if it reached the Integration Server and failed there, the file would already have been deleted.
    Please let me know if you have any questions.
    Please reward points.
    Regards
    Sreeram.G.Reddy
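
    Since the retry and the poller don't know about each other, one pragmatic guard is an idempotence check in front of the processing step. The sketch below is hypothetical Java outside PI itself (the class, the registry file, and the key format are all made up for illustration): it remembers each file version by name, size, and modification time, so a re-poll or retry of the same file can be detected and skipped.

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.StandardOpenOption;
    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;

    // Remembers which file versions were already handed off, persisted in a
    // simple one-key-per-line registry file so the guard survives restarts.
    public class DuplicateGuard {
        private final Set<String> seen = new HashSet<>();
        private final Path registry;

        public DuplicateGuard(Path registry) throws IOException {
            this.registry = registry;
            if (Files.exists(registry)) seen.addAll(Files.readAllLines(registry));
        }

        // Returns true exactly once per distinct file version.
        public synchronized boolean firstTime(Path file) throws IOException {
            String key = file.getFileName() + "|" + Files.size(file) + "|"
                       + Files.getLastModifiedTime(file).toMillis();
            if (!seen.add(key)) return false; // seen before: a duplicate
            Files.write(registry, List.of(key),
                        StandardOpenOption.CREATE, StandardOpenOption.APPEND);
            return true;
        }
    }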

  • FTP - Run OS Command before file processing

    Hi,
    I have a requirement wherein I need to FTP a file from XI to a folder on an FTP server. The FTP server is set up in such a way that I cannot put the file directly: before transferring the file, I have to use cd (the change-directory command) to reach a particular folder and then transfer the file. This means that I cannot give the folder information directly in TARGET DIRECTORY.
    To address this, I decided to use the feature "Run OS Command BEFORE file processing" and wrote the command 'cd <foldername>'. It is not working. Then I tried "Run OS Command AFTER file processing", and it also did not work.
    Does anyone have any clue how I can address this requirement using the FILE adapter?
    thanks,
    rakesh

    Hi,
    OS commands are executed on the XI server, not on the FTP server. So you would first need to connect to the FTP server and then execute the cd command there.
    Option 1) Get the absolute path, i.e. the direct path on the FTP server, so that you can connect directly to the FTP server's specific directory.
    Option 2) Write the file to your XI server itself using the NFS file transport protocol, then FTP this file from your XI server to the FTP server using a shell script.
    That is, write a shell script, executed on the XI server, containing the logic to transfer the files over FTP. This shell script is executed from the receiver file adapter via the OS command option.
    Hope this helps,
    Regards,
    Moorthy
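
    The "cd first, then put" sequence Moorthy suggests for the shell script looks like this in Java with the Apache Commons Net FTPClient (host, credentials, and paths are placeholders; having Commons Net available is an assumption, not part of the standard XI setup):

    import java.io.FileInputStream;
    import java.io.InputStream;
    import org.apache.commons.net.ftp.FTP;
    import org.apache.commons.net.ftp.FTPClient;

    // Sketch: change into the restricted target folder on the remote server
    // first, then upload -- the step the adapter's OS command cannot do,
    // because that command runs on the XI host, not on the FTP server.
    public class FtpCdThenPut {
        public static void main(String[] args) throws Exception {
            FTPClient ftp = new FTPClient();
            ftp.connect("ftp.example.com");
            ftp.login("user", "secret");
            ftp.enterLocalPassiveMode();
            ftp.setFileType(FTP.BINARY_FILE_TYPE);
            if (!ftp.changeWorkingDirectory("/restricted/target/folder"))
                throw new IllegalStateException("cd failed: " + ftp.getReplyString());
            try (InputStream in = new FileInputStream("/tmp/outbound/payload.xml")) {
                if (!ftp.storeFile("payload.xml", in))
                    throw new IllegalStateException("put failed: " + ftp.getReplyString());
            }
            ftp.logout();
            ftp.disconnect();
        }
    }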

  • Huge size file processing in PI

    Hi Experts,
    1. I have seen blogs which explain processing huge files, for File and SFTP:
    SFTP Adapter - Handling Large File
    File/FTP Adapter - Large File Transfer (Chunk Mode)
    Here we also have the constraint that we cannot do any mapping, and it has to be EOIO QoS.
    Would it be possible to process a 1GB file and do mapping? Which hardware factors decide whether a system is capable of processing a large size with mapping?
    Is it the number of CPUs, the application servers (Java and ABAP), the number of server nodes, or the Java heap size?
    If my system is able to process a 10MB file with mapping, there should be something that determines that capability.
    This kind of huge-file processing will fit only some scenarios. For example, a proxy-to-SOAP scenario with a 1GB message exchange does not make sense; I have no idea whether any web service would handle such a huge file.
    2. Suppose PI is able to process a 50MB message with mapping; what options do we have in PI to increase the performance?
    I have come across these two points many times during the design phase of my project and am looking for your suggestions.
    Thanks.

    Hi Ram,
    You have not mentioned what sort of integration it is; you just mentioned FILE, so I presume it is a file-to-file scenario. In that case, on PI 7.11 I am able to process a 100MB file (more than 1 million records) with mapping (the file is a delta extract in SAP ECC AL11). In the sender file adapter I chose "Recordset per Message" and processed the messages in bits and pieces. Please note this is not the actual standard chunk mode: the initial run of the sender adapter loads the 100MB file into memory, and after that messages are sent to the IE based on the recordset-per-message setting. At more than 100MB, PI's Java stack starts bouncing because of memory issues. Later we redesigned the interface as proxy-to-file async, with the proxy sending the messages to PI in chunks; in a single run it sends 5,000 messages.
    For PI 7.11 I believe we have a memory limitation per cluster node: each cluster node can't have more than 5GB, and processing again depends on the number of Java app servers. I think this is no longer the limitation from PI 7.30 onward, where we can use 16GB of memory per cluster node.
    "This kind of huge-file processing will fit only some scenarios. For example, a proxy-to-SOAP scenario with a 1GB message exchange does not make sense; I have no idea whether any web service would handle such a huge file."
    If I understand this correctly: with async communication, 1GB of data can definitely be sent to a web service, provided the messages from the proxy are sent to PI in batches. The same idea may work for sync communication as well, but then timeouts in the receiver channel will be the next issue. Increasing timeouts globally is not best practice, but if you are on 7.30 or a later version you can increase timeouts specific to your scenario.
    To handle a 50MB file size, make sure you have additional Java app servers; I don't remember exactly how many app servers we had to handle the 100MB file size.
    Thanks
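
    The batching idea in this reply (stream the extract and hand off fixed-size record sets so no single component ever holds the whole payload) amounts to something like the sketch below; the file path, the 5,000-record batch size, and the dispatch target are placeholders rather than anything from PI itself:

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.ArrayList;
    import java.util.List;

    // Stream a large delimited file and emit fixed-size record batches, so
    // memory use stays bounded regardless of the total file size.
    public class BatchedDispatch {
        static final int BATCH_SIZE = 5000; // records per message, as in the reply

        public static void main(String[] args) throws IOException {
            Path bigFile = Path.of("/data/inbound/huge_extract.txt"); // placeholder
            List<String> batch = new ArrayList<>(BATCH_SIZE);
            try (BufferedReader r = Files.newBufferedReader(bigFile)) {
                String line;
                while ((line = r.readLine()) != null) {
                    batch.add(line);
                    if (batch.size() == BATCH_SIZE) {
                        dispatch(batch); // e.g. one proxy call / one PI message
                        batch.clear();
                    }
                }
            }
            if (!batch.isEmpty()) dispatch(batch); // the partial final batch
        }

        static void dispatch(List<String> records) {
            System.out.println("sending " + records.size() + " records");
        }
    }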

  • Large file processing in XI 3.0

    Hi,
    We are trying to process a large file of ~280MB, and we are getting timeout errors. I followed all the required tuning for memory and heap sizes, and the problem still exists. I want to know whether installing a decentral Adapter Engine just for this file processing might solve the problem, which I doubt.
    Based on my personal experience, there may be a file-size limit in XI of perhaps 100MB with minimal mapping and no BPM.
    Any comments on this would be appreciated.
    Thanks
    Steve

    Hi Debnilay,
    We do have a 64-bit architecture and still have the file processing problem. Currently we are splitting the file into smaller chunks and processing those, but we want to process the file as a whole.
    Thanks
    Steve
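
    The splitting workaround Steve mentions can be as simple as the sketch below, assuming a line-oriented flat file (for XML payloads the split would have to respect element boundaries instead). The input path and 100,000-line part size are placeholders:

    import java.io.*;

    // Split one large text file into parts of at most PART_LINES lines each,
    // cutting on line breaks so individual records stay intact.
    public class FileSplitter {
        static final int PART_LINES = 100_000;

        public static void main(String[] args) throws IOException {
            String source = "/data/inbound/big_payload.txt"; // placeholder
            try (BufferedReader in = new BufferedReader(new FileReader(source))) {
                String line;
                int part = 0, linesInPart = 0;
                PrintWriter out = null;
                while ((line = in.readLine()) != null) {
                    if (out == null || linesInPart == PART_LINES) {
                        if (out != null) out.close();
                        out = new PrintWriter(new FileWriter(source + ".part" + part++));
                        linesInPart = 0;
                    }
                    out.println(line);
                    linesInPart++;
                }
                if (out != null) out.close();
            }
        }
    }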

  • Large file processing in file adapter

    Hi,
    We are trying to process a large file of ~280MB, and we are getting timeout errors. I followed all the required tuning for memory and heap sizes, and the problem still exists. I want to know whether installing a decentral Adapter Engine just for this large file processing might solve the problem, which I doubt.
    Based on my personal experience, there may be a file-size limit in XI of perhaps 100MB with minimal mapping and no BPM.
    Any comments on this would be appreciated.
    Thanks
    Steve

    Dear Steve,
    This might help you,
    Topic #3.42
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/70ada5ef-0201-0010-1f8b-c935e444b0ad#search=%22XI%20sizing%20guide%22
    /people/sap.user72/blog/2004/11/28/how-robust-is-sap-exchange-infrastructure-xi
    This sizing guide and its memory calculations will be useful for dealing further with this issue:
    http://help.sap.com/bp_bpmv130/Documentation/Planning/XISizingGuide.pdf#search=%22Message%20size%20in%20SAP%20XI%22
    File Adapter: size of your processed messages
    Regards
    Agasthuri Doss

  • File Splitting for Large File processing in XI using EOIO QoS.

    Hi
    I am currently working on a scenario to split a large file (700MB) using the sender file adapter's "Recordset Structure" property (e.g., Row,5000). As the files are split and mapped, they are appended to a destination file. In an example scenario, if a 700MB file comes in with, say, 20,000 records, the destination file should end up with 20,000 records.
    To ensure no records are missed on the way through XI, EOIO QoS is used. A trigger record is appended to the incoming file (the trigger record's structure is the same as the main payload recordset) by a UNIX shell script before the file is read by the sender file adapter.
    XPath conditions are evaluated in the receiver determination to either append the records to the main destination file or create a trigger file containing only the trigger record.
    The problem we face is that "Recordset Structure" (e.g., Row,5000) splits in chunks of 5000, and when the remaining records of the main payload number fewer than 5000 (say, 1300), those remaining 1300 lines get grouped with the trigger record and written to the trigger file instead of the actual destination file.
    For the sake of this forum I have listed a sample XML file below, representing the inbound file, with the last record (Duns = "9999") as the trigger record that marks the end of the file after splitting and appending.
    <?xml version="1.0" encoding="utf-8"?>
    <ns:File xmlns:ns="somenamespace">
    <Data>
         <Row>
              <Duns>"001001924"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Cage_Code>"3NQN1"</Cage_Code>
              <Extract_Code>"A"</Extract_Code>
         </Row>
         <Row>
              <Duns>"001001925"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Cage_Code>"3NQN1"</Cage_Code>
              <Extract_Code>"A"</Extract_Code>
         </Row>
         <Row>
              <Duns>"001001926"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Cage_Code>"3NQN1"</Cage_Code>
              <Extract_Code>"A"</Extract_Code>
         </Row>
         <Row>
              <Duns>"001001927"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Cage_Code>"3NQN1"</Cage_Code>
              <Extract_Code>"A"</Extract_Code>
         </Row>
         <Row>
              <Duns>"001001928"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Cage_Code>"3NQN1"</Cage_Code>
              <Extract_Code>"A"</Extract_Code>
         </Row>
         <Row>
              <Duns>"001001929"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Cage_Code>"3NQN1"</Cage_Code>
              <Extract_Code>"A"</Extract_Code>
         </Row>
         <Row>
              <Duns>"9999"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Cage_Code>"3NQN1"</Cage_Code>
              <Extract_Code>"A"</Extract_Code>
         </Row>
    </Data>
    </ns:File>
    In the sender file adapter I have, for test purposes, changed "Recordset Structure" to "Row,5" for the sample inbound XML file above.
    I have two XPath expressions in the receiver determination, one of which takes the recordset whose Duns = "9999" and sends it to the receiver (communication channel) that creates the trigger file.
    In my test case, the first 5 records get appended to the correct destination file, but the last two records (the 6th and 7th) get sent to the receiver channel that is only supposed to take the trigger record (the last record, with Duns = "9999").
    Destination file (this is where all the records with Duns NE "9999" are supposed to get appended):
    <?xml version="1.0" encoding="UTF-8"?>
    <R3File>
         <R3Row>
              <Duns>"001001924"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Extract_Code>"A"</Extract_Code>
         </R3Row>
         <R3Row>
              <Duns>"001001925"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Extract_Code>"A"</Extract_Code>
         </R3Row>
         <R3Row>
              <Duns>"001001926"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
               <Extract_Code>"A"</Extract_Code>
         </R3Row>
              <R3Row>
              <Duns>"001001927"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Extract_Code>"A"</Extract_Code>
         </R3Row>
              <R3Row>
              <Duns>"001001928"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Extract_Code>"A"</Extract_Code>
         </R3Row>
    </R3File>
    Trigger File:
    <?xml version="1.0" encoding="UTF-8"?>
    <R3File>
              <R3Row>
              <Duns>"001001929"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Ccr_Extract_Code>"A"</Ccr_Extract_Code>
         </R3Row>
              <R3Row>
              <Duns>"9999"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Ccr_Extract_Code>"A"</Ccr_Extract_Code>
         </R3Row>
    </R3File>
    I've tested the XPath condition in XML Spy, and it works fine. My doubts are about the "Recordset Structure" property set as "Row,5".
    Any suggestions on this will be very helpful.
    Thanks,
    Mujtaba
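
    What Mujtaba describes is inherent to fixed-count chunking: the appended trigger record always lands in the final, possibly partial chunk, together with whatever data rows are left over. The toy sketch below (plain Java, using the seven Duns values from the sample file, with chunk size 5 as in the test) reproduces the reported misrouting; one workaround to evaluate would be padding the file so the trigger record falls into a chunk of its own:

    import java.util.List;

    // Partition 7 rows into chunks of 5: chunk 1 goes to the destination
    // file, while chunk 2 carries the leftover row 001001929 *and* the
    // trigger row 9999 -- exactly the misrouting reported above.
    public class ChunkingDemo {
        public static void main(String[] args) {
            List<String> rows = List.of("001001924", "001001925", "001001926",
                                        "001001927", "001001928", "001001929", "9999");
            int chunkSize = 5;
            for (int i = 0; i < rows.size(); i += chunkSize) {
                List<String> chunk = rows.subList(i, Math.min(i + chunkSize, rows.size()));
                boolean hasTrigger = chunk.contains("9999");
                System.out.println((hasTrigger ? "trigger file <- " : "dest file    <- ") + chunk);
            }
        }
    }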


  • Photoshop elements 8 not proceeding at multiple file processing

    Hi everyone,
    I've got the following problem with Photoshop Elements 8, which I have worked with for a long time, using the multiple file processing functionality a lot. I do quite a lot of big shoots and only use the raw settings for processing. After that I let Photoshop Elements do the rest of the work: converting all my raw images into JPEGs and resizing them to prepare them for uploading. These shoots often contain more than 200 photographs.
    As of this last weekend, when I start up the multiple file processing window it opens the first file but does not proceed, which means I would have to edit all the images myself and resize them to the proper size, etc. I really don't want to do this; this functionality is one of the main reasons why I use Photoshop. Anyone got any idea why it suddenly stopped working? Anyone else experienced this and solved it?
    Thanks a lot.
    Ben

    One thing to check is the bit depth that Camera Raw is set to. Open a raw photo in the Camera Raw dialog and look along the bottom of the dialog where it says bit depth. If it says 16 bit, change it to 8 bit and press Done. Photoshop Elements sometimes won't process 16-bit files using Process Multiple Files.
    MTSTUNER

  • Empty file processing

    Hi All,
    We are using PI 7.0 with SP18. In one of our file-to-IDoc scenarios, when empty files are posted, the file is picked up and archived, but we do not get any logs in MONI. Furthermore, I couldn't find any option in the sender file adapter for empty-file processing.
    Can anyone please clarify this?

    hi,
    >>> when empty files are posted, the file is picked up and archived, but we do not get any logs in MONI.
    This is good, isn't it?
    >>> I couldn't find any option in the sender file adapter for empty-file processing.
    Either you don't have the correct SP level, or you didn't import the SAP Basis content appropriate for your SP.
    Regards,
    Michal Krawczyk
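
    If the adapter option really is unavailable at this SP level, one stop-gap (a sketch only, outside PI, with placeholder directories) is a pre-poll sweep that moves zero-byte files aside before the sender channel sees them:

    import java.io.IOException;
    import java.nio.file.*;

    // Move zero-byte files out of the polled directory into an archive
    // folder, so the sender file adapter never picks them up.
    public class EmptyFileSweep {
        public static void main(String[] args) throws IOException {
            Path inbox = Path.of("/xi/inbound");                  // placeholder
            Path archive = Path.of("/xi/inbound/empty_archive");  // placeholder
            Files.createDirectories(archive);
            try (DirectoryStream<Path> files = Files.newDirectoryStream(inbox)) {
                for (Path f : files) {
                    if (Files.isRegularFile(f) && Files.size(f) == 0) {
                        Files.move(f, archive.resolve(f.getFileName()),
                                   StandardCopyOption.REPLACE_EXISTING);
                        System.out.println("archived empty file: " + f);
                    }
                }
            }
        }
    }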

  • Java.lang.AssertionError: WSDL not found in the class file "processes

    Hi,
    I am using the WLI 10.3 workshop to build a process application. I have designed one JPD which in turn calls another process JPD, so I used the worklist process control to create the process and tried to access it. When I execute the code, I get the error message below:
    <13-Jun-2011 14:13:00 o'clock BST> <Error> <WLI> <BEA-000000> <Exception processing processes.ISPSSQMsgLisnt
    java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
    at com.bea.wli.knex.runtime.core.dispatcher.DispUnit.loadDispFile(DispUnit.java:219)
    Truncated. see log file for complete stacktrace
    java.lang.AssertionError: WSDL not found in the class file "processes.ISPSSQMsgLisnt", annotated class = processes.ISPSSQMsgLisnt
    --ClassAnnotations:
    --Method Annotations:
    --Field Annotations:
    can't continue
    at com.bea.wli.knex.runtime.jws.dispatcher.JwsDispClass.<init>(JwsDispClass.java:392)
    at com.bea.wli.bpm.runtime.JpdDispClass.<init>(JpdDispClass.java:65)
    at com.bea.wli.bpm.runtime.JpdDispClass.<init>(JpdDispClass.java:55)
    at com.bea.wli.bpm.runtime.JpdDispFile.createPrimaryDispClass(JpdDispFile.java:382)
    at com.bea.wli.knex.runtime.core.dispatcher.DispFile.<init>(DispFile.java:154)
    Truncated. see log file for complete stacktrace
    Exception processing processes.ISPSSQMsgLisnt
    java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
    at com.bea.wli.knex.runtime.core.dispatcher.DispUnit.loadDispFile(DispUnit.java:219)
    at com.bea.wli.knex.runtime.core.dispatcher.DispUnit.<init>(DispUnit.java:153)
    at com.bea.wli.knex.runtime.core.dispatcher.DispCache.ensureDispUnit(DispCache.java:628)
    at com.bea.wli.knex.runtime.core.dispatcher.DispCache.ensureDispUnitForURI(DispCache.java:1029)
    at com.bea.wli.knex.runtime.core.dispatcher.DispCache.ensureDispUnitForURI(DispCache.java:950)
    at com.bea.wli.broker.JWSSubscriber.getDispClass(JWSSubscriber.java:231)
    at com.bea.wli.broker.JWSSubscriber.getRequest(JWSSubscriber.java:184)
    at com.bea.wli.broker.JWSSubscriber.doDispatch(JWSSubscriber.java:358)
    at com.bea.wli.broker.JWSSubscriber.doDispatch(JWSSubscriber.java:348)
    at com.bea.wli.broker.SubscriptionDispatcher.doDispatch(SubscriptionDispatcher.java:87)
    at com.bea.wli.broker.MessageBroker$PrivilegedSubscriptionDispatcher.run(MessageBroker.java:179)
    at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:363)
    at weblogic.security.service.SecurityManager.runAs(Unknown Source)
    at com.bea.wli.security.authentication.AuthenticationService.runAs(AuthenticationService.java:108)
    at com.bea.wli.broker.MsgBrokerSecurityHelper.doDispatch(MsgBrokerSecurityHelper.java:231)
    at com.bea.wli.broker.MessageBroker$PrivilegedSubscriptionDispatcher.doDispatch(MessageBroker.java:165)
    at com.bea.wli.broker.MessageBroker.publishMessage(MessageBroker.java:984)
    at com.bea.wli.mbconnector.jms.JmsConnMDB.publishMBMessage(JmsConnMDB.java:343)
    at com.bea.wli.mbconnector.jms.JmsConnMDB.onMessage(JmsConnMDB.java:475)
    at weblogic.ejb.container.internal.MDListener.execute(MDListener.java:466)
    at weblogic.ejb.container.internal.MDListener.transactionalOnMessage(MDListener.java:371)
    at weblogic.ejb.container.internal.MDListener.onMessage(MDListener.java:327)
    at weblogic.jms.client.JMSSession.onMessage(JMSSession.java:4547)
    at weblogic.jms.client.JMSSession.execute(JMSSession.java:4233)
    at weblogic.jms.client.JMSSession.executeMessage(JMSSession.java:3709)
    at weblogic.jms.client.JMSSession.access$000(JMSSession.java:114)
    at weblogic.jms.client.JMSSession$UseForRunnable.run(JMSSession.java:5058)
    at weblogic.work.SelfTuningWorkManagerImpl$WorkAdapterImpl.run(SelfTuningWorkManagerImpl.java:516)
    at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201)
    at weblogic.work.ExecuteThread.run(ExecuteThread.java:173)
    Caused by: java.lang.AssertionError: WSDL not found in the class file "processes.ISPSSQMsgLisnt", annotated class = processes.ISPSSQMsgLisnt
    --ClassAnnotations:
    --Method Annotations:
    --Field Annotations:
    can't continue
    at com.bea.wli.knex.runtime.jws.dispatcher.JwsDispClass.<init>(JwsDispClass.java:392)
    at com.bea.wli.bpm.runtime.JpdDispClass.<init>(JpdDispClass.java:65)
    at com.bea.wli.bpm.runtime.JpdDispClass.<init>(JpdDispClass.java:55)
    at com.bea.wli.bpm.runtime.JpdDispFile.createPrimaryDispClass(JpdDispFile.java:382)
    at com.bea.wli.knex.runtime.core.dispatcher.DispFile.<init>(DispFile.java:154)
    at com.bea.wli.knex.runtime.jws.dispatcher.JwsDispFile.<init>(JwsDispFile.java:24)
    at com.bea.wli.bpm.runtime.JpdDispFile.<init>(JpdDispFile.java:108)
    ... 34 more

    Hi
    Are you using the WLI process control? That uses WSDL and could cause the issue.
    I also saw a couple of internal bugs, CR264315 and CR288904, on the same issue in 9.2.
    Since this is a WLI issue, could you post in the WLI newsgroup to get more answers: http://forums.bea.com/forum.jspa?forumID=2047 ?
    You can also open a BEA support case at http://support.bea.com, and a WLI support engineer will help you. Please refer to the CR numbers above in your support case.
    Thanks
    Vimala
