File processing by date

Hi all,
I configured a file-to-file scenario, and in the sender communication channel I set the file processing sequence to "By Date", which means files are processed according to their timestamp in the file system, starting with the oldest file.
While testing, I placed files in the source system with the same date but different times. However, the files are not being processed from the oldest timestamp onwards; instead they are processed in a jumbled order.
I noticed this because the files created in the target system carry a timestamp that includes milliseconds.
I want the files to be processed according to their timestamps. How can I do this?
Please help me.
Regards,
Anil.
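
For reference, a minimal Java sketch (outside PI) of what the "By Date" sequence is supposed to do: list the files, sort them by last-modified time down to the millisecond, and take the oldest first. The directory path is a placeholder.

import java.io.File;
import java.text.SimpleDateFormat;
import java.util.Arrays;
import java.util.Comparator;
import java.util.Date;

public class ByDateOrder {
    public static void main(String[] args) {
        File dir = new File("/data/source");               // placeholder source directory
        File[] files = dir.listFiles(File::isFile);
        if (files == null) return;
        // "By Date": oldest last-modified timestamp first
        Arrays.sort(files, Comparator.comparingLong(File::lastModified));
        SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss.SSS");
        for (File f : files) {
            System.out.println(fmt.format(new Date(f.lastModified())) + "  " + f.getName());
        }
    }
}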

Hi Bhavesh, I am using NFS with "By Date".
As Lalit suggested, I set the processing mode to Archive and gave the queue name "Temp". In the archive folder I can see the sequence in which the files are processed, and it matches my requirement.
Now, can I view the same file processing in the "Temp" queue? For this I think I need to configure the queues.
Please guide me on how to configure the queues and view the files in the "Temp" queue.
Thanks,
anil.

Similar Messages

  • Intra date file processing in EBS

    Hi,
    While processing an intraday file, we are unable to post the opening balance (BAI code 040) in EBS processing. I have configured the 040 external transaction and activated the 21 process type, but we are still unable to post the opening balance with the 040 value.


  • Reading file and dump data into database using BPEL process

    I have to read CSV files and insert the data into a database. To achieve this, I created an asynchronous BPEL process, added a File Adapter and associated it with a Receive activity, then added a DB Adapter and associated it with an Invoke activity. There are two Receive activities in the process; when I test through EM, only the first Receive activity completes, and the process waits on the second Receive activity. Please suggest how to proceed.
    Thanks, Manoj.

    Deepak, thanks for your reply. As per your suggestion I created a BPEL composite with the template "Define Service Later". I followed the steps below; please correct me if I am wrong or missing anything. Your help is highly appreciated.
    Step 1 - Created a File Adapter and the corresponding Receive activity (checkbox "Create Instance" is checked) with an input variable.
    Step 2 - Then, in composite.xml, dragged the web service under "Exposed Services" and wired the web service to the BPEL process.
    Step 3 - Opened the .bpel file and added the DB Adapter with the corresponding Invoke activity and created the input variable. The web service is created with Type "Service" using an existing WSDL (first option against WSDL URL). Also added an Assign activity between the Receive and Invoke activities.
    Deployed the composite to the server. When I test it manually through EM, it prompts for input such as "subElmArray Size"; I entered 1 with corresponding values for the two elements and clicked the Test Web Service button. The process completes with an error. The error is:
    Error Message:
    Fault ID
    service:80020
    Fault Time
    Sep 20, 2013 11:09:49 AM
    Non Recoverable System Fault:
    Correlation definition not registered. The correlation set definition for operation Read, process default/FileUpload18!1.0*soa_3feb622a-f47e-4a53-8051-855f0bf93715/FileUpload18, is not registered with the server. The correlation set was not defined in the process. Redeploy the process to the container.
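
    Setting the BPEL wiring aside for a moment, the data flow itself (read a CSV line, insert a row) can be stated as a plain JDBC sketch; the connection URL, table, and column names below are made up purely for illustration and are not part of the composite discussed above.

    import java.io.BufferedReader;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class CsvToDb {
        public static void main(String[] args) throws Exception {
            // Hypothetical connection details and target table, for illustration only.
            try (Connection con = DriverManager.getConnection("jdbc:oracle:thin:@//host:1521/ORCL", "user", "pw");
                 BufferedReader in = Files.newBufferedReader(Paths.get("input.csv"));
                 PreparedStatement ps = con.prepareStatement("INSERT INTO demo_rows (col1, col2) VALUES (?, ?)")) {
                String line;
                while ((line = in.readLine()) != null) {
                    String[] cells = line.split(",", -1);   // naive CSV split, no quoting support
                    ps.setString(1, cells[0]);
                    ps.setString(2, cells.length > 1 ? cells[1] : null);
                    ps.addBatch();
                }
                ps.executeBatch();                          // insert all rows in one batch
            }
        }
    }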

  • Synchronous processing of dat file using Adobe Output Server

    Hello everyone,
    I have a question regarding Adobe Output Server. Currently the Output Server runs as a Windows service named "Adobe Central Output Server" on my machine, pointing to the executable path E:\Jetform\Central\Control\jfservic.exe. It keeps running regardless of whether a .dat file is present for processing. If a .dat file is in the data folder [the location where my code writes the .dat file], it picks it up and does all the processing to produce the result as Print/Fax. But when there is no .dat file present, the service still keeps running, consuming memory and CPU.
    Currently my code drops a .dat file into the data folder and does not care what happens to that .dat file afterwards. It is the running Adobe Output Server service that keeps monitoring the data folder for any new .dat file and processes it.
    I want to stop that Windows service permanently and change the logic in my code so that, when my program drops a .dat file into the data folder, it invokes Adobe Output Server (or one of its components) to pick up that .dat file, do all the processing to generate the Print/Fax, and then exit once the processing is done. It would remain stopped until the program invokes it again to process a new .dat file.
    So basically I was wondering whether such an invocation of Adobe Output Server is possible, for synchronous processing of a .dat file from code.
    Any suggestions, ideas, or doubts about it are most welcome.
    jaY
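
    One way to approximate the on-demand behaviour jaY is asking about, assuming a command-line component of Output Server can be run per file, is to drop the .dat file and launch the external process from code, waiting for it to exit. The executable name, arguments, and data folder below are placeholders, not a documented Adobe Central command line; a Java sketch:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;

    public class OnDemandDatProcessing {
        public static void main(String[] args) throws IOException, InterruptedException {
            Path dataDir = Paths.get("E:\\Jetform\\Central\\Data");     // placeholder data folder
            Path datFile = dataDir.resolve("job1.dat");
            Files.write(datFile, "field1|field2|field3".getBytes());    // drop the .dat file

            // Placeholder command: whatever Central component actually processes a single file.
            ProcessBuilder pb = new ProcessBuilder("E:\\Jetform\\Central\\Control\\process_one.exe",
                    datFile.toString());
            pb.inheritIO();
            Process p = pb.start();
            int rc = p.waitFor();            // block until the run finishes, then the tool exits
            System.out.println("exit code: " + rc);
        }
    }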

    I faced the same thing:
    1 - You have to do something in SAP (I am looking for someone to tell me what).
    2 - Create a job and name it (ANYNAME_EMAIL).
    3 - Inside the job create the following tasks:
    a - SAPMERGER A
    b - SAPMERGER
    c - SAPEMSEND
    That's it, as simple as that.
    Please tell me if you are still facing the same problem.

  • Which background process writes data into the alert log file in Oracle

    Which background process writes data into the alert log file in Oracle?

    Hi,
    AFAIK, all the background processes can write information to the alert log file. As the file name indicates, it records alerts, so the background processes have the access rights, in terms of (DBMS) packages, to write to the alert log.
    I might be wrong, though.
    - Pavan Kumar N

  • Error message by periodic weekly: No output from the 1 file processed

    Hi there,
    For four weeks now I have had a problem with the maintenance script periodic weekly. Up to December 22nd, the script did what it should do: rebuilding the locate and whatis databases and rotating log files. From one week later, I get the error message: No output from the 1 file processed.
    Normally I use Anacron to do the job. When I noticed the problem, I tried to start the script with TinkerTool System and got the same result. Another try using the Terminal (sudo periodic weekly) also failed. The commands locate and whatis are working, and locate.updatedb and makewhatis work as well. I'm running 10.4.8; in the past I did not have such problems. Anyone with an idea or a solution?
    Thanks
    Klaus
    MacBook Pro   Mac OS X (10.4.8)  

    Hi Gary,
    here is the output you were asking for:
    Last login: Thu Jan 25 20:03:55 on console
    Welcome to Darwin!
    DeepThought:~ dirk$ sudo /private/etc/periodic/weekly/500.weekly; echo $?
    Password:
    Sorry, try again.
    Password:
    Rebuilding locate database:
    Rebuilding whatis database:
    find: /usr/local/man: No such file or directory
    makewhatis: /usr/share/man/man1/fetchmailconf.1.gz: No such file or directory
    Rotating log files: ftp.log lpr.log mail.log netinfo.log ipfw.log ppp.log secure.log
    access_log error_log
    Running weekly.local:
    Rotating psync log files:/etc/weekly.local: line 17: syntax error near unexpected token `)'
    /etc/weekly.local: line 17: `if [ -f /var/run/syslog.pid ]; then kill -HUP 0 80 79 81 0cat /var/run/syslog.pid | head -1); fi'
    2
    DeepThought:~ dirk$ ls -loe /private/etc/periodic/weekly/500.weekly
    -r-xr-xr-x 1 root wheel - 2532 Jan 13 2006 /private/etc/periodic/weekly/500.weekly
    DeepThought:~ dirk$
    It seems Roger's idea is correct: PsyncX, or rather my incomplete uninstall of it, is responsible for my problems. Should I remove the whole file weekly.local, or only remove its contents? I would prefer removing the whole file, because it was created when PsyncX was installed; the creation date matches the date I installed the app (December 25).
    Klaus
    By the way: it seems the solution to my problem is in sight, so I want to thank you all for the amazing help I got from you!

  • Inbound CSV file containing  the date in the name

    Dear all,
    I have a very small problem with the import file process.
    I have configured a new SLD with the following pattern:
    pluton\edi\B1I_CCO\In\QC_ROUT_*.TXT (this is a DSV file).
    I have defined a BizStep reading these files, but the problem is that the file name contains the date.
    I have read in the documentation that a function exists in B1i for this (page 14 of Ref2).
    So I have set:
    - Identification Method: File Name
    - Identification Parameter: substring(filename,0,7)
    - Identifier: QC_ROUT
    But it does not work.
    Do you notice something wrong in it?
    Thank you for your help.

    Hi Damien,
    Please find more explanation of using the "Identification Parameter" for file inbound in the following segment:
    Identification Method = "File Name"
    The scenario step is subscribed to the incoming file if the name of the file (without extension) equals the string in the field Identifier. You can use the field Identification Parameter to define a substring. In this field you can define the position (e.g. 2), which makes B1if compare the filename from this position onwards only, or the position and length separated by a comma (e.g. 2,3), which makes B1if compare only that particular substring. Sample: the file "test.xml" will trigger the scenario step in the following cases:
    Identifier = "test"
    Identifier = "est" and Identification Parameter = "2"
    Identifier = "es" and Identification Parameter = "2,2"
    Please enter the following:
    - Identification Method: File Name
    - Identification Parameter: 1,7
    - Identifier: QC_ROUT
    Deactivate and reactivate your scenario and it will work as expected.
    Best regards
    Bastian
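
    For clarity, the matching rule Bastian describes (1-based position, optionally followed by a length) can be expressed as a small Java sketch; the sample filename with a date suffix is invented for illustration:

    public class FileNameIdentification {
        // Mirrors the described rule: "pos" compares the rest of the name from that
        // 1-based position, "pos,len" compares exactly len characters from there.
        static boolean matches(String nameWithoutExt, String identifier, String param) {
            if (param == null || param.isEmpty()) {
                return nameWithoutExt.equals(identifier);
            }
            String[] parts = param.split(",");
            int pos = Integer.parseInt(parts[0].trim()) - 1;        // 1-based -> 0-based
            String candidate = (parts.length > 1)
                    ? nameWithoutExt.substring(pos, pos + Integer.parseInt(parts[1].trim()))
                    : nameWithoutExt.substring(pos);
            return candidate.equals(identifier);
        }

        public static void main(String[] args) {
            System.out.println(matches("test", "test", null));                 // true
            System.out.println(matches("test", "est", "2"));                   // true
            System.out.println(matches("test", "es", "2,2"));                  // true
            System.out.println(matches("QC_ROUT_20240101", "QC_ROUT", "1,7")); // true
        }
    }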

  • Huge size file processing in PI

    Hi Experts,
    1. I have seen blogs which explain processing huge files, for File and SFTP:
    SFTP Adapter - Handling Large File
    File/FTP Adapter - Large File Transfer (Chunk Mode)
    Here we also have the constraint that we cannot do any mapping, and it has to be EOIO QoS.
    Would it be possible to process a 1 GB file and still do mapping? Which hardware factors decide whether the system is capable of processing a large file with mapping?
    Is it the number of CPUs, the application servers (Java and ABAP), the number of server nodes, the Java heap size?
    If my system is able to process a 10 MB file with mapping, there must be something that determines that capability.
    This kind of huge file processing only fits some scenarios. For example, a proxy-to-SOAP scenario with a 1 GB message exchange does not make sense; I have no idea whether any web service would handle such a huge file.
    2. Assume PI is able to process a 50 MB message with mapping. What options do we have in PI to increase the performance?
    I have come across these two points many times during the design phase of my project and am looking for your suggestions.
    Thanks.

    Hi Ram,
    You have not mentioned what sort of integration it is; you just said FILE, so I presume it is a file-to-file scenario. In that case, on PI 7.11 I am able to process a 100 MB file (more than 1 million records) with mapping (the file is a delta extract from SAP ECC in AL11). In the sender file adapter I chose "Recordset per Message" and processed the messages in pieces. Please note this is not the actual standard chunk mode: the initial run of the sender adapter loads the 100 MB file into memory, and after that messages are sent to the Integration Engine according to the recordset per message. If the file is larger than 100 MB, PI Java starts bouncing because of memory issues. Later we redesigned the interface from proxy to file (async), and the proxy sends the messages to PI in chunks; in a single run it sends 5000 messages.
    For PI 7.11 I believe there is a memory limitation per cluster node: each cluster node can't have more than 5 GB, and processing also depends on the number of Java application servers. I think this is no longer the limitation from PI 7.30 onwards, where we can use 16 GB per cluster node.
    "This kind of huge file processing only fits some scenarios. For example, a proxy-to-SOAP scenario with a 1 GB message exchange does not make sense; I have no idea whether any web service would handle such a huge file."
    If I understand this correctly: for async communication, 1 GB of data can certainly be sent to a web service, but the messages from the proxy should be sent to PI in batches. The same idea may work for sync communication as well, but timeouts in the receiver channel will then be the next issue. Increasing timeouts globally is not best practice, but if you are on 7.30 or a later version you can increase the timeouts specifically for your scenario.
    To handle a 50 MB file size, make sure you have additional Java application servers. I don't remember exactly how many app servers we had in my case to handle the 100 MB file.
    Thanks
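
    The "recordset per message" idea above boils down to never holding the whole file: read it in bounded batches and hand each batch on. A rough Java sketch of that batching pattern, with the batch size and the sending step as placeholders:

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.ArrayList;
    import java.util.List;

    public class BatchedFileReader {
        static final int BATCH_SIZE = 5000;   // records per message, as in the reply above

        public static void main(String[] args) throws IOException {
            try (BufferedReader in = Files.newBufferedReader(Paths.get("big_extract.csv"))) {
                List<String> batch = new ArrayList<>(BATCH_SIZE);
                String line;
                while ((line = in.readLine()) != null) {
                    batch.add(line);
                    if (batch.size() == BATCH_SIZE) {
                        send(batch);                        // one "message" per 5000 records
                        batch = new ArrayList<>(BATCH_SIZE);
                    }
                }
                if (!batch.isEmpty()) send(batch);          // remainder
            }
        }

        static void send(List<String> records) {
            // Placeholder for whatever forwards the chunk (proxy call, web service, ...).
            System.out.println("sending " + records.size() + " records");
        }
    }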

  • File Splitting for Large File processing in XI using EOIO QoS.

    Hi
    I am currently working on a scenario to split a large file (700 MB) using the sender file adapter "Recordset Structure" property (e.g. Row,5000). As the file is split and mapped, the chunks are appended to a destination file. In an example scenario, a 700 MB file comes in (say with 20000 records) and the destination file should end up with 20000 records.
    To ensure no records are missed while passing through XI, EOIO QoS is used. A trigger record is appended to the incoming file (the trigger record structure is the same as the main payload recordset) using a UNIX shell script before the file is read by the sender file adapter.
    XPath conditions are evaluated in the receiver determination to either append the records to the main destination file or create a trigger file containing only the trigger record.
    The problem we are facing is that "Recordset Structure" (e.g. Row,5000) splits into chunks of 5000, and when the remaining records of the main payload number fewer than 5000 (say 1300), those remaining 1300 lines get grouped with the trigger record and written to the trigger file instead of the actual destination file.
    For the sake of this forum I have listed below a sample XML file representing the inbound file, with the last record (Duns = "9999") as the trigger record that marks the end of the file after splitting and appending.
    <?xml version="1.0" encoding="utf-8"?>
    <ns:File xmlns:ns="somenamespace">
    <Data>
         <Row>
              <Duns>"001001924"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Cage_Code>"3NQN1"</Cage_Code>
              <Extract_Code>"A"</Extract_Code>
         </Row>
         <Row>
              <Duns>"001001925"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Cage_Code>"3NQN1"</Cage_Code>
              <Extract_Code>"A"</Extract_Code>
         </Row>
         <Row>
              <Duns>"001001926"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Cage_Code>"3NQN1"</Cage_Code>
              <Extract_Code>"A"</Extract_Code>
         </Row>
         <Row>
              <Duns>"001001927"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Cage_Code>"3NQN1"</Cage_Code>
              <Extract_Code>"A"</Extract_Code>
         </Row>
         <Row>
              <Duns>"001001928"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Cage_Code>"3NQN1"</Cage_Code>
              <Extract_Code>"A"</Extract_Code>
         </Row>
         <Row>
              <Duns>"001001929"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Cage_Code>"3NQN1"</Cage_Code>
              <Extract_Code>"A"</Extract_Code>
         </Row>
         <Row>
              <Duns>"9999"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Cage_Code>"3NQN1"</Cage_Code>
              <Extract_Code>"A"</Extract_Code>
         </Row>
    </Data>
    </ns:File>
    In the sender file adapter I have, for test purposes, set the "Recordset Structure" to "Row,5" for the sample inbound XML file above.
    I have two XPath expressions in the receiver determination to take the last recordset with Duns = "9999" and send it to the receiver (communication channel) that creates the trigger file.
    In my test case the first 5 records get appended to the correct destination file, but the last two records (the 6th and 7th) get sent to the receiver channel that is only supposed to take the trigger record (the last record with Duns = "9999").
    Destination file (this is where all the records with Duns NE "9999" are supposed to be appended):
    <?xml version="1.0" encoding="UTF-8"?>
    <R3File>
         <R3Row>
              <Duns>"001001924"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Extract_Code>"A"</Extract_Code>
         </R3Row>
         <R3Row>
              <Duns>"001001925"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Extract_Code>"A"</Extract_Code>
         </R3Row>
         <R3Row>
              <Duns>"001001926"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
               <Extract_Code>"A"</Extract_Code>
         </R3Row>
              <R3Row>
              <Duns>"001001927"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Extract_Code>"A"</Extract_Code>
         </R3Row>
              <R3Row>
              <Duns>"001001928"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Extract_Code>"A"</Extract_Code>
         </R3Row>
    </R3File>
    Trigger File:
    <?xml version="1.0" encoding="UTF-8"?>
    <R3File>
              <R3Row>
              <Duns>"001001929"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Ccr_Extract_Code>"A"</Ccr_Extract_Code>
         </R3Row>
              <R3Row>
              <Duns>"9999"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Ccr_Extract_Code>"A"</Ccr_Extract_Code>
         </R3Row>
    </R3File>
    I've tested the XPath conditions in XML Spy and they work fine. My doubts are about the "Recordset Structure" property set to "Row,5".
    Any suggestions on this will be very helpful.
    Thanks,
    Mujtaba
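
    The behaviour described above can be reproduced outside XI in a few lines: partition the rows into groups of 5 and see where the trigger record (Duns "9999") lands. It always ends up in the final, partial group together with the leftover rows, which is exactly what then gets routed to the trigger file. A minimal Java illustration:

    import java.util.Arrays;
    import java.util.List;

    public class RecordsetSplit {
        public static void main(String[] args) {
            // 6 data rows plus the appended trigger row, as in the sample file above.
            List<String> duns = Arrays.asList("001001924", "001001925", "001001926",
                    "001001927", "001001928", "001001929", "9999");
            int chunk = 5;   // "Row,5" in the sender channel
            for (int i = 0; i < duns.size(); i += chunk) {
                List<String> group = duns.subList(i, Math.min(i + chunk, duns.size()));
                System.out.println("message " + (i / chunk + 1) + ": " + group);
            }
            // message 1: [001001924 .. 001001928]
            // message 2: [001001929, 9999]  <- trigger grouped with the leftover record
        }
    }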

    Hi Debnilay,
    We do have a 64-bit architecture and we still have the file processing problem. Currently we are splitting the file into smaller chunks and processing them, but we want to process the file as a whole.
    Thanks
    Steve

  • File adapter: file processing in correct sequence

    Hello All,
    We have an inbound file scenario.
    A file will be picked up, processed by XI, and sent to an SAP system.
    All the files have to be processed in the same sequence.
    Once a file is picked up and processed in XI, the data is sent to the receiving SAP system through a proxy call.
    But if there is an error while processing the received data in the receiving system (because of duplicate data or any other application-related error specific to the receiving application), how can we stop XI from picking up and processing the next file?
    The second file should be picked up and processed by XI only after the first file has been successfully processed by the receiving system.
    How can I ensure this?
    Please let me know...
    Many Thanks
    Chandra

    Hi Chandra
    If you are on NFS then you can enable the processing sequence "By Name" or "By Date" in the sender file adapter. Along with this you need EOIO; it allows messages to be picked up in sequence and processed in a queue. If one of the files fails, the queue goes into error.
    To avoid more than one thread processing files, you can use the additional parameter "clusterSyncMode"; refer to Note 801926.
    Besides the above, another solution can be a sync interface that acts as an ACK from the receiving system before you process the next message. BPM can also be useful if you use fault messages and capture the errors before processing the next message.
    Hope the above info helps you arrive at a final approach.
    Thanks
    Gaurav

  • File processing issue: File -> PI -> RFC

    Dear Experts,
    I am stuck with a source file while building the scenario FTP (File) -> PI -> RFC.
    I need to take the data from a CSV file and simply push the first row's data to the CHAR field and the second row's data to the MEAN field in the RFC.
    FCC structure:
    The source data is simple, but I am stuck on some of the logic. Kindly guide me on how to process this data using FCC, or whether it can be handled in mapping. Also please check whether I have set up the source structure correctly.
    Best Regards,
    Monikandan

    Hi Amit,
    Exactly: I need to send the entire first row's data to the CHAR description field and the second row's data to the MEAN field.
    When I test it in message mapping with the source file, I get the output below.
    The exact output I need is: the "Process -001,Date" row data goes to the CHAR field and the "10000,20-02-05" row data goes to the MEAN field in the RFC.
    How do I split it? Kindly guide.
    Best Regards,
    Monikandan.
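
    For what it's worth, the row-to-field split itself is tiny once stated outside PI: line 1 of the CSV feeds the CHAR description and line 2 feeds the MEAN value. A small Java sketch of that split, with the file name as a placeholder:

    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.List;

    public class TwoRowSplit {
        public static void main(String[] args) throws Exception {
            List<String> lines = Files.readAllLines(Paths.get("input.csv"));
            String charField = lines.get(0);   // e.g. "Process -001,Date"  -> CHAR description
            String meanField = lines.get(1);   // e.g. "10000,20-02-05"     -> MEAN value
            System.out.println("CHAR = " + charField);
            System.out.println("MEAN = " + meanField);
        }
    }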

  • File processing in XI

    Hi all,
    I have a question:
    Can XI process records in a file with format A below, and map the fields to an XML format B?
    Format A
    ========
    <Header1>
    <Body1_1>
    <Body1_2>
    <Body1_3>
    <Body1_4>
    <Trailer2>
    <Header2>
    <Body2_1>
    <Body2_2>
    <Body2_3>
    <Body2_4>
    <Trailer2>
    Example of data in format A:
    20070125151000Record1
    100|200|300|400
    1|2|3|0
    10|20|30|40
    10111|20222|30333|40444
    xxxx4
    20070125152000Record2
    100|200|300|400
    1|2|3|0
    10|20|30|40
    10111|20222|30333|40444
    yyyy4
    Each logical record in Format A will be transformed into Format B, and each mapped message will be sent to an inbound channel (e.g. SOAP).
    Format B
    ========
    <Record>
       <Field1>100</Field1>
       <Field2>200</Field2>
       <Field3>300</Field3>
       <Field4>1</Field4>
       <Field5>2</Field5>
       <Field6>3</Field6>
       <Field7>10</Field7>
       <Field8>20</Field8>
       <Field9>30</Field9>
    </Record>
    If it can be done, please point me to one such example.
    Thanks.
    Ron
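
    To make the mapping target concrete, here is a small Java sketch (independent of XI and of file content conversion) that takes one logical record of Format A from the example above and produces the nine Format B fields, i.e. the first three values of each of the first three body lines:

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;

    public class FormatAToB {
        public static void main(String[] args) {
            // One logical record of Format A from the example: header, 4 body lines, trailer.
            String[] record = {
                "20070125151000Record1",
                "100|200|300|400",
                "1|2|3|0",
                "10|20|30|40",
                "10111|20222|30333|40444",
                "xxxx4"
            };
            // Format B takes the first three values of each of the first three body lines.
            List<String> fields = new ArrayList<>();
            for (int body = 1; body <= 3; body++) {
                fields.addAll(Arrays.asList(record[body].split("\\|")).subList(0, 3));
            }
            StringBuilder xml = new StringBuilder("<Record>\n");
            for (int i = 0; i < fields.size(); i++) {
                xml.append("   <Field").append(i + 1).append(">")
                   .append(fields.get(i))
                   .append("</Field").append(i + 1).append(">\n");
            }
            xml.append("</Record>");
            System.out.println(xml);
        }
    }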

    Hi,
    Yes, it can be done by using file content conversion.
    Check these links:
    /people/shabarish.vijayakumar/blog/2006/02/27/content-conversion-the-key-field-problem
    /people/sap.user72/blog/2005/01/06/how-to-process-csv-data-with-xi-file-adapter
    /people/arpit.seth/blog/2005/06/02/file-receiver-with-content-conversion
    Hope it will be helpful.
    Cheers,
    Prasanthi.

  • File Processing in Forms

    Help!
    I need to write a Forms 6 application that will read and process a data file consisting of a stream of characters (i.e. it does not contain CR-terminated lines).
    I have looked at using the text_io package, but the only function it provides to read data is get_line, which reads a CR-terminated line of text.
    Any ideas on how to do this? Hopefully without having to resort to Pro*C!
    Thanks.
    - Richard.

    Hi Richard,
    You could try the CLOB datatype provided in the Oracle 8 database: write a stored package procedure on the server that reads the file, parses it with Oracle built-ins (DBMS_LOB), and fills the data into a PL/SQL table, then pass the PL/SQL table as an IN OUT parameter to Oracle Forms. From there you can loop over the PL/SQL table to populate the records in your form, or use it for whatever purpose.
    Regards
    Mohammed R.Qurashi
    Quote (originally posted by Richard Chadd):
    Unfortunately this would only work for files of up to 4000 bytes.
    I did try to read the file as a sequence of 80-character blocks by declaring the oneline variable as varchar2(80), but then I get a VALUE_ERROR exception if the file is larger than 80 bytes.
    If I define oneline as LONG I can read an entire file of up to 32 KB into the variable and process it, but when I try to process files larger than 32 KB I get the error ORA-302000.
    Unfortunately I will need to process files larger than 80 KB in size.
    Any other ideas would be great?
    Thanks
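
    Setting Forms aside for a moment, the block-read idea the thread is circling (read fixed-size character blocks rather than CR-terminated lines, with no overall size cap) looks like this in Java; the file name and the per-block handling are placeholders:

    import java.io.IOException;
    import java.io.Reader;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    public class BlockRead {
        public static void main(String[] args) throws IOException {
            char[] buf = new char[80];           // fixed-size block, independent of line breaks
            try (Reader in = Files.newBufferedReader(Paths.get("stream.dat"))) {
                int n;
                while ((n = in.read(buf)) != -1) {
                    process(new String(buf, 0, n));   // handle one block at a time
                }
            }
        }

        static void process(String block) {
            // Placeholder for the record parsing done on each block.
            System.out.println("block of " + block.length() + " chars");
        }
    }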

  • Open and move files by date

    How do I open files ordered by date with button 1, and move files ordered by date with button 2?
    Thanks

    Try something like this
    Imports System.IO
    Imports System.Linq
    Public Class Form1
        Const oldFolder As String = "c:\temp"
        Const newFolder As String = "c:\temp1"
        Dim count As Integer = 0
        Dim oldFiles As String()
        Private Sub Button3_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button3.Click
            'open the next file in date order (click Button4 first to fill oldFiles)
            If oldFiles IsNot Nothing AndAlso count < oldFiles.Length Then
                Dim ps2 As Process = Process.Start(oldFiles(count))
                count += 1
            End If
        End Sub
        Private Sub Button4_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button4.Click
            'get today's files, ordered by last write time (oldest first)
            Dim info As New DirectoryInfo(oldFolder)
            oldFiles = info.GetFiles("*.jpg").AsEnumerable() _
                .Where(Function(x) x.LastWriteTime.Date = Now.Date) _
                .OrderBy(Function(x) x.LastWriteTime) _
                .Select(Function(x) x.FullName).ToArray()
            count = 0
        End Sub
    End Class
    jdweng
    Joel, nothing happens after running your code:
    Imports System.IO
    Public Class Form1
    Const oldFolder As String = "C:\Users\hwai\Desktop\Capture"
    Const newFolder As String = "C:\Users\hwai\Desktop\ New folder(3)"
    Dim count As Integer = 0
    Dim oldFiles As String()
    Private Sub Button1_Click(sender As Object, e As EventArgs) Handles Button1.Click
    'get files
    Dim info As New DirectoryInfo(oldFolder)
    oldFiles = info.GetFiles("*.jpg").AsEnumerable() _
    .OrderBy(Function(x) x.LastWriteTime) _
    .Where(Function(x) x.LastWriteTime.Date = Now.Date) _
    .Select(Function(x) x.FullName).ToArray()
    count = 0
    End Sub
    End Class
    Thanks

  • File Processing Suggestion

    Hi,
    My source system will create a file ULYY.dat and start appending to the same file within seconds.
    XI needs to poll the files every minute.
    The design strategy is that the source system will append the records if it finds the file; otherwise it will create a new file with the same filename ULYY.
    The current design we have decided on is that XI first picks up the file and renames it before processing, and later archives it. This way XI can guarantee that it does not process partial data.
    Is this a good design strategy, or is there another way?
    I was wondering what would happen if the source system tries to append records while XI is renaming the file. Since everything happens within seconds, I am unsure what a good design approach would be.

    Hi Krish,
    Suppose the file that PI currently picks up has the extension .dat. Whenever PI sees a new file with extension .dat in the directory, it will pick it up, no matter whether it is a complete file or one that is still being written by the source process. So let the source create the file with extension .tmp (and keep updating it), and only when the file content has been completely transferred (i.e. the complete file has been created) rename it from .tmp to .dat.
    This way PI will not pick up an incomplete file. Another approach is to keep creating the file in some other directory and, when it is completely created, move it into the polling directory of PI (a scheduled batch program can be used for this).
    Thanks,
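
    The .tmp-then-rename pattern from the reply, sketched in Java from the producer's side (paths are placeholders): write everything under a temporary name the poller ignores, then rename it to the polled name in one step, so the adapter only ever sees complete files.

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.nio.file.StandardCopyOption;

    public class SafeDrop {
        public static void main(String[] args) throws IOException {
            Path pollDir = Paths.get("/data/xi/in");          // directory XI polls for *.dat
            Path tmp = pollDir.resolve("ULYY.tmp");           // ignored by the sender channel
            Files.write(tmp, "record1\nrecord2\n".getBytes());

            // Rename only after the content is complete; within the same directory this is
            // atomic on most filesystems, so the poller never sees a half-written ULYY.dat.
            Files.move(tmp, pollDir.resolve("ULYY.dat"), StandardCopyOption.ATOMIC_MOVE);
        }
    }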
