Issue in lockbox while processing multiple files at a time: Tcode FLBP

Hi Experts,
I have an issue while executing program RFEBLB30 (transaction FLBP) with a variant that contains more than one file. The variant contains two files: one file contains details of a single cheque, the other contains multiple cheques.
The final output shows wrong values for the total advice and cheque amounts.
When we process a single file at a time (whether the file has single or multiple cheque details), we do not face any issue.
Thanks & Regards,
Veera.

It seems there is no standard solution for this.
Why do you have two files? Are they two different batches? Are they from the same day? Can you ask the bank to bundle them? Can you run FLBP twice a day to pick up one file only?
In FLBP the standard selection fields available are:
- Destination / Origin
- Lockbox
- Date
You could also configure a dummy lockbox and use an exit to change it when you process the file, as sketched below.
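As a rough, hedged illustration of that dummy-lockbox idea, the sketch below only shows the kind of remapping that could sit inside whatever lockbox user exit or BAdI your release offers. The report name, the structure and both lockbox numbers are placeholders, not confirmed SAP objects; check the actual exit interface in your system before reusing the idea.

REPORT z_lockbox_exit_sketch.

* Stand-in for the lockbox header fields the exit would receive;
* replace it with the actual interface of your exit.
TYPES: BEGIN OF ty_lockbox_header,
         destination TYPE c LENGTH 10,
         origin      TYPE c LENGTH 10,
         lockbox     TYPE c LENGTH 7,
       END OF ty_lockbox_header.

DATA gs_header TYPE ty_lockbox_header.

* Small demo: a file imported under the dummy lockbox is remapped.
gs_header-lockbox = '9999999'.
PERFORM remap_dummy_lockbox CHANGING gs_header.
WRITE: / 'Lockbox after remap:', gs_header-lockbox.

* Remap a dummy lockbox key to the real one, so each physical file can
* be selected and processed separately in FLBP.
FORM remap_dummy_lockbox CHANGING cs_header TYPE ty_lockbox_header.
  IF cs_header-lockbox = '9999999'.      " hypothetical dummy lockbox
    cs_header-lockbox = '0086001'.       " hypothetical real lockbox
  ENDIF.
ENDFORM.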

Similar Messages

  • Problem while processing large files

    Hi
    I am facing a problem while processing large files.
    I have a file of around 72 MB with more than 100,000 records. XI is able to pick up the file if it has 30,000 records. If the file has more than 30,000 records, XI still picks up the file (and deletes it once picked), but I don't see any information under SXMB_MONI, neither an error nor a successful or processing status. It simply picks up and ignores the file. If I process these records separately, it works.
    How do I process this file? Why is it simply ignoring the file? How do I solve this problem?
    Thanks & Regards
    Sowmya.

    Hi,
    XI picks up the file based on its maximum processing limit as well as the memory and resource consumption of the XI server.
    Processing a 72 MB file is rather heavy: it increases the memory utilization of the XI server, which may fail once that limit is reached.
    You should divide the file into small chunks and allow multiple instances to run. It will be faster and will not create any problem; a rough sketch follows the references below.
    Refer
    SAP Network Blog: Night Mare-Processing huge files in SAP XI
    /people/sravya.talanki2/blog/2005/11/29/night-mare-processing-huge-files-in-sap-xi
    /people/michal.krawczyk2/blog/2005/11/10/xi-the-same-filename-from-a-sender-to-a-receiver-file-adapter--sp14
    Processing huge file loads through XI
    File Limit -- please refer to SAP note: 821267 chapter 14
    Thanks
    swarup
    Edited by: Swarup Sawant on Jun 26, 2008 7:02 AM
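    As a rough, hedged sketch of the chunking idea (my own illustration, not taken from the blogs above): the report below splits a large flat file on the application server into smaller files with a fixed number of records each, so every chunk can be picked up as its own message. The report name, the paths and the chunk size are assumptions.
    REPORT z_split_flat_file_sketch.

    CONSTANTS gc_size TYPE i VALUE 10000.                    " assumed records per chunk

    DATA: gv_infile  TYPE string VALUE '/tmp/big_input.txt', " assumed input path
          gv_outfile TYPE string,
          gv_line    TYPE string,
          gv_count   TYPE i,
          gv_chunk   TYPE i VALUE 1,
          gv_chunk_c TYPE c LENGTH 6.

    OPEN DATASET gv_infile FOR INPUT IN TEXT MODE ENCODING DEFAULT.
    CHECK sy-subrc = 0.

    " Open the first chunk file
    gv_chunk_c = gv_chunk.
    CONDENSE gv_chunk_c NO-GAPS.
    CONCATENATE '/tmp/chunk_' gv_chunk_c '.txt' INTO gv_outfile.
    OPEN DATASET gv_outfile FOR OUTPUT IN TEXT MODE ENCODING DEFAULT.

    DO.
      READ DATASET gv_infile INTO gv_line.
      IF sy-subrc <> 0.
        EXIT.                            " end of input file
      ENDIF.
      IF gv_count >= gc_size.            " current chunk is full, start a new one
        CLOSE DATASET gv_outfile.
        gv_chunk = gv_chunk + 1.
        gv_count = 0.
        gv_chunk_c = gv_chunk.
        CONDENSE gv_chunk_c NO-GAPS.
        CONCATENATE '/tmp/chunk_' gv_chunk_c '.txt' INTO gv_outfile.
        OPEN DATASET gv_outfile FOR OUTPUT IN TEXT MODE ENCODING DEFAULT.
      ENDIF.
      TRANSFER gv_line TO gv_outfile.
      gv_count = gv_count + 1.
    ENDDO.

    CLOSE DATASET gv_outfile.
    CLOSE DATASET gv_infile.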

  • PI FTP: Process 1 file at a time until it is successfully processed at the target system

    Dear Experts,
    As the title says, I am required to process one file at a time and only process the next file after the previous one has been successfully processed at the target system. The interface scenario is asynchronous FTP to ABAP proxy.
    It is similar with the following link:
    File adapter to pick a single file
    I think there could be 2 feasible solutions:
    1. Configure EOIO at the sender CC. According to this link:
         Queues for Asynchronous Message Processing (SAP Library - SAP NetWeaver Exchange Infrastructure)
         It is stated that "Once the message has been processed successfully in the target system, the Integration Engine executes an implicit database commit."
         In the case of asynchronous FTP to ABAP proxy, will the PI wait for the ABAP proxy to finish processing the file before sending the next one?
    2. Configure locking in the ABAP program in ECC. I could ask the ABAPer to create a locking mechanism to lock the document before it is posted via the BAPI. If the reference document is locked, it would retry every second, up to a certain number of times, before giving up (see the sketch after this post).
    Which one actually fits the requirement?
    Thank you,
    Suwandi C.
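    For option 2, here is a minimal, hedged sketch of the retry-around-a-lock idea. ENQUEUE_EZ_REFDOC / DEQUEUE_EZ_REFDOC are hypothetical enqueue modules for an assumed custom lock object; substitute the modules generated for your own lock object and plug in the real posting BAPI.
    DATA: lv_refdoc TYPE c LENGTH 10 VALUE '4500000001',  " assumed reference document number
          lv_locked TYPE c LENGTH 1.

    " Try to lock the reference document, retrying once per second, ten times.
    DO 10 TIMES.
      CALL FUNCTION 'ENQUEUE_EZ_REFDOC'   " hypothetical lock module
        EXPORTING
          refdoc         = lv_refdoc
        EXCEPTIONS
          foreign_lock   = 1
          system_failure = 2
          OTHERS         = 3.
      IF sy-subrc = 0.
        lv_locked = 'X'.
        EXIT.                             " lock obtained, leave the retry loop
      ENDIF.
      WAIT UP TO 1 SECONDS.               " someone else holds the lock, wait and retry
    ENDDO.

    IF lv_locked = 'X'.
      " ... call the posting BAPI here, COMMIT WORK, then DEQUEUE_EZ_REFDOC ...
    ELSE.
      " ... give up and raise an error so the message can be reprocessed later ...
    ENDIF.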

    Dear Experts,
    Sorry, just one more question I am curious about.
    In the NFS adapter we have the option to set the processing sequence by Name / Date.
    In FTP, will it only be by Date?
    According to this link:
    https://help.sap.com/saphelp_nw73/helpdata/en/f9/17888f490846a9972628525cc28aac/content.htm
    "Messages are delivered with the same queue names (supplied by the application) in the same sequence that they were sent from the sender system"
    If it is in the same sequence sent by the sender, I guess it is in date order for FTP. Is that correct?
    Thank you,
    Suwandi C.

  • Processing one file at a time in BizTalk

    We are changing the way we're processing files: where we used to receive hundreds of small files throughout the day, we are now going to have these files batched into larger files that get dropped into our incoming BizTalk directory. I'm concerned about having the server pick up multiple large files at one time. Is there a way to tell a BizTalk application to only process one file at a time, and not to pick up the next file until the previous one has finished? Note that I do not have a way to build this into the orchestration, as this is a BizTalk application that is actually installed by our Trizetto (QNXT) product. It uses BizTalk for XML serialization.
    Thanks.

    I wouldn't worry about the size of the messages unless they are approaching 1 GB. BizTalk can process roughly 400 msgs/sec. Also, if you're worried about processing multiple large files, try load balancing. With BizTalk it is quite "easy" to load balance: in a nutshell, you have to have the DLL of the application installed on all the servers you want to be part of the load balancing, and BizTalk takes care of the rest. Your large files would more than likely be split up and processed on different servers.
    http://msdn.microsoft.com/en-us/library/aa578057(v=bts.80).aspx
    The link above is for high availability but can also be used to get a better understanding of load balancing. Also, if you use de-batching to split up the 3000-4000 records, that would help with processing them. If an orchestration receives a file with 4000 records in it, that instance alone processes the whole file, whereas if you split up the messages, each message gets its own instance of the orchestration, which leads to better performance. The only drawback is that it takes more resources, so your environment needs to be fairly large. Also, if one server is busy, another server with the same DLL can take the de-batched messages and process them.
    Here is a link to my blog post about de-batching. It is very basic but gives a good base for de-batching.
    http://camartin.azurewebsites.net/post/BizTalk-Debatching

  • Issue while processing xml file

    Hi guys -
    I am getting the error "7000 : null : com.sunopsis.sql.l: Oracle Data Integrator Timeout : connection with URL jdbc:snps:xml ......" while processing data from an XML file which is located in a Linux directory. I have changed the timeout parameter from 30 to 500 in my client and also modified userpref.xml as suggested by many in the forums, but to no avail. When I check the failed step, it shows that it times out after 30 seconds. The same process works fine from my local Windows machine.
    Could you please help me overcome this problem?
    Thanks

    Hi
    You can edit the userpref.xml file residing inside the bin directory of ODI_HOME (the oracledi folder). You need to edit the field named "Value" (e.g. from [30] to [60]), then restart the application and the ODI Agent.
    <Object class="com.sunopsis.dwg.dbobj.SnpUserParam">
      <Field name="Key" type="java.lang.String"><![CDATA[DEFAULT_REPOSITORY_CONNECTION_TIMEOUT]]></Field>
      <!-- "Value" is the field to increase, e.g. from 30 to 60 -->
      <Field name="Value" type="java.lang.String"><![CDATA[30]]></Field>
      <Field name="Label" type="java.lang.String"><![CDATA[Oracle Data Integrator Timeout]]></Field>
      <Field name="Type" type="java.lang.String"><![CDATA[com.sunopsis.graphical.userparam.SnpsTextFieldEditor]]></Field>
      <Field name="Help" type="java.lang.String"><![CDATA[Timeout used by Oracle Data Integrator for the database connections.]]></Field>
      <Field name="Visible" type="boolean"><![CDATA[true]]></Field>
      <Field name="Updatable" type="boolean"><![CDATA[true]]></Field>
      <Field name="Position" type="int"><![CDATA[4]]></Field>
    </Object>
    Or increase your machine's virtual memory.
    Regards,
    Phanikanth

  • Error while processing csv file

    Hi,
    I get these messages while processing a CSV file to load its content into the database:
    [SSIS.Pipeline] Warning: Warning: Could not open global shared memory to communicate with performance DLL;
     data flow performance counters are not available.  To resolve, run this package as an administrator, or on the system's console.
    [SSIS.Pipeline] Warning: The output column "Copy of Column 13" (842) on output "Data Conversion Output" (798) and component
     "Data Conversion" (796) is not subsequently used in the Data Flow task. Removing this unused output column can increase Data Flow task performance.
    Can someone please help me with this?
    With Regards
    litu Here

    Hello Litu,
    Are you using an OLE DB Source and Destination?
    Can you change the source and destination provider to ADO.NET and check if the issue still persists?
    Mark this post as "Answered" if this addresses your question. 
    Regards,
    Don Rohan [MSFT]

  • Exception thrown while processing a PDF file with composite fonts using the Adobe PDF Library (v9.1)

    Hi All,
    I have an issue with processing composite fonts with the Adobe PDF Library (v9.1).
    While processing a PDF file that has composite fonts, the PDF Library throws an exception.
    The API which threw the exception is PDPageAcquirePDEContent(). In my code I call PDDocGetNumPages() and PDDocAcquirePage() before this API, and all those functions succeeded. In the HANDLER, using ASGetErrorString(), I got the exception message "The encoding (CMap) specified by a font is missing."
    Now, coming to the input file (which is also attached), this document has three different composite fonts (details are given below):
    Font Name : TicketBold, Bold(Embedded)
    Font Type : TrueType (CID)
    Encoding : Identity-H
    Font Name : Times-Roman (Embedded)
    Font Type : Type 1 (CID)
    Encoding : Identity-H
    Font Name : TimesNewRomanPSMT (embedded)
    Font Type : TrueType(CID)
    Encoding: Identity-H
    If I convert all the composite fonts to outlines using PitStop before processing, it works fine.
    So my question is whether the PDF Library does not support composite fonts (which I don't think is the case) or whether I need to do special handling for these kinds of fonts in my application (which I strongly believe). If it is the latter, please let me know how to handle it in my application.
    thanks in advance
    best regards
    ~jafeel

    Hi Leonard,
    Thanks for your reply. May I ask which sample of the PDF Library you used to test my scenario?
    One question I would like to ask before filing a formal issue with Adobe: does this issue have anything to do with the initialization of the PDF Library?
    What I mean is that when we call PDFLInit() we pass a PDFLDataRec structure which is initialized with various paths to the font folders, CMap folders and Unicode folders. If I miss any of these folders, will it cause this issue?
    thanks again
    regards
    ~jafeel

  • Lock box post process

    Hi,
    Can anybody please let me know what errors come up at the time of processing the bank file?
    Also, can you please let me know what would happen if we receive less than the invoice amount? What kind of error will the system raise?
    It's really very urgent.
    THanks,

    HI,
    To have lockbox customizing in the system you need to define the following:
    FA-Bank Accounting-Business Transaction-Payment Transaction-Lock Box
    Define Control Parameters for BAI format as
    Doc Length : 10
    Number of document numbers in record type 6 : 3
    Number of document numbers in record type 4 : 6
    GL Account posting : x
    Incoming customer payments : x
    GL account posting type : 1
    Enter all the details and save the BAI format
    Define Posting Data as
    Destination : 1000123456
    Origin : 0011000390
    Company Code : ABCD
    House Bank : HSBC
    Account ID: HSBCA
    Bank GL account: Deposit bank A/c
    Bank Clearing A/C : Clearing Account
    Bank Posting Doc type : SA
    Cus Posting Doc Type : DZ
    Posting Key  Dr : 40
    Posting Key Cr : 50
    Posting Key credit Cust : 15
    Posting Key DR cust : 06
    And save the posting data.
    And in Customer master Payment Transaction Tab create Customer Bank details with bank key and Bank Account.
    Then create a text file as shown below:
    100100012345600110003900712110100
    210001234560011000390
    5086001007250407121110001234560011000390
    60860020000150000 044115126189175247210 11000206
    408600360171800000003        0000150000000000000000   testtesttesttest
    7086007007250407121100100001500000
    80860080072504071211000100001500009    015000    015000
    9000001
    If there is any confusion about the text file, give me your mail ID and I will send you the file with an explanation.
    Regards
    Balaji
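    To make a test file like the one above easier to inspect before loading it, here is a small, hedged reader sketch; it just lists each line with its BAI record type (the first character), so you can check the 1 / 2 / 5 / 6 / 4 / 7 / 8 / 9 record sequence. The report name and the file path are assumptions.
    REPORT z_show_bai_records.

    DATA: gv_file TYPE string VALUE '/tmp/lockbox_test.txt',   " assumed path on the app server
          gv_line TYPE string,
          gv_type TYPE c LENGTH 1.

    OPEN DATASET gv_file FOR INPUT IN TEXT MODE ENCODING DEFAULT.
    IF sy-subrc = 0.
      DO.
        READ DATASET gv_file INTO gv_line.
        IF sy-subrc <> 0.
          EXIT.                   " end of file
        ENDIF.
        IF gv_line IS INITIAL.
          CONTINUE.               " skip empty lines
        ENDIF.
        gv_type = gv_line(1).     " BAI record type, e.g. 6 = payment, 4 = overflow, 9 = trailer
        WRITE: / 'Type', gv_type, ':', gv_line.
      ENDDO.
      CLOSE DATASET gv_file.
    ENDIF.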

  • 12.1.1 upgrade fails while processing .xdf files

    Hi,
    We are in the process of upgrading our EBS 12.0.4 installation to 12.1.1. We followed Doc ID 752619.1 to achieve this.
    We performed all the pre-update steps in the document:
    1. Upgraded the Database to Version 11.1.0.7
    2. Upgraded OracleAS 10g Release 3 (10.1.3) Oracle Home to version 10.1.3.5 (Doc ID 454811.1)
    3. Upgraded OracleAS 10g Release (10.1.2) for Forms and Reports to Patchset 10.1.2.3. (Doc ID 437878.1).
    4. Upgraded Oracle E-Business Suite Release 12 JDK to Java 6.0 update 16 (Doc ID 455492.1).
    5. Applied the Patch 7461070 (R12.AD.B.1).
    6. Applied patch 8764069:R12.FND.B.
    We then started applying patch 7303030.
    The issue is that the patch gets stuck while processing some xdf files, and it is not the same file causing the problem when I restart the patch.
    The worker log file shows the following error:
    Exception in thread "main" java.sql.SQLException: Io exception: Socket read timed out
    at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:112)
    at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:146)
    at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:255)
    at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:387)
    at oracle.jdbc.driver.PhysicalConnection.<init>(PhysicalConnection.java:439)
    at oracle.jdbc.driver.T4CConnection.<init>(T4CConnection.java:165)
    at oracle.jdbc.driver.T4CDriverExtension.getConnection(T4CDriverExtension.java:35)
    at oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:801)
    at java.sql.DriverManager.getConnection(DriverManager.java:582)
    at java.sql.DriverManager.getConnection(DriverManager.java:185)
    at oracle.apps.ad.worker.AdJavaWorker.getAppsConnection(AdJavaWorker.java:1036)
    at oracle.apps.ad.worker.AdJavaWorker.main(AdJavaWorker.java:276)
    and the worker status is still "running". At the same time the database alert log shows the following messages:
    ORA-609 : opiodr aborting process unknown ospid (21951_182909510944)
    Sun Jan 24 09:41:40 2010
    ORA-609 : opiodr aborting process unknown ospid (21949_182909510944)
    Sun Jan 24 09:52:53 2010
    ORA-609 : opiodr aborting process unknown ospid (22020_182909510944)
    Sun Jan 24 09:53:38 2010
    ORA-609 : opiodr aborting process unknown ospid (22026_182909510944)
    Sun Jan 24 09:54:21 2010
    ORA-609 : opiodr aborting process unknown ospid (22024_182909510944)
    Sun Jan 24 09:55:04 2010
    ORA-609 : opiodr aborting process unknown ospid (22022_182909510944)
    Sun Jan 24 09:57:56 2010
    Please help.
    Regards
    Navas

    Hi Hussein,
    Yes, the database is up and running. Initially the alert log was showing the message
    Fatal NI connect error 12537, connecting to:
    (LOCAL=NO)
    VERSION INFORMATION:
    TNS for Linux: Version 11.1.0.7.0 - Production
    Oracle Bequeath NT Protocol Adapter for Linux: Version 11.1.0.7.0 - Production
    TCP/IP NT Protocol Adapter for Linux: Version 11.1.0.7.0 - Production
    Time: 14-JAN-2010 11:17:19
    Tracing not turned on.
    Tns error struct:
    ns main err code: 12537
    TNS-12537: TNS:connection closed
    ns secondary err code: 12560
    nt main err code: 0
    nt secondary err code: 0
    nt OS err code: 0
    ORA-609 : opiodr aborting process unknown ospid (2189_182909510944)
    I followed Doc ID 745167.1 and set the parameter DIAG_ADR_ENABLED=OFF. Afterwards the alert log shows:
    ORA-609 : opiodr aborting process unknown ospid (22312_182909510944)
    Sun Jan 24 10:44:51 2010
    ORA-609 : opiodr aborting process unknown ospid (22314_182909510944)
    Sun Jan 24 10:45:31 2010
    ORA-609 : opiodr aborting process unknown ospid (22310_182909510944)
    Sun Jan 24 10:46:33 2010
    ORA-609 : opiodr aborting process unknown ospid (22318_182909510944)
    Sun Jan 24 10:47:23 2010
    ORA-609 : opiodr aborting process unknown ospid (22322_182909510944)
    Sun Jan 24 10:48:10 2010
    ORA-609 : opiodr aborting process unknown ospid (22320_182909510944)
    Sun Jan 24 10:58:00 2010.
    I am able to connect to the database from the database tier as well as the application tier using SQL*Plus. All the adadmin utilities are working fine.
    Regards
    Navas

  • Working with WebLogic 8.1 SP2 and getting an error while processing a file

    Hi,
    I am getting an error while working with WebLogic 8.1 SP2 and a Java-based application in which I am processing a file. The file gets processed, but running 'ps -ef' shows the process still running. The exception log looks like this:
    <26-Mar-2008 10:10:48 o'clock GMT> <Warning> <WebLogicServer> <BEA-000337> <ExecuteThread: '21' for queue: 'weblogic.kernel.Default' has been busy for "645" seconds working on the request "Http Request: /shield/xml", which is more than the configured time (StuckThreadMaxTime) of "600" seconds.>
    <01-Apr-2008 06:50:00 o'clock BST> <Error> <HTTP> <BEA-101017> <[ServletContext(id=197633402,name=/shield,context-path=/shield)] Root cause of ServletException.
    javax.servlet.jsp.JspException: Can't insert page '/jsp/layouts/mainLayout.jsp' : Broken pipe
    at org.apache.struts.taglib.tiles.InsertTag$InsertHandler.processException(Ljava.lang.Throwable;Ljava.lang.String;)V(InsertTag.java:956)
    at org.apache.struts.taglib.tiles.InsertTag$InsertHandler.doEndTag()I(InsertTag.java:884)
    at org.apache.struts.taglib.tiles.InsertTag.doEndTag()I(InsertTag.java:473)
    at jsp_servlet._jsp.__main._jspService(Ljavax.servlet.http.HttpServletRequest;Ljavax.servlet.http.HttpServletResponse;)V(main.jsp:2)
    at weblogic.servlet.jsp.JspBase.service(Ljavax.servlet.ServletRequest;Ljavax.servlet.ServletResponse;)V(JspBase.java:33)
    at weblogic.servlet.internal.ServletStubImpl$ServletInvocationAction.run()Ljava.lang.Object;(ServletStubImpl.java:971)
    at weblogic.servlet.internal.ServletStubImpl.invokeServlet(Ljavax.servlet.ServletRequest;Ljavax.servlet.ServletResponse;Lweblogic.servlet.internal.FilterChainImpl;)V(ServletStubImpl.java:402)
    at weblogic.servlet.internal.ServletStubImpl.invokeServlet(Ljavax.servlet.ServletRequest;Ljavax.servlet.ServletResponse;)V(ServletStubImpl.java:305)
    at weblogic.servlet.internal.RequestDispatcherImpl.forward(Ljavax.servlet.ServletRequest;Ljavax.servlet.ServletResponse;)V(RequestDispatcherImpl.java:301)
    at org.apache.struts.action.RequestProcessor.doForward(Ljava.lang.String;Ljavax.servlet.http.HttpServletRequest;Ljavax.servlet.http.HttpServletResponse;)V(RequestProcessor.java:1069)
    at org.apache.struts.tiles.TilesRequestProcessor.doForward(Ljava.lang.String;Ljavax.servlet.http.HttpServletRequest;Ljavax.servlet.http.HttpServletResponse;)V(TilesRequestProcessor.java:274)
    at org.apache.struts.action.RequestProcessor.processForwardConfig(Ljavax.servlet.http.HttpServletRequest;Ljavax.servlet.http.HttpServletResponse;Lorg.apache.struts.config.ForwardConfig;)V(RequestProcessor.java:455)
    at org.apache.struts.tiles.TilesRequestProcessor.processForwardConfig(Ljavax.servlet.http.HttpServletRequest;Ljavax.servlet.http.HttpServletResponse;Lorg.apache.struts.config.ForwardConfig;)V(TilesRequestProcessor.java:320)
    at org.apache.struts.action.RequestProcessor.process(Ljavax.servlet.http.HttpServletRequest;Ljavax.servlet.http.HttpServletResponse;)V(RequestProcessor.java:279)
    at org.apache.struts.action.ActionServlet.process(Ljavax.servlet.http.HttpServletRequest;Ljavax.servlet.http.HttpServletResponse;)V(ActionServlet.java:1482)
    at org.apache.struts.action.ActionServlet.doGet(Ljavax.servlet.http.HttpServletRequest;Ljavax.servlet.http.HttpServletResponse;)V(ActionServlet.java:507)
    Please help me!

    Not sure why you want to replace it, since the response of the proxy would keep holding the request body by default.
    If you have stored the opaque element in a variable ($var_opaque), then you can do the following.
    XPath : .
    In variable : body
    Expression : $var_opaque
    Check - "Replace node content"

  • ACR 64-bit processing multiple files at a time - performance

    Hi all -
    I recently built a new computer with a RAID 0 drive and a Core i7 processor. The first few times I processed photos in ACR hosted in 64-bit Photoshop, it would process four photos at a time, which seems like it would optimally utilize all eight cores. Recently I've noticed that sometimes it will process four photos at a time, but usually it will do three at a time. I can't consistently get it to run four at a time any more, which makes me think Photoshop isn't getting as many resources as it could/should.
    Does anyone have an idea how ACR decides how many raw images it will grab at once when it is processing out to JPEGs? Or how to improve ACR processing performance?
    Thanks,
    -Dan

    MadManChan2000 wrote:
    ACR supports multiple cores and multi-threading on both 32-bit and 64-bit machines. Depending on other activities, ACR may not use all available cores when processing images (e.g., one core may be reserved for handling foreground activities such as user input, UI redraw, etc.).
    There are definitely differences between 32 and 64 bit builds.   I have observed this myself.  I have 8 processor threads (2 actual cores with hyperthreading in each of two separate Intel Xeon DP 5060 3.2 GHz processors).
    I just tested ACR 6.3, with Photoshop CS5 on Windows 7 Ultimate x64 8 GB, with ATI Radeon HD 4670 video card...
    Photoshop CS5 12.0.3, File - Open, Canon EOS-40D raw file conversion to upsampled resolution of 6144 x 4096 pixels, some lens corrections, sharpening and noise reduction enabled.  Same exact image and settings for both tests.
    32 bit Photoshop running 32 bit Camera Raw:  13.8 seconds, Task Manager shows less than 25% load.
    64 bit Photoshop running 64 bit Camera Raw:  6.0 seconds, Task Manager shows 100% load.
    This is easily repeatable.
    Computer shows 5% to 8% load just sitting here (I'm playing music, and this forum chews up cycles too).
    I'm forced to conclude, despite what you wrote above, MadManChan2000, that Camera Raw is using only one or two of the eight available threads in the 32-bit build, and all eight threads in the 64-bit build.
    Can you explain the difference, please? Is it possible the logic isn't simple, and that while the 32-bit Camera Raw may use multiple threads for some things, noise reduction or lens corrections aren't using all threads?
    -Noel

  • FF_5 - Issue with Account Balance option while processing BAI file

    Hi All,
    We are getting a runtime error while trying to process the same bank file again through transaction FF_5 with the 'Account balance' option checked. The message says 'The ABAP/4 Open SQL array insert results in duplicate database records.' at the command 'Insert_FEBPI'. This happens only when the bank file contains a combination of uploaded and not yet uploaded statements.
    I could find an OSS note for the same issue, but it was for the file format MT942; we are using the BAI file format. Can someone please help me in this regard?
    Thanks in advance!
    Regards,
    Jalendhar

    This is standard SAP functionality. If there are no applicable notes, then you should open an OSS message.
    Rob

  • "process mutiple files" freezes computer when resizing images.PSE3

    This function has always worked perfectly for me for years. The problem started about two weeks ago and I have no clue why it is happening.
    Whenever I try to resize a folder containing multiple or even single images, it freezes the program and the computer until I restart everything.
    More specifics:
    Files are being downsized, not upsized.
    The original image loads into the program, then as it is being resized it freezes up at the same point each time. This is when the progress bar (not sure of the exact name) at the bottom of the page shows about 25% progress.
    I have cleaned up the hard drive and done a file reorganization.
    I have uninstalled and reinstalled the program.
    I can resize images directly through the image/resize function one at a time. The problem is that I frequently have to resize large batches of images.
    Thanks for any help.
    Dave

    Dave
    This has caught other people. Check your Resize Image box to see if some strange value has been entered. One user had accidentally entered 4 pixels instead of 4 inches, so all his pictures came out really small. I wonder if you have the opposite problem and are resaving to some extremely large size that Elements does not like.
    If not post back.
    An additional question: are you having problems with the same set of pictures? If so, could you run a test on a second set of pictures? Perhaps there is something strange in the source files.

  • Issue with FTP command while deleting a file.

    Hello All,
    I am trying to delete a file from the FTP server after the file has been processed. I am able to connect to the server, pick up the .csv file, place the data in an internal table, and process it as required in SAP. After the processing, I need to delete that file from the FTP server. I did not change the working directory, and gv_filename is the variable holding the filename. I am using the following code:
    " Build the FTP delete command, e.g. 'delete <filename>'
    concatenate 'delete' gv_filename into lv_cmd separated by space.

    " Clear the result table before the call
    REFRESH mdata.

    " Send the command over the already-opened FTP connection (gv_handle)
    CALL FUNCTION 'FTP_COMMAND'
      EXPORTING
        handle        = gv_handle
        command       = lv_cmd
      TABLES
        data          = mdata[]        " server reply lines
      EXCEPTIONS
        tcpip_error   = 1
        command_error = 2
        data_error    = 3
        OTHERS        = 4.
    When I check mdata, which returns the result of the command, I get the message "550: unable to delete <pat>\filename", whereas the file does exist. Is there any condition under which we may not be authorized to delete the file? If so, is there any way to check that?
    Thanks and Regards,
    Sachin

    Hi Sachin Dargan,
    Yes, you need read and write access to the directory on the FTP server where the file is located.
    Ask the Basis or network admin to give you authorization for it.
    Also, using upper case inside the single quotes is better, as lower case can sometimes cause problems:
    concatenate 'DELETE' gv_filename into lv_cmd separated by space.
    Hope this solves your problem.
    Thanks & Regards
    ilesh 24x7
    ilesh Nandaniya
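    One way to narrow down a 550 on DELETE before chasing authorizations is to ask the server what it actually sees. The hedged sketch below reuses the open connection (gv_handle) from the question and sends the standard FTP command 'PWD' through the same FTP_COMMAND module ('NLST' can be sent the same way to list the files in that directory); the reply table type is an assumption, so adjust it to whatever your program already uses.
    DATA: lt_reply TYPE TABLE OF text255,   " assumed reply-line type
          ls_reply TYPE text255.

    " Ask the server for its current working directory
    CALL FUNCTION 'FTP_COMMAND'
      EXPORTING
        handle        = gv_handle          " open connection from the code above
        command       = 'PWD'
      TABLES
        data          = lt_reply
      EXCEPTIONS
        tcpip_error   = 1
        command_error = 2
        data_error    = 3
        OTHERS        = 4.

    IF sy-subrc = 0.
      LOOP AT lt_reply INTO ls_reply.
        WRITE: / ls_reply.                 " compare with the path in the 550 message
      ENDLOOP.
    ENDIF.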

  • "Cannot Release Lock" error while importing an MDL file

    Hi,
    I am trying to import an MDL file into the test environment from the development environment.
    No users are working on the test environment.
    I start importing the MDL file.
    Then I get the error "Cannot Release Lock".
    I checked that no sessions are active and no one is using OWB for the test environment in parallel.
    Could anyone explain why this error occurred?
    Thank you,
    Regards,
    Gowtham Sen.

    Did the instance go down whilst doing this?
    Is there more to the log before where you started it?
    Cheers
    David
