FVU file conversion process-related issues

Hi,
I am in the process of converting a text file into an FVU file for TDS purposes.
When I generate the text file through J1INQEFILE, it does not contain the TAN of the deductor.
I checked the configuration: section code, business place, and even table J_1I_SECCODE in SM30. The TAN is maintained in all three places.
I have three requirements now:
1. How do I get the TAN updated in the text file?
2. I want the mobile number of the TDS deductor in the text file.
3. Where do I update the BSR code, and I want it updated in the text file as well.
I hope some of the experts who have already done this for their requirements can reply. It's urgent, please.

Hi,
Issue resolved. The problem was that the TAN number, mobile number of the TDS deductor, and BSR code were not being updated in the text file that we generate through J1INQEFILE.
I put these values into the text file manually and uploaded it, and I got the FVU file.
For updating the TAN, SAP told us to apply a note.
So the issue is resolved now.
Thanks

Similar Messages

  • Reg: FVU file conversion error

    Dear All,
    While I try to validate the .txt file generated from SAP with the FVU utility, I am getting the error
    'T-FV-1000 Invalid File Header Record Length'
    The file header line of my file is below for reference:
    1^FH^NS1^R^11032011^123456789^D^ABCD12345F^1^^^^^^^
    I went through these threads too, but with no luck:
    Re: Generate Quarterly e-TDS File. prob
    Re: REGARDING TDS
    Regards,
    Bala


  • PI Interface Posting Files - Trigger Process Chain Issue

    Dear Reader,
    Situation -
    We get multiple flat files from a source system via a PI interface. To process these on the BW 7.0 side, we have created a web service interface. In the function module we have written code to trigger a process chain once data is posted.
    Issue -
    As there are multiple files being posted, the process chain runs on completion of every single post, which is not desired. We need to run the process chain only once, at the end of all the files being posted.
    Notes -
    1. The number of files keeps varying.
    2. Clubbing all the files into a single file and then posting it would cause performance issues.
    Request your help in finding a way such as -
    1. A file with a specific name, say 'START PC', posted at the end, which can be trapped in the function module to control the process chain call (see the sketch just below this list).
    2. <any other idea>
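    A minimal sketch of option 1, assuming the inbound function module receives the posted file name in a parameter (IV_FILENAME is a made-up name, as is the chain name ZPC_DAILY_LOAD); RSPC_API_CHAIN_START is the standard API for starting a process chain:
    " Start the chain only when the dedicated trigger file arrives,
    " not for every data file that is posted.
    DATA l_logid TYPE rspc_logid.
    IF iv_filename = 'START PC'.          " trigger file name from option 1
      CALL FUNCTION 'RSPC_API_CHAIN_START'
        EXPORTING
          i_chain = 'ZPC_DAILY_LOAD'      " process chain name - assumption
        IMPORTING
          e_logid = l_logid.
    ENDIF.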
    regards,
    vinay gupta

    Hi Dhanya,
    This is the code I have in the ABAP program in the process chain. I just included the API_SEMBPS_POST part, but it still doesn't work. Please give me your email address so that I can send some screenshots.
    REPORT zhtest.

    DATA: l_subrc   TYPE sy-subrc,
          ls_return TYPE bapiret2.

    " Commit the BPS planning buffer.
    CALL FUNCTION 'API_SEMBPS_POST'
      IMPORTING
        e_subrc   = l_subrc
        es_return = ls_return.

    " Close the open request on the transactional InfoCube.
    CALL FUNCTION 'RSAPO_CLOSE_TRANS_REQUEST'
      EXPORTING
        i_infocube         = 'ZMAP_TAB'
      EXCEPTIONS
        illegal_input      = 1
        request_not_closed = 2
        inherited_error    = 3
        OTHERS             = 4.

    IF sy-subrc <> 0.
      MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
              WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
    ENDIF.

  • Conversion process (job work - MTO) delivery issue

    Hi All,
    We are unable to create a delivery for a conversion process sales order (job work - MTO) for which stock has been reserved since 5th Nov 2011. The sales order was created on 5th Nov 2011.
    We are getting the following error message: Message no. VL455 "The sales order you want to deliver, sales order 1101200011 with order type ZO03, either contains no delivery-relevant items or its delivery-relevant items have already been delivered."
    For the same sales order we are able to create the delivery from 23rd Nov 2011 onwards,
    but we are not able to create deliveries for the materials between the 5th and the 23rd.
    Please could any one suggest a solution for the above issue.
    Regards
    Srinivas

    Hi,
    This is because the schedule line in the sales order has confirmed the delivery date as 23rd November. If the material is available on 5th November (it should be reserved for that sales order, since it is MTO), re-run the availability check in the sales order. This will confirm 5th November as the delivery date.
    Regards,
    Denish.

  • Several .itl files - iTunes process runs but iTunes won't open

    This is a little complicated, and I have now run into several related issues. This all started a few weeks ago when I attempted to move my iTunes library from one external hard drive to a new and larger external hard drive on my Windows Vista PC. I copied/transferred all of what was on the older drive onto the new drive. That's fine and dandy. At the time I also did some maintenance, removing old music folders and iTunes library files from several years of use. I had all kinds of duplicate music files (that got copied over at the same time) and a horde of dead links and playlists too. Very messy. In short, my library had become unmanageable a long time ago, at close to 500 GB in size.
    Everything copied over, and iTunes opened and worked okay.
    However, problems have now appeared after (and it may be coincidental) loading the latest version of iTunes, and at the same time using several programs to attempt to find and delete duplicates, fix playlists, and locate files and dead tracks. These included "iTunes Library Tool Kit" and a duplicate file finder, plus Steve MacGuire's vbs scripts.
    It all came to a head the other day when iTunes would open (you can see the process running in Task Manager) but the window would not appear. Basically I followed http://support.apple.com/kb/ts1717, and it seems to have fixed the "not opening" issue. However, before going through and recreating my library, I realize I have several .itl files, several .xml files, and all kinds of dead or old iTunes music files.
    So I guess I'm asking: is there some way of going back to square one? Scrubbing any and all history of iTunes, and having iTunes recreate my library as if my PC were new to Apple and iTunes?
    If so, what is my best approach? Should I erase old folders and files first?
    I've attached an example of what I found after searching for .itl files (lines 2 - 5 were moved to the desktop); that's where they still reside, but I stopped and posted this inquiry before doing anything else.
    Don't know if it matters, but I also have my library in iCloud.

    Hi there. If you want a clean start, I'd suggest you shift-start iTunes and create a new empty library at F:\iTunes, make sure the media folder under Edit > Preferences > Advanced is set to F:\iTunes\iTunes Media, and check that the options to Keep... & Copy... are enabled. I'd then move the existing media inside the iTunes Media folder before adding it to iTunes. Either use the Automatically Add to iTunes folder or move everything deeper inside, e.g. F:\iTunes\iTunes Media\Imports, so that once iTunes has imported everything and moved it to where it thinks it belongs, you can easily clean up any artwork, playlist files, etc. that iTunes ignores.
    Note that a clean start like this will lose all ratings, play counts, playlists, and date-added information, and if you have any devices it may mean that all media needs to be resynced. Chances are the media folders may still contain duplicates, but these can be cleaned using my DeDuper script.
    Take a look at this backup tip... Ideally you would back up what you have before you get started, just in case you get overenthusiastic at some point, then redo the backup once you have the rebuilt library working as you want it.
    tt2

  • Adobe Acrobat Distiller Process. Issue with Font (L0420P not found, using Courier)

    Hi Everybody,
    We are working on upgrade activities, which involve moving applications from Windows 2000 to Windows 2003 servers.
    My application uses Adobe Acrobat Distiller, which is also being upgraded along with the server, from Adobe 4.0 to 5.0 (licensed Adobe Distiller installed).
    I have been using Distiller to convert .PS files into .PDF files. The conversion process requires PostScript fonts (such as MB006, L0B, L0420P, etc.); each font has separate files with PFB and PFM extensions, which are kept in a folder named "Psfonts" that has been included in the "Font locations" list in the Distiller settings.
    Using Distiller (Adobe Acrobat Distiller 4.0) on Windows 2000 didn't create any issues or errors while converting .PS files into .PDF files; all the PDF files were generated by Distiller as expected and processed without any font issues.
    On Windows Server 2003, I installed Adobe Acrobat Distiller 5.0, moved the font folder "Psfonts" from the old server (Windows 2000) to the new server (Windows 2003), and started the Distiller (5.0) process to convert .PS to .PDF. Most of the .PS files get converted into .PDF files, but when a .PS file that requires the font L0420P comes in for distilling, it displays the error message "L0420P not found, using Courier".
    I checked the old server's (Windows 2000) font folder (Psfonts); it has the same files (PFB and PFM) as are present on the new server (Windows 2003).
    I tried to check for the particular font name (L0420P) under Distiller --> Settings --> Job Options --> Fonts --> Embedding --> Psfonts, but the L0420P font alone is missing from the list.
    Could you please help in resolving this font issue?

    You may like to try deleting the PsFonts entry from Distiller and closing it. Restart the machine, make the PsFonts entry again, and try creating the PDF.

  • MTS File Conversion for iMovie

    Dear Apple Support Community,
    I am struggling with movie file conversions, and with getting optimum results from such a process before starting my editing work in iMovie.
    I have a Canon HD camcorder which records movies according to the AVCHD standard. Before using iMovie, I need to convert the .MTS files to a format which is compatible with iMovie, and I have tried the following:
    Copy the files from the SD card onto my HDD.
    Using Toast 11 (Titanium), I have selected different profiles to convert the file to formats such as MOV, M4V, MP4, etc.
    The converted file's size does not match the original's, i.e. a 300 MB file after being converted is now either 791 MB or 96 MB.
    When importing the converted file into iMovie, iMovie still wants to optimize the file for some reason (I believe using the Apple Intermediate Codec resolves the optimization issue).
    Import the files directly from the SD card into iMovie, provided I have not altered the original file structure, i.e. when inserting the SD card, iMovie recognizes the SD card the same way it would when connecting the camera. When I select the videos to be imported, iMovie does so without hassle, but the files again turn out to be bigger than the original file by a factor of between 2 and 3, depending on the original file.
    What is the best way to convert files to preserve the quality as close as possible to the original?
    Is it normal for converted files to be larger than the original file after being converted (smaller makes sense, but bigger just baffles me)?
    Thanks for any feedback!

    Thanks for the great feedback.
    I do have struggles in general though, part of which originate from my camera itself.
    My camera still has a tendency to record an interlaced effect even though my settings are confidently set to progressive (I have 50i and 25p to choose from - 25p is my selection).
    However, when I use Toast Titanium, I have an option to have the source video deinterlaced, but Toast does not do a good job at deinterlacing compared to Handbrake. The benefit of my Toast preset is that I can convert the video to the Apple Intermediate Codec, which means that when the files are imported into iMovie, the process is rather quick.
    When I convert using Handbrake, the software does a pretty **** good job at deinterlacing, but the downside is that I cannot find a preset that makes the import into iMovie satisfactory.
    So, in light of this long discussion, my question really is: what is the best file converter for iMovie to convert MTS files that need to be deinterlaced?
    Thanks for any help!
    Adios

  • A failure occurred in the document conversion process

    A failure occurred in the document conversion process. The document cannot be viewed at this time.
    No matter what I do, I cannot view attachments. .jpg files will render, but it seems like nothing else will.
    I do not see the Document Viewer Agent (DVA) as a service in GW 2012 (OES2 SLES10).
    Seb

    I am having the same issue, but only with .wav files.
    I can click on the <filename>.wav link and open the attachment fine in a selected program, but if I select the "Play" link, the window switches back to the login page with "[962D] A failure occurred in the document conversion process. The document cannot be viewed at this time" displayed underneath.
    Any ideas what this could be?
    Emily

  • Many Sporadic Preview Related Issues

    Not sure if this is where to post/report Lightroom 5 issues or not, but here are the preview-related issues I am experiencing...
    Preview panel "preview" freezes indefinitely.
    App very slow to load while creating new previews (previews take forever to create and update).
    Wrong previews showing for the wrong images when swapping between folders, etc. (i.e. all photos, last import, custom collections).
    This one is a little different...
    Ratings lost when switching between folders, etc. (i.e. all photos, last import, custom collections).
    This was a clean install that ran beautifully at first. I started noticing this more and more after using the spot tool (my last step in processing a job).
    I use separate catalogs per job, some as small as a few dozen images (family stuff), others as large as 500-600 (engagement and wedding clients).
    Hope you are working on this. It's making work a struggle. I'm not just a "consumer". ;-)

    This is a user-to-user forum, so the responses you receive here will mostly be from users like yourself. I am sure someone will respond with help on dealing with the problem you are experiencing.
    The official forum for reporting bugs and making feature requests is at the web link below.
    http://feedback.photoshop.com/photoshop_family/products/photoshop_family_photoshop_lightroom

  • Flat File automation process - limitations

    Hello Everyone,
    I would really appreciate any insight on the process improvement suggestions.
    Background:
    Currently we have around 12 territories providing a new flat file with new data on a daily basis, depending on business activity. That also means that if there is no activity on a given day, no flat file is provided to BI for the loading process.
    The flat files provided need to be loaded into the BI system (PSA - DSO - InfoCube).
    The flat file loading process has been automated for the daily file by implementing a logical file name for each territory.
    1. A process variant in the process chain checks whether the flat file is available on the app server (custom ABAP program).
    2. 12 InfoPackages have been created to pick the data from the flat file on the app server and load the data over into the PSA.
    3. All the InfoPackages merge into an "AND" event in the process chain before the DTP load into the DSO kicks off.
    4. DSO Activation
    5. Recon between the flat file and the DSO to ensure all the data from the flat file has been loaded into the DSO.
    6. DTP loads into the InfoCube.
    7. Recon between the InfoCube and the DSO itself.
    8. Moving the flat file from one folder into another.
    All the above steps are performed automatically, without any issues, as long as the flat file is available on the server.
    Problem / Issue:
    The major limitation of the above design is that the flat file absolutely must be available on the app server for the whole data flow in the process chain to continue without any breakpoints.
    Current workaround / process improvement in place:
    Based on the above limitation and upon further research, I was able to apply the OSS Note that gives us the option of maintaining multiple DTPs for the same data target with different filter values.
    But even with an individual data stream per territory, each with its own DTP, the issue remains: the process variant (the ABAP program that checks whether the file exists) will fail, or the InfoPackage load will fail if the ABAP program is removed.
    This means that, because of the above, the support team is alerted about the process chain failure.
    Question / Suggestions required:
    The main question (and any suggestions would be welcome): can anyone suggest an approach where the flat file check program does not have to raise a hard failure in the process chain, so that the rest of the process chain can continue with the loading process? (For the rest of the process chain to continue, the only link options we have are Error, Success, and Always.)
    I have also looked into the Decision process variant available in the process chain, but based on the options available within it, I cannot utilize it for the loading process.
    Error can be raised by generating an error message in the ABAP program, which in turn causes an alert to be sent even if the rest of the process chain finishes.
    Success would mean the flat file needs to be available. Always cannot be used in this case, as it would cause a failure at the InfoPackage level.
    If the InfoPackage load could be skipped without a hard error being generated, the process chain would not remain in a failed state, and in turn no alert would be triggered to the support team.
    Please do let me know if you need more details about the above process improvement question.
    Thanks
    Dharma.

    The main issue with this, as you mentioned, is that the file has to be available for sure.
    We had a similar issue - a very critical data load that had to happen every day; failure of the file to load would mean that the reports for the day would be delayed.
    We were running on UNIX, and we implemented a simple UNIX loop that would not complete until the file was available on the app server.
    Something like this (the path is just an example):
    while [ ! -f /interface/in/daily_load.csv ]; do
      sleep 15
    done
    You come out of the while loop only when it completes, which means the file has become available.
    You can write a similar ABAP program to check file availability, if required, and put it into your process chain.
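    A minimal sketch of such a check (the file path and retry count are assumptions), polling instead of failing hard on the first miss:
    " Poll the application server until the file shows up.
    DATA l_file TYPE string VALUE '/interface/in/daily_load.csv'.
    DO 20 TIMES.                          " ~5 minutes at 15 s per try
      OPEN DATASET l_file FOR INPUT IN TEXT MODE ENCODING DEFAULT.
      IF sy-subrc = 0.
        CLOSE DATASET l_file.
        EXIT.                             " file is available - continue
      ENDIF.
      WAIT UP TO 15 SECONDS.
    ENDDO.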
    We also had a failover process: if the file did not arrive after a certain number of tries, we created a zero-byte file with the same name, so the PSA would load zero records and the data load would continue.

  • Sender File Adapter with file conversion

    Hi guys,
    I'm using a sender file adapter with file content conversion. The message to be processed has a structure with fixed lengths, and its content includes some values that need to be ignored.
    An example:
    value1  value2  value3…
    I want to ignore value2, but I can't find a parameter for that! Do I need to define dummy fields in my data type and ignore those fields during mapping? Or is there a specific parameter for that?
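    For reference, the dummy-field approach would look something like this in the sender channel's content conversion parameters (the record name and field lengths below are made up); the dummy field is then simply left unmapped:
    Recordset Structure: Record,*
    Record.fieldFixedLengths: 10,8,12
    Record.fieldNames: value1,dummy,value3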
    Thanks in advance,
    Ricardo.

    Hi,
    There always is another way :)
    You can import the whole line into one field
    and cut it inside an adapter module
    (then you can define the start and end of the substring that you need to use),
    but of course it's not standard, even though it's quite easy to achieve in Java.
    Regards,
    Michal
    XI / PI FAQ - Frequently Asked Questions: /people/michal.krawczyk2/blog/2005/06/28/xipi-faq-frequently-asked-questions

  • Could not archive file after processing - error in RWB of sender adapter

    Until today afternoon it was running fine.
    The process here is: we keep the files on an FTP site, and PI polls them and places the files on the NFS mount server in AL11 (FILE/STAGING).
    These files are then processed from FILE/STAGING to FILE/PROCESS in AL11.
    Then a proxy gets triggered and sends the data to SAP, where a workflow is triggered.
    Now the issue: the file is getting polled from FTP and placed in the FILE/STAGING folder perfectly.
    Then the file needs to be moved to the FILE/PROCESS folder using the file adapter; this is where the issue occurs.
    The error is shown below:
    Time Stamp  Message ID  Explanation 
    Could not archive file '/FILE/Staging/20100517112845_140011140001.tif' after processing
    com.sap.aii.af.service.util.transaction.api.TxManagerException: Unable to commit transaction: The transaction has been rolled back: com.sap.engine.services.ts.transaction.TxRollbackException
    I have done a cache refresh but the problem persists.

    Hi,
    1. Check whether you have authorization for write access.
    2. Check the path provided for processing.
    3. Ensure the staging and process paths are different.
    Thanks,

  • Big File to IDOC - performance issue

    Hi All,
    I am trying to create a scenario where I have a file with approximately 10,000 rows. From each row I am creating one IDoc and want to send it to R/3. The interface looks fine - it is working, but it kills the XI box for some time and you can't access it.
    The full scenario looks like this:
    File -> BPM (for 1:n) -> IDOC
    I tried to find a solution to make the workload smaller by splitting the file into fewer lines (500 rows per file), but then the file adapter picks up all the files and processes them in parallel. So this is the new scenario:
    BigFile -> XI -> File -> BPM (1:n) -> IDOC
    I tried to set the second file sender communication channel to EOIO, but it looks like this does not work - or the messages from the queue are processed too fast. When one message starts the BPM, another file message starts to be processed.
    Do you have any ideas on how to make this more responsive with less performance impact?
    Thanks in advance.
    Dawid

    Hi,
    Since mappings are processed by the J2EE Engine, the maximum available Java heap may be a limiting factor for the maximum document size the XI mapping service is able to process. Tests have shown that processing of XSLT mappings consumes up to 20 times the source document size (using an identity mapping). The maximum available Java heap for 32-bit JVMs is platform-dependent. Using 64-bit JVM platforms is an option here.
    Current maximum heap sizes (32-bit):
    OS        Maximum heap (GB)
    Linux     2
    Windows   1.2 – 1.4
    The Java heap is limited by the heap limit of the process (it may be limited by address space, because operating system code or libraries may also be loaded within the same address space). Also, Java-internal memory areas such as the permanent space for loading Java classes must fit into the same address space.
    Java VM tuning is one of the most crucial tuning steps, especially for more complex scenarios. For information about setting baseline JVM parameters, see SAP Note 723909. You must also take platform-specific parameters into account (for example, JIT compiler settings). The impact of Garbage Collection (GC) behavior in particular may become a critical issue. Overall GC times for the J2EE application should be well below 5%. For more information about GC behavior and settings, see also SAP Note 552522.
    Specific to XI is the fact that you sometimes need to process large documents for mapping or when using signatures. This can lead to excessive memory usage on the Java side. Therefore, you must observe Garbage Collection and the available Java heap in order to evaluate performance and prevent OutOfMemory exceptions. Since XI mapping is processed by stateless session beans that are called using a JCo interface, this may lead to a reduction of parallel JCo server threads within the JCo RFC Provider service of a J2EE server node (you can compensate for this by adding J2EE server nodes).
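    As a generic illustration only (the values below are examples, not the recommendation from the SAP Notes above), the heap bounds are set with the standard JVM flags:
    -Xms1024m -Xmx1024m
    On 32-bit Windows, a maximum heap much above the 1.2 - 1.4 GB range shown in the table above typically prevents the JVM from starting at all.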
    Mudit

  • XML file conversion after sender file content conversion

    Hi,
    I have an issue regarding file content conversion.
    My input structure is:
    <MT_RCICrecords>
      <TRNH>
        <RCIC>
          <RECH>
          <RECL>
        </RCIC>
      </TRNH>
    </MT_RCICrecords>
    After sender file content conversion (CSV to XML) it produces an XML file as below (since file content conversion does not support a 3rd level of hierarchy):
    <TRNH>
    <RECH>
    </RECH>
    <RECL>
    </RECL>
    </TRNH>
    It does not recognize RCIC.
    Now I am trying to map this to an IDoc and I am getting the error
    'MT_RCICRecords tag found instead of IDOC BEGIN ='.
    Can anyone suggest how to change this XML output after file content conversion to add the RCIC tag in the XML file?
    I am new to XI, so please give me some sample code too.
    Thanks.
    Yashpal
    It's urgent!

    My problem is that the XML generated from content conversion is like below:
    <MT_RCICrecords>
    <TRNH></TRNH>
    <RECH></RECH>
    <RECL></RECL>
    <TRLR></TRLR>
    </MT_RCICrecords>
    and I want it to be:
    <MT_RCICrecords>
    <TRNH></TRNH>
    <RCIC>
    <RECH></RECH>
    <RECL></RECL>
    </RCIC>
    <TRLR></TRLR>
    </MT_RCICrecords>
    which is not happening.
    My input message structure is:
    MT_RCICrecords
      TRNH
      RCIC
        RECH
        RECL
      TRLR
    I hope it is clear now.

  • Illustrator files from China & output issues

    I just need to bounce this off of someone - we received several files (40, in fact) from China for Wal-Mart.
    My RIP would not accept the PDF files created from these Illustrator files.
    They were created in Illustrator CS4 in China, then opened in Illustrator CS4 (here in the US) and saved as PDFs per my specs.
    The input would fail just at the point where color information was being gleaned from the file.
    We sent these files to Screen and they ripped them and sent them back to me (they used a newer workflow) - those files went through fine.
    Today I asked our client to send one of the original files and all linked images (they were embedded in the .ai files I had received originally, so I couldn't dig in and get any info).
    When looking at the images used in the file (Photoshop EPS files that opened in Mac Preview with a double click - however, when opened in Photoshop there were guidelines drawn, so I feel each began as a Photoshop file), they all had Japan Color 2001 Coated color profiles.
    Now my question is this: is it at all possible for that to have hindered the ripping of these PDFs?
    PDFs that would not rip were opened in PitStop, flattened, "preflighted", and all errors repaired in an attempt to get them to go over - a final preflight in Acrobat 9 & PitStop indicated there were no problems.
    Any ideas? Is there a software incompatibility going on here?

    I know that this thread is pretty old, but as the OP I wanted to provide an update.
    We finally discovered what the problem was - however, we still don't know exactly why, unless it is because our workflow is so old.
    I ended up splitting the file in question into several files by color. Fortunately many of the colors were overprinting each other, so trapping was not an issue.
    All colors went through just fine except ONE.
    I opened that file back up in Illustrator and simply changed the swatch color to a different spot and guess what - it went through just fine.
    I assume that there was something about the original swatch that Trueflow gagged on.
    Coincidentally, my old boss called me that same week with a problem he had getting an Illustrator file to process on his Harlequin RIP - it turns out the issue was similar: a pattern swatch in the file was preventing the file from ripping.
    I know it's a little late - but should anyone come across a similar issue, the swatches may be the problem, particularly if your RIP is an older one - as you know, we don't all have the luxury of having the latest and greatest equipment to work with, particularly with the economy being what it has been.
