What is the most appropriate way to do a reversal and reconcile the reversed transaction?

Dear All,
I am facing a problem reconciling a reversal transaction. The scenario is as follows:
Date 28.04.2008: Company A issues a billing to Customer XYZ for 3,000 MYR.
Date 30.04.2008: Company A wants to reverse the above transaction.
Questions:
1. Where is the most appropriate menu path to do the reversal?
2. If the reversal needs to be done via the menu path Financials module -> Reversal Transaction, then how do I reconcile both items to ensure the system reads them as cleared items?
3. The purpose of Question no. 2 is to get the same result when executing the customer ageing report, regardless of whether the Journal Posting tab or the Sales Document tab is used.
Thank you in advance for your kind reply.

For the Reverse Transaction function under the Financials module, you could refer to Note 1004113 for more detailed information.
To update a journal entry such that it will auto-reverse at a later date, follow the steps below:
1. Go to Financials -> Journal Entry.
2. Open the journal entry that needs to be reversed.
3. Place a tick in the 'Reverse' box; a new field for the reversal date will be displayed.
4. Enter the date on which this journal entry is to be reversed and click 'Update'.
For the customer ageing report, there is one case that could explain the difference between 'by journal posting' and 'by sales document':
if an Invoice has been partially cleared by an Incoming Payment, the transactions created by these documents will be displayed as fully open when you run the report using Due Date by Journal Posting.
However, if you run the report using Due Date by Sales Document, the Invoice will be displayed with its open amount only.
A common occurrence is that exchange rate differences and/or conversion differences were posted without the user having specified that they be reversed at a particular date.
Could you explain more about your case?
Best Regards
Helen Sun

Similar Messages

  • What is the most appropriate way to generate a static IPv6 for a domain controller?

    DNS Role Best Practices is giving errors. It looks like I need to assign one static IPv6 address to each domain controller and use it in DNS and DHCP. There are two routers on the network, each assigning a 2002: IP, plus a link-local FE80: IP is also assigned.
    Is there a way to generate a static IPv6 for domain controllers that will not change even if the network cards or routers are changed?
    What is the best practice so that domain integrated DNS and DHCP with Exchange 2010 in the domain, will continue to function?
    There is ambiguous information as to whether DCs should have static or dynamic IPv6 addresses. I have tried variations such as IPv4-compatible, IPv4-mapped, ISATAP, etc., but over time have gotten different errors from different sources.
    It is one thing for Microsoft to give error messages about IPv6, but I cannot find any definitive recommendations on this.
    Thanks if anyone finds a universal answer.
    Bob.

    Excellent and valid points, Bob. Your outlook explains in an easy way how the challenges of setting up Windows Server are, in a sense, self-generated and, in every sense, fully avoidable.
    No changes have been made to the warnings or errors in 2013 R2 despite improvements in other areas. This release mainly brought improvements to the setup in areas that were truly broken, like automatic account generation for ADFS. Since that's a decade-old feature, it's probably best not to wait for Microsoft to clarify, and I appreciate your recommendations.
    I'm bumping this thread since it's the first result for 192.168.1.1 on IPv6 on Google right now, and since there's no way to see how often it's being referenced, I wanted to add some additional information.
    Multiple NICs can be specified by using the scope ID parameter supported since Vista, which appears as a percent sign at the end of IPv6 addresses. It uniquely identifies the network adapter even when that adapter shares the same host portion of the IPv6 address space (i.e., essentially has the same IP, which in IPv4 is invalid). I'll give some examples at the end of the post.
    Following the recommendation to deprecate the fec0 prefix while maintaining a link-local addressing scheme is possible through the prefix length at the beginning of the IPv6 address. As this reference at IBM explains, fe80:: maps to a link-local prefix length of 64, equivalent to an IPv4 /24, and anything else before the double colon refers to the network portion of the IPv6 address.
    The host portion of the IP address then _could_ be ::20, ::21, etc., as you said, but to follow this MSDN recommendation, it would be more appropriate to use the same host portion and add a suffix for the scope ID documented on that page. The suffix may be specific to Windows and may not work in an equivalent way in heterogeneous platform deployments. But since the effect is limited to the local machine, it should help anything past XP differentiate NICs when they are assigned the same host portion.
    The approach taken in the random IPv6 generator linked elsewhere on this page leaves open the possibility, however unlikely, that the generated IP can route to some other host on an open network that happens to have generated the same network portion of
    the address (the other host would be sharing the same network.) If any part should be random, it's the host portion after the double-colon, not the network portion at the beginning, so that the possibility does not exist.
    Additionally, the host portion doesn't have to be random; it's just done that way because it's usually automatically generated, and a random number is safer for a computer than relying on a sequence that may not fully cover all the numbers used so far. If you're doing a manual deployment you can combine the above information with the inline zero-suppression in IPv6 to assign numbers in the following way:
    fe80::1:1%1 (first computer is 1:1, first interface is %1)
    fe80::1:1%2 (second interface)
    fe80::1:2%1 (second computer, first interface)
    Effectively here we're swapping "192.168.1" for "fe80::1", which is roughly the same length (taking into account variations like 10.0.0). The only gotcha is that _either_ the string after the double colon can't be 1 by itself, since that's reserved for local machine loopback, _or_ the second-to-last number after the double colon can't be 0, since that's equivalent due to inline suppression.
    Other combinations are fine, like fe80::2%1 and fe80::2%2 for the first computer, then ::3 for the second, etc. I thought having a 2-index for the first machine is too uncommon to look familiar so I chose the alternative, but even something like fe80::fe%80
    is perfectly fine.
    If you don't need to identify individual NICs then omitting the part after the percent sign makes fe80::10, fe80::11 a valid sequence for 2 computers. For over 255 computers just add another number before the last, so that it looks like fe80::1:10, fe80::1:11,
    etc. That should be easier to remember than the randomly generated numbers.
    There is also another way if the preference is to use IPv4-lookalike addresses. The mapped-address spec is defined in RFC 4291 and goes along the lines of "::ffff:192.168.1.1" as a valid IPv6 address for the gateway, for example. That is a newer recommendation than the RFC that the random-number generator linked elsewhere on this page relies on.
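The fe80::1:1%1-style scheme above can be checked programmatically. A minimal Java sketch (the addresses and scope IDs are the illustrative ones from this post, not anything assigned on a real network; Java accepts a numeric scope ID without validating it against an actual interface):

```java
import java.net.Inet6Address;
import java.net.InetAddress;

public class ScopeIdSketch {
    public static void main(String[] args) throws Exception {
        // Link-local addresses with numeric scope IDs, as in the
        // fe80::1:1%1 numbering scheme sketched above.
        Inet6Address a = (Inet6Address) InetAddress.getByName("fe80::1:1%1");
        Inet6Address b = (Inet6Address) InetAddress.getByName("fe80::1:1%2");

        System.out.println(a.isLinkLocalAddress()); // fe80::/10 is link-local
        System.out.println(a.getScopeId());         // the %1 suffix
        System.out.println(b.getScopeId());         // the %2 suffix
    }
}
```

Both addresses share the same host portion; only the scope ID (the part after the percent sign) distinguishes the interfaces, which is exactly the property the post relies on.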

  • What's the most efficient way to extend product and catalog classes?

    If I have additional product and catalog properties in a new database table, should
    I create a new class and extend the ProductItem and Catalog classes or should
    I manage these properties via the set/get property methods on the existing classes?
    Is there a difference in performance with the two approaches?

    Performance-wise, using the set/get property methods is going to be more
    expensive... However, I would recommend the property approach. You really
    don't want to modify or extend the provided source code. I have done this
    and the approach works out fine...
    Another approach that I have used is to create additional tables and classes
    that key off the product and catalog classes. Again, the intention is that
    we don't modify the provided stuff... For example, we implemented related
    products using this approach.
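A minimal sketch of that side-table idea in plain Java. The class and column names (`ProductExtras`, `productId`) are illustrative, not part of the ProductItem/Catalog API; the point is that extra attributes live in a separate structure keyed by the product's primary key rather than in a modified product class:

```java
import java.util.HashMap;
import java.util.Map;

// Extra product attributes stored beside the product, keyed off its ID,
// so the vendor-provided ProductItem class is never touched.
public class ProductExtras {
    private final String productId;               // FK to the product row
    private final Map<String, String> extras = new HashMap<>();

    public ProductExtras(String productId) { this.productId = productId; }

    public String getProductId() { return productId; }
    public void put(String name, String value) { extras.put(name, value); }
    public String get(String name) { return extras.get(name); }

    public static void main(String[] args) {
        ProductExtras e = new ProductExtras("SKU-1001");
        e.put("warrantyMonths", "24");
        System.out.println(e.getProductId() + " " + e.get("warrantyMonths"));
    }
}
```

In a real deployment the map would be backed by the additional database table mentioned in the question, loaded and saved by whatever persistence layer the catalog uses.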
    "Bill" <[email protected]> wrote in message
    news:3fbcf598$[email protected]..
    >
    If I have additional product and catalog properties in a new databasetable, should
    I create a new class and extend the ProductItem and Catalog classes orshould
    I manage these properties via the set/get property methods on the existingclasses?
    Is there a difference in performance with the two approaches?

  • Most robust way to preserve data insert into CLOB?

    Hello.
    When inserting data into a CLOB column, some of the characters are not being preserved, such as ellipsis characters and other unusual characters. The data is being inserted via JDBC as a plain SQL string built in Java and executed with a Statement object. So, what is the most appropriate way to insert information into a CLOB such that all characters are preserved? (Do I have to use PreparedStatement or Clob.setString()?)
    For example, the following sql insert:
    "INSERT INTO t1 (myClob) VALUES ('Hello ... how are you?');"
    Will end up inserting something like:
    "Hello ? how are you?"
    The ellipsis character (Unicode U+2026, I believe) is changed to something else. How can I preserve this and other characters? Thanks in advance.

    Hi,
    This must have something to do with NCHAR or Unicode characters.
    Here are the notes I put in chapter 8 of my book, section 8.2.2:
    Notes:
    Pre-10.2 JDBC does not support NCHAR literals (n'...') containing Unicode characters that cannot be represented in the database character set.
    Using 10.2 JDBC against a 10.2 RDBMS, NCHAR literals (n'...') are converted to Unicode literals (u'...'); non-ASCII characters are converted to their corresponding Unicode escape sequences.
    Using 10.2 JDBC against a pre-10.2 RDBMS, NCHAR literals (n'...') are not converted and generate undetermined content for characters that cannot be represented in the database character set.
    Fwiw, here is my blog entry relative to Lobs (CLOB, BLOB, BFILE) :
    http://db360.blogspot.com/2006/11/get-bolder-with-lobs-manipulation-in.html
    Kuassi, http://db360.blogspot.com/2006/08/oracle-database-programming-using-java_01.html
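As a practical complement to the notes above, a hedged JDBC sketch: binding the text as a parameter instead of embedding it in the SQL literal keeps it as UTF-16 Java data all the way to the driver, sidestepping the statement-level character-set conversion. The `conn` object is hypothetical (there is no real database here), so the JDBC calls are shown as comments; the runnable part only demonstrates that the U+2026 ellipsis survives in the Java string and a UTF-8 round trip:

```java
import java.nio.charset.StandardCharsets;

public class ClobParamSketch {
    public static void main(String[] args) {
        // The ellipsis character that was being mangled (U+2026).
        String text = "Hello \u2026 how are you?";

        // Embedding the string in the SQL text subjects the whole statement
        // to client/database character-set conversion:
        String literalSql = "INSERT INTO t1 (myClob) VALUES ('" + text + "')";

        // Binding it as a parameter avoids that (hypothetical `conn`):
        //   PreparedStatement ps =
        //       conn.prepareStatement("INSERT INTO t1 (myClob) VALUES (?)");
        //   ps.setString(1, text);   // or setNString / Clob.setString
        //   ps.executeUpdate();

        // The character is intact in the Java string and in UTF-8 bytes:
        System.out.println(text.indexOf('\u2026') >= 0);
        System.out.println(new String(text.getBytes(StandardCharsets.UTF_8),
                                      StandardCharsets.UTF_8).equals(text));
    }
}
```

Whether the bound value then survives in the database still depends on the database character set (or using an NCLOB/NVARCHAR2 column), per the 10.2 notes above.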

  • What is the most efficient way to pass LV data to a dll?

    For efficiency, this question primarily becomes important when passing large arrays, structures containing large arrays, or generally, any large block of data to a dll.
    From the way the dll setup appears and the .c file it creates for the dll call, it appears that LabVIEW directly passes data in whatever native format LV requires, without copying the actual data, if you select "Adapt to Type" as the "Type" option. If I pass an array, for example, the .c file contains the type definition of a LabVIEW array, i.e., (size, data), and depending on whether I select handle or handle pointer, the data passed is either a handle or a handle pointer to the array. Likewise, if I pass a LV structure, the .c file will contain the typedef for the structure and the data passed is either a pointer, or a pointer to a pointer. These are, I believe, LabVIEW native types and do not require copying.
    On the other hand, if an array is passed as an array type, then it is converted to a C array, which requires LV to copy the array on both sides of the call.
    I further assume all structures can be passed to the memory manager to be manipulated, although I'm actually not sure that you could resize an array pointer in the dll. That seems a bit dubious, but then I guess upon return LV could query the memory manager to determine the array pointer size.
    That's how I would think things work. If not, could someone please correct me?
    Kind regards,
    Eric

    Eric,
    Let me tell you something about me too...
    I've been working with LabVIEW for (just) 4 years. That is, 40 hours a week
    professionally, 10 hours a week privately. I started with LV4, and went
    through all versions and revisions until 6.0.2 (6.1 will come soon, but
    first I have to finish some major projects).
    During this time I've been working on lots of interfaces with the Windows
    OS. Some 'dll' things I've worked on: OpenGL interface, MSXML driver,
    keyboard hooks, mouse hooks, GDI interfacing, calling LV dll's from
    assembler, calling assembler dll's from LV, creating threads, using serial
    interrupts, etc. I'm now (also) working on a way to automatically generate
    documentation (much more than the 'export VI strings') from a VI. This
    requires 'under the hood' knowledge about how VI's work.
    When I had to make a fast routine for a project one time, I chose
    assembler, because I had this knowledge. Also, I wanted to use pure SIMD
    operations. The operation had to modify an array of DBLs. The SIMD uses
    the same format (IEEE 754, I think), so it was easy. But when it came to
    testing, it appeared that the routine only paid off if the routine was 'long'
    enough. The routine was n*O^2, where n was a parameter. When the array was
    large, and n small, the overhead of copying the array to modifiable memory
    was relatively large, and the LV routine was faster.
    When I get a pointer to a LV array, I can use this pointer to modify the
    data in the array. This can (I think) only be done if LV copied this data,
    just like LV does when a wire is split to be modified.
    It might be that this copying can be prevented, e.g. by using other data
    types, or fiddling with threads and reentrancy... If you want to optimally
    benefit from dll's I'd look for a way to keep the data in the dll space, or
    pass it once at initialisation. You could use CreateHeap, HeapAlloc,
    GlobalAlloc, and other functions. You can use these functions in LV, or in
    the dll. Once you have a pointer to the (one and only) data space, you can
    use this to pass to the dll functions.
    I think LV does not show the memory in question in the profiler, but I'm not
    sure.
    Using the "Adapt to Type" option might just result in an internal conversion
    at 'compile' time, and might be exactly the same as doing it yourself.
    Perhaps you can share a bit about the application you are making, or at
    least why you need the speed you are seeking?
    Regards,
    Wiebe.
    "Eric6756" wrote in message
    news:[email protected]...
    Greg,
    There are two relevant documents which are distributed with LabVIEW;
    in LabVIEW 6i (hey, I'll get around to upgrading), the first is
    titled "Using External Code in LabVIEW", the second is Application
    Note 154, "LabVIEW Data Formats".
    Actually, a statement Wiebe@air made on my previous question regarding
    dll calls, "Do dll calls monopolize the calling thread?", provoked this
    line of questions. Based on other things he has said, I gather he is
    also using dlls. So as long as we're here, let me ask the next
    question...
    If LabVIEW must make a copy of the passed data, does it show up as
    additional memory blocks in the VI profiler? In other words, can you
    use the profiler to infer what LabVIEW is doing, or as you put it,
    infer whether there is a clever passing method available?
    As a personal note, Greg:
    First, as a one-time engineering student and teaching assistant, I
    don't recall hearing or using the terms "magical" or "clever". Nor,
    I might add, do I find them in print elsewhere in technical journals.
    While I don't mind NI marketing in their marketing documents, used
    here in this mostly educational forum, they strike me as arrogant
    and/or pompous.
    I like NI's products because they work and are reliable. I doubt it
    has anything to do with magic, has somewhat more to do with being
    clever, but is mostly due to the dogmatic persistence of your
    engineers. I rather doubt any of them adjoin the term "magical" or
    even "clever" to their solutions. I believe the term "best" is
    generally accepted with the qualifier "I've or we've found". At
    least, that has been my engineering experience.
    Second, many of my questions I can sort out on my own, but I figure as
    long as you're willing to answer the questions, then your answers are
    generally available to others. The problem is that one question seems
    to lead to another, and specific information gets buried in a rather
    lengthy discourse. When I come here with a specific question, it
    would be nice to find it asked and answered specifically rather than
    buried in the obscurity of some other question. As such, at some
    point in these discussions it might be appropriate to reframe a
    question and put it at the top. In my opinion, that decision is
    primarily yours, as you have a better feel for the redundancy of
    the questions asked and/or your answers.
    Anyway, the next question I'm posting at the top is, "Do the handles
    passed to a dll have to be locked down to ensure other threads don't
    move the data?"
    Thanks,
    Kind Regards,
    Eric

  • What is the most reliable way to run one Windows accounting program in Osx 10.5.8?

    I have a simple Windows application where I basically do my accounts. I do not want to tie up my iMac's resources running a full Windows OS "emulation" like Parallels or VMware Fusion just to run a simple accounting program.
    I have heard about apps that can run one Windows application at a time through a kind of OS X application which emulates Windows somehow. Is it true that this is a better way of running my accounting program? Is it reliable? What I do is punch in invoices and print out reports, basically. How many resources does this "consume" in "one simple application mode" in comparison with, for example, Parallels?
    And what is the term for running Windows in "one simple application mode"?

    I guess you could try the free trial of CrossOver. http://www.codeweavers.com/products/crossover/ You did not say what app you want to run, so it is difficult for us to say whether CrossOver can run it.
    But your question is that you are looking for the "most reliable" way to run a Windows program. In my opinion you need Parallels or Fusion. These run the real Windows OS so my thinking is it is the "most reliable" method of running a Windows app on an Apple computer, other than Boot Camp.
    But you said you don't want Parallels. This is probably why you have not received any responses. You eliminated the most reliable and easiest method, in my opinion, when you initially posted. Parallels does not consume any resources until you start it up to use your app. You can close it after you are done with your app. Of course hard drive space is required for storing the OS, your app, and data. But you still need to store your app and data regardless of method you choose.
    One more thing, Parallels and Fusion create a virtual machine for running Windows.
    So my recommendation, use Parallels or Fusion.

  • What is the most efficient (in terms of cost) way to transfer 35 mm slides to my iMac hard drive?

    What is the most cost efficient way to transfer 35mm slide images to my iMac hard drive?

    This gives you the basics
    http://www.tech-faq.com/how-to-copy-slides-to-disk.htm
    The first option, the professional one, isn't the cheapest - but the results should be good.
    The second option is very appealing. Some printers come with brackets that can be used to scan film or slides.
    The third option is a slide scanner. I haven't used this option; there is a startup cost, though it is under $100.
    In short, a scanner is the cheapest and most efficient option.

  • What is the best, most efficient way to read a .xls File and create a pipe-delimited .csv File?

    What is the best and most efficient way to read a .xls File and create a pipe-delimited .csv File?
    Thanks in advance for your review and am hopeful for a reply.
    ITBobbyP85

    You should have no trouble doing this in SSIS. Simply add a data flow with connection managers to an existing .xls file (excel connection manager) and a new .csv file (flat file). Add a source to the xls and destination to the csv, and set the destination
    csv parameter "delay validation" to true. Use an expression to define the name of the new .csv file.
    In the flat file connection manager, set the column delimiter to the pipe character.

  • I have an ipod classic with 4gb, it is full. What is the most economical way to upgrade to at least 16gb

    I have an ipod classic with 4gb, it is full. What is the most economical way to upgrade to at least 16gb?
    (I don't want to delete anything)

    To recover the photos from an iPod Classic you'll need to use third-party software and the photos gained will be quite low resolution.
    See https://discussions.apple.com/docs/DOC-3991 for possibilities.

  • What is the most cost effective way to have a broken screen repaired?

    What is the most cost effective way to have a broken screen repaired?

    One thing to bear in mind, if your iPad is still under any sort of warranty, using any unauthorized repair place may void that.
    So you may find less expensive places that are non-authorized, but once you use them, Apple disowns your device and if you have any warranty left, it'll be void.

  • I have a small production client looking to run 1 workstation running Mac Lion 10.7.3 and two workstations running Windows 7 64-bit. They will all be talking to the same storage array through 8Gb FC. What is the most cost-effective way to do this?

    I have a small production client looking to run 1 workstation running Mac Lion 10.7.3 and two workstations running Windows 7 64-bit. They will all be talking to the same storage array through 8Gb FC. What is the most cost-effective way to do this?

    Thank you for your help.
    The client has already made the jump to 8Gb, including HBAs, switch and RAID storage.
    The other question is whether they need a separate Mac server to run the metadata, or whether they can use the current Mac they are running to do this?
    The Mac is a 201073 model, they say, with 12 dual-core 2.66GHz processors and 16 GB of memory. This system is currently doing rendering. It has the Xsan client, but I understand that for the solution to work they also need to run Xsan Server on an MDC.

  • What is the best way to keep 7 days most current data in tabledata

    Hi,
    Can anybody help me keep the most recent 7 days of data in a table? Any data older than 7 days should be removed, which can be done once a day.
    What is the best way to do this? partition, re_create table or anything else?
    Thanks
    George

    Have you considered partitioning the table? You could then drop the oldest partition every day, which would be marginally quicker & easier than deleting the 10,000 rows. 100,000 rows is a pretty small table to partition, but it may be worth considering.
    I wouldn't expect there to be a significant difference on query performance no matter how you set this up. You won't reset the high water mark, but that is only important when you are doing a full table scan and Oracle will reuse the space for the current day's data, so it's not like the high water mark will be constantly increasing.
    Justin
    Distributed Database Consulting, Inc.
    http://www.ddbcinc.com/askDDBC
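A small sketch of how the daily purge could be scripted if you adopt a one-partition-per-day scheme. The `t1` table and `p_YYYYMMDD` partition naming are assumptions for illustration, not anything from the original post; the code only builds the DDL string that a scheduled job would then execute against the database:

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

// Drop-oldest-partition retention: with one partition per day named
// p_YYYYMMDD (assumed convention), the daily job drops the partition
// that just fell outside the retention window.
public class RetentionSketch {
    static final DateTimeFormatter FMT = DateTimeFormatter.ofPattern("yyyyMMdd");

    static String dropSqlFor(LocalDate today, int keepDays) {
        LocalDate expired = today.minusDays(keepDays);
        return "ALTER TABLE t1 DROP PARTITION p_" + FMT.format(expired);
    }

    public static void main(String[] args) {
        // On 2024-01-10 with a 7-day window, the 2024-01-03 partition expires.
        System.out.println(dropSqlFor(LocalDate.of(2024, 1, 10), 7));
    }
}
```

Dropping a partition is a dictionary operation rather than a row-by-row DELETE, which is the "marginally quicker & easier" point made above.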

  • I am still running OS X 10.5.8. What is the next step up, and what is the easiest way to get it? Does the Genius Bar have the capability to get me to the most recent software if I take my iMac in?

    I am still running OS X 10.5.8. What is the next step up, and what is the easiest way to get it? Does the Genius Bar have the capability to get me to the most recent software if I take my iMac in?

    iMac G5 - you are at the end of the line, though you may be able to upgrade your RAM, or hard drive.
    iMac CoreSolo, CoreDuo - you can upgrade to 1 GB of RAM or 2 GB of RAM and get up to 10.6.8.
    iMac Core2Duo - you can upgrade to 2 GB of RAM or more and get up to 10.7.2.
    Go to Apple menu -> About This Mac to determine which Mac model you have.
    Here's what you have to watch out for on 10.6 to 10.6.8.
    Here's what you need to watch out for on 10.7 to 10.7.2.
    Some stores may not have the 10.6 installer CD available, or may not have the 10.7 flash drive available. For those you have to make the purchase elsewhere
    and bring it to the store with you. http://www.apple.com/retail has phone numbers for all their stores, and extension 5 often will get you a real person at the store; if you don't get one, you might try a different extension.

  • What is the most effective way to write statements to catch and reverse errors during query execution?

    Hello my friends:
    I am wondering what is the most effective way to deal with errors, specifically in a
    Stored Procedure.
    I wrote something like this:
    BEGIN TRY
    BEGIN TRANSACTION
    /*My statements goes in here*/
    IF ERROR_NUMBER() = 0 -- Do I need this line?
    COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
    IF ERROR_NUMBER() > 0 --Do I need this line?
    ROLLBACK TRANSACTION;
    END CATCH;
    Using the IF statement would make sense when attempting to log errors.
    Just too many variations.
    Thanks

    Also read this great article
    http://sqlblog.com/blogs/alexander_kuznetsov/archive/2009/05/13/your-try-block-may-fail-and-your-catch-block-may-be-bypassed.aspx
    Best Regards, Uri Dimant, SQL Server MVP,
    http://sqlblog.com/blogs/uri_dimant/
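For comparison, the same commit-on-success, roll-back-on-error shape at the application layer in JDBC. This is a sketch, not the T-SQL answer itself; the `Connection` here is a reflective stand-in that merely records which calls were made, so the pattern can be exercised without a database:

```java
import java.lang.reflect.Proxy;
import java.sql.Connection;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

public class TxSketch {
    interface TxBody { void run(Connection c) throws SQLException; }

    // Commit on success, roll back on any failure -- the JDBC analogue
    // of BEGIN TRY ... COMMIT / BEGIN CATCH ... ROLLBACK in T-SQL.
    static void inTransaction(Connection c, TxBody body) throws SQLException {
        boolean oldAutoCommit = c.getAutoCommit();
        c.setAutoCommit(false);
        try {
            body.run(c);
            c.commit();
        } catch (SQLException e) {
            c.rollback();
            throw e;
        } finally {
            c.setAutoCommit(oldAutoCommit);
        }
    }

    public static void main(String[] args) throws Exception {
        List<String> calls = new ArrayList<>();
        // Stand-in Connection that just records commit/rollback calls.
        Connection fake = (Connection) Proxy.newProxyInstance(
            Connection.class.getClassLoader(),
            new Class<?>[]{Connection.class},
            (proxy, method, margs) -> {
                calls.add(method.getName());
                if (method.getName().equals("getAutoCommit")) return true;
                return null;
            });
        try {
            inTransaction(fake, c -> { throw new SQLException("boom"); });
        } catch (SQLException expected) { }
        // On failure: rollback happened, commit did not.
        System.out.println(calls.contains("rollback") && !calls.contains("commit"));
    }
}
```

The catch block rolls back unconditionally, which mirrors the linked article's advice that the CATCH side should not assume anything about the error state before deciding to roll back.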

  • What is the most efficient way to create both a standard and an HD DVD

    Hi,
    I'm just getting into HD video editing, and I have relatives who will be several years behind me. I will be making HD DVDs for myself to use, but I will also need to make standard DVDs to send to my relatives. I am assuming that the HD DVDs I create will not play in a standard DVD player. I may be wrong. However, if this is correct, is there a particular point in the process of working with Encore where I can make that choice, create the DVD, and then go back and alter it in order to create the other type without destroying and having to redo everything that was done up to that point?

    Thank you, Hunt. Being ahead of most of the rest of the world and getting the best technology out there can be a pain sometimes. Reading through Jon Geddes' article left me scratching my head several times, and some of it went way over my head, but I'll keep at it until it sinks in. Some language, terms, and shorthand, I'm sure, are simple for a lot of people to understand, but I'm not in that category. I'll just keep working at it.
    Terry Lee Martin
    I would edit the Project in HD in PrPro. The BD authoring part will be straight workflow.
    For the SD DVD-Video, you have a few choices. You can Export to DV-AVI Type II for Import into a new Encore Project for the DVD. Some feel that PrPro does not do a good job at down-rezing from HD to SD. For a workflow that will likely yield better quality, see this http://www.precomposed.com/blog/2009/07/hd-to-sd-dvd-best-methods/. If you have PrPro/Encore CS4, then Jeff Bellune's /thread/487134?tstart=0 might be useful to you. Just follow the links to the tutorial.
    Good luck,
    Hunt
