Big transaction log file

Hi,
I found a SQL Server database with a 65 GB transaction log file.
The database is configured with the full recovery model.
I also noticed that since the database was created, only full database backups have been taken; no transaction log backups were ever executed.
Now the 65 GB transaction log file uses more than 70% of the disk space.
Which scenario do you recommend?
1. Back up the database, back up the transaction log to a new disk, shrink the transaction log file, and schedule transaction log backups every hour.
2. Back up the database, set the recovery model to simple, shrink the transaction log file, and back up the database again.
Would shrinking the 65 GB file have any impact on my database users?
The SQL Server version is 2008 SP2 (10.0.4000).
regards
D

I've read the other posts and my position is: it really doesn't matter.
You've never needed point-in-time restore capability since the database's inception. Since a full database backup contains all of the log needed to bring the database into a consistent state, the log that has accumulated without log backups is redundant and just taking up space.
For the fastest option I would personally do the following:
1. Take a full database backup
2. Set the database recovery model to Simple
3. Manually issue two checkpoints for good measure, or check that the current (active) VLF is near the beginning of the log file
4. Shrink the log using the TRUNCATEONLY option to lop off the unused end of the log
5. Manually re-size the log based on usage needed
6. Set the recovery model to full
7. Take a differential database backup to bridge the log gap
The total time this takes is really just the full database backup and the re-expanding of the log file. The shrink should be close to instantaneous, since you're just truncating the end, and the differential backup should be fairly quick as well. If you don't need the full recovery model, leave it in simple, reset the log size (through multiple grows if needed), and take a new full backup for safekeeping.
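A minimal T-SQL sketch of those seven steps, assuming a placeholder database MyDB with a logical log file name MyDB_log, a backup folder D:\Backups, and a target log size of 8 GB (all of these are assumptions to adjust to your environment):

-- 1. Full database backup
BACKUP DATABASE MyDB TO DISK = 'D:\Backups\MyDB_full.bak';
-- 2. Switch to the simple recovery model
ALTER DATABASE MyDB SET RECOVERY SIMPLE;
-- 3. Checkpoints to help push the active VLF toward the start of the file
CHECKPOINT;
CHECKPOINT;
-- 4. Truncate-only shrink: releases the unused tail of the log without moving data
USE MyDB;
DBCC SHRINKFILE (MyDB_log, TRUNCATEONLY);
-- 5. Re-size the log to the expected working size
ALTER DATABASE MyDB MODIFY FILE (NAME = MyDB_log, SIZE = 8192MB);
-- 6. Back to the full recovery model
ALTER DATABASE MyDB SET RECOVERY FULL;
-- 7. Differential backup to bridge the log gap and restart the backup chain
BACKUP DATABASE MyDB TO DISK = 'D:\Backups\MyDB_diff.bak' WITH DIFFERENTIAL;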
Sean Gallardy

Similar Messages

  • Transactional log file issue

    Dear All,
There have been issues in the past where the transaction log file has grown so big that it nearly filled its drive. I would like answers to the following, please:
1. To resolve the space issue, is the correct approach to first take a backup of the transaction log and then shrink the transaction log file?
2. What would be the recommended autogrowth size, for example for a database of 1060 GB?
3. At the moment the transaction log backup runs every hour; should it be taken more frequently?
4. How often should the update-statistics job run?
Thank you in advance!

    Hi
My answers may be very similar to what others have already said, but I hope they add something more.
    1. To resolve the space issue, is the correct way to first take a backup of the transactional log then shrink the transactional log file?
--> If the database recovery model is full or bulk-logged, then a t-log backup is what frees log space. If that doesn't help, try increasing the frequency of the log backups, and you can refer to:
    Factors That Can Delay Log Truncation
    2. What would be the recommended auto growth size, for example if I have a DB which is 1060 GB?
Autogrowth for a very large DB is crucial: an increment that is too large creates huge active VLFs, while one that is too small causes log fragmentation (too many VLFs). In your case the priority is to control space utilization.
I suggest you keep a modest autogrowth increment, and it must be specified as a size, not a percentage.
/******* VLFs created per log autogrowth *******/
Autogrowth of less than 64 MB = 4 VLFs
Autogrowth of 64 MB up to 1 GB = 8 VLFs
Autogrowth of 1 GB and larger = 16 VLFs
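For example, to set a fixed 512 MB growth increment rather than a percentage (MyDB and MyDB_log are placeholder names to adjust):

ALTER DATABASE MyDB
MODIFY FILE (NAME = MyDB_log, FILEGROWTH = 512MB);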
    3. At the moment, the transactional log backup is done every 1 hour, but I'm not sure if it should be taken more regularly?
---> If the query below returns LOG_BACKUP for the respective database, then yes, you can increase the log backup frequency. If it returns some other wait reason, please check the link mentioned above.
SELECT name AS [database], log_reuse_wait, log_reuse_wait_desc FROM sys.databases
4. How often should the update-statistics job run?
This totally depends on the amount of DML activity. You can keep auto-update statistics enabled, and weekly you can update statistics with a full scan.
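A minimal sketch of such a weekly full-scan update (dbo.MyTable is a placeholder table name):

-- full scan on one table
UPDATE STATISTICS dbo.MyTable WITH FULLSCAN;
-- or a sampled refresh across the whole database
EXEC sp_updatestats;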
    Thanks Saurabh Sinha
    http://saurabhsinhainblogs.blogspot.in/
    Please click the Mark as answer button and vote as helpful
    if this reply solves your problem

  • What is stored in a transaction log file?

What does the transaction log file store? Is it the blocks of transactions to be executed, a snapshot of the records before a transaction begins, or just the statements found in a transaction block? Please advise.
    mayooran99

Yes, it stores the before and after values of everything that was modified. You first have to understand the need for the transaction log; then it starts to become apparent what is stored in it.
Before a transaction can be committed, SQL Server makes sure that all of its information is hardened in the transaction log, so that if a crash happens it can still recover/restore the data.
When you update some data, the page is fetched into memory and updated there, and the transaction log makes a note of it (before and after values, etc.). At this point the changes are done but not yet physically present in the data page on disk; they exist only in memory. So if a crash happens before a checkpoint or the lazy writer flushes those pages, you would lose that data. This is where the transaction log comes in handy: all of this information is stored in the physical log file, so when your server comes back up, the transaction log rolls this information forward for committed transactions.
When a checkpoint or the lazy writer runs, in simple recovery the log records for that transaction are cleared out, provided there are no older active transactions. In full recovery you take log backups to clear those records from the transaction log.
Writing to the transaction log is generally fast because it is written sequentially; it tracks the data page numbers, LSNs, and other details of what was modified. Similar to the data cache, there is also a transaction log cache that makes this process faster; before a transaction can be committed, it waits until everything related to it has been written to the transaction log on disk.
I advise you to pick up Kalen Delaney's SQL Server Internals book and read the logging and recovery chapter for a better understanding.
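If you're curious to see those log records yourself, the undocumented fn_dblog function exposes them; a minimal sketch, for inspection on a test database only (unsupported by Microsoft, so don't rely on it in production):

-- one row per log record in the active portion of the log
SELECT TOP (20) [Current LSN], Operation, Context, AllocUnitName
FROM fn_dblog(NULL, NULL);  -- NULL, NULL = no LSN range filter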
    Hope it Helps!!

  • Very high transaction log file growth

    Hello
Running Exchange 2010 SP2 in a two-node DAG configuration. Just recently I noticed very high transaction log file growth for one database. The transaction logs are growing so quickly that I had to turn on circular logging to prevent the log LUN from filling up and causing the database to dismount. I have tried several things to find the cause of this issue. At first I thought it could be a virus, an ActiveSync user, a user's Outlook client, or our Salesforce integration; however, when I used ExMon I could not see any unusually high user activity. When I looked at the item count for all mailboxes in the database experiencing the high log growth, I could not see any mailbox with an unusually high item count (the command I ran to determine this is below; I ran it several times). I also looked at the message tracking log files and again could see no indication of a message loop or unusually high message traffic on any particular day. I also followed the guide below hoping it would let me see inside the transaction log files, but when I ran the tool against them all I saw were long runs of D, O, or H characters, which didn't help me understand the cause of this issue.
I am starting to run out of ideas on how to figure out what is causing the log file build-up. Any help is greatly appreciated.
http://blogs.msdn.com/b/scottos/archive/2007/07/12/rough-and-tough-guide-to-identifying-patterns-in-ese-transaction-log-files.aspx
    Get-Mailbox -database databasethatkeepsgrowing | Get-MailboxStatistics | Sort-Object ItemCount -descending |Select-Object DisplayName,ItemCount,@{name="MailboxSize";exp={$_.totalitemsize}} -first 10 | Convertto-Html | out-File c:\temp\report.htm
    Bulls on Parade

If you have users with iPhones or smartphones using ActiveSync, then one of the quickest ways to see if this is the issue is to have those users shut their phones off and see whether the problem is resolved. If it is one or more iPhones, then look at what iOS they are on and get them to update to the latest version, or adjust the ActiveSync connection timeout. NOTE: there was an issue where iPhones caused runaway transaction logs, and I believe it was resolved with iOS 4.0.1.
    There was also a problem with the MS CRM client awhile back so if you are using that check out this link.
    http://social.microsoft.com/Forums/en/crm/thread/6fba6c7f-c514-4e4e-8a2d-7e754b647014
    I would also deploy some tracking methods to see if you can hone in on the culprits, i.e. If you want to see if the problem is coming from an internal Device/Machine you can use one of the following
    MS USER MONITOR:
    http://www.microsoft.com/downloads/en/details.aspx?FamilyId=9A49C22E-E0C7-4B7C-ACEF-729D48AF7BC9&displaylang=en and here is a link on how to use it
    http://www.msexchange.org/tutorials/Microsoft-Exchange-Server-User-Monitor.html
    And this is a great article as well
    http://blogs.msdn.com/b/scottos/archive/2007/07/12/rough-and-tough-guide-to-identifying-patterns-in-ese-transaction-log-files.aspx
    Also check out ExMon since you can use it to confirm which mailbox is unusually active , and then take the appropriate action.
     http://www.microsoft.com/downloads/en/details.aspx?FamilyId=9A49C22E-E0C7-4B7C-ACEF-729D48AF7BC9&displaylang=en
    Troy Werelius
    www.Lucid8.com
    Search, Recover, & Extract Mailboxes, Folders, & Email Items from Offline EDB's and Live Exchange Servers with Lucid8's DigiScope

  • Shrink Transaction log file - - - SAP BPC NW

Hi friends,
We want to shrink the transaction log files in SAP BPC NW 7.0. How can we achieve this?
Please can you throw some light on this.
Why did we think of shrinking the file?
We are getting an "out of memory" error whenever we do any activity, so we thought of shrinking the file (this is not a production server, FYI).
An example of an activity where the out-of-memory issue appears:
SAP BPC Excel >>> eTools >>> Client Options >>> Refresh Dimension Members >>> this leads to a pop-up stating "out of memory".
So we thought of shrinking the file.
Any suggestions, please?
Thank you and kindest regards,
Srikaanth

Hi Poonam,
It is not only Excel that is throwing this kind of message (out of memory); the SAP note would be helpful if we had the error in Excel alone, but we are facing this error everywhere.
We have also found that our hard disk has run out of space; it now has only a few megabytes free.
We want to empty the log files and make some space, and to clear out all our test data, log files, and other stuff.
Please can you recommend a way?
Thank you and kindest regards,
Srikaanth

  • Delete transaction log file

    Hi,
I have three t-log files in my database. Now I want to delete two of the transaction log files.
Can I do the following?
1. DBCC SHRINKFILE(log1, TRUNCATEONLY)
2. DBCC SHRINKFILE(log2, TRUNCATEONLY)
3. Then remove the files using a command or SSMS.
    Regards

Hi Satheesh,
What about this, can I use the procedure below?
DBCC SHRINKFILE(LOG2, EMPTYFILE)
DBCC SHRINKFILE(LOG3, EMPTYFILE)
ALTER DATABASE PRT REMOVE FILE LOG2
ALTER DATABASE PRT REMOVE FILE LOG3
Note: LOG1, my primary log file, already exists and stays in place; I want to remove only the secondary log files.
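Before running the REMOVE FILE commands, it can help to confirm that no active VLF still lives in LOG2 or LOG3; a quick check (run inside the PRT database):

USE PRT;
DBCC LOGINFO;  -- FileId shows which physical log file each VLF belongs to; Status = 2 marks active VLFs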
    Regards

  • WebDAV Query generates a high number of transaction log files

    Hi all,
I have a program that launches WebDAV queries to search for contacts on an Exchange 2007 server. The number of contacts returned for each user's mailbox is quite high (about 4500).
I've noticed that each time the query is launched, about 15 transaction log files are generated on the Exchange server (each of them 1 MB). If I ask for only 2 properties on the contacts, this number drops to about 8.
This is a problem since our program is supposed to run often (about every 3-5 minutes), as it synchronizes Exchange mailboxes with a SQL Server DB. The result is that the logs grow very quickly on the server side, even if there are not many updates.
Any idea why so many transaction logs are generated by a WebDAV search returning many items? I would understand logs being created when an update is done on the server, but here it's only a search, with many contact items returned.
Is there maybe a setting on the Exchange server to control what kind of logs are generated?
    Thank for your help,
    Alexandre

    Hi Alex,
Actually circular logging/backup was not a solution; I was just explaining that there is an option like that on the server, but it is not recommended and hence not useful in our case :)
- I am not a developer, but AFAIK a WebDAV search query shouldn't generate transaction logs, because it just searches the mailboxes and returns the result over HTTP; it doesn't produce any Exchange transaction.
- I wouldn't open the transaction logs, since they are in use by Exchange; doing so may generate errors and may even corrupt the Exchange database. In any case they are not readable, as you observed, by anything other than the Exchange Information Store service (store.exe).
- You can post this query in the development forum to get a better idea, in case another programmer has observed similar symptoms with WebDAV contact search queries in Exchange 2007 or can validate your query.
Microsoft TechNet > Forums Home > Exchange Server > Development
Well, I just saw that you are using Exchange 2007; in that case, why don't you use Exchange Web Services, the better and improved method of accessing/querying mailboxes? WebDAV is de-emphasized in Exchange 2007 and might disappear in the next version of Exchange. Check out the article below for further detail.
    Development: Overview
    http://technet.microsoft.com/en-us/library/aa997614.aspx
    Amit Tank | MVP - Exchange | MCITP:EMA MCSA:M | http://ExchangeShare.WordPress.com

  • Transaction Log File Drive is missing from SAN

Hi all, we had some SAN issues and we don't have the transaction log files for some databases.
This is a SQL Server 2008 R2 cluster. The drive which was holding the t-log files went missing. Please let me know how to bring the databases back. Awaiting an early reply.

As others have said, the SAN folks need to get their act together and bring back the disk with the log files.
If the log files are truly lost, you should restore a clean backup. If you don't have a clean backup, well, there are some people in your company who are likely to ask you some questions about what is going on in the data centre.
It was suggested that you should detach the data file and reattach it to have a new log file created. I strongly recommend against this. You will get a database that is likely to have corruption and inconsistency at both the SQL Server level and the application level, due to transactions that were in flight when the log files were lost.
    Erland Sommarskog, SQL Server MVP, [email protected]

  • Log Reader Agent: transaction log file scan and failure to construct a replicated command

I encountered the following error message from the Log Reader job created as part of a transactional replication setup on the publisher. As a result of this error, no transactions propagated from the publisher to any of its subscribers.
    Error Message
    2008-02-12 13:06:57.765 Status: 4, code: 22043, text: 'The Log Reader Agent is scanning the transaction log for commands to be replicated. Approximately 24500000 log records have been scanned in pass # 1, 68847 of which were marked for replication, elapsed time 66018 (ms).'.
    2008-02-12 13:06:57.843 Status: 0, code: 20011, text: 'The process could not execute 'sp_replcmds' on ServerName.'.
    2008-02-12 13:06:57.843 Status: 0, code: 18805, text: 'The Log Reader Agent failed to construct a replicated command from log sequence number (LSN) {00065e22:0002e3d0:0006}. Back up the publication database and contact Customer Support Services.'.
    2008-02-12 13:06:57.843 Status: 0, code: 22037, text: 'The process could not execute 'sp_replcmds' on 'ServerName'.'.
    Replication agent job kept trying after specified intervals and kept failing with that message.
    Investigation
I could clearly see there were transactions waiting to be delivered to subscribers from the following:
    SELECT * FROM dbo.MSrepl_transactions -- 1162
    SELECT * FROM dbo.MSrepl_commands -- 821922
The following steps were taken to investigate the problem further; they confirmed that transactions were queued waiting to be delivered to the distribution database.
    -- Returns the commands for transactions marked for replication
    EXEC sp_replcmds
    -- Returns a result set of all the transactions in the publication database transaction log that are marked for replication but have not been marked as distributed.
    EXEC sp_repltrans
    -- Returns the commands for transactions marked for replication in readable format
    EXEC sp_replshowcmds
    Resolution
Taking a backup as suggested in the message wouldn't resolve the issue. None of the commands retrieved via sp_browsereplcmds for the LSN mentioned in the message had any syntactic problems either.
    exec sp_browsereplcmds @xact_seqno_start = '0x00065e220002e3d00006'
In a desperate attempt to resolve the problem, I decided to drop all subscriptions. To my surprise, the Log Reader kept failing with the same error. I had thought that with no subscriptions for the publications, the Log Reader agent would have no reason to scan the publisher's transaction log, but obviously I was wrong. Even adding a new log reader with sp_addlogreader_agent after deleting the old one did not help, and restarting the server couldn't do much good either.
    EXEC sp_addlogreader_agent
    @job_login = 'LoginName',
    @job_password = 'Password',
    @publisher_security_mode = 1;
When nothing else worked, I decided to try the following procedures, which are reserved for troubleshooting replication.
    --Updates the record that identifies the last distributed transaction of the server
    EXEC sp_repldone @xactid = NULL, @xact_segno = NULL, @numtrans = 0, @time = 0, @reset = 1
    -- Flushes the article cache
    EXEC sp_replflush
Bingo!
The Log Reader agent managed to start successfully this time. I wish I had tried both commands before deciding to drop the subscriptions; it would have saved me the considerable effort and time spent re-creating them.
    Question
Even though I managed to resolve the error and have replication functioning again, I think there might have been a better solution, and I would appreciate your feedback and your approach to resolving this problem.

    Hi Hilary,
Will the command below truncate the log records marked for replication? Is there any data loss when we execute it? Can you please help me understand the internal workings of this command?
    EXEC sp_repldone @xactid = NULL, @xact_segno = NULL, @numtrans = 0, @time = 0, @reset = 1

  • Exchange 2010 SP3, RU5 - Massive Transaction Log File Generation

    Hey All,
I am trying to figure out why one of our databases is generating 30k log files a day (the other one is generating 20k a day). The database does not grow in size as the log files are generated; the problem is purely log file generation.
    I've tried running through some of the various solutions out there, reviewed message tracking logs, rpc client access logs, IIS Logs - all of which show important info, but none of which actually provide the answers.
    I Stopped the following services to see if that would affect the log file generation in any way, and it has not!
    MS Exchange Transport
    Mail Submission
    IIS (Site Stopped in IIS)
    Mailbox Assistants
    Content Indexing Service
With the above services stopped, I still see dozens (or more) of log files generated in under 10 minutes. I also checked the mailbox size reports (top 10) and found that, over the whole day, one user's item count increased by about 300 and another user's mailbox grew by about 150 MB.
    I am not sure what else to check here? Any ideas?
    Thanks,
Robert

Hmm - this sounds like a device is chewing up the logs.
If you use Log Parser Studio, are there any stand-out devices in terms of the number of hits?
And for ExMon, was that logged over a period of time? The default 60-second window normally misses a lot of stuff. Just curious!
    Cheers,
    Rhoderick
    Microsoft Senior Exchange PFE
Blog: http://blogs.technet.com/rmilne
Note: Posts are provided “AS IS” without warranty of any kind, either expressed or implied, including but not limited to the implied warranties of merchantability and/or fitness for a particular purpose.
Rhoderick,
Thanks for the response. When checking the logs, the highest number of hits came from the (source) load balancers' port 25 VIP. The problems I was experiencing were the following:
1) I kept expecting the log file generation to drop to an acceptable rate of 10-20 MB per minute (max). We have a large environment and use the Exchange servers as the mail relays for the hated Nagios monitoring environment.
2) We didn't have our enterprise monitoring system watching SMTP traffic; this is being resolved.
3) I needed to look closer at the SMTP transport database counters, logs, and log files and focus less on the database log generation; I did some of that, but not enough.
4) My troubleshooting kept getting thrown off because the monitoring notifications seemed to go out in batches (or something similar); stopping the transport service for 10-15 minutes several times seemed to finally stop the transaction logs from growing at a psychotic rate.
5) I am re-running my data captures now that I have told the Nagios team to quit killing the Exchange servers with their notifications, sometimes 100+ of the same notification for the same server or issue. So far, at a quick glance, the log file generation seems to have dropped by about 30%.
Question: what would be the best counters to review in order to put it all together? Also note: our server roles are split, MBX and CAS/HT.
Robert

  • Why is the transaction log file not truncated though its simple recovery model?

My database uses the simple recovery model, and when I view the free space in the log file it shows 99%. Why doesn't my log file automatically truncate committed data to free space in the ldf file? When I shrink it, it does shrink. Please advise.
mayooran99

If log records were never deleted (truncated) from the transaction log, it would not show as 99% free. In the simple recovery model, log truncation automatically frees space in the logical log for reuse, and that is exactly what you are seeing. Truncation does not change the file size; it is more like log clearing, marking parts of the log as free for reuse.
As you said, "When I shrink it does shrink", so I don't see any issue here. Log truncation and shrinking the file are two different things.
Please read the link below for an explanation of transaction log truncate vs. shrink:
http://blog.sqlxdetails.com/transaction-log-truncate-why-it-didnt-shrink-my-log/
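To see the difference yourself, compare the file size with the space actually used inside it; a minimal sketch (the logical file name is a placeholder):

DBCC SQLPERF(LOGSPACE);            -- per database: log file size and percent of it in use
-- truncation lowers the percent used; only a shrink changes the file size:
DBCC SHRINKFILE (MyDB_log, 1024);  -- release unused space down to a 1024 MB target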

  • Cancel the query which uses full transaction log file

    Hi,
We have a reindexing job that runs every Sunday. During the last run the transaction log filled up, and subsequent transactions against the database errored out with 'Transaction log is full'. I want to restrict the utilization of the log file; that is, when the reindexing job pushes log utilization past a certain threshold, the job should be cancelled automatically. Is there any way to do this?

Hello,
Instead of putting a limit on the transaction log, it would be better to find the cause of the high utilization. Even if you find that your log is growing because of some transaction, it would be a blunder to roll it back automatically; that is relatively easy for an index rebuild, but if you cancel a delete operation you could end up in a mess. Please don't create a program to delete or kill a running operation.
You could instead create a custom alert job for transaction log file growth; that would be good (see the sketch below).
From 2008 onwards, index rebuilds are fully logged in some cases, so they can cause transaction log pressure. To mitigate this, run index rebuilds only for specific, selected tables.
Another widely accepted option is Ola Hallengren's index maintenance script; I suggest you try it:
http://ola.hallengren.com/
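A sketch of the kind of check such a custom alert job could run, with an illustrative 80 percent threshold:

-- capture DBCC SQLPERF(LOGSPACE) output and alert when any log is over 80 percent full
DECLARE @logspace TABLE (db sysname, log_size_mb float, pct_used float, status int);
INSERT INTO @logspace EXEC ('DBCC SQLPERF(LOGSPACE)');
IF EXISTS (SELECT 1 FROM @logspace WHERE pct_used > 80.0)
    RAISERROR('A transaction log file is more than 80 percent full', 16, 1);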
    Please mark this reply as the answer or vote as helpful, as appropriate, to make it useful for other readers

  • Transaction log file

Hello,
How can I build a log file that records all transactions that happen on a schema, such as inserts, updates, or deletes on any table?
hani khalil

Well, if you are really set on generating a log file, then I would suggest you go with a SQL trace; otherwise there is another option called auditing, which I think suits your problem better.
    hare krishna
    Alok

  • Unable to delete records as the transaction log file is full

My disk is running out of space, so I decided to free some by deleting old data. There are 240 million records to delete, so I am trying to delete them 100,000 at a time, but I cannot delete them all at once, and shrinking the database doesn't free much space. This is the error I'm getting at times:
The transaction log for database 'TEST_ARCHIVE' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases.
How can I overcome this situation and delete all the old records? Please advise.
mayooran99
    mayooran99

In order to delete rows, SQL Server needs to write the information to the log file, and you do not have room for those rows in the log file. You might succeed by deleting fewer rows at a time, backing up the log after each batch, and then shrinking the log file, but that is not the approach I would choose.
The best option is probably to add another disk (a simple disk does not cost a lot) and move the log file there permanently. It will help the database's performance as well (in most cases it is highly recommended not to put the log file on the same disk as the data file).
If you can't add a new disk permanently, then add one temporarily: add a file to the database on that disk -> create a new table on that disk -> move all the data that you do not want to delete into the new table -> truncate the current table -> bring the data back from the new table -> drop the new table and the new file to release the temporary disk.
Are you using the full or the simple recovery model?
* In full recovery you have to back up the log file if you want to shrink it; a batched sketch follows below.
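A minimal sketch of that batched approach under the full recovery model, using the TEST_ARCHIVE database from the error message; the table, filter, and backup path are placeholders:

-- delete in small batches so each transaction stays small
WHILE 1 = 1
BEGIN
    DELETE TOP (100000) FROM dbo.OldRecords WHERE CreatedOn < '20100101';
    IF @@ROWCOUNT = 0 BREAK;
    -- in full recovery, a log backup lets the log space be reused between batches
    BACKUP LOG TEST_ARCHIVE TO DISK = 'E:\Backups\TEST_ARCHIVE_log.trn';
END;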
      Ronen Ariely
     [Personal Site]    [Blog]    [Facebook]

  • Cannot backup Transactional log file.

Hi experts,
We have installed our EP 7.0 system on SQL Server 2005.
We are able to take a full backup of the data files through Enterprise Manager, but we are not able to take a transaction log backup.
Any ideas?
Regards,
Vamshi.

Dear Vamshi,
How are you trying to take the transaction log backup?
You can schedule a transaction log backup using a SQL Server maintenance plan; check the following URL:
[SQL Server Maintenance Plan|http://www.databasejournal.com/features/mssql/article.php/3530486]
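Besides a maintenance plan, a plain T-SQL log backup scheduled through a SQL Server Agent job also works; a one-line sketch (database name and path are placeholders):

BACKUP LOG MyDB TO DISK = 'D:\Backups\MyDB_log.trn';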
    Regards,
    Nagendra.
