Tolerance question

How do I maintain this in the tolerance fields?
If the fields are based on the goods receipt quantity (100%), can I set my overdelivery tolerance so that the allowed quantity stays below 100%?
For example, an order quantity is 100 pc. The material's underdelivery tolerance is 3%, and for overdelivery the limit should sit 2% below the order quantity, meaning the allowed GR quantity is 97 to 98 pc only. Yet the system won't allow me to enter a negative value.

>
Abhijit Gautam wrote:
> Hi Raf
>
> You can post from 97 to 102. Check the settings in OPK4.
>
>
> You should check the Underdelivery & Overdelivery tolerance tab and set it to 'Error when qty falls below...'.
>
> Revert back
>
> Abhijit Gautam
I want the system to allow me to post from 97 to 98 only. How do I maintain this?
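To make the arithmetic behind the question explicit, here is a minimal sketch (plain Python, not SAP code) that computes the allowed GR window from an order quantity and the two tolerance percentages. It shows why restricting receipts to 97 to 98 pc would require an overdelivery tolerance of -2%, which the standard tolerance field does not accept.

```python
def allowed_gr_window(order_qty: float, under_tol_pct: float, over_tol_pct: float):
    """Return (min_qty, max_qty) allowed for goods receipt.

    under_tol_pct: underdelivery tolerance in percent (e.g. 3 means -3%).
    over_tol_pct:  overdelivery tolerance in percent (e.g. 2 means +2%).
    """
    min_qty = order_qty * (1 - under_tol_pct / 100)
    max_qty = order_qty * (1 + over_tol_pct / 100)
    return min_qty, max_qty

# Standard behaviour: 3% under, 2% over -> 97..102 pc, as in the reply above.
print(allowed_gr_window(100, 3, 2))    # (97.0, 102.0)

# What the poster wants (97..98 pc) would need an overdelivery tolerance
# of -2%, i.e. a negative value, which the tolerance field rejects.
print(allowed_gr_window(100, 3, -2))   # (97.0, 98.0)
```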

Similar Messages

  • Tolerance questions

    Hi all,
    I have a question regarding the tolerance fields in the material master, Work Scheduling view.
    1. If I confirm an order for that material, will the confirmation depend on these fields?
    2. If I maintain a value of 0 and the zero disappears when I save, does the system treat overdelivery as prohibited (meaning a 0% tolerance)?

    Dear,
    If in OPK4 you select:
    1. 'Underdelivery tolerance is not checked': suppose your order quantity is 100; the system will not check underdelivery at all, irrespective of what you maintain in the Work Scheduling view.
    2. 'Error when quantity falls below underdelivery tolerance': the system issues an error message if your quantity falls below the underdelivery tolerance specified in the Work Scheduling view.
    3. 'Warning when quantity falls below underdelivery tolerance': the system lets you confirm, but displays a warning message. In this case, even if your tolerance limit is 0, the system will allow you to post a quantity of 101 against an order quantity of 100, with only a warning message.
    The overdelivery tolerance settings work the same way.
    Hope this is clear to you. If you have any query, revert back.
    Abhijit Gautam
    Edited by: Abhijit Gautam on Jul 23, 2008 1:50 PM
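    The three OPK4 check options described above behave roughly like the following sketch (illustrative Python only, not SAP code): no check at all, a hard error, or a warning that still allows the posting.

    ```python
    from enum import Enum

    class UnderdeliveryCheck(Enum):
        NO_CHECK = 1   # tolerance is not checked at all
        ERROR = 2      # posting blocked when qty falls below the tolerance
        WARNING = 3    # posting allowed, but a warning is issued

    def check_confirmation(order_qty, confirmed_qty, under_tol_pct, mode):
        """Mimic the confirmation-parameter behaviour described above."""
        lower_limit = order_qty * (1 - under_tol_pct / 100)
        if mode is UnderdeliveryCheck.NO_CHECK or confirmed_qty >= lower_limit:
            return "posted"
        if mode is UnderdeliveryCheck.ERROR:
            return "error: quantity below underdelivery tolerance"
        return "posted with warning: quantity below underdelivery tolerance"

    # Order qty 100, tolerance 3%: 96 pc is below the 97 pc limit.
    print(check_confirmation(100, 96, 3, UnderdeliveryCheck.ERROR))    # error
    print(check_confirmation(100, 96, 3, UnderdeliveryCheck.WARNING))  # posted with warning
    print(check_confirmation(100, 96, 3, UnderdeliveryCheck.NO_CHECK)) # posted
    ```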

  • R/3 - SRM [ Limits and Tolerances questions ]

    Hi
    I was wondering if anyone out there could provide helpful info on any of my questions below.
    1) PM order requirement against an operation for service, requirement (pur. req.) created in MM, requirement sent to SRM, PO created in SRM and copied back to R/3.
    Q. Overfulfillment tolerance - should this be calculated into the Total Value of the Operation or Activity so that the GOA (approval) is granted on this amount?
    2) Overall vs. Expected Value - Should GOA (approval) be granted on the Overall
    Limit instead of the Expected Value?
    3) Overfulfillment Tolerance - should this appear on the SRM PO? Should it be used to calculate the total value of the PO so that the GOA (approval) is granted on this amount?
    4) Should a Buyer be able to revise the SRM PO without the change also
    being made to the R/3 PR?  If not, should there be validations in SRM to
    prevent the Buyer from doing this?
    I really appreciate any help anyone out there can provide.
    Many Thanks,
    Paul

    Hi,
    Register the queues via transaction SXMB_ADM, where you have the option to manage queues.
    Once you register the queues, they are activated by default.
    If you want to unlock them manually, go to transactions SMQR, SMQ1, and SMQ2 for queue management. There you have to select the queue and activate all entries.
    SMQ1 – qRFC Monitor for the outbound queue. You use this transaction to monitor the status of the LUWs in the outbound queue and restart any hanging queues manually.
    SMQ2 – qRFC Monitor for the inbound queue. You use this transaction to monitor the status of the LUWs in the inbound queue.
    You may schedule the reports RSQOWKEX (outbound queues) and RSQIWKEX (inbound queues) to restart the queues automatically.
    To cancel the messages from MONI with the status 'Scheduled for Outbound Processing', schedule the report RSXMB_CHECK_MSG_QUEUE.
    SAP_BC_XMB_DELETE
    Reward points if helpful.
    Regards,
    Pradeep A.
    Edited by: Pradeep Amisagadda on May 21, 2008 6:59 AM

  • Fault tolerant, highly available BOSE XI R2 questions

    Post Author: waynemr
    CA Forum: Deployment
    I am designing a set of BOSE XI R2 deployment proposals for a customer, and I had a couple of questions about clustering. I understand that I can use traditional Windows clustering to set up an active/passive cluster for the input/output file repositories, so that if one server goes down, the other can seamlessly pick up where the other left off. On this Windows-based active/passive cluster, can I install other BOSE services, and will they be redundant, or will they also be active/passive? For example: server A is active and has the input/output file repository services and the Page Server. Server B is passive and also has the input/output file repository services and the Page Server. Can the Page Server on B be actively used as a redundant Page Server for the entire BOSE deployment? (Probably not, but I am trying to check just to make sure.) If I wanted to make the most fault-tolerant deployment possible, I think I would need to:
    - Set up two hardware load-balanced web front-end servers
    - Set up two servers for a clustered CMS
    - Set up two web application servers (hardware load-balanced, or can BOSE do that load-balancing?)
    - Set up two Windows-clustered servers for the input/output file repositories
    - Set up two servers to provide pairs of all of the remaining BOSE services (job servers, page servers, webi, etc.)
    - Set up the CMS, auditing, and report databases on a cluster of some form (MS SQL or Oracle)
    So 10 servers - 2 Windows 2003 Enterprise and 8 Windows 2003 Standard boxes, not including the database environment.
    Thanks!

    Post Author: jsanzone
    CA Forum: Deployment
    Wayne,
    I hate to beat the old drum, and no I don't work for BusinessObjects education services, but all of your questions and notions of a concept of operations in regards to redundancy/load balancing are easily answered by digesting the special BO course "SA310R2" (BusinessObjects Enterprise XI R1/R2 Administering Servers - Windows).  This course fully covers the topics of master/slave operations, BO's own load balancing operations within its application, and pitfalls to avoid.  Without attending this course, I for one would not have properly understood the BusinessObjects approach and would've been headed on a collision course with disaster in setting up a multi-server environment.
    Best wishes-- John.

  • Invoice Verification Tolerance Limit Question

    Hi All,
    I am in the process of learning MM basics and have a question. Suppose I set all the tolerance limits for logistics invoice verification to "Do Not Check". This means that I set both BR (Percentage OPUn Variance (IR before GR)) and BW (Percentage OPUn Variance (GR before IR)) to "Do Not Check". Now, when I create a purchase order, can I post an invoice receipt before the goods receipt or not?
    My reasoning is that since we do not check whether the IR/GR comes before the GR/IR, the system should allow us to post the IR before the GR. Is this correct? If not, where is the flaw in my reasoning?
    Thanks for your help and time.
    Regards
    Anu

    Hi All,
    Appreciate any help from the expert,
    Let's say I have created a PO in ECC 6.0. I have un-ticked GR-based IV, and the overdelivery/underdelivery quantity tolerance is set to 10%.
    It is set up this way because the user might receive the invoice before the GR is actually done.
    The question is:
    1. The user wants to ensure that the invoice amount/quantity does not exceed the PO amount/quantity. What should be configured in the invoice blocking settings?
    Thank you in advance.
    Best Regards,
    Daniel
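    As a rough illustration of how an amount/quantity tolerance check of this kind works, here is a minimal sketch (plain Python, not the actual SAP tolerance-key logic; the 10% figure is taken from the post above, the other values are made up):

    ```python
    def invoice_block_reasons(po_qty, po_amount, inv_qty, inv_amount, tol_pct=10):
        """Return blocking reasons if the invoice exceeds the PO quantity
        or amount by more than the tolerance (illustrative only)."""
        reasons = []
        if inv_qty > po_qty * (1 + tol_pct / 100):
            reasons.append("quantity variance above tolerance")
        if inv_amount > po_amount * (1 + tol_pct / 100):
            reasons.append("amount variance above tolerance")
        return reasons

    # Hypothetical PO for 100 pc at a value of 1000; invoice for 115 pc and 1200.
    print(invoice_block_reasons(100, 1000.0, 115, 1200.0))
    # ['quantity variance above tolerance', 'amount variance above tolerance']
    ```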

  • Odd Question - Temperature Tolerance of W530/Bedbug Removal

    Hi all,
    I've owned a W530 for more than a year. I can include more detailed specs if necessary, but for now it has a Samsung 840 250GB HDD and 32GB of RAM. Currently there is the HDD that shipped with the W530 in the ultrabay.
    I've followed this forum for some time now - I have two much older thinkpads, a T60 and a z60m as well. The advice here has been extraordinarily helpful time and time again. It's truly a wonderful resource.
    So my question:
    I live adjacent to an apartment which has recently been confirmed to have bedbugs. Mine has been inspected, and found clean. However, the other day I found a dead bedbug and my girlfriend has some suspicious looking bites. Given this set of circumstances, I am assuming we have some.
    I've made arrangements to stay elsewhere, and have undertaken a laborious process of ensuring that all items taken will not have any unwelcome travelers.
    However I am at a loss as to how to ensure that the laptop is not carrying anything. Heat greater than 120 degrees F, sustained for a period of time, will kill any bedbugs or eggs (it's how we ensure that the clothes being taken will be clean).
    If turned off, can the W530 be exposed to such temperatures (possibly up to 140) without damage? If not, are there any other suggestions?
    Thanks again for any help anyone is able to provide.

    Check the specifications for maximum storage temperature for the laptop.
    Ted

  • ME54N Release strategy question over delivery tolerance limits

    I set up a release strategy based on the overdelivery tolerance level. My problem is that when a purchase requisition is saved as rejected and I click the Cancel Reject button, the last release strategy level is deleted, because the system then looks at the actual total value and disregards the overdelivery tolerance.
    It works fine when the purchase requisition is not rejected: I can cancel or reject the purchase requisition without saving it, and the last line of the release strategy, which takes the overdelivery tolerance into account, is not deleted. I am using user exit ZXM06U13.
    Can someone give me some insight into this? Thanks.

    Hi Siva:
    You should request the development key for this structure in your SAP system; then you can add the extra fields to CEKKO.
    Good luck
    Z.T
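    The intent described above, deriving the release-relevant value from the requisition value grossed up by the overdelivery tolerance so that cancelling a rejection re-determines the same strategy, can be sketched as follows (illustrative Python, not the ZXM06U13 exit code; the threshold values are hypothetical):

    ```python
    def release_value(total_value: float, over_tol_pct: float) -> float:
        """Value used for release-strategy determination: the requisition
        value grossed up by the overdelivery tolerance (illustrative)."""
        return total_value * (1 + over_tol_pct / 100)

    def release_levels(value: float, thresholds=(1_000, 10_000, 100_000)):
        """Return the release levels triggered for a given value
        (hypothetical thresholds)."""
        return [t for t in thresholds if value >= t]

    # A requisition worth 95,000 with a 10% overdelivery tolerance should be
    # classified with 104,500 - reaching the last level - both when the
    # strategy is first determined and again after 'cancel reject'.
    print(release_levels(release_value(95_000, 10)))   # [1000, 10000, 100000]
    print(release_levels(95_000))                      # [1000, 10000] - the reported issue
    ```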

  • Tolerance group question

    While doing my GL posting, I cannot go to the next screen because of the following error message: "No amount tolerance range for company code". I have maintained maximum tolerance limits for documents and line items for employees and assigned them to my company code. Any help on this would be greatly appreciated. Thanks.
    No amount tolerance range entered for company code
    Message no. F5103
    Diagnosis
    No tolerance group is specified in company code
    In a tolerance group you define the upper limit for a posting.
    Procedure
    If you have entered the correct company code, ensure that at least one tolerance group is created for this company code and that it is also assigned to the company code. You do this in Customizing for Financial Accounting under Financial Accounting Global Settings -> Document -> Line Item -> Define Tolerance Groups for Employees.
    Edited by: Taz75 on Mar 19, 2009 8:32 AM

    Hi,
    Go to OBA4 - FI Tolerance Group for Users.
    This has to be filled in without fail; otherwise postings will not happen.
    Go to New Entries and fill in the following details:
    Group: blank
    Company Code: your company code
    Currency: INR
    Amount per Document: 99,999,999,999.00
    Amount per Open Item Account: 99,999,999,999.00
    Cash Discount per Line Item: 10%
    Save it.
    Now try posting. It will post now.
    Cheers
    Redoxcube
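    A minimal sketch of the lookup and limit check that the error message describes (plain Python, not FI code): a user-specific tolerance group is used if one is assigned, otherwise the blank default group for the company code; if neither exists, the posting fails with an error like F5103. Company codes, users, and limits below are placeholders.

    ```python
    # Tolerance groups per company code, keyed by group name ('' = default blank group).
    TOLERANCE_GROUPS = {
        ("1000", ""): {"amount_per_document": 99_999_999_999.00},
    }
    USER_GROUP = {}  # e.g. {"JSMITH": "SUPER"}; empty means everyone uses the blank group

    def check_posting(user: str, company_code: str, amount: float) -> str:
        group = USER_GROUP.get(user, "")
        limits = TOLERANCE_GROUPS.get((company_code, group))
        if limits is None:
            return "F5103: no amount tolerance range entered for company code"
        if amount > limits["amount_per_document"]:
            return "amount exceeds the upper limit defined in the tolerance group"
        return "posting allowed"

    print(check_posting("TAZ75", "1000", 5_000.00))   # posting allowed
    print(check_posting("TAZ75", "2000", 5_000.00))   # F5103 ...
    ```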

  • Questions on Print Quote report

    Hi,
    I'm fairly new to Oracle Quoting and trying to get familiar with it. I have a few questions and would appreciate it if anyone could answer them.
    1) We have a requirement to customize the Print Quote report. I searched these forums and found that this report can be defined either as a XML Publisher report or an Oracle Reports report depending on a profile option. Can you please let me know what the name of the profile option is?
    2) When I select the 'Print Quote' option from the Actions drop down in the quoting page and click Submit I get the report printed and see the following URL in my browser.
    http://<host>:<port>/dev60cgi/rwcgi60?PROJ03_APPS+report=/proj3/app/appltop/aso/11.5.0/reports/US/ASOPQTEL.rdf+DESTYPE=CACHE+P_TCK_ID=23731428+P_EXECUTABLE=N+P_SHOW_CHARGES=N+P_SHOW_CATG_TOT=N+P_SHOW_PRICE_ADJ=Y+P_SESSION_ID=c-RAuP8LOvdnv30grRzKqUQs:S+P_SHOW_HDR_ATTACH=N+P_SHOW_LINE_ATTACH=N+P_SHOW_HDR_SALESUPP=N+P_SHOW_LN_SALESUPP=N+TOLERANCE=0+DESFORMAT=RTF+DESNAME=Quote.rtf
    Does it mean that the profile in our case is set to call the rdf since it has reference to ASOPQTEL.rdf in the above url?
    3) When you click the Submit button, do we have something like this in the JSP code: on click, call ASOPQTEL.rdf? Is the report called using a concurrent program? I want to know how the report is getting invoked.
    4) If we want to customize the JSP pages, can you please let me know the steps involved in making the customizations and testing them?
    Thanks and Appreciate your patience
    -PC

    1) We have a requirement to customize the Print Quote report. I searched these forums and found that this report can be defined either as a XML Publisher report or an Oracle Reports report depending on a profile option. Can you please let me know what the name of the profile option is?
    I think I posted it in one of the threads.
    2) When I select the 'Print Quote' option from the Actions drop down in the quoting page and click Submit I get the report printed and see the following URL in my browser.
    http://<host>:<port>/dev60cgi/rwcgi60?PROJ03_APPS+report=/proj3/app/appltop/aso/11.5.0/reports/US/ASOPQTEL.rdf+DESTYPE=CACHE+P_TCK_ID=23731428+P_EXECUTABLE=N+P_SHOW_CHARGES=N+P_SHOW_CATG_TOT=N+P_SHOW_PRICE_ADJ=Y+P_SESSION_ID=c-RAuP8LOvdnv30grRzKqUQs:S+P_SHOW_HDR_ATTACH=N+P_SHOW_LINE_ATTACH=N+P_SHOW_HDR_SALESUPP=N+P_SHOW_LN_SALESUPP=N+TOLERANCE=0+DESFORMAT=RTF+DESNAME=Quote.rtf
    Does it mean that the profile in our case is set to call the rdf since it has reference to ASOPQTEL.rdf in the above url?
    Yes, your understanding is correct.
    3) When you click the Submit button, do we have something like this in the JSP code: on click, call ASOPQTEL.rdf? Is the report called using a concurrent program? I want to know how the report is getting invoked.
    No, there is no concurrent program getting called; you can directly call a report in a browser window, and the Oracle Reports server will execute the report and send the HTTP response to the browser.
    4) If we want to customize the JSP pages, can you please let me know the steps involved in making the customizations and testing them?
    This is detailed in many threads.
    Thanks
    Tapash
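    To make the invocation mechanism discussed above concrete: the report is called directly through the Reports CGI with '+'-separated parameters, as in the URL shown. A small sketch of how such a URL could be assembled (illustrative Python; the host, port, and parameter values are placeholders taken from the example above):

    ```python
    def build_print_quote_url(host: str, port: int, connect_key: str, params: dict) -> str:
        """Assemble an rwcgi60-style report URL with '+'-separated
        key=value pairs, as seen in the example above (illustrative only)."""
        parts = [f"{k}={v}" for k, v in params.items()]
        return f"http://{host}:{port}/dev60cgi/rwcgi60?{connect_key}+" + "+".join(parts)

    url = build_print_quote_url("myhost", 8000, "PROJ03_APPS", {
        "report": "/proj3/app/appltop/aso/11.5.0/reports/US/ASOPQTEL.rdf",
        "DESTYPE": "CACHE",
        "P_TCK_ID": "23731428",
        "DESFORMAT": "RTF",
        "DESNAME": "Quote.rtf",
    })
    print(url)
    ```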

  • Grey Screen with Flashing Folder with Question Mark

    Hi,
    I need some help with an issue I'm having on my Mid-2012 Macbook Pro (13"; OSX 10.9.2; 8GB RAM).  I've had issues with this Macbook for the past year.  I've worked with Apple Support and had it into the Mac Store prior to the warranty expiring (Dec 2013).  I have the exact same model with all of the same specs that my employer purchased at the same time for work as my work computer, and I haven't had any issues with that one.
    Previously, the system would slow down excessively and eventually start hanging. Occasionally the screen would start flashing.  Apple phone support had me wipe the hard drive and re-install the OS and all of my files, apps, and settings from my Time Machine backup.  That worked for about four months, and then it started again. Since it was getting close to the warranty expiration, I took it to an Apple Store. They ran a bunch of diagnostics, said the hardware was all fine but the OS needed to be re-installed.  They did that in early Dec, and everything was cool again until about three days ago.  Three days ago, it started slowing down and freezing again (even when doing non-memory intensive tasks such as browsing the web with only a couple of tabs open and no other applications open).  Last night, it froze hard and wouldn't shut down, so I had to cold boot it.  When I tried to power it back on, it came to the grey screen with the flashing folder with the question mark (which I know means it can't find the boot sector).  I waited until this morning, and it still wouldn't boot. I then rebooted into Startup Manager, and the HDD was there. I selected the HDD, and it booted fine and ran fine for a couple of hours (I was able to do a Time Machine backup).  Then it froze up solid again. I waited for an hour or so before cold booting (don't like doing that), and when I tried rebooting, I got the flashing folder with the question mark. I tried booting into the Startup Manager again, but this time, my HDD wasn't listed. I then booted into the OSX Recovery utility (CMD R on boot), went into the Disk Utility hoping to do a disk repair, but my HDD wasn't listed. I have an external SATA to USB adapter, so I pulled the HDD, hooked it to a USB port on my other (identical except it doesn't have problems) Macbook Pro.  Once my other Macbook booted, the HDD from the bad Macbook Pro showed up fine.  I ran a verify and repair disk on the HDD from the bad Macbook, and it didn't show any issues.
    So I figured I'd be really brave. I took the HDD from the bad Macbook Pro and put it in my work (good) Macbook Pro (I took the HDD out of my working work Macbook Pro).  It booted fine.  I then did the verify and repair disk (again no errors) and verify and repair permissions (it found a few, but no more than it has in the past). I ran it that way for about an hour with no issues. That led me to believe that the HDD for my personal Macbook was fine, and it must be an issue with the SATA cable or the mainboard. 
    Here's where it gets odd.  I put the HDD from my work Macbook Pro into the bad Macbook Pro thinking it wouldn't even recognize it.  It did recognize it, and it booted fine.  I ran it like that for about 30 minutes.  It did have a couple of short freeze ups, but it didn't lock up solid. I didn't want to push my luck and possibly damage the HDD for my work Macbook, so I shut down the bad Macbook Pro and ended the experiment at that point.
    I put the original HDD back in the Macbooks where they originally came from. I then ran the Apple Hardware Test (press and hold D on startup) on the bad Macbook Pro; I did the extended testing option. It ran for about an hour, but it didn't find any issues with the bad Macbook Pro. 
    I put the HDD from the bad Macbook back in my working Macbook and wiped the disk and reinstalled OSX from a Time Machine Backup from last week (before the problems occurred).  Put it back in the bad Macbook and still no luck. Finally I tried resetting the PRAM because I saw that as one of the options on this discussion board. 
    I've searched and read everything I can find related to this, but I can't find anything that works, and I'm at my wits end.  Can anyone point me in a direction of what might be wrong and what else to try?
    Thanks!
    Mike

    You performed thorough and methodical troubleshooting, and this appears to be the most important result:
    I put the HDD from my work Macbook Pro into the bad Macbook Pro thinking it wouldn't even recognize it.  It did recognize it, and it booted fine.  I ran it like that for about 30 minutes.  It did have a couple of short freeze ups, but it didn't lock up solid.
    Given compatible hardware, you ought to be able to swap hard disk drives in exactly that manner, so it shouldn't surprise you that it worked. However, installing the "known good" HDD in the problem machine should not have resulted in any freeze-ups at all.
    You can conclude the hard disks (both of them) are serviceable and whatever fault exists probably lies elsewhere. Often the SATA cable is damaged or not seated properly, and is likely to fail more than anything on the logic board. Inspect the logic board's SATA connections and make sure there are no contaminants or damage. The two drives and two logic boards are going to have slightly different component tolerances, so perhaps the defective one is simply exceeding some limit.
    Apple Hardware Test is very cursory and essentially tests for the presence of operable hardware. It is far from an exhaustive test, and only a report of a failure can be relied upon for accuracy. For a more thorough test you would need to have Apple evaluate it using the time-consuming Apple Service Diagnostics. Even then, they may come up without a clue, and eventually someone will suggest a logic board replacement which can be expensive.
    It is an unusual problem, and I don't know how much time Apple would invest in diagnosing it before they conclude you really ought to buy a new Mac instead. They might surprise you though in that a "depot repair", if yours is eligible, is a very cost effective option so consider it.
    Given your ability you might also consider purchasing a replacement logic board from PowerbookMedic, or even sending it to them for a flat rate repair.

  • MSI GE60 2PL Cooling Question

    Hi there,
    I just bought an MSI GE60 Apache-629 laptop. I never owned an MSI laptop before, but I am very satisfied so far. Even though it doesn't have the NVIDIA GTX 860M like the Apache Pro, I can still run Battlefield 4 on medium settings at 1920x1080.
    My question is: should I press the Fan button to speed up the fan when playing Battlefield 4?
    It seemed like my laptop was getting VERY HOT!
    Is this normal?

    Hello ericpatterson89,
    If you don't mind the fan noise and the power consumption while you're playing BF4, just speed it up. The CPU & GPU temperatures are always high when playing these types of 3D FPS games, no matter which model. Some models are just bigger and thicker so that you don't feel so much heat on the surface, but actually they are all very hot in those cores.
    So the best and most accurate way is to run "Dragon Gaming Center" to monitor the temperatures of the CPU & GPU. Its icon can be found in the system tray (bottom-right of the screen). Just double-click it to open its window and it will show you the temperatures of both the CPU & GPU.
    There's EC firmware inside the laptop which will automatically increase/decrease the RPM of the fan or boost up/throttle down the GPU power consumption according to the heat generated. So if you always run the fan at full speed during game play, according to the principle of the NVIDIA GPU, it will run as fast as possible if needed until it's throttled by the EC firmware. In my past experience, setting the fan to full speed during game play keeps it running fast and smooth for longer than setting it to automatic. That's why I am saying that it depends on the user's tolerance of the fan noise...

  • LR 4.4 (and 5.0?) catalog: a problem and some questions

    Introductory Remark
    After several years of reluctance this March I changed to LR due to its retouching capabilities. Unfortunately – beyond enjoying some really nice features of LR – I keep struggling with several problems, many of which have been covered in this forum. In this thread I describe a problem with a particular LR 4.4 catalog and put some general questions.
    A few days ago I upgraded to 5.0. Unfortunately it turned out to produce even slower ’speed’ than 4.4 (discussed – among other places – here: http://forums.adobe.com/message/5454410#5454410), so I rather fell back to the latter, instead of testing the behavior of the 5.0 catalog. Anyway, as far as I understand this upgrade does not include significant new catalog functions, so my problem and questions below may be valid for 5.0, too. Nevertheless, the incompatibility of the new and previous catalogs suggests rewriting of the catalog-related parts of the code. I do not know the resulting potential improvements and/or new bugs in 5.0.
    For your information, my PC (running under Windows 7) has a 64-bit Intel Core i7-3770K processor, 16GB RAM, 240 GB SSD, as well as fast and large-capacity HDDs. My monitor has a resolution of 1920x1200.
    1. Problem with the catalog
    To tell you the truth, I do not understand the potential necessity for using the “File / Optimize Catalog” function. In my view LR should keep the catalog optimized without manual intervention.
    Nevertheless, when being faced with the ill-famed slowness of LR, I run this module. In addition, I always switch on the “Catalog Settings / General / Back up catalog” function. The actually set frequency of backing up depends on the circumstances – e.g. the number of RAW (in my case: NEF) files, the size of the catalog file (*.lrcat), and the space available on my SSD. In case of need I delete the oldest backup file to make space for the new one.
    Recently I processed 1500 photos, occupying 21 GB. The "Catalog Settings / Metadata / Automatically write changes into XMP" function was switched on. Unfortunately I had to fiddle with the images quite a lot, so after processing roughly half of them the catalog file reached the size of 24 GB. Until this stage there had been no sign of any failure – catalog optimizations had run smoothly and backups had been created regularly, as scheduled.
    Once, however, towards the end of generating the next backup, LR sent an error message saying that it had not been able to create the backup file due to a lack of space on the SSD. I myself could still see 40 GB of empty space, so I re-launched the backup process. The result was the same, but this time I saw a mysterious new (journal?) file with a size of 40 GB… When my third attempt also failed, I had to decide what to do.
    Since I needed at least the XMP files with the results of my retouching operations, I simply wanted to save these side-cars into the directory of my original input NEF files on a HDD. Before making this step, I intended to check whether all modifications and adjustments had been stored in the XMP files.
    Unfortunately I was not aware of the realistic size of side-cars, associated with a certain volume of usage of the Spot Removal, Grad Filter, and Adjustment Brush functions. But as the time of the last modification of the XMP files (belonging to the recently retouched pictures) seemed perfect, I believed that all my actions had been saved. Although the "Automatically write changes into XMP" seemed to be working, in order to be on the safe side I selected all photos and ran the “Metadata / Save Metadata to File” function of the Library module. After this I copied the XMP files, deleted the corrupted catalog, created a new catalog, and imported the same NEF files together with the side-cars.
    When checking the photos, I was shocked: Only the first few hundred XMP files retained all my modifications. Roughly 3 weeks of work was completely lost… From that time on I regularly check the XMP files.
    Question 1: Have you collected any similar experience?
    2. The catalog-related part of my workflow
    Unless I miss an important piece of knowledge, LR catalogs store many data that I do not need in the long run. Having the history of recent retouching activities is useful for me only for a short while, so archiving every little step for a long time with a huge amount of accumulated data would be impossible (and useless) on my SSD. In terms of processing what count for me are the resulting XMP files, so in the long run I keep only them and get rid of the catalog.
    Out of the 240 GB of my SSD 110 GB is available for LR. Whenever I have new photos to retouch, I make the following steps:
    create a ‘temporary’ catalog on my SSD
    import the new pictures from my HDD into this temporary catalog
    select all imported pictures in the temporary catalog
    use the “File / Export as Catalog” function in order to copy the original NEF files onto the SSD and make them used by the ‘real’ (not temporary) new catalog
    use the “File / Open Catalog” function to re-launch LR with the new catalog
    switch on the "Automatically write changes into XMP" function of the new catalog
    delete the ‘temporary’ catalog to save space on the SSD
    retouch the pictures (while keeping an eye on the proper creation and updating of the XMP files)
    generate the required output (TIF OR JPG) files
    copy the XMP and the output files into the original directory of the input NEF files on the HDD
    copy the whole catalog for interim archiving onto the HDD
    delete the catalog from the SSD
    upon making sure that the XMP files are all fine, delete the archived catalog from the HDD, too
    Question 2: If we put aside the issue of keeping the catalog for other purposes than saving each and every retouching step (which I address below), is there any simpler workflow to produce only the XMP files and save space on the SSD? For example, is it possible to create a new catalog on the SSD with copying the input NEF files into its directory and re-launching LR ‘automatically’, in one step? (See the file-staging sketch after the reply below.)
    Question 3: If this I not the case, is there any third-party application that would ease the execution of the relevant parts of this workflow before and/or after the actual retouching of the pictures?
    Question 4: Is it possible to set general parameters for new catalogs? In my experience most settings of the new catalogs (at least the ones that are important for me) are copied from the recently used catalog, except the use of the "Catalog Settings / Metadata / Automatically write changes into XMP" function. This means that I always have to go there to switch it on… Not even a question is raised by LR whether I want to change anything in comparison with the settings of the recently used catalog…
    3. Catalog functions missing from my workflow
    Unfortunately the above described abandoning of catalogs has at least two serious drawbacks:
    I miss the classification features (rating, keywords, collections, etc.). Anyway, these functions would be really meaningful for me only if they covered all my existing photos, which would require going back to 41k images to classify them. In addition, keeping all the pictures in one catalog would result in an extremely large catalog file, almost surely guaranteeing regular failures. Beyond that, due to the speed problem, tolerable conditions could be established only by keeping the original NEF files on the SSD, which is out of the question. Generating several ‘partial’ catalogs could somewhat circumvent this trap, but it would require presorting the photos (e.g. by capture time or subject), and by doing this I would lose the essence of having a single catalog covering all my photos.
    Question 5: Is it the right assumption that storing only some parts (e.g. the classification-related data) of catalog files is impossible? My understanding is that either I keep the whole catalog file (with the outdated historical data of all my ‘ancient’ actions) or abandon it.
    Question 6: If such ‘cherry-picking’ is facilitated after all: Can you suggest any pragmatic description of the potential (competing) ways of categorizing images efficiently, comparing them along the pros and contras?
    I also lose the virtual copies. Anyway, I am confused regarding the actual storage of the retouching-related data of virtual copies. In some websites one can find relatively old posts, stating that the XMP file contains all information about modifying/adjusting both the original photo and its virtual copy/copies. However, when fiddling with a virtual copy I cannot see any change in the size of the associated XMP file. In addition, when I copy the original NEF file and its XMP file, rename them, and import these derivative files, only the retouched original image comes up – I cannot see any virtual copy. This suggests that the XMP file does not contain information on the virtual copy/copies…
    For this reason, whenever multiple versions seem to be reasonable, I create renamed version(s) of the same NEF+XMP files, import them, and make some changes in their settings. I know, this is far from a sophisticated solution…
    Question 7: Where and how are the settings of virtual copies stored?
    Question 8: Is it possible to generate separate XMP files for both the originally retouched image and its virtual copy/copies and to make them recognized by LR when importing them into a new catalog?

    A part of my problems may be caused by selecting LR for a challenging private project, where image retouching activities result in bigger than average volume of adjustment data. Consequently, the catalog file becomes huge and vulnerable.
    While I understand that something has gone wrong for you, causing Lightroom to be slow and unstable, I think you are combining many unrelated ideas into a single concept, and winding up with a mistaken idea. Just because your project is challenging does not mean Lightroom is unsuitable. A bigger than average volume of adjustment data will make the catalog larger (I don't know about "huge"), but I doubt bigger by itself will make the catalog "vulnerable".
    The causes of instability and crashes may have NOTHING to do with catalog size. Of course, the cause MAY have everything to do with catalog size. I just don't think you are coming to the right conclusion, as in my experience size of catalog and stability issues are unrelated.
    2. I may be wrong, but in my experience the size of the RAW file may significantly blow up the amount of retouching-related data.
    Your experience is your experience, and my experience is different. I want to state clearly that you can have pretty big RAW files that have different content and not require significant amounts of retouching. It's not the size of the RAW that determines the amount of touchup, it is the content and the eye of the user. Furthermore, item 2 was related to image size, and now you have changed the meaning of number 2 from image size to the amount of retouching required. So, what is your point? Lots of retouching blows up the amount of retouching data that needs to be stored? Yeah, I agree.
    When creating the catalog for the 1500 NEF files (21 GB), the starting size of the catalog file was around 1 GB. This must have included all classification-related information (the meaningful part of which was practically nothing, since I had not used rating, classification, or collections). By the time of the crash half of the files had been processed, so the actual retouching-related data (that should have been converted properly into the XMP files) might be only around 500 MB. Consequently, probably 22.5 GB out of the 24 GB of the catalog file contained historical information
    I don't know exactly what you do to touch up your photos, and I can't imagine how you came up with the estimate that the size should be around 500 MB. But again, to you this problem is entirely caused by the size of the catalog, and I don't think it is. Now, having said that, some of your problem with slowness may indeed be related to the amount of touch-up that you are doing. Lightroom is known to slow down if you do lots of spot removal and lots of brushing, and then you may be better off doing this type of touch-up in Photoshop. Again, just to be 100% clear, the problem is not "size of catalog"; the problem is that you are doing so many adjustments on a single photo. You could have a catalog that is just as large (i.e. one that has lots more photos with few adjustments), and I would expect it to run a lot faster than what you are experiencing.
    So to sum up, you seem to be implying that slowness and catalog instability are the same issue, and I don't buy it. You seem to be implying that slowness and instability are both caused by the size of the catalog, and I don't buy that either.
    Re-reading your original post, you are putting the backups on the SSD, the same disk as the working catalog? This is a very poor practice; you need to put your backups on a different physical disk. That alone might help your space issues on the SSD.
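    Regarding Question 2 in the post above (creating the new catalog folder on the SSD and copying the input NEF files in one step), here is a minimal helper sketch (plain Python, no Lightroom API involved; the paths are placeholders). Lightroom itself would still have to create and open the catalog in that folder.

    ```python
    import shutil
    from pathlib import Path

    def stage_files_for_catalog(source_dir: str, catalog_dir: str) -> int:
        """Copy NEF files (plus any existing XMP side-cars) from the HDD
        source folder into a new catalog folder on the SSD.
        Placeholder paths; the catalog itself is still created in Lightroom."""
        src, dst = Path(source_dir), Path(catalog_dir)
        dst.mkdir(parents=True, exist_ok=True)
        copied = 0
        for f in src.iterdir():
            if f.is_file() and f.suffix.lower() in (".nef", ".xmp"):
                shutil.copy2(f, dst / f.name)
                copied += 1
        return copied

    # Example with hypothetical paths:
    # stage_files_for_catalog("D:/photos/2013_trip", "C:/LR_work/2013_trip")
    ```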

  • EBS Customer open items automatic clearing within tolerances

    Hi Experts,
    We have already implemented Elecronic Bank Statements automatic uploading functionality.  As part of this process, customer open items need to be cleared automatically. 
    As per the standard process, SAP will be able to clear off the open items ONLY if the collection amount (reflected in MT940) is exactly equal to the invoice amount (reflected in the open item).  But general practical situation is that customers may under / over pay.  For example, if the customer invoice is for USD 100,005 the actual payment from the customer could be for USD 100,000.  In this case, it is normal business practice that open item will be cleared off rather than keeping balance USD 5 in ledger and chasing the customer.
    So tolerances are already set up both at GL account level (Trans OBA4) and customer level (Trans OBA3) with blank tolerance group with permitted payment difference of USD 15.  And no tolerance group is assigned to the customer master.  Also account determination is maintained for the differences with nil reason code.
    In spite of all these settings, I am getting an error message indicating that the difference is too large for clearing while trying to upload this bank statement.  Here the difference is only USD 5.
    My doubts are:
    1. Does the standard EBS program RFEBKA00 consider the permitted payment difference tolerances set in transactions OBA4 and OBA3?  I don't need to post the differences with a reason code.  That's why account determination is maintained with a blank reason code as well. 
    2. SAP Note 124655 (Point No 3) says that entering reasons for differences is not possible in the standard system through EBS.  Does that mean clearing of open items WITHOUT a reason code is possible?
    3. SAP Note 549277 (Point No 5) says that even though Note 124655 specifies certain functionalities as not supported in the standard system, they can be achieved by the customer through user exits.  Does that mean it is possible to code in the user exit that the system should consider the payment difference tolerances for clearing off the open items?
    Could you kindly let me know your experience with these questions.  I will be very glad to hear from you.  Thanks for your time.
    Warm regards,
    Sridhar

    Hi Experts,
    I got this resolved by using the enhancement FEB00001.  If the field "Distribute by age" is activated through this enhancement, the system will be able to match the paid amount to a combination of open items for automatic clearing.  If there is no exact match, then automatic clearing through FF.5 is not possible, as indicated in SAP Note 124655.
    There is no longer any need to use BAdI FIEB_CHANGE_BS_DATA; the complete relevant logic can be maintained directly in the enhancement itself.
    Have a nice day.
    Regards,
    Sridhar
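    The clearing logic discussed in this thread, accept the payment if it matches an open item (or a combination of open items) exactly, otherwise clear within the permitted payment difference and post the small remainder, can be sketched like this (illustrative Python only, not RFEBKA00 or FEB00001; the USD 15 tolerance and amounts come from the posts above):

    ```python
    from itertools import combinations

    def match_payment(payment, open_items, tolerance=15.0):
        """Try an exact match against any combination of open items first;
        otherwise clear a single item whose difference is within tolerance."""
        for r in range(1, len(open_items) + 1):
            for combo in combinations(open_items, r):
                if abs(sum(combo) - payment) < 0.005:
                    return ("cleared exactly", list(combo), 0.0)
        for item in open_items:
            diff = item - payment
            if abs(diff) <= tolerance:
                return ("cleared within tolerance", [item], round(diff, 2))
        return ("not cleared - difference too large", [], None)

    print(match_payment(100_000.00, [100_005.00, 2_500.00]))
    # ('cleared within tolerance', [100005.0], 5.0)
    print(match_payment(102_505.00, [100_005.00, 2_500.00]))
    # ('cleared exactly', [100005.0, 2500.0], 0.0)
    ```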

  • Questions on new XServe

    Apologies for the lengthy post.
    We're looking to update an older dual G5 XServe to a new Nehalem model. Our current server handles file sharing (AFP & SMB) and is our OD Master (we have 5 other XServes). It's connected via Fibre to an XServe Raid configured with 4 active drives (250 GB each) and a standby backup drive. In total we have about 700 GB of usable storage. We have a cloned system drive in the machine, and another offsite. We also have extensive Firewire data backups, including offsite.
    We're looking to replace this with a Dual Quad core machine but I'm still mulling over configurations. As the XServe RAID has been removed from the support list, I'm considering removing it from the equation and perhaps just use it for some non critical archiving. It has been wonderfully reliable in the 5 odd years we've had it.
    For the new server I was thinking 3x 1TB drives (& Raid Card), which I suspect would give about 2 TB of storage (more than enough for us - the 700 GB is already adequate, but if we're going to upgrade we may as well expand a bit as well).
    And now for the questions...
    Would I be correct in assuming I could expect similar/better performance from the new drives as opposed to the old XServe RAID? Will the G5 XServe's Fibre Card fit the new XServe?
    Are the Solid State Drives user swappable? I was thinking perhaps a second SS drive with a cloned system as a backup.
    Can a boot partition be set up on this RAID setup as a backup to the Solid State Drive? What's the general feeling with regard to SS drives? Are they worth considering or should I just stick to 'conventional' technology?
    Will these XServes boot OK from a firewire device?
    Any advice, suggestions or personal experiences would be appreciated.
    many thanks

    Would I be correct in assuming I could expect similar/better performance from the new drives as opposed to the old XServe RAID?
    Comparable direct-attached storage (DAS) disk storage tends to be faster than SAN storage, yes.
    And you can connect DAS via SAS or SAS RAID PCIe controller, an older PCI-X controller, or via FireWire or USB buses. PCIe disk interconnects will be the fastest path, and USB the slowest.
    Here, I'd tend to consider the Apple RAID controller or a PCIe RAID controller if I needed extra bays.
    Will the G5 XServe's Fibre Card fit the new XServe?
    I don't know. That probably depends on the card, but it's certainly worth a try. Ensure you have the right slot in the new box; you've probably got a PCI-X FC SAN card here, and you'd thus need a PCI-X riser on the new box. Or punt and get a newer PCIe FC SAN card. Put another way, I'd ask Apple to confirm your particular card will work when carried forward, or wait for another responder here.
    Are the Solid State Drives user swappable? I was thinking perhaps a second SS drive with a cloned system as a backup.
    The Xserve drive bays are user swappable.
    Do you need SSD? If you're on your current box and are not already I/O bound, I'd guess not. And the newer HDD drives are faster, too.
    Can a boot partition be setup on this RAID setup as a backup to the Solid State Drive? What's the general feeling with regards SS Drives? Are they worth considering or should I just stick to 'conventional' technology?
    There is flexibility in what you boot your Xserve box off of, yes.
    I've been using SSD off and on for around twenty years in its various incarnations. Back when there were battery-backed DSSI drives (don't ask) and SCSI-1 RAM-based SSD gear.
    Whether SSD is appropriate really depends on what you need for I/O rates. An SSD is one piece of a puzzle of gaining faster I/O, and some of the other available pieces can be faster disks, striping (RAID-0), faster controllers, or faster controllers with bigger caches, or more and larger host caches. If your applications are disk-bound, definitely consider it. But if you're not being limited by the disk I/O speeds for your box, then SSD doesn't (usually) make as much sense.
    (There are other cases where SSD can be appropriate. SSD is more tolerant of getting bounced around than HDD, which is why you see it in laptops and in rough-service and mil-spec environments such as aircraft-mounted computer servers. With an HDD, one good head-slap can ruin your whole day. But I digress.)
    Will these XServes boot OK from a firewire device?
    AFAIK, yes. I haven't had to do that, but that's been a longstanding feature of Xserve boxes.
    Any advice, suggestions or personal experiences would be appreciated.
    I'd relax; the Nehalem-class processors are screaming-fast boxes, and with massively better I/O paths than Intel has had available before.
    Ensure your LAN is Gigabit Ethernet or IEEE 802.11n 5 GHz dual-slot, too. Upgrade from any existing 100 Mb LAN and from pre-n WiFi networking.

  • Can anybody answer these questions on AD and DNS?

    How can two Domain Controllers both be Primary DNS Servers for the same Domain? If a DC is also a Primary DNS Server, where are the DNS Records stored and how does the equivalent of a “Zone Transfer” happen?
    Is your system fault tolerant? Will AD work when one DC is down? Will DNS work? What modifications may you need on DNS clients?
    List some of the SRV records that are created in DNS for each DC.
    When a user is added to AD on the GUI Server, how is it copied to the other DC?
    Can you create a “Template User Account”, from which new accounts can be copied? Which main attributes are copied when you copy a user account? Which main attributes must be re-entered?
    In what order are Group Policies applied to a user on PC1, in an Organisational Unit OU1, in a Site SITE1, in domain DOM1? – from least significant to most significant. What are the implications of this for a delegated administrator of an OU in a Domain,
    regarding Domain Policies and OU Policies?
    An administrator wants to implement a new  software package throughout the organisation. However, the software will only run on PCs with 2GB RAM or more. Describe how an administrator can automate the software installation process.
    An administrator implements a new restrictive security policy throughout the organisation, and finds that she cannot perform some necessary functions. How can she ensure that the new security policy does not apply to her, so that she can do her job?
    Any help would be great! 

    You'll increase your chance of getting answers by limiting one question per post. I'd try asking them over here or try the support forum for the instructional you're working through.
    https://social.technet.microsoft.com/Forums/windowsserver/en-US/home?category=windowsserver&filter=alltypes&sort=lastpostdesc
    Regards, Dave Patrick ....
    Microsoft Certified Professional
    Microsoft MVP [Windows]
    Disclaimer: This posting is provided "AS IS" with no warranties or guarantees , and confers no rights.
