WhiteLevel consideration

The WhiteLevel tag specifies the clipping level of the raw data; a raw processor can determine which pixels are "valid" based on this specification. Unfortunately, some cameras are not consistent: the clipping level changes between camera copies, though usually not by much. Other cameras don't have a single clipping level but a clipping range.
The Adobe DNG converter does not analyze the image content to determine the clipping level but creates WhiteLevel as a constant, depending on the camera model and ISO. This is all right if the specification is reliable. The question is what the WhiteLevel value should be if the clipping level is not known exactly, or if it is not a single point.
Unfortunately, the DNG converter is inconsistent on this question. Examples: the Canon 40D's clipping level at ISO 100 is 13824 (several cameras sampled); the DNG converter writes WhiteLevel=13600, i.e. well under the true clipping level. On the other hand, the Canon 7D's clipping level is 13584, and the DNG converter again writes 13600, i.e. slightly above the true clipping level.
If WhiteLevel is lower than the actual clipping level, the raw processor incorrectly regards the unclipped pixels between WhiteLevel and the real clipping level as "clipped". This is a small problem. However, if WhiteLevel is greater than the real clipping level, the raw processor does not notice actual clipping at all; that is a big problem.
Obviously, ACR "solves" this problem in a simple way: by "underguessing" WhiteLevel, i.e. treating even pixels that are clearly below WhiteLevel as clipped (I don't know how large this tolerance is). However, this is a mess, to say the least. The DNG specification nowhere says that WhiteLevel should be treated one way or another; it states only that it is the fully saturated encoding level for the raw sample values.
A more prudent solution would be for the DNG converter to insert either the exact clipping value (which is well known in many cases) or a somewhat lower value, to make sure the raw processor treats all clipped pixels as such.
Another issue is that WhiteLevel is a single value for all pixels of the CFA of a mosaic-type sensor. For example, the Nikon D200's clipping points lie between 3982 and 4095, depending on the channel. One should think about extending the WhiteLevel specification to allow channel-dependent clipping levels; the sketch below illustrates the idea.
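To illustrate (this is my sketch, not anything the DNG converter or ACR actually does): a channel-aware clipping test in a raw processor could look like the following. The R and G levels are the D200 endpoints mentioned above; the B value and the function name are made up for the example.

```python
import numpy as np

# Per-channel clipping levels. R and G are the Nikon D200 endpoints quoted
# above (3982..4095); the B value is purely illustrative.
WHITE_LEVELS = {"R": 3982, "G": 4095, "B": 4050}

def clipped_mask(cfa, channel_map):
    """Flag pixels at or above their own channel's clipping level.

    cfa         -- 2-D array of raw CFA sample values
    channel_map -- same-shape array of "R"/"G"/"B" giving each pixel's channel
    """
    mask = np.zeros(cfa.shape, dtype=bool)
    for channel, level in WHITE_LEVELS.items():
        # Using the exact (or a slightly lowered) level guarantees that every
        # truly clipped pixel is flagged, at the cost of a few false positives.
        mask |= (channel_map == channel) & (cfa >= level)
    return mask
```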
Gabor

Hi,
As a general rule I apply Service Packs regardless of anything driving them. Fix Packs I tend to deploy if the customer is getting an error/problem that is addressed in the FP (and Hotfixes definitely only if the customer is getting the error).
The main reason I apply Service Packs regardless of whether they will fix something is that it is quite feasible for there to be undocumented fixes pushed through in a Service Pack; this is less likely to be the case for a Fix Pack/Hotfix.
The only deviation would be a brand new deployment, in which case I will often apply both Service Packs and Fix Packs.
Regards
Alex

Similar Messages

  • PXI 2527 & PXI 4071 - Questions about EMF considerations for high-accuracy measurements and EMF calibration schemes?

    Hi!
    I need to perform an in-depth analysis of the overall system accuracy for a proposed system. I'm well underway using the extensive documentation in the start-menu National Instruments\NI-DMM\ and ..\NI-Switch\ Documentation folders...
    While typing the question, I think I partially answered myself while cross-referencing NI documents... However, a couple of questions remain:
    If I connect a DMM to a 2-by-X arranged switch/mux, each DMM probe will see twice the listed internal "Differential thermal EMF", at a typical value of 2.5uV and a max value of less than 12uV (per relay). So would the total effect on the DMM uncertainty caused by the switch EMF be 2*2.5uV = 5uV? Or should these be added as RSS, sqrt(2.5^2 + 2.5^2), since you cannot know whether the two relays have the same EMF?
    Is there anything that can be done to characterize or account for this EMF (software cal, etc?)?
    For example, assuming the following:
    * Instruments and standards are powered on for several hours to allow thermal stability inside of the rack and enclosures
    * temperature in room outside of rack is constant
    Is there a reliable way of measuring/zeroing out the effect of system EMF? Could this be done by applying a high-quality, low-EMF short at the point where the DUT would normally be located, followed by a series of long-aperture voltage average measurements at the lowest DMM range, where the end result (say (+)8.9....uV) could be taken as a system calibration constant accurate to the specs of the DMM?
    What would the accuracy of the 4071 DMM be? Can I calculate it as 8.9uV +- 700.16nV using the 90-day spec, plus 150nV due to "Additional noise error", assuming an integration time of 1 (aperture) for ease of reading the chart and a multiplier of 15 for the 100mV range? (Is this equivalent to averaging a reading of 1 aperture 100 times?)
    So, given the above assumptions, would it be correct to say that I could characterize the system EMF to within 8.5uV +- [700.16nV (DMM cal data) + 0.025ppm*15 (RMS noise, assuming an aperture time of 100*100ms = 10s)] = +-[700.16nV + 37.5nV] = +-737.66nV? Or should the ppm accuracy uncertainties be RSSed: 8.5uV +- sqrt[700.16nV^2 + 37.5nV^2] = 8.5uV +- 701.16nV?
    As is evident from my line of thought above, I am not at all sure how to properly sum the uncertainties (I think you always RSS uncertainties from different sources?) and, more importantly, how to read and use the graph/table in the NI 4071 Specifications.pdf on page 3. What exactly does it entail to have an integration time larger than 1? Should I adjust the aperture time, or would it be more accurate to just leave the aperture at its default (100ms for the current range) and average multiple readings, say 10, to get a 10x-aperture equivalent? (A small sketch of the two summation options follows below.)
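    For what it's worth, here is a minimal Python sketch (not from any NI documentation, just illustrating the arithmetic being asked about) contrasting the worst-case linear sum with the RSS combination, using the numbers quoted above. Which one applies depends on whether the error sources are correlated or independent.

```python
import math

# Numbers quoted in the post, all converted to microvolts.
relay_emf = 2.5        # typical differential thermal EMF per relay (uV)
dmm_cal   = 0.70016    # 700.16 nV, 90-day DMM calibration uncertainty (uV)
dmm_noise = 0.0375     # 37.5 nV RMS noise term (uV)

# Worst case: linear sum, appropriate if the two relay EMFs are correlated.
switch_linear = 2 * relay_emf                      # 5.0 uV

# RSS, appropriate if the two relay EMFs are independent random errors.
switch_rss = math.hypot(relay_emf, relay_emf)      # ~3.54 uV

# The same choice applies to combining the DMM terms.
dmm_linear = dmm_cal + dmm_noise                   # ~0.738 uV (737.66 nV)
dmm_rss    = math.hypot(dmm_cal, dmm_noise)        # ~0.701 uV (701.16 nV)

print(switch_linear, switch_rss, dmm_linear, dmm_rss)
```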
    The text below is what was going to be the post, until I think I answered myself. I left it in as it is relevant to the problem above and includes what I hope are correct statements. If you are tired of reading now, just stop; if you are bored, feel free to comment on the section below as well.
    The problem I have is one of fully understanding part of this documentation. In particular, since a relay consists of (at least) 2 dissimilar metal junctions (as mentioned in the NI Switch help\Fundamentals\General Switching Considerations\Thermal EMF and Offset Voltage section), and because of the thermocouple effect (Seebeck voltage), it seems that there would be an offset voltage generated inside each of the relays at the point of the junction. It refers to the "Thermocouple Measurements" section (in the same help document) for further details, but this is where my confusion starts to creep in.
    In equation (1) it gives the expression for determining E_EMF, which for my application is what I care about, I think (see below for details on my application).
    What confuses me is this: if my goal is to determine, as accurately as possible, the overall uncertainty of a system consisting of a DMM and a switch module, do I use the "Differential thermal EMF" as found in the switch data sheet, or do I need to try to estimate the temperatures in the switch and use the equation?
    *MY answer to my own question:
    By carefully re-reading the example in the thermocouple section of the switch help, I realized that they calculate 2 EMFs: one for the internal switch, calculated as 2.5uV (given in the spec sheet of the switch as the typical value), and one for the actual thermocouple. I say actual because I think my initial confusion stems from the fact that the documentation talks about the relay/switch junctions as thermocouples in one section, then talks about an external "probe" thermocouple in the next, and I got them confused.
    As such, if I can ensure low temperatures inside the switch at the location of the junctions (by adequate ventilation and powering down latching relays), I should be able to use 2.5uV as my EMF from the switch module, or to be conservative, <12uV max (from the 2527 data sheet again).
    I guess now I have a hard time believing the 2.5uV typical value listed. They say the junctions in the relays are typically an iron-nickel alloy against a copper alloy. Those combinations are not explicitly listed in the documentation table of Seebeck coefficients, but even a very small coefficient, like 0.3uV/°C, adds up to 7.5uV at 25°C. I'm thinking maybe the table values in the NI documentation refer to the Seebeck values at 25°C?
    Project Engineer
    LabVIEW 2009
    Run LabVIEW on WinXP and Vista system.
    Used LabVIEW since May 2005
    Certifications: CLD and CPI certified
    Currently employed.

    Seebeck EMF needs temperature gradients; in your relays you hopefully have low temperature gradients. However, in a switching contact you can have all kinds of diffusion and 'funny' effects; keeping everything at the same temperature is the best you can do.
    Since you work with a multiplexer and with TCs, you need a good cold junction (for serious calibrations, at 0°C), and that is the right place for your short to measure the zero EMF. Another good test is to loop the 'hot junction' back to the cold junction and observe the residual EMF. Touching (or heating/cooling) the TC loop gives another number for the uncertainty calculation: the inhomogeneity of the TC material itself.
    A good source for TC knowledge:
    Manual on the use of thermocouples in temperature measurement,
    ASTM PCN: 28-012093-40,
    ISBN 0-8031-1466-4 
    (Page 1): 'Regardless of how many facts are presented herein and regardless of the percentage retained, all will be for naught unless one simple important fact is kept firmly in mind. The thermocouple reports only what it "feels." This may or may not be the temperature of interest.'
    Greetings from Germany
    Henrik
    LV since v3.1
    “ground” is a convenient fantasy
    '˙˙˙˙uıɐƃɐ lɐıp puɐ °06 ǝuoɥd ɹnoʎ uɹnʇ ǝsɐǝld 'ʎɹɐuıƃɐɯı sı pǝlɐıp ǝʌɐɥ noʎ ɹǝqɯnu ǝɥʇ'

  • MRP Type V1 - How to take open Purchase Orders into consideration

    Hi, I'm having a problem with a material which is set up as MRP type V1.
    When running MRP using NEUPL, a purchase requisition is created as expected, based on the re-order point on the material master. At this point the Purchase Req is converted to a PO, but the delivery date may be pushed out before saving. Running MRP after pushing out the delivery date on the Purchase Order generates another Purchase Requisition with a delivery date before the PO. Is there a way to get MRP to take the open PO into consideration rather than suggesting another PO?
    Our planning horizon is set to 365 days.
    Is it possible to get MRP to see the PO and not suggest a req ?
    Thanks for any help

    I have checked the availability check, and we are using one that is ticked to include purchase orders; this does not seem to have any effect.
    The lead time on the item is 5 on the material master and 15 on the info record, which is not that high?
    and when running MB02 for the item I am running with Create Purchase Requisition = 1
    Is what I'm expecting unrealistic? I.e., that for re-order point materials, MRP only takes the lead times into account, and if there is a Purchase Order with a delivery date outside of the lead time, the system sees that we need to reorder and generates a Req within the lead time without looking further out to see what's on order?

  • Considerations should be taken into account when designing an RTF Template

    What overall considerations should be taken into account when designing an RTF template?
    For example, how should the tables be set up? When the report runs, I don't want the fields to leave the positions defined during design.

    Hi,
    from my point of view, such general questions are not answered here...
    One note:
    I try to get the data first, "load it" (for example) into Word, and only then generate the tables using the wizards. This gives the desired effect.
    Eldar A.

  • After updating our phones with the latest update, we have noticed that the battery life has diminished considerably. I now have to charge my phone overnight and two or three times a day. Prior to the update, my battery life lasted me at least a full day

    After updating our phones with the latest update, we have noticed that the battery life has diminished considerably. I now have to charge my phone overnight and two or three times a day. Prior to the update, my battery life lasted me at least a full day. We have several phones in our office, and the ones that have updated (4) now have issues holding a charge/battery life. I really liked this phone and cannot believe that you are now going to charge us $79 a battery to fix what is most definitely a problem with your latest update. I know other people outside of our company who are having the same problem. Not to mention, when I called AT&T it was confirmed to me that they are hearing the same issue and then some from other customers as well. Your own people, once I talked to them earlier today, told me they are showing a history of issues that show up after the update was put in place. Of course she tried to say, "Maybe the age of the battery and the update both contributed". Whatever.
    I want you all to know how disappointed I am in your company for the handling of this issue. I always thought "Apple" was the brand I didn't have to worry about having any type of issue with, and that you would stand behind your product 100%. Now I am not so sure.
    I would love to hear back from your company on how you perceive the issue with all of these phones that had no issues prior to the update and how after the update THEY ARE NOW having issues. I do not believe this was an issue due to the age of a battery, and it was pretty lame to say so. It was fine and now it's not.
    Please feel free to contact me and help me figure out a way to pay for all of the batteries that will be needed for our company to continue doing business as needed.
    Thank you.

    Sorry, this is a user-to-user technical forum. There is NO APPLE here, as stated in the terms of use when you signed up for this forum.
    Here are some battery tips:
    http://osxdaily.com/2013/09/19/ios-7-battery-life-fix/
    http://www.apple.com/batteries/iphone.html

  • How to use an AirPort Time Capsule on a Dell portable PC with Windows 7, taking into consideration that Time Machine doesn't run on Windows?

    How to use an AirPort Time Capsule on a Dell portable PC with Windows 7, taking into consideration that Time Machine doesn't run on Windows?

    TM does not work like that.
    If you want files to use later, do not use TM.
    Or do not use TM to the same location. Plug a USB drive into the computer and use that as the target for the permanent backup.
    Read some details of how TM works so you understand what it will do.
    http://pondini.org/TM/Works.html
    Use a clone or different software for a permanent backup.
    http://pondini.org/TM/Clones.html
    How to use TC
    http://pondini.org/TM/Time_Capsule.html
    This is helpful.. particularly Q3.
    Why you don't want to use TM.
    Q20 here. http://pondini.org/TM/FAQ.html

  • My husband and I both have email accounts on my iMac using Mavericks. Often we both receive the same emails. If one of us opens the email on their account, it automatically considers the email as read on both accounts. I would like to change this.

    My husband and I both have email accounts on my iMac using Mavericks. Often we receive the same emails. Now when one of us opens an email that has been sent to both of us, Mavericks considers the email as read on both accounts. Mountain Lion did not operate this way. Can I change this? Thanks

    By the sounds of it you have linked email accounts; if so, this means that you will both receive the majority of the same emails, and if you read one of them on your account (which is linked with your husband's) it will appear read on both, because they are linked. With the majority of email providers these days, your emails live (simplified) in the cloud, you could say, meaning that if you read or delete shared emails then the same happens on both accounts rather than just yours.
    I hope all that made sense; sorry that I couldn't be of any more assistance. If you haven't already, try googling your problem - you may find some answers there.

  • For Your Consideration: Ultimate Lync 2010 client install with SCCM 2007

    While the subject of my post may be very presumptuous, I submit the following for your consideration to answer the often-asked question about how to deploy Lync 2010 client with SCCM.
    Background:
    I cannot understand why Microsoft made the Lync install so darned confusing, complex, and convoluted.
    After our Lync 2010 FE server was up and running and all users migrated off our OCS server to the Lync environment, I spent about a month and a half trying to figure out how to:
    1.  Uninstall the OCS 2007 R2 client
    2.  Install all prerequisites for the Lync client
    3.  Install Lync on all user workstations silently.
    While researching this, the simple answer I kept seeing given to this question was, "just use the .exe with the right switches according to the TechNet article here: http://technet.microsoft.com/en-us/library/gg425733.aspx". Well, my response is, I tried that, and while the program installed correctly when pushed through SCCM, because I was doing it under an administrative account (i.e. the SYSTEM account), due to our users not having admin rights, when the install was done Lync would automatically start up, but in the SYSTEM context, so the user couldn't see it was running; they'd go to run it and it wouldn't run for them. I was unable to find any switch or option to prevent the automatic launch. I suppose the simple solution to that would be to have the user reboot, but that's unnecessarily disruptive and was contrary to the desire to make this a silent install.
    The next simplest answer I saw was, "extract the MSI and use that with the right switches". The problem with that is that the MSI by itself doesn't remove the OCS client or install the prerequisites, and it also either requires a registry change to even allow the MSI to be used, or a hacked MSI that bypasses the registry key check. I tried to put a package together to uninstall OCS, install the prereqs, and use a hacked MSI, but I never could get the MSI hacked properly. The other problem I ran into was detecting whether the OCS client was running in a predictable way, so I could terminate it, properly uninstall it, and then do the rest of the installations. It was this problem that ultimately led me to the solution I'm about to detail, which has worked marvellously for us.
    Solution:
    As I said before, when I first looked at this problem, I started by building a typical software deployment package (Computer Management -> Software Distribution -> Packages) and then created the programs to do the install. My first attempt was just with the .exe file provided as-is by Microsoft, using the switches they document in the link above for an IT-managed installation of Lync, and... well, the end result wasn't quite as desirable as hoped. So my next attempt was to extract all the prerequisite files and the Lync install MSI (both for x86 and x64), attempt to hack the MSI to get around the "UseMSIForLyncInstallation" registry key, and write the command lines to terminate OCS and uninstall it.
    In the past, when I had an install to do with SCCM that also required uninstalling an older version of a given application, I typically used the program-chaining technique. That's where you have, for example, 3 or more programs that run in a package in sequence: you set Program 3 to run after Program 2 and Program 2 to run after Program 1, so you get Programs 1-2-3 running in that order. So I created programs to 1) kill Communicator.exe, 2) uninstall Communicator 2007 R2 via "msiexec /uninstall {GUID}", 3) install Silverlight, 4) install Visual C++ x86, 5) optionally install Visual C++ x64, and 6) install the Lync x86 or x64 client. That final step was always the point of failure, because I couldn't get the hacked MSI for the Lync client install to work. I also realized that if Communicator wasn't running when the deployment started, that step would fail and cause the whole process to bail out with an error. That's one of the downsides of program-chaining: if one step fails, SCCM completely bails on the deployment. This is what led me to the key to my solution: TASK SEQUENCES.
    I'm not sure how many people out there look in the "Operating System Deployment" area of SCCM 2007 where task sequences normally live, but I also wonder how many people realize that task sequences can be used for more than just operating system deployments. One of the biggest advantages of a task sequence is that you can set a step to ignore an error condition, such as when you try to terminate a process that isn't running. Another advantage is that task sequences have some very good built-in conditionals that you can apply to steps, for example having the sequence skip a step if a certain application (or a specific version of an application) is not installed on the machine. Both of those advantages factor heavily into my solution.
    OK, for those who already think this is "TL;DR", here's the step-by-step of how to do this:
    First, you need to extract all the files from LyncSetup.exe for the architectures you need. We have a mix of Windows XP and Windows 7 64-bit, so my solution takes both possibilities into account. To extract the files, just start the .exe as if you were going to install it, but when the first dialog comes up, navigate to "%programfiles%\OCSetup" and copy everything there to a new location. The main files you need are: Silverlight.exe, vcredist.exe (the x64 LyncSetup.exe includes both the x86 and x64 Visual C++ runtimes; you need them both, just rename them to differentiate), and Lync.msi (this also comes in x86 and x64 flavors, so if you have a mix of architectures in your environment, get both and either put them into their own directories or rename them to reflect the architecture).
    For my setup, I extracted the files for the x86 and x64 clients and dumped each set into a directory named after the architecture.
    Next, move these files to a directory on your SCCM file server, whatever it might be that you deploy from; in our case, it was just another volume on our central site server. In the SCCM console, go to Computer Management -> Software Distribution -> Packages, create a new package, call it something meaningful, and point it at the directory on your SCCM file server with the source files.
    Now you need to create 3 to 5 programs inside the package:
    1.  Name: Silverlight
       Command Line: x86\Silverlight.exe /q     (remember, inside my main Lync install folder on my distribution point, I have an x86 directory for the files from the x86 installer and an x64 folder for the files from the x64 installer. 
    The fact is the Silverlight installer is the same in both, so you only need one of them.)
       On the Environment tab:  Program can run whether or not a user is logged in, runs with administrative rights, Runs with UNC name
       On the Advanced tab:  Suppress program notifications
       All other options leave default.
    2.  Name:  Visual C++ x86
        Command Line:  x86\vcredist_x86.exe /q
       On the Requirements tab: Click the radio button next to "This program can run only on specified client platforms:" and then check off the desired x86 clients.
       Environment and Advanced tabs:  same as Silverlight
       (If you have only x64 clients in your environment, change all x86 references to x64.  If you have a mixed environment, create another program identical to this one, replacing references to x86 with x64.)
    3.  Name:  Lync x86
        Command Line:  msiexec /qn /i x86\Lync.msi OCSETUPDIR="C:\Program Files\Microsoft Lync"  (The OCSETUPDIR fixes the issue with the Lync client wanting to "reinstall" itself every time it starts up)
        Requirements, Environment, and Advanced tabs:  Same as with Visual C++ and Silverlight
        (Same deal as above if you have all x64 clients or a mix, either change this program to reflect or make a second program if necessary)
    Now you need to make the Task Sequence.  Go to Computer Management -> Operating System Deployment -> Task Sequences.  Under the Actions pane, click New -> Task Sequence.  In the Create a New Task Sequence dialog, choose "create a
    new custom task sequence", Next, enter a meaningful name for the task sequence like "Install Microsoft Lync", Next, Next, Close.
    The task sequence will have up to 12 steps in it.  I'll break the steps down into 3 phases, the prereqs phase, uninstall OCS phase, and then Lync install phase.
    Prereqs Phase:
    These are the easiest of the steps to do.  Highlight the task sequence and then in the Actions pane, click Edit.
    1.  Click Add -> General -> Install Software.  Name: "Install Microsoft Silverlight".  Select "Install a single application", browse to the Lync package created earlier and then select the Silverlight program.
    2.  Add -> General -> Install Software.  Name: "Install Microsoft Visual C++ 2008 x86".  Install Single Application, browse to the Lync package, select the Visual C++ x86 package.
    As before, if you're an all-x64 environment, replace the x86 references with x64.  If you have a mixed environment, repeat step 2, replacing x86 with x64.
    3.  Add -> General -> Run Command Line.  Name: "Enable Lync Installation".  This step gets around the UseMSIForLyncInstallation registry requirement.  The Lync client MSI simply looks for the presence of this key when it runs, so
    we'll inject it into the registry now and it doesn't require a reboot or anything.  It just has to be there before the client MSI starts.
    Command Line: reg add "hklm\Software\Policies\Microsoft\Communicator" /v UseMSIForLyncInstallation /t REG_DWORD /d 1 /f
    Uninstall OCS Phase:
    This part consists of up to 6 Run Command Line steps.  (Add -> General -> Run Command Line)
    4.  Name: "Terminate Communicator".  Command Line: "taskkill /f /im communicator.exe".  On the Options page, check the box next to "Continue on error".  This will terminate the Communicator process if it's running, and if it's not, it'll
    ignore the error.
    5.  Name: "Terminate Outlook".  Command Line: "taskkill /f /im OUTLOOK.exe".  Check the "Continue on error" on the Options page here too.  Communicator 2007 hooks into Outlook, so if you don't kill Outlook, it might prompt for a reboot
    because components are in use.
    (NOTE:  If necessary, you could also add another step that terminates Internet Explorer because Communicator does hook into IE and without killing IE, it might require a restart after uninstalling Communicator in the next steps.  I didn't run into
    this in my environment, though.  Just repeat step 5, but replace OUTLOOK.EXE with IEXPLORE.EXE)
    6.  Name: "Uninstall Microsoft Office Communicator 2007".  Command Line: "msiexec.exe /qn /uninstall {E5BA0430-919F-46DD-B656-0796F8A5ADFF} /norestart" On the Options page:  Add Condition ->  Installed Software -> Browse to the
    Office Communicator 2007 non-R2 MSI -> select "Match this specific product (Product Code and Upgrade Code)".
    7.  Name:  "Uninstall Microsoft Office Communicator 2007 R2".  Command Line:  "msiexec.exe /qn /uninstall {0D1CBBB9-F4A8-45B6-95E7-202BA61D7AF4} /norestart".  On the Options page:  Add Condition -> Installed Software ->
    Browse to the Office Communicator 2007 R2 MSI -> select "Match any version of this product (Upgrade Code Only)".
    SIDEBAR
    OK, I need to stop here and explain steps 6 and 7 in more detail because it was a gotcha that bit me after I'd already started deploying Lync with this task sequence.  I found out after I'd been deploying for a while that a tech in one of our remote
    offices was reinstalling machines and putting the Communicator 2007 non-R2 client on instead of the R2 client, and my task sequence was expecting R2, mostly because I thought we didn't have any non-R2 clients out there.  So, at first I just had our Help
    Desk people do those installs manually, but later on decided to add support for this possibility into my task sequence.  Now, when you normally uninstall something with msiexec, you would use the Product Code GUID in the command, as you see in steps 6
    and 7.  All applications have a Product Code that's unique to a specific version of an application, but applications also have an Upgrade Code GUID that is unique for an application but common across versions.  This is part of how Windows knows that
    Application X version 1.2 is an upgrade to Application X version 1.1, i.e. Application X would have a common Upgrade Code, but the Product Code would differ between versions 1.1 and 1.2.
    The complication comes in that Communicator 2007 and Communicator 2007 R2 have a common Upgrade Code but different Product Codes, and the "msiexec /uninstall" command uses the Product Code, not the Upgrade Code. This means that if I didn't have step 6 to catch the non-R2 clients, step 7 would be fine for the R2 clients but fail on non-R2 clients, because the Product Code in the msiexec command would be wrong. Luckily, we only had one version of the non-R2 client to deal with, versus 4 or 5 versions of the R2 client. So I put the command to remove Communicator 2007 non-R2 first and checked for that specific product and version on the machine. If it was present, it uninstalled it and then skipped over the R2 step. If non-R2 was not present, it skipped that step and instead uninstalled any version of the R2 client. It's important that steps 6 and 7 are in the order they are, because if you swap them you'd have the same outcome as if step 6 weren't there. What if neither is on the machine? Well, the collection this was targeted to included only machines with some version of Communicator 2007 installed, so this was not a problem. It was assumed that the machines had some version of Communicator on them. (The sketch below shows a quick way to check which ProductCode a machine actually has.)
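    Not part of the original procedure, but if you want to verify which Communicator ProductCode a given machine actually has before picking the msiexec uninstall string, a minimal Python sketch like this can help. It reads the standard Uninstall registry key; the key path is real, but note that on 64-bit Windows you may also need to check the Wow6432Node view for 32-bit installs.

```python
import winreg  # Windows-only, Python standard library

# The standard Uninstall key; for MSI installs the subkey name is usually
# the ProductCode GUID.
UNINSTALL = r"SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, UNINSTALL) as root:
    count = winreg.QueryInfoKey(root)[0]      # number of subkeys
    for i in range(count):
        subkey_name = winreg.EnumKey(root, i)
        try:
            with winreg.OpenKey(root, subkey_name) as sub:
                display, _ = winreg.QueryValueEx(sub, "DisplayName")
        except OSError:
            continue                          # entry has no DisplayName, skip it
        if "Communicator" in display:
            print(subkey_name, display)       # e.g. "{GUID} Microsoft Office Communicator 2007 R2"
```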
    8.  Name:  "Uninstall Conferencing Add-In for Outlook".  Command Line:  "msiexec.exe /qn /uninstall {730000A1-6206-4597-966F-953827FC40F7} /norestart".  Check the "Continue on error" on the Options Page and then Add Condition ->
    Installed Software -> Browse to the MSI for this optional component and set it to match any version of the product.  If you don't use this in your environment, you can omit this step.
    9.  Name:  "Uninstall Live Meeting 2007".  Command Line:  "msiexec.exe /qn /uninstall {69CEBEF8-52AA-4436-A3C9-684AF57B0307} /norestart".  Check the "Continue on error" on the Options Page and then Add Condition -> Installed Software
    -> Browse to the MSI for this optional component and set it to match any version of the product.  If you don't use this in your environment, you can omit this step.
    Install Lync phase:
    Now, finally the main event, and it's pretty simple:
    10.  Click Add -> General -> Install Software.  Name: "Install Microsoft Lync 2010 x86".  Select "Install a single application", browse to the Lync package created earlier and then select the "Lync x86" program.  As before, if you
    only have x64 in your environment, replace the x86 with x64, or if you have a mixed environment, copy this step, replacing x86 references with x64.
    And the task sequence is done!  The final thing you need to do now is highlight the task, click Advertise in the Actions pane, and deploy it to a collection like you would with any other software distribution advertisement.  Go get a beer!
    Some final notes to keep in mind:
    1.  You can't make a task sequence totally silent...easily.  Users will get balloon notifications that an application is available to install.  The notifications cannot be suppressed through the GUI.  I've found scripts that supposedly
    hack the advertisement to make it be silent, but neither of them worked for me.  It was OK, though because in the end we wanted users, especially laptop users, to be able to pick a convenient time to do the upgrade.  The task sequence will appear
    in the "Add/Remove Programs" or "Programs and Features" Control Panel.  You can still do mandatory assignments to force the install to happen, you just can't make it totally silent.  On the plus side, the user shouldn't have to reboot at any point
    during or after the install!
    2.  In the advertisement setup, you can optionally show the task sequence progress.  I've configured the individual installs in this process to be silent, however, I did show the user the task sequence progress.  This means instead of seeing
    5 or 6 Installer windows pop up and go away, the user will have a single progress bar with the name of the step that is executing.
    3.  One step that I didn't consider when I actually did this was starting the Lync client as the user when the install was complete.  The user either had to start the client manually or just let it start on its own at the next logon.  However,
    while I was writing this, I realized that I could possibly start the client after installing by making another Program in the Lync Package with a command line that was along the lines of "%programfiles%\Microsoft Lync\communicator.exe" and then in the Environment
    tab, set it to "Run with user's rights" "only when a user is logged on".
    4.  My first revision of this task sequence had the Prereqs phase happening after the OCS uninstall phase, but I kept running into problems where the Silverlight installer would throw some bizarre error that it couldn't open a window or something wacky, and it would fail. The problem was that I couldn't re-run the task sequence, because it would now fail since OCS had already been uninstalled; that's why the Prereqs happen first. It ran much more reliably this way.
    5.  For some reason that baffles me, when I'd check the logs on the site server to monitor the deployment, I'd frequently see situations where the task sequence would start on a given machine, complete successfully, almost immediately start again, and then fail. I'm not sure what causes that, but I suspect either users are going to Add/Remove Programs and double-clicking the Add button to start the install instead of just single-clicking it, or the notification that they have software to install doesn't go away immediately, or Lync doesn't start up right after the install, so they think it didn't take the first time and try it a second time.
    I hope this helps some of you SCCM and Lync admins out there!

    On Step 8 I found multiple product codes for the Conferencing Add-In for Outlook.  Here's a list of the ones I found in the machines on my network:
    {987CAEDE-EB67-4D5A-B0C0-AE0640A17B5F}
    {2BB9B2F5-79E7-4220-B903-22E849100547}
    {13BEAC7C-69C1-4A9E-89A3-D5F311DE2B69}
    {C5586971-E3A9-432A-93B7-D1D0EF076764}
    I'm sure there are others; just be mindful that this add-in will have numerous product codes.

  • E-Recruitment 6.0 Architecture considerations

    I am looking for some documentation which covers the points to be considered while designing the E-rec architecture. Target versions :
    E-rec 6.0,
    ERP/HCM : ERP 6.0 EhP4 on NW7.0 EhP1
    ESS/MSS portal (in intranet): NW7.0 SP??
    Existing SAP portal (with KM) : NW7.0 SP18
    Here are our current considerations :
    1) Standalone vs. Integrated: We are deciding to go for the integrated approach, i.e. E-rec and ERP/HCM on the same instance. With this we are accepting the challenges of version dependencies and taking the advantage of fewer systems and therefore lower administration/configuration (RFC/ALE) cost.
    2) Frontend (UI) or without frontend: Here, if the role of the frontend is just a proxy, then we can use ISA or Apache instead of an additional SAP instance (landscape). With ISA or Apache we can maintain the reverse proxy rules, so the external candidate will not see the hostname and other sensitive details of the URL.
    Can someone please throw some light on this? Can someone point me to the right documentation?
    Thanks in advance.
    Shrikrishna

    Thanks Sunny for your reply.
    My question was around necessity of E-recruitment frontend.
    My understanding is as follows:
    The E-recruitment frontend is an ABAP+Java (NW 7.0) installation with the E-recruitment add-on (the same as the E-recruitment backend install). The E-recruitment frontend (UI) contains user data and the Web Dynpro repositories required for the E-recruitment application. So this means every E-recruitment implementation needs this UI, the Web Dynpros, and the user data. This frontend piece will be accessed by external candidates and non-registered recruiters (anonymous users) from the internet, and so it MUST be in the DMZ (in other words, accessible from the internet).
    The question is: when we use the integrated E-recruitment option (i.e. install ERP 6.0 with HCM and the E-recruitment backend/frontend for internal candidates and ALSO the E-RECRUITMENT FRONTEND for EXTERNAL CANDIDATES on the same instance), then we are probably exposing our ERP or HCM data to the internet. Am I right? Can this be addressed by proxies like Apache? Are there any BIG security concerns? Do customers go with such an approach?
    Or we could deploy the E-recruitment frontend for external candidates entirely on the SAP Enterprise Portal, but that does not have any ABAP stack. Any issues?
    Thanks,
    Krishna

  • Images (w/correct meta data) are in catalog and on disk, but LR 5.7 considers them new on Import

    For reasons explained below, I want to try to re-import all my images into LR, hoping that none/few are in fact considered new and imported. Yet for some folders, LR is apparently unable to detect that my source images are already in the catalog and on disk, and that the source file metadata matches what LR knows about the images. When I click an image in LR and Show in Finder, I do see the imported image on disk. I can edit the image in the Develop module. So it seems good, but all is not well. Sorry for the long post here, but I wanted to provide as much info as I could, as I am really seeking your help, which I'd very much appreciate.
    Here are some screen shots that illustrate the problem:
    Finder contents of the original images
    LR folder hierarchy
    an image as seen in LR
    Finder content of external LR copy of images
    import showing 10 "new" photos
    The original images ... (I'm not sure why the file date is April 2001 but the actual image date is January 2011; I may have just used the wrong date on the folder name?)
    The LR folder hierarchy ...
    An image as seen in LR ...
    The external folder containing the images in the LR library
    But on import of the original source folder, LR sees 10 "new" photos ...
    I tried "Synchronize Folder ..." on this particular folder, and it simply hangs half-way through as seen in the screen shot below.   IS THIS AN LR BUG?   This is really odd, since "Synchronize Folder ..." on the top-level folder completes quickly.
    I have a spreadsheet of the EXIF data for the original files and those created by LR. (I extracted this info using the excellent and free pyExifToolGui graphical frontend for the command-line tool ExifTool by Phil Harvey.) Almost all of the EXIF data is the same, but LR has added some additional info to the files after import, including (of course) keywords. However, I would not have expected the differences I found to enter into the duplicate detection scheme. (I didn't see a way to attach the spreadsheet to this posting, as it's not an "image".)
    I'm running LR 5.7 on a 27" iMac with Yosemite 10.10.2, having used LR since LR2. I have all my original images (JPEGs and RAWs of various flavors) on my internal drive on the Mac. To me this is like saving all my memory cards and never re-using them. Fortunately, these files are backed up several ways. I import these images (copying RAWs as DNG) into LR with a renaming scheme that includes the import number, original file creation date, and original file name. There should be one LR folder for each original source folder, with the identical folder name (usually a place and date). I store the LR catalog and imported images on an external drive. Amazingly and unfortunately, my external drive failed, as did its twin, a same-make/size drive that I used as a backup with Carbon Copy Cloner. I used Data Rescue 4 to recover to a new disk what I thought was almost all of the files on the external drive.
    So I thought all would be well, but when I tried "Synchronize Folder" using the top-level folder of my catalog, the dialog box appeared saying there were over 1000 "New" photos that had not been imported. This made me suspicious that I had failed to recover everything. But actually things are much worse than I thought. I have these counts of images:
    80,0061 files in 217 folders for my original source files (some of these may be (temporary?) copies that I actually don't want to import into LR)
    51,780 files in 187 folders on my external drive containing the LR photo library
    49,254 images in the top-level folder in the LR catalog (why different from the external file count?)
    35,332 images found during import of the top-level folder containing original images
    22,560 images found as "new" by LR during import
    1,074 "new" images reported by Synchronize Folder ... on the top-level folder in the catalog; different from import count
    Clearly things are badly out of sync.   I'd like to be sure I have all my images in LR, but none duplicated.   Thus, I want to try to import the entire library and have LR tell me which photos are new.  I have over 200 folders in LR.  I am now proceeding to try importing each folder, one at a time, to try to reconcile the differences and import the truly missing images.  This will be painful.  And it may not be enough to fully resolve the above discrepancies.
    Does anyone have any ideas or suggestions?  I'd really appreciate your help!
    Ken

    Thanks for being on the case, dj!   As you'll see below, YOU WERE RIGHT!      But I am confused.
        1. Does the same problem exist if you try to import (not synchronize) from that folder? In other words, does import improperly think these are not duplicates?
    YES. Import improperly thinks they are NOT duplicates, but they are in fact the same image (though apparently not the EXACT same bytes on disk!)
        2. According to the documentation, a photo is considered a duplicate "if it has the same, original filename; the same Exif capture date and time; and the same file size."
    This is my understanding too.
        3. Can you manually confirm that, for an example photo, that by examining the photo in Lightroom and the photo you are trying to synchronize/import, that these three items are identical?
    NO, I CAN'T! The ORIGINAL file name (in the source folder) is the SAME as it was when I first imported that folder. That name is used as part of the renaming process using a custom template. However, the file SIZES are different. Here is the Finder Get Info for both files. Initially, they appeared to be the same SIZE, 253KB, looking at the summary. But if you look at the exact byte count, the file sizes are DIFFERENT: 252,632 bytes for the original file and 252,883 bytes for the already-imported file:
    This difference alone is enough to explain why LR does not consider the file a duplicate.
    Furthermore, there IS one small difference in the EXIF data regarding dates ... the DateTimeOriginal:
    ORIGINAL name: P5110178.JPG
        CreateDate:        2001:05:11 15:27:18
        DateTimeDigitized: 2001:05:11 15:27:18-07:00
        DateTimeOriginal:  2001:01:17 11:29:00
        FileModifyDate:    2011:01:17 11:29:00-07:00
        ModifyDate:        2005:04:24 14:41:05
    After LR rename: KRJ_0002_010511_P5110178.JPG
        CreateDate:        2001:05:11 15:27:18
        DateTimeDigitized: 2001:05:11 15:27:18-07:00
        DateTimeOriginal:  2001:05:11 15:27:18
        FileModifyDate:    2011:01:17 11:29:02-07:00
        ModifyDate:        2005:04:24 14:41:05
    So ... now I see TWO reasons why LR doesn't consider these duplicates. Though the file NAME is the same (as the original), the file sizes ARE slightly different, and the EXIF "DateTimeOriginal" is DIFFERENT. Therefore, LR considers them NOT duplicates. (The sketch below is a quick way to check these fields outside LR.)
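    To double-check those three criteria outside of LR, here is a minimal Python sketch. It assumes ExifTool (the Phil Harvey tool mentioned earlier) is on your PATH; also note it compares the files' current basenames, whereas LR matches against the original filename stored in its catalog, so treat the name field accordingly.

```python
import json
import os
import subprocess

def lr_duplicate_key(path):
    """Return the three fields LR's documentation says drive duplicate
    detection: filename, EXIF capture date/time, and file size."""
    out = subprocess.run(
        ["exiftool", "-json", "-DateTimeOriginal", path],
        capture_output=True, text=True, check=True)
    meta = json.loads(out.stdout)[0]
    return (os.path.basename(path),       # current name, not LR's stored original name
            meta.get("DateTimeOriginal"),
            os.path.getsize(path))

a = lr_duplicate_key("P5110178.JPG")
b = lr_duplicate_key("KRJ_0002_010511_P5110178.JPG")
print(a)
print(b)   # any mismatch in date or size explains why LR sees a "new" photo
```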
         4a. With regards to the screen captures of your images and operating system folder, I do not see that the filename is the same; I see the file names are different. Is that because you renamed the photos in Lightroom (either during import or afterwards)?
    I renamed the file on import using a custom template ...
            4b. Can you show a screen capture of this image that shows the original file name in the Lightroom metadata panel (it appears when the dropdown is set to EXIF and IPTC)?
    SO ....
    The METADATA shown by LR does NOT include the ORIGINAL file name (though I think I have seen it displayed for other files?). The file SIZE in the LR metadata panel (246.96 KB) is different from what Finder reports (254 KB). There are three "date" fields in the LR metadata, and five that I've extracted from the EXIF data. I'm not sure which EXIF date corresponds to the "Date Time" shown in the LR metadata.
    I don't understand how these differences arose. I did not touch the original file outside LR. LR is the only program that touches the file it copied to my external drive during import (though it was RECOVERED from a failed disk by Data Rescue 4).
    NOW ...
    I understand WHY LR considers the files different (but not how they came to be so).  The question now is WHAT DO I DO ABOUT IT?   Is there any tool I can use to adjust the original (or imported) file's SIZE and EXIF data to match the file LR has?  Any way to override or change how LR does duplicate detection?
    Thanks so very much, dj.   Any ideas on how to get LR to ignore these (minor) differences would be hugely helpful.

  • Any special considerations using Win 7 in Mavericks with Parallels

    Just wondering - I'm going to, I think, move up to Win 7 Home Premium because XP is no longer supported; it's on my Boot Camp partition and I use that with Parallels.
    Any problems or special considerations when using Parallels with Win 7? I'm on Mavericks and I don't have any problems with XP.
    I do seem to have one problem - all the versions of Win 7 out there are something called OEM system builds, and I'm not sure exactly what that means, but I guess it's not the same as a version that you would buy at a store and could move from one machine to another; can anyone shed more light on this and offer any advice? I'm not sure that I want to go to Windows 8. I use Windows for work and that's it; otherwise I don't want to deal with it, but I have to for my job. I'd stick with XP if I could, but no such luck; I can't seem to use the Remote Desktop Connection anymore.
    In any event, I can't seem to find a 'regular' version of Win 7 Home Premium; all the ones I've come across are system-build versions, but they are fairly cheap - about $100 or so. Should I just go with Windows 8 and pay the extra money for the latest OS, or is it not worth it, in your opinion?

    So just one more question - the graphics requirement for Win 7 is:
    DirectX 9 graphics device with WDDM 1.0 or higher driver
    My MB Pro is a 13" mid-2010:
    Chipset Model: NVIDIA GeForce 320M
    2.4 GHz Intel Core 2 Duo (which will work)
    The HD is more than large enough.
    I assume this will work fine with Windows 7? I don't know whether the NVIDIA card meets or exceeds the DirectX 9 requirement.

  • Goodnight. as you know apple has evolved considerably until today but there are people who do not have ability to buy new products. in my case I would like to know why the apple do not let the iphone 3g update to iOS 4.3 as are several applications that c

    Goodnight. As you know, Apple has evolved considerably up to today, but there are people who do not have the ability to buy new products. In my case, I would like to know why Apple does not let the iPhone 3G update to iOS 4.3, as there are several applications that I cannot put on my iPhone 3G for this reason. Why does Apple not make a repository with applications for iOS 4.2.1? Or else, could an update to iOS 4.3 not be made to work on the iPhone 3G?

    This has nothing to do with Apple, as Apple does not make the apps.
    This is the decision of the app maker.
    Contact those app makers and ask them why they do this.

  • Sales order is not taking shipping transit lead time into consideration

    Hi,
    We have following scenario -
    1. A sales order is created in the AT organization.
    2. There is no on-hand quantity in any of the organizations, including AT.
    3. The scheduled ship date for the sales order comes out as order creation date + item lead time = 15-Sep-2011.
    4. A PO for sufficient quantity is available at the UK org with a need-by date of 01-Jun-2011.
    5. The post-processing item lead time for this item is 1 day.
    6. An intransit shipment is set up between UK and AT.
    7. Shipment transit lead times are also set up as 3 days between the UK and AT locations.
    8. The ASCP plan is run.
    9. The sales order is unscheduled and rescheduled via the API MSC_ATP_PUB, which takes the ATP details from UK (based on the sourcing rules). The scheduled ship date comes out as 02-Jun-2011 (PO need-by date + 1 day of post-processing).
    10. The expectation is that the API should also take the transit lead time into consideration while calculating the ATP dates, and hence the scheduled ship date should come out as 05-Jun-2011 (PO need-by date + transit lead time + 1 day of post-processing; a small worked example follows below).
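    Just to make the expected arithmetic in point 10 concrete, a tiny sketch using the dates from this post:

```python
from datetime import date, timedelta

po_need_by      = date(2011, 6, 1)   # PO need-by date at the UK org
post_processing = timedelta(days=1)  # post-processing lead time
transit         = timedelta(days=3)  # UK -> AT transit lead time

observed = po_need_by + post_processing            # 2011-06-02, what the API returns
expected = po_need_by + transit + post_processing  # 2011-06-05, what is expected
print(observed, expected)
```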
    Please let me know
    a. whether this is standard Oracle functionality or not.
    b. if it is standard functionality, what required setups are missing for it to be achieved.
    UAT is held up for this issue. Please help urgently.
    Regards.

    Oracle is supposed to take the in-transit time into consideration.
    Are you running a constrained plan or an unconstrained one?
    Make sure you defined the inter-org network as described in http://download.oracle.com/docs/cd/A60725_05/html/comnls/us/inv/shipne01.htm.
    or go to Inventory > Setup > Organization > Inter-Location Transit Times and enter it there.
    Sandeep Gandhi

  • Goods Receipt accounting document considers condition price from PO

    Hi,
    During Goods Receipt with reference to an old PO, the accounting document created considers the old overheads condition price from the PO, and not the updated condition price from the condition record (transaction MEK1), which was changed after creation of the PO.
    Please advise how the accounting document can consider the updated price (MEK1) as of the date the GR is created.
    Thanks and Regards,
    Pratap Mukund Shetty

    Hi Pavan,
    You are right, but we are unable to save the PO with price date category 5, as the gross price is PBXX (manual entry). Our requirement concerns the additive cost conditions we maintain for overheads in MEK1; the pricing schema shows that they are picked up in the PO. If these condition prices are changed, the GR accounting document should consider the updated price at the time of GR, but presently it picks up the previous price from the PO.
    If PB00 is maintained and picked from the info record, then the price date category works. But most of our POs have PBXX (manual entry). How do we address this?
    Regards,
    Pratap

  • What considerations should be taken into account while creating a MultiProvider?

    Hi all,
    What considerations/prerequisites should be taken into account before creating a MultiProvider? I am creating a MultiProvider with four ODS objects, and each ODS is derived from a different data source (R/3, etc.).
    Can anyone let me know about this?
    Regards
    Hari

    Hello Hari,
    please have a look at the document
    BI Data Modeling: MultiProviders and InfoSets
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/2f5aa43f-0c01-0010-a990-9641d3d4eef7
    It is a good guideline for using virtual InfoProviders like a MultiProvider.
    Regards
    Sascha

Maybe you are looking for

  • PF Tech. wage types

    Hi, will the following wage types populate in RT during the payroll run? Is it necessary to populate them? If not populated, what happens? /3F1 Ee PF contribution, /3F2 Ee VPF contribution, /3F3 Er PF contribution, /3F4 Er Pension contribution

  • FRM-41211 SSL error

    When I'm running the reports from a form under Forms 6i, it gives an error like FRM-41211: Integration error: SSL failure running another product. Please give me the solution.

  • Message "Do you mean to switch apps? Reader is trying to open Internet Explorer" in blue bar. Why

    First-time user. Instructions on how to use it are not clear. My problem is that the message "Do you mean to switch apps? Reader is trying to open Internet Explorer?" suddenly appeared in a blue bar. It occurs when I click on links in a PDF. What caused this? How

  • Can I disable broken headphone jack? and get sound to internal speakers?

    HP G72T-200 CTO Notebook PC, Windows 7. My laptop has a broken headphone jack (a piece of plastic fell out of it) and thinks there is something plugged into the jack, so it won't default to the internal speakers. I've rolled back and reloaded drivers, run audi

  • Installation of JRE-1_4_2-linux-i586.rpm: Linux RH9

    I am kind of a noob at this, so this might be a really simple question. I just recently installed JRE 1.4.2; after following all the processes on the installation page it said this: Preparing packages for installation... j2re-1.4.2-fcs [root@localhost