Need to write a formula in ICM to strip some PV data, but not all...

In earlier steps of the ICM call flow, we set PV6 with some call data, for example Parts^(DateTime)^N^N.  The caret is just a separator.
In a later step, we want to change Parts to Sales, but we don't want to delete the datetime and ^N^N.  How can I write a formula that strips the first part of the call variable, leaves the remainder, and inserts the new portion "Sales"?
ICM 7.5
CVP 7.0
I'm racking my brain on this one here.
Thanks all!
Cheri

Try setting PV6 to:
concatenate("Sales",right(Call.PeripheralVariable6,(len(Call.PeripheralVariable6) - find("^",Call.PeripheralVariable6))))
The find function gives you the character position of the first occurrence of whatever you specify, so combining "Sales" with the rightmost characters of the PV should do the trick. To work out how many rightmost characters to keep (since the Parts portion is variable length), take the total length minus the position of the first "^".
I haven't tested it out, so you may need a +1 or a -1 in there somewhere before the last two parentheses to get the exact right length.
Update: I just tried it out in Excel. As posted, it will not include the first "^", so you'll either need to have:
concatenate("Sales^",.....
or
concatenate("Sales",...))+1))

Similar Messages

  • On 3G we cannot send emails from either our iPhones or iPad and this also happens with some WiFi connections but not all.  Yet we can always send emails from our Hotmail Email account.  What is causing this and what do we need to do to resolve it?

    On 3G we cannot send Business emails from either our iPhones or iPad, and this also happens with some WiFi connections but not all.  Yet we can always send emails from our Hotmail account using both 3G and WiFi.
    We bought the iPhones and iPad so that we could send emails while out of the office, but we are not able to do this unless we can find a WiFi connection. Incoming emails are fine.  We use IMAP for the Business emails, in case this is relevant, and I know that Hotmail is POP3.
    Our technical IT knowledge is not great, so we look forward to your suggestions as to how to resolve this.

    Contact whoever supports the email account and get the correct outgoing email server settings from them.

  • I am batch processing in PS 2014 (watermark and saving as jpeg from ps file). I get the message for some but not all 'this file needs to be saved as a copy with this option'. And then I have to save it manually. Does anyone know why this happens?

    I am batch processing in PS 2014 (watermarking and saving as JPEG from a PS file). For some files, but not all, I get the message 'this file needs to be saved as a copy with this option', and then I have to save it manually. Does anyone know why this happens? (It is just a plain Photoshop file; a watermark is added, then it is saved as a JPEG, and the JPEG is saved to a different folder than the original Photoshop file.) It happens for roughly 10 of 30-40 files. Thank you, Kathryn

    I believe I have figured it out: I need to flatten the image first, even though there are no layers except Layer 0.

  • I downloaded iOS 6 for my iPad 2 and some of the upgrades went through, but not all. It sounds like I need to reinstall iOS 6. How do I do that? Every time I try to upgrade it says I'm already updated.

    If you mean Siri, then that is only on the iPad 3; it is not on the iPad 2 in iOS 6 (possibly because the iPad 2 doesn't have the Audience chip which the iPad 3 does). Are there other things that you think are missing (YouTube has been removed, and Passbook is iPhone and iPod Touch only)?

  • Need to send some notifications by mail (not all notif)

    Hi!
    We run Workflow embedded in EBS (11.5.10.2, 4RUP). We have a custom workflow that uses a Role (WF_LOCAL_ROLES) connected to the employee register (ORIG_SYSTEM = 'PER' and orig_system_id = person_id in per_all_people_f). We get the Role in the DB package that starts the workflow and set it on the Recipient attribute.
    I assume there are standard notifications (WF) that are sent to these roles. The roles have notification_preference = 'SUMMARY' because the administrators/users do not want the system to send many e-mails, just a summary e-mail, and users should check their notifications in Applications.
    But... and here is the but... our specific custom workflow creates a notification that we would like to send as an e-mail. What can we do?
    The mails should be sent to users/employees individually. I mean the e-mail is constructed specifically per employee, so it is not a good idea to create a Role that connects many users or something like that. Can we create new roles, one per user/employee, with notification_preference = 'MAILHTML' (or another mail preference)?
    Would this work without interfering with the other role? I need help understanding this functionality and how to get it to work. Thank you in advance!
    Regards,
    Patricia

    Hi,
    The only way that I can think of to do this would be to create an ad-hoc user at runtime which has an email preference set to MAILHTML (or any other preference that sends an email), and an email address set to the email address of the recipient. This would then send an email to the employee, and when the emails have been sent you can purge the ad-hoc directory so that you don't end up with lots of obsolete roles in the database.
    The downside of this is that there is no way for anyone to see the notification from within eBS. Because it isn't sent to the employee, it would not appear in their worklist, which may or may not be acceptable to the business. One possible workaround would be to create another ad-hoc role, assign both the employee user and the ad-hoc user to the role, and expand the role on the notification so that multiple copies are sent (one to each member of the role). If the notification requires a response, then you need to write a post-notification function to deal with the possibility of multiple responses, which makes the solution more complex still.
    HTH,
    Matt
    WorkflowFAQ.com - the ONLY independent resource for Oracle Workflow development
    Alpha review chapters from my book "Developing With Oracle Workflow" are available via my website http://www.workflowfaq.com
    Have you read the blog at http://thoughts.workflowfaq.com ?
    WorkflowFAQ support forum: http://forum.workflowfaq.com
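    For reference, a minimal sketch of the ad-hoc user approach described above, using the standard WF_DIRECTORY API (the user name and e-mail address here are made up, and the parameter list is abbreviated, so verify the exact signature against your Workflow version):
        DECLARE
          -- hypothetical naming scheme for the throwaway directory entry
          l_user  VARCHAR2(320) := 'XX_NOTIF_' || TO_CHAR(SYSDATE, 'YYYYMMDDHH24MISS');
          l_disp  VARCHAR2(360) := l_user;
        BEGIN
          -- create an ad hoc user whose notification preference forces an e-mail
          wf_directory.CreateAdHocUser(
            name                    => l_user,
            display_name            => l_disp,
            notification_preference => 'MAILHTML',
            email_address           => 'employee@example.com');
          -- then set it as the Recipient attribute before launching the process, e.g.:
          -- wf_engine.SetItemAttrText(itemtype, itemkey, 'RECIPIENT', l_user);
        END;
        /
    Once the notifications have been e-mailed, purging the ad hoc directory (as Matt notes) keeps obsolete entries from accumulating.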

  • DAX error "Formula is invalid" when updating underlying data, but not changing the formula?

    Hello,
    first of all: I use the 64-bit versions of Excel 2010 and PowerPivot on Windows Server 2008 R2.
    I use this formula to calculate the median of my data:
    MINX( FILTER( VALUES( TableName[ColumnName] ),
                  CALCULATE( COUNTROWS( TableName ),
                             TableName[ColumnName] <= EARLIER( TableName[ColumnName] ) )
                    > COUNTROWS( TableName ) * 0.5 ),
          TableName[ColumnName] )
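    For reference, the filter condition implements "the smallest value whose cumulative row count exceeds half of the total row count", which is the usual DAX median pattern. The same counting idea, sketched with standard SQL window functions against a hypothetical sample_data(val) table, just to make the logic concrete:
        SELECT MIN(val) AS median
        FROM  (SELECT val,
                      COUNT(*) OVER (ORDER BY val) AS rows_at_or_below,  -- plays the role of the CALCULATE/EARLIER row count
                      COUNT(*) OVER ()             AS total_rows         -- plays the role of COUNTROWS( TableName )
               FROM   sample_data)
        WHERE  rows_at_or_below > total_rows * 0.5;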
    The data comes from a view on an MS SQL Server and has about 3.5 million rows. With one dataset ("dataset 1"), everything is working out fine and VERY fast ;-). When I change the view on the SQL server to filter for different data ("dataset 2", the result set still containing about 3.5 million rows) and update the PowerPivot data and then the pivot table, the status bar reads "Executing OLAP query...", excel.exe utilizes one CPU core at 100% for a long time, and its memory usage increases significantly, but nothing happens. If I interrupt that process by pressing Esc, I get the following error (original German text included):
    ============================
    Fehlermeldung: (Error message:)
    ============================
    Ausnahme von HRESULT: 0x800A03EC (Exception from HRESULT: ...)
    Das Median-Feld konnte der PivotTable nicht hinzugefügt werden, weil die Formel ungültig ist. (Could not add the field "Median" to the PivotTable because the formula is invalid.)
    ============================
    Aufrufliste: (Stack trace:)
    ============================
    Server stack trace:
    Exception rethrown at [0]:
       bei System.Runtime.Remoting.Proxies.RealProxy.HandleReturnMessage(IMessage reqMsg, IMessage retMsg)
       bei System.Runtime.Remoting.Proxies.RealProxy.PrivateInvoke(MessageData& msgData, Int32 type)
       bei Microsoft.Office.Interop.Excel.PivotTable.AddDataField(Object Field, Object Caption, Object Function)
       bei Microsoft.AnalysisServices.Modeler.FieldList.ExcelInterOpUtil.AddToDataFields(ICalculatedMember calculatedMember, Int32 positionIndex, Boolean isSpecialColumnBasedNamedSetPresent)
       bei Microsoft.AnalysisServices.Modeler.FieldList.ExcelInterOpUtil.AddToDataFields(ICalculatedMember calculatedMember, Int32 positionIndex, Boolean isSpecialColumnBasedNamedSetPresent)
       bei Microsoft.AnalysisServices.Modeler.FieldList.GeminiPivot.SetFieldOrientation(FieldLocation location, IGeminiColumn column, Int32 positionIndex)
       bei Microsoft.AnalysisServices.Modeler.FieldList.GeminiPivot.SetFieldOrientation(FieldLocation location, IGeminiColumn column, Int32 positionIndex)
       bei Microsoft.AnalysisServices.Modeler.FieldList.GeminiPivot.AddField(IGeminiColumn column, Int32 index)
       bei Microsoft.AnalysisServices.Modeler.FieldList.FieldListControl.fieldsTreeView_AfterCheck(Object sender, TreeViewEventArgs e)
    ============================
    I have analyzed the two datasets for differences and found the following:
    Dataset 2 also has negative values in its ColumnName column; dataset 1 does not. Filtering dataset 2 (by changing the view on the SQL server) so that the column contains only positive values does not help.
    A displayed column in dataset 2 contains text with square brackets in it. Changing the SQL view to replace them with an empty string (replace(column2, '[', '')) does not help.
    I do not know what else to try. Can anybody help me? The two datasets are very large, but if anyone can recommend how to export them at a reasonable size, I can make them available.
    Best regards
    Michael

    Hello Javier,
    I use a new measure (button "New Measure" in the toolbar or option in the context menu of the table). Here are the data samples:
    Dataset 1:
    Probability    RThreshold    SThreshold    vector    value
    0    -1    -1    DHTTestApp: Total GET Success Ratio    0.98
    0    -1    -1    DHTTestApp: Total GET Success Ratio    1
    0    -1    -1    DHTTestApp: Total GET Success Ratio    1
    0    -1    -1    DHTTestApp: Total GET Success Ratio    1
    0    -1    -1    DHTTestApp: Total GET Success Ratio    1
    0    -1    -1    DHTTestApp: Total GET Success Ratio    0.98
    0    -1    -1    DHTTestApp: Total GET Success Ratio    1
    0    -1    -1    DHTTestApp: Total GET Success Ratio    1
    0    -1    -1    DHTTestApp: Total GET Success Ratio    1
    0    -1    -1    DHTTestApp: Total GET Success Ratio    1
    Dataset 2:
    Probability    RThreshold    SThreshold    vector    value
    0    -1    -1    [MKTBR] BaseOverlay: All nodes: Own routing trust value    0.011353711790393
    0    -1    -1    [MKTBR] BaseOverlay: All nodes: Own routing trust value    0.20522161505768
    0    -1    -1    [MKTBR] BaseOverlay: All nodes: Own routing trust value    0.12309191295875
    0    -1    -1    [MKTBR] BaseOverlay: All nodes: Own routing trust value    0.26926457661881
    0    -1    -1    [MKTBR] BaseOverlay: All nodes: Own routing trust value    0.1911946574326
    0    -1    -1    [MKTBR] BaseOverlay: All nodes: Own routing trust value    0.066699727250186
    0    -1    -1    [MKTBR] BaseOverlay: All nodes: Own routing trust value    0.32597014925373
    0    -1    -1    [MKTBR] BaseOverlay: All nodes: Own routing trust value    0.11977454203852
    0    -1    -1    [MKTBR] BaseOverlay: All nodes: Own routing trust value    0.24751410911045
    0    -1    -1    [MKTBR] BaseOverlay: All nodes: Own routing trust value    0.076218041485769
    In both datasets, Probability ranges from 0 to 1 and both thresholds from -1 to 1. In dataset 1, value ranges from 0 to 1; in dataset 2, from -1 to 1.
    Thanks for your help!

  • Select formula works with one set of data but not another??

    Post Author: rkckjk
    CA Forum: Formula
    I have the following Select formula:
    {Sheet1_.Assign Group History} = '{"COMPUTER OPERATIONS"}' and
    {Sheet1_.Resolved By Group} = "COMPUTER OPERATIONS"
    that I'm using to select records from an Excel spreadsheet. It works fine with the Jan. 2008 spreadsheet file (Test MTTR Jan 2008.xls) but not with the Feb. 2008 spreadsheet (Test MTTR Feb 2008.xls). When I use the Feb. 2008 file I get this error:
    "Query Engine Error: 'Error code: 0x800a06ff Source: DAO.Recordset Description: this expression is typed incorrectly, or it is too complex to be evaluated.'"
    Basically both Excel files are in the same format but have different data in them. I don't understand why Jan's works but not Feb's.
    Since I can't upload the two files without contacting the system administrator, maybe someone can help me debug the error code, or suggest another method or idea for debugging this error?

    Post Author: sharonmtowler
    CA Forum: Formula
    Did you try each sheet while removing one of the select statements? I would start there: try Feb. with no select statement, then add one in, then the next,
    and see what is causing your problem. It sounds like some data in the xls file is in an incorrect format.

  • How do I write logic to generate an IDoc for several infotypes when I change a single infotype?

    Hi experts,
    I have a requirement: when I change any infotype record from PA30 or PA40, i.e.
    suppose I am changing infotype 0002 in PA30, the IDoc is generated only for that changed infotype,
    but I need it to generate the IDoc for some other infotypes as well, i.e. (0000, 0002, T528T, 0016).
    So please give me some help with generating the IDoc when I run RBDMIDOC (BD21).
    Thanks in advance.
    Venkat 

    Yes, visible bounds is reading the non-visible masked objects too.
    You're going to have to do it the hard way: loop through all your objects to get your bounds manually, and while you're at it, test for clipping masks and use the masking path instead.

  • I need to upload a pdf with more than 500 fields, but not all need to be registered.

    We are working on a PDF to send to our customers. The form has more than 500 fields, but we only need to register some of them; the other fields will be filled in the PDF, saved, and sent back to us. Is there any way to do that?
    Thanks in advance.

    I am sorry, but there is no way to designate some fields for online collection and others not.
    Andrew

  • Need to use Group By but only want to group some of the columns not all

    Hello all! I am having some issues here. I am rather new to SQL and I am getting stuck with grouping. I have the query below, but I only want to group by these columns instead of all the columns in my select statement:
    ah.fund,
    ah.dept,
    ah.org,
    ah.acct,
    t.fund,
    t.dept,
    t.org,
    t.acct
    This will eventually go into Oracle Reports Builder. Is there any way I can achieve this at all? The query will return all the t rows for a given time period, but they need to be grouped by the fully qualified account number, which consists of the fund, dept, org and acct columns.
    Thanks in advance!
    SELECT ah.fund,
         ah.dept,
         ah.org,
         ah.acct,
         LPAD(ah.fund,3,0)||LPAD(ah.dept,2,0)||LPAD(ah.org,4,0)||SUBSTR(ah.acct,1,2) acct_no,
         LPAD(ah.fund,3,0)||LPAD(ah.dept,2,0)||LPAD(ah.org,4,0)||ah.acct acct_no1,
         t.fund,
         t.dept,
         t.org,
         t.acct,
         t.ACTIVITY_DATE,
         t.TYPE,
         t.AMT,
         t.description,
         t.TRANS_NO,
         t.RECEIPT_NO,
         DECODE(t.PO_NO,NULL,t.JOURNAL_NO,t.PO_NO) J_NO,
         DECODE(t.WARRANT_NO,NULL,t.WIRE_NO,t.WARRANT_NO) W_NO,
         t.VENDOR_NO,
         v.name||' ' ||v.first_name name,
         MIN(ah.eod_date)
    FROM ah,
         t,
         v
    WHERE ah.fund BETWEEN SUBSTR(:p_acct_from,0,3) AND SUBSTR(:p_acct_to,0,3)
         AND ah.dept BETWEEN SUBSTR(:p_acct_from,4,2) AND SUBSTR(:p_acct_to,4,2)
         AND ah.org BETWEEN SUBSTR(:p_acct_from,6,4) AND SUBSTR(:p_acct_to,6,4)
         AND ah.acct BETWEEN SUBSTR(:p_acct_from,10,5) AND SUBSTR(:p_acct_to,10,5)
         AND FLOOR(ah.acct/10000) IN (6,8)
         AND SUBSTR(ah.acct,3) != '000'
         AND ah.eod_date BETWEEN :p_from_date-1 AND :p_to_date
         AND t.fund (+) = ah.fund
         AND t.dept (+) = ah.dept
         AND t.org (+) = ah.org
         AND t.acct (+) = ah.acct
         AND TO_DATE(t.activity_date, 'dd-mon-yy') >= TO_DATE(:P_FROM_DATE,'dd-mon-yy')
         AND TO_DATE(t.activity_date, 'dd-mon-yy') <= TO_DATE(:P_TO_DATE,'dd-mon-yy')
         AND t.type IN( 'PI','JE','PR','VD','VU','AC','AD')
         AND (
              (:p_year = TO_CHAR(CURRENT_DATE,'YYYY')
              AND (t.po_no IS NULL
              OR (select TO_CHAR(open_date,'YYYY') FROM r WHERE po_no = t.po_no ) = TO_CHAR(CURRENT_DATE,'YYYY') ) )
          OR ((select TO_CHAR(open_date,'YYYY') FROM r WHERE po_no = t.po_no ) = :p_year ))
    AND v.vendor_no (+) = t.vendor_no
    GROUP BY ah.fund,
         ah.dept,
         ah.org,
         ah.acct,
         t.fund,
         t.dept,
         t.org,
         t.acct,
         t.ACTIVITY_DATE,
         t.TYPE,
         t.AMT,
         t.description,
         t.TRANS_NO,
         t.RECEIPT_NO,
         DECODE(t.PO_NO,NULL,t.JOURNAL_NO,t.PO_NO),
         DECODE(t.WARRANT_NO,NULL,t.WIRE_NO,t.WARRANT_NO),
         t.VENDOR_NO,
         v.name||' ' ||v.first_name
    ORDER BY LPAD(ah.fund,3,0)||LPAD(ah.dept,2,0)||LPAD(ah.org,4,0)||SUBSTR(ah.acct,1,2),
         LPAD(ah.fund,3,0)||LPAD(ah.dept,2,0)||LPAD(ah.org,4,0)||ah.acct;

    In Reports Builder you can group the columns without having to group them in your query. This is also known as a break report, which contains multiple groups in its data model.
        +------------------------------+
        |             Q_1              |
        +------------------------------+
                       |
        +------------------------------+
        |        GRP_department        |
        |  dept_no                     |
        |  dept_name                   |
        +------------------------------+
                       |
        +------------------------------+
        |         GRP_employee         |
        |  emp_no                      |
        |  emp_first_name              |
        |  emp_last_name               |
        |  emp_middle_name             |
        |  emp_date_of_birth           |
        |  ...                         |
        +------------------------------+
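    Alternatively, if the grouping has to stay in the query itself, an analytic function can compute the MIN without grouping every selected column; a minimal sketch against just the ah table (the same window clause drops into the full join unchanged):
        SELECT ah.fund, ah.dept, ah.org, ah.acct,
               -- the analytic MIN returns the group minimum on every detail row,
               -- so no GROUP BY over the other selected columns is needed
               MIN(ah.eod_date)
                 OVER (PARTITION BY ah.fund, ah.dept, ah.org, ah.acct) AS min_eod_date
        FROM   ah;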

  • I need a set of APIs that work only in the Flash Player debugger version but not in the non-debug Flash Player

    My application does not use any Sampler API, trace, or getStackTrace methods, and still it's not working. My application makes HTTP calls to the server side.

    Thank you so much for your time!  Okay, I think I have what you need, if I understood it all correctly. Here goes...
    I just installed FP again; I did not get the video on the Adobe site showing that it was successful, even though the download said it was.
    I uninstalled using the Adobe uninstaller, then tried the install once more with the same result.
    The macromed Flash folder still has the same files as listed before:
    Flash10i.ocx
    FlashInstall (text doc)
    FlashUtil10i_ActiveX.dll
    FlashUtil10i_ActiveX
    install (text doc)
    In Control Panel I see:
    Adobe Download Manager
    Adobe Flash Player 10 ActiveX
    Adobe Reader 9.3.4
    Adobe Shockwave Player 11.5
    Showing & enabled in IE Add-ons:
    get_atlcom Class
    Shockwave Flash Object
    Adobe PDF Link Helper
    Adobe PDF Reader
    The Active X Control for Flash Player is not listed in IE
    Don’t see any relevant add-ons in Firefox. BTW, I don’t usually use FF and only downloaded it again because I’d read that FP was working for some people in FF when it wasn’t in IE.  So, if we don’t get FF working with FP I’m good with that.
    No NPSWF files in the Flash folder for me to right click.
    Can't give you any version numbers.
    Did I miss anything?

  • When is there going to be a select all/delete all option in email? I need to search for certain emails and delete them from my phone but not my email program (Outlook)

    When will there be a select all/delete all option in Emails? I have a large number of emails that come to my phone which I don't want to delete from my Outlook mail account (on my PC). However, I want to be able to search, select all, and delete these emails from my phone. I do not use iCloud either, as it causes issues with my calendars at work.
    These features should be standard, as there are plenty of other users out there trying to do the same thing, and as yet there has been no progress. The email on the iPhone is one of the biggest downfalls of the phone; BlackBerry and Android handle email much better, but I don't have an option with my phone at work. Please help!

    No one here would have any idea when or if such a feature would be introduced until and unless Apple announces it.

  • Need help in member formula

    Hi All,
    I need help with a member formula.
    I have two sparse dimensions, as below:
    Dim1:
    A
    --B
    --C
    Dim2:
    a
    ---b
    ---c
    d
    ---e
    ---f
    I need to write a member formula on C from Dim1: if the member is a parent-level member from Dim2, it should sum up its children's values against B from Dim1.
    For example, if the member is a parent-level member from Dim2, e.g. d:
    C->d = B->e + B->f
    Thanks in advance,
    Kiran
    Edited by: kirannch on Oct 16, 2012 4:16 PM

    Hi Tim,
    Thanks for your response.
    I'm using all HFM dimensions in Essbase except Cust2 and Cust4.
    In my outline, Account, Period and Year are dense dimensions and the rest are sparse.
    I'm comparing the data at PARENTTOT with USDTOT, Contr, as we are using flat members.
    The script below is not updating any parent-level value of the USDTOT combination. I'm running the aggregation with elim data excluded before executing this calc script.
    USDTOT and Contr are sparse dimension members, and Entity is also sparse.
    Please can you help with the scripts below?
    SET CALCPARALLEL 3;
    SET AGGMISSG OFF;
    SET FRMLBOTTOMUP OFF;
    SET CACHE HIGH;
    SET LOCKBLOCK HIGH;
    EXCLUDE ( "Elim")
    /Calculation "USDTOT" at Parent level of Entity Dimension with sum of children same parent entity with Contr member */
    SET UPDATECALC OFF;
    FIX("ACT","FY12")
    "USDTOT"(
    IF(NOT @ISLEV("ENTITY", 0));
    "USDTOT" = @SUM(@CHILDREN(@CURRMBR("ENTITY"->"Contr")));
    ENDIF;);
    ENDFIX;
    ENDEXCLUDE;
    Thanks in advance,
    Kiran
    Edited by: kirannch on Oct 24, 2012 7:38 PM

  • "Date Modified" for all files being changed if "Automatically write to XMP" is on

    I recently upgraded to LR3 (v3.4.1) from LR2 on OSX 10.6.8 and have always had Catalog Settings > Automatically write changes into XMP turned on.
    When browsing existing JPG files in my Library (no Develop changes, no keywording, no Presets, no Import), LR3 is writing to disk; i.e., when I look at files in Finder, almost every viewed file's "Date Modified" is being set to today's date and time. (It actually creates a .swp file, then changes its name back to .jpg.)
    This is really bad, as it makes it impossible for me to use Finder to figure out when I last worked with a file, it triggers needless Time Machine and Backblaze backups, and it unnecessarily churns my disk.
    If I turn off "Automatically write...", this behavior stops. Per David Marx at thelightroomlab.com, I tried turning off this preference, manually doing a "Save Metadata to File" for all files, letting that complete, then turning the preference back on. This does not solve the problem.
    Per a suggestion at photoshop.com, I used ExifTool to see what changes LR was writing to a sample file; from the diff below, you can see that LR is adding a bunch of new fields as well as moving other fields around. But my point is that LR3 should never overwrite a file on disk if all I am doing is browsing through it.
    Is anyone else seeing this? Any ideas would be greatly appreciated!
    -- David
    diff Exif5609_original Exif5609_update
    2c2
    < FileName: DSC_5609_original.JPG
    > FileName: DSC_5609_update.JPG
    5,6c5,6
    < FileModifyDate: 2009:11:27 21:32:54-08:00
    < FilePermissions: rwxr-xr-x
    > FileModifyDate: 2011:08:07 22:06:47-07:00
    > FilePermissions: rw-r--r--
    27a28,29
    > ShutterSpeedValue: 1/200
    > ApertureValue: 7.1
    55d56
    < SerialNumber: 3209521
    75d75
    < Lens: 18-200mm f/3.5-5.6
    185,188d184
    < UserComment:
    < SubSecTime: 00
    < SubSecTimeOriginal: 00
    < SubSecTimeDigitized: 00
    211a208,299
    > XMPToolkit: Adobe XMP Core 5.2-c004 1.136881, 2010/06/10-18:11:35
    > CreatorTool: Ver.1.00
    > MetadataDate: 2011:08:07 22:06:47-07:00
    > SerialNumber: 3209521
    > LensInfo: 18-200mm f/3.5-5.6
    > Lens: 18.0-200.0 mm f/3.5-5.6
    > ImageNumber: 26634
    > RawFileName: DSC_5609.JPG
    > SavedSettingsName: Import
    > SavedSettingsType: Snapshot
    > SavedSettingsParametersVersion: 6.4.1
    > SavedSettingsParametersProcessVersion: 5.0
    > SavedSettingsParametersWhiteBalance: As Shot
    > SavedSettingsParametersIncrementalTemperature: 0
    > SavedSettingsParametersIncrementalTint: 0
    > SavedSettingsParametersExposure: 0.00
    > SavedSettingsParametersShadows: 0
    > SavedSettingsParametersBrightness: 0
    > SavedSettingsParametersContrast: 0
    > SavedSettingsParametersSaturation: 0
    > SavedSettingsParametersSharpness: 0
    > SavedSettingsParametersLuminanceSmoothing: 0
    > SavedSettingsParametersColorNoiseReduction: 0
    > SavedSettingsParametersChromaticAberrationR: 0
    > SavedSettingsParametersChromaticAberrationB: 0
    > SavedSettingsParametersVignetteAmount: 0
    > SavedSettingsParametersShadowTint: 0
    > SavedSettingsParametersRedHue: 0
    > SavedSettingsParametersRedSaturation: 0
    > SavedSettingsParametersGreenHue: 0
    > SavedSettingsParametersGreenSaturation: 0
    > SavedSettingsParametersBlueHue: 0
    > SavedSettingsParametersBlueSaturation: 0
    > SavedSettingsParametersFillLight: 0
    > SavedSettingsParametersVibrance: 0
    > SavedSettingsParametersHighlightRecovery: 0
    > SavedSettingsParametersClarity: 0
    > SavedSettingsParametersDefringe: 0
    > SavedSettingsParametersHueAdjustmentRed: 0
    > SavedSettingsParametersHueAdjustmentOrange: 0
    > SavedSettingsParametersHueAdjustmentYellow: 0
    > SavedSettingsParametersHueAdjustmentGreen: 0
    > SavedSettingsParametersHueAdjustmentAqua: 0
    > SavedSettingsParametersHueAdjustmentBlue: 0
    > SavedSettingsParametersHueAdjustmentPurple: 0
    > SavedSettingsParametersHueAdjustmentMagenta: 0
    > SavedSettingsParametersSaturationAdjustmentRed: 0
    > SavedSettingsParametersSaturationAdjustmentOrange: 0
    > SavedSettingsParametersSaturationAdjustmentYellow: 0
    > SavedSettingsParametersSaturationAdjustmentGreen: 0
    > SavedSettingsParametersSaturationAdjustmentAqua: 0
    > SavedSettingsParametersSaturationAdjustmentBlue: 0
    > SavedSettingsParametersSaturationAdjustmentPurple: 0
    > SavedSettingsParametersSaturationAdjustmentMagenta: 0
    > SavedSettingsParametersLuminanceAdjustmentRed: 0
    > SavedSettingsParametersLuminanceAdjustmentOrange: 0
    > SavedSettingsParametersLuminanceAdjustmentYellow: 0
    > SavedSettingsParametersLuminanceAdjustmentGreen: 0
    > SavedSettingsParametersLuminanceAdjustmentAqua: 0
    > SavedSettingsParametersLuminanceAdjustmentBlue: 0
    > SavedSettingsParametersLuminanceAdjustmentPurple: 0
    > SavedSettingsParametersLuminanceAdjustmentMagenta: 0
    > SavedSettingsParametersSplitToningShadowHue: 0
    > SavedSettingsParametersSplitToningShadowSaturation: 0
    > SavedSettingsParametersSplitToningHighlightHue: 0
    > SavedSettingsParametersSplitToningHighlightSaturation: 0
    > SavedSettingsParametersSplitToningBalance: 0
    > SavedSettingsParametersParametricShadows: 0
    > SavedSettingsParametersParametricDarks: 0
    > SavedSettingsParametersParametricLights: 0
    > SavedSettingsParametersParametricHighlights: 0
    > SavedSettingsParametersParametricShadowSplit: 25
    > SavedSettingsParametersParametricMidtoneSplit: 50
    > SavedSettingsParametersParametricHighlightSplit: 75
    > SavedSettingsParametersSharpenRadius: +1.0
    > SavedSettingsParametersSharpenDetail: 25
    > SavedSettingsParametersSharpenEdgeMasking: 0
    > SavedSettingsParametersPostCropVignetteAmount: 0
    > SavedSettingsParametersGrainAmount: 0
    > SavedSettingsParametersLensProfileEnable: 0
    > SavedSettingsParametersLensManualDistortionAmount: 0
    > SavedSettingsParametersPerspectiveVertical: 0
    > SavedSettingsParametersPerspectiveHorizontal: 0
    > SavedSettingsParametersPerspectiveRotate: 0.0
    > SavedSettingsParametersPerspectiveScale: 100
    > SavedSettingsParametersConvertToGrayscale: False
    > SavedSettingsParametersToneCurveName: Linear
    > SavedSettingsParametersCameraProfile: Embedded
    > SavedSettingsParametersCameraProfileDigest: D6AF5AEA62557FCE88BC099788BBD3CC
    > SavedSettingsParametersLensProfileSetup: LensDefaults
    > SavedSettingsParametersToneCurve: 0, 0, 255, 255
    > IPTCDigest: d41d8cd98f00b204e9800998ecf8427e
    228,230d315
    < SubSecCreateDate: 2009:11:27 21:32:54.00
    < SubSecDateTimeOriginal: 2009:11:27 21:32:54.00
    < SubSecModifyDate: 2009:11:27 21:32:54.00
    http://feedback.photoshop.com/photoshop_family/topics/lr3_date_modified_for_all_files_being_updated_when_browsing_photos_if_catalog_settings_automatically_write_changes_into_xmp_is/replies/6313647

    clvrmnky wrote:
    davidpope007 wrote:
    Then when LR3 loaded my old LR2 images into memory, it "dirtied" the in-memory copy of the file by adding in these new LR3 XMP fields. Then, because I had "Automatically write XMP" on, it said "I better write these changes to disk".
    Yuck. As a former software engineer, this is very bad software engineering.
    It should wait until the user dirties the file (via Develop, keywords, etc.) before presuming to add a bunch of metadata fields that are unique to the new version of LR3.
    Well, I'm a current software developer, and this is, really, a perfectly reasonable thing to do. It is a reasonable trade-off for a convenient feature required by a small subset of users.
    Yes, in most cases the in-memory copy should "never" be dirtied unless the user makes a gesture of some sort, but like I said earlier, this option (once set by the user) sets up the situation where this gesture becomes implicit. This is a clear trade-off for the sake of convenience. And if the XMP is out of date and needs to be updated en masse, so be it.
    The fact is, there is no easy way around this. Do we save up /every/ dirty buffer somehow until you make a gesture that /might/ require the XMP to be up-to-date before acting on that gesture? Now we have to worry about unflushed buffers if something goes wrong and the app exits. Do we save the buffers to the DB? Now we have to block some calls to make another blocking call to flush some or all of those to DB, and then write some or all of it out to one or more files. In what order? What if there is a gesture to have X files with up-to-date XMP and some or all of those are in unflushed buffers, unflushed DB writes or we have to wait for the DB.
    As you can see, this is a transactionality nightmare, and the easiest and safest thing to get what the user wants (i.e., up-to-date XMP for the purpose of talking to a third-party XMP aware app) is to simply update the sidecar or XMP block in an atomic manner using the correct file IO. The file will have to change at some point, so it may as well be now.
    [Thanks to both of you for your detailed replies. I am aware of the need for tradeoffs, so when you say the approach taken is quite reasonable, I do believe you. I also apologize in advance for the length of the following and am extremely aware of the time it must have taken you to compose the above replies, but I'm going to add a bit more, if only for my own peace of mind and in hopes of coming up with a solution for my workflow.]
    From my naive point of view, I was expecting the answer to be simply "don't raise the XMPDirtyFlag upon reading in a file". Obviously if your architecture requires you to "upgrade to latest XMP format" upon read, and another part of the system auto-detects "out of date XMP", then it's going to write those changes to disk.
    But it didn't need to be designed that way. LR obviously has mechanisms to know when a user has made a change to XMP so it is able to write XMP changes to disk only when necessary.
    The promise (to me) of "Automatically Write XMP changes to disk" was to auto-save my changes, and not those made for any internal (i.e., XMP versioning related) changes.
    Perhaps the premise is that it is LR3's job to update an individual file's XMP to the latest version so that other XMP-aware apps can make use of it? I would argue that those third-party XMP-aware apps already have to know how to deal with all prior versions of XMP, so LR3 should just leave well enough alone.
    You asked if my problem with your approach was that it was "inelegant"; not at all, it is based on my own perception of what I need from my workflow, so let me describe that so maybe we can find a better way:
    * Part of the appeal of LR to me is that it preserves my original file as it came off the memory card, allowing me to move to a different workflow/toolset in 2025 if I choose to do so
    * However, with all of changes contained in a single database file, I'm concerned about rare (but possible) corruption, so to mitigate this risk, I let LR backup my database weekly and it's also backed up continuously in the cloud via Backblaze
    * Even with backups of the database, there is still a chance that I could lose changes made to individual files (e.g., LR corrupts the DB and I have to go back to last week's DB)
    * Thus the appeal of the "auto-write to XMP" flag -- that way critical changes (develop, crop, keywords) are saved on a per-file basis; I liked the "automatic" part of this (as opposed to a manual save) because then I don't have to teach others in my family how to manually save XMP changes
    * A nice side-effect of this setting is now when I use Finder to find a file and double-click on it to edit it in Photoshop, all my develop changes are right there; (in other words, I like the flexibility of not having to fire up LR in order to just invoke PS from within it); also when I use Bridge I see all the keywords there
    * So with LR2, I had gotten used to what I thought was the best of all worlds -- autosave of changes at the file level via XMP + raw negatives untouched (i.e., Date Modified == the date I took the picture); this allows me to use operating-system-level tools -- Finder -- to locate/search for files
    * Now I upgrade to LR3 and I'm finally now understanding that a concept "XMP versioning" is going to result in changes to many, but not all my files. (That's something else that's annoying about this issue -- I open up the Grid and browse a folder of files, and only seemingly random ones I've cursored over seem to get written to disk -- if it's so urgent that LR3 update the XMP, then it should do it for all the files in the catalog or at least in that directory)
    Here's a screenshot from Finder of what I see every time I look at this folder:
    * So now I have to assume that each new version of XMP and/or LR is going to touch my files on disk. Sigh.
    * What I don't like about this is that it is ruining the promise of "untouched raw negative". Yes, the image data is untouched -- which I agree is most of the benefit; but the file has been touched.
    * Perhaps you might empathize a bit more if you imagined that someone went through all your source code or Word files and randomly changed the date to "today" because you upgraded compilers or moved to Word 2011.
    I agree all of this would be solved by having an XMP sidecar file for JPGs, but you indicate that's not going to happen.
    You've also alluded to the solution of "resetting the Date Modified" to its original value (which I believe is what Finder does when you move or copy a file), but you indicate that is fraught with issues as well. I believe you when you say there are issues, but again the naive part of me wonders why that solution would be so bad...
    I just thought of another potential solution -- turning on Date Created in Finder -- but it turns out that's changed, too.
    I am really at a loss as to what to do and would welcome your suggestions.
    Thanks again and kind regards,
    -- David

  • FOX Formula works in IP Modeler but not in BEx Workbook

    Hi, I've created a FOX function to do a series of actions.
    If I test the function with a simple filter and execute with trace in Modeler I get exactly the results I want. For example, 5 rows read, 4 rows modified, 4 rows deleted.
    If I add the function as a button on a workbook associated with a data provider (input-ready query), I only get 5 rows read, but no rows modified or deleted.
    What is happening? Is there a way to know exactly why it is reading the data but not changing it as required? I think the function is reading the same set of data, because it finds 5 rows in both cases. Any ideas on how I can debug the function, or find out what kind of data the function is reading or not reading, to see why it doesn't execute the code?

    One final question. I am trying to use the sequence in the workbook. I want the variables used by the sequence (in the planning function) to take the values that I entered in the query when entering the workbook. I am using the same variables in both places.
    CMD                                         1 EXECUTE_PLANNING_SEQUENCE
    PLANNING_SEQUENCE_NAME 1 PL06_PS01
    VAR_NAME_1                           1 FISCY
    VAR_NAME_2                           1 Y_VERSION
    VAR_NAME_3                           1 YBP_FCTR_INT
    If I execute like this, I get error messages saying I am missing VAR_VALUE_1 and so on, but the sequence executes as I intended. What is happening? Do I need to add the VAR_VALUE lines? I don't think so... so why the error messages, if the sequence is running and working properly now?
    Additional points will be distributed for help on this issue.
