Question on LGF/LGX - Validation

Hello Experts,
From my understanding, there is an LGF (logic file) which is "compiled" into an LGX (the logic with all parameters resolved from the database, as per the *SELECT statements). We get the LGX file via "Validate and Save" in the application.
Is there any way to automate this?
In the "Admin Task" SSIS  task there is "Validate" option - would it also produce LGX with fresh parameters? Is there is an option when you run logic to use LGF and automatically compile it to LGX with refreshed data?
Please keep in mind that I'm not BPC application user and it is new to me.
Thanks in advance,
Akim

Hi Akim,
I'm not sure why you would need to automate this, as part of an administrative batch process.
I would either validate logic while I'm developing it -- manually, via the admin console -- or else I'd want it to be validated each time a user runs it. The latter can be done for batch-mode logic by calling the Logic.LGF file (instead of the Logic.LGX file) in the LOGICFILE parameter of the "advanced" script modification in the eData -> Organize Package List setup in Excel.
The second option allows you to take selections from the user's current view and pass them into the *SELECT statements and other parts of the logic file, to control the logic execution.
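As a rough illustration, the change amounts to one line in the package's modified script (a hedged sketch for BPC 5.x for Microsoft; only the LOGICFILE parameter name comes from the behavior described here, and the exact task name and path conventions may differ by version):
TASK(Execute formulas,LOGICFILE,Logic.lgf)
Everything else in the generated script stays as generated; the point is simply that the parameter references the .LGF rather than the .LGX.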
This second option, however, does not actually update the LGX file in the AdminApp\Application directory. The LGF file is validated, and the results are (I assume) stored somewhere in the logic engine's memory on the app server, but never written to an LGX file. (In fact, if you call the LGF file from the package, you can delete the LGX file entirely with no ill effects.)
Note that this validation-at-run-time option is not available for default logic. This limits the flexibility of what you can do in default logic, compared to batch-mode logic.
But to answer your question directly -- I have seen that "validate logic" admin task, but I've never used it. As a practical matter, the logic should validate exactly the same when it's run as part of this admin package as when you run it manually from the admin console. So I'm not sure what benefit that admin task would give.
And at least in BPC 5.1, the logic validation itself -- and the error messages that it does and does not provide -- is limited at best. It checks the syntax but often misses critical flaws in the logic that you'll only discover by either (1) scanning through the LGX file carefully, or (2) running the logic and analyzing the debug logs.
Hope that helps,
Tim

Similar Messages

  • XML Publisher question - Not generating a valid XML file

    I am working through an Oracle document that walks you through creating an XML Pub report. I am using HCM 8.9 and Tools 8.49.15, which has XML Pub installed, and I have the Microsoft plug-in installed.
    I have created a query, downloaded an RTF template, and am now on a page where you enter your data source and then click 'Generate' for the sample data file. I did so, and it created 'PERSONAL_DATA_PAY.XML', which is built from a PS Query. However, if I click on 'PERSONAL_DATA_PAY.XML', it generates a blocky text file that is not an XML file, and I can't go any further.
    Do you know why it's not generating a valid XML file when I click on 'Generate'?
    Thanks
    Allen H. Cunningham
    Data Base Administrator - Oracle/PeopleSoft
    Sonoma State University

    You mean to say that when you create a new data source, specifying Data Source Type as 'PS Query' and Data Source ID as your query name, you are not able to generate a valid XML file (by clicking on the Generate link).
    Did you cross-check your query by running it?
    On field change of the Generate link, PeopleSoft uses the PSXP_RPTDEFNMANAGER and PSXP_XMLGEN app packages and query objects to create the file.
    It should work if your query is valid.

  • Two EO questions (dynamic attrs and validation)

    Hello all (I really hope Steve M gets to see this),
    I am re-visiting an old project (fantasy) of mine to implement spell checking of certain fields in an Entity Object. The concept is that I will implement the spell checking in an overridden validateEntity method. The way I would like this to work is for spelling errors to be reported to the user when they try to commit; if the user commits again without making any changes, I will assume they want to ignore the spelling errors and will allow the Entity to commit. In order to implement this in a really cool and generic way, I need to be able to do a couple of things that have really stumped me.
    The first is in relation to entity validation. The doc states that any EO is assumed to be valid after it is read from the database. Is there any way that I can alter this behaviour so that the EO needs to be re-validated even if it has just been read from the database? Extra credit if this can be done in such a fashion as to not cause the EO's data to be re-posted to the database if the user doesn't make any changes. I have tried over-riding lots of things in the EntityImpl and EntityDefImpl, but I cannot find the right place to do this.
    The second thing I would like to do is to add dynamic attributes to an EO instance. The reason I want to do this is so that I can add the spell check code in a generic framework extension class and not have to modify each EO for this behaviour. The attribute I want to add is used to track whether spell checking has already been done on the EO row. I cannot use a java class member variable, as the same instance of the java class is not necessarily used across different web requests - in fact, I have found that the same instance is rarely-to-never re-used in my testing. I can make this work by adding a transient attribute to the EO itself, but this defeats my goal of a generic implementation. I have found some methods in EntityImpl (setDynamicAttributeValue and getDynamicAttributeValue), but they don't appear to do what I want. I've also played around a bit with the EntityDefImpl, but no luck there either.
    I do promise to post a blog entry detailing how to do this if it pans out. We have looked at some nice JavaScript-based spell checkers that work well (giving suggestions in a pop-up window), but we would like multiple fields on a web page to be spell-checked, and this is difficult to do with those spell checkers (particularly when you have an af:table with a varying number of rows).
    Any assistance is appreciated,
    John

    Your mechanism to manage css style classes is a good approach; I have used that many times. I do wonder why the style classes were implemented as a list, instead of a set, but there may be good use cases for using a list.
    In some cases you can also consider using CSS PseudoClasses, which were introduced in JavaFX 8. These are a bit easier to use, especially if you have only two options. A use case might look like:
    public class Message {
        public enum Status { NORMAL, WARNING, CRITICAL }
        private final ObjectProperty<Status> status = new SimpleObjectProperty<>(Status.NORMAL);
        private final StringProperty message = new SimpleStringProperty();
        // constructor, getters, setters, and property accessors....
    }
    public Label createLabel(Message message) {
        PseudoClass warning = PseudoClass.getPseudoClass("warning");
        PseudoClass critical = PseudoClass.getPseudoClass("critical");
        Label label = new Label();
        label.textProperty().bind(message.messageProperty());
        message.statusProperty().addListener((obs, oldStatus, newStatus) -> {
            label.pseudoClassStateChanged(warning, newStatus == Message.Status.WARNING);
            label.pseudoClassStateChanged(critical, newStatus == Message.Status.CRITICAL);
        });
        return label;
    }
    And then your CSS looks like
    .label:warning {
        -fx-text-fill: orange;
    }
    .label:critical {
        -fx-text-fill: red;
    }
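    To see it working end to end, here is a minimal usage sketch (the Application scaffolding is assumed, not part of the original answer; setMessage/setStatus stand in for the elided accessors on Message, and createLabel is assumed to be accessible from here):
    // Assumed harness: attach the stylesheet, then flip the status to restyle the label.
    public class MessageDemo extends Application {
        @Override
        public void start(Stage stage) {
            Message msg = new Message();
            Label statusLabel = createLabel(msg);   // the factory method above, assumed accessible
            Scene scene = new Scene(new StackPane(statusLabel), 300, 100);
            scene.getStylesheets().add("message.css");  // the CSS above, on the classpath
            stage.setScene(scene);
            stage.show();
            msg.setMessage("Disk almost full");
            msg.setStatus(Message.Status.WARNING);  // .label:warning applies, text turns orange
        }
        public static void main(String[] args) { launch(args); }
    }
    Because the listener toggles both pseudo-classes on every change, a later move to CRITICAL clears the warning state automatically.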

  • A question about license date validity during OS migration: AIX to Linux

    Dear all,
    I would appreciate it if you could kindly give us
    your advice about the following issue.
    We have an environment dedicated for training. The environment was
    initially installed on AIX platform. However, after a while the
    technical direction decided to migrate to Linux.
    The migration was done on 18 September 2009, and during this process
    that very date was chosen and designated by SAP as the
    valid start date of the new SAP license.
    Now, our problem is that the data programmed in this SAP environment
    (used for training) is not compatible with the current date. In
    order to make it work properly, the host system date must always be
    set to last year (this is because we unfortunately had some delay in
    the migration).
    It is fundamental to remark, that currently, we are absolutely not able to
    modify the data and the sole solution (although not the best one) to
    tackle this problem is to modify the operating system's date. This is
    precisely how we proceeded, but unfortunately we encountered a SAP license
    check problem during logon.
    As I explained, the migration from AIX to Linux was performed on
    18 September 2009, which means this was the date of the new license.
    Yet we need to go further into the past and choose 24th July 2008.
    On the operating system, of course, there is no problem and the
    administrator can set the host date as needed, but
    on the other hand, by choosing this date, we are no longer able to work
    within this SAP environment. At logon time, we receive a license
    check error, which is normal, because the host date (24th July 2008) is earlier
    than the start date of the SAP license (18 September 2009).
    Also, we cannot ask for another license, because we would have the same problem:
    SAP never creates a license dated to last year; they always create an
    up-to-date license.
    Could you kindly guide us how we can manage to solve this issue?
    Thanks in advance,
    Kind Regards,
    Dariyoosh

    I think you're better off opening an OSS message in this particular case.
    Regards
    Juan

  • TopLink - Best Practice Question - Object Validation

    We are fairly new to TopLink and have a question about object validation. We would like to perform some validation on objects that are about to be persisted to the DB. Since TopLink persists all objects that are reachable from the objects that are registered (I think), we would like to send a message to each of these objects (validate) before they are persisted. If the object validation fails, we will throw an exception (and the objects will not be persisted), hopefully such that the exception can be received by the client. Has anybody done anything similar to this before, or is there a better way of approaching this?
    TIA for any help.

    Are you using EJBs?
    If so, there are a couple of options that may work better for you.
    1. You could use the ejbStore method for your validation and throw the exception in there. You should be able to get the exception on the client side, but it may be wrapped in several layers.
    2. For more complex validation, you may want to use a session bean to control the validation process. It can interact with your entity beans and provide feedback about why validation fails.
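    A minimal sketch of option 1, assuming CMP entity beans (the bean class and the customerName field are hypothetical, not from this thread):
    import javax.ejb.EJBException;
    import javax.ejb.EntityBean;
    // Hypothetical CMP 2.x bean; the abstract accessor is backed by the container.
    public abstract class OrderBean implements EntityBean
    {
        public abstract String getCustomerName();
        // The container calls ejbStore before writing the bean's state to the DB;
        // throwing a system exception here aborts the store and rolls back.
        public void ejbStore()
        {
            if (getCustomerName() == null || getCustomerName().trim().length() == 0)
            {
                throw new EJBException("Validation failed: customerName is required");
            }
        }
        // remaining EntityBean callbacks omitted for brevity
    }
    As noted above, by the time it reaches the client the exception may be wrapped in several layers (e.g. a RemoteException), so unwrap accordingly.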

  • LR 4.4 (and 5.0?) catalog: a problem and some questions

    Introductory Remark
    After several years of reluctance, this March I switched to LR for its retouching capabilities. Unfortunately -- beyond enjoying some really nice features of LR -- I keep struggling with several problems, many of which have been covered in this forum. In this thread I describe a problem with a particular LR 4.4 catalog and ask some general questions.
    A few days ago I upgraded to 5.0. Unfortunately it turned out to be even slower than 4.4 (discussed, among other places, here: http://forums.adobe.com/message/5454410#5454410), so I fell back to the latter instead of testing the behavior of the 5.0 catalog. Anyway, as far as I understand, this upgrade does not include significant new catalog functions, so my problem and questions below may be valid for 5.0, too. Nevertheless, the incompatibility of the new and previous catalogs suggests a rewrite of the catalog-related parts of the code; I do not know the resulting potential improvements and/or new bugs in 5.0.
    For your information, my PC (running under Windows 7) has a 64-bit Intel Core i7-3770K processor, 16GB RAM, 240 GB SSD, as well as fast and large-capacity HDDs. My monitor has a resolution of 1920x1200.
    1. Problem with the catalog
    To tell you the truth, I do not understand the potential necessity for using the “File / Optimize Catalog” function. In my view LR should keep the catalog optimized without manual intervention.
    Nevertheless, when faced with the ill-famed slowness of LR, I run this module. In addition, I always switch on the "Catalog Settings / General / Back up catalog" function. The backup frequency I actually set depends on the circumstances -- e.g. the number of RAW (in my case: NEF) files, the size of the catalog file (*.lrcat), and the space available on my SSD. In case of need I delete the oldest backup file to make space for the new one.
    Recently I processed 1500 photos, occupying 21 GB. The "Catalog Settings / Metadata / Automatically write changes into XMP" function was switched on. Unfortunately I had to fiddle with the images quite a lot, so after processing roughly half of them the catalog file reached the size of 24 GB. Until this stage there had been no sign of any failure – catalog optimizations had run smoothly and backups had been created regularly, as scheduled.
    Once, however, towards the end of generating the next backup, LR sent an error message saying that it had not been able to create the backup file due to a lack of space on the SSD. I myself could still see 40 GB of empty space, so I re-launched the backup process. The result was the same, but this time I saw a mysterious new (journal?) file with a size of 40 GB… When my third attempt also failed, I had to decide what to do.
    Since I needed at least the XMP files with the results of my retouching operations, I simply wanted to save these side-cars into the directory of my original input NEF files on a HDD. Before making this step, I intended to check whether all modifications and adjustments had been stored in the XMP files.
    Unfortunately I was not aware of the realistic size of side-cars, associated with a certain volume of usage of the Spot Removal, Grad Filter, and Adjustment Brush functions. But as the time of the last modification of the XMP files (belonging to the recently retouched pictures) seemed perfect, I believed that all my actions had been saved. Although the "Automatically write changes into XMP" seemed to be working, in order to be on the safe side I selected all photos and ran the “Metadata / Save Metadata to File” function of the Library module. After this I copied the XMP files, deleted the corrupted catalog, created a new catalog, and imported the same NEF files together with the side-cars.
    When checking the photos, I was shocked: Only the first few hundred XMP files retained all my modifications. Roughly 3 weeks of work was completely lost… From that time on I regularly check the XMP files.
    Question 1: Have you collected any similar experience?
    2. The catalog-related part of my workflow
    Unless I am missing an important piece of knowledge, LR catalogs store a lot of data that I do not need in the long run. Having the history of recent retouching activities is useful to me only for a short while, so archiving every little step for a long time, with a huge amount of accumulated data, would be impossible (and useless) on my SSD. In terms of processing, what counts for me are the resulting XMP files, so in the long run I keep only them and get rid of the catalog.
    Out of the 240 GB of my SSD, 110 GB is available for LR. Whenever I have new photos to retouch, I take the following steps:
    create a ‘temporary’ catalog on my SSD
    import the new pictures from my HDD into this temporary catalog
    select all imported pictures in the temporary catalog
    use the “File / Export as Catalog” function in order to copy the original NEF files onto the SSD and make them used by the ‘real’ (not temporary) new catalog
    use the “File / Open Catalog” function to re-launch LR with the new catalog
    switch on the "Automatically write changes into XMP" function of the new catalog
    delete the ‘temporary’ catalog to save space on the SSD
    retouch the pictures (while keeping an eye on the due creation and development of the XMP files)
    generate the required output (TIF or JPG) files
    copy the XMP and the output files into the original directory of the input NEF files on the HDD
    copy the whole catalog for interim archiving onto the HDD
    delete the catalog from the SSD
    upon making sure that the XMP files are all fine, delete the archived catalog from the HDD, too
    Question 2: If we put aside the issue of keeping the catalog for purposes other than saving each and every retouching step (which I address below), is there any simpler workflow to produce only the XMP files and save space on the SSD? For example, is it possible to create a new catalog on the SSD, copying the input NEF files into its directory and re-launching LR 'automatically', in one step?
    Question 3: If this is not the case, is there any third-party application that would ease the execution of the relevant parts of this workflow before and/or after the actual retouching of the pictures?
    Question 4: Is it possible to set general parameters for new catalogs? In my experience most settings of the new catalogs (at least the ones that are important for me) are copied from the recently used catalog, except the use of the "Catalog Settings / Metadata / Automatically write changes into XMP" function. This means that I always have to go there to switch it on… Not even a question is raised by LR whether I want to change anything in comparison with the settings of the recently used catalog…
    3. Catalog functions missing from my workflow
    Unfortunately the above described abandoning of catalogs has at least two serious drawbacks:
    I miss the classification features (rating, keywords, collections, etc.). Anyway, these functions would be really meaningful for me only if they covered all my existing photos, which would require going back over 41k images to classify them. In addition, keeping all the pictures in one catalog would result in an extremely large catalog file, almost surely guaranteeing regular failures. Beyond that, due to the speed problem, tolerable conditions could be established only by keeping the original NEF files on the SSD, which is out of the question. Generating several 'partial' catalogs could somewhat circumvent this trap, but it would require presorting the photos (e.g. by capture time or subject), and by doing this I would lose the essence of having a single catalog covering all my photos.
    Question 5: Is it the right assumption that storing only some parts (e.g. the classification-related data) of catalog files is impossible? My understanding is that either I keep the whole catalog file (with the outdated historical data of all my ‘ancient’ actions) or abandon it.
    Question 6: If such ‘cherry-picking’ is facilitated after all: Can you suggest any pragmatic description of the potential (competing) ways of categorizing images efficiently, comparing them along the pros and contras?
    I also lose the virtual copies. Anyway, I am confused about where the retouching-related data of virtual copies is actually stored. On some websites one can find relatively old posts stating that the XMP file contains all information about modifying/adjusting both the original photo and its virtual copy/copies. However, when fiddling with a virtual copy I cannot see any change in the size of the associated XMP file. In addition, when I copy the original NEF file and its XMP file, rename them, and import these derivative files, only the retouched original image comes up -- I cannot see any virtual copy. This suggests that the XMP file does not contain information on the virtual copy/copies…
    For this reason, whenever multiple versions seem reasonable, I create renamed version(s) of the same NEF+XMP files, import them, and make some changes in their settings. I know, this is far from a sophisticated solution…
    Question 7: Where and how are the settings of virtual copies stored?
    Question 8: Is it possible to generate separate XMP files for both the originally retouched image and its virtual copy/copies and to make them recognized by LR when importing them into a new catalog?

    A part of my problems may be caused by selecting LR for a challenging private project, where image retouching activities result in bigger than average volume of adjustment data. Consequently, the catalog file becomes huge and vulnerable.
    While I understand that something has gone wrong for you, causing Lightroom to be slow and unstable, I think you are combining many unrelated ideas into a single concept and winding up with a mistaken idea. Just because your project is challenging does not mean Lightroom is unsuitable. A bigger than average volume of adjustment data will make the catalog larger (I don't know about "huge"), but I doubt bigger by itself will make the catalog "vulnerable".
    The causes of instability and crashes may have NOTHING to do with catalog size. Of course, the cause MAY have everything to do with catalog size. I just don't think you are coming to the right conclusion, as in my experience size of catalog and stability issues are unrelated.
    2. I may be wrong, but in my experience the size of the RAW file may significantly blow up the amount of retouching-related data.
    Your experience is your experience, and my experience is different. I want to state clearly that you can have pretty big RAW files that have different content and not require significant amounts of retouching. It's not the size of the RAW that determines the amount of touchup, it is the content and the eye of the user. Furthermore, item 2 was related to image size, and now you have changed the meaning of number 2 from image size to the amount of retouching required. So, what is your point? Lots of retouching blows up the amount of retouching data that needs to be stored? Yeah, I agree.
    When creating the catalog for the 1500 NEF files (21 GB), the starting size of the catalog file was around 1 GB. This must have included all classification-related information (the meaningful part of which was practically nothing, since I had not used rating, classification, or collections). By the time of the crash half of the files had been processed, so the actual retouching-related data (that should have been converted properly into the XMP files) might be only around 500 MB. Consequently, probably 22.5 GB out of the 24 GB of the catalog file contained historical information.
    I don't know exactly what you do to touch up your photos, and I can't imagine how you come up with the estimate that the size should be around 500 MB. But again, to you this problem is entirely caused by the size of the catalog, and I don't think it is. Now, having said that, some of your problem with slowness may indeed be related to the amount of touch-up that you are doing. Lightroom is known to slow down if you do lots of spot removal and lots of brushing, and then you may be better off doing this type of touch-up in Photoshop. Again, just to be 100% clear, the problem is not "size of catalog"; the problem is that you are doing so many adjustments on a single photo. You could have a catalog that is just as large (i.e. that has lots more photos with few adjustments), and I would expect it to run a lot faster than what you are experiencing.
    So to sum up, you seem to be implying that slowness and catalog instability are the same issue, and I don't buy it. You seem to be implying that slowness and instability are both caused by the size of the catalog, and I don't buy that either.
    Re-reading your original post: you are putting the backups on the SSD, the same disk as the working catalog? This is very poor practice; you need to put your backups on a different physical disk. That alone might help your space issues on the SSD.

  • Validation: Two items cannot be null or filled at the same time

    Hello forum members,
    I have a question about item level validation and was hoping someone would be able to shed some light on my problem.
    I'll go into a quick background of what the app does first so you will have an idea of what I'm trying to get at (or if you can suggest a better approach).
    Our team, on a regular basis, receives records that require analyzing. Over the years, we've compiled a pretty good list of possible analyses for any record. We used to do this by manually going into SQL Developer and changing the analysis in the analysis field, but that posed problems of spelling mistakes and other human errors.
    I've created a front-end application that populates a drop-down menu with all possible analyses from the table, thus limiting human input as much as possible. The problem here is that as things grow, there will always be a new problem that does not fit an analysis we have used in the past. So I would like to give the team the ability to use a text field to enter a new analysis for those cases (subsequent records can then find it in the drop-down menu).
    The two application items used in this problem is: p_select_list and p_text_field.
    In my first iteration of this application, I added an extra value in p_select_list called "Other". Some nifty JavaScript would check that if anything but "Other" was selected, p_text_field would be disabled (there was a problem with that where, on page load, the field would be enabled even though "Other" wasn't selected).
    We've been having some javascript issues while migrating our application from development to testing to production, so I would like to avoid javascript unless there really is no other option.
    What I'm trying to implement now is a page level validation on both p_select_list and p_text_field to test to make sure that either p_select_list is not null or p_text_field is not null.
    I know there are 3 cases to test for (both null, both not null, and only one null), so I used a "Function Returning Boolean" validation.
    I started off small and just had the following:
    if ((p_select_list is not null) and (p_text_field is not null)) then
    return true;
    else
    return false;
    end if;
    I then chose something in the select list and entered some garbage in the text field and hit submit. The validation worked like a charm!
    Now I added the both null check:
    if ((p_select_list is not null) and (p_text_field is not null)) then
    return true;
    elsif ((p_select_list is null) and (p_text_field is null)) then
    return true;
    else
    return false;
    end if;
    However, when I go into a record, enter something in the select list, and leave the text field blank, it throws the error at me. Then I tried both items null and that breaks too, even though the both-null branch should return true; so I can't even be sure my function is what's being run, since it failed the one case it wasn't supposed to.
    Do you know what I'm doing wrong here? Is there some way for APEX to ensure that only one of the two fields has a non-null value?
    Thank you in advance,
    Ivan

    Hi Ichin,
    I am a bit confused by your statement -
    if ((p_select_list is not null) and (p_text_field is not null)) then
        return true;
    elsif ((p_select_list is null) and (p_text_field is null)) then
        return true;
    else
        return false;
    end if;
    Here, this branch -
    if ((p_select_list is not null) and (p_text_field is not null)) then
        return true;
    - returns TRUE when both fields are not null.
    And this branch -
    elsif ((p_select_list is null) and (p_text_field is null)) then
        return true;
    - returns TRUE again when both fields are null. If you look closely, your function only returns FALSE when exactly one of the two fields is null, yet that is the very combination you want to accept. In other words, the branches are inverted relative to the rule you want (exactly one of the two items filled in).
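    A corrected sketch under that reading (same item names as your post; depending on where the validation runs, APEX may want the :P... bind-variable syntax for page items):
    -- pass only when exactly one of the two items is filled (an XOR)
    if ((p_select_list is not null) and (p_text_field is null)) then
        return true;
    elsif ((p_select_list is null) and (p_text_field is not null)) then
        return true;
    else
        return false;
    end if;
    The two TRUE branches now cover "value chosen from the list, no free text" and "free text entered, nothing chosen", and everything else (both empty, both filled) fails validation.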
    Hope this makes sense.
    Regards,
    M Tajuddin
    web: http://tajuddin.whitpeagesbd.com

  • Certificate validation when server is the same machine as client

    Hi guys, I realize this is the most talked-about question regarding JSSE: the validation of local certificates.
    I found a 2001 O'Reilly page that explains what JSSE is and gives a complete tutorial on it.
    It comes with a sample secure HTTP server and browser, and when I try to connect to the server with that browser it bombs out with the "couldn't find trusted certificate" error.
    Having read some posts here and googled around, I found out that this sometimes happens because the name on the signed certificate does not match the URL used to access the server.
    So, if the server and client are on the same machine (127.0.0.1) and my machine name is FJL, can someone explain how I should run the keytool?
    This is what i have been using:
    keytool -genkey -keystore certs -keyalg rsa -alias espectro -storepass serverkspw -keypass serverpw
    The keytool then prompted me for information to put into the certificate. My answers are shown after each prompt.
    What is your first and last name?
      [Unknown]: francisco leon
    What is the name of your organizational unit?
      [Unknown]: licom
    What is the name of your organization?
      [Unknown]: la universidad del zulia
    What is the name of your City or Locality?
      [Unknown]: maracaibo
    What is the name of your State or Province?
      [Unknown]: zulia
    What is the two-letter country code for this unit?
      [Unknown]: VE
    Is <CN=francisco leon, OU=licom, O=la universidad del zulia, L=maracaibo, ST=zulia, C=ve> correct?
      [no]: y
    The web server is found here: http://www.onjava.com/pub/a/onjava/2001/05/03/java_security.html?page=1 (through page=5 or so).
    Page 4 explains something:
    You may wonder what happens when you run SecureBrowser against SecureServer. It doesn't work. That's because SecureBrowser won't accept SecureServer's phony certificate. However, we can trick SecureBrowser into accepting SecureServer's certificate. Here's how:
    So I use the keytool again to do what is suggested:
    keytool -export -keystore certs -alias espectro -file server.cer
    then:
    keytool -import -keystore jssecacerts -alias espectro -file server.cer
    the jssecacerts file is created in the dir where I ran the keytool, so I copy it to c:\j2sdk1.4.1_02\jre\lib\security
    and finally I try to connect to the secure HTTP server found at that URL with the secure browser found there too, and I get the "couldn't find trusted certificate" error again.
    Could someone please explain to me how to fix this? The article is kind of old and lists some properties which I haven't been able to find, along with some .jar files (the article is dated before Java 1.4 was available), so maybe I am doing something wrong.
    Thanks in advance!

    OK, indeed: when keytool asks for my first and last name, I tried my machine name instead, and now it works.
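    For reference, the same key generation can be done non-interactively with the CN set to the machine name (FJL, per the original post); this sketch uses keytool's -dname option, with the other values copied from the earlier prompts:
    keytool -genkey -keystore certs -keyalg rsa -alias espectro -storepass serverkspw -keypass serverpw -dname "CN=FJL, OU=licom, O=la universidad del zulia, L=maracaibo, ST=zulia, C=VE"
    The client compares the certificate's CN against the host name it used to connect, so the browser must also address the server as FJL rather than 127.0.0.1. If the trusted-certificate error persists, pointing the JVM at the imported keystore explicitly with the standard JSSE system property -Djavax.net.ssl.trustStore=jssecacerts is worth a try.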

  • Validation - Business Rule or/and UJ_Validation

    Hi experts,
    I'm on BPC 7.5 NW, and I'm facing a problem constructing a simple validation where I need to compare the amount of one parent account against another. Let me explain the business scenario first and the technical solutions after.
    Business Scenario
    Check that Total Assets equals Total Liabilities. Total Assets is represented by parent account "1", Total Liabilities by parent account "2". If they differ, show a warning.
    This needs to be triggered after the Actual transactional data load + journals.
    Technical Solution
    Application: Legal
    Dimensions: Empresa (Entity), Conta (Account), Fonte (C_DataScr), Versao (C_Category), Groups, Intco, MesAno (Time), TipMov (Flow), CCusto (User Defined), CLucro (User Defined)   
    1 - Business Rule
    Validation Definition
    Validation Account: ZATIVO_X_PASSIVO
    Remark: Ativo x Passivo
    Validation Operand: =
    Other destination dimension members: CONTA=VALIDATIVPASS, INTCO=SPTOTAL, CLucro=ACTEDUMMY, CCusto=ACTENONE
    Validation Tolerance: 0
    Validation Rules
    Account 1: 1   Flow 1: TMTOTAL*   Sign 1: 1
    Account 2: 2   Flow 2: TMTOTAL*   Sign 2: 1
    Remark: Ativo x Passivo
    *The TMTOTAL flow is the parent of every member in the TipMov (Flow) master data.
    Validation.lgf
    *RUN_PROGRAM VALIDATION
        CATEGORY = %VERSAO_SET%
        CURRENCY = %GROUPS_SET%
        TID_RA = %MESANO_SET%
        OTHER = [ENTITY=%EMPRESA_SET%]//For More than one other scope parameters: OTHER = [ENTITY=%ENTITY_SET%;INTCO=%INTCO_SET%...]
    *ENDRUN_PROGRAM
    Result
    When I run with these parameters I receive the message: "UJP_PROCESS_EXCEPTION:Data for category  not found in application LEGAL"
    2 - Validation with UJ_Validation
    Assign the driver dimension to Legal - in this case I used CONTA (Account)
    Rule Maintenance
    Assigned Member: "1" and "2"
    Use Logic Table
    Dimension = Empresa (Entity)
    Operator "="
    Members = TECSA - This is a parent from every Entities.
    Result
    When I run with these parameters I receive the message: "UJP_PROCESS_EXCEPTION:Data for category  not found in application LEGAL"
    3 - Validation with UJ_Validation and BADI
    Assign the driver dimension to Legal - in this case I used CONTA (Account)
    Rule Maintenance
    Assigned Member: "1" and "2"
    Use BAdI Implementation
    BADI_UJ_VALIDATION_RULE_LOGIC
    Create a Enhancement ZATIVO_X_PASSIVO
    Filter
    Rule_Num = 1
    APPSET_ID = ZTECSA
    DIMENSION = CONTA
    Class
    METHOD if_uj_validation_rule_logic~do_validation_logic.
      FIELD-SYMBOLS:
                       <field1> TYPE ANY,
                       <field2> TYPE ANY.
      ASSIGN COMPONENT 'FIELD1' OF STRUCTURE is_data TO <field1>.
      ASSIGN COMPONENT 'FIELD2' OF STRUCTURE is_data TO <field2>.
      IF <field1> NE <field2>.
        es_message-message = 'Error in Validation'.
        es_message-recno = 1.
        es_message-MSGTY = 'W'.
      ENDIF.
    ENDMETHOD.
    And add this line to the script
    *START_BADI_UJ_VALIDATION_RULE_LOGIC~DO_VALIDATION_LOGIC
      QUERY = ON
      WRITE = ON
    *END_BADI
    Result
    Data Region:
    [WARNING!] NO MEMBER SPECIFIED FOR DIMENSION:CCUSTO WILL QUERY ON ALL BASE MEMBERS.
    [WARNING!] NO MEMBER SPECIFIED FOR DIMENSION:CLUCRO WILL QUERY ON ALL BASE MEMBERS.
    [WARNING!] NO MEMBER SPECIFIED FOR DIMENSION:CONTA WILL QUERY ON ALL BASE MEMBERS.
    [WARNING!] NO MEMBER SPECIFIED FOR DIMENSION:FONTE WILL QUERY ON ALL BASE MEMBERS.
    [WARNING!] NO MEMBER SPECIFIED FOR DIMENSION:INTCO WILL QUERY ON ALL BASE MEMBERS.
    [WARNING!] NO MEMBER SPECIFIED FOR DIMENSION:TIPMOV WILL QUERY ON ALL BASE MEMBERS.
    [WARNING!] MEASURES IS NOT SPECIFIED!
    So what can I do to make any of these options work as I need?
    I appreciate any help
    Best Regards
    Alexandre Mendoza Collepicolo

    Hi,
    Just to check, can you try hardcoding the category in the rules itself, just as a test to see if it is working?
    You can have the category mentioned as CATEGORY=ACTUAL in the rules itself, for the other source dimension members and other destination members.
    This is to check whether the validation package runs successfully or not.
    Thanks,
    Sreeni
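    Incidentally, the error text has a blank where the category ID should be ("Data for category  not found"), which suggests %VERSAO_SET% is resolving to empty when the package runs. A quick way to test that theory, in the spirit of the suggestion above, is a variant of Validation.lgf with the category hardcoded (a sketch only; ACTUAL is the example ID from the reply):
    *RUN_PROGRAM VALIDATION
        CATEGORY = ACTUAL //test only: hardcoded instead of %VERSAO_SET%
        CURRENCY = %GROUPS_SET%
        TID_RA = %MESANO_SET%
        OTHER = [ENTITY=%EMPRESA_SET%]
    *ENDRUN_PROGRAM
    If that runs clean, the problem is how the package passes the Versao selection, not the rule itself. The warnings from the BAdI route, by contrast, only say the data region is unscoped; *XDIM_MEMBERSET lines for CONTA, TIPMOV, INTCO, CCUSTO, CLUCRO and FONTE before the *START_BADI block would narrow the query and silence them.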

  • Oaf: putting validation in VO based on sql query

    Hi All,
    I need a few inputs for the customization of an OAF page.
    Business Need: Restriction of special character in supplier oaf pages for supplier name.
    There are 3 oaf pages involved in supplier creation/update for R 12.1.3
    1)     oracle\apps\pos\supplier\webui\OrganizationPG (underlying VO is sql query based).
    2)     oracle\apps\pos\supplier\webui\QuickUpdatePG (underlying VO is sql query based).
    3)     oracle\apps\pos\supplier\webui\SuppCrtPG (underlying VO is based on EO - HzPuiOrgProfileQuickEOEx) .
    For pages referring to an EO, we have extended the underlying EO and overridden the validateEntity() method to throw an OAAttrValException. After substitution, validation is working.
    For VOs based on a query, the standard AMs instantiate the VO and set the attributes using the RowImpl class.
    Question: How do we enforce validation logic in the VO or AM in such cases, where Oracle standard code is instantiating the VO dynamically?
    One possible solution is to use a PPR event on the input text message bean, catch it in a custom controller's processFormRequest method, and put the validation in before super.processFormRequest().
    Is there any other way to achieve it?

    VO extension will not work, as the standard controller code (oracle.apps.pos.supplier.webui.QuickUpdateCO) calls the AM's methods on the click of the Save button, and the AM dynamically invokes the methods on the VO.
    Standard Controller code:
    OAApplicationModule oaapplicationmodule = oapagecontext.getApplicationModule(oawebbean);
    if(oapagecontext.getParameter("btnSave") != null)
    oaapplicationmodule.invokeMethod("updateVendor");
    oaapplicationmodule.invokeMethod("updateVendorSites");
    oaapplicationmodule.getTransaction().commit();
    Corresponding standard AM code (as decompiled)...
    CallableStatement callablestatement;
    Exception exception;
    oadbtransaction = getOADBTransaction();
    connection = oadbtransaction.getJdbcConnection();
    Object obj = null;
    oaviewobjectimpl = (OAViewObjectImpl)findViewObject("SupplierVO");
    callablestatement = null;
    try
    callablestatement = oadbtransaction.createCallableStatement(" BEGIN savepoint upd_vndr_ab ; END;", 0);
    callablestatement.executeQuery();
    catch(SQLException sqlexception1)
    throw OAException.wrapperException(sqlexception1);
    finally
    if(callablestatement == null) goto L0; else goto L0
    if(callablestatement != null)
    try
    callablestatement.close();
    catch(SQLException sqlexception) { }
    break MISSING_BLOCK_LABEL_97;
    try
    callablestatement.close();
    catch(SQLException sqlexception2) { }
    throw exception;
    VendorsVORowImpl vendorsvorowimpl;
    label0:
    oaviewobjectimpl.reset();
    vendorsvorowimpl = (VendorsVORowImpl)oaviewobjectimpl.first();
    String s = vendorsvorowimpl.getPosModified();
    String s1 = vendorsvorowimpl.getHzModified();
    Number number = vendorsvorowimpl.getVendorId();
    int i = number.intValue();.....
    Extending the VO will not serve the purpose, as the handle will not be given to the extended class....
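    For what it's worth, a minimal sketch of the custom-controller approach from the question (the subclass name and the "SupplierName" item are hypothetical; only processFormRequest, the btnSave parameter, and QuickUpdateCO come from the posts above):
    import oracle.apps.fnd.framework.OAException;
    import oracle.apps.fnd.framework.webui.OAPageContext;
    import oracle.apps.fnd.framework.webui.beans.OAWebBean;
    // Hypothetical extension of the seeded controller; attach it via personalization.
    public class XxQuickUpdateCO extends oracle.apps.pos.supplier.webui.QuickUpdateCO
    {
        public void processFormRequest(OAPageContext pageContext, OAWebBean webBean)
        {
            if (pageContext.getParameter("btnSave") != null)
            {
                // "SupplierName" is an assumed item name -- adjust to the real one.
                String name = pageContext.getParameter("SupplierName");
                if (name != null && !name.matches("[A-Za-z0-9 .\\-]*"))
                {
                    // Raising here stops the seeded save path before the AM methods run.
                    throw new OAException("Supplier name contains special characters.",
                                          OAException.ERROR);
                }
            }
            super.processFormRequest(pageContext, webBean);
        }
    }
    The validation fires before the standard AM methods are invoked, which is the only reliable hook when the VO is driven dynamically, as the decompiled code above shows.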

  • Oracle Validated RPM - Which to download?

    I'm preparing a couple Linux x86_64 servers for building a RAC cluster and thought I'd give the Oracle Validated RPM a try.
    The specifics -
    Red Hat Linux RHEL 5.5 (kernel level 2.6.18-194)
    Oracle Grid Infrastructure release 11.2.0.2.2
    Oracle Database release 11.2.0.2.2
    I am not a member of OLN, I don't use the Oracle Unbreakable Kernel, and I don't use Oracle Enterprise Linux. (But I am worn out reading about all of these topics.)
    I go to this download site as directed by MOS Note 437743.1.
    http://oss.oracle.com/el5/oracle-validated
    where I am presented a list of these rpm files:
    Index of /el5/oracle-validated
    Name Last modified Size
    oracle-validated-1.0.0-5.el5.i386.rpm 02-Feb-2008 17:09 11K
    oracle-validated-1.0.0-5.el5.x86_64.rpm 02-Feb-2008 17:09 12K
    oracle-validated-1.0.0-5.el5.src.rpm 02-Feb-2008 17:09 12K
    readme 18-Jul-2008 14:55 198
    oracle-validated-1.0.0-18.el5.i386.rpm 30-Mar-2009 01:36 15K
    oracle-validated-1.0.0-18.el5.x86_64.rpm 30-Mar-2009 01:36 15K
    oracle-validated-1.0.0-18.el5.src.rpm 30-Mar-2009 01:36 23K
    oracle-validated-1.0.0-24.el5.i386.rpm 10-Jun-2010 07:07 22K
    oracle-validated-1.0.0-24.el5.x86_64.rpm 10-Jun-2010 07:07 22K
    oracle-validated-1.0.0-24.el5.src.rpm 10-Jun-2010 07:07 29K
    MD5SUMS 19-Aug-2010 17:12 658
    oracle-validated-1.0.0-22.el5.i386.rpm 19-Aug-2010 17:13 15K
    oracle-validated-1.0.0-22.el5.src.rpm 19-Aug-2010 17:13 24K
    oracle-validated-1.0.0-22.el5.x86_64.rpm 19-Aug-2010 17:13 16K
    oracle-validated-1.1.0-7.el5.x86_64.rpm 17-Nov-2010 11:03 23K
    oracle-validated-1.1.0-7.el5.src.rpm 17-Nov-2010 11:03 30K
    Question 1: Do I just grab the newest version of these EL5-based RPMs, in this case oracle-validated-1.1.0-7.el5.x86_64.rpm, or is there some other dependency on kernel level?
    Question 2: Does this Validated RPM configure the changes necessary for Grid Infrastructure as well as Database?
    Question 3: Are people pretty happy with the job this RPM does in configuring things properly?
    Thanks in advance for your insights.
    - tim

    Hi,
    1.) Yes, just take the newest one.
    2.) Yep, it should (some exceptions apply)*. Just make sure you have configured an automatic update server (yum), so that all dependent RPMs can be downloaded as well.
    For OEL you could use public-yum.oracle.com, but since you don't like OEL I am sure you know where to find a free RHEL one...
    3.) Definitely. I never go without it.
    * E.g. it does not configure/change the NTP configuration, even though that is a requirement for the 11gR2 GI install.
    Sebastian
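    To make answer 2 concrete, a sketch of the install step under those assumptions (a reachable yum repository for the dependencies; the filename is the newest one from the listing above):
    # yum resolves and installs the package's dependency chain from the configured repos
    yum localinstall oracle-validated-1.1.0-7.el5.x86_64.rpm
    Afterwards the kernel parameters, shell limits, and oracle user/groups that the package manages should be in place; NTP still has to be configured by hand, as noted.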

  • How are valid-from and valid-to dates set in TRPROD table in APO during CIF

    First question:
    How are the valid-from and valid-to dates set in the APO TRPROD table?
    For example, if I activate a transportation lane model on March 1, 2008, will the valid-from and valid-to in TRPROD be 20080301 and 99991231?
    Or are these dates defined in the model itself? Or in APO config somewhere?
    Next question:
    Suppose I rerun the model on May 1, 2008 to add some records that didn't get added in the March 1 run. But suppose I want to back-date the valid-from of these records to March 1, rather than use May 1 as the valid-from.
    Can this be done?
    Or is the valid-from always the date of the CIF that creates the TRPROD records?
    Please advise. 
    Thanks

    Hi Srinivas -
    Thanks very much for replying so quickly, but your response didn't answer the questions.
    Let me try again.
    1)
    If I run a CIF on March 1 2008 based on a model that will generate APO TLane data, will the TRPROD valid-from and valid-to get automatically set to 20080301 and 99991231?
    2) 
    If the answer to this question is yes, then is it also true that if I run a new set of records in on May 1, 2008 (no duplications with the original set), the valid-from and valid-to dates will be 20080501 and 99991231?
    3)
    But if (1) and (2) are both true (if you answer them both "yes"), then suppose I want the May 1 record set to have valid-from dates of March 1 (for example, if the May 1 record set includes records that didn't get included in the March 1 set due to an error in selection criteria). How do I get the valid-from to be March 1 when I run the CIF on May 1?
    On the other hand, if (1) and (2) are NOT true, then how DO the two dates get set during a CIF?
    Thanks
    djh

  • Characteristics validations in BPS

    I have a question related to characteristics validation in BPS.
    Is it true that the validation is not executed if any of the involved characteristics has an unassigned value?
    We have two characteristics, Account and "Trading partner", in the validation rule. We want to make sure the trading partner is assigned for certain intercompany accounts. We have a validation rule (exit type) that performs this check. But if the user does not input any value for the trading partner, the validation is not executed at all.
    I could use enhancement SEMBPS01 to perform the same validation.
    Thank you for your comments and advice.

    Ravi,
    Unfortunately, that's the way it works. But there is a reason for it, as Marc Bernard (Master of BPS) explained:
    "Yes, if one or more of the source characteristics is the initial value #, then the system does not execute derivation (exit or other type).
    The reason is that in those cases the selected record might be part of another aggregation level (which does not contain the initial characteristic)."
    For more information, take a look at the following post:
    SEM "EXIT" CHAR RELATIONSHIP not working
    hope it helps.
