APO DP relative dates in macros

Hi guys, perhaps someone could give input on this issue.
I have a macro that calculates a key figure for the future.
Part of the macro is based on the average of a different key figure over the past 12 months.
I have defined this as an AREA containing the relevant months.
The problem is that as the calculation runs for each future month, the area containing the 12-month average moves along with it. This is not what I want: I want the 12-month average to be relative to the current system date, not to the date the calculation happens to have reached.
Can this be done? How would you go about doing it?
I can see that my AREA has some checkboxes called "keyfig relative", but I can't seem to find any information on what these actually do. Might they be helpful in my case?
Regards
Simon Pedersen

Hi Simon,
There are different options to calculate a "non-moving" average.
The easiest, in my opinion, is the following:
1. Create a new step in your macro with only one time bucket (for example the initial bucket, or your first month)
2. Save the result of the average in a variable (use the function LAYOUTVARIABLE_SET())
You can then read the variable later on in your macro with LAYOUTVAR_VALUE().
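For illustration, a minimal sketch of those two steps in macro-builder pseudocode; the step layout, key-figure names, and the averaging expression are hypothetical, and only LAYOUTVARIABLE_SET() and LAYOUTVAR_VALUE() are the real functions named above:
Step 1 (one iteration, bucket = first/initial month only):
    LAYOUTVARIABLE_SET( 'AVG_12M' ; <average of the history key figure over the 12 months before today> )
Step 2 (iterations over all future buckets):
    'Future KF' = LAYOUTVAR_VALUE( 'AVG_12M' ) * <your factor>
Because step 1 executes in a single fixed bucket, the 12-month horizon it reads is anchored to the current date and does not move as step 2 walks through the future months.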
As I said, this is the easiest option, but there are some alternatives (store the information in an additional cell (not a row), or fix the horizon in the area).
I hope it will help you.
Thanks and Regards
Julien

Similar Messages

  • How to generate XML from relational data : PL/SQL or Java

    I'm new to Oracle XML and would appreciate some advice. I've been asked to generate XML documents from data stored in relational tables. The XML documents must be validated against a DTD. We will probably want to store the XML in the database.
I've seen a PL/SQL-based approach as follows:
1. Mimic the structure of the DTD using SQL object types
2. Assign the relational data to the object type using PL/SQL as required
3. Use the SYS_XMLGEN package to render the required XML documents from the SQL objects
    However, creating the object types seems to be quite time consuming (step 1 above) for anything other than the simplest of XML documents.
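For concreteness, a minimal sketch of steps 1-3 under hypothetical names (type emp_t, table emp); SYS_XMLGEN renders each object instance as an XML document:
create type emp_t as object (empno number, ename varchar2(30));
/
select sys_xmlgen(emp_t(e.empno, e.ename)).getClobVal()
from emp e
where rownum <= 3;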
I've also seen that there is the Java-based approach, namely:
    1. Use the XML generator to build Java classes based on a DTD.
    2. Use these classes to build the required XML
On the face of it, the Java-based approach seems simpler. However, I'm not that familiar with Java.
Which is the best way to proceed? Is the PL/SQL-based approach worth pursuing, or should I bite the bullet and brush up on my Java?
Is it possible to use a combination of PL/SQL and Java to populate the DTD-generated Java classes (step 2 of the Java approach) to reduce my learning curve?
    Thanks in advance

    To help answer your questions:
    1) Now, in 9iR2, you can use SQL/XML as another choice.
2) You can also use XSU to generate the XML and use XSLT to transform it to the desired format, instead of using object views if possible.
3) The XDK provides class-generator support to populate XML data into Java classes.
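For option 1, a minimal SQL/XML sketch, assuming hypothetical dept/emp tables; XMLELEMENT, XMLATTRIBUTES, XMLFOREST, and XMLAGG are the standard SQL/XML functions available from 9iR2:
select XMLELEMENT("Department",
         XMLATTRIBUTES(d.deptno as "id"),
         XMLELEMENT("Name", d.dname),
         (select XMLAGG(XMLELEMENT("Employee",
                          XMLFOREST(e.ename as "Name", e.sal as "Salary")))
            from emp e
           where e.deptno = d.deptno)).getClobVal() as dept_xml
from dept d;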

  • Any way to disable "Use Relative Dates" everywhere in Finder?

Is there any way to make "Use Relative Dates" NOT be on by default in Finder?
I just hate the yesterday, today, etc. crap. I want to just see the file dates. Period. I don't want to have to figure out what yesterday's date was to do a file-date comparison.
Yes, you can turn it off, but each time I check "Use as defaults" and open a new folder in Finder, the relative dates are back! Argggh!
    Thanks for any help on this.

Yes, it's very easy: with Finder open, go to View in the menu bar, choose Show View Options, and remove the check mark for "Use relative dates".

  • Standard IDOCS, Programs for posting FI & bank related data.

    Hi,
(1) Are there any IDocs available for posting FI documents and vendor master data?
(2) Are there any outbound IDocs or programs for sending data to banks, e.g. positive pay?
Kindly reply to these questions. Correct answers will be awarded points.
    Regards,
    Akshaya.

    Hi,
There is a message type BANK_CREATE for posting the FI-related bank details. Using change pointers, you can trigger the IDocs for posting the bank-related data.
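A hedged sketch of the usual change-pointer setup for an ALE message type (standard transactions; verify the details for your release):
BD61 - activate change pointers globally
BD50 - activate change pointers for message type BANK_CREATE
BD64 - maintain the distribution model for the receiving system
BD21 (or report RBDMIDOC, usually scheduled as a job) - create IDocs from the accumulated change pointers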
    Regards,
    Uday

  • PA_CONTRACT_XSLFO: How to invoke a RTF-template with related data template

    Dear Reader,
actually I want to extend the standard Document Type Layout for a Purchase Agreement Contract with additional data from the approved supplier list (ASL).
For this I have created an RTF template and a data template with the needed SQL statement. For testing I put these in a standalone concurrent program and it works fine (the result was a blue table with all data rows).
The next step was to invoke the RTF template from the PA_CONTRACT_XSLFO template to extend the Document Type Layout for my Purchase Agreement Contract. So I put the needed invoke statements
    <xsl:import href="xdo://XXOC.XX_RTF_TEMPLATE.de.00/"/>
    and
    <xsl:call-template name="XX_RTF_TEMPLATE"/>
into the XSLFO template. I also extended the RTF template with the template-definition statement
    <?template:XX_RTF_TEMPLATE?>
    So all seems to be fine.
As a result I get the standard document for the Purchase Agreement Contract with the additional blue table from the RTF template, BUT WITHOUT DATA!
From my point of view, the SQL statement in the data template is not being executed, but I don't know why.
Does Oracle support a combination of an XSLFO template with a data template?
    [XSLFO-template] with related [XSD-data definition]
    calls [RTF-template] with related [data template (with included sql-statement)]
    Thanks for your help.
    Best regards
    Mario.

To call an RTF template from another RTF template by passing a value, try creating in the main template a hyperlink URL with parameters for the other template:
http://bipconsulting.blogspot.ru/2010/02/drill-down-to-detail-or-another-report.html
If, when a user pulls a quote report from Siebel, this new RTF template should attach to the quote at the end, it will only be another report; IMHO you cannot attach it to the main one, it will be a second, independent report.
You can try a subtemplate, but that is not about calling an RTF from an RTF by click; it is about automatically calling an RTF subtemplate from the main RTF based on some conditions.
For example, the main template contains some data, and if some condition is true, the subtemplate is called and placed where its condition appears.
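As a hedged sketch of that conditional pattern, reusing the import and call-template statements quoted in the question (the test condition is hypothetical):
<xsl:import href="xdo://XXOC.XX_RTF_TEMPLATE.de.00/"/>
...
<xsl:if test="SOME_CONDITION">
  <xsl:call-template name="XX_RTF_TEMPLATE"/>
</xsl:if>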

  • Performance problems with XMLTABLE and XMLQUERY involving relational data

    Hello-
Is anyone out there using XMLTABLE or XMLQUERY with more than a toy set of data? I am running into serious performance problems trying to do basic things such as:
    * Combine records in 10 relational tables into a single table of XMLTYPE records using XMLTABLE. This hangs indefinitely for any more than 800 records. Oracle has confirmed that this is a problem and is working on a fix.
* Combine a single XMLTYPE record with several relational code tables into a single XMLTYPE record, using XMLQUERY and ora:view() to insert code descriptions after each code. Performance is 10 seconds for 10 records (terrible) when passing a batch of records, or 160 seconds for one record (unacceptable!). How can it take 10 times longer to process 1/10th the number of records? Ironically, the query plan says it will do a full table scan of records for the batch, but an index access for the one record passed to the XMLQUERY.
I am rapidly losing faith in XML DB, and desperately need some hints on how to work around these performance problems, or at least some assurance that others have been able to get this thing to perform.

    <Note>Long post, sorry.</Note>
    First, thanks for the responses above. I'm impressed with the quality of thought put into them. (Do the forum rules allow me to offer rewards? :) One suggestion in particular made a big performance improvement, and I’m encouraged to hear of good performance in pure XML situations. Unfortunately, I think there is a real performance challenge in two use cases that are pertinent to the XML+relational subject of this post and probably increasingly common as XML DB usage increases:
    •     Converting legacy tabular data into XML records; and
    •     Performing code table lookups for coded values in XML records.
    There are three things I want to accomplish with this post:
    •     Clarify what we are trying to accomplish, which might expose completely different approaches than I have tried
    •     Let you know what I tried so far and the rationale for my approach to help expose flaws in my thinking and share what I have learned
    •     Highlight remaining performance issues in hopes that we can solve them
    What we are trying to accomplish:
    •     Receive a monthly feed of 10,000 XML records (batched together in text files), each containing information about an employee, including elements that repeat for every year of service. We may need to process an annual feed of 1,000,000 XML records in the future.
    •     Receive a one-time feed of 500,000 employee records stored in about 10 relational tables, with a maximum join depth of 2 or 3. This is inherently a relational-to-XML process. One record/second is minimally acceptable, but 10 records/sec would be better.
    •     Consolidate a few records (from different providers) for each employee into a single record. Given the data volume, we need to achieve a minimum rate of 10 records per second. This may be an XML-only process, or XML+relational if code lookups are done during consolidation.
    •     Allow the records to be viewed and edited, with codes resolved into user-friendly descriptions. Since a user is sitting there, code lookups done when a record is viewed (vs. during consolidation) should not take more than 3 seconds total. We have about 20 code tables averaging a few hundred rows each, though one has 450,000 rows.
    As requested earlier, I have included code at the end of this post for example tables and queries that accurately (but simply) replicate our real system.
What we did and why:
    •     Stored the source XML records as CLOBS: We did this to preserve the records exactly as they were certified and sent from providers. In addition, we always access the entire XML record as a whole (e.g., when viewing a record or consolidating employee records), so this storage model seemed like a good fit. We can copy them into another format if necessary.
    •     Stored the consolidated XML employee records as “binary XML”. We did this because we almost always access a single, entire record as a whole (for view/edit), but might want to create some summary statistics at some point. Binary XML seemed the best fit.
    •     Used ora:view() for both tabular source records and lookup tables. We are not aware of any alternatives at this time. If it made sense, most code tables could be pre-converted into XML documents, but this seemed risky from a performance standpoint because the lookups use both code and date range constraints (the meaning of codes changes over time).
•     Stored records as XMLTYPE columns in a table with other key columns on the table, plus an XMLTYPE metadata column. We thought this would facilitate pulling a single record (or a few records for a given employee) quickly. We knew this might be unnecessary given XML indexes and virtual columns, but were not experienced with those and wanted the comfort of traditional keys. We did not use XMLTYPE tables or the XML Repository for documents.
    •     Used XMLTABLE to consolidate XML records by looping over each distinct employee ID in the source batch. We also tried XMLQUERY and it seems to perform about the same. We can achieve 10 to 20 records/second if we do not do any code lookups during consolidation, just meeting our performance requirement, but still much slower than expected.
•     Used PL/SQL with XMLFOREST to convert tabular source records to XML by looping over distinct employee IDs (see the sketch after this list). We tried this outside PL/SQL both with XMLFOREST and XMLTABLE+ora:view(), but it hangs in both cases for more than 800 records (a known/open issue). We were able to get it to work by using an explicit cursor to loop over distinct employee IDs (rather than processing all records at once within the query). The performance is one record/second, which is minimally acceptable but interferes with other database activity.
    •     Used XMLQUERY plus ora:view() plus XPATH constraints to perform code lookups. When passing a single employee record, the response time ranges from 1 sec to 160 sec depending on the length of the record (i.e., number of years of service). We achieved a 5-fold speedup using an XMLINDEX (thank you Marco!!). The result may be minimally acceptable, but I’m baffled why the index would be needed when processing a single XML record. Other things we tried: joining code tables in the FOR...WHERE clauses, joining code tables using LET with XPATH constraints and LET with WHERE clause constraints, and looking up codes individually via JDBC from the application code at presentation time. All those approaches were slower. Note: the difference I mentioned above in equality/inequality constraint performance was due to data record variations not query plan variations.
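For the conversion bullet above, a hedged sketch of the explicit-cursor XMLFOREST workaround, assuming a hypothetical legacy table EMP_LEGACY (RECORDS is the test table defined below):
DECLARE
v_doc xmltype;
BEGIN
-- loop per employee to stay under the >800-record hang
for r IN (select distinct emp_id from emp_legacy) loop
select xmlelement("Employee",
         xmlagg(xmlforest(l.svc_year as "Year", l.salary as "Salary")))
  into v_doc
  from emp_legacy l
 where l.emp_id = r.emp_id;
insert into records(ssn, xmlrec) values (r.emp_id, v_doc);
end loop;
commit;
END;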
    What issues remain?
    We have a minimally acceptable solution from a performance standpoint with one very awkward PL/SQL workaround. The performance of a mixed XML+relational data query is still marginal IMHO, until we properly utilize available optimizations, fix known problems, and perhaps get some new query optimizations. On the last point, I think the query plan for tabular lookups of codes in XML records is falling short right now. I’m reminded of data warehousing in the days before hash joins and star join optimization. I would be happy to be wrong, and just as happy for viable workarounds if I am right!
    Here are the details on our code lookup challenge. Additional suggestions would be greatly appreciated. I’ll try to post more detail on the legacy table conversion challenge later.
-- The main record table:
create table RECORDS (
SSN varchar2(20),
XMLREC sys.xmltype
) xmltype column XMLREC store as binary xml;
create index records_ssn on records(ssn);
-- A dozen code tables represented by one like this:
create table CODES (
CODE varchar2(4),
DESCRIPTION varchar2(500)
);
create index codes_code on codes(code);
    -- Some XML records with coded values (the real records are much more complex of course):
    -- I think this took about a minute or two
    DECLARE
    ssn varchar2(20);
    xmlrec xmltype;
    i integer;
    BEGIN
    xmlrec := xmltype('<?xml version="1.0"?>
    <Root>
    <Id>123456789</Id>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
</Root>');
    for i IN 1..100000 loop
    insert into records(ssn, xmlrec) values (i, xmlrec);
    end loop;
    commit;
    END;
    -- Some code data like this (ignoring date ranges on codes):
    DECLARE
    description varchar2(100);
    i integer;
    BEGIN
    description := 'This is the code description ';
    for i IN 1..3000 loop
    insert into codes(code, description) values (to_char(i), description);
    end loop;
    commit;
    end;
    -- Retrieve one record while performing code lookups. Takes about 5-6 seconds...pretty slow.
    -- Each additional lookup (times 3 repeating elements in the data) adds about 1 second.
    -- A typical real record has 5 Elements and 20 Subelements, meaning more than 20 seconds to display the record
    -- Note we are accessing a single XML record based on SSN
    -- Note also we are reusing the one test code table multiple times for convenience of this test
select xmlquery('
  for $r in Root
  return
    <Root>
      <Id>123456789</Id>
      {for $e in $r/Element
       return
         <Element>
           <Subelement1>
             {$e/Subelement1/Code}
             <Description>
               {ora:view("disaac","codes")/ROW[CODE=$e/Subelement1/Code]/DESCRIPTION/text()}
             </Description>
           </Subelement1>
           <Subelement2>
             {$e/Subelement2/Code}
             <Description>
               {ora:view("disaac","codes")/ROW[CODE=$e/Subelement2/Code]/DESCRIPTION/text()}
             </Description>
           </Subelement2>
           <Subelement3>
             {$e/Subelement3/Code}
             <Description>
               {ora:view("disaac","codes")/ROW[CODE=$e/Subelement3/Code]/DESCRIPTION/text()}
             </Description>
           </Subelement3>
         </Element>}
    </Root>
' passing xmlrec returning content)
from records
where ssn = '10000';
    The plan shows the nested loop access that slows things down.
    By contrast, a functionally-similar SQL query on relational data will use a hash join and perform 10x to 100x faster, even for a single record. There seems to be no way for the optimizer to see the regularity in the XML structure and perform a corresponding optimization in joining the code tables. Not sure if registering a schema would help. Using structured storage probably would. But should that be necessary given we’re working with a single record?
    Operation Object
    |SELECT STATEMENT ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | XPATH EVALUATION ()
    | TABLE ACCESS (BY INDEX ROWID) RECORDS
    | INDEX (RANGE SCAN) RECORDS_SSN
    With an xmlindex, the same query above runs in about 1 second, so is about 5x faster (0.2 sec/lookup), which is almost good enough. Is this the answer? Or is there a better way? I’m not sure why the optimizer wants to scan the code tables and index into the (one) XML record, rather than the other way around, but maybe that makes sense if the optimizer wants to use the same general plan as when the WHERE clause constraint is relaxed to multiple records.
    -- Add an xmlindex. Takes about 2.5 minutes
    create index records_record_xml ON records(xmlrec)
    indextype IS xdb.xmlindex;
    Operation Object
    |SELECT STATEMENT ()
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | TABLE ACCESS (BY INDEX ROWID) RECORDS
    | INDEX (RANGE SCAN) RECORDS_SSN
    Am I on the right path, or am I totally using the wrong approach? I thought about using XSLT but was unsure how to reference the code tables.
I've done the best I can by constraining the main record to a single row passed to the XMLQUERY. Given Mark's post (thanks!), should I be joining and constraining the code tables in the SQL WHERE clause too? That's going to make the query much more complicated, but right now we're more concerned about performance than complexity.
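One direction worth testing, as a hedged sketch against the test schema above: shred the repeating elements with XMLTABLE and join CODES in plain SQL, which at least exposes an ordinary, hash-joinable join to the optimizer (reassembling the decorated XML, e.g. with XMLELEMENT/XMLAGG, is omitted here):
select r.ssn, x.sub1_code, c.description as sub1_desc
from records r,
     xmltable('/Root/Element' passing r.xmlrec
              columns sub1_code varchar2(4) path 'Subelement1/Code') x,
     codes c
where r.ssn = '10000'
and c.code = x.sub1_code;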

  • Get Rid of Relative Dates in Applications?

    How do I get rid of relative dates in applications such as Mail?
    I cannot find anything in the Mail Preferences to change this.
Nor can I find anything in System Preferences -> Date & Time or International to change this behaviour.
    Settings in View->Show View Options, as previously recommended, do not affect the date format in applications.
    Thanks
    Al Maloney

    Why get rid of them?
    (1) They require extra steps in my brain to calculate the date of the document.
    (2) When I do not know to-day's date, how can I calculate yesterday's date?
    (3) Aesthetically, they are inconsistent with the list of dates.
    (4) I just do not like them.
    Sláinte!
    Al

  • Why can't I set a relative date for a recurring event in calendar on my ipod touch?

    why can't I set a relative date for a recurring event in calendar on my ipod touch?

Because the Calendar app does not support it. There are apps in the App Store that do.

How to retrieve relational data from an XMLType column in Oracle 10g R2

    Hi
I want to know how to retrieve the data stored as an XML document in an XMLType column of a table (or in an XMLType table holding the XML document). The XML document has to be queried with XQuery as relational data (not as an XML document).
If anybody has ideas, please share them ASAP.
Please share an example, because I am new to XQuery.
    Thanks in Expectation,
    Selva.

Got it working now. I used the 'extract' function in my select statement, but had to add the .getStringVal() function. The extract function, just by itself, returns an XMLType. The call for the column in the SQL statement looked like this:
    extract(XML_CONTENT, '/ROOTOBJECT').getStringVal() xml_content
    Thanks so much for your help. Problem solved!
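For completeness, a hedged sketch of both routes, assuming a hypothetical table my_docs with XMLType column xml_content: the extract().getStringVal() call above returns the document as a string, while XMLTABLE (available in 10gR2) shreds it into proper relational rows via XQuery:
select extract(d.xml_content, '/ROOTOBJECT').getStringVal() as xml_content
from my_docs d;

select x.item_id, x.item_name
from my_docs d,
     xmltable('/ROOTOBJECT/ITEM' passing d.xml_content
              columns item_id   varchar2(10) path '@id',
                      item_name varchar2(50) path 'NAME') x;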

  • Settlement of planned deliv. costs not possible if GR-related data entered

    Hi gurus,
While running transaction MRRL, if I enter the GR posting date and tick the goods + delivery checkbox, the error message 'Settlement of planned deliv. costs not possible if GR-related data entered' is triggered. Please explain the reason for this error.

I had the same problem; then I realised that this message only appears if I am using doc. selection = 4.
Thanks,
Teresinha

  • Need Suggestion to Stage Process Order related data

    Hi All,
Could anybody help me with a solution or suggestion for the problem I am describing here? The problem is how to stage the process-order-related data (downloaded from SAP ECC for sending to the machine database, or confirmed by the machine database and sent back to SAP ECC). Can we use the NetWeaver database, either by creating separate tables or a separate schema in the NetWeaver DB, or should we go for a separate DB to stage all the transactional data of the process orders?
    Thanks in Advance.
    Chandan

    Hello Chandan,
1) Yes, you can use the underlying NetWeaver DB, either by creating new tables or by creating a new schema, but per MII best practices it is not recommended, because you might accidentally end up affecting the NW and MII configurations.
2) There is no hard and fast rule saying you must stage data, but it is very good if you do, because:
                       a) your data will be buffered when SAP is down;
                       b) faster processing;
                       c) ability to perform more analysis on the data through drill-downs to sub-levels.
3) I would recommend going for a separate DB to stage all your SAP data (both for sending data to and receiving data from ECC).
    Hope this helps!!
    Regards,
    Adarsh

  • Any way to get rid of Relative Date display in Mail?

This is a long-standing problem for me on Macs, for as long as I can remember. I can't stand relative dates, and no matter what I do they always reappear. I just switched to MacMail and, much to my disappointment, I found relative dates there too. Is there ANY permanent way to get rid of relative dates once and for all?

    Carsten, you CAN get rid of the "negative number" entries. If you use the wizard to build the dimension, when it's all done, go back into the dimension object and delete the default "standard" hierarchy that gets created. Even though you apparently DO have to have a default level, you do NOT need to have a default hierarchy.
    This will get rid of the negative numbers and associated logic from the loads...but you still end up with both a DIMENSION_KEY and KEY column...both having exactly the same value in it.
    I wish Oracle would give you an option when creating a dimension to specify whether or not it's a dimension with levels...and if you said NO it would simply have a DIMENSION_KEY without any other keys. Glad to hear though that I wasn't missing something obvious on how to turn that off.
    Thx,
    Scott

  • Help Required: Working with relational data

    Hi,
    I'm looking for some advice
    Scenario
We have a customer who has an SE relational database with Apex for transaction processing. They require reporting functionality and are debating whether to buy BI SE One or BI EE; they will not buy the Enterprise Edition database, though.
    Questions
    Here are my questions:
1. In order to use Answers and Dashboards, does the relational data have to be in a dimensional format? Is this recommended?
2. We were debating whether to create data marts for various business requirements. Is this standard procedure for relational data (which is not too complex)?
3. If the data has to be in a dimensional format, is this best modelled using Warehouse Builder or the model layer of the repository using BI Administrator? (Bearing in mind that the customer is an SME using an SE database.)
The main problem we're having, being fairly new to warehouse building, is that we don't know the best path to take when working with relational data. Do we create dimensional models? What should we use? Is a relational structure suitable enough?
Can anyone shed light on our problems?
    Regards
    Kevin

1. In order to use Answers and Dashboards you do not need a data warehouse. BI can report from both a transactional database and a data warehouse.
2. It depends on what your business requirements are. BI EE/SE1 does not require you to have data marts. Data marts would give performance gains and would help you visualize the transactional data from a business standpoint; that is a big plus, but not a mandate. If you want to go the route of creating data marts, you can easily do that with BI SE1, since it includes Warehouse Builder, which can be used both as a data modelling tool and as an ETL tool.
3. Warehouse Builder is an ETL tool. It helps you visualize/deploy data warehouses in databases. The BI EE Admin tool is more of a logical dimensional modelling tool, wherein the facts and dimensions do not exist physically but the tool uses them to do the querying/processing.
Since you are just starting, you can use BI EE directly on top of the transactional system for the time being. But as a long-term solution you can start working on your data marts, and as soon as they are ready you can port BI EE on top of them rather than the transactional system.
    Thanks,
    Venkat
    http://oraclebizint.wordpress.com

  • Relative dates in advanced search / snapshot queries

    Hi -
Is there any way to search with a relative date in PT 5.x? E.g.: "Find me content published in the last week."
We have over 700 publications that make use of relative-date searches in PT 4.5 WS. I understand that these should be converted to separate snapshot queries in 5.x, but as I look at things I realize there does not seem to be a way to query by relative date; I seem to need fixed, specific dates ("between October 1 and October 7"). We rely heavily on this kind of logic in order to keep our content fresh with no ongoing maintenance.
    Anyone have any suggestions? Are we missing something obvious?
    Thanks,
    Eric

    Hi.
You can do this:
1. Replace the controller class (MAC) for the corresponding structure.
2. Redefine the method QUERY; it is then possible to change the parameters that the "Search Engine" (it might be the Reporting Framework) uses.
You can find more details in "The Cookbook", which can be found in the marketplace. If you don't have access, give me your e-mail and I will send it to you.
    Best Regards.
    Armando Rodriguez.

  • Using relational data from SQL data source in Planning and Essbase

    Hi,
How do I take sample data from a SQL data source and bring it into a Hyperion Planning application? I understand that when creating Planning applications, a link between a relational data source and Essbase must be established, because the relational database holds the metadata while the database outline is stored in Essbase. However, all I am currently able to do is load data into Planning applications via EAS, where I right-click on the application database, hit 'load data', and select either a .txt file or an Excel file. Do I need Oracle Data Integrator? Any help or insight would be greatly appreciated, as well as corrections to any incorrect assumptions I may have made in this post. Thank you.

When you import your file (Excel or text), you're importing it using a load rule in EAS. To load from SQL, you simply create a SQL load rule. You'll load data the exact same way (via EAS), but with a different type of load rule. The load rule will contain the SQL that queries the database. You can preview your data in the load rule the same way you would with a file.
If your SQL is very complex, I'd recommend creating a view and loading from that view. Otherwise it's pretty straightforward.
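For illustration, a hedged sketch of the kind of view a SQL load rule might select from; the table and column names are hypothetical, and the column order would match the load rule's field mapping:
create or replace view v_plan_load as
select entity, account, period, scenario, amount
from stg_gl_balances
where fiscal_year = 2012;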
    The only catch is that you need to configure a database connection (to your relational database) on the Essbase server. The Essbase DBA guide will show you how to do this.
    You COULD use ODI, but I tend to only use it for loading metadata.
    Hope this helps,
    - Jake
