HFM Extended Analytics Extract - Only for Data at Review Level 6 and up?

Hello, I have a requirement to extract data from HFM as flat files so that they can be picked up by a separate process.
To do this I have used Extended Analytics to create the extracts I require.
The issue is that users should only be able to extract data that has been approved to Review Level 6 or above, but at the moment they can choose any POV regardless of the Review Level of the four-dimension combination.
Is there a recommended or easy way to achieve this?
I have looked into a few possibilities including:
1. Restricting the LOV drop-downs (by re-creating the metadata via a custom script)
2. Having a process "catch" the fact that the POV is not approved upon export (perhaps using a taskflow with a "Process Management Action" stage, or by creating a Rule that can be applied to the EA Extract?)
3. Including a "header" row with the POV Review Level value in the extracted data file (so the next process in the chain can validate it)
4. A custom API method
I am quite inexperienced with HFM, so any advice or pointers would be very much appreciated.
Many Thanks,
Martin

Terri,
Doesn't EIS require a parent/child (recursive) table for the dimensions? If it could read HFM EA's tables as they are natively organized, then I guess EIS would be an approach, although unfortunately it is not possible in the environment in question. I am at a loss as to why HFM EA doesn't output tables in the form Essbase requires, especially given the Essbase option when exporting, but it is what it is.
I got an answer over on Network54 suggesting how to build those tables in Oracle. Again, unfortunately, I'm in a SQL Server 2000 environment, so that means extra work for my SQL resource, as I think it requires a stored procedure (I am not, never have been, and will likely never claim to be anything more than a 'SELECT * FROM' level SQL user), but at least it's a start.
The original thought I had was to get these Essbase-ready parent/child dimension tables and build the cubes from them. Are you interested in EIS because it offers better automation?
Regards,
Cameron Lackpour

Similar Messages

  • Are analytic functions useful only for data warehouses?

    Hi,
    I deal with reporting queries on Oracle databases but I don't work on data warehouses, so I'd like to know whether learning to use analytic functions (SQL for analysis such as ROLLUP, CUBE, GROUPING, ...) could help me develop better reports, or whether analytic functions are usually useful only for data warehouse queries. I mean, are ROLLUP, CUBE, GROUPING, ... also useful on an operational database, or do they only make sense on a DWH?
    Thanks!

    Mark1970 wrote:
    thus is it worth learning them for improving report queries not only on a DWH but also on common operational databases?
    Why pigeonhole report queries as "operational" or "data warehouse"?
    Do you tell a user/manager that "No, this report cannot be done as it looks like a data warehouse report and we have an operational database!"?
    Data processing and data reporting requirements do not care what label you assign to your database.
    Simple real-world example of using analytical queries on a non-warehouse: we supply data to an external system via XML. They require that we limit the number of parent entities per XML file we supply, e.g. 100 customer elements (together with all their child elements) per file. Analytical SQL enables this to be done by creating "buckets" that each contain at most 100 parent elements (a minimal sketch follows below). The complete process is SQL driven - no slow-by-slow row-by-row processing in PL/SQL using nested cursor loops and silly approaches like that.
    Analytical SQL is a tool in the developer toolbox. It would be unwise to remove it from the toolbox, thinking that it is not applicable and won't be needed for the work that's to be done.
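    A minimal sketch of that bucketing idea, assuming a hypothetical CUSTOMERS table and a limit of 100 parents per file (the table and column names are illustrative only):
    -- Assign each parent row to a bucket of at most 100 rows;
    -- each distinct bucket_no then drives one output XML file.
    SELECT customer_id,
           CEIL(ROW_NUMBER() OVER (ORDER BY customer_id) / 100) AS bucket_no
    FROM   customers;
    Everything stays set-based: the downstream file generation simply groups on bucket_no instead of looping row by row in PL/SQL.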

  • Need to build communication redundancy using serial RS-232 for data transfer between Host and RT, irrespective of TCP/IP data transfer

    Hi - I would like to build logic that provides communication redundancy over serial RS-232 for data transfer between Host and RT, independent of the TCP/IP data transfer.
    I want data transfer between host and RT to switch to the RS-232 VISA port whenever the TCP/IP Ethernet cable is unplugged from the controller. The logic should also keep checking whether the TCP/IP link has been re-established, and as soon as it is, communication should go back to using that link only. This is accomplished by deploying the RT VI as an executable. I built some logic along these lines, but it is not working as well as I expected.
    Please go through the two attached VIs and let me know what I did wrong.
    Attachments:
    TCP_Serial_Host.vi 33 KB
    TCP_Serial_RT.vi 41 KB

    I am also new to this topic and am trying to get familiar with these protocols.
    Refer to the TCP server/client examples in the LabVIEW examples.

  • PO item service-level short and long texts using SAPscript

    Please let me know how to print the PO item service-level short and long texts using SAPscript.
    <MOVED BY MODERATOR TO THE CORRECT FORUM>
    Edited by: Alvaro Tejada Galindo on May 5, 2009 10:25 AM

    Hi,
    In the PO transaction, choose Goto -> Header Texts and note the details of the texts such as ID, object name, language and name. Pass all these details to the 'READ_TEXT' function module; it returns the text lines defined for a particular PO, and the same approach works for the item texts.
    Thanks,
    Suma.

  • Scripting: browse for data files in a folder and all subfolders below

    Hello,
    I'm looking for a command to search for data files (*.dat) in a folder and all subfolders below it.
    I checked out the DirListGet command, but it only searches in the specified folder and not in the subfolders below it.
    Anyone have an idea?
    I don't want to use a loop structure to find the subfolders and browse for the data files.
    Mr. Buddy

    ' The "FullFilenamesRecursive" option makes DirListGet search subfolders as well
    dim result : result = DirListGet("C:\tmp", "*.dat", "filename", "FullFilenamesRecursive")
    dim fl : for each fl in result
      MsgBox fl
    Next
    Works fine for me, and recursively at that.
    Alternatively, with much more effort but the same result:
    Option Explicit ' Forces the explicit declaration of all variables in the script.
    dim folderPath : folderPath = "C:\tmp"
    dim files : files = GetFileListRecursive(folderPath)
    dim fl : for each fl in files
      MsgBox fl
    Next
    ' Returns an array with the full paths of all *.dat files below folderPath.
    Function GetFileListRecursive(folderPath)
      dim fso : Set fso = CreateObject("Scripting.FileSystemObject")
      dim results : results = Array()
      GetFiles fso, folderPath, results
      GetFileListRecursive = results
    End Function
    ' Recursively collects matching files into the results array (passed by reference).
    Sub GetFiles(fso, folderPath, results)
      dim folderObj : Set folderObj = fso.GetFolder(folderPath)
      dim f : for Each f In folderObj.Files
        ' Case-insensitive comparison of the file extension against "dat"
        if (0 = StrComp(fso.GetExtensionName(f), "dat", 1)) then
          dim index : index = ubound(results) + 1
          redim Preserve results(index)
          results(index) = f.Path
        End If
      next
      dim d : for Each d In folderObj.SubFolders
        GetFiles fso, d.Path, results
      next
    End Sub

  • Why am I being charged for data when connected to wifi AND not using my phone?!

    WHY am I being charged for data that I'm not using, at times when I'm asleep AND my phone is connected to wifi? This happens at really specific, alternating times like 9:54 pm every day, or 12:01 am, 6:01 am and then 12:01 pm and 6:01 pm. When I called I was told "maybe your wifi at home is disconnecting" or "maybe your phone is connecting to AT&T or Sprint". It's not possible that my phone is disconnecting from wifi every six hours on the dot and, at that exact time, I'm always using data. I find it hard to believe that I live in a HUGE city and my phone magically connects to other networks and I NEVER notice this. Verizon is full of LIARS (and maybes) and I'm not convinced that these data charges are a coincidence. I was told to turn off my cellular data to see if the problem resolves itself. Why should I alter my daily life when Verizon is clearly the one with the issue? I hope there's a class action lawsuit, because Verizon is trying to get over on everyone! I need answers!

    Hello NotHappy101,
    I think it's quite odd that your data usage is at very specific times. We can certainly take a closer look. Please reply to my Direct Message, so we can get some additional details.
    Thanks,
    MichelleH_VZW
    Follow us on Twitter @VZWSupport

  • Reconciliation and data verification approaches for data in SAP R/3 and BW

    Hi
    Can anybody suggest the different reconciliation and data verification approaches to ensure that BW and R/3 source data are in sync?
    Thanks in advance.
    Regards,
    Nisha.

    Hi
    What you can do is go to R/3 transaction RSA3 and run the extractor; it gives you the number of records extracted. Then go to the BW Monitor and check the number of records in the PSA.
    If they are the same, there you go.
    There is a How-To document on the SAP Service Marketplace, "Reconcile Data Between SAP Source Systems and SAP BW"; check this document as well, it is very helpful.
    There is another How-To document, "Validate InfoCube data by comparing it with PSA data", which is also good.
    Hope it helps.
    Hari Immadi
    http://immadi.com
    SEM BW Analyst

  • Data Mismatch: Plant level stock and Serial Number collection

    Hi,
    I am working with stock in SM.
    I want the plant-level stock and the serial number collection for a serialised material. The value coming back for the plant stock is 20 (e.g. MATNR = 50004050), but the corresponding serial number collection has 27 entries; the plant stock and the serial number collection should be one and the same.
    For the plant stock I'm using the MARA and MARC tables; for the serial number collection I'm using the EQUI and EQBS tables.
    When I increase the stock using transaction MB1C with movement type 561, I give the serial numbers for the material. The issued serial numbers appear in the serial number collection correctly, but the stock at plant level is not updated accordingly. Please let me know where I can get a consistent stock figure.
    Thanks in Advance,
    Mohan

    This could happen if the status of a serial number is wrongly updated to ESTO. From where are you taking the serial numbers? Which quantity is physically right: the one shown as plant stock or the one implied by the serial numbers? You may want to check the history of the serial number to see whether it is really available in stock.
    It can also be that the serial number is optional, so stock is reduced at issue but the serial number is not captured; then the stock at plant level is lower while the serial number count stays higher.
    If this is the case, reconciliation directly from the tables will be difficult.
    Regards
    Sangeeta
    Edited by: Sangeeta Khurana on May 4, 2011 12:34 AM

  • Flat file Extended Analytics extract from HFM

    The file we extract is in Unicode/binary format - how do we convert it to a regular text format?
    Any suggestions? Thanks.

    Can you provide some more detail? The EA data we extract in v9.3.1 is exported directly into an ODBC database which we can query via SQL, Access, etc. You should just have to set up a query around the FACT table; a rough example is sketched below.
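    A rough sketch of that kind of query, assuming the EA star schema consists of a fact table plus one table per dimension (the table and column names here are illustrative only, not the exact names your extract will produce):
    -- Join the fact table to two of its dimension tables to get readable member labels
    SELECT e.LABEL AS entity,
           a.LABEL AS account,
           f.DATA  AS amount
    FROM   MYEXTRACT_FACT    f
    JOIN   MYEXTRACT_ENTITY  e ON e.ID = f.ENTITY
    JOIN   MYEXTRACT_ACCOUNT a ON a.ID = f.ACCOUNT;
    Once the query returns what you expect, it can be spooled to a flat file in whatever encoding the downstream process needs.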

  • How is it possible to extend pattern chars valid for Date formatting?

    Hi
    I need to represent the Calendar.DAY_OF_MONTH and Calendar.MONTH date fields each as a single character (1, 2, 3, ..., 9, A, B, C, D, ...) while keeping the standard patterns working.
    SimpleDateFormat doesn't offer this possibility.
    As a result I want to have something like this:
    ExtDateFormat edf = new ExtDateFormat("yyyy/B/C"); // where B is the month as (1,2,3,..,9,A,B,C) and C is the day as (1,2,3,..,9,A,B,C,..)
    System.out.println(edf.format(Calendar.getInstance().getTime())); // prints something like "2008/03/D"

    You can extend the wireless range of the AirPort with a D-Link or the other way around ... BUT only if the connection between them is wired. This would be the basis of a roaming network. If you must have them interconnected by wireless, then it will NOT work.

  • Would it be useful to have one partition on my internal HD only for data and another for apps, the OS and that kind of stuff?

    As I wrote in the title, would it be useful?
    I come from a Windows PC where I had that kind of division, and everything seemed to work fine for 5 years.
    Please give reasons with your answers so I can understand the point.

    File fragmentation is not an issue with OS X, which looks after itself.
    Defragmentation in OS X:
    http://support.apple.com/kb/HT1375  which states:
    You probably won't need to optimize at all if you use Mac OS X. Here's why:
    Hard disk capacity is generally much greater now than a few years ago. With more free space available, the file system doesn't need to fill up every "nook and cranny." Mac OS Extended formatting (HFS Plus) avoids reusing space from deleted files as much as possible, to avoid prematurely filling small areas of recently-freed space.
    Mac OS X 10.2 and later includes delayed allocation for Mac OS X Extended-formatted volumes. This allows a number of small allocations to be combined into a single large allocation in one area of the disk.
    Fragmentation was often caused by continually appending data to existing files, especially with resource forks. With faster hard drives and better caching, as well as the new application packaging format, many applications simply rewrite the entire file each time. Mac OS X 10.3 onwards can also automatically defragment such slow-growing files. This process is sometimes known as "Hot-File-Adaptive-Clustering."
    Aggressive read-ahead and write-behind caching means that minor fragmentation has less effect on perceived system performance.

  • I need options for data replication between a production db and a dimensional db

    Hi,
    I'm looking for options on how to solve this issue. We have 2 databases: one is our production, operational database, used by around 400 users at a time, and the other is our dimensional model of the same information, used for reporting. We also have a lot of ETL (extract, transform and load) processes running every night to update the dimensional model.
    My problem is that we have some online reports which currently get their data from the operational database, causing a performance issue for online operations. We want to migrate these reports to the dimensional model, and we're trying to find the best way of doing this.
    Options we're considering are ETL processes running continuously every XX minutes, materialized views, ETL on demand, and others.
    Our objective is to minimize performance issues on the transactional database.
    We're using Oracle 8i (yes, the old one) and Reporting Services as the report engine (reports just run a pkg to get the data).
    Any option is welcome.
    Thx in advance.
    Regards,
    Adrian.

    The best option for you, if performance is the most important, is Oracle Streams. It is also the most complex, but the final results are very good.
    Agreed. As User12345 points out, though, that requires Oracle 9.2 or higher.
    Another option is materialized views with fast refresh, which need materialized view logs on the master site. The first load is expensive, but if you refresh every 15 minutes the cost is not high.
    I'd be careful about making that sort of statement. The overhead of both maintaining materialized view logs (which have to be written to synchronously with the OLTP transactions and which impose an overhead roughly equivalent to a trigger on the underlying table) and doing fast refreshes every 15 minutes can be extensive depending on the source system. One of the reasons that Streams came about was to limit this overhead.
    For refresh I execute a cron shell that runs the DBMS_MVIEW.REFRESH package; my experience with group refresh was not good.
    What was your negative experience with refresh groups? I've used them regularly without serious problems. Manual refreshes of individual materialized views against an OLTP system would scare the pants off me, because you'd inevitably end up with transactionally inconsistent views of the data (i.e. child records would be present with no parent record, updates that affect multiple tables would be partially replicated until the next refresh, etc.). Trying to move that sort of inconsistent data into a different data model, and trying to run reports off that data, would seem highly challenging at a minimum. Throwing everything into a single refresh group so that all the materialized views are transactionally consistent, or choosing a handful of refresh groups for those tables that are related to each other, seems like a far easier way to build a system (a minimal sketch follows after this reply).
    Justin
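    A minimal sketch of the materialized view log / fast refresh / refresh group setup discussed above, assuming hypothetical ORDERS and ORDER_LINES tables (object names are illustrative; in practice the MV queries would select through a database link to the production database):
    -- On the OLTP (master) database: MV logs, maintained synchronously with the transactions
    CREATE MATERIALIZED VIEW LOG ON orders      WITH PRIMARY KEY;
    CREATE MATERIALIZED VIEW LOG ON order_lines WITH PRIMARY KEY;
    -- On the reporting database: fast-refreshable materialized views
    CREATE MATERIALIZED VIEW mv_orders      REFRESH FAST ON DEMAND AS SELECT * FROM orders;
    CREATE MATERIALIZED VIEW mv_order_lines REFRESH FAST ON DEMAND AS SELECT * FROM order_lines;
    -- One refresh group keeps both views transactionally consistent with each other
    BEGIN
      DBMS_REFRESH.MAKE(
        name      => 'rep_group',
        list      => 'mv_orders, mv_order_lines',
        next_date => SYSDATE,
        interval  => 'SYSDATE + 15/1440');  -- refresh every 15 minutes
    END;
    /
    Refreshing the group (DBMS_REFRESH.REFRESH('rep_group')) applies changes to both views in one consistent operation, which avoids the parent/child inconsistencies described above.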

  • Extended Support contract required for 10.2.0.5 PSU and SBP

    Hi all
    There are important changes regarding the terms of use of the Oracle Bundle Patches during the Oracle Extended Maintenance period for Oracle Version 10.2.
    For more information, see Hot News [SAP Note 1654734|http://service.sap.com/sap/support/notes/1654734] (SAP Service Marketplace logon required).
    Best regards, Aidan
    Follow us on [Facebook|http://www.facebook.com/SAPonOracle#!/SAPonOracle] and [Twitter|http://twitter.com/#!/SAPonOracle].

    Hi,
    Thanks a lot.
    Yes, I just saw in the support portal exactly what you said: no PSU is available for our Linux version on x86-64.
    I would appreciate it if you could correct me if I am wrong about using OPatch on the DB home for a rolling RAC apply:
    oracle:> opatch apply -local -oh /u01/app/oracle/product/db10.2.0
    I will only stop the instance running on the db01 node. I hope the opatch command above can be used for a rolling RAC apply with no downtime, can't it?
    Best Regards

  • Can I backdate iTunes to iTunes 10? I use it only for my 18,000 song collection, and I hate the iTunes 11 configuration.

    I have an iMac running OS X 10.6.8. My iTunes is, reluctantly, up to date. I use iTunes to organize my music, with over 100 playlists; all songs are rated, dated, commented on, and added to as many of the playlists as they ought to be heard in. I have no use for the album covers - I don't even keep them.
    Can I re-enter an earlier iTunes version without losing my comments, dates, ratings, etc.? I would like to randomize the order in which songs are played within their playlists, and I don't want to lose th  I would much prefer this to the current tedious system.
    Thank you so much
    PS  A bit of Mac history on its birthday. I had one of the first 50 Macs sold in the Washington area and made good use of it the very first time I sat down to try it. It was very expensive, over $4000, and worth every penny. I was running the office of a group of people running for political office as a team. The artist we had hired to design political flyers for each of them did not do her job, and I was able, the first time I ever touched a Mac, to design and print 20-odd flyers with no trouble at all. I was enchanted, and have never used anything other than a Mac ever since.

    iTunes will always store the backups on the same drive or partition it is installed on. A couple of things that might help:
    - If you have multiple backups sitting there, you can delete extraneous ones by opening iTunes, clicking the Edit menu and choosing Preferences. In the window that comes up, click the Devices tab and delete the backups you don't need.
    - Alternatively, you can move the backup files to a different location. Use this article to locate the backup files:
    http://support.apple.com/kb/ht4946
    - If there is really not enough room on that drive, just uninstall iTunes and reinstall it onto the larger drive. Make sure to follow the steps in this article if you uninstall:
    http://support.apple.com/kb/ht1925

  • Steps for Data Guard with one primary and 2 standby

    Hi,
    Database :10.2.0.4, 11.2.0.1
    Os: Windows , Unix
    A ----------------> Primary database
    B ----------------> Standby Database 1
    C ----------------> Standby Database 2
    I want to configure 2 standby databases for a single primary database.
    Let's say A, B and C are my machines. My Data Guard configuration will be such that archive logs are shipped from A to B and from A to C.
    If I do a switchover between A and B, then B becomes primary and A and C remain standby databases. At that stage, archive logs should move from B to A and from B to C. The same should happen from C to A and from C to B if I do a switchover between B and C. If everything is fine, I will then switch back to the main primary database (A).
    How do I have to set the PFILE parameters on each machine, for example:
    LOG_ARCHIVE_DEST_1=LOCATION=<PATH> -- LOCAL ARCHIVE PATH
    LOG_ARCHIVE_DEST_2=SERVICE=
    LOG_ARCHIVE_DEST_3=SERVICE=
    FAL_SERVER=
    FAL_CLIENT=
    STANDBY_FILE_MANAGEMENT=
    In my tnsnames.ora, primary, standby1 and standby2 are my service entries, and these are the same on all of my machines.
    Please suggest how I can configure my pfiles on all the machines.
    Thanks,
    Sunand

    Not yet, but now you have me interested.
    Please consider Flashback.
    I still have to test but here's my take:
    PRIMARY SETTINGS
    *.FAL_SERVER=STANDBY
    *.FAL_CLIENT=PRIMARY
    *.STANDBY_FILE_MANAGEMENT=AUTO
    *.DB_UNIQUE_NAME=PRIMARY
    *.LOG_FILE_NAME_CONVERT='STANDBY','PRIMARY'
    *.log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PRIMARY'
    *.log_archive_dest_2='SERVICE=STANDBY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STANDBY'
    *.log_archive_dest_3='SERVICE=STANDBY2 LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STANDBY2'
    *.LOG_ARCHIVE_DEST_STATE_1=ENABLE
    *.LOG_ARCHIVE_DEST_STATE_2=ENABLE
    *.LOG_ARCHIVE_DEST_STATE_3=ENABLE
    *.LOG_ARCHIVE_MAX_PROCESSES=30
    STANDBY 1 SETTINGS
    *.FAL_SERVER=PRIMARY
    *.FAL_CLIENT=STANDBY
    *.STANDBY_FILE_MANAGEMENT=AUTO
    *.DB_UNIQUE_NAME=STANDBY
    *.LOG_FILE_NAME_CONVERT='PRIMARY','STANDBY'
    *.log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=STANDBY'
    *.log_archive_dest_2='SERVICE=PRIMARY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PRIMARY'
    *.log_archive_dest_3='SERVICE=STANDBY2 LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STANDBY2'
    *.LOG_ARCHIVE_DEST_STATE_1=ENABLE
    *.LOG_ARCHIVE_DEST_STATE_2=DEFER
    *.LOG_ARCHIVE_DEST_STATE_3=DEFER
    *.LOG_ARCHIVE_MAX_PROCESSES=30
    STANDBY2 SETTINGS
    *.FAL_SERVER=PRIMARY
    *.FAL_CLIENT=STANDBY2
    *.STANDBY_FILE_MANAGEMENT=AUTO
    *.DB_UNIQUE_NAME=STANDBY2
    *.LOG_FILE_NAME_CONVERT='PRIMARY','STANDBY2'
    *.log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=STANDBY2'
    *.log_archive_dest_2='SERVICE=STANDBY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STANDBY'
    *.log_archive_dest_3='SERVICE=PRIMARY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PRIMARY'
    *.LOG_ARCHIVE_DEST_STATE_1=ENABLE
    *.LOG_ARCHIVE_DEST_STATE_2=DEFER
    *.LOG_ARCHIVE_DEST_STATE_3=DEFER
    *.LOG_ARCHIVE_MAX_PROCESSES=30
    Edited by: mseberg on Nov 29, 2010 9:39 AM
    The first test slapped me. Have a look at MOS note 409013.1, Cascaded Standby Databases.
    Edited by: mseberg on Nov 29, 2010 12:49 PM
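    As a generic sanity check after setting these parameters (not specific to this exact configuration), the archive destinations can be verified from the current primary with a query like:
    -- Shows whether each destination is VALID or reporting a shipping error
    SELECT dest_id, status, error
    FROM   v$archive_dest
    WHERE  dest_id IN (1, 2, 3);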
