Flat file Extended Analytics extract from HFM

The file we extract is in Unicode/binary format. How do we convert it to regular text mode?
Any suggestions? Thanks.

Can you provide some more detail? The EA data we extract in v9.3.1 is exported directly into an ODBC database which we can query via SQL, Access, etc. You should just have to set up a query around the FACT table.
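If the extract really is being written as a Unicode flat file rather than into a database, it can be re-encoded before downstream processing. A minimal sketch, assuming the file is UTF-16 with a BOM (the encoding and file names are assumptions, not confirmed by the thread):

```python
# Hypothetical sketch: re-encode a Unicode (UTF-16) extract file to UTF-8
# so downstream tools expecting "regular" text can read it.
import codecs

def convert_utf16_to_utf8(src_path, dst_path, src_encoding="utf-16"):
    """Stream the file line by line, re-encoding as UTF-8."""
    with codecs.open(src_path, "r", encoding=src_encoding) as src, \
         codecs.open(dst_path, "w", encoding="utf-8") as dst:
        for line in src:
            dst.write(line)
```

The `utf-16` codec consumes the byte-order mark automatically; if the extract turns out to be UTF-16LE without a BOM, pass `src_encoding="utf-16-le"` instead.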

Similar Messages

  • HFM Extended Analytics Extract - Only for Data at Review Level 6 and up?

    Hello, I have a requirement to Extract Data from HFM as Flat Files in order for them to be picked up by a separate process.
    In order to do this I have used Extended Analytics to create the extracts I require.
    The issue I have is that users are only supposed to be able to extract data that has been approved up to Review Level 6 and above, and at the moment they can choose any POV regardless of the Review Level the combination of the 4 dimensions has.
    Is there a recommended or easy way to achieve this?
    I have looked into a few possibilities including:
    1. Restricting the LOV drop-downs (via re-creating metadata via a custom script)
    2. Having a process "catch" the fact that the POV is not approved upon export (perhaps using a taskflow with a "Process Management Action" stage, or by creating a Rule that can be applied to the EA Extract?)
    3. Including in the Extracted data file a "header" row with the POV Review Level value (so the next process in the chain can handle validating this)
    4. A custom API method
    I am quite inexperienced with HFM, so any advice or pointers would be very much appreciated.
    Many Thanks,
    Martin

    Terri,
    Doesn't EIS require a parent-child/recursive table for the dimensions? If it could read HFM EA's tables as they are natively organized, then I guess EIS would be an approach, although unfortunately it is not possible in the environment in question. I am at a loss as to why HFM EA doesn't output tables as required for Essbase, especially given the Essbase option when exporting, but it is what it is.
    I got an answer over on Network54 suggesting how to build those tables in Oracle. Again, unfortunately, I'm in a SQL Server 2000 environment, so that means extra work for my SQL resource, as I think it requires a stored procedure (I am not, never have been, and will likely never claim to be anything beyond a 'SELECT * FROM' level of SQL knowledge), but at least it's a start.
    The original thought I had was to get these Essbase-ready parent-child dimension tables and build the cubes from them. Are you interested in EIS because it has better automation?
    Regards,
    Cameron Lackpour
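    For what it's worth, the conversion Cameron describes, from EA's generation-style dimension tables to the parent-child format Essbase/EIS expects, can also be done outside the database. A rough sketch, with the column layout assumed for illustration rather than taken from EA's actual schema:

    ```python
    # Hypothetical sketch: flatten generation-style dimension rows
    # (Gen1, Gen2, Gen3, ...) into parent/child pairs for an Essbase-style load.
    # The row layout is an assumption, not HFM EA's actual table structure.
    def generations_to_parent_child(rows):
        """rows: tuples, each a path from the top of the hierarchy down.
        Returns de-duplicated (parent, child) pairs in first-seen order."""
        seen = set()
        pairs = []
        for row in rows:
            path = [m for m in row if m]          # drop trailing empty generations
            for parent, child in zip(path, path[1:]):
                if (parent, child) not in seen:
                    seen.add((parent, child))
                    pairs.append((parent, child))
        return pairs
    ```

    The same pairing logic is what a SQL Server 2000 stored procedure would have to do with cursors; in a scripting language it is a few lines.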

  • Flat file issue in extract program

    Hi All
    I have developed an extract program which downloads data from SAP and dumps it into a flat file in the desired format.
    The flat file contains various rows.
    In one row, I want to leave 80 blank characters at the end.
    Ex:
    HDR2345 c 20060125 0r42z1005.5USD
    Now, after "USD", I want to keep 80 characters of blank spaces.
    If I code it as v_record+33(80) = ' '. it doesn't work.
    Please suggest a solution.
    Regards
    Pavan

    Hi Pavan,
    Please try this code.
    DATA: L_SPACE TYPE X VALUE '20'.
    MOVE L_SPACE TO V_RECORD+33(80).
    OR
    MOVE L_SPACE TO V_RECORD+79(80).
    This should work.
    Regards,
    Ferry Lianto
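    Outside ABAP, the same fixed-width idea, blank-padding a record from a given offset for a given width, can be sketched like this (offsets taken from the example in the question):

    ```python
    # Sketch of fixed-width blank padding: make positions offset..offset+width
    # spaces, yielding a constant-length record line.
    def pad_record(record, offset=33, width=80):
        """Truncate the record at `offset`, then pad with spaces to offset+width."""
        return record[:offset].ljust(offset + width)
    ```

    In ABAP the equivalent is simply ensuring the statement ends with a period and that the target field is long enough to hold position 33+80.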

  • Data extract from HFM

    Hi,
    There is a need to get trial balance data from HFM in a format that has account info as debits and credits (for a journal load to PeopleSoft), and has a control field at the end. How can this best be achieved?
    Regards


  • Periodic Data Extract from HFM

    Does anyone have any tips/suggestions for extracting periodic data out of HFM? Currently in our application we are loading year to date data from Oracle into HFM using FDM. When we go to extract periodic data in the application it skips over the ghost numbers that are generated in HFM. In order for the data to be complete though we need those ghost numbers to come out in the extract file. Any help would be greatly appreciated. Thank you!

    The ghost numbers I am referring to are ones generated in HFM. For example on an income statement if you load $1,000 as the January YTD Balance and load $0 as the February YTD Balance, HFM will create a ghost (gray) $-1000 to make the February YTD Balance truly Zero. I need to somehow extract that -1000 amount to make a true periodic data file that will be loaded into Hyperion Planning. Thanks!
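    The periodic conversion itself is just the difference between consecutive YTD balances, which is exactly where the offsetting "ghost" amount appears. A small sketch:

    ```python
    # Sketch: derive periodic values from YTD balances. The ghost amount HFM
    # derives (e.g. -1000 when February YTD drops to zero) falls out naturally
    # as the period-over-period difference.
    def ytd_to_periodic(ytd_by_month):
        """ytd_by_month: ordered YTD balances, one per period.
        periodic[i] = ytd[i] - ytd[i-1]; the first period equals its YTD."""
        periodic = []
        prev = 0.0
        for ytd in ytd_by_month:
            periodic.append(ytd - prev)
            prev = ytd
        return periodic
    ```

    With the thread's example, January YTD 1000 and February YTD 0 yield periodic values 1000 and -1000, which is the ghost amount that needs to land in the Planning load file.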

  • Unable to download metadata extracts from HFM

    I have installed 11.1.2.3 using an Oracle database on Windows Server 2008. After extracting a metadata file, when I click on download I am unable to download the file. At the bottom of the screen there is the message "http://servername:9000/hfmadf/faces/hfm.jspx? accessibility mode= false".
    I would appreciate it if someone could help me resolve this issue.
    Thank you


  • Getting errors loading flat file into BI 7 from application server

    We are able to load the file via local (from the PC) successfully but when we try and load the exact same file from the application server we get an error:
    Error 'The argument '151.8470707' cannot be interpreted as a number' when assigning application structure, line 478, contents "11 140604     1000 8110 902070    P..."
    What is really strange is that if we split the file into two, we are able to load each of the sections successfully from the application server. We need to load the whole file from the application server and don't wish to split it up. Can anyone tell me how to make this work?

    What I would do is the following:
    open the 3 files you have using Notepad and compare line 478 in the original file (that had the error) with the corresponding line in one of the 2 other files. What are the differences? The answer to this question might give you a clue.
    And if you don't see any differences, try to cut and paste that one line with the error into a new 4th file, load that one, and see if the error reoccurs (I think it will).
    regards, André
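    André's manual comparison can also be automated: scan the file and report the first line and field that fails numeric parsing. A sketch, with the delimiter and the set of numeric columns as assumptions for illustration:

    ```python
    # Sketch: locate the first line/field in a delimited file that cannot be
    # parsed as a number. Delimiter and numeric column positions are assumed.
    def find_bad_numeric(path, numeric_cols, sep=";"):
        with open(path, encoding="utf-8") as f:
            for lineno, line in enumerate(f, start=1):
                fields = line.rstrip("\n").split(sep)
                for col in numeric_cols:
                    try:
                        # tolerate decimal commas as well as points
                        float(fields[col].replace(",", "."))
                    except (ValueError, IndexError):
                        bad = fields[col] if col < len(fields) else None
                        return lineno, col, bad
        return None
    ```

    Running this over the full file and each split half would show whether line 478 itself is bad or whether the problem only appears at a certain file size or position.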

  • Extract hierarchy from a flat file

    Hello,
    I'm trying to extract a hierarchy from a flat file.
    The hierarchy is built from 3 different InfoObjects; each InfoObject represents a different level in the hierarchy. I built the hierarchy on the last level and added all the levels as external characteristics of the hierarchy.
    (I used the blog: Hierarchy Upload from Flat files: /people/prakash.bagali/blog/2006/02/07/hierarchy-upload-from-flat-files )
    When I extract the data into the PSA it looks fine, but when I choose to update the PSA data into the InfoObject, the request stays yellow. There is no dump and I can't find the job in SM50.
    Please advise,
    David

    Hi David,
    That's the problem with using the PSA for flat-file hierarchy loads. If you can, change the transfer method to "IDoc" and retry the load. The IDoc method gives you an advantage when dealing with flat files, as you can debug and see which record number has errors.
    Bye
    Dinesh

  • Down Loading  of data in Japanese Language from SAP R/3  30F to Flat file

    Hi All,
    We need to extract data (from the MM, SD, FI and CO modules, for reporting purposes) in Japanese from R/3 30F to a flat file, and then upload from the flat file to BW 3.5.
    When we try to download some sample records, the Japanese characters come out as boxes.
    Can anyone suggest how to extract the data in Japanese into a flat file?
    Also, are there any standard extractors available to extract all of MM, FI, CO, SD etc. that we can use?
    Please suggest a solution to the above problem.
    Thanks & Regards
    Ranga Rao

    Hi,
    I have no idea about this myself, but after seeing your question I read a document that may be helpful for you.
    Check the link below:
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/5411d290-0201-0010-b49b-a6145fb16d15
    Regards,
    Vijay
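    Independent of the link, the "boxes" symptom usually means the file was written in a code page that has no Japanese characters. If any custom download step is involved, writing with an explicit Unicode encoding avoids the problem; a minimal sketch (delimiter and helper name are illustrative):

    ```python
    # Sketch: write delimited rows with an explicit encoding so Japanese text
    # survives the download. UTF-8 shown; Shift-JIS could be used for legacy
    # consumers that require it.
    def write_rows(path, rows, encoding="utf-8"):
        with open(path, "w", encoding=encoding, newline="") as f:
            for row in rows:
                f.write(";".join(row) + "\n")
    ```

    In SAP itself the equivalent is opening the dataset in text mode with an explicit code page or encoding rather than the non-Unicode default.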

  • Extra column to be added while importing data from flat file to SQL Server

    I have 3 flat files, each with only one column.
    The file names are bars, mounds & mini-bars.
    Table 'prd_type' has columns 'typeid' & 'typename', where typeid is auto-incremented and typename is bars, mounds or mini-bars.
    I import data from the 3 files into the prd_details table. This table has columns 'pid', 'typeid' & 'pname', where pid is auto-incremented and pname is mapped from the flat files; now I want the typeid info to come from the prd_type table.
    Can someone please advise me on this?

    You can get it as follows.
    Assuming you have three separate data flow tasks for the three files:
    1. Add a new column to the pipeline using a derived column transformation and hardcode it to bars, mounds or mini-bars depending on the source.
    2. Add a lookup task based on the prd_type table. Use the query:
    SELECT typeid, typename
    FROM prd_type
    Use the full cache option.
    Add the lookup based on the derived column (new column -> prd_type.typename relationship) and select typeid as the output column.
    3. In the final OLE DB destination task, map the typeid column to the table's typeid column.
    If you use a single data flow task, you need to include logic based on the filename or similar to get the hardcoded type value (bars, mounds or mini-bars) into the data pipeline.
    Please mark this as answer if it helps to solve the issue. Visakh ---------------------------- http://visakhm.blogspot.com/ https://www.facebook.com/VmBlogs
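    Outside SSIS, the derived-column-plus-lookup step boils down to a join while inserting. A sketch in plain SQL, run through sqlite3 here so it is self-contained (table names are from the thread; the product names are illustrative):

    ```python
    # Sketch of the lookup step in plain SQL, mirroring the SSIS flow:
    # hardcode the typename per source file, then join prd_type to resolve
    # typeid while inserting into prd_details.
    import sqlite3

    def load_file_rows(conn, typename, product_names):
        conn.executemany(
            """INSERT INTO prd_details (typeid, pname)
               SELECT typeid, ? FROM prd_type WHERE typename = ?""",
            [(p, typename) for p in product_names],
        )
    ```

    In SQL Server the same INSERT...SELECT works unchanged; each of the three staging loads would pass its own hardcoded typename.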

  • Deleting files that have been extracted from a zip archive

    Can we PLEASE get an update that will allow us to delete files that were extracted from a zip archive? This is ridiculous. A quick Google search brings back loads of threads all over the place from people who want to be able to delete files but can't because of some daft bug.
    How are RIM not aware of this? Do they not care?

    There is a bug in the .zip decompressor used by the native file manager. It incorrectly applies file permissions sometimes stored in the archive to the extracted files, preventing them from being deleted by us mere mortals. Once extracted, there's nothing you can do.
    <plug>
      You could, of course, use Files & Folders to extract your zip files, which doesn't suffer from this problem.
    </plug>
    Files & Folders, the unified file & cloud manager for PlayBook and BB10 with SkyDrive, SugarSync, Box, Dropbox, Google Drive, Google Docs. Free 3-day trial! - Jon Webb - Innovatology - Utrecht, Netherlands

  • Loading flat files from a folder

    hi,
    I'm new to OWB and want to know if there is a way to import flat files into OWB directly from a folder; that is, automatically, without having to import them one at a time.
    thanks.

    Hi Francesco
    The UI lets you import one file at a time, or declare that one file has the same structure as another (if it has the same number of columns). You can also configure an external table or flat-file mapping to read many data files for the one external table or flat-file mapping.
    If you want to import the metadata into OWB by pointing at a directory and importing all of the files there automatically, then you would have to script this up. I have written scripts in the past that read metadata from files (such as fixed-length files; for example, the Oracle Sales Analyzer/OSA product has such files) and generate flat-file definitions and external tables. The metadata files describe the fixed-length fields, both positional and datatype information. With CSVs, if the files have a header row with a name for each column, you can script some of this or default the names, but the datatypes would then have to be inferred by analyzing the data or from additional metadata.
    Cheers
    David
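    The scripted approach David describes, scanning a folder, taking column names from CSV headers, and inferring crude datatypes by sampling values, might look like this (the type rules and the sample size are illustrative only):

    ```python
    # Sketch: scan a folder of CSVs and build per-file column metadata,
    # inferring NUMBER vs VARCHAR2 from a sample of the data rows.
    import csv, os

    def infer_type(values):
        try:
            for v in values:
                float(v)
            return "NUMBER"
        except ValueError:
            return "VARCHAR2"

    def describe_folder(folder):
        meta = {}
        for name in sorted(os.listdir(folder)):
            if not name.lower().endswith(".csv"):
                continue
            with open(os.path.join(folder, name), newline="") as f:
                reader = csv.reader(f)
                header = next(reader)
                sample = list(reader)[:20]           # sample rows for typing
                cols = list(zip(*sample)) if sample else [[] for _ in header]
                meta[name] = [(h, infer_type(col)) for h, col in zip(header, cols)]
        return meta
    ```

    The resulting dictionary could then drive generation of OWB flat-file definitions or external-table DDL via OMB scripting.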

  • HUGE amount of data in flat file every day to external system

    Hello,
    I have to develop several programs to export all table data to flat files for an external system (e.g. web).
    I am worried about whether it is possible in SAP to export all KNA1 data, which is a large volume, to a flat file using an extraction like:
    SELECT * FROM KNA1 INTO TABLE TB_KNA1.
    I need some advice about these kinds of huge extractions.
    I also have to extract from some tables only the data changes: new records and deleted records. To do this I thought of developing a program that every day extracts all data from MARA and saves the extraction in a custom table like MARA (ZMARA); the next day when the program runs, it compares the new extraction with the old extraction in the ZMARA table to identify the data changes, new records and deleted records. Is this the right approach? Could it have performance problems? Do you know of other methods?
    Thanks a lot!
    Bye

    You should not have a problem with this simple approach, transferring each row to the output file rather than reading all data into an internal table first:
    OPEN DATASET <file> ...
    SELECT * FROM kna1 INTO wa_kna1.
      TRANSFER wa_kna1 TO <file>.
    ENDSELECT.
    CLOSE DATASET <file>.
    Thomas
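    The change-detection half of the question (comparing today's extract with yesterday's snapshot) reduces to a keyed comparison of the two row sets. A sketch, with the key position and the row layout as assumptions:

    ```python
    # Sketch of the daily delta idea: compare today's extract with yesterday's
    # snapshot, keyed on the primary key, to find new, changed and deleted rows.
    def diff_extracts(old_rows, new_rows, key=0):
        old = {r[key]: r for r in old_rows}
        new = {r[key]: r for r in new_rows}
        inserted = [new[k] for k in new if k not in old]
        deleted = [old[k] for k in old if k not in new]
        changed = [new[k] for k in new if k in old and new[k] != old[k]]
        return inserted, changed, deleted
    ```

    Performance-wise this is one full scan of each snapshot plus hash lookups, so it scales linearly; for MARA-sized tables the main cost is the daily full extract itself, which is why change pointers or CDC are often preferred when available.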

  • Golden Gate for flat file

    hi,
    I have tried with GoldenGate for Oracle/ non-Oracle databases. Now, I am trying for flat file.
    What I have done so far:
    1. I downloaded the Oracle "GoldenGate Application Adapters 11.1.1.0.0 for JMS and Flat File Media Pack"
    2. Kept it on the same machine where the database and GG manager process exist. Port for the GG manager process is 7809, for the flat file 7816
    3. Followed the GG flat file administrator's guide, page 9 --> configuration
    4. Extract process on the GG manager:
    edit params FFE711
    extract ffe711
    userid ggs@bidb, password ggs12345
    discardfile E:\GoldenGate11gMediaPack\V26071-01\dirrpt\EXTFF.dsc, purge
    rmthost 10.180.182.77, mgrport 7816
    rmtfile E:\GoldenGate11gMediaPack\V26071-01\dirdat\ffremote, purge, megabytes 5
    add extract FFE711, EXTTRAILSOURCE ./dirdat/oo
    add rmttrail ./dirdat/pp, extract FFE711, megabytes 20
    start extract FFE711
    view report ffe711
    Oracle GoldenGate Capture for Oracle
    Version 11.1.1.1 OGGCORE_11.1.1_PLATFORMS_110421.2040
    Windows (optimized), Oracle 11g on Apr 22 2011 03:28:23
    Copyright (C) 1995, 2011, Oracle and/or its affiliates. All rights reserved.
    Starting at 2011-11-07 18:24:19
    Operating System Version:
    Microsoft Windows XP Professional, on x86
    Version 5.1 (Build 2600: Service Pack 2)
    Process id: 4628
    Description:
    ** Running with the following parameters **
    extract ffe711
    userid ggs@bidb, password ********
    discardfile E:\GoldenGate11gMediaPack\V26071-01\dirrpt\EXTFF.dsc, purge
    rmthost 10.180.182.77, mgrport 7816
    rmtfile E:\GoldenGate11gMediaPack\V26071-01\dirdat\ffremote, purge, megabytes 5
    CACHEMGR virtual memory values (may have been adjusted)
    CACHEBUFFERSIZE: 64K
    CACHESIZE: 1G
    CACHEBUFFERSIZE (soft max): 4M
    CACHEPAGEOUTSIZE (normal): 4M
    PROCESS VM AVAIL FROM OS (min): 1.77G
    CACHESIZEMAX (strict force to disk): 1.57G
    Database Version:
    Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - Production
    PL/SQL Release 11.1.0.7.0 - Production
    CORE 11.1.0.7.0 Production
    TNS for 32-bit Windows: Version 11.1.0.7.0 - Production
    NLSRTL Version 11.1.0.7.0 - Production
    Database Language and Character Set:
    NLS_LANG environment variable specified has invalid format, default value will be used.
    NLS_LANG environment variable not set, using default value AMERICAN_AMERICA.US7ASCII.
    NLS_LANGUAGE = "AMERICAN"
    NLS_TERRITORY = "AMERICA"
    NLS_CHARACTERSET = "AL32UTF8"
    Warning: your NLS_LANG setting does not match database server language setting.
    Please refer to user manual for more information.
    2011-11-07 18:24:25 INFO OGG-01226 Socket buffer size set to 27985 (flush size 27985).
    2011-11-07 18:24:25 INFO OGG-01052 No recovery is required for target file E:\GoldenGate11gMediaPack\V26071-01\dirdat\ffremote, at RBA 0 (file not opened).
    2011-11-07 18:24:25 INFO OGG-01478 Output file E:\GoldenGate11gMediaPack\V26071-01\dirdat\ffremote is using format RELEASE 10.4/11.1.
    ** Run Time Messages **
    5. On the flat-file GGSCI prompt:
    edit params FFR711
    extract ffr711
    CUSEREXIT E:\GoldenGate11gMediaPack\GGFlatFile\V22262-01\flatfilewriter.dll CUSEREXIT passthru includeupdatebefores, params "E:\GoldenGate11gMediaPack\GGFlatFile\V22262-01\sample-dirprm\ffwriter.properties"
    SOURCEDEFS E:\GoldenGate11gMediaPack\V26071-01\dirdef\vikstkFF.def
    table ggs.vikstk;
    add extract ffr711, exttrailsource ./dirdat/pp
    start extract ffr711
    view report ffr711
    Oracle GoldenGate Capture
    Version 11.1.1.0.0 Build 078
    Windows (optimized), Generic on Jul 28 2010 19:05:07
    Copyright (C) 1995, 2010, Oracle and/or its affiliates. All rights reserved.
    Starting at 2011-11-07 18:21:31
    Operating System Version:
    Microsoft Windows XP Professional, on x86
    Version 5.1 (Build 2600: Service Pack 2)
    Process id: 5008
    Description:
    ** Running with the following parameters **
    extract ffr711
    CUSEREXIT E:\GoldenGate11gMediaPack\GGFlatFile\V22262-01\flatfilewriter.dll CUSEREXIT passthru includeupdatebefores, params "E:\GoldenGate11gMediaPack\GGFlatFile\V22262-01\sample-dirprm\ffwriter.properties"
    E:\GoldenGate11gMediaPack\GGFlatFile\V22262-01\ggs_Windows_x86_Generic_32bit_v11_1_1_0_0_078\extract.exe running with user exit library E:\GoldenGate11gMediaPack\GGFlatFile\V22262-01\flatfilewriter.dll, compatibility level (2) is current.
    SOURCEDEFS E:\GoldenGate11gMediaPack\V26071-01\dirdef\vikstkFF.def
    table ggs.vikstk;
    CACHEMGR virtual memory values (may have been adjusted)
    CACHEBUFFERSIZE: 64K
    CACHESIZE: 1G
    CACHEBUFFERSIZE (soft max): 4M
    CACHEPAGEOUTSIZE (normal): 4M
    PROCESS VM AVAIL FROM OS (min): 1.87G
    CACHESIZEMAX (strict force to disk): 1.64G
    Started Oracle GoldenGate for Flat File
    Version 11.1.1.0.0
    ** Run Time Messages **
    Problem I am facing:
    I am not sure where to find the generated flat file.
    Even the reports show there is no data at the manager process.
    I was expecting a replicat instead of an extract in the flat-file FFR711.prm.
    This is as far as I have got; please give me some pointers.
    Thanks,
    Vikas

    Ok, I haven't run your example, but here are some suggestions.
    Vikas Panwar wrote:
    extract ffe711
    userid ggs@bidb, password ggs12345
    discardfile E:\GoldenGate11gMediaPack\V26071-01\dirrpt\EXTFF.dsc, purge
    rmthost 10.180.182.77, mgrport 7816
    rmtfile E:\GoldenGate11gMediaPack\V26071-01\dirdat\ffremote, purge, megabytes 5
    ggsci> add extract FFE711, EXTTRAILSOURCE ./dirdat/oo
    ggsci> add rmttrail ./dirdat/pp, extract FFE711, megabytes 20
    ggsci> start extract  FFE711
    You of course need data captured from somewhere to test with. You could capture changes directly from a database and write those to a trail, and use that as a source for the flat-file writer; or, if you have existing trail data, you can just use that (I often test with old trails, with known data).
    In your example, you are using a data pump that is doing nothing more than pumping trails to a remote host. That's fine, if that's what you want to do. (It's actually quite common in real implementations.) But if you want to actually capture changes from the database, then change "add extract ... extTrailSource" to be "add extract ... tranlog". I'll assume you want to use the simple data pump to send trail data to the remote host. And I will assume that some other database capture process is creating the trail dirdat/oo
    Also... with your pump "FFE711", you can create either a local or remote trail, that's fine. But don't use a rmtfile (or extfile). You should create a trail, either a "rmttrail" or "exttrail". The flat-file adapter will read that (binary) trail and generate text files. Trails automatically roll over; the "extfile/rmtfile" do not (but they do have the same internal GG binary log format). (You can use 'maxfiles' to force them to roll over, but that's beside the point.)
    Also:
    - don't forget your "table" statements... or else no data will be processed!! You can wildcard tables, but not schemata.
    - there is no reason that anything would be discarded in a pump.
    - although a matter of choice, I don't see why people use absolute paths for reports and discard files. Full paths to data and def files make sense if they are on the SAN/NAS, but then I'd use symlinks from dirdat to the storage directory (on Unix/Linux).
    - both Windows and Unix can use forward "/" slashes, which makes examples platform-independent (another reason for relative paths).
    - your trails really should be much larger than 5 MB for better performance (e.g., 100 MB).
    - you probably should use a source-defs file instead of a dblogin for metadata. Trail data is by its very nature historical, and using "userid...password" in the prm file inherently gets metadata from "right now". The file-writer doesn't handle DDL changes automatically.
    So you should have something more like:
    Vikas Panwar wrote:
    extract ffe711
    sourcedefs dirdef/vikstkFF.def
    rmthost 10.180.182.77, mgrport 7816
    rmttrail dirdat/ff, purge, megabytes 100
    table myschema.*;
    table myschema2.*;
    table ggs.*;
    For the file-writer pump (step 5, on the flat-file GGSCI prompt):
    extract ffr711
    CUSEREXIT flatfilewriter.dll CUSEREXIT passthru includeupdatebefores, params dirprm\ffwriter.properties
    SOURCEDEFS dirdef/vikstkFF.def
    table myschema.*;
    table ggs.*;
    ggsci> add extract ffr711, exttrailsource ./dirdat/pp
    ggsci> start extract ffr711
    Again, use relative paths when possible (the flatfilewriter.dll is expected to be found in the GG install directory). Put the ffwriter.properties file into dirprm, just as a best-practice. In this file, ffwriter.properties, is where you define your output directory and output files. Again, make sure you have a "table" statement in there for each schema in your trails.
    Vikas Panwar wrote:
    Problem I am facing:
    I am not sure where to find the generated flat file.
    The generated files are defined in the ffwriter.properties file. Search for the "rootdir" property, e.g.:
    goldengate.flatfilewriter.writers=csvwriter
    csvwriter.files.formatstring=output_%d_%010n
    csvwriter.files.data.rootdir=dirout
    csvwriter.files.control.ext=_data.control
    csvwriter.files.control.rootdir=dirout
    ...
    The main problem you have is: (1) use rmttrail, not rmtfile, and (2) don't forget the "table" statement, even in a pump.
    Also, the flat-file adapter runs in just an "extract" data pump; no "replicat" is ever used. Replicats are inherently tied to a target database, and the file-writer doesn't have any database functionality.
    Hope it helps,
    -m

  • Error in DTP creation for a flat file datasource

    Hi SDN friends,
    Please, if somebody can help me with this, it would be very much appreciated.
    I have a DTP to load attribute master data into an InfoObject. The master data comes from a flat file (flat-file DataSource).
    The extraction mode is full. When I try to modify any selection on the "Update" tab, a message "Enter valid value" appears, but it doesn't give any hint about which value must be corrected.
    The message ID is 00 002 and the text is the following:
    Enter a valid value
    Message no. 00002
    Procedure
    Display the allowed values with F4 and correct your entry.
    I think it may be something related to the flat-file DataSource. Are there any special considerations when creating a DTP to upload master data from a flat file?
    Thanks so much for your attention and best regards,
    Karim

    Hi Karim,
    I suspect that you might be entering values that are not in the correct format as defined in the BI DTP. E.g. for a date the BI format could be DD.MM.YYYY while you are entering MM.DD.YYYY; similarly, you may be entering the wrong format for the master data values you give in the selections.
    Hope it helps...
    Best Regards,
    Madhu
