Oracle API for Extended Analytics Flat File extract issue

I have modified the Extended Analytics SDK application to use:
HFMCONSTANTSLib.EA_EXTRACT_TYPE_FLAGS.EA_EXTRACT_TYPE_FLATFILE
instead of
HFMCONSTANTSLib.EA_EXTRACT_TYPE_FLAGS.EA_EXTRACT_TYPE_STANDARD
I am trying to figure out where the flat file extract is placed once the extract is complete. I have verified that I am connecting to HFM, and the log file indicates all the steps in the application completed successfully.
Where does the flat file get saved/output when using the API?
The ultimate goal is to create a flat file through an automated process that can be picked up by a third-party application.
thanks for your help,
Chris

Never mind. I found the location on the server.
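
For the automation goal mentioned above: once the server-side output directory is known, a scheduled job can watch for the finished extract and hand it off to the third-party application. Below is a minimal Java sketch of that hand-off; the two directory paths are placeholders, not HFM defaults, and a production job should also wait for the file size to stop changing before moving the file, since the create event can fire while HFM is still writing.

import java.io.IOException;
import java.nio.file.*;

// Watches the (assumed) extract directory and moves each new extract into a
// drop folder polled by the third-party application.
public class ExtractPickup {
    public static void main(String[] args) throws IOException, InterruptedException {
        Path extractDir = Paths.get("D:/HFM/Extracts");    // placeholder path
        Path dropDir = Paths.get("D:/ThirdParty/Inbound"); // placeholder path
        WatchService watcher = FileSystems.getDefault().newWatchService();
        extractDir.register(watcher, StandardWatchEventKinds.ENTRY_CREATE);
        while (true) {
            WatchKey key = watcher.take(); // blocks until something appears
            for (WatchEvent<?> event : key.pollEvents()) {
                Path created = extractDir.resolve((Path) event.context());
                // Hand the finished file to the drop folder (a rename when
                // both directories are on the same volume).
                Files.move(created, dropDir.resolve(created.getFileName()),
                        StandardCopyOption.REPLACE_EXISTING);
            }
            key.reset();
        }
    }
}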

Similar Messages

  • Extended Analytics Flat File extract in ANSI

    Hi,
    Version: EPM 11.1.2.1
The flat file extract produced using Extended Analytics from HFM is exported in Unicode format. Is there any option to get the export in ANSI encoding?
    Thanks,
    user8783298

Hi,
In the latest version, the data from HFM can currently be exported only in Unicode format. A defect has been filed with Oracle for this.
At present the only workaround is to change the file encoding after it has been extracted, using third-party software or a small script (see the sketch below).
For example, the extracted file can be opened in Windows Notepad and the desired encoding selected when the file is saved via Save As.
Hope this helps.
Thank you.
Regards,
Mahe
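
    If the conversion has to run unattended rather than through Notepad, the re-encoding step can be scripted. A minimal Java sketch, assuming the extract is UTF-16 with a byte-order mark and that windows-1252 is an acceptable "ANSI" target (both file names are placeholders):

    import java.io.IOException;
    import java.nio.charset.Charset;
    import java.nio.file.*;
    import java.util.List;

    // Re-encodes a Unicode extract (UTF-16 with BOM assumed) to windows-1252.
    public class ReencodeExtract {
        public static void main(String[] args) throws IOException {
            Path in = Paths.get("extract_unicode.txt"); // placeholder name
            Path out = Paths.get("extract_ansi.txt");   // placeholder name
            List<String> lines = Files.readAllLines(in, Charset.forName("UTF-16"));
            // Characters with no windows-1252 mapping are replaced by the encoder.
            Files.write(out, lines, Charset.forName("windows-1252"));
        }
    }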

  • Generic and Flat file extraction

    Hi experts,
    Can anybody send a real-time scenario for generic and flat file extraction, with examples?
    Thanks and regards,
    satya

    Hi,
    Generic extraction is an extraction where you create a generic DataSource and, based on it, extract the data from R/3; that is, you have a source system from which the generic DataSource pulls the data.
    Flat file extraction is an extraction where you do not have to connect to a source system; in this case your source system is a PC. Here you design an InfoSource based on the fields/columns of your flat file, map the corresponding InfoObjects to it, and set up the update rules; then you load/extract the data from the file as the source.
    These two things are different in nature and usage.
    Hope this helps..
    assign points if useful..
    cheers,
    Pattan.

  • Java API for running entire ".sql" files on a remote DB ( mySQL or Oracle)?

    Hi,
    Would anyone happen to know if there's a Java API for executing entire ".sql" files (containing several different SQL commands) on a remote database server?
    It's enough if the API works with MySQL and/or Oracle.
    Just to demonstrate what I'm looking for:
    Suppose you've created the SQL file "c:/test.sql" with several script lines:
    -- test.sql:
    insert into TABLE1 values(3,3);
    insert into TABLE1 values(5,5);
    create table TABLE2 (name VARCHAR(50)) ENGINE=InnoDB; -- MySQL-specific
    Then the Java API should look something like:
    // Dummy java code:
    String driver="com.mysql.jdbc.Driver";
    String url= "jdbc:mysql://localhost:3306/myDb";
    SomeAPI.executeScriptFile( "c:/test.sql", driver, url);
    Thanks.

    There's no such API, but it's easy to parse all the SQL statements in a file and then run them.
    For instance:
    import java.sql.*;
    import java.util.Properties;

    /* A demo showing how to load SQL statements from a file and run them. */
    public class testSQL {

        // Flattens a Vector (possibly containing nested Vectors of batched
        // statements) into an Object[] of Strings and nested arrays.
        private final static Object[] getSQLStatements(java.util.Vector v) {
            Object[] statements = new Object[v.size()];
            Object temp;
            for (int i = 0; i < v.size(); i++) {
                temp = v.elementAt(i);
                if (temp instanceof java.util.Vector)
                    statements[i] = getSQLStatements((java.util.Vector) temp);
                else
                    statements[i] = temp;
            }
            return statements;
        }

        public final static Object[] getSQLStatements(String sqlFile) throws java.io.IOException {
            java.util.Vector v = new java.util.Vector(1000);
            try {
                java.io.BufferedReader br = new java.io.BufferedReader(
                        new java.io.FileReader(sqlFile));
                java.util.Vector batchs = new java.util.Vector(10);
                String temp;
                while ((temp = br.readLine()) != null) {
                    temp = temp.trim();
                    if (temp.length() == 0)
                        continue;
                    switch (temp.charAt(0)) {
                        case '*':
                        case '"':
                        case '\'':
                            break; // Ignore any line which begins with the above characters
                        case '#': // Used to begin a new sql statement
                            if (batchs.size() > 0) {
                                v.addElement(getSQLStatements(batchs));
                                batchs.removeAllElements();
                            }
                            break;
                        case 'S':
                        case 's':
                        case '?': // A query: flush the pending batch, keep the query itself
                            if (batchs.size() > 0) {
                                v.addElement(getSQLStatements(batchs));
                                batchs.removeAllElements();
                            }
                            v.addElement(temp);
                            break;
                        case '!': { // Use it to get a large number of simple update statements
                            if (batchs.size() > 0) {
                                v.addElement(getSQLStatements(batchs));
                                batchs.removeAllElements();
                            }
                            String part1 = temp.substring(1);
                            String part2 = br.readLine();
                            for (int i = -2890; i < 1388; i += 39)
                                batchs.addElement(part1 + i + part2);
                            for (int i = 1890; i < 2388; i += 53) {
                                batchs.addElement(part1 + i + part2);
                                batchs.addElement(part1 + i + part2);
                            }
                            for (int i = 4320; i > 4268; i--) {
                                batchs.addElement(part1 + i + part2);
                                batchs.addElement(part1 + i + part2);
                            }
                            for (int i = 9389; i > 7388; i -= 83)
                                batchs.addElement(part1 + i + part2);
                            v.addElement(getSQLStatements(batchs));
                            batchs.removeAllElements();
                            break;
                        }
                        default:
                            batchs.addElement(temp);
                            break;
                    }
                }
                if (batchs.size() > 0) {
                    v.addElement(getSQLStatements(batchs));
                    batchs.removeAllElements();
                }
                br.close();
                br = null;
            }
            catch (java.io.FileNotFoundException fnfe) {
                v.addElement(sqlFile); // sqlFile is a sql command, not a file name
            }
            Object[] statements = new Object[v.size()];
            for (int i = 0; i < v.size(); i++)
                statements[i] = v.elementAt(i);
            return statements;
        }

        public static void main(String argv[]) {
            try {
                String url;
                Object[] statements;
                switch (argv.length) {
                    case 0: // Use it for the simplest test
                    case 1:
                        url = "jdbc:dbf:/.";
                        if (argv.length == 0) {
                            statements = new String[1];
                            statements[0] = "select * from test";
                        }
                        else
                            statements = argv;
                        break;
                    case 2:
                        url = argv[0];
                        statements = getSQLStatements(argv[1]);
                        break;
                    default:
                        throw new Exception("Syntax Error: java testSQL url sqlfile");
                }
                Class.forName("com.hxtt.sql.dbf.DBFDriver").newInstance();
                // Please see the Connecting to the Database section of Chapter 2. Installation in the Development Document
                Properties properties = new Properties();
                Connection con = DriverManager.getConnection(url, properties);
                Statement stmt = con.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE,
                        ResultSet.CONCUR_READ_ONLY);
                //Statement stmt = con.createStatement(ResultSet.TYPE_SCROLL_SENSITIVE, ResultSet.CONCUR_UPDATABLE);
                //stmt.setMaxRows(0);
                stmt.setFetchSize(10);
                final boolean serializeFlag = false; // A test switch to serialize/deserialize the resultSet
                ResultSet rs;
                for (int i = 0; i < statements.length; i++) {
                    if (statements[i] instanceof java.lang.String) {
                        String temp = (java.lang.String) statements[i];
                        switch (temp.charAt(0)) {
                            case 'S':
                            case 's':
                            case '?': {
                                System.out.println(temp);
                                rs = stmt.executeQuery(temp);
                                if (serializeFlag) {
                                    // serialize the resultSet
                                    try {
                                        java.io.FileOutputStream fileOutputStream =
                                                new java.io.FileOutputStream("testrs.tmp");
                                        java.io.ObjectOutputStream objectOutputStream =
                                                new java.io.ObjectOutputStream(fileOutputStream);
                                        objectOutputStream.writeObject(rs);
                                        objectOutputStream.flush();
                                        objectOutputStream.close();
                                        fileOutputStream.close();
                                    }
                                    catch (Exception e) {
                                        System.out.println(e);
                                        e.printStackTrace();
                                        System.exit(1);
                                    }
                                    rs.close(); // Let the CONCUR_UPDATABLE resultSet release its open files at once.
                                    rs = null;
                                    // deserialize the resultSet
                                    try {
                                        java.io.FileInputStream fileInputStream =
                                                new java.io.FileInputStream("testrs.tmp");
                                        java.io.ObjectInputStream objectInputStream =
                                                new java.io.ObjectInputStream(fileInputStream);
                                        rs = (ResultSet) objectInputStream.readObject();
                                        objectInputStream.close();
                                        fileInputStream.close();
                                    }
                                    catch (Exception e) {
                                        System.out.println(e);
                                        e.printStackTrace();
                                        System.exit(1);
                                    }
                                }
                                ResultSetMetaData resultSetMetaData = rs.getMetaData();
                                int iNumCols = resultSetMetaData.getColumnCount();
                                for (int j = 1; j <= iNumCols; j++) {
                                    // System.out.println(resultSetMetaData.getColumnName(j));
                                    // System.out.println(resultSetMetaData.getColumnType(j));
                                    // System.out.println(resultSetMetaData.getColumnDisplaySize(j));
                                    // System.out.println(resultSetMetaData.getPrecision(j));
                                    // System.out.println(resultSetMetaData.getScale(j));
                                    System.out.println(resultSetMetaData.getColumnLabel(j)
                                            + " " + resultSetMetaData.getColumnTypeName(j));
                                }
                                Object colval;
                                rs.beforeFirst();
                                long ncount = 0;
                                while (rs.next()) {
                                    // System.out.print(rs.rowDeleted() + " ");
                                    ncount++;
                                    for (int j = 1; j <= iNumCols; j++) {
                                        colval = rs.getObject(j);
                                        System.out.print(colval + " ");
                                    }
                                    System.out.println();
                                }
                                rs.close(); // Let the resultSet release its open tables at once.
                                rs = null;
                                System.out.println("The total row number of resultset: " + ncount);
                                System.out.println();
                                break;
                            }
                            default: {
                                int updateCount = stmt.executeUpdate(temp);
                                System.out.println(temp + " : " + updateCount);
                                System.out.println();
                            }
                        }
                    }
                    else if (statements[i] instanceof java.lang.Object[]) {
                        int[] updateCounts;
                        Object[] temp = (java.lang.Object[]) statements[i];
                        try {
                            for (int j = 0; j < temp.length; j++) {
                                System.out.println(temp[j]);
                                stmt.addBatch((java.lang.String) temp[j]);
                            }
                            updateCounts = stmt.executeBatch();
                            for (int j = 0; j < updateCounts.length; j++)
                                System.out.println((j + 1) + ":" + updateCounts[j]);
                        }
                        catch (java.sql.BatchUpdateException e) {
                            updateCounts = e.getUpdateCounts();
                            for (int j = 0; j < updateCounts.length; j++)
                                System.out.println((j + 1) + ":" + updateCounts[j]);
                            java.sql.SQLException sqle = e;
                            do {
                                System.out.println(sqle.getMessage());
                                System.out.println("Error Code:" + sqle.getErrorCode());
                                System.out.println("SQL State:" + sqle.getSQLState());
                                sqle.printStackTrace();
                            } while ((sqle = sqle.getNextException()) != null);
                        }
                        catch (java.sql.SQLException sqle) {
                            do {
                                System.out.println(sqle.getMessage());
                                System.out.println("Error Code:" + sqle.getErrorCode());
                                System.out.println("SQL State:" + sqle.getSQLState());
                                sqle.printStackTrace();
                            } while ((sqle = sqle.getNextException()) != null);
                        }
                        stmt.clearBatch();
                        System.out.println();
                    }
                }
                stmt.close();
                con.close();
            }
            catch (SQLException sqle) {
                do {
                    System.out.println(sqle.getMessage());
                    System.out.println("Error Code:" + sqle.getErrorCode());
                    System.out.println("SQL State:" + sqle.getSQLState());
                    sqle.printStackTrace();
                } while ((sqle = sqle.getNextException()) != null);
            }
            catch (Exception e) {
                System.out.println(e.getMessage());
                e.printStackTrace();
            }
        }
    }
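
    If all that is needed is the one-call SomeAPI.executeScriptFile(file, driver, url) shape from the question, a much smaller helper along these lines also works. This is only a sketch, not a library API: it strips "--" comments per line and splits on semicolons, so it will mis-handle semicolons or "--" sequences inside string literals and multi-statement procedure bodies.

    import java.nio.file.*;
    import java.sql.*;

    // Minimal stand-in for the "SomeAPI.executeScriptFile" from the question.
    public class ScriptRunner {
        public static void executeScriptFile(String file, String driver, String url)
                throws Exception {
            Class.forName(driver);
            try (Connection con = DriverManager.getConnection(url);
                 Statement stmt = con.createStatement()) {
                StringBuilder sb = new StringBuilder();
                for (String raw : Files.readAllLines(Paths.get(file))) {
                    int c = raw.indexOf("--"); // naive comment strip
                    String line = (c >= 0 ? raw.substring(0, c) : raw).trim();
                    if (!line.isEmpty())
                        sb.append(line).append('\n');
                }
                // Naive statement split: assumes no ';' inside literals.
                for (String sql : sb.toString().split(";")) {
                    if (!sql.trim().isEmpty())
                        stmt.execute(sql.trim());
                }
            }
        }
    }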

  • Delta upload for flat file extraction

    Hello everyone,
    Is it possible to initialize a delta update for a flat file extraction? If yes, please explain how to do that.

    Hi Norton,
    For a flat file DataSource, the upload will always be FULL.
    Please refer to the following doc if you need to extract delta records from a flat file: http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/a0b3f0e2-832d-2c10-1ca9-d909ca40b54e?QuickLink=index&overridelayout=true&43581033152710
    We can also write a routine at the InfoPackage level; a sketch of the idea follows below.
    Regards,
    Harish.
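
    On the routine idea: since every flat file load is FULL, a "delta" is usually derived by comparing today's full file with the previous one before loading. A hedged Java illustration of that comparison (in BW the equivalent logic would sit in the InfoPackage or transfer routine; the file names are placeholders):

    import java.nio.file.*;
    import java.util.*;

    // Writes only the records of today's full extract that were not present
    // in yesterday's extract, i.e. a crude delta of new/changed lines.
    public class FlatFileDelta {
        public static void main(String[] args) throws Exception {
            Set<String> previous = new HashSet<>(
                    Files.readAllLines(Paths.get("full_yesterday.csv")));
            List<String> delta = new ArrayList<>();
            for (String line : Files.readAllLines(Paths.get("full_today.csv"))) {
                if (!previous.contains(line))
                    delta.add(line);
            }
            Files.write(Paths.get("delta_today.csv"), delta);
        }
    }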

  • Omit the Open Hub control file 'S_*' for flat file extracts

    Hi folks,
    A quick question: is it somehow possible to omit the control file generation for flat file extracts?
    We have some Unix scripts running that get confused by the S_* files, and we were wondering if we can switch the creation of those files off.
    Thanks and best regards,
    Axel

    Hi,
    Regarding the claim that the S_ (structure) file does not reflect the proper structure, i.e. that it lists all fields sorted alphabetically by field name instead of in the order they appear in the output file:
    As far as I know and have checked, this is not the case. The S_ file has the fields in the order of the InfoSpoke object sequence (and of the source structure on the transformation tab).
    I would suggest you check it again.
    Derya

  • Changing the file name n Flat file Extraction

    Hi,
    Currently I am using flat file extraction for my forecast data, and I am sending the file through the application server.
    I have created the directory successfully, and now every morning I receive a file via FTP with a name like 20060903.csv; this name is based on one field in my flat file data, e.g. /interface/asf/20060903.csv.
    In the middle of the month we have a cut-off date, which varies each month. At that point the file name on the FTP server changes, and a file with a different name, i.e. 20061002.csv, will exist on the application server.
    Now in the InfoPackage I also need to set the deletion settings, e.g. if the file name is the same, delete the previous requests. I could achieve this if I could get the file name to change.
    Let's say I am not changing the file name: how do I set the deletion condition so that it does not delete when the field (scenario) changes, i.e. from 20061002 to 20061101? I should have only one file for 20061002, one file for 20061101, etc.; if the scenario is the same, it should delete.
    Can anyone kindly advise? This is very urgent and critical.
    Thanks & regards,
    Bhuvana.

    Hi Bhuvana,
    Try the following ABAP code in the routine under the External Data tab of the InfoPackage:
    data: begin of i_req occurs 0,
            rnr like rsiccont-rnr,
          end of i_req.
    " Find the most recent request loaded into the data target
    select * from rsiccont up to 1 rows
                      where icube = <datatargetname>
                      order by timestamp descending.
      i_req-rnr = rsiccont-rnr.
      append i_req.
      clear i_req.
    endselect.
    " Delete that request only if it was loaded from the same file
    loop at i_req.
      select single * from rsseldone where rnr eq i_req-rnr
                                       and filename = p_filename.
      if sy-subrc = 0.
        call function 'RSSM_DELETE_REQUEST'
          exporting
            request                    = i_req-rnr
            infocube                   = <datatargetname>
          exceptions
            request_not_in_cube        = 1
            infocube_not_found         = 2
            request_already_aggregated = 3
            request_already_comdensed  = 4
            no_enqueue_possible        = 5
            others                     = 6.
        if sy-subrc <> 0.
          message id sy-msgid type 'I' number sy-msgno
              with sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
        else.
          message i799(rsm1) with i_req-rnr 'deleted'.
        endif.
      endif.
    endloop.
    Let me know if you run into any problem with this logic.
    regards,
    Raju

  • Encountering problem in Flat File Extraction

    Flat file extraction: the settings I maintain on the DataSource "Fields" tab are conversion routine "ALPHA" and format "External". But when I load the data, a record maintained in lower case (in the flat file) is not converted into upper case; if I try without the ALPHA conversion, the lower-case data is converted into the SAP-internal format (upper case). Could you please help me fix the problem when I use both ALPHA and External together?

    Hi, did you enable the "Lowercase letters" checkbox at the InfoObject level for the objects for which you want lower-case values? Check the help.sap.com site as well; see also the note below on what ALPHA actually does.
    Good day
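
    Note that ALPHA by itself never changes the case: the ALPHA input conversion only right-aligns purely numeric values and pads them with leading zeros to the field length, leaving other values as they are. The upper-casing observed without ALPHA comes from the standard internal-format conversion of InfoObjects that do not allow lower case. A small Java illustration of the assumed ALPHA input behavior (a sketch, not SAP code):

    // Mimics CONVERSION_EXIT_ALPHA_INPUT: purely numeric values are padded
    // with leading zeros to the field length; anything else is left as-is,
    // so in particular no upper-casing happens here.
    public class AlphaInput {
        static String alphaInput(String value, int length) {
            String v = value.trim();
            if (v.matches("\\d+"))
                return String.format("%" + length + "s", v).replace(' ', '0');
            return v;
        }

        public static void main(String[] args) {
            System.out.println(alphaInput("42", 10));  // 0000000042
            System.out.println(alphaInput("abc", 10)); // abc (case untouched)
        }
    }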

  • Hierarchy flat file extraction

    Hi experts,
    Can anyone guide me through flat file hierarchy extraction, step by step, if possible with screenshots?
    You can send it to [email protected]
    regards,
    Rambo.

    Hi,
    Flat file hierarchy extraction is similar to the normal flat file extraction procedure, but the hierarchy file structure itself can be complex and is different from normal flat files.
    Take a look at the threads below for more details :
    Hierarchy Flat file
    Hierarchy from flat file
    Program to load a flat file in a Hierarchy
    Cheers,
    Kedar

  • Logical path for getting a Flat file from application server

    Hi All,
    We have loaded some .csv files to the application server. What logical path do we have to mention on the InfoPackage scheduler screen? Please guide me on how to give the path for getting a flat file from the application server.
    Thanks,
    Sairam.

    Hi Sairam,
    I hope you know in which location you saved the files on the application server.
    Now if you go to the InfoPackage and click on the "External Data" tab, you will see radio buttons for:
    1) Client Workstation
    2) Application Server
    Choose the second radio button; then in the field "Name of the File" you can use the F4 help, which browses transaction AL11, and choose the file.
    Hope this helps
    Regards,
    Praveen.

  • Can i able to put filter for my source flat file?

    Hi all,
    Please help me with the best practice for ODI.
    My source is a flat file, and I want to apply a filter.
    Can I apply a filter to my flat file source? If yes, please help me with the best practice for applying a filter.
    Regards
    Suresh

    Hi,
    If you try to create it at Model --> Datastore --> Filter --> Insert Condition, it will not work for the File technology; you will get "Invalid Format Description".
    But you can specify a filter in the interface: just drop the column(s) from your flat file datastore onto the canvas and then specify the filter condition, as in the example below.
    Thanks,
    Sutirtha
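
    For illustration (the datastore and column names here are made up): after dropping the source columns onto the canvas, a filter condition such as SRC_SALES_FILE.COUNTRY_CODE = 'US' restricts the rows taken from the file. Because the File technology cannot execute SQL itself, ODI evaluates such a filter on the staging area after the file has been loaded there.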

  • InfoSpoke Flat File Extract to Logical Filename

    I'm trying to extract data from an ODS to a flat file. So far, I've found that the InfoSpoke must write to the application server for large data volume. Also, in order for the InfoSpoke to transport properly, I must use logical filenames. I've attempted to implement the custom class and append structure as defined in the SAP document "How To... Extract Data with OPEN HUB to a Customer Defined Logical Filename". I'm getting an error when attempting to import the included transports (custom class code). It appears to be a syntax error. Has anyone encountered this, and, if so, how did you fix it?

    Hello.
    I'm getting a syntax error also. I did not import the transport but applied the notes through the appendix. When I modified the method GET_OBJECT_REF_INT in class CL_RSB_DEST as below, I got a syntax error on the "create object" statement.
        when rsbo_c_desttype_int-file_applsrv.
    *{   REPLACE        &$&$&$&$                                          1
    *\      data: l_r_file_applsrv type ref to cl_rsb_file_applsrv.
          data: l_r_file_applsrv type ref to zcl_rsb_file_logical.
    *}   REPLACE
          create object l_r_file_applsrv
            exporting i_dest    = n_dest
                      i_objvers = i_objvers.
    The reported error is: Class CL_RSB_DEST, Method GET_OBJECT_REF_INT: The obligatory parameter "I_S_VDEST" had no value assigned to it.

  • Flat file extraction

    Hi,
    I know the procedure to extract from a flat file, but my question is how to extract data from a flat file every month without entering the path every time; I mean, how do I extract different files, e.g. the September file, the October file? Can anyone explain how to program getting all the files without entering the file path every time?

    Hi,
    If you want to automate your flat file data loads, you must keep those files on the application server and write ABAP code at the InfoPackage level to pick up the file.
    For example, say you are getting files every day and the file names end with the current date, like xxxx0311208.csv. You can read the system date, store it in an ABAP variable, and search for the matching file on the application server under the given server path; once the program finds the specified file, your InfoPackage will trigger and load that file. A sketch of this pattern follows below.
    In a similar way you can write your code for your monthly requirement.
    Regards
    Charan
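
    The same pattern, sketched in Java for illustration only (in BW this logic belongs in an ABAP routine at the InfoPackage level; the directory and name format are assumptions):

    import java.nio.file.*;
    import java.time.LocalDate;
    import java.time.format.DateTimeFormatter;

    // Builds today's expected file name (yyyyMMdd.csv assumed) and checks
    // whether it has arrived on the server before triggering the load.
    public class DailyFileCheck {
        public static void main(String[] args) {
            String stamp = LocalDate.now().format(DateTimeFormatter.ofPattern("yyyyMMdd"));
            Path expected = Paths.get("/interface/asf", stamp + ".csv"); // assumed layout
            if (Files.exists(expected))
                System.out.println("Found " + expected + "; trigger the InfoPackage.");
            else
                System.out.println(expected + " not there yet; try again later.");
        }
    }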

  • Flat file extraction with FILE

    Hi BWers,
    I want to extract a flat file to BW with an InfoPackage.
    This flat file is located in a physical path that changes according to the system:
    - on the development server, the path is:
    ...\dev\file.csv
    - on the quality server, the path is:
    ...\qual\file.csv
    - on the production server, the path is:
    ...\prod\file.csv
    In the "Name of the File" field on the InfoPackage extraction tab, I have to put the physical path, a logical file, or a routine.
    I don't want to put the physical path directly, because I would have to change this path on the qual and prod servers.
    I don't want to use a routine either.
    So I created a logical file with the FILE t-code, but in the physical path I put:
    ...\dev\<FILENAME>
    How can I use a directory variable like $(directory)<filename>, which says: if we are on dev, then $directory = ...\dev\file, else qual, etc.?
    Thanks for the help.
    Cheers,

    Hi,
    Here is the answer I found:
    In FILE, under "Assignment of Physical Paths to Logical Path", use a variable like this in the physical path, e.g. ...\<V=Z_INT_SRV>\<FILENAME>:
    <V=Z_INT_SRV>
    Then declare the variable Z_INT_SRV under "Definition of Variables", giving it the appropriate value (dev, qual, or prod) on each system.
