Open Hub Extraction

Hello
I am doing an open hub and sending data to a file. In the file, I want to maintain a header line (descriptions of the fields) for the fields I am sending to the file. How can I do that? Can someone please explain briefly?
thanks

Hi,
Once you run the InfoSpoke, it generates two files: a data file and a schema file. The data file contains the data, and the schema file contains the header, i.e. the structure of the InfoSpoke. So if you want to maintain the header in the data file, just copy the list of InfoObject names from the schema file and paste it in as the header line of the data file.
Thanks & Regards
Ramakrishna Kamurthy
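The copy-and-paste step above can also be scripted. A minimal Python sketch, outside of SAP, that prepends field names (already taken from the schema file) to the data file; the file name, field names, and separator are hypothetical examples:

```python
# Sketch: prepend a header line to an InfoSpoke data file.
# Assumes the field names were already read from the schema file;
# names and separator below are hypothetical examples.

def add_header(data_path: str, header_fields: list[str], sep: str = ",") -> None:
    """Rewrite the data file with a header line in front of the rows."""
    with open(data_path, "r", encoding="utf-8") as f:
        rows = f.read()
    with open(data_path, "w", encoding="utf-8") as f:
        f.write(sep.join(header_fields) + "\n" + rows)
```

Run it once per extraction, after the InfoSpoke has written the data file.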

Similar Messages

  • To create an header for Open Hub extract

    Hi guys,
    How do I get a header for an open hub extract to a CSV file, a header consisting of the field names, technical or otherwise?
    Thanks,
    Your help will be greatly appreciated

    hi,
    Field definition
    On the Field Definition tab page you define the properties of the fields that you want to transfer.
    We recommend that you use a template as a basis when you create the open hub destination. The template should be the object from which you want to update the data. This ensures that all the fields of the template are available as fields for the open hub destination. You can edit the field list by removing or adding fields. You can also change the properties of these fields.
    You have the following options for adding new fields:
    ●     You enter field names and field properties, independent of a template.
    ●     You select an InfoObject from the Template InfoObject column. The properties of the InfoObject are transferred into the rows.
    ●     You choose Select Template Fields. A list is displayed of fields that are available for the open hub destination but are not contained in the current field list. You transfer a field to the field list by double-clicking it. This also allows you to transfer fields that had been deleted back into the field list.
    If you want to define the properties of a field so that they are different from the properties of the template InfoObject, delete the template InfoObject entries for the corresponding field and change the properties of the field. If there is a reference to a template InfoObject, the field properties are always transferred from this InfoObject.
    The file or database table that is generated from the open hub destination is made up of the fields and their properties and not the template InfoObjects of the fields.
    If the template for the open hub destination is a DataSource, field SOURSYSTEM is automatically added to the field list with reference to InfoObject 0SOURSYSTEM. This field is required if data from heterogeneous source systems is being written to the same database table. The data transfer process inserts the source system ID that is relevant for the connected DataSource. You can delete this field if it is not needed.
    If you have selected Database Table as the destination and Semantic Key as the property, the field list gets an additional column in which you can define the key fields for the semantic key.
    In the Format column, you can specify whether you want to transfer the data in the internal or external format. For example, if you choose External Format here, leading zeros will be removed from a field that has an ALPHA conversion routine when the data is written to the file or database table.
    Assign points if helpful
    Cheers!!
    Aparna
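    As an illustration of the external-format behaviour described above, here is a small Python sketch (not the SAP kernel routine) of what an ALPHA output conversion does to a value on the way out:

```python
# Sketch of ALPHA output conversion ("external format"): leading zeros
# are stripped from purely numeric values. Hypothetical helper, not the
# actual SAP conversion routine.

def alpha_output(value: str) -> str:
    """Return the external form: drop leading zeros from numeric values."""
    if value.isdigit():
        stripped = value.lstrip("0")
        return stripped if stripped else "0"   # all-zero value stays "0"
    # non-numeric values (e.g. alphanumeric cost centers) pass through
    return value
```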

  • Problem with a Open Hub Extraction

    Hi Experts,
    I have a huge problem here.
    I'm trying to extract some data with an Open Hub application, but I'm having trouble with the application server.
    I have to extract the information from a DSO to a file on an application server, but the company has one application server for each environment (BD1, BQ1, etc.); the path conversion isn't working, and the transport of the request always fails.
    Does anyone know how to fix this?
    Thanks.
    Regards.
    Eduardo C. M. Gonçalves

    Hi,
    While creating the open hub, you need to maintain the file name details under transaction FILE.
    I hope you have maintained that.
    There you will also have to use a variable in the file path, so that the path changes in each system.
    There you will have to define:
    Logical File Path Definition
    Assignment of Physical Paths to Logical Path
    Logical File Name Definition, Cross-Client
    Once you have defined these and transported them, your Open Hub will run smoothly.
    Thanks
    Mayank

  • Open Hub extraction for Hierarchies

    Hello,
    I need to create an Open Hub on 0MAST_CCTR in HR-PA. Unfortunately, Open Hub does not give you an option for hierarchies. Any ideas?
    Thanks,

    Hi,
    If it is a flat-file output, then you can use the program mentioned in the SAP how-to document below. We are using this in our system.
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/0403a990-0201-0010-38b3-e1fc442848cb?quicklink=index&overridelayout=true
    Regards,
    Raghavendra.

  • Open Hub Extract in .txt file

    Hi Expert,
    An abc.txt file is being populated as the target of an open hub, through a process chain. The DTP monitor screen shows that the output data volume is around 2.8 million records every day, but I have not been able to find more than 1,250,000 records. Whatever the record count in the DTP monitor, the .txt file always shows me 1,250,000 rows. Is there any limitation on the number of rows in a .txt file?
    Please help.
    Regards,
    Snehasish

    thanks for the reply Geetanjali.
    Yeah, one field routine is there, but from the DTP monitor screen I can see that the total record count is 2,608,000, whereas when I check the .txt file it shows me 1,250,000, irrespective of the DTP monitor record count. Moreover, I am only populating 3 columns in my target .txt file.
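    One sanity check before suspecting the file itself: count the rows actually on disk rather than relying on what an editor displays, since many editors and spreadsheet tools truncate very large files. A small Python sketch (the path is a hypothetical example):

```python
# Sketch: count the rows actually written to a large .txt file, to tell
# whether the file is short or only the viewer's display is truncated.

def count_rows(path: str) -> int:
    """Stream the file line by line and count the rows."""
    total = 0
    with open(path, "r", encoding="utf-8", errors="replace") as f:
        for _ in f:
            total += 1
    return total
```

If the count matches the DTP monitor, the data is all there and only the viewing tool is the limit.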

  • Open Hub third-party extraction in yellow state forever

    Hi,
    Would appreciate any assistance in this regard.
    Here's the situation:
    1 - We have third party open hub extraction setup.
    2 - All settings are fine : RFC destination, transformations, Open hub.
    3 - Now the issue is that we have a third party system which is data stage and teradata.
    4 - Now they trigger a job which in turn starts the PC in BW.
    5  - This PC has 2 steps only. The start and the DTP.
    6 - This DTP loads data from Info-object attributes to Open hub (basically a table).
    7 - The DTP technical status is green but overall status stays yellow.
    8 - Now I know that for 3rd party we have several APIs, and here's my understanding:
    When the extraction is finished and the technical status is green, BW notifies the 3rd party through RSB_API_OHS_3RDPARTY_NOTIFY.
    Then the 3rd party checks the data through RSB_API_OHS_DEST_READ_DATA.
    Then, if the data is fine, the 3rd party sends confirmation through RSB_API_OHS_REQUEST_SETSTATUS and the overall status turns green.
    Now I do not understand what's wrong because all of the above mentioned should complete automatically.
    We have other openhubs with same RFC destination and they are completing successfully. Just for the information: We have recently moved to SP 10 on EHP1.
    Regards
    Debanshu

    Were you able to solve this? I am also facing exact same problem.
    Regards
    Pankaj

  • Open hub error when generating file in application server

    Hi, everyone.
    I'm trying to execute an open hub destination that save the result as a file in the application server.
    The issue is: in the production environment we have two servers, XYZ is the database server and A01 is the application server. When I direct the open hub to save the file on A01, everything works fine. But when I change it to save on XYZ, I get the following error:
    >>> Exception in Substep Start Update...
    Message detail: Could not open file "path and file" on application server
    Message no. RSBO214
    When I use transaction AL11, I can see the file there in the XYZ filesystem (with date and time corresponding to the execution), but I can't view the content, and the size appears to be zero.
    Possible causes I already checked: authorization, disk space, SM21 logs.
    We are in SAP BW 7.31 support package 6.
    Any idea what could be the issue or where to look better?
    Thanks and regards.
    Henrique Teodoro

    Hi, there.
    Posting the resolution for this issue.
    SAP support gave directions that solved the problem. No matter which server (XYZ or A01) I log on to or start the process chain from, the DTP job always runs on the A01 server, and this causes an error since the directory doesn't exist on server XYZ.
    This occurs because the DTP settings for the background job were left blank. I followed these steps to solve the problem:
    - open DTP
    - go to "Settings for Batch Manager"
    - in "Server/Host/Group on Which Additional Processes Should Run" I picked the desired server
    - save
    After that, no matter from where I start the open hub extraction, it always runs on the specified server and saves the file accordingly.
    Regards.
    Henrique Teodoro

  • Open Hub 3rd Party - Process Chain Automation

    Hi,
    I have a requirement to fulfill the Open Hub extraction to 3rd party SQL Server DB using process chain automation.
    At times there are no deltas, and during the execution of the DTP, since there are no delta records, the request status shows GREEN but the overall status stays YELLOW.
    Until we set this request to GREEN, we can't trigger the next load. I am doing these steps manually by going to tables/function modules to change the status.
    I am looking for options to automate this in the process chain; can someone help me with this? Appreciate your help!
    Thanks,
    Pandu.

    Do you know when the delta will bring zero records? Also, when you say the overall status is yellow, which status are you talking about?
    You can always use an ABAP program in your PC which checks whether the latest request ID has records and turns the status green, so that dependent data triggers are sent.
    Thanks
    Abhishek Shanbhogue

  • OPEN HUB LICENSE

    Hi gurus,
    I have a question regarding the OPEN HUB LICENSE. I have seen some old posts where it is said that you need a separate SAP BI license to use OPEN HUB. I worked on a project in December where I used it and I didn't have any problem with the licenses, but I don't know what licenses the company had.
    I would appreciate it if somebody could explain a bit more about this topic.
    Thanks.

    As far as I know, and from when I did pre-sales, this was the mantra:
    If you use BW to extract data into another data warehouse or other system at all, then you need an Open Hub License.
    This covers technical open hub extraction AND flat-file dumps/XLS outputs/Oracle tables where you then load the results into another system (i.e. you don't have to use Open Hub to technically get the data out, but you still need the license).
    The lack of a license does not stop you from using the transaction, but if SAP catches you... oops!
    Please contact your SAP account manager in all instances for clarification and pricing.

  • What is the difference between Open hub destination and Info spokes

    what is the difference between Open hub destination and Info spokes?
    Please search the forum before posting a thread.
    Edited by: Pravender on Aug 16, 2010 11:17 AM

    Hi,
    When a user initiates open hub extraction by creating an InfoSpoke, the behind-the-scenes activity involves OO ABAP, which calls classes to determine each of the different components involved in making the open hub extraction possible. In particular, this enhancement focuses on 2 standard classes: one used to determine the file destination name and path, and the other to control the user interface of the InfoSpoke, which ultimately allows the user to enter his/her own filename and path.
    The open hub service enables us to distribute data from an SAP BW system into external data marts, analytical applications, and other applications. With this, we can ensure controlled distribution using several systems.
    The central object for the export of data is the InfoSpoke. Using this, we can define the object from which the data comes and into which target it is transferred.
    Regards,

  • Open Hub: How-to doc "How to Extract data with Open Hub to a Logical File"

    Hi all,
    We are using open hub to download transaction files from InfoCubes to the application server, and would like to have a filename which is dynamic based on period and year, i.e. the period and year of the transaction data to be downloaded.
    I understand we could use a logical file for this purpose. However, we are not sure how to have the period and year derived dynamically in the filename.
    I have read a number of messages posted on SDN on a similar topic, and many have suggested a how-to paper titled "How to Extract Data with Open Hub to a Logical Filename". However, I could not seem to get the document from the link given.
    Just wonder if anyone has the correct or latest link to the document, or would appreciate if you could share the document with all in sdn if you have a copy.
    Many thanks and best regards,
    Victoria

    Hi,
    After creating the open hub, press F1 in the Application Server File Name text box. From the help window, click Maintain 'Client-independent file names and file paths'; you will be taken to the Implementation Guide screen. Click Cross-client maintenance of file names and create a logical file path by clicking New Entries. After creating the logical file path, go to Logical File Name Definition and enter your logical file name, the physical file (your file name followed by month or year, whatever is applicable; press F1 for more info), data format (ASC), application area (BW), and the logical path (choose via F4 the one you created first). Now go to Assignment of Physical Path to Logical Path and enter the syntax group; the physical path is the path you gave in the logical file name definition.
    We have created a logical file name that identifies the file by system date, but your requirement seems to be the dynamic date of the transaction data; you may be able to achieve this by creating a variable. The F1 help will be of much use to you. All the above steps will help you create a dynamic logical file.
    Hope this helps to some extent.
    Regards
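    To illustrate the variable-driven naming discussed above, here is a small Python sketch of deriving a period/year-stamped physical file name; the pattern, base name, and zero-padding are hypothetical assumptions, not what the logical file name facility itself generates:

```python
# Sketch: build a physical file name stamped with fiscal year and
# period, similar in spirit to a logical file name with a variable.
# The naming pattern below is a hypothetical example.

def physical_file_name(base: str, fiscal_year: int, period: int) -> str:
    """Compose '<base>_<year>_<period>.csv' with a 3-digit period."""
    return f"{base}_{fiscal_year}_{period:03d}.csv"
```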

  • Extract Data with OPEN HUB to a Logical Filename

    Hi Experts,
    Can anybody help me in sending the link for How to guide...Extract Data with OPEN HUB to a Logical Filename?
    Thanks in advance.
    BWUser

    Hi,
    Check these links:
    http://searchcrm.techtarget.com/generic/0,295582,sid21_gci1224995,00.html
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/e698aa90-0201-0010-7982-b498e02af76b
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/1570a990-0201-0010-1280-bcc9c10c99ee
    Hope this may help you.
    Regards,
    shikha

  • Open Hub not extracting all data in CUBE

    We created an FI-SL cube and use the OPEN HUB process to extract the current value of accounts. The problem is that not all of the data is being extracted.
    I have a couple of examples where some of the data in the CUBE is not being extracted by the OPEN HUB process. I have checked the filter and it is set up properly. I have manually validated the data in the CUBE.
    It is very strange that the data is in the CUBE correctly, but is just not being extracted.
    Any ideas?
    Thanks,
    Bill Lomeli

    Ashok,
    This is the first time I have seen a comment about the field limitation.
    A previous message from Deepak Simon states:
    "I want to extract data from a cube with more than 16 fields."
    We are using BI 7.0...is that restriction still in place for this version?
    Thanks,
    Bill

  • Change of sign while extracting data using Open hub

    Hello All,
    We are extracting data from Info cube to a file in the application server using Open hub.
    While extracting, if there is any negative value for a key figure, the negative sign is appended on the right side of the value (e.g. "123.67-"), which is the standard behaviour, but I want the sign to be on the left side of the value (e.g. "-123.67") in the file on the application server.
    Could any one please let me know if there is any setting to do this change ?
    Thanks in advance

    Hi,
    Changing the SIGN position from one side of the NUMBER to the other can be done in the following ways:
    1) After the file is placed on the application server, deploy an OS script (for .csv) to make the required changes to the required COLUMN [the Basis team will have more idea about this].
    2) A more time-consuming idea: deploy a routine at the Open Hub/InfoSpoke level.
    Regards
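    Option 1 above, post-processing the file outside SAP, can be sketched in a few lines of Python; the function name is an illustrative assumption:

```python
# Sketch: move a trailing minus sign to the front of a value,
# e.g. "123.67-" -> "-123.67". Apply per field when post-processing
# the extracted file.

def fix_sign(value: str) -> str:
    """Return the value with any trailing '-' moved to the front."""
    v = value.strip()
    if v.endswith("-"):
        return "-" + v[:-1]
    return v
```

The same logic could be applied column-wise while rewriting the .csv file row by row.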

  • How to get rid of zeros for the data which has been extracted with open hub

    Hi all
    I am trying to extract data from the cube through an open hub, but in the extracted data, for a cost center I see '0's where I expect an empty space; even in the cube there is an empty space.
    Can anyone suggest me quickly, points will be rewarded
    thanks
    preethi......

    In the transformation of the Open Hub, use a BAPI.
    Create a target structure with all fields as type character, and move the data from the source structure to the target structure in the BAPI using ABAP code. When there is no value in a character field, it will not show up as '0's.
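    The character-field idea above can be illustrated outside SAP with a small Python sketch (a hypothetical helper, not the BAPI code itself): fields holding only zeros are emitted as empty strings, while real values pass through.

```python
# Sketch: suppress all-zero field values so they come out as empty
# strings in the extracted file. Hypothetical post-processing helper.

def suppress_zero(value: str) -> str:
    """Return '' when the field holds only zeros, else the value unchanged."""
    v = value.strip()
    if v and set(v) == {"0"}:
        return ""
    return value
```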
