How does the data storage work?

I'm looking for new ideas on how to save an STL hash_map to disk.

You could use a Berkeley DB database to store the data from an STL hash_map.
Berkeley DB btree and hash databases store key/data pairs. These can contain variable length key and/or data items.
It should be fairly straight forward to iterate through a hash_map and store each item in a database.
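For illustration, here is a minimal sketch of that loop using Berkeley DB's C++ API (Db, Dbt). It is shown with std::unordered_map and string keys/values, but the same loop works for an STL hash_map; the function and file names are only examples.

#include <db_cxx.h>        // Berkeley DB C++ API
#include <string>
#include <unordered_map>   // or <hash_map>, depending on your compiler

// Persist every key/value pair of an in-memory map as a key/data pair
// in a Berkeley DB hash database. Error handling is kept minimal.
void save_map(const std::unordered_map<std::string, std::string>& m,
              const char* filename)
{
    Db db(NULL, 0);                               // no environment
    db.open(NULL,                                 // no transaction
            filename, NULL,                       // file name, no sub-database
            DB_HASH, DB_CREATE, 0);               // hash access method

    std::unordered_map<std::string, std::string>::const_iterator it;
    for (it = m.begin(); it != m.end(); ++it) {
        Dbt key((void*)it->first.c_str(),  it->first.size() + 1);
        Dbt data((void*)it->second.c_str(), it->second.size() + 1);
        db.put(NULL, &key, &data, 0);             // store one pair
    }
    db.close(0);
}

Reading the pairs back is the mirror image: open the same database and iterate over it with a Dbc cursor, inserting each key/data pair into the map.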
The best places to start learning how to use Berkeley DB are the Getting Started Guide:
http://www.oracle.com/technology/documentation/berkeley-db/db/gsg/CXX/index.html
Or the examples distributed with the source code.
Please let us know if you have more specific questions.
Regards,
Alex

Similar Messages

  • How the IDoc interfaces work

    Hi...
    When we design an interface using IDocs, what is the actual process by which the data gets updated in SAP?
    I want to know the flow of the process.
    I have created the IDoc structure in the system, but how does the data get into it, and from the IDoc, what makes the data get updated in the system?
    Just a few lines will be helpful.

    Hi,
    An IDoc is simply a data container that is used to exchange information between any two processes that can understand the syntax and semantics of the data...
    1. IDocs are stored in database tables in the SAP system.
    2. IDocs are independent of the sending and receiving systems.
    3. IDocs are independent of the direction of data exchange.
    The two available processes for IDocs are:
    1) Outbound Process
    2) Inbound Process
    There are basically two types of IDOCs.
    1) Basic IDOCs: Basic IDOC type defines the structure and format of the business document that is to be exchanged between two systems.
    2) Extended IDocs: extend the functionality by adding more segments to existing basic IDocs.
    To create an IDoc, see the steps below for outbound IDoc processing.
    1. Analyse Hierarchy Levels
    2. Create New segment
    3. Create New IDoc Type
    4. Create New Message Type
    5. Link Message with IDoc Type
    6. Create an entry in EDP13 via transactions WE20 and BD64
    7. Populate the Custom IDoc via ABAP Program
    7b. Error Handling
    7c. Send Status Email
    8. Test the Population of the Custom IDoc
    Step 1 – Analyse Hierarchy Levels:
    Analyse the data relationships being processed in the interface. Define the appropriate hierarchical Parent-to-Child relationships.
    Navigate to transaction code WEDI
    Transaction WEDI displays the IDOC main menu. This allows navigation around the various development and control areas to create a customised IDOC.
    Step 2 – Create a new segment:
    via wedi : Development - IDOC Segments or Transaction code WE31.
    • Enter segment name and click on Create.
    The name of the segment type must start with Z1 and have a maximum of eight characters.
    • Enter description and enter the relevant field names and data elements.
    The segment should represent a structure in the program, so for each field in the segment a field name and a data element must be defined.
    • Save the segment and enter the Person Responsible and Processing Person.
    • Go to Edit and Set Release.
    • Repeat this procedure for each new Segment in the IDOC.
    Step 3 – Create a new IDOC Type
    via wedi Development - IDOC Types or Transaction WE30.
    • Enter the IDoc type name (starting with Z), click on Basic Type and then Create.
    • Create as new, enter Person Responsible and Processing Person and enter description.
    • On ‘Create Basic Type’ screen decide where segments should be inserted and go to Edit/Create Segment.
    • Complete relevant fields in the Maintain Attributes screen:
    • From the relevant segments created in Step 2, enter the segment type and whether it is a mandatory segment.
    • Enter the minimum and maximum number of segments to be allowed in the sequence (one minimum and one maximum if the segment is mandatory).
    • The Parent Segment and Hierarchy Level will be automatically created depending on where in the IDOC tree you decided to create that particular segment.
    • Repeat this process for each segment needed in the IDOC type, deciding whether to add the next segments at the same level or as a ‘Child’.
    • When the IDoc type has been created, return to the initial screen. Go to Edit and Set Release.
    • Go to Transaction WE60 to view the IDoc Type you have created.
    Step 4 – Create new Message Type
    via wedi Development - Message Types or Transaction WE81.
    • Display/Change and click on New Entries
    • Create a new Message Type and Save.
    Step 5 – Link Message Type to IDOC Type
    via wedi Development - IDOC Type/Message or Transaction WE82.
    • Display/Change and then click on New Entries.
    • Enter Message Type, Basic Type (IDOC Type) and Release (46C) and Save.
    Step 6 – Create an entry in EDP13 via transactions WE20 and BD64.
    The partner profile for the Idoc must be set up and generated in the transaction BD64 and transaction WE20.
    • WE20 – Add Message Type to appropriate Partner Type, Enter Message Type, Receiver Port and Idoc Type and Save.
    • BD64 – Create a Model View, Enter Sender and Receiver Ports, Attach Message Type. Go to ‘Environment’ on Menu and click on Generate Partner Profiles and generate (not save) profile.
    Step 7 – Populate the custom IDOC via ABAP Program
    See Test Program ZOUTBD_IDOC_TEMPLATE, Appendix IV.
    • Create an Internal Table for each segment type, this should be exactly the same structure as the segment type.
    • The control record is filled into a structure like EDIDC. The message type and the IDoc type for the IDoc must be populated in this EDIDC structure (see the sketch after this list).
    PERFORM populate_Control_structure USING c_mestyp
    c_SEGMENT_type1.
    • The data for each segment is filled into a structure like EDIDD-SDATA; the segment data and the segment name are populated in the EDIDD structure.
    PERFORM transfer_Parent_data_to_seg.
    • The standard SAP function module MASTER_IDOC_DISTRIBUTE is called to pass the populated IDOC to the ALE Layer.
    PERFORM master_idoc_distribute.
    • NOTE: This function module is only called for stand alone programs and Shared Master Data programs (SMD). It is not called when using extensions or output determination.
    • The ALE Layer handles the sending of the IDOC to the receiving system.
    • Error Handling (see Step 7b).
    • Commit work.
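    A rough ABAP sketch of what those PERFORM routines typically do. The message type ZMYMESSAGE, basic type ZMYIDOC, segment Z1MYSEG and the receiver partner are placeholders for your own objects, so treat this as an outline of the pattern rather than the exact program:
    DATA: ls_edidc  TYPE edidc,                        " control record
          lt_edidd  TYPE STANDARD TABLE OF edidd,      " data records
          ls_edidd  TYPE edidd,
          lt_comm   TYPE STANDARD TABLE OF edidc,
          ls_z1seg  TYPE z1myseg.                      " custom segment structure (placeholder)
    " Control record: message type and basic IDoc type
    ls_edidc-mestyp = 'ZMYMESSAGE'.                    " placeholder message type
    ls_edidc-idoctp = 'ZMYIDOC'.                       " placeholder basic type
    ls_edidc-rcvprt = 'LS'.                            " partner type: logical system
    ls_edidc-rcvprn = 'RECEIVER'.                      " receiver partner (placeholder)
    " One data record per segment: segment name plus the flat segment data in SDATA
    ls_z1seg-matnr  = '000000000000001234'.            " example field of the placeholder segment
    ls_edidd-segnam = 'Z1MYSEG'.
    ls_edidd-sdata  = ls_z1seg.
    APPEND ls_edidd TO lt_edidd.
    " Hand the populated IDoc over to the ALE layer
    CALL FUNCTION 'MASTER_IDOC_DISTRIBUTE'
      EXPORTING
        master_idoc_control            = ls_edidc
      TABLES
        communication_idoc_control     = lt_comm
        master_idoc_data               = lt_edidd
      EXCEPTIONS
        error_in_idoc_control          = 1
        error_writing_idoc_status      = 2
        error_in_idoc_data             = 3
        sending_logical_system_unknown = 4
        OTHERS                         = 5.
    IF sy-subrc = 0.
      COMMIT WORK.
    ENDIF.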
    Step 7b – Error Handling (Project Specific)
    • Analyse which fields in the interface are mandatory for the receiving system and who needs to receive error notification.
    • Declare a structure of type ‘MCMAILOBJ’ for sending instructions.
    • Enter values for the internal table based on structure ‘MCMAILOBJ’
    • For selection processes, on SY-SUBRC checks, and where fields are mandatory for the receiving system, insert function module 'MC_SEND_MAIL' (a call sketch follows the parameter list below).
    • Enter values for the following parameters:
    MS_MAIL_SENDMODE = ‘B’ (Batch Mode)
    MS_MAIL_TITLE = 'Mail Title'
    MS_MAIL_DESCRIPTION = ‘Error description’ (e.g. MATNR not given)
    MS_MAIL_RECEIVER = ‘Name of Receiver’ (To be determined)
    MS_MAIL_EXPRESS = ‘E’ (Express Delivery)
    MS_MAIL_DLINAME = Leave Blank
    MS_MAIL_LANGU = 'E' (Language)
    MS_MAIL_FUNKOBJ_NAME = Leave Blank
    TABLES
    MS_MAIL_CONT = I_MCMAILOBJ
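    Put together as an ABAP call, using the parameter names listed above (I_MCMAILOBJ is the internal table based on structure MCMAILOBJ; the receiver and the texts are placeholders, and the exact interface should be verified in SE37):
    DATA: i_mcmailobj TYPE STANDARD TABLE OF mcmailobj.   " mail content lines
    CALL FUNCTION 'MC_SEND_MAIL'
      EXPORTING
        ms_mail_sendmode     = 'B'                  " batch mode
        ms_mail_title        = 'Mail Title'
        ms_mail_description  = 'Error description'  " e.g. MATNR not given
        ms_mail_receiver     = 'RECEIVER'           " name of receiver (placeholder)
        ms_mail_express      = 'E'                  " express delivery
        ms_mail_dliname      = space                " leave blank
        ms_mail_langu        = 'E'                  " language
        ms_mail_funkobj_name = space                " leave blank
      TABLES
        ms_mail_cont         = i_mcmailobj.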
    Note:
    It has to be determined separately for each interface how these errors and mail notifications are to be grouped, dependent upon the number of errors that are potentially likely. One possible approach is to send an email for each reason for rejection and include all the records that failed for that reason in the mail notification. Another possible approach is to send an email for every failure.
    When error checking for mandatory fields, it is common SAP practice to reject a record on its first failure (irrespective of subsequent errors in that record).
    Step 7c – Send status mail
    • Append to table I_MCMAILOBJ details of the time the interface was processed, how many Idocs were created and how many of these produced a status of 03.
    • Select the user to receive the mail from ZINT_RECEIVER, using the name of the program as a key (SY-CPROG).
    • Use function Module ‘MC_SEND_MAIL’ to send a mail to the user containing the contents of I_MCMAILOBJ at the end of the interface processing.
    Step 8 – Test the population of the custom IDOC
    via wedi IDoc - Display IDoc or Transaction WE02.
    • Enter your message type and execute.
    • Status should be green, double click on one of the Idocs you have created to view its contents.
    • If a problem has occurred click on Status which will give you a description of the error.
    • Drop down Data Records arrow and this should list the data in the IDoc in the correct hierarchical structure.
    • Click on each individual segment and view the content to check that the correct data has been read.
    • If you have UNIX access, you can use AL11 to view the file that you have created.
    Note:
    For some interfaces it may be valid to send an empty file to SAP. This empty file is converted to the custom IDOC format expected by SAP. This custom IDOC will contain dummy information. In the inbound processing code, if the dummy information is identified then the processing of the IDOC is considered to be complete and the IDOC should then be assigned a successfully processed status of 53, even though it has not been processed at all.
    hi,
    2.2 Inbound Interface
    Follow steps 1 to 5 inclusive as detailed above in outbound interface.
    Step 6
    Write a custom function module to handle custom inbound processing. This custom function module must
    • Check for the correct message type
    • Read the IDoc data segment
    • Perform data conversion and validate the data as appropriate
    • Post the data to the database
    • Handle any error situations
    • Set the correct return values for the status record
    Note that the function module must not commit to the database. This is because the status record is not written until control returns to the ALE layer; if you commit work in the function module and an error then occurs on return to the ALE layer, the data is already committed even though the status record never records a successful outcome.
    The commit work is executed in the ALE layer after the status records are updated, via the standard SAP function module IDOC_INBOUND_PROCESS. The attributes of this function module are set up in transaction BD51: click on 'New Entries' and fill in the data, the main input parameters being 'Input Type' and 'Dialog Allowed'.
    Take care as some standard SAP transactions contain a Commit Work as part of their processing. Therefore using a BDC to process inbound data to SAP may not be acceptable. You need to check that the SAP transaction is ALE enabled.
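    As a sketch, a custom inbound function module usually follows the interface of the standard IDOC_INPUT_* modules. The message type ZMYMESSAGE, segment Z1MYSEG and the posting logic below are placeholders, so treat this as an outline rather than a finished implementation:
    FUNCTION z_idoc_input_mymsg.
      " Interface (defined in SE37), typically:
      "   IMPORTING  input_method, mass_processing
      "   EXPORTING  workflow_result, application_variable, in_update_task, call_transaction_done
      "   TABLES     idoc_contrl (EDIDC), idoc_data (EDIDD), idoc_status (BDIDOCSTAT),
      "              return_variables (BDWFRETVAR), serialization_info (BDI_SER)
      "   EXCEPTIONS wrong_function_called
      DATA: ls_z1seg  TYPE z1myseg,            " placeholder segment structure
            ls_status TYPE bdidocstat.
      LOOP AT idoc_contrl.
        " Check for the correct message type
        IF idoc_contrl-mestyp <> 'ZMYMESSAGE'.
          RAISE wrong_function_called.
        ENDIF.
        " Read the data segments of this IDoc
        LOOP AT idoc_data WHERE docnum = idoc_contrl-docnum.
          CASE idoc_data-segnam.
            WHEN 'Z1MYSEG'.
              ls_z1seg = idoc_data-sdata.
              " ... convert, validate and post the data here (no COMMIT WORK!)
          ENDCASE.
        ENDLOOP.
        " Write a status record: 53 = processed successfully, 51 = application error
        CLEAR ls_status.
        ls_status-docnum = idoc_contrl-docnum.
        ls_status-status = '53'.
        ls_status-msgty  = 'S'.
        APPEND ls_status TO idoc_status.
      ENDLOOP.
      workflow_result = '0'.                   " no workflow error
    ENDFUNCTION.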
    Step 7
    (Transaction WE57)
    Assign the custom function module to the IDoc type and the message type.
    Set function module to type ‘F’ and direction ‘2’ for inbound.
    Step 8
    (Transaction WE42)
    Create a new process code and assign it to the function module. The process code determines how the incoming IDoc is to be processed in SAP.
    Step 9
    (Transaction BD67)
    Assign the function module to the process code created above. Go to 'New Entries' and enter the process code and the function module name.
    Step 10
    (Transaction WE20 and Transaction BD64)
    Create a partner profile for your message and ensure that in transaction WE20 the process code is the one that points to your function module. (See step 6 of creating Outbound Idocs).
    Step 11
    Ensure that error handling functionality is present.
    Or Check these links.....
    ALE/ IDOC
    http://help.sap.com/saphelp_erp2004/helpdata/en/dc/6b835943d711d1893e0000e8323c4f/content.htm
    http://www.sapgenie.com/sapgenie/docs/ale_scenario_development_procedure.doc
    http://edocs.bea.com/elink/adapter/r3/userhtm/ale.htm#1008419
    http://www.netweaverguru.com/EDI/HTML/IDocBook.htm
    http://www.sapgenie.com/sapedi/index.htm
    http://www.sappoint.com/abap/ale.pdf
    http://www.sappoint.com/abap/ale2.pdf
    http://www.sapgenie.com/sapedi/idoc_abap.htm
    http://help.sap.com/saphelp_erp2005/helpdata/en/0b/2a60bb507d11d18ee90000e8366fc2/frameset.htm
    http://help.sap.com/saphelp_erp2005/helpdata/en/78/217da751ce11d189570000e829fbbd/frameset.htm
    http://www.allsaplinks.com/idoc_sample.html
    http://www.sappoint.com/abap.html
    Reward points if useful....
    Regards
    AK

  • How Active Data Model Works?

    Can someone explain to me how the Active Data Model works?
    Is all data managed in memory? When do the changes go to the database? What are the benefits of using it?
    Thanks in advance.

    See if this overview helps:
    http://docs.oracle.com/cd/E16162_01/web.1112/e16182/bcintro.htm#sm0061

  • Can anyone tell me how the Status Profile works in relation to BP's...

    Can anyone tell me how the Status Profile works in relation to BP's...
    For example, how can I view a particular status that a BP has, and can a BP have more than one status against a status profile? Also, what date/time details are set when a status is changed?
    Can anyone help with this?
    Jas

    Arap,
    Many thanks for the info. As with all these posts, you do your best to try and find out yourself, fail, then raise a question on a site like this. As soon as you do the answer(s) become clear.
    I've found transaction BS02, which shows status profiles and their associated statuses.
    The order/transaction holds the statuses that are set; they can be extracted using the function module CRM_ORDER_READ and its ET_STATUS table.
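    For reference, a rough sketch of that call; the type names and exceptions are from memory and should be double-checked in SE37, and the GUID value is a placeholder:
    DATA: lt_header_guid TYPE crmt_object_guid_tab,   " GUIDs of the orders to read
          lt_status      TYPE crmt_status_wrkt,       " statuses are returned here
          lv_guid        TYPE crmt_object_guid.
    " lv_guid must be filled with the GUID of the order/transaction (placeholder)
    APPEND lv_guid TO lt_header_guid.
    CALL FUNCTION 'CRM_ORDER_READ'
      EXPORTING
        it_header_guid     = lt_header_guid
      IMPORTING
        et_status          = lt_status
      EXCEPTIONS
        document_not_found = 1
        error_occurred     = 2
        OTHERS             = 3.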
    That's what I was mainly after.
    Many thanks for your help.
    Jas
    Edited by: Jason Stratham on Apr 1, 2009 4:59 PM

  • HP 34401A - When I run the Read Meas.vi, it has an error if I do not turn off the data storage

    When I run the Read Meas.vi, it has an error if I do not turn off the data storage. The same thing happens when I run the App. Example.vi. Does anyone know how to solve this problem? Thanks a lot.
    KL

    LoganS wrote:
    Hi KL,
    The Read Meas.vi is one of the subvi's in the App. Example.vi, so this problem is most likely the same problem in each case. From the help for Read Meas.vi:
    Data Storage instructs the device to store the data to be sent to either the internal or external buffer. If TRUE (default), the VI stores data in the on mode. If FALSE, the VI does not retain data in the off mode. Use the off mode only with the average min/max operation when you do not need to retrieve data. You cannot configure the meter for external buffering in the off mode.
    So the question is, are you trying to retrieve any data? If so, then as indicated in the above paragraph, you cannot retrieve data and have the data storage turned off. Good luck!
    Logan S.
    Yes, I need to retrieve data, so due to that problem I cannot really get any data. I am not sure whether the problem is due to the USB GPIB or not. Once I click on the run arrow, everything goes fine from initialise through measurement, but when it comes to Read Measurement it throws an error (something like: VISA Wait on Event for RQS.vi->HP34401A Read Meas.vi->Untitled 1). I have no idea what it means.
    KL

  • How do data packages work?

    How do data packages work, specifically?
    Or how does the 25 MB data package work?
    Any specifications?

    In addition to the information SuzyQ has provided, here is a link for additional information http://support.vzw.com/clc/faqs/Calling%20Plans/nationwide.html

  • Does anyone know how the cutout filter works and is there a way of achieving the same effect without using filters to get more control over final look?

    does anyone know how the cutout filter works and is there a way of achieving the same effect without using filters to get more control over final look?

    Several ways to get similar results. Image > Adjustments > Posterize with low values, similar to what you'd use in Cutout. This is the most flexible way I can think of, as you keep the image in RGB mode with layers intact. A more radical approach would be to reduce bit depth using Indexed Colour. You'll need to experiment with settings; try changing Forced to Primaries, and Matte to Foreground Color. There's no going back from this route, although you can change the mode back to RGB to re-enable layers, adjustment layers etc.
    A nice thing about the Filter gallery filters is that you can change the layer to a Smart object with all the control that gives you.
    Now if only this forum could filter out bizarre content.

  • Is there a good "Sticky note app that ACTUALLY put a Sticky Note on my iPad ( over my existing wallpaper images) on either the Home or Lock screens like how the Stickies app works on a Mac?

    I have an iPad 3.
    I tried two. One was a worthless piece of junk I wasted $2.99 on, and the other is pretty good ($1.99) except it does not do a function I would like to use as advertised (which is to create a sticky note that works like an app: it creates an icon-sized version of the original sticky note that you can tap on like an app, and a large sticky note is supposed to appear). This does NOT work on an iPad.
    Does anyone use a Stickies app for IOS that works exactly like how the Stickies app works on a Mac?
    What good is a Stickies note app on iOS when it cannot put actual virtual sticky notes over the main iOS screens and wallpapers and have them remain there until they are no longer needed?
    Does anyone use a good Stickies app on an iPad? NOT an iPhone or iPod Touch.

    It really cannot be done in iOS due to the sandboxed security model of the operating system. No app can modify the "desktop" like that - they can only display their own "virtual" desktop when actually actively in use. So any sticky note app is the same as any simple note app: it must be open and active to display content. It cannot modify any screen but its own when active.

  • ITunes shows that there are no songs on my iphone but there are 274 songs there. The data storage shows up as "other" on iTunes. I just want to clear the music on my phone but  I can't seem to do so. Is there any way to do this?

    iTunes shows that there are no songs on my iphone but there are 274 songs there. The data storage shows up as "other" on iTunes. I just want to clear the music on my phone but  I can't seem to do so. Is there any way to do this? You used to be able to swipe and delete a song straight from your phone but I can't seem to do that anymore.

    That is stealing.

  • Entire scenario: how the data is being processed

    Hi,
    I need the full scenario in detail: when the sender adapter picks the file from the source directory, how the data is passed to the Integration Server, how it is passed to the adapter engine, how the adapter engine processes it, how it is sent to the adapter framework, what steps the adapter framework performs, at which step the audit log is maintained, how messaging, logging and queueing are done in the AFW, how the data is passed to the Integration Engine after processing in the adapter engine, how the pipeline steps are executed, and how the data is transferred to the receiver.
    In short: all the steps that are processed while sending the data from the sender system to the receiving system, how the data is processed internally, where the audit log is maintained, etc.

    Hi,
    Please see the links below; they should help you a lot:
    http://help.sap.com/saphelp_nw2004s/helpdata/en/fd/16e140a786702ae10000000a155106/content.htm
    /people/siva.maranani/blog/2005/05/25/understanding-message-flow-in-xi
    http://help.sap.com/saphelp_nw2004s/helpdata/en/6a/a12241c20af16fe10000000a1550b0/content.htm
    http://help.sap.com/saphelp_nw2004s/helpdata/en/e4/6019419efeef6fe10000000a1550b0/content.htm
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/327dc490-0201-0010-d49e-e10f3e6cd3d8
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/34a1e590-0201-0010-2c82-9b6229cf4a41
    Regards
    Chilla

  • How the data is stored in an InfoCube... what happens in the back end?

    Hi Experts,
    How is the data stored in an InfoCube and a DSO? What happens in the back end?
    I mean, a cube contains a fact table and dimension tables. How is the data stored, and what happens in the back end?
    Regards,
    Swetha.

    Hi,
    Please check :
    How is data stored in DSO and Infocube
    InfoCubes are made up of a number of InfoObjects. All InfoObjects (characteristics and key figures) are available independent of the InfoCube. Characteristics refer to master data with their attributes and text descriptions.
    An InfoCube consists of several InfoObjects and is structured according to the star schema. This means there is a (large) fact table that contains the key figures for the InfoCube, as well as several (smaller) dimension tables which surround it. The characteristics of the InfoCube are stored in these dimensions.
    An InfoCube fact table only contains key figures, in contrast to a DataStore object, whose data part can also contain characteristics. The characteristics of an InfoCube are stored in its dimensions.
    The dimensions and the fact table are linked to one another using abstract identification numbers (dimension IDs) which are contained in the key part of the particular database table. As a result, the key figures of the InfoCube relate to the characteristics of the dimension. The characteristics determine the granularity (the degree of detail) at which the key figures are stored in the InfoCube.
    Characteristics that logically belong together (for example, district and area belong to the regional dimension) are grouped together in a dimension. By adhering to this design criterion, dimensions are to a large extent independent of each other, and dimension tables remain small with regards to data volume. This is beneficial in terms of performance. This InfoCube structure is optimized for data analysis.
    The fact table and dimension tables are both relational database tables.
    Characteristics refer to the master data with their attributes and text descriptions. All InfoObjects (characteristics with their master data as well as key figures) are available for all InfoCubes, unlike dimensions, which represent the specific organizational form of characteristics in one InfoCube.
    http://help.sap.com/saphelp_nw04s/helpdata/en/4c/89dc37c7f2d67ae10000009b38f889/frameset.htm
    Check the threads below:
    Re: about Star Schema
    Differences between Star Schema and extended Star Schem
    What is the difference between Fact tables F & E?
    Invalid characters erros
    -Vikram

  • How the PS Capex works in SAP R/3 Finance

    Hi All,
    How is the secondary cost element used in SAP for PS capex settlement?
    How does PS capex work in SAP R/3 Finance?
    When are the secondary cost elements starting with 9 accounted for in PS capex settlement?
    Will the secondary cost element balance be zero with regard to PS capex settlement postings?
    Thanks & Regards
    Nandha

    For ECCN numbers we did an LSMW, or you can do a BDC program.

  • How the data is fetched from the cube for reporting - with and without BIA

    hi all,
    I need to understand the scenario below (as to how the data is fetched from the cube for reporting).
    I have a query on a MultiProvider connected to cubes, say A and B. A has a BIA index, B does not. There are no aggregates created on either of the cubes.
    CASE 1: I have taken RSRT stats with BIA on, in aggregation layer it says
    Basic InfoProvider | Table type | Viewed at | Records, Selected | Records, Transported
    Cube A | (blank) | 0.624305 | 8,087,502 | 2,011
    Cube B | E | 42.002653 | 1,669,126 | 6
    Cube B | F | 98.696442 | 2,426,006 | 6
    CASE 2:I have taken the RSRT stats, disabling the BIA index, in aggregation layer it says:
    Basic InfoProvider | Table Type | Viewed at | Records, Selected | Records, Transported
    Cube B | E | 46.620825 | 1,669,126 | 6
    Cube B | F | 106.148337 | 2,426,030 | 6
    Cube A | E | 61.939073 | 3,794,113 | 3,499
    Cube A | F | 90.721171 | 4,293,420 | 5,584
    Now my question is: why is there such a huge difference in the number of records transported for cube A compared to case 1? The input criteria are the same in both cases and the result output matches. There is no change in the number of records selected for cube A; it is 8,087,502 in both cases.
    Can someone please clarify this difference in the records being selected?

    Hi,
    Yes, Vitaliy's guess could be right. Please check whether FEMS compression is enabled (note 1308274).
    To get more details about the selection, you can activate the execution plan for SQL/BWA queries in the data manager. You can also activate the trace functions for BWA in RSRT. That way you can see how both queries select their data.
    Regards,
    Jens

  • How the data is fetched from the cube for reporting

    hi all,
    I need to understand the scenario below (as to how the data is fetched from the cube for reporting).
    I have a query on a MultiProvider connected to cubes, say A and B. A has a BIA index, B does not. There are no aggregates created on either of the cubes.
    CASE 1:  I have taken RSRT stats with BIA on, in aggregation layer it says
    Basic InfoProvider | Table type | Viewed at | Records, Selected | Records, Transported
    Cube A | (blank) | 0.624305 | 8,087,502 | 2,011
    Cube B | E | 42.002653 | 1,669,126 | 6
    Cube B | F | 98.696442 | 2,426,006 | 6
    CASE 2:I have taken the RSRT stats, disabling the BIA index, in aggregation layer it says:
    Basic InfoProvider | Table Type | Viewed at | Records, Selected | Records, Transported
    Cube B | E | 46.620825 | 1,669,126 | 6
    Cube B | F | 106.148337 | 2,426,030 | 6
    Cube A | E | 61.939073 | 3,794,113 | 3,499
    Cube A | F | 90.721171 | 4,293,420 | 5,584
    Now my question is: why is there such a huge difference in the number of records transported for cube A compared to case 1? The input criteria are the same in both cases and the result output matches. There is no change in the number of records selected for cube A; it is 8,087,502 in both cases.
    Can someone please clarify this difference in the records being selected?

    Hi Jay,
    Thanks for sharing your analysis.
    The only reason I can think of logically is that BWA has the information from both the E and F tables in one place, and hence after selecting the records it is able to aggregate them and transport the aggregated records to OLAP.
    In the second case, since the E and F tables are separate, aggregation might be happening at OLAP, and hence you see a larger number of records.
    Our Experts in BWA forum might be able to answer in a better way, if you post this question over there.
    Thanks,
    Krishnan

  • How the data is entered in the customized table

    Hi,
    In an implementation scenario, when we create a generic extraction, how is the data entered in the customized table if it is a huge amount of data (around 5,000 records)?
    Regards,
    Vivek

    Hi Vivek,
    Follow the steps below:
    1. Go to RSO2.
    Choose the DataSource type from the three below:
    a). Transaction Data
    b). Master data Attributes
    c). Master data Text
    2.Specify Application component(SD/MM..)
    3. There are three extraction methods to fill the DataSource.
    4. Select the extraction method that extracts the data from a transparent table or database view.
    5. If you select Extraction from View, then we have to create the view.
    a). Specify the view name.
    b). Choose the view type (database view) from the views mentioned below:
    i). Database view.
    ii). Projection view.
    iii). Maintenance view.
    iv). Help view.
    6. Specify Tables and Join Conditions and define view fields.
    7. Assign the view to the DataSource.
    8. Once you specify the view in the DataSource, the extract structure will be generated.
    9. You can check the data in RSA3.
    Regards,
    Suman
