Rearrange between dataset and datagrid

I am binding between a DataSet and a DataGrid and all works fine:
I can see all the fields of my dataset in the DataGrid (name,
last_name, phone, fax, ...).
I now want to show only the fields name, fax and phone in my
DataGrid, and I have tried changing the formatter in the data binding.
I use
formatter: Rearrange fields
formatter option:
{"name=<name>;fax_number=<fax>;telephone=<phone>"}
but this fails.
Does someone know an easy way to do this?
Thanks

download "Dataset connection wizard" extension.
its simple useful and visual ;)

Similar Messages

  • Array of objects into dataset and then to datagrid

I have discovered amfphp (http://amfphp.org). It's a really
cool way to move complex data objects from PHP to Flash and back.
    quote:
So I am successfully creating an array of items from my
database... an item that would otherwise be created like this:
var items:Array = new Array();
items[0] = new Object();
items[0].id = 7;
items[0].name = "Item 1";
items[0].styleNumber = "001";
items[1] = new Object();
// etc.
    I put DataSet and DataGrid objects on my stage called
    productDataSet and productDataGrid, respectively, and tried this:
    quote:
this.productDataSet.items = re.result;
trace('dataSet length:' + this.productDataSet.getLength());
this.productDataGrid.dataProvider = this.productDataSet.dataProvider;
The trace appears to work correctly (several hundred
items), and the DataGrid shows my object property names (albeit in
reverse order) and appears to have several hundred items in it,
but there is NO DATA THERE. All the list items are blank.
I've had a bit of luck working with the Component Inspector,
creating bindings and such, but I'm hampered by my incomplete
understanding of how it works. All the tutorials I've found
instruct me to import an XML sample to create a schema. Since I
have no XML, I cannot create one.
    I want to do just a couple of things:
1) put my data in the DataSet properly, ideally in one fell
swoop (I *think* I've done this correctly above);
2) attach the DataGrid to the DataSet so that when I sort or
filter the data set, the DataGrid shows the results, and when I
select an item in the DataGrid, the DataSet knows that it is the
currently selected item;
3) hide the 'id' field in the DataGrid and display
user-friendly names for the columns: "Style Number" rather than
"styleNumber".
Any help would be much appreciated. I *think* this is all
about understanding schemas but I don't really know.

    That *is* a very interesting article. Unfortunately, it
    doesn't mention anything about DataSets. I haven't had any trouble
    getting my data into my flash application. I've also been able to
    put it into a DataGrid pretty easily (although not quite as
    elegantly as that example did).
The problem is that I'm having issues when I introduce a
DataSet for filtering. I can't get the data from the DataSet into
the DataGrid like I want it. I haven't been able to hide the 'id'
column of my data, and the column names unfortunately are *exactly*
what the Object property names are. I was hoping to put
user-friendly column headers on there, like 'Product Name' rather
than 'name' or 'Style Number' rather than 'styleNumber'.
    Also, the concept of a schema is still somewhat beyond me. I
    tried changing the schema for my productDataSet to this and I got
the data to display. I added 'name', 'styleNumber', and 'id':
quote:
dataProvider : Array
deltaPacket : DeltaPacket
items : Array
selectedIndex : Number
name : String
styleNumber : String
id : Integer
I have tried radically different schemas (schemae?) that seem
to also get the data in there, but I don't really understand what
I'm doing here, and I feel like I'm asking for trouble not knowing
how this stuff really works.
And, like I said before, I want to hide certain information
that's in the DataSet so that it doesn't get displayed. I'd also
like to have some user-friendly names rather than the actual
field/property identifiers.

  • DataSet and Tree component binding

    Could you please tell me how you set up data binding between
    DataSet and Tree component?

    download "Dataset connection wizard" extension.
    its simple useful and visual ;)

  • Password Field not mapped between Request and Provisioning Form

Hi to all. I'm working with OIM 11g, and I've run into a strange issue. I'm not sure I'm doing things properly, so let me explain my case. In my installation I've got the SSH connector, which is correctly connected to the physical resources. I've loaded the resource dataset ProvisionResourceSSH User bundled with the connector. Consider now that the user "goofy", with the "ALL USERS" role, makes a Provision Resource SSH User request (request-based provisioning). He fills in all the fields in the appropriate manner, but when OIM triggers the "Create User" provisioning task, after the required approval process, the password field is always blank (although goofy filled it in!!!).
I thought: "OK, it seems to be a role problem." And indeed, if goofy also has the role "REQUEST ADMINISTRATORS", the provisioning form shows the password field correctly valued (as goofy stated in his request).
Note that all the fields are correctly mapped between the request dataset and the provisioning form (I'm using the original dataset and the original provisioning form installed by the connector). So all the other fields filled in by goofy on the request form (request-based provisioning) are correctly passed to the provisioning form. All the fields, except for the password.
Am I doing something wrong? How could I make it possible to pass the password data filled in on the request to the provisioning form, even if the requester does not have the "REQUEST ADMINISTRATORS" role?
    Thank you in advance for the help.

This sure seems goofy! ;-) ... Can you try giving ALL USERS all the permissions on the resource object and the process form and test it out? Also check from the backend, at the database, whether the table has NULL for the password field. What's the type of the password field in the dataset and the process form: Encrypted/Secret at both ends, or a mismatch? Try making them plain text in both places as well.
    -Bikash

  • Difference between sapscripts and BDCs

What is the difference between SAPscripts and BDCs?

BDC is for data communication between SAP and non-SAP systems (and vice versa),
whereas a SAPscript is a business layout (which we create).
BDC help:
These are the only 3 methods we mostly use in BDC.
CALL DIALOG is outdated; it dates from the initial stages of SAP.
    BDC:
    Batch Data Communication (BDC) is the process of transferring data from one SAP System to another SAP system or from a non-SAP system to SAP System.
    Features :
    BDC is an automatic procedure.
This method is used to transfer large amounts of data that are available in an electronic medium.
    BDC can be used primarily when installing the SAP system and when transferring data from a legacy system (external system).
    BDC uses normal transaction codes to transfer data.
    Types of BDC :
    CLASSICAL BATCH INPUT (Session Method)
    CALL TRANSACTION
    BATCH INPUT METHOD:
This method is also called the 'CLASSICAL METHOD'.
    Features:
    Asynchronous processing.
    Synchronous Processing in database update.
    Transfer data for more than one transaction.
    Batch input processing log will be generated.
    During processing, no transaction is started until the previous transaction has been written to the database.
    CALL TRANSACTION METHOD :
    This is another method to transfer data from the legacy system.
    Features:
    Synchronous processing. The system performs a database commit immediately before and after the CALL TRANSACTION USING statement.
    Updating the database can be either synchronous or asynchronous. The program specifies the update type.
    Transfer data for a single transaction.
    Transfers data for a sequence of dialog screens.
    No batch input processing log is generated.
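A minimal CALL TRANSACTION sketch, hedged: the transaction code 'MM01' and the way the BDCDATA table gets filled are examples, not from the original post:
DATA: lt_bdcdata TYPE TABLE OF bdcdata,
      lt_msgs    TYPE TABLE OF bdcmsgcoll.
* ... fill lt_bdcdata with screen entries (PROGRAM/DYNPRO/DYNBEGIN)
* and field entries (FNAM/FVAL) for one transaction ...
CALL TRANSACTION 'MM01' USING lt_bdcdata
     MODE 'N'                  " 'N' = no screen display
     UPDATE 'S'                " 'S' = synchronous update
     MESSAGES INTO lt_msgs.
IF sy-subrc <> 0.
  " evaluate lt_msgs for error handling
ENDIF.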
    For BDC:
    http://myweb.dal.ca/hchinni/sap/bdc_home.htm
    https://www.sdn.sap.com/irj/sdn/wiki?path=/display/home/bdc&
    http://www.sap-img.com/abap/learning-bdc-programming.htm
    http://www.sapdevelopment.co.uk/bdc/bdchome.htm
    http://www.sap-img.com/abap/difference-between-batch-input-and-call-transaction-in-bdc.htm
    http://help.sap.com/saphelp_47x200/helpdata/en/69/c250684ba111d189750000e8322d00/frameset.htm
    http://www.sapbrain.com/TUTORIALS/TECHNICAL/BDC_tutorial.html
Check these links:
    http://www.sap-img.com/abap/difference-between-batch-input-and-call-transaction-in-bdc.htm
    http://www.sap-img.com/abap/question-about-bdc-program.htm
    http://www.itcserver.com/blog/2006/06/30/batch-input-vs-call-transaction/
    http://www.planetsap.com/bdc_main_page.htm
Call transaction or session method?
These are the function modules used for the session method:
BDC_OPEN_GROUP
BDC_INSERT
BDC_CLOSE_GROUP
BDC_DELETE_SESSION (to delete a session)
You can schedule the execution of BDC sessions using the program RSBDCSUB.
The call transaction method, by contrast, reads the legacy file itself using
OPEN DATASET and
CLOSE DATASET.
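A minimal session-method skeleton, hedged: the session name and transaction code are examples, and filling lt_bdcdata is elided:
DATA lt_bdcdata TYPE TABLE OF bdcdata.
CALL FUNCTION 'BDC_OPEN_GROUP'
  EXPORTING
    client = sy-mandt
    group  = 'ZMM01_LOAD'      " session name (example)
    user   = sy-uname
    keep   = 'X'.              " keep the session after processing
* ... fill lt_bdcdata for one transaction, then:
CALL FUNCTION 'BDC_INSERT'
  EXPORTING
    tcode     = 'MM01'         " transaction code (example)
  TABLES
    dynprotab = lt_bdcdata.
CALL FUNCTION 'BDC_CLOSE_GROUP'.
* Process the session in SM35 or schedule it via report RSBDCSUB.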
Reward if it helps you.
Vijay Pawar

  • What is the programming (ABAP) difference between Unicode and non Unicode?

    What is the programming(ABAP) difference between Unicode and non Unicode?
    Edited by: NIV on Apr 12, 2010 1:29 PM

Hi
The difference when programming in Unicode versus non-Unicode is that you have to make some adjustments to your "Z" programs so that they comply with the Unicode standard.
In the past, SAP developments used several systems to encode the characters of different alphabets, for example ASCII, EBCDIC, or double-byte code pages.
These coding systems mostly use 1 byte per character, which can encode up to 256 characters. However, alphabets such as Japanese or Chinese use a larger number of characters. That's why those languages use double-byte code pages, which use 2 bytes per character.
In order to unify the different alphabets, it was decided to implement a single coding system that uses 2 bytes per character regardless of the language concerned. That system is called Unicode.
Unicode is also the official way to implement ISO/IEC 10646 and is supported in many operating systems and all modern browsers.
The way to verify whether a program has been adjusted is to run the UCCHECK transaction. Additionally, you can use the syntax check (making sure that the Unicode check is active in the program attributes).
The main adjustments/replacements are (examples):
Old: ASSIGN H-SY-INDEX TEXT TO <F1>.
New: ASSIGN H-SY-INDEX TEXT(*) TO <F1>.
Old: DATA INIT(50) VALUE '/'.
New: DATA INIT(1) VALUE '/'.
Old: DESCRIBE FIELD text LENGTH lengh2.
New: DESCRIBE FIELD text LENGTH lengh2 IN CHARACTER MODE.
Old: T_ZSMY_DEMREG_V1 = record_tab.
New: MOVE-CORRESPONDING record_tab TO t_zsmy_demreg_v1.
Old: escape_trick = hot3.
New: escape_trick-x1 = hot3.
Old: itab_txt TYPE wt.
New: itab_txt TYPE TABLE OF textpool.
Old: DATA: string3(3) TYPE x VALUE '3'.
New: DATA: string3(6) TYPE c VALUE '3'.
Old: OPEN DATASET file_name IN TEXT MODE.
New: OPEN DATASET file_name FOR INPUT IN TEXT MODE ENCODING NON-UNICODE.
or
New: OPEN DATASET file_name FOR INPUT IN TEXT MODE ENCODING DEFAULT.
Old: TRANSLATE record FROM CODE PAGE a_codepage.
New: TRANSLATE record USING a_codepage.
Old: CALL FUNCTION 'DOWNLOAD'.
New: CALL METHOD cl_gui_frontend_services=>gui_download.
Old: CALL FUNCTION 'WS_DOWNLOAD'.
New: CALL METHOD cl_gui_frontend_services=>gui_download.
Old: CALL FUNCTION 'UPLOAD'.
New: CALL METHOD cl_gui_frontend_services=>gui_upload.
Old: CALL FUNCTION 'WS_UPLOAD'.
New: CALL METHOD cl_gui_frontend_services=>gui_upload.
Old: PERFORM APPEND_XFEBRE USING HEAD+2.
New: PERFORM APPEND_XFEBRE USING HEAD+2(98).
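As a compilable illustration of the OPEN DATASET adjustment above (the file path is an example, not from the original post):
DATA file_name TYPE string VALUE '/tmp/legacy.txt'.  " example path
* Unicode-compliant: direction and encoding are stated explicitly.
OPEN DATASET file_name FOR INPUT IN TEXT MODE ENCODING NON-UNICODE.
IF sy-subrc = 0.
  CLOSE DATASET file_name.
ENDIF.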
Best Regards
    Fabio Rodriguez

What is the difference between DSO and InfoCube

Hello,
Kindly tell me, what is the difference between a DSO and an InfoCube?
And please tell me how to make the decision: in which case should we use a DSO, and in which case should we use an InfoCube?

Hi,
    DataStore object serves as a storage location for consolidated and cleansed transaction data or master data on a document (atomic) level.
    This data can be evaluated using a BEx query.
    A DataStore object contains key fields (for example, document number/item) and data fields that can also contain character fields (for example, order status, customer) as key figures. The data from a DataStore object can be updated with a delta update into InfoCubes and/or other DataStore objects or master data tables (attributes or texts) in the same system or across different systems.
    Unlike multidimensional data storage using InfoCubes, the data in DataStore objects is stored in transparent, flat database tables. The system does not create fact tables or dimension tables.
    Use
    The cumulative update of key figures is supported for DataStore objects, just as it is with InfoCubes, but with DataStore objects it is also possible to overwrite data fields. This is particularly important with document-related structures. If documents are changed in the source system, these changes include both numeric fields, such as the order quantity, and non-numeric fields, such as the ship-to party, status and delivery date. To reproduce these changes in the DataStore objects in the BI system, you have to overwrite the relevant fields in the DataStore objects and set them to the current value. Furthermore, you can use an overwrite and the existing change log to render a source delta enabled. This means that the delta that is further updated to the InfoCubes, for example, is calculated from two successive after-images.
    An InfoCube describes (from an analysis point of view) a self-contained dataset, for example, for a business-orientated area. You analyze this dataset in a BEx query.
    An InfoCube is a set of relational tables arranged according to the star schema: A large fact table in the middle surrounded by several dimension tables.
    Use
    InfoCubes are filled with data from one or more InfoSources or other InfoProviders. They are available as InfoProviders for analysis and reporting purposes.
    Structure
    The data is stored physically in an InfoCube. It consists of a number of InfoObjects that are filled with data from staging. It has the structure of a star schema.
    The real-time characteristic can be assigned to an InfoCube. Real-time InfoCubes are used differently to standard InfoCubes.
Related threads:
ODS versus InfoCubes in a typical project scenario
ODS
Why do we use ODS?
Why are PSA & ODS necessary?
    Hope this helps,
    Regards,
    CSM Reddy

  • Difference Between Aggregates and Compression

Hi,
Can you tell me what the difference is between aggregates and compression?
I know that once data is compressed it is no longer available for request-wise deletion,
and it moves from the F table to the E table.
Aggregates mean the data moves from the cube into aggregates (baby cubes).
But my query is: as both of them aggregate the data, which of them should be used, and in what situation?
I hope you understood my query.
Regards,
Naresh

    Hi,
    An aggregate is a materialized, aggregated view of the data in an InfoCube. In an aggregate, the dataset of an InfoCube is saved redundantly and persistently in a consolidated form into the database.
    Aggregates make it possible to access InfoCube data quickly in Reporting. Aggregates serve, in a similar way to database indexes, to improve performance.
We create aggregates especially in the following cases:
    The execution and navigation of query data leads to delays with a group of queries.
    You want to speed up the execution and navigation of a specific query.
    You often use attributes in queries.
    You want to speed up reporting with characteristic hierarchies by aggregating specific hierarchy levels.
For more info on aggregates, go through the link below:
    https://help.sap.com/saphelp_sem320bw/helpdata/en/c5/40813b680c250fe10000000a114084/frameset.htm
1. Compression creates a new cube that has consolidated and summed duplicate information.
2. When you compress, BW does a GROUP BY on dimensions and a SUM on measures; this eliminates redundant
information.
3. Compressed InfoCubes require less storage space and are faster for retrieval of information.
4. Once a cube is compressed, you cannot alter the information in it. This can be a big problem if there
is an error in some of the data that has been compressed.
For more info, go through the link below:
    http://www.sap-img.com/business/infocube-compression.htm
    Regards,
    Marasa.

  • Difference between collect and move stmts

Hi,
can anyone please explain:
1. the difference between the COLLECT and MOVE statements, and
2. BAdI and user exit.
Gowri

    Hi,
1. COLLECT: COLLECT is used to create unique or compressed datasets. The key fields are the default key fields of the internal table itab.
If you use only COLLECT to fill an internal table, COLLECT makes sure that the internal table does not contain two entries with the same default key fields.
If, besides its default key fields, the internal table contains number fields, the contents of these number fields are added together if the internal table already contains an entry with the same key fields.
If the default key of an internal table processed with COLLECT is blank, all the values are added up in the first table line.
If you specify 'wa INTO', the entry to be processed is taken from the explicitly specified work area wa; if not, it comes from the header line of the internal table itab.
After COLLECT, the system field SY-TABIX contains the index of the (existing or new) table entry whose default key fields match those of the entry to be processed.
    COLLECT can create unique or compressed datasets and should be used precisely for this purpose. If uniqueness or compression are unimportant, or two values with identical default key field values could not possibly occur in your particular task, you should use APPEND instead. However, for a unique or compressed dataset which is also efficient, COLLECT is the statement to use.
    If you process a table with COLLECT , you should also use COLLECT to fill it. Only by doing this can you guarantee that the internal table will actually be unique or compressed, as described above and COLLECT will run very efficiently.
    If you use COLLECT with an explicitly specified work area, it must be compatible with the line type of the internal table.
With MOVE, the data is simply copied into another data field, without any summation.
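A short sketch of COLLECT versus a plain APPEND/MOVE; the table and field names are made up for illustration:
TYPES: BEGIN OF ty_sales,
         matnr TYPE c LENGTH 10,  " character-like field: part of the default key
         qty   TYPE i,            " numeric field: summed by COLLECT
       END OF ty_sales.
DATA: lt_sales TYPE STANDARD TABLE OF ty_sales,
      ls_sales TYPE ty_sales.
ls_sales-matnr = 'MAT1'. ls_sales-qty = 5.
COLLECT ls_sales INTO lt_sales.   " new key: line is inserted
ls_sales-qty = 3.
COLLECT ls_sales INTO lt_sales.   " same key: qty is added, now 8
APPEND ls_sales TO lt_sales.      " plain copy: a second 'MAT1' line appears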
    2.
Difference between BAdI and user exit:
i) BAdIs can be used any number of times, whereas user exits can be used only one time.
Ex: if you assign a user exit to a project (in CMOD), then you cannot assign the same one to another project.
ii) BAdIs are OO (object-oriented) based.
    A. BAdI Definition
    1. SE18
    2. Enter the name for the BAdI to be created in customer namespace and press "Create".
3. Enter a definition for your BAdI and, on the interface tab, enter a name for the BAdI interface. SAP proposes a name, and it is pretty good. Meanwhile a BAdI class is also created, which is not our concern here.
    e.g for "ZTEST", SAP proposes "ZIF_EX_TEST" for the interface and "ZCL_EX_TEST" for the class.
    4. Save your BAdI.
5. Double-click on the interface name. It will take you to a Class Builder session where you implement your interface. If you are not familiar with the Class Builder: it's a bit like the Function Builder, and it will be easy to discover its procedure.
    6. Save and activate your interface.
    B. Calling your BAdI from an application program
1. Declare a reference variable with reference to the Business Add-In interface.
e.g. DATA exit_ref TYPE REF TO zif_ex_test.
2. Call the static method GET_INSTANCE of the service class CL_EXITHANDLER. This returns an instance of the required object.
e.g.
CALL METHOD cl_exithandler=>get_instance
  CHANGING
    instance = exit_ref.
    3. After those two steps, you can now call all of the methods of the BAdI where it is required in your program. Make sure you specify the method interfaces correctly.
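Putting steps 1-3 together, a minimal hedged sketch; the interface name zif_ex_test comes from the naming example above, while the method name do_check is purely hypothetical:
DATA exit_ref TYPE REF TO zif_ex_test.
* Obtain an instance of the active BAdI implementation:
CALL METHOD cl_exithandler=>get_instance
  CHANGING
    instance = exit_ref.
* Call a BAdI interface method where required (do_check is hypothetical):
* exit_ref->do_check( ).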
    C. BAdI Implementations
    1. SE19
    2. Enter the name for the BAdI implementation to be created in customer namespace and press "Create".
    3. It will request the BAdI definition name to which this implementation will be tied.
    4. Enter a definition for your implementation and on the interface tab enter a name for the implementing class. Again SAP proposes a name and it is pretty good.
    e.g for "ZIMPTEST", SAP proposes "ZCL_IM_IMPTEST".
    5. Save your implementation.
    6. To implement a method, just double-click on the method name and you will be taken to the Class Builder to write the code for it. Here you redefine the BAdI interface methods.
    7. You must activate your implementation to make it executable. You can only activate or deactivate an implementation in its original system without modification. The activation or deactivation must be transported into subsequent systems
    Regards

  • What is the difference between Interface and Conversion?

Hi friends,
Can anyone tell me what the difference is between an interface and a conversion, in detail?
Rewarded with points.
Thanks & Regards,
Naren

Hi,
An interface can be outbound, i.e. writing data to the
application server using OPEN DATASET and TRANSFER,
or downloading data (gui_download);
or inbound, i.e. reading data from the application server using
OPEN DATASET and READ DATASET, or uploading data
using gui_upload.
A conversion is a batch data communication (BDC) method
where legacy data is uploaded into SAP.
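A minimal outbound sketch, hedged; the file path and record layout are examples:
DATA: lv_file TYPE string VALUE '/tmp/orders.txt',  " example path
      lv_line TYPE string.
* Outbound interface: write a line to a file on the application server.
OPEN DATASET lv_file FOR OUTPUT IN TEXT MODE ENCODING DEFAULT.
IF sy-subrc = 0.
  lv_line = 'ORDER;4711;2024'.   " example record
  TRANSFER lv_line TO lv_file.
  CLOSE DATASET lv_file.
ENDIF.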
    Regards
    Amole

  • What's the relationship between Flex and AIR?

What's the relationship between Flex and AIR?
I only know that Flex is a framework and AIR is a runtime.
Can anyone tell me more about these two in detail? Thanks a lot!

    AIR is a runtime that supports a superset of the Flash Player API. You use it to run mobile and desktop applications, as opposed to browser apps.
Flex is a set of technologies for building either AIR apps or browser apps. It includes a framework of runtime classes (e.g., Button, DataGrid, etc.) to use in your applications, an SDK with a command-line compiler, and an IDE called Flash Builder (formerly Flex Builder) that supports intelligent editing, a design view, and a debugger.
    So, a brief statement of their relationship is that you can use Flex to build AIR apps.
    Gordon Smith
    Adobe Flex SDK Team

  • Difference between COPA and Logistic Delta Mechanisam

Dear All,
May I know the difference between the COPA and logistics delta mechanisms and how they work?
In our production system, whenever a logistics delta fails, we take action and repeat it, and it then fetches the delta on the same day;
whereas with the COPA delta, when it fails, we take action, and when we repeat it, it fetches zero records, and the delta only comes the next day.
What is the difference between the two?
What exactly is happening in R/3 for both of these delta mechanisms?
Thanks in advance,
K Janardhan Kumar

Hi Guru,
I will explain delta extraction with timestamps in general, with an example:
A timestamp is generally in yyyymmddhhmmss format.
Let's assume the delta runs daily at 09:00 in the morning. The last delta ran at 09:00 yesterday. When the delta runs today, it picks up the data posted between
09:01 (yesterday) and 09:00 (today).
If a record is posted at 09:10 today, it will not be picked up by today's delta (because it is posted after 09:00).
Hope you are now clear about timestamps.
In the case of COPA, we use the timestamp as the tool to identify the delta.
Now, the COPA delta mechanism has one more concept, the "safety delta". Let's ask ourselves why we should use this.
SAP's answer is: "The reason for the selection of the safety delta is that there are possible level differences of the clocks on different application servers. If the delta is selected on a level that is too low, it is possible that records
are not taken into account when uploading into the BW."
The 'safety delta' is usually set to 30 minutes during the initialization/delta upload (default).
This means that only records that are already half an hour old at the starting point of the upload are loaded into BW.
Ex:
We have made the following settings for COPA:
timestamp = 09:00
safety delta = 30 mins
Now when you run the daily delta, it picks up the data posted between (yesterday's timestamp minus the safety delta), i.e. 08:30 instead of 09:00 (yesterday),
and 09:00 today.
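As a small sketch of the safety-delta arithmetic in ABAP, assuming the standard class CL_ABAP_TSTMP; the 1800 seconds correspond to the 30-minute default above:
DATA: lv_now   TYPE timestamp,
      lv_lower TYPE timestamp.
GET TIME STAMP FIELD lv_now.
* Lower bound of the selection window = current time stamp minus 30 minutes:
lv_lower = cl_abap_tstmp=>subtractsecs( tstmp = lv_now
                                        secs  = 1800 ).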
Check OSS note 502380 for a better understanding of the COPA delta mechanism.
    Symptom
    There is some confusion about how the delta process works with CO-PA DataSources and the old logic (time stamp administration in the Profitability Analysis) or there are data inconsistencies between the BW and OLTP systems.
    As of PlugIn Release PI2004.1 (Release 4.0 and higher), a new logic (generic delta) is used during the delta process. Old DataSources can be converted to the new logic. New DataSources automatically use the new logic. With the new logic, the time stamp administration is located in the Service-API and no longer in the Profitability Analysis.
    This note refers only to DataSources with the old logic.
    Reason and Prerequisites
    Administration of the delta process for CO-PA DataSources partly occurs in the OLTP system. In particular, the time up to which the data was already extracted is stored in the DataSource control tables (old logic).
    Solution
    Since the control tables for the delta process for the extractor are managed in the OLTP, the following restrictions apply:
    1. There should only ever be one valid initialization package for a DataSource. Data inconsistencies may occur between BW and OLTP if, for example, you schedule an Init for various selections for the same DataSource and data is posted between the individual initializations to the Profitability Analysis. The reason for this is that each time the time stamp for the DataSource is initialized in the OLTP, the current value (minus the safety delta, see note 392876) is reset. Records from a previous selection are therefore no longer selected with the next delta upload if they were posted before the last initialization run with another selection.
    2. Initialization can always only be carried out from one system. Inconsistencies may occur if the same DataSource is used from several BW systems and if data is posted between the initialization runs. This is because the time stamp for the replication status is reset for every initialization or delta upload in the OLTP. Records may therefore be missing in the system that was first updated if updates were made in the result area before the Init or delta run. In the system that was the second one to be updated, the records that were loaded into the first system are missing for a delta upload.
    In the case of large datasets, you should therefore perform initialization either using several DataSources or with a combination of one or more full uploads and an init upload. Full uploads without errors are possible for closed periods/fiscal years because no additional changes are made to this data. Initialization should be performed, for example, from the current fiscal year. The full updates for the closed periods can also be split in time. If required, more characteristics, for example, the action type, can also be used for the selection. For information on the period selection, see note 425844
Hope you are clear now!
    Cheers
    Swapna.G
    Message was edited by:
            swapna gollakota

  • Connection between Portal and MDM

Hi all,
Can anybody help me with how to establish the connection between the portal and MDM, with a step-by-step procedure?
I am new to this combination.
I have worked with the portal before, but not with MDM.
Regards,
Suresh Babu

Hello Suresh,
To my knowledge, MDM iViews have the advantage of fulfilling your need for connectivity; moreover, they are not complex in structure and are therefore not restricted by any underlying schema structure. As opposed to regular iViews in other systems, which are embedded into the pages on which they appear, MDM iViews are more generic and granular in nature, enabling their placement on a page to be more adaptable.
The SAP Enterprise Portal (SAP EP) offers templates for creating the iViews. Some examples of these templates are available at the link:
    [http://help.sap.com/saphelp_mdm550/helpdata/en/45/c9f8cac2124ebee10000000a11466f/content.htm]
    Procedure:
1. Create a new page using the page wizard.
2. To see the iViews you created on the new page, you need to link them to the page. Right-click each of the iViews that you created and, from the context menu, select Add iView to Page → Delta Link.
3. Double-click the new page you created and select:
Page Content, to view a list of the iViews that have been linked to this page;
Page Layout, to view the layout currently assigned to this page. The iViews can be rearranged using the drag & drop function.
4. The visual structure of a page can be changed with the Page Layout function. Select Define Layout to reset the layout of the page.
5. Select Preview to view the page showing how the iViews have been arranged. The preview simulates what an end user sees at runtime.
    Detailed pictorial presentation available at:
    [http://help.sap.com/saphelp_mdm550/helpdata/en/45/c8e2fa5f0c2e97e10000000a155369/content.htm]
    Hope you will find it helpful.
    Regards,
    Krutarth

Difference between local and sequential files

Hi,
what is the difference between local and sequential files?

Sequential files are files with a specific format. You upload data into SAP using sequential files.
ABAP allows you to use sequential files located on the application server or the presentation server. You can use these files to buffer data, or as an interface between local programs and the R/3 System.
Sequential files are the files which are stored on the application server.
To read them or to put data into them, we use the DATASET statements for transferring data:
OPEN DATASET is used to open the file,
READ DATASET for reading from the file,
TRANSFER for transferring/writing the data,
CLOSE DATASET to close the dataset.
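A minimal inbound read loop as a sketch; the file path is an example:
DATA: lv_file TYPE string VALUE '/tmp/orders.txt',  " example path
      lv_line TYPE string.
OPEN DATASET lv_file FOR INPUT IN TEXT MODE ENCODING DEFAULT.
IF sy-subrc = 0.
  DO.
    READ DATASET lv_file INTO lv_line.
    IF sy-subrc <> 0.        " sy-subrc = 4 signals end of file
      EXIT.
    ENDIF.
    " ... process lv_line ...
  ENDDO.
  CLOSE DATASET lv_file.
ENDIF.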
A local file is nothing but a file on your local system, like C:\ or D:\.

  • Difference between Null and null?

    What is the difference between null and NULL?
    When is each used?
    Thanks,

veryConfused wrote:
There is a null in Java, but no NULL. null means no value. However, when assigning a value, the following is different:
The empty String has no special role here. null means the reference type is not assigned (doesn't refer) to a specific object; the empty String is just another object. So pointing it out as something special, when it actually isn't at all (no more special than new Integer(0) or new Object[0]), just adds to the confusion.
