Caching system for loading reference table data

Hi,
I want to load small reference table data into memory and later query that data in memory. The table won't be updated frequently. I don't want to load only the query results into memory, which I know I can do with iBATIS or Hibernate; I want to load the reference table data into memory once and use the in-memory data for querying. Suppose I have a Department table with 100 rows and the columns DeptNo, DeptName and NoOfEmployees. I want to load all the records into memory and later run queries like getDepartments where NoOfEmployees > 50; the queries can be dynamic. Is this possible with any Java caching system?

They allow you to manipulate collections (duh) of objects, typically (but not exclusively) using a comparator or some such. The sort of operation you're describing can be carried out reasonably swiftly by merely iterating over the items; it is unlikely to be as slow as a database round-trip.
If you need something particularly sophisticated then you can either craft a suitable data structure to contain it, or go for the generic solution of an embedded database (and if you go for this option, you might want to apologise for biting my head off).
Or, indeed, you can use a query cache if you need no dynamic capabilities whatsoever (i.e. query with a <50 constraint twice and the second one is fast, but do it again with a <49 constraint and it takes just as long as the first one).
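To make the "just iterate" suggestion concrete, here is a minimal sketch for the Department example from the question. The class and field names are simply taken from the question; this is a plain load-once map plus predicate filtering, not tied to any particular caching library:

import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Predicate;
import java.util.stream.Collectors;

// Plain value object mirroring the Department reference table from the question.
class Department {
    final int deptNo;
    final String deptName;
    final int noOfEmployees;

    Department(int deptNo, String deptName, int noOfEmployees) {
        this.deptNo = deptNo;
        this.deptName = deptName;
        this.noOfEmployees = noOfEmployees;
    }
}

// Loads the table once and answers dynamic queries by iterating in memory.
class DepartmentCache {
    private final Map<Integer, Department> byDeptNo = new ConcurrentHashMap<>();

    // Call once at startup with the rows read from the database.
    void load(List<Department> rows) {
        for (Department d : rows) {
            byDeptNo.put(d.deptNo, d);
        }
    }

    Department byKey(int deptNo) {
        return byDeptNo.get(deptNo);
    }

    // Dynamic queries are just predicates evaluated over the cached rows.
    List<Department> query(Predicate<Department> condition) {
        return byDeptNo.values().stream()
                .filter(condition)
                .collect(Collectors.toList());
    }
}

class Demo {
    public static void main(String[] args) {
        DepartmentCache cache = new DepartmentCache();
        cache.load(List.of(
                new Department(10, "Sales", 80),
                new Department(20, "HR", 12)));

        // "getDepartments where NoOfEmployees > 50" from the question
        List<Department> big = cache.query(d -> d.noOfEmployees > 50);
        System.out.println(big.size()); // prints 1
    }
}

For a 100-row table this scan is essentially free; if you later need expiry, eviction or distribution, the same load-once pattern maps onto libraries such as Ehcache or any JCache provider.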

Similar Messages

  • Procedure for creating transparent table, data element and domain

    Hi,
    Can anybody let me know the procedure for creating a transparent table, data element and domain?
    Thanks,
    Mahathi

    Hi
    Database table and its components
    A database table is the central data structure of the ABAP/4 data dictionary.
    The structures of the objects of application development are mapped in tables on the underlying relational database.
    The attributes of these objects correspond to fields of the table.
    A table consists of columns (fields) and rows (entries). It has a name and different attributes, such as delivery class and maintenance authorization.
    A field has a unique name and attributes; for example it can be a key field.
    A table has one or more key fields, called the primary key.
    The values of these key fields uniquely identify a table entry.
    You must specify a reference table for fields containing a currency (data type CURR) or quantity (data type QUAN). It must contain a field (reference field) with the format for currency keys (data type CUKY) or the format for units (data type UNIT). The field is only assigned to the reference field at program runtime.
    The basic objects for defining data in the ABAP Dictionary are tables, data elements and domains. The domain is used for the technical definition of a table field (for example field type and length) and the data element is used for the semantic definition (for example short description).
    A domain describes the value range of a field. It is defined by its data type and length. The value range can be limited by specifying fixed values.
    A data element describes the meaning of a domain in a certain business context. It contains primarily the field help (F1 documentation) and the field labels in the screen.
    A field is not an independent object. It is table-dependent and can only be maintained within a table.
    You can enter the data type and number of places directly for a field. No data element is required in this case. Instead the data type and number of places is defined by specifying a direct type.
    The data type attributes of a data element can also be defined by specifying a built-in type, where the data type and number of places is entered directly.
    <b>Two Level Domain Example</b>
    A domain defines a field technically and therefore it may be used at different business levels.
    A data element describes the meaning of a domain in a certain business context.
    A domain, however, is used for the technical definition of a table field (for example field type and length).
    Therefore, although a take-off airport (data element S_FROMAIRP) would have a different business meaning from an airport where a plane lands (data element S_TOAIRP), they could still have the same domain(here S_AIRPID) because technically we could assign the same number of characters whether the airport is a take-off or a landing airport.
    <b>Definitions of Table in Database</b>
    In SAP R/3 tables are defined as
    A) Transparent tables: All of the fields of a dictionary table correspond to a field in the real database table.
    B) Pooled tables: Different tables which are not linked to each other with a common key are combined into a TABLE POOL. Several logical tables thus exist as a single real database table.
    C) Cluster tables: Several tables linked by a common key may sometimes be combined by the data dictionary and made to exist on the database schema as a single table.
    SAP is evolving R/3 tables into transparent tables.
    <b>Elaboration on each of the definitions</b>
    A transparent table is automatically created on the database when it is activated in the ABAP Dictionary. At this time the database-independent description of the table in the ABAP Dictionary is translated into the language of the database system used.
    The database table has the same name as the table in the ABAP Dictionary. The fields also have the same name in both the database and the ABAP Dictionary. The data types in the ABAP Dictionary are converted to the corresponding data types of the database system.
    The order of the fields in the ABAP Dictionary can differ from the order of the fields on the database. This permits you to insert new fields without having to convert the table. When a new field is added, the adjustment is made by changing the database catalog (ALTER TABLE). The new field is added to the database table, whatever the position of the new field in the ABAP Dictionary.
    Tables can also reside on the database as Pooled tables or cluster tables
    Pooled Tables: Different tables which are not linked to each other with a common key can be combined into a Table Pool. The tables contained within this pool are called pooled tables. A table pool is stored in the database as a simple table. The table's data sets contain, in separate fields, the actual key for the data set to be stored, the name of the pooled table and the contents of the data set to be stored.
    Using this schema, several logical tables are combined into a single real database table. Although the data structure of each set is lost during the write to the table pool, it is restored during the read by the ABAP/4 Data Dictionary. The ABAP/4 Data Dictionary utilizes its meta-data to accomplish this.
    Since information must be prepared (defined) within the ABAP/4 Data Dictionary whenever such a table is read or written (or otherwise accessed), this process itself means these are not transparent tables.
    Cluster Tables: Occasionally, several tables may be linked by a common key. The ABAP/4 Data Dictionary can also combine these tables into a single table. Each data set of the real table within the database contains a key and in a single data field, several data sets of the subsequent table for this key.
    As mentioned above, these table types require special data handling, therefore they are not transparent tables.
    <b>Technical Settings in Dictionary</b>
    The data class logically defines the physical area of the database (for ORACLE the table space) in which your table should be created. If you choose the data class correctly, the table will automatically be created in the appropriate area on the database when it is activated in the ABAP Dictionary.
    The most important data classes are master data, transaction data, organizational data and system data.
    Master data is data that is rarely modified. An example of master data is the data of an address file, for example the name, address and telephone number.
    Transaction data is data that is frequently modified. An example is the material stock of a warehouse, which can change after each purchase order.
    Organizational data is data that is defined during customizing when the system is installed and that is rarely modified thereafter. The country keys are an example.
    System data is data that the R/3 System itself needs. The program sources are an example.
    Further data classes, called customer data classes (USER, USER1), are provided for customers. These should be used for customer developments. Special storage areas must be allocated in the database.
    The size category describes the expected storage requirements for the table on the database.
    An initial extent is reserved when a table is created on the database. The size of the initial extent is identical for all size categories. If the table needs more space for data at a later time, extents are added. These additional extents have a fixed size that is determined by the size category specified in the ABAP Dictionary.
    You can choose a size category from 0 to 4. A fixed extent size, which depends on the database system used, is assigned to each category.
    Correctly assigning a size category therefore ensures that you do not create a large number of small extents. It also prevents storage space from being wasted when creating extents that are too large.
    Modifications to the entries of a table can be recorded and stored using logging.
    To activate logging, the corresponding field must be selected in the technical settings. Logging, however, will only take place if the R/3 System was started with a profile containing the parameter 'rec/client'. Selecting the flag in the ABAP Dictionary alone is not sufficient to trigger logging.
    Parameter 'rec/client' can have the following settings:
    rec/client = ALL All clients should be logged.
    rec/client = 000[...] Only the specified clients should be logged.
    rec/client = OFF Logging is not enabled on this system.
    The data modifications are logged independently of the update. The logs can be displayed with the Transaction Table History (SCU3).
    Logging creates a 'bottleneck' in the system:
    Additional write access for each modification to tables being logged.
    This can result in lock situations even though the users are accessing different application tables!
    <b>Create transparent table</b>
    Go to transaction SE11. Enter name of table you want to create (beginning with Y or Z) and click on create pushbutton
    Enter the delivery class and the table maintenance criteria
    The delivery class controls the transport of table data when installing or upgrading, in a client copy and when transporting between customer systems.
    The display/maintenance indicator specifies whether it is possible to display/maintain a table/view using the maintenance tools Data Browser (transaction SE16) and table view maintenance (transactions SM30 and SM31).
    Enter the name of the table field and the data element. The system automatically populates the technical details for existing data elements.
    As far as possible, it is advisable to use existing data elements that fit the business requirements.
    However, we may create data elements if need be.
    To create a data element, simply double-click on it.
    Alternatively, create a data element by choosing the Data type radio button on the SE11 initial screen.
    <b>Create data element</b>
    The system prompts you to create a new data element.
    Choose the Yes pushbutton.
    Under the Data type tab, enter the domain name, which determines the technical characteristics of the field.
    Further characteristics tab: Allows you to specify a search help assigned to the data element.
    It also allows you to specify a parameter id which helps you populate a field from SAP memory.
    Field label: Can be assigned as prefixed text to a screen field referring to the ABAP Dictionary. The text is displayed on the screen in the logon language of the user (if the text was translated into this language).
    <b>Create domain</b>
    If the domain does not exist in the data dictionary, the system prompts you to create one.
    Enter the technical characteristics under the Definition tab. The value range allows you to restrict values at domain level.
    Value range tab:
    As explained in the section Consistency through input checks one can restrict the possible values for a field at domain level itself by either entering fixed values or by specifying a value table under the tab Value range.
    <b>Currency/Quantity fields in a table</b>
    A currency or a quantity field must be assigned a reference field from a reference table; this reference field contains the applicable quantity unit or currency key.
    A field containing currency amounts (data type CURR) must be assigned a reference field including the currency key (data type CUKY).
    A field containing quantity specifications (data type QUAN) must be assigned a reference field including the associated quantity unit (data type UNIT).
    <b>Create transparent table (continued)</b>
    Maintain the technical settings of the table by clicking on the Technical settings pushbutton.

  • ETL for loading Fact tables

    Hi all
    I have a fundamental question regarding loading records into the Fact tables. As you know we always load Dimensions before loading Facts. In our data model, for most of the Fact tables we have a special Dimension that has got the basic attributes in it, along with the source key of the records which are valid in terms of our data warehouse. The source of the Fact table includes the source tables which are involved in loading the Dimension as well.
    For example, for the Student Fact table we have a Student Dimension which is loaded before the Student Fact and hence has the most recent changes (the source keys of the records which are to be loaded into the DW). Now my question is which of the options below is more appropriate for loading the Student Fact table.
    Option 1 -  Load the Student Fact table using the same source tables that are used to load records into the Student Dimension, ignoring the fact that similar logic has already been applied in the Dimension job to get valid records from the source, and applying it again in the Fact job.
    Option 2 -  Inner-join the Student Dimension in the Student Fact ETL job, so it limits the number of source records (coming from the source system) to the validated records that we are interested in loading into the DW. This option makes loading the Fact table faster: by letting the Dimension contribute to loading its corresponding Fact, we don't have to get all records from the source and validate them against the rules again (this has already been done in the Dimension job, and now we can use the validated record IDs in the Fact job).
    As we are new to building jobs for a dimensional model, I thought it would be good to ask for your opinions and experience.
    Thanks in advance,
    Tootia

    I don't see any problem with re-using your "which students do we care about?" logic and rules by borrowing the list of students from the dimension table. You can join to it, or do a lookup() into it and then filter out nulls (assuming you return a null on a no-match), or use a Validation transform with the "Exists in table" option; which to use is a question of style and performance, and any would work. Indeed, when migrating data into SAP using iDocs, I use a similar idea: I always join the iDoc base segment "map" table to the load of any child segment table to "automatically" filter out any child segment records not present in the base, re-using the selection logic as you suggest, which then only needs to be built in the base segment. Keeps things tidy.
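    Purely to illustrate the membership-filter idea in code (this is not Data Services syntax; the record types and key names below are made up for the example), Option 2 boils down to keeping only the source rows whose keys already survived the dimension load:

    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;
    import java.util.stream.Collectors;

    // Hypothetical record types standing in for the dimension and the fact source rows.
    record StudentDim(String sourceStudentKey) {}
    record StudentSourceRow(String sourceStudentKey, int credits) {}

    class FactFilter {
        // Re-use the dimension's validation instead of repeating it:
        // keep only source rows whose key exists in the loaded dimension.
        static List<StudentSourceRow> validRowsOnly(List<StudentDim> dimension,
                                                    List<StudentSourceRow> sourceRows) {
            Set<String> validKeys = dimension.stream()
                    .map(StudentDim::sourceStudentKey)
                    .collect(Collectors.toCollection(HashSet::new));
            return sourceRows.stream()
                    .filter(r -> validKeys.contains(r.sourceStudentKey()))
                    .collect(Collectors.toList());
        }
    }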
    Best wishes,
    Jeff Prenevost
    Data Services Practice Manager
    itelligence

  • Is Oracle Partitioning req'd in the Source R/3 system for SBIW Setup Table?

    We have ECC6.0 as our Source System.  My BI team is trying to load 3 years of SALES history into the BI7.0 system, and then the APO system will pull 1 of those 3 years into it.  To begin this process, they are using SBIW to Manage the Extract Structures and create the Setup Tables.
    I got an alert that these Setup Tables were created with Partitions.  We are not licensed for partitioning in our ECC(R/3) system, only in BI and APO.
    1.)  Is Partitioning really necessary for these Setup Tables?
    2.)  It appears that the Setup jobs create these tables by default as partitioned?  How do I disable or change that? 
    Thanks,
    Richard (Basis)

    Hi Neeraj,
    thank you for the reply.
    Yes, I have checked the tRFCs and logs and everything looks OK; we are getting the expected records in BW, but the job is not ending in the backend system and the BW load always shows yellow.
    It looks like our developer is using a custom FM to extract data, and I suspect a code issue that is causing an endless loop; they need to add code to terminate the process after the last record in the extract. I will update you once I find the root cause of the issue.
    Thanks again,
    Venkata

  • How do I AUTOMATICALLY generate Transport request for a Z table data?

    Hello friends,
    I would like to automatically create/generate a transport request for a Z table every time I change, add or delete data in that table.
    Currently, every time I modify data in the Z table, I need to go and manually create a transport request and then indicate keys for the row(s) which were modified. I find this a tedious, time consuming, and error prone process.
    So I would like to automate this process. That is to say, the transport request is generated for the rows that I modified. The same way as is done when you change a program. It automatically prompts you for a transport request thingy.
    Any ideas?
    Your help is greatly appreciated.

    You need to make the table contents transportable by creating a "table maintenance generator" in SE11 under Utilities -> Table maintenance generator.
    In that screen for the section "Dialog data transport Details" the "Standard Recording Routine" option should be selected.
    Only then will the system ask for a transport request when entering data through the generated maintenance view in SM30.
    Also change the delivery class in the table attributes to 'C' (Customizing table).

  • Data interpreter for loading of DASYLAB data into DIADEM

    Can NI deliver a data interpreter for loading DASYLAB data into DIADEM?

    Hi zzzz,
    It is my understanding that DasyLab has the ability to write DIAdem *.DAT files. This I have gathered from the fact that US customers have sent me *.DAT data files generated by their DasyLab version, in which several properties are called by their German names. So I assume the DasyLab *.DAT file output was originally written for a German audience, but the data made it over just fine. I suppose that could have been a special feature of their particular DasyLab, but check if you can output *.DAT files with yours. If not, please write back.
    Regards,
    Brad Turpin

  • Long time for loading Planning books/data views

    Hi
    Could someone offer some pointers/solutions for the extremely long time taken for a planning book (data view) to load and populate for certain selection profiles that already have a large number of transactional records? In my case it is specifically the runtime dumps thrown for a popular combination (a large number of transactions).
    Urgent suggestions/solutions required. Please call 9923170825 (India) if you would rather not type it out, or just type in some key words, tcodes etc.
    Thanks

    I hope you don't have too many macros hogging memory in the interactive book. The other thing is to be on the latest liveCache build. Also try to use default units of measure (keep the data view settings for UoM blank).
    Can you confirm whether you built the TLB view brand new in 5.0 or migrated the books? The first opening of a book takes longer due to the compilation of certain objects. In SM50, try to identify where the process takes longest: at the client end, in the DB procedure of liveCache, or at the application level.

  • Some confusion about Web Cache clustering for load balancing

    I have 4 identical Windows 2003 machines with OAS 10g installed with Web Cache (same passwords and Oracle Home). I want to use them for load balancing using Microsoft Network Load Balancing. I followed Note 259208.1 up to step 10. We now have the virtual name "IASPROD" for all the nodes {IASPROD1, IASPROD2, IASPROD3, IASPROD4}, and I am able to log on to the Web Cache manager on each node separately.
    On Node-1 I added entries for all nodes under "Origin Server", with routing enabled for all of them. I did the same in Site Definitions and gave the alias hostname as "IASPROD", which is the virtual server.
    Priority =1
    Site Hostname =iasprod1.domain Port = 80
    Policy = Unrestricted
    Origin Server Hostnames , port,proxy
    iasprod1.domain 7778 No
    iasprod2.domain 7778 No
    iasprod3.domain 7778 No
    iasprod4.domain 7778 No
    I have a couple of questions; they might be very basic, but I am configuring this for the very first time.
    1. How do I access all the machines through the virtual name as stated above? I tried, but it is not working.
    2. Should this cluster setup be repeated on all nodes or only on Node-1?
    3. If only on Node-1, what will happen if Node-1 goes down?
    4. How can I verify that load balancing is working?
    If you guys can guide me, I'll be very grateful.

    One more addition: the database is on AIX (9.2.0.4).

  • Setup event table cache clear for - Essbase cube table

    We have an event polling table for clearing the cache, and we are now having issues setting it up for Essbase physical cube tables.
    1. We have the Hyperion structure like this for the physical cube table "CONSOL", where the schema is empty.
    "Hyperion"."CONSOLX".."CONSOL"
    2. Event polling table is in different database/connection
    When the event table is populated without a schema, polling actually looks for the table in the database configured for the event polling connection pool and not in "Hyperion".
    [55001] The physical table Hyperion:CONSOLX:Event Polling:CONSOL in a cache polled row does not exist.
    From what I read in some blogs, I understood it is working as expected (when something is null it looks in the event polling database).
    Has anybody configured event polling to clear the cache for Hyperion cube tables?
    Thanks
    Ayaps

    What is your insert statement into the UET table?
    It looks like you are missing one of the 4 required values; that's why "Event Polling" is being substituted in between. As you have the event polling table in a different physical database, you have to fill DbName, CatalogName, SchemaName and TableName in the UET table.

  • Validation for loaded mapping tables

    Hi
    As far as I know, FDM doesn't validate maps imported from a file. Is there any way to do a validation for these maps, maybe using an Event script?

    Hi Tushar,
       I see the following benefits in storing mapping tables on R3:
       1. R3 being the bigger system, you will have more information available for mapping in R3. E.g. if a mapping depends on some attribute of the data, like "material type", you have that information available in R3, whereas otherwise you need a process to pull this information into BizTalk from R3 and keep it in sync.
       2. You can reuse the mapping logic for other interfaces if coded correctly, e.g. function modules for mapping.
       3. Tomorrow, if BizTalk is replaced with XI, there is less impact because you need not worry about mapping logic which is already in R3.
      Lastly, I don't see any real reason why it should have any impact on an upgrade, because you are going to use tables in the customer name range; it only increases the number of tables in the customer name range. Usually people calculate the cost of an upgrade based on the level of customization done in the system, and that is generally linked to the number of objects in the customer name range. But if you think pragmatically, there is no impact on upgrade cost because this kind of customization does not affect the upgrade.
    Cheers,
    Sanjeev

  • How  change NLS_NUMERIC_CHARACTERS parameter for load external table

    Hi,
    I use this version:
    OWB 11gR2
    Database 11gR2
    Parameter NLS_NUMERIC_CHARACTERS: database '.,', instance ',.'
    When I created the database with the wizard I did not set the Spanish language at that moment; later I changed these parameters in the instance parameters.
    Now I want to load data from a file into an external table, but I get an error when I try to load data with a decimal point.
    Why does it use the database parameter instead of the instance parameter?
    Is it possible to change this parameter?
    Cheers
    Marisol

    At this moment, this is not possible. See Metalink note ID 268906.1.
    It says:
    Currently, external tables always use the setting of NLS_NUMERIC_CHARACTERS at the database level.
    Cheers
    Marisol

  • Regarding performance tuning for BSEG & BKPF table data fetch

    Hi Friends,
       Please see the select queries below. This is really impacting the performance of my report. Please suggest steps to improve the performance of the report. Points will be rewarded.
    Thanks:
    FORM GET_DATA .
    * Selecting the document numbers from the BSEG table
      SELECT BELNR BUKRS FROM BSEG INTO TABLE L_DOC_NO
                                   WHERE BUKRS IN S_BUKRS
                                   AND   GJAHR = P_GJAHR
                                   AND   HKONT IN S_SAKNR.
      IF SY-SUBRC <> 0.
        MESSAGE ID 'ZTFI' TYPE 'E' NUMBER 006.
      ENDIF.
      CLEAR L_DOC_NO.
      SORT L_DOC_NO BY BELNR.
    * Selecting the document numbers based on the selection screen
      SELECT BELNR BUKRS BUDAT CPUDT  BLART MONAT FROM BKPF INTO TABLE
    L_BKPF
      FOR ALL ENTRIES IN L_DOC_NO
                            WHERE BUKRS = L_DOC_NO-BUKRS AND
                                  BELNR = L_DOC_NO-BELNR AND
                                  GJAHR = P_GJAHR AND
                                  BUDAT IN S_BUDAT AND
                                  MONAT IN S_MONAT.
      IF SY-SUBRC <> 0.
        MESSAGE ID 'ZTFI' TYPE 'E' NUMBER 006.
      ENDIF.
    *Fetch the Line Items
      SORT L_BKPF BY BELNR.
      SELECT BELNR BUKRS BUZEI HKONT  SHKZG WRBTR FROM BSEG INTO TABLE
    L_BSEG
                                    FOR ALL ENTRIES IN L_BKPF
                                 WHERE BUKRS = L_BKPF-BUKRS
                                 AND   BELNR = L_BKPF-BELNR
                                 AND   GJAHR = P_GJAHR
                                 AND   BUZEI BETWEEN '001' AND '999'.

    Hi,
    Let me understand your code first.
    The below code (the 2 selects) gets data from BSEG first and then gets data from BKPF table.
    "*Selecting the Document number from BSEG table
    SELECT BELNR BUKRS FROM BSEG INTO TABLE L_DOC_NO
    WHERE BUKRS IN S_BUKRS
    AND GJAHR = P_GJAHR
    AND HKONT IN S_SAKNR.
    IF SY-SUBRC <> 0.
    MESSAGE ID 'ZTFI' TYPE 'E' NUMBER 006.
    ENDIF.
    CLEAR L_DOC_NO.
    SORT L_DOC_NO BY BELNR.
    "*Selecting the Document Number Based on the selection-screen.
    SELECT BELNR BUKRS BUDAT CPUDT BLART MONAT FROM BKPF INTO TABLE L_BKPF
    FOR ALL ENTRIES IN L_DOC_NO
    WHERE BUKRS = L_DOC_NO-BUKRS AND
    BELNR = L_DOC_NO-BELNR AND
    GJAHR = P_GJAHR AND
    BUDAT IN S_BUDAT AND
    MONAT IN S_MONAT.
    IF SY-SUBRC <> 0.
    MESSAGE ID 'ZTFI' TYPE 'E' NUMBER 006.
    ENDIF.
    As for the code below, can't you avoid it by fetching all the required fields in your first select on the BSEG table?
    *Fetch the Line Items
    SORT L_BKPF BY BELNR.
    SELECT BELNR BUKRS BUZEI HKONT SHKZG WRBTR FROM BSEG INTO TABLE
    L_BSEG
    FOR ALL ENTRIES IN L_BKPF
    WHERE BUKRS = L_BKPF-BUKRS
    AND BELNR = L_BKPF-BELNR
    AND GJAHR = P_GJAHR
    AND BUZEI BETWEEN '001' AND '999'.
    Please check the below blog on "Performance of Nested Loops" by Rob Burbank which would be really helpful.
    /people/rob.burbank/blog/2006/02/07/performance-of-nested-loops
    Hope this helps. Reward points if helpful.
    Thanks,
    Balaji

  • ABAP routine for loading previous months data in infopackage

    Hi,
    I have an infopackage included in a process chain. The issue is that I need to load the previous month's data through the infopackage, and I have a data field of date type. I could not find an appropriate OLAP variable, so I chose the ABAP routine option in the infopackage. My knowledge of ABAP is not very good, so I thought I could get some insight from someone. This needs to be a regular, periodic job without any developer intervention.
    Edited by: BG on Apr 9, 2008 1:03 PM

    Hi
    Try this code and see if you are able to achieve the results:
    data: month(2) type n.
    data: date  like sy-datum,
          last  like sy-datum,
          first like sy-datum.
    * l_t_range, l_idx and p_subrc come from the generated routine frame
    read table l_t_range with key
         fieldname = 'BLDAT'.
    l_idx = sy-tabix.
    * Go back one month: January rolls back to December of the previous year
    date = sy-datum.
    month = date+4(2).
    if month = '01'.
      month = '12'.
      date(4) = date(4) - 1.
    else.
      month = month - 1.
    endif.
    date+4(2) = month.
    * Last day of the previous month
    call function 'SLS_MISC_GET_LAST_DAY_OF_MONTH'
      exporting
        day_in            = date
      importing
        last_day_of_month = last
      exceptions
        day_in_not_valid  = 1
        others            = 2.
    if sy-subrc <> 0.
      message id sy-msgid type sy-msgty number sy-msgno
              with sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
    endif.
    * First day of the previous month
    first = last.
    first+6(2) = '01'.
    * Fill the selection range with the previous month (BT first..last)
    l_t_range-low    = first.
    l_t_range-high   = last.
    l_t_range-sign   = 'I'.
    l_t_range-option = 'BT'.
    modify l_t_range index l_idx.
    p_subrc = 0.
    Paste the code as it is into your editor, activate it, and see if you are able to achieve the results.
    assign points if useful.
    regards
    Vishal
    Edited by: Vishal Vashishta on Apr 9, 2008 2:33 PM

  • Creating the spool request for the internal table data

    Hi..
    I am a little confused about the function module used for creating the spool request.
    I am practicing with an ALV report and sending that report to the spool.
    I have used the FM SLVC_TABLE_PS_TO_SPOOL.
    This is the way I defined the internal table:
    Data:
    begin of imat occurs 0,
    matnr like marav-matnr,
    maktx like marav-maktx,
    matkl like marav-matkl,                     
    ntgew like marav-ntgew,
    gewei like marav-gewei,
    end of imat.
    data i_lines like sy-tfill.
    and I added:
    describe table imat lines i_lines.
    Then I passed i_lines as the value for i_file_length in the exporting parameters of the FM SLVC_TABLE_PS_TO_SPOOL.
    CALL FUNCTION 'SLVC_TABLE_PS_TO_SPOOL'
      EXPORTING
        i_file_length = i_lines
      IMPORTING
        e_spoolid     = spoolid
      TABLES
        it_textdata   = imat.
    When I execute it, it shows the list, and when going back it raises the runtime error
    CALL_FUNCTION_CONFLICT_LENG - Type conflict when calling a function module (field length).
    please provide me some solutions.
    thanks in advance.
    etienne.

    Hi satyajit,
    Thanks for your response, but the fields in my internal table IMAT are not compatible with the structure LVC_S_1022, as it has only the component LINE.
    So how can I define my internal table IMAT as type LVC_S_1022 as you suggested?
    thanks in advance.
    etienne.

  • Pdf & Excel for printing af:table data's

    Hi,
    Good afternoon. :)
    Using JDev 11.1.1.5.0 - ADF BC.
    Say VO:
    Select * from employee where empid = :pid
    Drag and drop ExecuteWithParams into the jsff page.
    In the jsff page (dynamic region):
    Employee Id (input text box): 101        // entering the data
    ExecuteWithParams                        // hitting the button
    af:table                                 // results come in the af:table
    employeeid | employeename | employee salary
    101        | Jack         | 20000
    OK, all of that is working fine.
    My requirement is: a button named PDF and another button named Excel.
    When the PDF button is hit, the particular records should be written into a PDF, and the generated PDF should be opened and shown to the end user.
    The same for Excel.
    How can I do this?
    Edited by: ADF7 on Mar 12, 2012 3:25 AM

    I think that, unlike export to Excel, there is no built-in support for generating a PDF in ADF.
    If we (here 'we' means ADF7) need PDF only, are you (John, Timo) both suggesting I go with a third-party tool? For example:
    JasperReports
    iText
    Apache FOP
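    For instance, a minimal sketch with iText 5 (assuming the itextpdf jar is on the classpath; the output file name and the hard-coded row below just mirror the af:table above, so treat them as placeholders):

    import java.io.FileOutputStream;

    import com.itextpdf.text.Document;
    import com.itextpdf.text.pdf.PdfPTable;
    import com.itextpdf.text.pdf.PdfWriter;

    public class EmployeePdfSketch {
        public static void main(String[] args) throws Exception {
            Document document = new Document();
            // In ADF you would typically stream this to the response instead of a local file.
            PdfWriter.getInstance(document, new FileOutputStream("employees.pdf"));
            document.open();

            PdfPTable table = new PdfPTable(3); // employeeid | employeename | employee salary
            table.addCell("employeeid");
            table.addCell("employeename");
            table.addCell("employee salary");
            table.addCell("101");
            table.addCell("Jack");
            table.addCell("20000");

            document.add(table);
            document.close();
        }
    }

    In a real ADF page you would typically wire the button to an af:fileDownloadActionListener and write the PDF bytes to the OutputStream it provides instead of a file on disk.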
