Transfer Structure, Communication Structure, Extract Structure

Hey All!
I am working on a Business Content development project and I have the following question.
Can somebody let me know how the communication structure, transfer structure, extract structure and InfoSource are maintained at the physical level? I know, of course, that the communication and transfer structures are groups of InfoObjects, but in which database table is this maintained? It sounds more like a Basis question, but I want to get some of your views too, which is why I am posting in this forum as well.
When we choose some Business Content objects and activate them, the tables in the database are created for the respective objects.
Can somebody give me an idea where I can get this physical table information? For cubes I can trace out the F and E tables and the dimension tables, but this sort of InfoSource relationship and the query details at the DB level are difficult for me to get. Can anyone help with your insights!?
Rgds
Karthik Krishna.

RSZCOMPDIR   Directory of reporting components
RSZCOMPIC    Queries on InfoCubes
RSZELTATTR   Attribute selection per dimension element
RSZELTDIR    Directory of the reporting component elements
RSZELTPROP   Properties of reporting component elements
RSZELTTXT    Texts of reporting component elements
RSZELTXREF   Directory of query element references
RSZGLOBV     Global variables in reporting
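If you want to poke at these tables programmatically rather than through SE16, here is a minimal sketch that reads the header and text of one query. The field names (COMPID, COMPUID, OBJVERS, ELTUID, LANGU, TXTLG) are from memory, so verify them in SE11 first:
* Minimal sketch: show the long text of a query (reporting component).
* Field names and keys are assumptions - check RSZCOMPDIR/RSZELTTXT in SE11.
REPORT zlist_query_text.

PARAMETERS p_query TYPE rszcompdir-compid.

DATA: lv_compuid TYPE rszcompdir-compuid,
      lv_text    TYPE rszelttxt-txtlg.

* Header entry of the query in the component directory
SELECT SINGLE compuid FROM rszcompdir INTO lv_compuid
  WHERE compid  = p_query
    AND objvers = 'A'.

IF sy-subrc = 0.
* The query header is itself an element, so its text is in RSZELTTXT
  SELECT SINGLE txtlg FROM rszelttxt INTO lv_text
    WHERE eltuid  = lv_compuid
      AND objvers = 'A'
      AND langu   = sy-langu.
  WRITE: / 'Query:', p_query, lv_text.
ELSE.
  WRITE: / 'No active query found with this name.'.
ENDIF.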
Best regards,
Eugene

Similar Messages

  • How to enable delta without data transfer option for Collections - Extracts

    Hi All,
My question is basically related to the FSCM Collections DataSources and their init and delta loads.
Once the init is completed, I wanted to do a delta without data transfer, but the option is grayed out, meaning I cannot select it for the Collections DataSources.
For example, I have 0CLM_INVOICE as my DataSource, which has a lot of data to extract; in the init stage itself it either gives errors or keeps running for a long time (only 400,000 records are to be extracted) and finally shows in the monitor as 400,000 of 0 records.
Also, is there any place where I can find out whether all of the Collections DataSources are time-stamp based or use other options?
I appreciate your quick and detailed reply; points will be awarded for sure.
    Thanks
    AK.

As far as I remember, there are some DataSources that won't support init without data transfer; I guess COPA is one of them. I haven't worked with DataSource 0CLM_INVOICE, but it is possible that it is the same case. When I searched on help.sap.com for this DataSource, I found this info.
(I am pasting the content because, strangely, the web page doesn't give an HTML link for this search result window.)
    DataSource Transactional Data: 0CLM_INVOICE
    This DataSource is used to extract the transaction data of an invoice from Accounts Receivable Accounting (FI-AR). The system determines the key figures from the amounts of the line items belonging to the invoice. The date fields each contain the respective date of an action that refers to a line item. The extractor provides the data in optimized form for SAP Collections Management.
    Technical Data
Application Component: Collections Management (0FSCM-CLM)
Available as of Release: mySAP ERP Value Pack 2005.1
Shipment: SAP NetWeaver 2004s BI Content Add-On 3 SP02
Content Versions: There are no content versions.
RemoteCube-Capable: No
Delta-Capable: Yes
Extraction from Archives: No
Verifiable: Yes
    Data Modeling
    Delta Update
    The DataSource permits delta update with generic delta processing using the time stamp. The system determines the invoice reference for a document in Business Transaction Event (BTE) 5010 and saves it with a time stamp. In the case of a delta upload, it determines the amounts of the line items based on these document numbers.
    Extractor Logic
    Verification of Key Figures
    If necessary, you can check the key figures of the individual customers as follows:
In the source system, from the SAP Easy Access screen, choose the following function: Accounting → Financial Supply Chain Management → Collections Management → Integration with Accounts Receivable Accounting → Process Receivables (Old).
    Enter the required customer.
    The Customer Account - Process Receivables screen appears. Here you see a list of all invoices for which there are open receivables.
    Using this list, you can compare the following key figures:
    Original Amount of Invoice or Credit Memo (INVOICE_AMOUNT)
    Amount Paid (PAID_AMOUNT)
    Open Amount (OPEN_AMOUNT)
    Total of All Credit Memos for Selected Invoice (CREDITED_AMOUNT)
    Amount Arranged for Payment (PAYMENT_ARRANGED)
Unsynchronized Backup Recovery in BW System
    If you import backup copies asynchronously, you have to initialize the delta procedure for the DataSource again.
    Abhijit
    Edited by: ABHIJIT TEMBHEKAR on Nov 18, 2008 5:00 PM

  • Can anyone enlighten me as to how I can transfer data from an extracted hard drive off of a Macbook Pro onto a new Retina model? No SATA port!!!

    Hello,
    I'm trying to transfer data from my old hard drive (taken out of Macbook Pro 2009) onto a new Macbook Pro Retina.  My ignorance is vast and I purchased an eSATA device to do so, not thinking that there is no SATA port on this new computer.  Is there a special cable or anything anyone can recommend?  I'd be very grateful for some help here.     

I would just get a cheap USB enclosure from OWC, put your old drive in it, and use it to transfer the data to your new rMBP. Cheap and simple. The one I spec'ed is a USB 3.0 enclosure, as you have USB 3.0 ports on your computer; much faster than USB 2.0.
    Clinton

  • Generic data extraction and loading to cube

Hello guys, I am trying to load R/3 data through generic extraction into an InfoCube in BW 3.5.
I just created a generic DataSource, activated it, and replicated it on the BW side.
I am trying to extract transaction data from FI; the DataSource has ship-info table fields in the extraction structure.
Everything got activated, and now I am on the BW side and have just assigned the DataSource to an InfoSource.
But I am stuck in the transfer rules: all InfoObjects on the left are empty and it won't let me activate.
I am really confused. Should I be creating InfoObjects first, because I haven't created the InfoCube yet?
My DataSource has about 12 fields. I really need someone to tell me why it's not activating the transfer rules; the communication structure is still empty.
Should I be creating the InfoCube first with key figures and characteristics? Or how am I supposed to map these DataSource fields to the InfoCube (my data target)?
I guess I am not clear at this point. Even with LO extraction I can create the DataSource and replicate it in BW, but once I am in BW I become clueless, I should say. I did search other posts here but couldn't find anything that helps me understand this. If someone could explain it to me in simple terms I would appreciate it.
I have created the DataSource with an extraction structure and I am in BW. Whatever fields I selected in the extract structure on the R/3 side from the table, are those going to be characteristics and key figures in my InfoCube once I complete the loading? Or how would that work? And why would we need to choose a specific table when creating the view? Assuming we already know the data we need for reporting purposes, we would know which tables that data is in, correct?
please drop some lines to help me, thanks

hello again, I am writing this since I didn't get any response. I would really appreciate it if someone could give me a little hint; I have been practicing on my own BW system and need help from you pros.
My previous question was regarding transfer rules. I am still not able to get through those steps.
This is what I have done so far:
- Created a generic DataSource for transaction data, using a view on the VBAK table (not sure if this was the right table; I just wanted to see if I could successfully load data into the cube).
- Activated the DataSource, replicated it in BW, and assigned an InfoSource.
- Selected 15 fields from the DataSource (extraction structure) from that VBAK table.
- But when I am on the transfer rules/structure screen, many more fields appear.
- It does let me activate the transfer rules. However, I also created an InfoCube, and it asked me to choose at least one time characteristic and one key figure. I used a template based on my InfoSource, and there were no key figures, so I figured I needed to change the transfer rules. I tried to create a key figure InfoObject on the right side of the transfer rules screen, but it would not move to the right when I tried to transfer it.
My question is: why are there more fields on this screen, and why are some of the fields I selected not appearing?
Since I chose a transactional generic DataSource, is that why I have to have key figures? Did I choose the wrong table (VBAK) for practice purposes? I don't really see many key figures when I look at the content of this table using SE16.
Guys, please suggest what route I should take to get through this confusion.
My main objective here is simply to load R/3 data using a generic (transactional) DataSource into a customized InfoCube.
I get an error when creating the update rules; the InfoSource and InfoCube would not link. I think it's because I didn't have any key figures available when creating the InfoCube? How would I choose one?
anyone please throw me some suggestions. I would really appreciate it. Thanks again for reading.

  • Steps to be followed in lo cockpit extraction

hi,
I have read the help links and many blogs on SDN before doing LO extraction.
Right now I need to do an init and delta for Purchasing (02).
I want to follow these steps; correct me if I am wrong.
Step 1: I will fill the setup tables for the init extraction, during which I will stop users from posting documents.
Step 2: After filling the setup tables, in transaction LBWE I will select 'queued delta' as the delta mode and run a V3 job every hour.
Step 3: Documents which are new, modified, or deleted are collected in transaction LBWQ for one hour and transferred to RSA7 as one LUW.
Step 4: When I do the delta, this LUW will be extracted to BW.
Correct me if I am wrong.
If this is correct: if I modify my DataSource while my deltas are running, what should I do on the BW and R/3 sides?

Hi Venkat,
To extract data from the LO Cockpit, follow these steps:
1. Log on to R/3.
2. Execute transaction LBWE (to get into the LO Cockpit).
3. Once in the LO Cockpit, expand the desired application area (e.g. SD, MM, etc.) to see its extract structures.
4. Click on Maintenance to maintain the extract structure.
5. Add and remove fields (from the communication structures, e.g. MCVBAK, MCVBAP).
6. Generate the DataSource by clicking on it, then Select/Hide/Invert the fields.
7. Activate the extract structure by clicking on Inactive. (If it is already active, you have to deactivate it by clicking on Active before step 4.)
8. Then execute transaction LBWG to delete the contents of the setup tables (select the application, e.g. 01 for sales).
9. Now execute transaction OLI*BW to fill the setup tables (where * = 7 for sales, * = 9 for billing, etc.).
10. Check the data from the setup tables in the Extractor Checker (RSA3).
11. Go to BW and replicate the DataSource.
12. Maintain the transfer rules and communication structure, schedule InfoPackages, etc.
13. For delta, go to R/3, then LBWE, then your application (e.g. Sales), click on the delta mode, and select DIRECT, QUEUED, or NON-SERIALIZED V3.
And before everything, after you log on to R/3, transfer the master data for your application: e.g. for SD, go to SD-IO and transfer it using transactions RSA5, RSA6, and RSA3, in that order.
You should fill the setup tables in the R/3 system and extract the data to BW. The setup tables are filled via SBIW; after that you can do delta extraction by initializing the extractor. Full loads are always taken from the setup tables.
The setup table concept applies only to LO extraction.
Steps for LO extraction:
• T-Code: LBWE
• First we need to check which DataSource suits the client's requirements in LBWE.
• Check whether it is in the Active version or the Modified version. If it is in the M version, go to RSA5, select the DataSource, and press Transfer. Then go to RSA6 and check whether the DataSource has been transferred.
• If the DataSource is already in the active version, then we need to check whether it is already extracting data into BW.
• If the DataSource is extracting data, then we need to check for existing data in the setup tables (use SE11 to look at the setup table; for every extract structure, one and only one setup table is generated, whose technical name is the extract structure name + SETUP; e.g. if the extract structure name is MC11VA0HDR, then the setup table name is MC11VA0HDRSETUP), in the extraction queue (LBWQ), in the update tables (SM13), and in the delta queue (RSA7). If data exists in any of these T-codes, we need to decide whether we need that data in BW or not. If we need it, extract it as in the LO extraction flow below; if we don't need it, delete the data.
The data flow from R/3 into BW:
• We need to generate the extract structure by selecting fields from the communication structure in LBWE.
• Generate the DataSource and choose the selection, cancellation, and hide fields that we want.
• Replicate it into BW. Then we need to attach an InfoSource (transfer rules/communication structure) to the DataSource. We have three methods of attaching the InfoSource:
1) Business Content: Business Content automatically proposes the transfer rules and communication structure; we don't have to do anything manually.
2) Application proposal: here too a proposal is made, but some objects will be missing, which we need to assign in the transfer rules.
3) Others: here we need to create the transfer structure, transfer rules, and communication structure from scratch.
• Do the modeling: InfoCube, InfoSource attachment, and so on.
• Then activate the extract structure.
• We need to fill the setup tables for the first-time loading. When filling the setup tables, we can choose between a full load and a delta initialization load.
Filling the setup tables:
T-code: SBIW
Settings for Application-Specific DataSources (PI) -> Logistics -> Managing Extract Structures -> Initialization -> Filling in the Setup Table -> Application-Specific Setup of Statistical Data; there you can perform the setup (for example, SD Sales Orders - Perform Setup) and execute it. Alternatively, use the direct T-code OLI*BW (based on your application, like sales order/billing/purchasing), where * equals the application number, e.g. 02 for purchasing, 08 for shipment, and so on.
• First we need to decide whether we want delta loads to be performed in the future. If we want delta loads, then we need to go for the delta initialization process; otherwise we do a full load.
• When we perform the setup table extraction, since setup tables are cluster tables, we can't see the data in them directly, so we use the Extractor Checker (RSA3) to see the setup table data (full/delta initialization).
• Then create an InfoPackage, select Full or Delta Initialization on the Update tab, and schedule it.
• Delete the setup table data using LBWG.
• Now we need to do the delta loads.
• The delta load data flow differs between the delta update methods. As you know, we have three delta update methods:
• If we select the "Queued Delta" update method, the data moves to the extraction queue (LBWQ). Then run the collective update to move the data from LBWQ into the delta queue (RSA7), and schedule the data using an InfoPackage with Delta Load selected on the Update tab.
• If we select "Direct Delta", the delta data moves into RSA7 directly.
• If we select "Unserialized V3", the data goes into the update tables (SM13); then run the collective update to move the data from SM13 into RSA7, and schedule the data using an InfoPackage.
• If we click on Maintenance, we can generate the extract structure.
• If we click on the DataSource button (e.g. 2LIS_02_CGR), we can generate the DataSource.
• Inactive under the Update column: we can set an extract structure to active or inactive.
• If we click on the Job Control button, we can maintain the collective update parameters, such as the start time and whether it runs hourly or daily.
• If we click on the Queued Delta button under the Update Mode column, we can choose among the three delta update methods.
Only full/delta-initialization loads move data through the setup tables; delta load data doesn't move into the setup tables.
**RSA3 contains only the setup tables' data.
Only delta update data moves into RSA7/LBWQ/SM13, not full/delta-initialization load data.
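As a quick sanity check after a setup run, you can count the rows in the relevant setup table from a tiny report. A minimal sketch; the table name MC02M_0HDRSETUP is assumed from the <extract structure> + SETUP naming rule above (purchasing header, application 02), so verify the real name in SE11:
* Minimal sketch: verify that the setup run actually filled the setup table.
* MC02M_0HDRSETUP is an assumption based on the naming rule
* <extract structure> + SETUP - check the real name in SE11.
REPORT zcheck_setup_table.

DATA lv_count TYPE i.

SELECT COUNT(*) FROM mc02m_0hdrsetup INTO lv_count.

WRITE: / 'Rows in setup table MC02M_0HDRSETUP:', lv_count.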
    Assign Points if useful
    Regards
    Venkata Devaraj !!!

  • I am trying to extract metadata from essbase into a flat file using ODI.

I have 2 questions in this regard:
Some of the account members' storage property is missing in the extract. The reason I suspect is that parent-child sorting did not happen while extracting. How do I do this? I do not see this option when I select the IKM Hyperion Essbase Metadata to SQL.
I have many account members that have more than one UDA, up to 5 UDAs, but in my extract only one UDA appears. How do I incorporate all the UDAs into a single column, separated by commas? The extract file itself is semicolon-separated, mainly for this reason and because some alias descriptions contain commas in the source system.
ODI is extracting the metadata in descending order. How do I change it to sort the records in parent-child order?
    Thanks,
    Lingaraj
    Edited by: user649227 on 2009/06/10 6:50 AM

    Hi,
There was an issue with early versions of the KM around the storage property; this has since been resolved. I recommend upgrading to the latest release of ODI or having a look through the patches.
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • GG extract - what is excessive 'control file sequential reads'?

    Hi,
    base SR - 3-2192225691, GG - 10.4.0.19, database - 11.2.0.1.0, running on Linux x86 64-bit
The customer opened an SR based on a high number of control file sequential reads.
Top 5 Timed Foreground Events (over 13+ hrs)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Event                          Waits       Time(s)  Avg wait (ms)  % DB time  Wait Class
db file sequential read        13,843,266  80,952   6              48.3       User I/O
DB CPU                                     44,897                  26.8
control file sequential read   12,793,822  10,978   1              6.6        System I/O
enq: TX - row lock contention  30,000      10,066   336            6.0        Application
log file sync                  959,602     9,276    10             5.5        Commit
    However, extract traces do not appear to show any waits associated with control files. A 10046 trace has been requested but not done as of yet.
    Are there any known issues we can test for, or is this expected behavior? Any further tests that could be run?
    Thanks,
    Steve

    Steve,
    There is an Oracle internal list you can use. I just sent you email on how to join.
    Here's a shot in the dark: I've seen something similar with log sync waits on systems not moving a lot of data (usually a test system) and we're a little eager to get the next piece of data when at the logical end of file. That can be overcome using THREADOPTIONS EOFDELAYMS. Otherwise you'll need to start using the OGG trace commands.
    Good luck,
    -joe

  • Difference between Extraction Structure and Datasource

    Hi Gurus,
I have a very basic question here. Can anybody explain to me in detail the difference between an extraction structure and a DataSource?
    Thanks in advance

    Hi:
    I am pasting a summarized def. from sap notes.
    http://help.sap.com/saphelp_bw30b/helpdata/en/ad/6b023b6069d22ee10000000a11402f/frameset.htm
    Data Source:
    Data that logically belongs together is stored in the source system in the form of DataSources. A DataSource contains a number of fields in a flat structure used to transfer data into BW (Extract Structure).
    Extract Structure:
In the extract structure, data from a DataSource is staged in the source system. The extract structure contains the fields that are offered by an extractor in the source system for the data loading process.
You can edit and enhance DataSource extract structures in the source system. To do this, in the BW Administrator Workbench choose Goto → Modeling → Source Systems → Your Source System → Context Menu (right mouse click) → Customizing Extractors → Subsequent Processing of DataSources.
    While, you are on the topic, you have one more structure.
    Transfer Structure:
    The transfer structure is the structure in which the data is transported from the source system into the SAP Business Information Warehouse.
    It provides a selection of the extract structure fields for the source system.
    It may be a good idea to look into a sample Business Content extractor to get a better understanding of how they are related.
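If you want to see the DataSource-to-extract-structure link at the table level, the source-system table ROOSOURCE holds the DataSource header. A minimal sketch; the field names EXSTRUCT and EXTRACTOR are from memory, so verify them in SE11:
* Minimal sketch: show which extract structure and extractor back a DataSource.
* ROOSOURCE is the DataSource header table in the source system; the
* field names EXSTRUCT and EXTRACTOR are assumptions - verify in SE11.
REPORT zshow_ds_structure.

PARAMETERS p_ds TYPE roosource-oltpsource.

DATA ls_roo TYPE roosource.

SELECT SINGLE * FROM roosource INTO ls_roo
  WHERE oltpsource = p_ds
    AND objvers    = 'A'.

IF sy-subrc = 0.
  WRITE: / 'DataSource:       ', ls_roo-oltpsource,
         / 'Extract structure:', ls_roo-exstruct,
         / 'Extractor:        ', ls_roo-extractor.
ELSE.
  WRITE: / 'DataSource not found in active version.'.
ENDIF.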
    Chamarthy

  • Extract dataset to internal table

    Hello,
Is it possible to transfer data from an extract field group to an internal table in a loop, and if so, how?
    Thx in advance,
    Ali.

Simply LOOP at the extract and move the corresponding values to an internal table. Example (note: the program must use the flight-data logical database F1S in its attributes so that the GET events are triggered):
TABLES: spfli, sflight, sbook.
"define the extracts (field groups) without their structure
FIELD-GROUPS: header,
              travel_time,
              flight_detail.
"define the structure of each record type in the extract
INSERT: spfli-cityfrom spfli-cityto                 INTO header,       "sort by cities
        spfli-deptime spfli-arrtime                 INTO travel_time,
        sflight-carrid sflight-connid sflight-price INTO flight_detail.
"output table to transfer the data to
DATA it_out_spfli TYPE TABLE OF spfli WITH HEADER LINE.
START-OF-SELECTION.
GET spfli.
  "fill records of the extract with data
  EXTRACT: header, travel_time.
GET sflight.
  EXTRACT flight_detail.
END-OF-SELECTION.
  "sort the extract by the key defined in the HEADER field group
  SORT.
  LOOP.
    AT NEW spfli-cityfrom.
      WRITE: / 'Connections from:', spfli-cityfrom.
    ENDAT.
    "a connection must have details: a flight_detail record must follow this one
    AT travel_time WITH flight_detail.
      WRITE: /30 spfli-cityto, spfli-deptime, spfli-arrtime.
      "move connection data to the output work area
      MOVE-CORRESPONDING spfli TO it_out_spfli.
    ENDAT.
    AT flight_detail.   "a record of field group flight_detail was reached
      WRITE: sflight-carrid, sflight-connid, sflight-price.
      "complete the row with flight data and append it
      MOVE-CORRESPONDING sflight TO it_out_spfli.
      APPEND it_out_spfli.
    ENDAT.
    AT END OF spfli-cityfrom.
      WRITE: /80 'Total cities travel', cnt(spfli-cityto).
      ULINE.
    ENDAT.
  ENDLOOP.
You also need to make sure that all fields in IT_OUT_SPFLI are correctly provided. In my example there are some gaps, but this is just a matter of supplying the rest of the fields.
    Regards
    Marcin

  • Upgrade of Sourcesystem SAP EHP3 SAP ECC 6.0 to SAP EHP4 SAP ECC 6.0

    Hi,
I noticed there are a lot of questions on the testing procedures for the BI side when upgrading the source system from R/3 to ECC 6.0, but not many posts on doing Enhancement Packs. I believe the process would be more or less the same?
My project is doing an upgrade of SAP ECC 6.0 EHP3 Stack 03 to SAP ECC 6.0 EHP4 Stack 07. The BI side is also upgraded, but I am more interested in the impact of the upgrade of the source system (ECC) on BI.
I believe the impact is less compared to an upgrade from R/3 to ECC, but there still should be some impact on BI.
Basically, from what I have researched, the correct procedure is as below:
Pre-upgrade
1 - Lock users in ECC
2 - Ensure that the delta queue is cleared in RSA7
3 - Check the PI in ECC 6.0 (What is PI???)
4 - Stop all loads from BW
Post-upgrade
1 - Check source system connections
2 - Replicate all DataSources from ECC (when I do a connection check in RSA1 I get the message "No metadata upload since upgrade", which I believe is due to this step not being done yet? Is it OK to replicate all DataSources??)
3 - Activate all transfer rules through program RS_TRANSTRU_ACTIVATE_ALL (is it OK to activate all transfer structures?)
4 - Run init without data transfer for the deltas
5 - Run and then check extraction for full and delta loads
I would appreciate your expert input on the procedures above. A more detailed step-by-step procedure or comments are most welcome.
Thanks.
    Thanks.

    Additional information:
I started the upgrade from SAP ECC 6.0 with a dual stack.
SAP advised me to download the following file to start the upgrade:
SAPehpi_68-10005800.SAR from the SMP.
Download path: Browse our Download Catalog --> Additional Components --> Upgrade tools --> SAP EHPI Installer --> SAP EHP Installer 7.00.
Start the upgrade without the jce_policy_zip extension; this was written in note 1302772.
Good luck with it.

  • Reg. business content doubts

    hi
when installing Business Content it is showing level 1, 2, ...
What does that mean? How many levels are there?
How do I know the InfoObject name of a field? For example, what will be the Business Content name of a particular InfoObject?
Normally, how long does it take to install? Is it advisable to install in the background?
    thanks
    regards
    sridhar
    [email protected]

    Hi Sridhar,
SAP delivers the objects in the "D" (delivered) version, and we need to activate them as per our project needs.
Business Content can be installed with the following grouping options (assuming you are doing it for a cube):
1) Only necessary objects (this installs only the InfoObjects and attributes that make up the cube)
2) Data flow before (installs update rules, transfer structure, communication structure, InfoSources, and so on)
3) Data flow afterwards (installs queries, reports, templates, workbooks, and multiproviders, if any)
4) Data flow before and afterwards (installs options 2 and 3 combined)
The level depicts the place in the hierarchy where the object is located. For example, if you are installing a particular InfoObject, the levels will be something like InfoArea --> InfoObject Catalog --> InfoObject.
    Hope this helps you.
    Regards
    Pankaj

  • Open dataset in UTF8. Problems between Unicode and non Unicode

    Hello,
I am currently testing file transfer between Unicode and non-Unicode systems.
I transferred some Japanese KNA1 data (MANDT, NAME1, NAME2, CITY) from a non-Unicode system to a file with this option:
set locale language pi_langu.
open dataset pe_file in text mode encoding utf-8 for output with byte-order mark.
Now I want to read the file from a Unicode system. The code looks like this:
open dataset file in text mode encoding utf-8 for input skipping byte-order mark.
The characters look fine, but they are shifted: NAME1 is correct, but now parts of the CITY characters are in NAME2...
If I open the file in a non-Unicode system with the same coding, the data is OK again!
Is there a problem with spaces between Unicode and non-Unicode?

Hello again,
after implementing and testing this method, we saw that the conversion always takes place in the Unicode system.
For example: we have a CHAR(35) field in the MDMP system with several Japanese characters. As soon as we transfer the data into the file and look at the binary data, the field is only 28 characters long; several spaces are missing. Now if we open the file in our Unicode system using the mentioned class, the size grows to 35 characters.
On the other hand, if we export data from the Unicode system using this method, the size shrinks from 35 characters to 28, so the MDMP system can interpret the data.
As soon as all systems are on Unicode, this method is obsolete/wrong, because we don't want to cut off or add the spaces; it's not needed anymore.
The better way would be to create a "real" UTF-8 file in our MDMP system. The question is: is there a method to somehow add the missing spaces in the MDMP system?
So that it works something like this:
OPEN DATASET p_file FOR OUTPUT IN TEXT MODE ENCODING UTF-8 WITH BYTE-ORDER MARK.
IF charsize = 1.   "MDMP (with ECC 6.0, by the way)
* add the missing spaces to the structure
* transfer the structure to the file
ELSE.              "Unicode
* just transfer the structure to the file -> no conversion needed anymore
ENDIF.
I thought maybe this could somehow work with the class CL_ABAP_CONV_OUT_CE, but until now I have had no luck...
Normally I would think that if I am creating a UTF-8 file, this work would be done automatically by the TRANSFER command.
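One way to take control of the byte layout is to do the conversion explicitly with CL_ABAP_CONV_OUT_CE and write the file in binary mode, so that TEXT MODE cannot trim anything. A minimal, untested sketch; whether trailing blanks of a fixed-length CHAR field survive the WRITE method is exactly what you would need to verify:
* Minimal sketch: convert a fixed-length field to UTF-8 bytes explicitly
* and write them in binary mode, bypassing TEXT MODE's trimming.
* Untested assumption: WRITE converts the full CHAR35 length,
* including trailing blanks.
PARAMETERS p_file(128) TYPE c LOWER CASE.

DATA: lv_field(35) TYPE c,
      lo_conv      TYPE REF TO cl_abap_conv_out_ce,
      lv_buffer    TYPE xstring.

lv_field = 'some text'.   "trailing blanks are part of the CHAR35 field

lo_conv = cl_abap_conv_out_ce=>create( encoding = 'UTF-8' ).
lo_conv->write( data = lv_field ).
lv_buffer = lo_conv->get_buffer( ).

OPEN DATASET p_file FOR OUTPUT IN BINARY MODE.
TRANSFER lv_buffer TO p_file.
CLOSE DATASET p_file.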

  • To write into a mime file.

How do I specify the path for the MIME object so that I can write data into it?

This is how it stands: I have a file playlist.txt in the MIME folder and I need to write values into it. This is the piece of code I had written:
DATA: file_name(200) TYPE c VALUE '/sap/bc/bsp/sap/y_test_bsp/playlist.txt'.
struc-f1 = 'X'.
struc-f2 = 'Y'.
OPEN DATASET file_name IN TEXT MODE FOR APPENDING ENCODING DEFAULT.
TRANSFER struc TO file_name.
CLOSE DATASET file_name.
It gives the following error message: "An exception with the type CX_SY_FILE_OPEN_MODE occurred, but was neither handled locally, nor declared in a RAISING clause".
Exception Class: CX_SY_FILE_OPEN_MODE
Error name: DATASET_NOT_OPEN
Program: CL_O244RYK10ZF3FOF1DPPOO4U3777CP
Include: CL_O244RYK10ZF3FOF1DPPOO4U3777CM00C
ABAP Class: CL_O244RYK10ZF3FOF1DPPOO4U3777
Method: _ONINPUTPROCESSING
BSP Application: Y_TEST_BSP
BSP Page: MUSIC.HTM
Row: 81
Is this because I gave a wrong path?
    Message was edited by:
            Asha Lilliett
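For what it's worth, a MIME Repository URL is not an application-server file path, which is why OPEN DATASET raises CX_SY_FILE_OPEN_MODE here; MIME objects live in the database and are written through the MIME Repository API instead. A minimal sketch using CL_MIME_REPOSITORY_API, with the URL taken from the post; treat the exact parameter names as assumptions to verify:
* Minimal sketch: write content into a MIME Repository object.
* The MIME Repository is stored in the database, not on the file
* system, so OPEN DATASET cannot reach it.
DATA: lo_mr      TYPE REF TO if_mr_api,
      lo_conv    TYPE REF TO cl_abap_conv_out_ce,
      lv_content TYPE xstring.

* Build the payload as raw bytes (here: plain text in UTF-8)
lo_conv = cl_abap_conv_out_ce=>create( encoding = 'UTF-8' ).
lo_conv->write( data = 'X;Y' ).
lv_content = lo_conv->get_buffer( ).

* Put the content under the MIME path of the BSP application
lo_mr = cl_mime_repository_api=>get_api( ).
lo_mr->put( i_url     = '/sap/bc/bsp/sap/y_test_bsp/playlist.txt'
            i_content = lv_content ).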

  • The OLTP source 3FI_SL_J2_TT for source system BM4CLNT100 not present

    Hi All,
I am trying to load data from R/3 for the DataSource 3FI_SL_J2_TT. When I look in BW, the transfer structure is inactive in the Dev system, so I activated the transfer structure, but after activation it still shows as inactive.
I am also getting this error message: "The OLTP source 3FI_SL_J2_TT for source system BM4CLNT100 not present".
I have checked on the R/3 side: the DataSource is available and active. I have also replicated the DataSource in BW, but no luck.
Could you please let me know why this error occurred?
    Thanks,
    Ravi.

Hi,
Right-click on the source system and perform the Check action. If the connection is OK, then try to activate the transfer structures in SE38 with program RS_TRANSTRU_ACTIVATE_ALL.
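If you prefer to trigger it from code instead of SE38, a one-line sketch (assuming the report runs with its default selection screen values):
* Minimal sketch: run the activation report programmatically.
SUBMIT rs_transtru_activate_all AND RETURN.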
    Regards,
    Satya

  • How to write read dataset statement in unicode

    Hi All,
I am writing a program using the OPEN DATASET concept.
I am using the following code:
        PERFORM FILE_OPEN_INPUT USING P_P_IFIL.
        READ DATASET P_P_IFIL INTO V_WA.
        IF SY-SUBRC <> 0.
          V_ABORT = C_X.
          WRITE: / TEXT-108.
          PERFORM CLOSE_FILE USING P_P_IFIL.
        ELSE.
          V_HEADER_CT = V_HEADER_CT + 1.
        ENDIF.
READ DATASET works in a non-Unicode system, but in a Unicode system it dumps.
Please can you tell me the solution: how should READ DATASET be written for Unicode?
Very urgent.
    Regards
    Venu

    Hi Venu,
    This example deals with the opening and closing of files.
Before Unicode conversion:
data:
  begin of STRUC,
    F1 type c,
    F2 type p,
  end of STRUC,
  DSN(30) type c value 'TEMPFILE'.
STRUC-F1 = 'X'.
STRUC-F2 = 42.
* Write data to file
open dataset DSN in text mode.   "<-- Unicode error
transfer STRUC to DSN.
close dataset DSN.
* Read data from file
clear STRUC.
open dataset DSN in text mode.   "<-- Unicode error
read dataset DSN into STRUC.
close dataset DSN.
write: / STRUC-F1, STRUC-F2.
    This example program cannot be executed in Unicode for two reasons. Firstly, in Unicode programs, the file format must be specified more precisely for OPEN DATASET and, secondly, only purely character-type structures can still be written to text files.
    Depending on whether the old file format still has to be read or whether it is possible to store the data in a new format, there are various possible conversion variants, two of which are introduced here.
After Unicode conversion
Case 1: New textual storage in UTF-8 format
data:
  begin of STRUC2,
    F1 type c,
    F2(20) type c,
  end of STRUC2.
* Put data into text format
move-corresponding STRUC to STRUC2.
* Write data to file
open dataset DSN in text mode for output encoding utf-8.
transfer STRUC2 to DSN.
close dataset DSN.
* Read data from file
clear STRUC.
open dataset DSN in text mode for input encoding utf-8.
read dataset DSN into STRUC2.
close dataset DSN.
move-corresponding STRUC2 to STRUC.
write: / STRUC-F1, STRUC-F2.
    The textual storage in UTF-8 format ensures that the created files are platform-independent.
After Unicode conversion
Case 2: Old non-Unicode format must be retained
* Write data to file
open dataset DSN in legacy text mode for output.
transfer STRUC to DSN.
close dataset DSN.
* Read data from file
clear STRUC.
open dataset DSN in legacy text mode for input.
read dataset DSN into STRUC.
close dataset DSN.
write: / STRUC-F1, STRUC-F2.
    Using the LEGACY TEXT MODE ensures that the data is stored and read in the old non-Unicode format. In this mode, it is also possible to read or write non-character-type structures. However, be aware that data loss and conversion errors can occur in Unicode systems if there are characters in the structure that cannot be represented in the non-Unicode codepage.
Reward pts if found useful :)
    Regards
    Sathish
