View of set up table data in SE11?

Hi,
Can we view the data in SE11 through the set up table names? If so, how do we find the set up table names? For example, if I am using 2LIS_13_VDHDR, how can I view the data (apart from RSA3) after filling the set up tables? Is there any way to view the data through SE11 using the set up table name, and if so, how do we find the set up table name?

Hi
Set up table names start with 'MC', followed by the application component ('01', '02', etc.), then the last digits of the DataSource name, and then 'SETUP'.
In other words, it is the extract structure name (on the R/3 side; you can also check it in LBWE) followed by 'SETUP'.
For your example, the name of the setup table is
MC13VD0HDRSETUP
If you want to check the data in the set up tables, also have a look at transaction NPRT; there you can see the table names from which the data is being picked.
Hope it helps
Thanks
Teja

Similar Messages

  • Can we look at EBAN table data using SE11?

    Can we look at EBAN table data using SE11? How can I look at the data in EBAN or ECKPO in ECC?
    Thanks in advance.
    York.

    SE11 - Create/Change/Display table fields - ABAP Dictionary
    SE16 - Display table entries - you cannot see structures here
    For data: SE16 - enter the table name - F7 - F8
    Or: SE11 - enter the table name - F7 - {Ctrl + Shift + F10}/Contents - F8

  • What is your favorite VI or other software to view large sets of waveform data?

    What is your favorite VI or other software to view large sets of
    waveform data?
    I have to look at waveforms with several sets of y data of 10000 or so
    points. Should I put this into Excel? There are no cursors, or zoom
    capabilities, and so on in Excel.
    Using a graph in LV is very slow when loaded down with this much data.
    Frankly a scope is easier for viewing waveforms. A scopelike utility
    for Windows?
    MathCAD?
    Is there any neat shareware out there?
    Thanks for your opinion,
    Mike

    There are a large number of analysis packages on the market that can display very large datasets. The one to choose depends largely on the nature of the data and what you want to do with it after you have loaded it in. Wandering down the software aisle at your local Staples you will find at least two or three.
    With datasets the size you are contemplating, a major feature to look for is a zoom function that initially shows the entire waveform but lets you narrow in on the data you want to see.
    DIGRESSION ALERT
    Actually, this sort of algorithm isn't too hard to code. I wrote a viewer once that allowed you to view datasets of arbitrarily large size (in V3 I think... though it may have been in V4). The code initially presented an overview of the entire dataset
    by decimating it down to a size that LV could display rapidly. As the user zoomed in using the cursors (not LV's graph zoom), the code would extract the selected portion of the data, re-decimate it if necessary, and redraw the display. (Which, by the way, is all the fancy packages really do anyway.)
    The tricky part is selecting a decimation function that retains the overall shape of the data.
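    Not LabVIEW code, but a rough editorial sketch of the idea rather than the viewer described above: min/max decimation per bucket keeps the peaks, so the overall shape survives. It is expressed here in SQL against a hypothetical Samples table (table and column names are assumptions):
    -- Sketch only: reduce a huge waveform to one min/max pair per bucket.
    -- Assumes a hypothetical table Samples(Sample_Index INT, Sample_Value FLOAT).
    DECLARE @BucketSize INT = 1000;   -- e.g. 10,000,000 points -> ~10,000 min/max pairs
    SELECT Sample_Index / @BucketSize AS Bucket,
           MIN(Sample_Value)          AS Min_Value,
           MAX(Sample_Value)          AS Max_Value
    FROM Samples
    GROUP BY Sample_Index / @BucketSize
    ORDER BY Bucket;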
    END OF DIGRESSION
    Several years ago I wrote an article for a magazine (Personal Engineering and Instrumentation News) that dealt with graphing packages that could handle a million datapoints. At that time (late eighties) there were over a dozen programs--and they were all better than Excel...
    Mike...
    Certified Professional Instructor
    Certified LabVIEW Architect
    LabVIEW Champion
    "... after all, He's not a tame lion..."
    Be thinking ahead and mark your dance card for NI Week 2015 now: TS 6139 - Object Oriented First Steps

  • Set up table data extracted from R/3 not visible in data target of BW

    Hi friends,
             I am currently working on extracting data from R/3 to BW. I read several docs given in the forum and did the following:
      1) In the LBWE transaction, my extract structure is already active.
      2) In SBIW, I went to the filling of set up tables for QM.
      3) I executed the set up table extraction.
      4) Then I checked in RSA3. The extraction was successful.
      5) In BW, I replicated the DataSource, and in the InfoPackage I selected, in the
          PROCESSING tab, "PSA and then into Data Targets (Package by Package)".
      6) In the UPDATE tab, I selected FULL UPDATE.
      7) And then I did an immediate load.
      8) In RSMO, it showed successful (it showed the same number of records as in
          RSA3 on R/3).
              But when I went into the data target (ODS) and checked its contents, nothing is visible. Why is that? Have I skipped any step? Please help.
    Regards,
    Neha Solanki

    Hi,
           You are right. It is an NW2004 system. This is what is displayed in the status tab in RSMO:
    Data successfully updated
    Diagnosis
    The request has been updated successfully.
    InfoSource : 2LIS_05_QE2
    Data type : Transaction Data
    Source system: QAS678
       And I can find no such button as you said.
    Regards,
    Neha Solanki

  • Materialized view on a Partitioned Table (Data through Exchange Partition)

    Hello,
    We have a scenario to create an MV on a partitioned table which gets data through an exchange partition strategy. Obviously, with exchange partition the snapshot logs are not updated and FAST refreshes are not possible. Also, since this partitioned table has 450 million rows, a COMPLETE refresh is not an option for us.
    I would like to know the alternatives to this approach.
    Any suggestions would be appreciated,
    thank you

    From your post it seems that you are trying to create a fast refresh view (as you are creating an MV log). There are limitations on fast refresh, which are documented in the Oracle documentation:
    http://docs.oracle.com/cd/B28359_01/server.111/b28313/basicmv.htm#i1007028
    If you are not planning to do a fast refresh then as already mentioned by Solomon it is a valid approach used in multiple scenarios.
    Thanks,
    Jayadeep

  • Get data from a view and display the table data in an Excel pivot table

    Hi All,
    I have a small requirement: when I get the data from the view, it should be displayed as an Excel pivot table.
    For displaying general data to Excel I have followed the binary cache approach.
    Please advise me on this.
    Thanks,
    Lohi.

    Try this:
    http://download-west.oracle.com/docs/html/B25947_01/bcservices005.htm#sthref681
    Specifically code sample 8-10 for accessing the AM object.
    Then use the findView method to get a pointer to the VO.

  • Problem about filling the set up table

    Hi all,
    I got a problem when I run the fill set up table job for the sales billing DataSource. What I did is:
    First delete the setup table, then OLI9BW -> type in a sales document no. (as I only want this order's data), then give a run name, and finally execute. But I got the following message:
    Data source 2LIS_13_VDITM contains data still to be transferred
    Could you please explain what this means, and what I should do to solve the problem.
    Thanks

    Set up table definition
    The setup table stores the historical data, whereas the delta records are updated in the delta queue, not in the set up table.
    So once the historical data is loaded you can delete the contents of the set up table.
    Its name is the extract structure of your DataSource + 'SETUP'.
    Also, data is taken from the setup table when an init / full / full repair load is done.
    So in order to get your records into the table, delete the setup table (LBWG) and fill it with your records.
    You can view the contents in SE16.
    Deleting set up tables: transaction LBWG
    Filling set up tables: SBIW -> Logistics -> Managing Transfer Information Structures -> Setup of Statistical Data -> Application-Specific Setup of Statistical Data
    set up tables
    Set up tables
    Set up tables
    view of set up tables data in se11??????????
    Set Up tables..
    lo: delete set up tables: DOUBT
    LBWQ is the extraction queue and RSA7 is the delta queue. Data is sent to the delta queue from the extraction queue through the collective job scheduled in transaction LBWE.
    When we want to extract the data using the LO Cockpit, the data comes to the extraction queue first and from there it is processed into the delta queue. So LBWQ works as an outbound queue.
    If the update mode is Unserialized V3, then as soon as the document is posted it goes to the update tables, which you can see in transaction SM13. After the job is scheduled, the records come to RSA7, i.e. the delta queue, from which BW pulls the data.
    If you use the "Queued Delta" update method, then the data moves to the extraction queue (LBWQ). Then run the collective update to move the data from LBWQ into the delta queue (RSA7). Then schedule the data using the InfoPackage by selecting Delta Load in the update tab.
    So if you are going to do setup table filling, delete the data from LBWQ first.
    The extraction queue (LBWQ) comes into the picture in delta loading. But actually the system starts collecting the data (whenever a document is created in R/3) after activating the extraction structure.
    Steps to be performed
    We know that there are 4 types of delta extraction available in LO. If you use the "Queued Delta" update method, the data moves to the extraction queue (LBWQ). Then run the collective update to move the data from LBWQ into the delta queue (RSA7). Then schedule the data using the InfoPackage by selecting Delta Load in the update tab.
    Here is Roberto's weblog:
    /people/sap.user72/blog/2005/02/14/logistic-cockpit--when-you-need-more--first-option-enhance-it
    /people/sap.user72/blog/2004/12/16/logistic-cockpit-delta-mechanism--episode-one-v3-update-the-145serializer146
    /people/sap.user72/blog/2005/01/19/logistic-cockpit-delta-mechanism--episode-three-the-new-update-methods
    check this out...
    LBWQ - T code is for what?
    Transaction LBWQ
    What is LBWQ?
    Assign points if it helps
    Hope it helps
    regards
    Bala

  • Why set up tables in LO Extraction?

    Hi All,
    In LO extraction we fill the set up tables for the init, then the deltas fall into the update tables etc., and the V3 collective run pushes them to the delta queue. Then we extract into BW. Why is this the extraction methodology in LO alone, and not in other extractions like CO-PA, HR, FI-SL or anything else? What is the reason for these unique steps in LO extraction alone?
    Kindly let me know the answer.
    Best Regards,
    Fanie Hudson.

    This question has already been posted several times and a lot of documents are available.
    Have a look at these discussions:
    set up tables
    Set up tables
    Set up tables
    view of set up tables data in se11??????????
    Set Up tables..
    lo: delete set up tables: DOUBT
    Blogs of Roberto will be useful as well:
    /people/sap.user72/blog/2004/12/16/logistic-cockpit-delta-mechanism--episode-one-v3-update-the-145serializer146
    LOGISTIC COCKPIT DELTA MECHANISM - Episode two: V3 Update, when some problems can occur...
    LOGISTIC COCKPIT DELTA MECHANISM - Episode three: the new update methods
    LOGISTIC COCKPIT - WHEN YOU NEED MORE - First option: enhance it !
    LOGISTIC COCKPIT: a new deal overshadowed by the old-fashioned LIS ?
    award points if useful

  • Regarding set up table filling in R3 prod system

    Hi Expert,
    When I fill the set-up table for the billing LIS DataSource, it takes a lot of time, although the data volume is very small (around 1 lakh records).
    It has already taken almost 50,000 seconds and is still running.
    Please help.
    Thanks
    Devesh Varshney

    Hi,
    The production system will be busy while you are filling the set up tables.
    Check the ECC system load and fill the set up tables when the system has the required free application servers (SM50/SM51). Also try to fill the set up tables in the background.
    If possible, use selections when filling the set up tables.
    Thanks.

  • Restrict specific tables from SE16/SE11

    Dear Experts,
    We have a requirement to lock some specific tables from being viewed in SE16/SE11. Please provide a solution.
    Regards
    Shishir

    Create a new authorization group from SE54:
    Go to SE54, then select Authorization Groups and click Create/Change.
    Here create a new authorization group by clicking New Entries.
    Then note the authorization group and assign it to the table:
    Go to SE11, enter the table name, and click Display. Then select Utilities -> Assign Authorization Group.
    Then click Edit and enter the authorization group (created earlier).
    Then in PFCG, for authorization object S_TABU_DIS, enter the authorization group.
    Now check the tables in SE16.

  • Regarding SET UP table filling

    Hi All
    I have a small doubt,
    Why do we need to fill the set up table only when there is no user activity in the source system?
    Is this going to lock the application tables like VBAK, VBAP, VBUK, etc.?
    What happens if I run the statistical setup run while users are active in the system?
    I want to do the set up fill for orders from 2005 to July 2009.
    When is the best time?
    regards
    Janardhan KUmar K

    Hi,
    The production system will be busy while you are filling the set up tables.
    Check the ECC system load and fill the set up tables when the system has the required free application servers (SM50/SM51). Also try to fill the set up tables in the background.
    If possible, use selections when filling the set up tables.
    Thanks.

  • Help me with set up table error message please

    Hello
    I am trying to fill set up tables for application 12 in the background. As I tried to schedule the job, I got the message "DataSource 2LIS_12_VCHDR contains data still to be transferred".
    I tried twice and both times I got the same message. So I scheduled it to run immediately; it ran for 2 hours and after that I got the same message: "DataSource 2LIS_12_VCHDR contains data still to be transferred".
    So what does this mean? Are the set up tables already filled, or do I need to do something else before filling the set up tables?
    Can you please guide me on where I can check that the set up tables have been filled successfully?
    I tried RSA3 and it is showing data there, but I am not sure which mode I should select to see the data, and I am also not sure whether this is the right place to check the data.
    This is kind of urgent, please help.
    I will assign full points to the right answer.

    Hello, thank you all for your quick replies.
    Can you please suggest the set up table name I have to look at in SE16 to see the set up table data for application 12?
    Also, as I said, I did not check RSA7 for any records before I started filling the set up tables, and now I have started the loading as well. Is this OK, given that I am in the dev system, or do I have to repeat filling the set up tables?
    Please suggest.
    note: I will assign points to all the answers
    Regards
    Krish

  • DataServices 3.1 AIX/DB2 problem viewing table data.

    This is a brand new DS 3.1 installation. After the install, the DI part works fine, jobs run, and we can view table data. At one point viewing table data became very slow; actually, after a 30-second hour-glass, an empty grey window shows up.

    Hi Mahir,
    the CID transform uses by default the Substitution Parameter [$$RefFilesAddressCleanse] to search for the necessary directory information.
    You can see this if you open the transform configuration and take a look at the options panel.
    It sounds like in your installation you have Data Services installed on your IBM AIX system and use the Designer from a Windows-based client, and during installation this Substitution Parameter has been set to point to a local directory on your Windows system instead of the server-side installed directory (...\DataQuality\referencedata).
    Can you check in your Substitution Parameter Editor (in DS Designer under Tools -> Substitution Parameters Configuration...) whether your Substitution Parameters are set to the local machine?
    Niels

  • Transposing Table Data From Rows to Columns Into a View

    I have a web-based HRMS (Human Resources Management System) ERP with a back-end SQL Server 2008 R2. I'm trying to compare the actual manpower with the manpower contract requirements for 30 cost centers. My base data is as follows:
    TABLE Contract_Requirements: Class, Cost_Center, Contract_Qty
    VIEW Manpower_Count: Class, Cost_Center, Head_Count
    I would like to transpose the rows to columns of both objects so that the end result would be similar to the following:
    Class            | Site_1 | Site_2 | Site_3 | Site_4 | ...
    Superintendent   |   1    |   1    |   1    |   1    |
    Supervisor       |   2    |   2    |   2    |   2    |
    Medic            |   1    |   2    |   1    |   3    |
    Crane Operator   |   1    |   1    |   2    |   1    |
    The target layout is that each individual record displays the number of employees of a specific class allocated to each individual cost center; the cost centers become columns. I was able to accomplish this using the following TSQL:
    DECLARE @cols AS NVARCHAR(MAX), @query AS NVARCHAR(MAX);

    SELECT @cols = STUFF((SELECT ',' + QUOTENAME(Cost_Center)
                          FROM Manpower_Count
                          GROUP BY Cost_Center
                          ORDER BY Cost_Center
                          FOR XML PATH(''), TYPE
                         ).value('.', 'NVARCHAR(MAX)'), 1, 1, '');

    SET @query = 'SELECT Class, ' + @cols + '
                  INTO Manpower_Allocation_Matrix
                  FROM (SELECT Class, Cost_Center, Head_Count
                        FROM Manpower_Count) x
                  PIVOT (SUM(Head_Count) FOR Cost_Center IN (' + @cols + ')) p';

    EXECUTE(@query);
    The only problem is that if an employee is transferred from one cost center to another, which happens a lot on a daily basis, this would only be reflected in the base view; the resultant table will not be updated. I would have to repeatedly drop and recreate the table, which isn't efficient, nor is it the right practice. I was thinking of automating this with a scheduled procedure every day, but that is still not going to work. The actual manpower count must be known in real time, meaning any changes should be reflected immediately after any employee transfer. What is the most efficient way to automate this process and store real-time data? FYI, I'm not an SQL expert and have never worked with stored procedures or triggers. I would also like to point out that the number of cost centers is never fixed; consequently the number of columns isn't fixed either.

    Hi Seif,
    You can pivot directly on the base view to get real-time data. The dynamic PIVOT is encapsulated in a stored procedure (SP), so every time you want to check the manpower count you can call the SP.
    -- This table stands in for your base view
    CREATE TABLE srcTbl(
    Employee_Code VARCHAR(99),
    Employee_Name VARCHAR(99),
    Cost_Center_name VARCHAR(99),
    Cost_Center_NO VARCHAR(99),
    Position_ VARCHAR(99),
    Total_Salary Money
    );
    INSERT INTO srcTbl VALUES('CAN-010','John Doe A','Site 120',120,'Fork Lift Operator',150);
    INSERT INTO srcTbl VALUES('EGY-130','John Doe B','Site 150',150,'Driver',200);
    INSERT INTO srcTbl VALUES('IND-120','John Doe C','Site 113',113,'Fork Lift Operator',150);
    INSERT INTO srcTbl VALUES('SAU-50','John Doe D','Site 112',112,'Mechanic',261.948);
    INSERT INTO srcTbl VALUES('PHI-90','John Doe F','Site 112',112,'Crane Operator',250);
    INSERT INTO srcTbl VALUES('CAN-012','John Doe G','Site 120',120,'Driver',200);
    INSERT INTO srcTbl VALUES('IND-129','John Doe I','Site 150',150,'Superintendent',2300);
    INSERT INTO srcTbl VALUES('PAK-464','John Doe X','Site 141',141,'Supervisor',1800);
    INSERT INTO srcTbl VALUES('FRA-003','John Doe M','Site 120',120,'Medic',700);
    GO
    CREATE PROC proc1
    AS
    DECLARE @cols AS NVARCHAR(MAX), @query AS NVARCHAR(MAX)
    -- Build the dynamic column list from the distinct cost centers
    SELECT @cols = STUFF((SELECT ',' + QUOTENAME(Cost_Center_no)
    FROM srcTbl
    GROUP BY Cost_Center_no
    ORDER BY Cost_Center_no
    FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)'),1,1,'')
    -- Count heads per class and cost center, then pivot the cost centers into columns
    SET @query=N';WITH Cte AS(
    SELECT Position_ AS Class, Cost_Center_No, COUNT(1) AS Head_Count FROM srcTbl
    GROUP BY Position_, Cost_Center_No
    )
    SELECT Class,'+@cols+'
    FROM Cte
    PIVOT
    (MAX(Head_Count) FOR Cost_Center_No IN('+@cols+')) AS PvtTbl
    ORDER BY Class';
    EXEC sp_executesql @query ;
    GO
    -- Run the procedure, then clean up the sample objects
    EXEC proc1
    DROP PROC proc1
    DROP TABLE srcTbl
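    As a usage sketch only (not part of the original reply): assuming the view and column names from the question (Manpower_Count with Class, Cost_Center, Head_Count) and a hypothetical procedure name, the same dynamic PIVOT can be wrapped around the poster's own view, so each call pivots the live data and no table has to be dropped and recreated:
    -- Sketch: pivot the live Manpower_Count view on every call (names assumed from the question).
    CREATE PROC usp_Manpower_Allocation_Matrix   -- hypothetical name
    AS
    BEGIN
        DECLARE @cols AS NVARCHAR(MAX), @query AS NVARCHAR(MAX);
        -- Build the column list from the cost centers currently present in the view
        SELECT @cols = STUFF((SELECT ',' + QUOTENAME(Cost_Center)
                              FROM Manpower_Count
                              GROUP BY Cost_Center
                              ORDER BY Cost_Center
                              FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)'), 1, 1, '');
        -- Pivot the view directly; nothing is materialized, so the result is always current
        SET @query = N'SELECT Class, ' + @cols + N'
                       FROM (SELECT Class, Cost_Center, Head_Count FROM Manpower_Count) AS src
                       PIVOT (SUM(Head_Count) FOR Cost_Center IN (' + @cols + N')) AS p
                       ORDER BY Class;';
        EXEC sp_executesql @query;
    END
    GO
    -- Usage: run whenever the current allocation matrix is needed
    EXEC usp_Manpower_Allocation_Matrix;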
    If you have any question, feel free to let me know.
    Eric Zhang
    TechNet Community Support

  • How to Extract Data for a Maintenance View, Structure and Cluster Table

    I want to develop 3 reports.
    1) The first report consists of only two fields:
    Table name : V_001_B
    Field Name1: BUKRS
    Table name : V_001_B     
    Field Name2: BUTXT
    V_001_B is a maintenance view.
    For this one I don't find any DataSource.
    For this maintenance view, how do I extract the data?
    2)
    The 2nd report also consists of two fields:
    Table name : CSKSZ
    Field Name1: KOSTL (cost center)
    Table name : CSKSZ
    Field Name2: KLTXT (Description)
    CSKSZ is a structure.
    For this one I don't find any DataSource.
    For this structure, how do I extract the data?
    3)
    For the 3rd report:
    In this report all fields belong to the table BSEG.
    BSEG is a cluster table.
    For this one also I can't find any DataSource;
    I find very few objects in the DataSource.
    For this one, how do I extract the data?
    Please provide me a step-by-step procedure.
    Thanks
    Priya

    Hi Sachin,
    I don't get your point; can you explain it briefly?
    I have two fields for the 1st report:
    BUKRS
    BUTXT
    In the 2nd report:
    KOSTL
    KLTXT
    If I use the 0COSTCENTER_TEXT DataSource,
    I will get the KOSTL field only.
    What about KLTXT?
    Thanks
    Priya
