SXMB_MONI transaction table

Hi,
Does anyone know in which ABAP table the SXMB_MONI messages are stored?
Thanks,
Gowri

Patil,
There is a function module, SXMB_GET_MESSAGE_LIST, through which you can get the messages shown in SXMB_MONI.
Or
Try running SXMB_MONI in debug mode and note the tables it reads, or switch on a DB performance trace and review the database calls.
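If you just want a quick look at the underlying data, the SXMS* tables are the usual starting point. A minimal check, assuming the message master table is SXMSPMAST (verify the table and column names in SE11 for your release):

SELECT MSGGUID, MSGSTATE  -- table and column names are assumptions; check SE11
FROM SXMSPMAST;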
Check this thread also:
In Which Database Table the Messages are Stored in XI
---Satish

Similar Messages

  • Advice needed on designing schema to accommodate multiple transaction tables.

    Hi,
    The attached images shows my current schema. It consists of three transaction tables, a product table and a calendar table.
    - Background -
    The product table 'Q1 Data Set' contains all unique sales. It also contains a number of columns by which I will later filter my pivot tables (e.g. whether the customer of the order is new or returning). This table also contains a column named 'DateOrdered', the date the order was originally placed (but not paid).
    Each sale can be paid for either in a single transaction or across multiple transactions of different transaction types.
    An example of a sale paid in multiple parts would be an order that has three transactions:
    one online (table 'trans_sagepay'),
    one over the phone (table 'trans_epdq'),
    and another by card (table 'trans_manual'). Furthermore, there can be more than one transaction of each type for a sale.
    I have created measures which total the sales in each transaction table.
    Each transaction has a 'transaction_date', which is the date of that individual transaction.
    The calendar is simply a date table that has some friendly formatted columns for laying out pivot tables. An example column is FiscalMonthAbbrv, which displays months as '(04) - January' to accommodate our fiscal year.
    - Problem -
    My problem is that I need the ability to create some tables that have Date Ordered as the rows (listed by Year > Month), and I need to produce other tables that have Transaction Date as the rows.
    Date Ordered works fine; however, the problem comes when I try to create a table based on the transaction date.
    With the current model seen in the attached image I cannot do it, because the transactions have a relationship to Q1 Data Set, and that table has the relationship with the Cal_Trans table. What happens in this scenario is that whenever I set the rows to FiscalMonthAbbrv, the values displayed are based not on transaction date but on date ordered. To explain further: if I have an order A with a DateOrdered of 01/01/2014, but the £100 transaction for that order was made later, on 05/01/2014, that £100 is incorrectly attributed to 01/01/2014.
    To clarify the type of table I am aiming for, see the mock-up below; I do, however, NEED the ability to filter this table using columns found in Q1 Data Set.
    How can I make a schema so that I can use both DateOrdered and TransactionDate? I cannot combine all three transaction tables into one, because each transaction type has columns unique to that specific type.
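    If the transaction tables live in a relational source, one way around the unique-column problem is a view that unions only the columns the date analysis needs, leaving the type-specific columns in the original tables. A sketch; table and column names are assumptions based on the post:

    CREATE VIEW all_transactions AS
    SELECT 'online' AS trans_type, sales_id, transaction_date, amount
    FROM trans_sagepay
    UNION ALL
    SELECT 'phone', sales_id, transaction_date, amount
    FROM trans_epdq
    UNION ALL
    SELECT 'card', sales_id, transaction_date, amount
    FROM trans_manual;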

    Thanks for your suggestions; at the moment I don't have time to prepare a non-confidential copy of the data model. However, I've taken one step forward and one step back!
    First, to clarify: to calculate sales of each transaction type I have created the following measures (I've given them friendly names):
    rev_cash
    rev_online
    rev_phone
    I then have a measure called rev_total which sums together the above measures. This allows me to calculate total revenue, but also to break it down by transaction type.
    With this in mind I revised the schema, based on Visakh's original suggestion, to look like this:
    Using this I was able to produce a table which looked like the one below:
    There were two issues with this:
    If I add the individual measures for each transaction type I get no errors, but as soon as I add the 'Total Sales' measure at the end of the table I get the error "Relationship between tables may be needed". Seemingly, however, the numbers still calculate as expected
    - what is causing this error and how do I remove it?
    I CAN, in this scenario, filter by 'phd', which is a column in the Q1 Data Set table, and it works as expected. I cannot, however, filter by all columns in this table; an example would be 'Word Count'.
    'Word Count' is an integer column; each record in the Q1 Data Set table has a value set for this column.
    I would like to take the column above and add a new measure called 'Total Word Count' (which I have created) which will calculate the total number of words in that monthly period. When I add this, however, I get the same relationship error as above, and it displays the word count total for the entire source table for every row of the pivot table.
    How can I get this schema working so that I can filter by word count and other columns from the product table? It is confusing me how I can filter by one column but not by another in the same table.
    Also, I don't fully understand how I would add a second date table or how it would help my issues.
    Thanks very much for your help.

  • How to create transaction table in R/3

    Hi Gurus,
    I have created a master data table in R/3 with ZEMPID as the primary key. I need to create transaction tables for employees and visitors. How can I relate the two transaction tables to the employee master table while creating the tables in R/3 SE11?
    Note: both the employee and visitor transaction tables have ZEMPID.
    Please let me know how to create the tables with relations.
    Thanks & Regards,
    Bharath

    Hi,
    In a normal database (say Oracle), this referential integrity is taken care of by PK-FK (primary key - foreign key) relationships. As far as OLTP - SAP R/3 - is concerned, these already exist for the tables defined by SAP.
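    In plain SQL terms, the foreign-key relationship you would define in SE11 corresponds to something like this (a generic sketch; table names and column types are illustrative):

    CREATE TABLE zemp_master (
      zempid VARCHAR(10) PRIMARY KEY,
      name   VARCHAR(40)
    );

    CREATE TABLE zemp_trans (
      trans_id INT PRIMARY KEY,
      zempid   VARCHAR(10) NOT NULL REFERENCES zemp_master (zempid),
      trans_dt DATE
    );

    CREATE TABLE zvisitor_trans (
      trans_id INT PRIMARY KEY,
      zempid   VARCHAR(10) NOT NULL REFERENCES zemp_master (zempid),
      trans_dt DATE
    );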
    Speaking about BW:
    1. You will bring master data from R/3 --> BW first (it's always a good practice), which should mean that empid 1000 - 1007 will flow to BW. Also, your OLTP system ensures that there can be no transaction data (due to PK-FK relationships) for any emp. id other than 1000 - 1007.
    2. There could be another case: after the master data has been transferred to BW, due to time lag, both master data and transaction data are created in R/3... meaning thereby that R/3 now has both master data for, say, emp. id 1008 and transaction data as well.
    3. Now the transaction data is extracted from R/3, which will bring the transaction data for 1008, even though the master data for 1008 (loaded in step 1) has not come down to BW.
    4. Such a situation can be controlled in BW; there are options wherein you can control this. The options are:
    - Load data even if there is no master data (this will create a master data record with key values and nulls in the other attributes) - the record for 1008 will get loaded in such cases.
    - Do not load data when there is no master data - the record with emp. id 1008 will not be loaded in such a case.
    Hope this helps.
    Cheers,
    Sumit

  • How to find out the domains related to only Transaction Tables....?

    Hi All,
    I have to find all the domains which are related only to transaction tables; those domains should not be used in, or related to, master tables. Please let me know if there is any way to find this out.
    Akshitha.

    Step 1:
    Select TABNAME from DD09L where TABART = 'APPL1' (transaction data).
    Select DOMNAME from DD03L where TABNAME is among the table names from the above statement.
    This way you will get all the domains used in transaction tables.
    Step 2:
    Select TABNAME from DD09L where TABART = 'APPL0' (master data).
    Select DOMNAME from DD03L where TABNAME is among the table names from the above statement.
    This way you will get all the domains used in master tables.
    Step 3:
    Display all the domains from step 1 that are not in step 2.
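    If you can run the selection as a single SQL statement, the three steps collapse into one (a sketch in Oracle-style syntax; in an ABAP report you would do the same with two SELECTs into internal tables and a filter):

    SELECT DISTINCT f.DOMNAME
    FROM DD03L f
    JOIN DD09L t ON t.TABNAME = f.TABNAME
    WHERE t.TABART = 'APPL1'  -- transaction data
    MINUS
    SELECT DISTINCT f.DOMNAME
    FROM DD03L f
    JOIN DD09L t ON t.TABNAME = f.TABNAME
    WHERE t.TABART = 'APPL0'; -- master data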
    Hopefully this will fulfill your requirement.
    Please reward if useful..
    -Tushar

  • How design transaction table for  accounting system

    Hi Master,
    Sir,
    I am making an accounting system for a garment factory. I have designed the chart of accounts table, but I am confused about how to design the transaction table. Should I make only one table, such as:
    Trans
    Ac_code
    Date
    V_no
    Description
    DR_amount
    CR_amoun
    Chequ_no
    Or should I make it transaction master-detail style:
    Tran_master
    V_no
    Date
    Description
    Ac_code
    Chequ_no
    Tran_detail
    S_no
    V_no
    Ac_code
    Description
    DR_amount
    CR_amoun
    But how do I get a ledger from both styles? Please give me an idea of how to set up the transaction table,
    or give me any sample application or sample code.
    Thanking You
    Aamir

    With only one table: a voucher may have more than one invoice, so I would go with master-detail.
    If you maintain only one table, how will you handle a voucher with more than one invoice?
    Khurram
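    A minimal master-detail sketch in SQL, using the column names from the post (the types, the ledger query, and moving Ac_code to the detail table only are assumptions):

    CREATE TABLE tran_master (
      v_no        INT PRIMARY KEY,
      tran_date   DATE,
      description VARCHAR(100),
      chequ_no    VARCHAR(20)
    );

    CREATE TABLE tran_detail (
      s_no        INT,
      v_no        INT REFERENCES tran_master (v_no),
      ac_code     VARCHAR(10),
      description VARCHAR(100),
      dr_amount   DECIMAL(12,2) DEFAULT 0,
      cr_amount   DECIMAL(12,2) DEFAULT 0,
      PRIMARY KEY (v_no, s_no)
    );

    -- Ledger for one account: every line posted to that account,
    -- with a running balance of debits minus credits.
    SELECT m.tran_date, d.v_no, d.description, d.dr_amount, d.cr_amount,
           SUM(d.dr_amount - d.cr_amount)
             OVER (ORDER BY m.tran_date, d.v_no, d.s_no) AS balance
    FROM tran_detail d
    JOIN tran_master m ON m.v_no = d.v_no
    WHERE d.ac_code = 'AC001'  -- the account you want the ledger for
    ORDER BY m.tran_date, d.v_no, d.s_no;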

  • Transactional Tables in ABAP

    Hello all,
    I want to see which transactional tables are populated with data in SAP, and how many. Is there any standard utility for this, or would it have to be done through an ABAP program? If yes, then how?
    Bundle of thanks,
    Shehryar

    Hi,
    I assume you mean 'which tables are updated by a program/transaction'.
    If that is your question, then you can use ST05,
    the transaction for tracing.
    Switch on the SQL trace and run your transaction.
    Once done, review the trace; you will be able to find out which tables were updated by the program or transaction.
    Thanks,
    Arun

  • How should I join 2 transaction tables in IDT?

    Dear Experts,
    I have a little knowledge of IDT now.
    I'm facing a problem joining 2 transaction tables in IDT, as shown below:
    Please help me.
    Best regards,
    Chenna Yon

    Hi,
    You can enhance the data foundation using derived tables. A derived table is a virtual table in the data foundation that combines other tables using calculations and functions. You can create objects in the business layer on a derived table in the same way that you do for a standard table.
    Open the data foundation in the editor by double-clicking the data foundation name in the Local Projects view.
    Right-click the table header of the table to be the basis for the derived table, and select Insert > Derived Table.
    Enter a name for the derived table that is unique in the data foundation.
    Write the SQL statement to be used for the derived table in the "Enter SQL Expression" pane, joining the two tables on the key (as per your tables: Sales Transaction's primary key, Sales ID, to Order Transaction's foreign key, Sales ID) together with any calculations and restrictions.
    Check for errors by clicking "Check Syntax". If the SQL gives an error, check the syntax and fix it. If the SQL parses correctly, click OK.
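    The derived-table SQL might look like this (a sketch; table and column names are placeholders for your two transaction tables):

    SELECT s.sales_id, s.sales_date, s.amount,
           o.order_id, o.order_date
    FROM sales_transaction s
    JOIN order_transaction o ON o.sales_id = s.sales_id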

  • Process records in a transaction table in real time

    We are currently designing a new system which basically needs to process records from a transaction table. We envisage approximately 100000 records per hour. There is no need to process each transaction independently, as the external process will process all records residing in the table which have not yet been processed.
    We are basically looking at various options:
    1) have the external process run continuously, select all records in the table, process them and delete them and then start the process again
    2) have the external process run continuously, select all records in the table, process them, update a status flag and then start the process again processing only those records with their status not yet updated
    3) fire a trigger for each record launching the external process (if it is not running yet)
    4) have a separate table containing a timestamp which is updated via trigger for every transaction that is inserted in the transaction table. Have the external process run continuously and only process those records which have exceeded the previous timestamp.
    We would appreciate any ideas you may have on how to tune this process, and your views regarding the options mentioned above (or others you might have).
    Thanks a lot.

    user9511474 wrote:
    We are currently designing a new system which basically needs to process records from a transaction table. We are envisaging approximately 100000 records per hour. There is no need to process each transaction independently, as the external process will process all records residing in the table which have not been processed yet.
    My busiest table collects up to 50 million rows per hour (peak periods in the day) that also need to be processed immediately (as a batch) after the hour. I use partitioning. It is very flexible and there are very few (if any) performance knocks.
    The entire data set has to be processed. With a partition, that means a full scan of the table partition - and the ability to do it using parallel query. No additional predicates are needed, except one to have the CBO apply partition pruning. In other words, the predicate enables the CBO to narrow the SQL down to the previous hour's partition only.
    No additional predicates are needed, such as a STATUS flag to differentiate between processed and unprocessed rows, as the entire data set in the partition is unprocessed at that time. (Such a flag approach would not be very scalable in any case.)
    Also, I do not use external processes. It is too expensive, performance-wise, to ship data all the way from the Oracle buffer cache to some external process. And parallel query is also not an option with an external process, as the OCI does not provide the external process with a threading interface in order to hook into each of the data output streams provided by the parallel query clients.
    I stay inside PL/SQL to perform the data processing. PL/SQL is even more capable than Pro*C/C++ and Java and .Net in this regard.
    The execution interface to drive the scheduling of processing is DBMS_JOB. Straightforward and simple to use.
    The basic principle of processing large data volumes in Oracle is to use I/O effectively. Why use indexes when an entire data set needs to be processed? Why perform updates (e.g. updating a status flag) when the data model and its physical implementation can eliminate them?
    I/O is the most expensive operation, and when dealing with a large volume, you need to make sure that every single I/O is actually required to achieve the end result. There's no room to waste I/O, as the performance penalties are hefty.
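    A minimal sketch of the hourly-partition approach (Oracle interval partitioning, 11g+; all names and the parallel degree are illustrative):

    CREATE TABLE txn_log (
      txn_id     NUMBER,
      created_ts DATE NOT NULL,
      payload    VARCHAR2(4000)
    )
    PARTITION BY RANGE (created_ts)
    INTERVAL (NUMTODSINTERVAL(1, 'HOUR'))
    (PARTITION p_start VALUES LESS THAN (DATE '2024-01-01'));

    -- The predicate exists only so the CBO prunes to the previous hour's
    -- partition; that partition is then full-scanned with parallel query.
    SELECT /*+ FULL(t) PARALLEL(t 8) */ *
    FROM txn_log t
    WHERE t.created_ts >= TRUNC(SYSDATE, 'HH24') - 1/24
      AND t.created_ts <  TRUNC(SYSDATE, 'HH24');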

  • Kick off SSIS master package once new data gets into a transactional table

    hello
    I have an SSIS master package. I also have a UI which the user uses to configure date ranges (which I need to use to load the data from the package). After date configuration, when the user clicks the RUN SSIS button in the UI, the configured data gets loaded into a transactional DB table. Once new data is entered into that table, I need to kick off this master SSIS package.
    The question is: is there any method besides using table triggers?
    Please advise.
    Thanks

    Another method is to have an automated job created in SQL Agent which keeps polling the transactional table. You can configure it to run periodically (maybe every 30 mins, etc.). Each time it starts, it needs to check the status of a table which captures whether the user has pressed the RUN SSIS button (you need to include logic in the UI to set a bit in a status table when the button is pressed). If the status is set, it will check whether the transactional table has data (IF EXISTS(SELECT 1 FROM Table)). Once these two are true, it will go to the next step, where it calls the master package. At the end it needs to log somewhere that the data transfer is completed, and it also needs to reset the bit field.
    So the job will have three steps.
    Step 1: check that the bit is set and the table has data, i.e.

    IF (SELECT bitField FROM StatusTable) = 1
       AND EXISTS (SELECT 1 FROM TTable)
        SELECT 1
    ELSE
        RAISERROR('Table not ready', 16, 1)

    Set the job's advanced options to quit the job reporting success on the failure condition, and to go to the next step on the success condition.
    Step 2: execute the master package.
    Step 3: reset the status bit and update the data transfer status in a log table with the date of run, record count, etc.
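    Step 3 might look like this (a sketch; StatusTable and TTable are the assumed names from above, and LoadLog is a hypothetical log table):

    UPDATE StatusTable SET bitField = 0;

    INSERT INTO LoadLog (run_date, record_count)
    SELECT GETDATE(), COUNT(*) FROM TTable;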
    Please mark this as answer if it helps to solve the issue.
    Visakh
    http://visakhm.blogspot.com/
    https://www.facebook.com/VmBlogs

  • Moving the 80 Million records from Conversion database to System Test database (Just for one transaction table) taking too long.

    Hello Friends,
    The background is that I am working as conversion manager; we move the data from Oracle to SQL Server using SSMA, then apply the conversion logic and move the data to System Test, UAT and Production.
    Scenario:
    Moving the 80 million records from the Conversion database to the System Test database (just for one transaction table) is taking too long. Both databases are on the same server.
    The questions are:
    What is the best option?
    If we use SSIS it is very slow, taking 17 hours (sometimes it gets stuck and won't allow us to do any other processing).
    Using my own script (a stored procedure) it takes only 1 hour 40 min. I would like to know whether there is any better process to speed this up, and why SSIS is taking so long.
    When we move the data using SSIS, does it commit after a particular count, or does Microsoft commit all the records together after writing them to the transaction log?
    Thanks
    Karthikeyan Jothi

    http://www.dfarber.com/computer-consulting-blog.aspx?filterby=Copy%20hundreds%20of%20millions%20records%20in%20ms%20sql
    Processing hundreds of millions of records can be done in less than an hour.
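    For a same-server move, one common alternative is a batched INSERT ... SELECT, so each chunk commits separately and the transaction log stays manageable (a sketch; database, table, and key names are placeholders):

    DECLARE @batch INT = 500000, @rows INT = 1;

    WHILE @rows > 0
    BEGIN
        INSERT INTO SystemTest.dbo.TransTable (id, col1, col2)
        SELECT TOP (@batch) s.id, s.col1, s.col2
        FROM Conversion.dbo.TransTable s
        WHERE NOT EXISTS (SELECT 1 FROM SystemTest.dbo.TransTable t
                          WHERE t.id = s.id);

        SET @rows = @@ROWCOUNT;
    END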
    Best Regards,
    Uri Dimant, SQL Server MVP
    http://sqlblog.com/blogs/uri_dimant/

  • Impact of creating a non-clusterd index on a huge transaction table?

    Hello Everyone,
    We have a transaction table containing 10 million records, and every day a million records are inserted. We don't have any clustered index on this table, as it is a transaction table (more than 10 columns are needed to uniquely identify a row). We do have some SPs from which reports are generated using this table. In order to improve the performance of one SP, we created a non-clustered index on this table, and we found a huge performance gain.
    Here comes my question - will this (the creation of a non-clustered index) impact my table's data load performance or other report generation?
    Any suggestions will be appreciated.
    Many Thanks!
    Rajasekhar.

    Hello Rajasekhar,
    First, identify this table's usage and that of its columns. Through the sp_depends system procedure you can identify this table's dependencies.
    Then look at complex queries and their execution plans. You can get recommendations for appropriate missing indexes as output.
    Now you can try to create appropriate indexes. I always suggest limiting the index count if you are inserting/updating a large volume of records. Also, if possible, create a clustered index.
    One more option: you can horizontally partition the table and move the data to multiple filegroups. Based on the range of the data, your query performance can also improve a lot.
    To apply partitioning to an existing table, you should take a backup and recreate the table from scratch.
    Check this link: http://www.mssqltips.com/sqlservertip/2888/how-to-partition-an-existing-sql-server-table/
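    The missing-index recommendations mentioned above can also be pulled straight from the DMVs (a sketch; weigh each suggestion against the extra insert overhead before creating anything):

    SELECT d.statement AS table_name,
           d.equality_columns, d.inequality_columns, d.included_columns,
           s.user_seeks, s.avg_user_impact
    FROM sys.dm_db_missing_index_details d
    JOIN sys.dm_db_missing_index_groups g
         ON g.index_handle = d.index_handle
    JOIN sys.dm_db_missing_index_group_stats s
         ON s.group_handle = g.index_group_handle
    ORDER BY s.user_seeks * s.avg_user_impact DESC;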
    Best Regards, 
    Ashokkumar.
    Ashokkumar

  • Last Used status for Reports, Transactions, Tables ??

    Dear all,
    Kindly let me know how to find the last-used status for reports, transactions, and tables.
    This is very urgent; please do the needful.
    Waiting for the responses..
    Cheers,
    Virendra.

    Here the issue is the same.
    The STAT transaction asks for a time, which is the main hurdle.
    I have been experimenting with the STAT transaction but cannot find a way around the time specification.
    If I don't specify the time, it defaults to a couple of minutes back from the current time, which is not useful for me at all.
    My main requirement is that my seniors want to know which reports, transactions or tables have not been used, and when they were last used.
    So having to specify the time is not useful for me, and writing a report will not gain anything, because the internal SAP report takes time as its main parameter.
    Regards,
    Virendra.

  • Committed SCN in Undo header slot (Transaction table )

    Hi ,
    Whenever a transaction is committed, the commit SCN is recorded with the change vector in the redo logs and in the slot reserved for this transaction in the undo segment header.
    At some later point in time another session may read one of those blocks and discover that the ITL includes a transaction that has committed but not been cleaned out. (It can work this out by cross-checking the ITL entry with the relevant transaction table slot in the undo segment header block.)
    My question is: by the time the next session visits this block, what happens if the undo slot has been overwritten? (This may happen due to lack of undo space, or because the time since the commit has exceeded the undo retention.)
    Please clarify my doubts.
    -thanks
    Vijay

    Vijay.Cherukuri wrote:
    Thanks for your reply.
    Possibly: ORA-01554 transaction concurrency limit reached reason:%s params:%s, %s
    (ORA-01574 is another candidate, but I think that one really is about just the parameter setting.)
    In this case, may we also get the error ORA-30036 (unable to extend segment by string in undo tablespace 'string'), since the undo tablespace is full and not extendable? And is ORA-01554 (transaction concurrency limit reached) also valid, since the maximum number of concurrent transactions has been reached?
    Which error code is more likely to occur in this case?
    Why do you think you need to know the answer to that question?
    Regards
    Jonathan Lewis

  • How to write a query when using transaction tables.

    Hello,
    I am new to databases and am working as a Java developer.
    I am saving data in a transaction table whenever a component is deleted from, or added to, a kit:
    Id  kit_id  comp_id  trans_type
    1   k_01    c_01     added
    2   k_01    c_02     added
    3   k_01    c_01     deleted
    4   k_01    c_05     added
    5   k_02    c_06     added
    6   k_01    c_02     deleted
    I need to write a query to get all the components attached to kit k_01.
    Please help me write the query.
    Thanks,
    RK

    885176 wrote:
    I need to write a query to get all the components attached to kit k_01.
    SELECT <column list>
    FROM <table name>
    WHERE kit_id = 'k_01'
    I'd suggest you spend some time here ==> http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/toc.htm
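    Note that the filter above returns every transaction row for the kit, including components that were later deleted. To list only the components still attached (added more recently than deleted), something like this works, assuming the table is named kit_trans and Id increases over time:

    SELECT comp_id
    FROM kit_trans
    WHERE kit_id = 'k_01'
    GROUP BY comp_id
    HAVING MAX(CASE WHEN trans_type = 'added' THEN id END) >
           COALESCE(MAX(CASE WHEN trans_type = 'deleted' THEN id END), 0);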

  • Reports from Transaction Tables

    Hi Experts,
    Can I develop a report using transaction tables?
    Here I am trying to develop a report based on 2 transaction tables. In the physical layer I joined one table to another using the primary key from one table and the same column, which is a foreign key, in the other table. I then dragged them to the BMM and later to the Presentation layer.
    But when I try to build a report, it shows an error.
    I think I missed something somewhere. Please help me with your inputs on developing a report directly using transaction tables.
    Thanks
    Surpie

    Hi,
    Double-click a physical table object to view its properties.
    Click the General tab to view and set general properties for the physical table object.
    The Table Type drop-down list allows you to specify the physical table object type. Physical Table is the default. You can also define the physical table as a stored procedure or a SELECT statement.
    Select the Table Type "Select".
    When you select either of these options, a text pane below the Table Type drop-down list becomes active, allowing you to enter the stored procedure or the SELECT statement.
    Enter the SQL statement and click OK.
    Select tables have an eyeglasses icon.
    A Select table serves as an opaque view in the repository metadata, but no corresponding view is actually created in the database. The Administration Tool provides a Deploy View feature that creates a corresponding view in the database.
    To deploy a view, right-click the object and select Deploy View. A view deployment window will appear asking for confirmation. Click OK, and the view will be created in the database.
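    The SELECT statement for the opaque view might look like this (a sketch; table and column names are placeholders for your two transaction tables):

    SELECT t1.trans_id, t1.trans_date, t1.amount,
           t2.detail_id, t2.quantity
    FROM trans_header t1
    JOIN trans_detail t2 ON t2.trans_id = t1.trans_id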
    Mark if this is helpful/ correct.
    Thanks
    Azhar
