Confused: the difference between ExtractStruct. and the table in a DataSource?

Run RSA6 in R/3, locate any DataSource and display it: we can see an ExtractStruct. field at the top and many table fields below.   What's the difference between the ExtractStruct. and the table below it in the DataSource?   Are all the entries the same between the two?  Or does the table below usually contain all the standard fields, so that if we want to populate a standard field we have to add it to the ExtractStruct. for the extraction?
Also, double-clicking the ExtractStruct. to open it, we can see an Append Structure button in the toolbar. We know that if we want to add custom fields we need to add them to an Append Structure, but if we just want to populate standard fields, do we only need to add them to the ExtractStruct.?
Thanks!

Hi Kevin,
The fields you see in RSA6 are the fields of the extract structure only; it is a list of all the fields in the ES. Some of the fields might be hidden and some might be used for selection. If you double-click on the ES and go in, you will see the same set of fields. If you edit the DataSource, you can change the selection checkboxes there according to your requirement.
The append structure is for appending the necessary custom fields to the ES, as you rightly mentioned. You cannot change the standard delivered fields. You could do it using an access key, but that is definitely not advisable. Hope this helps.
Thanks and Regards
Subray Hegde

Similar Messages

  • A CONFUSION: The difference between cluster and multi-IP DNS mapping??

              I have a test with the simplest cluster; the admin and managed servers both told me "start
              cluster service"! The two IP addresses use ONE domain name. If one is down and I send a
              request using the domain name, it first looks up the admin server, does not find it, and then
              goes to DNS to search for the other server; after about one minute the server sends the
              response to me!
              But when I do not configure a cluster and only start two admin servers and give them the
              same DNS name, it behaves the same as the cluster!
              I don't understand the difference between a cluster and multi-IP DNS mapping?
              

              <[email protected]> wrote in message news:3b16f1db$[email protected]..
              | I have a test with the simplest cluster; the admin and managed servers both told me "start
              | cluster service"! The two IP addresses use ONE domain name. If one is down and I send a
              | request using the domain name, it first looks up the admin server, does not find it, and
              | then goes to DNS to search for the other server; after about one minute the server sends
              | the response to me!
              This is DNS fail-over.
              | But when I do not configure a cluster and only start two admin servers and give them the
              | same DNS name, it behaves the same as the cluster!
              | I don't understand the difference between a cluster and multi-IP DNS mapping?
              It is totally different. Regarding the failover example you've given: of course the 2 servers
              can have identical files maintained under public_html, and DNS will fail requests for
              "foo.html" over to the other server if the first one is down. But if you have something saved
              in a session, say a shopping cart, it's totally lost. With WLS clustering, the session is
              replicated to the other server in the cluster, so you can just check out; you don't have to
              order again.
              This is just a simple example of WLS cluster session replication. WLS supports EJB, RMI
              objects and JMS (6.0) clustering.. check the doc at
              http://www.weblogic.com/docs50/cluster/index.html
              

  • What is the difference between sheets and tables?

    Seriously. I've used Numbers for many projects, but the difference just isn't apparent to me.

    Hi Michael,
    Sounds like it's time to crack open the manual and spend a little quality time with Chapter 1.
    Apple provides a couple of excellent tools for understanding and using Numbers, the Numbers '09 User Guide and the iWork Formulas and Functions User Guide. Both are fully searchable PDF documents, and both are available for download via the Help menu in Numbers.
    I'd recommend reading at least the first three chapters of the Numbers Guide (about 60 pages in all), then dipping into the rest as you need it.
    The F&F guide is a useful reference when you're writing formulas, or when you're trying to figure out what's going on in the formulas in a template produced by someone else.
    Regards,
    Barry
    PS: The information regarding Sheets, Tables, etc. may be found in the article The Numbers Window, starting on the second page of Chapter 1, Numbers Tools and Techniques.

  • What is the difference between infocube and fact table?

    hi bw gurus,
    what is the difference between infocube and fact table?
    thanks in advance
    bye

    The fact table contains only key figures and the foreign keys of the dimension IDs (DIM IDs).
    An InfoCube consists of a fact table surrounded by dimension tables. Each dimension table contains the primary keys of the DIM IDs and the SIDs, which link to the master data.
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/6ce7b0a4-0b01-0010-52ac-a6e813c35a84

  • What is the difference between tkprof and explainplan

    Hi,
    what is the difference between tkprof and explainplan.

    Execution Plans and the EXPLAIN PLAN Statement
    Before the database server can execute a SQL statement, Oracle must first parse the statement and develop an execution plan. The execution plan is a task list of sorts that decomposes a potentially complex SQL operation into a series of basic data access operations. For example, a query against the dept table might have an execution plan that consists of an index lookup on the deptno index, followed by a table access by ROWID.
    The EXPLAIN PLAN statement allows you to submit a SQL statement to Oracle and have the database prepare the execution plan for the statement without actually executing it. The execution plan is made available to you in the form of rows inserted into a special table called a plan table. You may query the rows in the plan table using ordinary SELECT statements in order to see the steps of the execution plan for the statement you explained. You may keep multiple execution plans in the plan table by assigning each a unique statement_id. Or you may choose to delete the rows from the plan table after you are finished looking at the execution plan. You can also roll back an EXPLAIN PLAN statement in order to remove the execution plan from the plan table.
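    For example, here is a minimal sketch (assuming the standard PLAN_TABLE has been created, for instance via utlxplan.sql, and using the DEPT table from the example above):
    EXPLAIN PLAN SET STATEMENT_ID = 'demo' FOR
      SELECT * FROM dept WHERE deptno = 10;
    -- format the saved plan (DBMS_XPLAN is available in Oracle 9i and later) ...
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY('PLAN_TABLE', 'demo'));
    -- ... or query the plan table directly
    SELECT id, parent_id, operation, options, object_name
      FROM plan_table
     WHERE statement_id = 'demo'
     ORDER BY id;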
    The EXPLAIN PLAN statement runs very quickly, even if the statement being explained is a query that might run for hours. This is because the statement is simply parsed and its execution plan saved into the plan table. The actual statement is never executed by EXPLAIN PLAN. Along these same lines, if the statement being explained includes bind variables, the variables never need to actually be bound. The values that would be bound are not relevant since the statement is not actually executed.
    You don’t need any special system privileges in order to use the EXPLAIN PLAN statement. However, you do need to have INSERT privileges on the plan table, and you must have sufficient privileges to execute the statement you are trying to explain. The one difference is that in order to explain a statement that involves views, you must have privileges on all of the tables that make up the view. If you don’t, you’ll get an “ORA-01039: insufficient privileges on underlying objects of the view” error.
    The columns that make up the plan table are as follows:
    Name Null? Type
    STATEMENT_ID VARCHAR2(30)
    TIMESTAMP DATE
    REMARKS VARCHAR2(80)
    OPERATION VARCHAR2(30)
    OPTIONS VARCHAR2(30)
    OBJECT_NODE VARCHAR2(128)
    OBJECT_OWNER VARCHAR2(30)
    OBJECT_NAME VARCHAR2(30)
    OBJECT_INSTANCE NUMBER(38)
    OBJECT_TYPE VARCHAR2(30)
    OPTIMIZER VARCHAR2(255)
    SEARCH_COLUMNS NUMBER
    ID NUMBER(38)
    PARENT_ID NUMBER(38)
    POSITION NUMBER(38)
    COST NUMBER(38)
    CARDINALITY NUMBER(38)
    BYTES NUMBER(38)
    OTHER_TAG VARCHAR2(255)
    PARTITION_START VARCHAR2(255)
    PARTITION_STOP VARCHAR2(255)
    PARTITION_ID NUMBER(38)
    OTHER LONG
    DISTRIBUTION VARCHAR2(30)
    There are other ways to view execution plans besides issuing the EXPLAIN PLAN statement and querying the plan table. SQL*Plus can automatically display an execution plan after each statement is executed. Also, there are many GUI tools available that allow you to click on a SQL statement in the shared pool and view its execution plan. In addition, TKPROF can optionally include execution plans in its reports as well.
    Trace Files and the TKPROF Utility
    TKPROF is a utility that you invoke at the operating system level in order to analyze SQL trace files and generate reports that present the trace information in a readable form. Although the details of how you invoke TKPROF vary from one platform to the next, Oracle Corporation provides TKPROF with all releases of the database and the basic functionality is the same on all platforms.
    The term trace file may be a bit confusing. More recent releases of the database offer a product called Oracle Trace Collection Services. Also, Net8 is capable of generating trace files. SQL trace files are entirely different. SQL trace is a facility that you enable or disable for individual database sessions or for the entire instance as a whole. When SQL trace is enabled for a database session, the Oracle server process handling that session writes detailed information about all database calls and operations to a trace file. Special database events may be set in order to cause Oracle to write even more specific information—such as the values of bind variables—into the trace file.
    SQL trace files are text files that, strictly speaking, are human readable. However, they are extremely verbose, repetitive, and cryptic. For example, if an application opens a cursor and fetches 1000 rows from the cursor one row at a time, there will be over 1000 separate entries in the trace file.
    TKPROF is a program that you invoke at the operating system command prompt in order to reformat the trace file into a format that is much easier to comprehend. Each SQL statement is displayed in the report, along with counts of how many times it was parsed, executed, and fetched. CPU time, elapsed time, logical reads, physical reads, and rows processed are also reported, along with information about recursion level and misses in the library cache. TKPROF can also optionally include the execution plan for each SQL statement in the report, along with counts of how many rows were processed at each step of the execution plan.
    The SQL statements can be listed in a TKPROF report in the order of how much resource they used, if desired. Also, recursive SQL statements issued by the SYS user to manage the data dictionary can be included or excluded, and TKPROF can write SQL statements from the traced session into a spool file.
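    As a minimal sketch of the workflow (the trace file name and the TKPROF options shown here are illustrative and vary by platform and release):
    ALTER SESSION SET SQL_TRACE = TRUE;    -- start writing a SQL trace file for this session
    SELECT COUNT(*) FROM dept;             -- run the statements you want to analyze
    ALTER SESSION SET SQL_TRACE = FALSE;   -- stop tracing
    Then, at the operating system prompt, reformat the trace file found in the user_dump_dest directory, for example:
    tkprof ora_12345.trc report.txt explain=scott/tiger sys=no sort=fchela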
    How EXPLAIN PLAN and TKPROF Aid in the Application Tuning Process
    EXPLAIN PLAN and TKPROF are valuable tools in the tuning process. Tuning at the application level typically yields the most dramatic results, and these two tools can help with the tuning in many different ways.
    EXPLAIN PLAN and TKPROF allow you to proactively tune an application while it is in development. It is relatively easy to enable SQL trace, run an application in a test environment, run TKPROF on the trace file, and review the output to determine if application or schema changes are called for. EXPLAIN PLAN is handy for evaluating individual SQL statements.
    By reviewing execution plans, you can also validate the scalability of an application. If the database operations are dependent upon full table scans of tables that could grow quite large, then there may be scalability problems ahead. On the other hand, if large tables are accessed via selective indexes, then scalability may not be a problem.
    EXPLAIN PLAN and TKPROF may also be used in an existing production environment in order to zero in on resource intensive operations and get insights into how the code may be optimized. TKPROF can further be used to quantify the resources required by specific database operations or application functions.
    EXPLAIN PLAN is also handy for estimating resource requirements in advance. Suppose you have an ad hoc reporting request against a very large database. Running queries through EXPLAIN PLAN will let you determine in advance if the queries are feasible or if they will be resource intensive and will take unacceptably long to run.

  • What's the difference between EPMA and Essbase Studio

    Aren't they both used for designing applications?
    And does one really need Essbase Studio?

    Whats the difference between EPMA and Essbase Studio ^^^It is a bit confusing, isn't it? As far as I understand EPMA's role, it's a way to share common dimensions and data across multiple Oracle EPM products including Essbase, Planning, and HFM. I actually don't know if other products live in EPMA. What EPMA does not have is a nice way of sourcing or manipulating dimension and fact tables (or even files). Oh, there are interface tables, but you will write code/use the EPMA dimension utility to load dimensions. I have to say I've never tried to load data through EPMA so someone else is going to have to comment on that. Once dimensions are built, you can deploy Essbase (and other products) from common and database-specific dimensions.
    Studio is the tool you would use to go against a data warehouse, or something awfully close to a data warehouse to build Essbase databases. It's for Essbase only -- Planning, HFM, etc. are not the targets of Studio's output.
    What gets confusing/intriguing is that when EPMA deploys Essbase apps, it uses Studio under the covers to do so. Thus the interesting implementations where people use Studio (which is way more flexible than EPMA because it is a development tool as opposed to a dimension/data management app) to read the EPMA interface tables (and for all I know, the base EPMA tables) and build Essbase apps that way.
    If you think there is overlap, I would agree with you, but you can see they really aren't the same.
    And do one really need Essbase Studio?^^^You don't have to use Studio to build Essbase databases. You can go hog-wild in EAS with SQL load rules if you want, although at some point it probably will be easier/more straightforward to build it in Studio. Although Studio has been mentioned as the replacement for EAS for some time, I suspect the effort required to build a database in Studio will keep EAS around for quite a while or Studio Lite will have to come out. There's a lot to be said (sometimes) for the amazing flexibility EAS/Essbase have -- Studio requires a more methodical approach and EPMA has a very formal set of standards and methods.
    Whew, I'm sorry, I wrote a book about this and I'll bet you get a bunch of differing opinions on this.
    If you're interested in Studio, you might do well to pick up a copy of Glenn Schwartzberg's "Look Smarter Than You Are with Oracle Essbase Studio 11". I receive not a penny from its sale although I was one of the copy editors (hey, I got a mention in the acknowledgements). It's a good book and an excellent introduction to the tool.
    Regards,
    Cameron Lackpour

  • What is the difference between PUSH and FETCH

    I am a little confused. I use my iPhone for both my personal POP email accounts and my business Exchange account. I am trying to save as much battery as I can, so I turned off push and set everything to manual... but now when I try to get my mail, it either says "connecting" or "checking for new mail" at the bottom of the screen and nothing happens from there.
    What is the best way to set this up for the most battery life? I don't need it to automatically download emails; just when I open the email account would be nice.
    I also noticed that when I delete an email from my iPhone on my Exchange account, it also deletes it on my desktop at work... I need to turn this off... is it possible?

    Hi maxum25,
    The difference between push and fetch is that:
    When using push, the server sends a signal to the iphone and lets it know that an email is coming its direction. Kind of like receiving a call. The iphone does not need to do anything except receive the email.
    When using fetch, the iphone has to wake up every so often and send a request to the server to see if there is any new email waiting for it on the server to download. This takes more time because the iphone sends a request, the server says yes there is some, the iphone says ok give me the new email.
    Now the exchange email uses active sync to keep all changes on the exchange server and mobile device in sync. This is automatic and is the nature of exchange and active sync. In order to keep this from happening you would need to talk to your IT dept. and see if they have an imap or pop alternative. Even using imap reflects the changes back to the server.
    Hope this helps.

  • What is the difference between exists and in

    hi all
    if i have these queries
    1- select ename from emp where ename in ( select ename from emp where empno=10)
    and
    2- select ename from emp where exists ( select ename from emp where empno=10)
    What is the difference between EXISTS and IN? Is it only that when I use IN I have to supply the column name, or is there more to it? I mean, in a complex SQL query, will they give the same answer?
    Thanks

    You get two entirely different result sets that may be the same. Haah! What do I mean by that.
    SQL> select table_name from user_tables;
    TABLE_NAME
    BAR
    FOO
    2 rows selected.
    SQL> select table_name from user_tables where table_name in (select table_name from user_tables where table_name = 'FOO');
    TABLE_NAME
    FOO
    1 row selected.
    SQL> select table_name from user_tables where exists(select table_name from user_tables where table_name = 'FOO');
    TABLE_NAME
    BAR
    FOO
    2 rows selected.
    So, why is this? The WHERE EXISTS means 'if the following is true', much like WHERE 1=1 is always true and WHERE 1=2 is always false. In this case, WHERE EXISTS could be TRUE or FALSE, depending on the subquery.
    WHERE EXISTS can be useful for something like testing if we have data, without actually having to return columns.
    So, if you want to see if an employee exists you might say
    SELECT 1 FROM DUAL WHERE EXISTS( select * from emp where empid = 10);
    If there is a row in emp for empid=10, then you get back 1 from dual;
    This is what I call an 'optimistic' lookup because the WHERE EXISTS ends as soon as there is a hit. It does not care how many - only that at least one exists. It is optimistic because it will continue processing the table lookup until either it hits or reaches the end of the table - for a non-indexed query.
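    In practice the subquery is usually correlated with the outer query, which is where EXISTS really differs from IN. A sketch, assuming the usual EMP/DEPT demo tables:
    -- IN: compare a column against the set of values returned by the subquery
    SELECT d.deptno, d.dname
      FROM dept d
     WHERE d.deptno IN (SELECT e.deptno FROM emp e);
    -- EXISTS: keep the outer row if the correlated subquery finds at least one matching row
    SELECT d.deptno, d.dname
      FROM dept d
     WHERE EXISTS (SELECT 1 FROM emp e WHERE e.deptno = d.deptno);
    Both return the departments that have at least one employee. The uncorrelated EXISTS in the example above returns every row because its subquery is either true for all outer rows or false for all of them.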

  • What is the difference between TO_CHAR and TO_DATE()?

    Hi everybody,
    I am facing a problem in my system. It is quite urgent. Can you explain "What is the difference between TO_CHAR and TO_DATE()?"
    According to the user's requirement, they need to generate a code with the format "YYMRRR":
    YY = last two digits of the current year
    M = current month (10 = 'A', 11 = 'B', 12 = 'C')
    RRR = sequence number
    Example: we have table USER(USER_ID , USER_NAME , USER_CODE)
    EX: SYSDATE = "05-29-2012" MM-DD-YYYY
    IF 10
    ROW USER_ID , USER_NAME , USER_CODE
    1- UID01 , AAAAA , 125001
    2- UID02 , AAAAA , 125002
    10- UID010 , AAAAA , 12A010
    This is the original script. It runs fine on my local system and gives the right format, but it gives the wrong format on production:
    12A010 (Right) => 11C010 (Wrong).
    SELECT TO_CHAR(SYSDATE, 'YY') || DECODE( TO_CHAR(SYSDATE, 'MM'),'01','1', '02','2', '03','3', '04','4', '05','5', '06','6', '07','7', '08','8','09','9', '10','A', '11','B', '12','C') ||     NVL(SUBSTR(MAX(USER_CODE), 4, 3), '000') USER_CODE FROM TVC_VSL_SCH                                                       
         WHERE TO_CHAR(SYSDATE,'YY') = SUBSTR(USER_CODE,0,2)                         
         AND TO_CHAR(SYSDATE,'MM') = DECODE(SUBSTR(USER_CODE,3,1),'1','01',          
              '2','02', '3','03', '4','04', '5','05',          
              '6','06', '7','07', '8','08', '9','09',          
              'A','10', 'B','11', 'C','12')                    
    I want to know "What is the difference between TO_CHAR and TO_DATE()?".

    Try the following select (note that t must also appear in the FROM clause, and the non-aggregated t.code then needs a GROUP BY):
    with t as
    (select TO_CHAR(SYSDATE, 'YY') ||
             DECODE(TO_CHAR(SYSDATE, 'MM'),
                    '01', '1',
                    '02', '2',
                    '03', '3',
                    '04', '4',
                    '05', '5',
                    '06', '6',
                    '07', '7',
                    '08', '8',
                    '09', '9',
                    '10', 'A',
                    '11', 'B',
                    '12', 'C') as code
        from dual)
    SELECT t.code || NVL(SUBSTR(MAX(USER_CODE), 4, 3), '000') USER_CODE
      FROM TVC_VSL_SCH, t
     WHERE SUBSTR(USER_CODE, 1, 3) = t.code
     GROUP BY t.code
    And yes, you need to check the date and time settings on your production server.
    good luck
    Edited by: Galbarad on May 29, 2012 3:56 AM
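    As for the original question: TO_CHAR converts a DATE (or a number) into a character string using a format mask, while TO_DATE parses a character string into a DATE value. A quick illustration:
    SELECT TO_CHAR(SYSDATE, 'YYYY-MM-DD HH24:MI') AS date_as_string,   -- DATE     -> VARCHAR2
           TO_DATE('2012-05-29', 'YYYY-MM-DD')    AS string_as_date    -- VARCHAR2 -> DATE
      FROM dual;
    The script above only uses TO_CHAR, so a wrong code on production points to the server's date (SYSDATE) rather than to a conversion problem.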

  • What is the difference between upgradation and migration.

    Hi Guru's
    what is the difference between upgradation and migration?
    Actually I am involved in an upgrade project. My role here is:
    1. First I check the queries in 3.5, save the query and transport it, and also check the query in the BEx Analyzer.
    2. Go to BI 7.0, find the queries, give the query name and save the query.
    3. Once the query is saved, come back to 3.5 and open the query; it will not open. That is my job here. Then go to 7.0 and check the query in the Analyzer as well.
    I am a little bit confused: how does the query come into 7.0, and why are we saving the queries in both 3.5 and 7.0 when the queries are already available in 7.0? Why are we doing this work?
    Can I know which objects have to be upgraded, whether it is necessary, and if so how I can upgrade them: InfoObjects, transfer rules, transfer structures, InfoSources, DataSources, update rules, ODS objects, cubes.
    Points will be assigned.
    Thanks & Regards
    prabhavathi

    Hi,
    I was talking in a general sense, not at the query level.
    If you are talking about migration at that level, meaning as part of a larger upgrade (in your case 3.x to 7.0), there may be many places where you need to do this kind of activity.
    For example, migration to the new data flow, migration of Web templates from BW 3.x to NetWeaver 2004s, etc.
    Hope this helps.
    Thanks,
    JituK

  • What is the difference between count(*) and count(1)

    what is the difference between count(*) and count(1)

    Hi,
    903830 wrote:
    some say count(1) is faster and some say count(*), i am confused about count function?
    In the link provided by Prakash:
    prakash wrote:
    http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:1156159920245
    You can read :
    Followup   August 31, 2001 :
    I'll have to guess, since you don't say, that you are using 7.x and before when count(*) and count(1) were different (and count(1) was slower). In all releases of the databases for the last 4-5 years, they are the same.
    Don't waste your time on that.
    ;-)
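    A quick check (assuming the standard EMP demo table) shows they return the same result:
    -- both expressions count every row, including rows that contain NULLs
    SELECT COUNT(*) AS cnt_star, COUNT(1) AS cnt_one FROM emp;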

  • What is the difference between Service and Component

    What is the difference between Service and Component?

    Generally, the implementation of a service will be comprised of many components, each of which will be comprised of many objects.
    Services should correspond to cohesive collections of use cases (eg an online bank), components are at a finer granularity (eg a table of numbers).
    Pete

  • What's the Difference Between OLAP and OLTP?

    HI,
    What's the difference between OLAP and OLTP ? and which one is Best?
    -Arun.M.D

    Hi,
       The big difference when designing for OLAP versus OLTP is rooted in the basics of how the tables are going to be used. I'll discuss OLTP versus OLAP in context to the design of dimensional data warehouses. However, keep in mind there are more architectural components that make up a mature, best practices data warehouse than just the dimensional data warehouse.
    Corporate Information Factory, 2nd Edition by W. H. Inmon, Claudia Imhoff, Ryan Sousa
    Building the Data Warehouse, 2nd Edition by W. H. Inmon
    With OLTP, the tables are designed to facilitate fast inserting, updating and deleting of rows of information with each logical unit of work. The database design is highly normalized, usually to at least 3NF. Each logical unit of work in an online application will have a relatively small scope with regard to the number of tables that are referenced and/or updated. Also, the online application itself handles the majority of the work for joining data to facilitate the screen functions, which means the user doesn't have to worry about traversing large data relationship paths. There is a heavy dose of lookup/reference tables and much focus on referential integrity between foreign keys. The physical design of the database needs to take into consideration the need for inserting rows when deciding on physical space settings. A good book for getting a solid base understanding of modeling for OLTP is The Data Modeling Handbook: A Best-Practice Approach to Building Quality Data Models by Michael C. Reingruber and William W. Gregory.
    Example: Let's say we have a purchase order management system. We need to be able to take orders for our customers, and we need to be able to sell many items on each order. We need to capture the store that sold the item, the customer that bought the item (and where we need to ship things and where to bill), and we need to make sure that we pull from the valid store items to get the correct item number, description and price. Our OLTP data model will contain a CUSTOMER_MASTER, a CUSTOMER_ADDRESS_MASTER, a STORE_MASTER, an ITEM_MASTER, an ITEM_PRICE_MASTER, a PURCHASE_ORDER_MASTER and a PURCHASE_ORDER_LINE_ITEM table. Then we might have a series of M:M relationships; for example, an ITEM might have a different price for specific time periods at specific stores.
    With OLAP, the tables are designed to facilitate easy access to information. Today's OLAP tools make the job of developing a query very easy. However, you still want to minimize the extensiveness of the relational model in an OLAP application. Users don't have the will and means to learn how to work through a complex maze of table relationships, so you'll design your tables with a high degree of denormalization. The most prevalent design scheme for OLAP is the star schema, popularized by Ralph Kimball. The star schema has a FACT table that contains the elements of data that are used arithmetically (counting, summing, averaging, etc.). The FACT table is surrounded by lookup tables called dimensions. Each dimension table provides a reference to the things that you want to analyze by. A good book to understand how to design OLAP solutions is The Data Warehouse Toolkit: Practical Techniques for Building Dimensional Data Warehouses by Ralph Kimball.
    Example: let's say we want to see some key measures about purchases. We want to know how many items and the sales amount that are purchased by what kind of customer across which stores. The FACT table will contain a column for Qty-purchased and Purchase Amount. The DIMENSION tables will include the ITEM_DESC (contains the item_id & Description), the CUSTOMER_TYPE, the STORE (Store_id & store name), and TIME (contains calendar information such as the date, the month_end_date, quarter_end_date, day_of_week, etc).
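    A minimal star-schema sketch of that example (the table and column names here are illustrative only):
    -- dimension tables: the things you analyze by
    CREATE TABLE item_dim          (item_id NUMBER PRIMARY KEY, description VARCHAR2(100));
    CREATE TABLE customer_type_dim (customer_type_id NUMBER PRIMARY KEY, customer_type VARCHAR2(30));
    CREATE TABLE store_dim         (store_id NUMBER PRIMARY KEY, store_name VARCHAR2(60));
    CREATE TABLE time_dim          (time_id NUMBER PRIMARY KEY, calendar_date DATE, month_end_date DATE, quarter_end_date DATE, day_of_week VARCHAR2(10));
    -- fact table: the additive measures, keyed by the surrounding dimensions
    CREATE TABLE purchase_fact (
      item_id          NUMBER REFERENCES item_dim,
      customer_type_id NUMBER REFERENCES customer_type_dim,
      store_id         NUMBER REFERENCES store_dim,
      time_id          NUMBER REFERENCES time_dim,
      qty_purchased    NUMBER,
      purchase_amount  NUMBER(12,2)
    );
    A typical query then sums the measures in the fact table grouped by attributes of the dimensions, for example purchase amount by store name and customer type.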

  • What's the difference between boxing and unboxing?

    What's the difference between boxing and unboxing? I'm a bit confused.
    Is autoboxing the same as boxing?
    This is what I know so far:
    - This is boxing, I think:
    int[] arrayset = {1, 2, 3};
    but I don't understand what unboxing is. Can someone explain it to me?

    I did a little research, but please correct me if I'm wrong. Is this similar to unboxing:
    System.out.println(intArray[0] + intArray[1] + intArray[2]);
    Only if intArray is declared as Integer[] intArray ... Then the Integers at intArray[0] etc. are unboxed into int primitives before adding.
    Did you try that before you posted it? Autoboxing is applied only to an individual primitive/wrapper. Arrays and collections aren't subject to autoboxing/unboxing. Basically, the compiler will fill in constructs such as Integer.valueOf() for you; it doesn't go as far as generating loops and new arrays for you.
    OP, as Peter said, the difference between boxing and unboxing is merely one of direction. Autoboxing wraps a primitive up for you; auto-unboxing extracts the primitive from a wrapper.
    I am fully aware of that; the OP's example contained references to array elements, but it was not clear whether that array was declared as Integer[] or int[], hence my answer.
    And just for the record:
    Integer[] intArray = new Integer[] {1, 2, 3}; // autoboxing
    System.out.println(intArray[1] + intArray[2]); // auto-unboxing
    // while ...
    int[] intArray = new int[] {1, 2, 3}; // no boxing
    System.out.println(intArray[1] + intArray[2]); // no unboxing
    is what I meant.

  • Find the difference between two internal tables

    How can I see the difference between two internal tables?
    The requirement is as follows:
    1. We have a transparent table which stores the employee data with EMP ID as the key.
    2. We load the transparent table data into an internal table (B).
    3. We get data from the legacy system as a file and it gets loaded into another internal table (A) (this also has the same EMP ID key and will have the latest additions/updates to those employees).
    Now we need to separate this data into three internal tables: Inserted (I), Deleted (D) and Updated (U).
    We want to do the following:
    I = A - B
    D = B - A
    Both A and B will have around 40k records, hence we are trying to avoid looping.
    Please suggest the best option for us.
    Thank you in advance.
    Raghavendra

    >
    RAGHAV URAL wrote:
    > how can i see the difference between two interal tables?
    > The requirement is as follows
    >
    > 1. We have a transparent table, which stores the employee data with EMP ID as key.
    > 2. We load the transp table data into a interal table (B).
    > 3. We get data from legecy system as file and it gets loaded into another internal table (A) (this also has the same EMP ID key and this will have latest addition/update to those emplyees).
    >
    > Now we need to seperate out these data into three interal table Inserted (I), Deleted (D) and Updated (U).
    >
    > We want to do followign things
    > I = A - B
    > D = B - A
    >
    > Both A and B will have around 40k records. Hence we are trying to avoid the looping.
    >
    > Please suggest the best option for us.
    >
    > Thank you in advance.
    > Raghavendra
    Hi Raghavendra,
      Currently, as far as I know, these operations are only possible through LOOPs. But looping can be really fast here if you properly use SORT, READ ... BINARY SEARCH and field symbols. I would say:
    " assuming the key field is called EMPID and that A, B, I and D all have the same line type
    FIELD-SYMBOLS: <WA_A> LIKE LINE OF A,
                   <WA_B> LIKE LINE OF B.
    SORT: A BY EMPID, B BY EMPID.
    Steps for Insert:-
    LOOP AT A ASSIGNING <WA_A>.
    " a row that exists in A but not in B is new
      READ TABLE B WITH KEY EMPID = <WA_A>-EMPID
           BINARY SEARCH TRANSPORTING NO FIELDS.
      IF SY-SUBRC NE 0.
        APPEND <WA_A> TO I.
      ENDIF.
    ENDLOOP.
    Steps for Delete:-
    LOOP AT B ASSIGNING <WA_B>.
    " a row that exists in B but not in A was deleted
      READ TABLE A WITH KEY EMPID = <WA_B>-EMPID
           BINARY SEARCH TRANSPORTING NO FIELDS.
      IF SY-SUBRC NE 0.
        APPEND <WA_B> TO D.
      ENDIF.
    ENDLOOP.
    Regards,
    Ravi.
