Comparing Data within Tables in a Database

Hi All,
I need to write a PL/SQL Procedure to Compare Data within Tables in a Database.
For example: there are two tables, A and B. A has an id of 101 and B also has an id of 101. I need to compare the data. If no match is found between the tables, I need to have the table name and column name printed. Please help.

Assuming that you want to compare tables with an identical structure, here is one query that will give you all the records that exist in TABLE_A but not in TABLE_B:
select 'exists in A but not in B' descr, a.*
from TABLE_A a
MINUS
select 'exists in A but not in B' descr, b.*
from TABLE_B b
On the other hand, if you want the opposite result, you can transform the query above into:
select 'exists in B but not in A' descr, b.*
from TABLE_B b
MINUS
select 'exists in B but not in A' descr, a.*
from TABLE_A a
Finally, if you want all the records that are missing from one table or the other, you can do a UNION of the queries above (the parentheses matter, because MINUS and UNION have equal precedence and are evaluated left to right):
(select 'exists in A but not in B' descr, a.*
from TABLE_A a
MINUS
select 'exists in A but not in B' descr, b.*
from TABLE_B b)
UNION
(select 'exists in B but not in A' descr, b.*
from TABLE_B b
MINUS
select 'exists in B but not in A' descr, a.*
from TABLE_A a)
Now the question is: which records do you want to print?
Also, these queries might run quite slowly if TABLE_A and TABLE_B are large tables.
I hope this will give you some ideas.
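Since the original request was for a PL/SQL procedure that prints the table and column names where no match is found, here is a minimal sketch built on the MINUS idea above. It is only an illustration: it assumes the two tables are literally named TABLE_A and TABLE_B, that they share the same column list, and that SERVEROUTPUT is enabled; it also compares each column independently, which may or may not be the comparison you actually need.
-- Report, column by column, values present in TABLE_A but missing from TABLE_B.
-- Run with: SET SERVEROUTPUT ON
DECLARE
  v_diff NUMBER;
BEGIN
  FOR c IN (SELECT column_name
              FROM user_tab_columns
             WHERE table_name = 'TABLE_A'
             ORDER BY column_id) LOOP
    EXECUTE IMMEDIATE
      'SELECT COUNT(*) FROM (SELECT ' || c.column_name || ' FROM TABLE_A' ||
      ' MINUS SELECT ' || c.column_name || ' FROM TABLE_B)'
      INTO v_diff;
    IF v_diff > 0 THEN
      DBMS_OUTPUT.PUT_LINE('TABLE_A.' || c.column_name ||
                           ': ' || v_diff || ' value(s) not found in TABLE_B');
    END IF;
  END LOOP;
END;
/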

Similar Messages

  • Automation of transferring Excel data to a table in a database

    Hi,
    I am new to SSIS. I need to automate the process of transferring data from Excel to a table in a database. I will get the report daily, so how can I make the process automatic without manually pointing the Excel source at the new file every day?
    1. The Excel file name changes every day.
    2. Every day the Excel file must be loaded into the same table.
    3. While loading the Excel file I need to get rid of some rows. Is that possible in an automated process?
    4. Can the automated process add a column with the current date while transferring the data?
    Thanks in advance....
    BALUSUSRIHARSHA

    Hi BALUSUSRIHARSHA,
    After testing the issue in my environment, we can refer to the following steps to achieve your requirement:
    Create two variables: VarFolderPath stores the folder path in which the files exist, and VarFileName stores the date portion of the file name, built with an expression like the one below:
    (DT_STR, 4, 1252) DATEPART("yy", GETDATE()) + "-" + RIGHT("0" + (DT_STR, 2, 1252) DATEPART("mm", GETDATE()), 2) + "-" + RIGHT("0" + (DT_STR, 2, 1252) DATEPART("dd", GETDATE()), 2)
    In the Data Flow Task, drag in an Excel Source that uses the Excel Connection Manager.
    Add the property expression below to the Excel Connection Manager:
    Property: ExcelFilePath      Expression: @[User::VarFolderPath] + @[User::VarFileName]
    Drag in a Conditional Split Transformation connected to the Excel Source, then add conditions to filter out the unwanted rows.
    Drag in a Derived Column Transformation connected to the Conditional Split Transformation, then add a derived column with the <add as new column> option and GETDATE() as the expression.
    Drag in an OLE DB Destination connected to the Derived Column Transformation, then configure the table that will store the data.
    The following screenshots are for your reference:
    If there are any other questions, please feel free to ask.
    Thanks,
    Katherine Xiong
    TechNet Community Support

  • Data collection table in database

    Hi,
    In our SAP ME WIP database we are not able to see the parameter values for data collection as entered by the operator.
    We tried searching in the WIP DB, as the system rule setting for 'Store Data Collection Results in ODS' is false.
    Are there any other settings, apart from those mentioned in the "how-to guide" for data collection, that store data in the ODS database?
    Or are we going wrong somewhere?
    Attached is the recommended landscape that we have used in our project.
    Regards
    Mansi
    P.S.

    I think this post is in the wrong forum, but what table are you looking in for your data? 
    Did you figure out the answer to this problem?

  • How to export data within tables in Oracle 10g

    Since I'm using Oracle 10g, I wanted to know whether I have the option of exporting data from one table to another, or of exporting a whole table to another database such as SQL Server or any other database.

    There are several options, each with different advantages and disadvantages. When both source and target are Oracle:
    1. you can use database links across databases (a small sketch follows at the end of this reply) -
    http://psoug.org/reference/db_link.html
    2. you can unload to external tables after 10g and load from external table -
    http://tonguc.wordpress.com/2007/08/09/unload-data-with-external-tables-and-data-pump/
    3. you can use data pump(expdp/impdp) after 10g -
    http://psoug.org/reference/datapump.html
    http://psoug.org/reference/dbms_datapump.html
    4. you can use traditional export(exp) and import(imp) -
    http://psoug.org/reference/export.html
    http://psoug.org/reference/import.html
    5. you can unload to text with an unloader and use sql*loader to load -
    http://tonguc.wordpress.com/2007/09/02/announcement-of-a-new-product-ubsql-from-ubtools/
    http://asktom.oracle.com/tkyte/flat/
    http://psoug.org/reference/sqlloader.html
    For some options you may get the meta information (all table-related DDL) with the supplied package DBMS_METADATA;
    http://psoug.org/reference/dbms_metadata.html
    When source is Oracle 10g and target is non-Oracle you may unload to text from Oracle and use the related text loader utility supplied with the other vendor.
    Also Heterogeneous Connectivity is another option between Oracle and non-Oracle systems;
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14232/toc.htm
    If you need more information, please visit Oracle's documentation for your release and search for the topic you are interested in: http://tahiti.oracle.com
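    As mentioned under option 1 above, here is a minimal sketch of a database link plus a cross-database copy, assuming the target is reachable through a TNS alias; the link name, credentials, alias (REMOTEDB) and table name MY_TABLE are illustrative placeholders, not values from this thread, and the same table is assumed to exist on both sides:
    CREATE DATABASE LINK remote_link
      CONNECT TO scott IDENTIFIED BY tiger
      USING 'REMOTEDB';
    -- Copy the rows of the remote table into the local table of the same name.
    INSERT INTO my_table
      SELECT * FROM my_table@remote_link;
    COMMIT;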

  • Returning Random Row based on Subset of Data within Table

    Hi,
    Please see below.  Running SQL Server 2008 R2.
    Sample DDL:
    CREATE TABLE [dbo].[TestPersons]
    (
    [TestPersonID] [int] IDENTITY(1,1) NOT NULL,
    [FirstName] [varchar](50) NULL,
    [LastName] [varchar](50) NULL,
    [AreaID] [varchar](50) NULL,
    CONSTRAINT [PK_TestPersons_TestPersonID] PRIMARY KEY CLUSTERED ([TestPersonID] ASC)
    WITH
    (
    PAD_INDEX = OFF,
    STATISTICS_NORECOMPUTE = OFF,
    IGNORE_DUP_KEY = OFF,
    ALLOW_ROW_LOCKS = ON,
    ALLOW_PAGE_LOCKS = ON
    ) ON [PRIMARY]
    ) ON [PRIMARY]
    GO
    Sample Data:
    INSERT INTO
    [dbo].[TestPersons] ([FirstName], [LastName], [AreaID])
    VALUES
    ('Carlos', 'Matlock', 'A0009'),
    ('William', 'Rivas', 'A0001'),
    ('Kathryn', 'Rice', 'A0008'),
    ('John', 'Ball', 'A0009'),
    ('Robert', 'Barnhill', 'A0009'),
    ('Timothy', 'Stein', 'A0009'),
    ('Christopher', 'Smith', 'A0001'),
    ('Brian', 'Speakman', 'A0001'),
    ('Harold', 'Clark', 'A0009'),
    ('Tim', 'Henson', 'A0009'),
    ('Victor', 'Chilson', 'A0009')
    The above insert statement is a small example of the data contained in this table.  Normally the table would contain several thousand rows.  We use the following query to replace actual data with random rows from our test table:
    UPDATE
    [P]
    SET
    [P].[FirstName] = [T].[FirstName],
    [P].[LastName] = [T].[LastName],
    [P].[AreaID] = [T].[AreaID]
    FROM
    [dbo].[Persons] [P]
    INNER LOOP JOIN
    [dbo].[TestPersons] [T] ON ([T].[TestPersonID] = (1 + ABS(CRYPT_GEN_RANDOM(8)%5000)))
    This query works in that it selects a random row from the entire set of data in the table. However, there are cases where we need to specify a restricted subset to randomize from. For example, we may need to randomize data only for Persons with an AreaID of A0001 or A0008. So in that case, and using the sample data above, we would want the randomization to select only from rows in TestPersons that have an AreaID of A0001 or A0008. How would I go about accomplishing this? I've tried adding a WHERE clause, but it seems it's ignored because of the INNER LOOP JOIN. I've also tried including [P].[AreaID] = [T].[AreaID] in the join hint, but to no avail.
    I also realize having sample data with only the set that we need would solve this problem but for our needs we need a large test set as our randomization requirements depend on the situation.
    Any assistance is greatly appreciated!
    Best Regards
    Brad

    DECLARE @TestPersons TABLE (TestPersonID int NOT NULL IDENTITY(1,1), FirstName varchar(50), LastName varchar(50), AreaID varchar(50))
    INSERT INTO @TestPersons (FirstName, LastName, AreaID)
    VALUES ('Carlos', 'Matlock', 'A0009'), ('William', 'Rivas', 'A0001'), ('Kathryn', 'Rice', 'A0008'), ('John', 'Ball', 'A0009'), ('Robert', 'Barnhill', 'A0009'), ('Timothy', 'Stein', 'A0009'),
    ('Christopher', 'Smith', 'A0001'), ('Brian', 'Speakman', 'A0001'), ('Harold', 'Clark', 'A0009'), ('Tim', 'Henson', 'A0009'), ('Victor', 'Chilson', 'A0009')
    ;WITH subset AS (
    SELECT ROW_NUMBER() OVER (ORDER BY TestPersonID) AS sID, *
    FROM @TestPersons
    WHERE FirstName LIKE '%e%'
    )
    SELECT *
    FROM subset
    WHERE sID = FLOOR((SELECT COUNT(*) FROM subset) * RAND()) + 1
    This will grab a random row from a subset (defined in the CTE).
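    Applied to the UPDATE from the question, a hedged sketch along the same lines might look like this: restrict TestPersons to the wanted areas first, number that subset, and draw the random value from the subset's own row count. Persons, TestPersons and the AreaID filter come from the question; the CONVERT to bigint is only there so the random varbinary can be used in the modulo arithmetic, and the LOOP JOIN hint is kept so the random expression is evaluated per row, as in the original query.
    ;WITH subset AS (
        -- only the rows we are allowed to randomize from
        SELECT ROW_NUMBER() OVER (ORDER BY TestPersonID) AS sID, *
        FROM dbo.TestPersons
        WHERE AreaID IN ('A0001', 'A0008')
    )
    UPDATE P SET
        P.FirstName = S.FirstName,
        P.LastName  = S.LastName,
        P.AreaID    = S.AreaID
    FROM dbo.Persons AS P
    INNER LOOP JOIN subset AS S
        ON S.sID = 1 + ABS(CONVERT(bigint, CRYPT_GEN_RANDOM(8)) % (SELECT COUNT(*) FROM subset));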

  • Deletion of data within the database tables

    The user is trying to clean up the 2014 data within the database tables. He is running a delete function which keeps causing the log files
    to exceed their limit. The tables are large and he is unable to delete the data in one command due to available size and logging. What is the best way to approach this?
    Thanks,

    Hi venkatesh1985,
    According to your description, the user fails to delete data in the tables due to the limited space of the log file. Based on my research, this issue can occur when you run the delete statement (DELETE FROM ExampleTable) in a single transaction and consume all available space on your transaction log disk.
    To avoid this issue, you could use the two methods below to delete the data.
    1. Use a loop combined with TOP and delete rows in smaller transactions, as in the following example; a variant filtered to the 2014 data is sketched after the link below. This method requires you to handle the tables one by one.
    SELECT 1  -- seed @@ROWCOUNT so the loop runs at least once
    WHILE @@ROWCOUNT > 0
    BEGIN
        DELETE TOP (1000)
        FROM LargeTable
    END
    For more information about the process, please refer to the article:
    http://dbadiaries.com/how-to-delete-millions-of-rows-using-t-sql-with-reduced-impact
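    Since the original problem is specifically about cleaning up the 2014 data, a hedged variant of that loop with a date filter might look like the one below; LargeTable and CreatedDate are placeholders for the real table and date column. Each 1000-row batch commits on its own, so log space can be reused between batches (under the simple recovery model, or with regular log backups under full recovery).
    SELECT 1
    WHILE @@ROWCOUNT > 0
    BEGIN
        DELETE TOP (1000)
        FROM LargeTable
        WHERE CreatedDate >= '20140101'   -- keep only the 2014 rows in scope
          AND CreatedDate <  '20150101'
    END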
    2. If you want to delete all the data from all tables in the specific database, you could script the entire database and all database objects. Then drop the database and recreate it using the script as the steps below.
    a. In Object Explorer, expand the node for the instance containing the database to be scripted.
    b. Point to Tasks, and then click Generate Scripts and click Next.
    c. Select the option of 'Script the entire database and all database objects'.
    d. Specify how scripts should be saved. You could save the script to a file or new query window. Click Next, then click ok.
    e. Drop the database, and run the script in the query window to recreate it. For more information, please refer to the article:
    http://msdn.microsoft.com/en-us/library/bb895179.aspx#Introduction
    In addition, if possible, please increase the size of the log file or move the log file to a different disk with more disk space.
    Regards,
    Michelle Li

  • Copying data from KeyFigures to database table in APO

    Hi Experts,
    Is there any way to copy the key figure values to APO database table?
    Can we use any macro function to satisfy this requirement?
    Is there any function module/BAPI which might be helpful to copy the KF data to DB table within APO?
    Thanks for looking into it..
    Rgds/
    Jay

    In the standard macro environment, there is a simple database table where you can store and retrieve data from, table /SAPAPO/ADV_SERI, with four keys and almost as many numbers as you want (combining some predefined column space and an iteration field). It only works for numbers.
    You can use TS_SET to save data and TS_GET to read it back; please check the macro online help for the complete syntax.
    Hope this helps,
    Pablo

  • Export data from database table before database migration

    Hello,
    We are planning to migrate our SAP ERP 6 Ehp4/NW 7.01 system from Oracle 11.2 to IBM DB2 v9.7. During test migrations I have established that we spend a lot of time on one particular table (COEP). Because we don't have the possibility to archive this table before the migration, my idea is to export the data from previous years from this table to the file system (using an ABAP report), delete that data from the table before the migration, and then, after the migration, import it back into the database from the file system.
    Does anybody have any concerns or suggestions about this idea?
    Thank you for your answers
    Andrej

    Hello Andrej,
    I strongly recommend not doing this.
    I am not sure whether it could work technically at all.
    Even if it would work: in order to really save time, the export and the import would have to be "dirty" ones (meaning the system is operational and in production). With this there is a high risk of producing inconsistencies in this table, and you will most likely receive no support if something unforeseen happens and you end up with problems.
    Also, your approach (if it works at all) would have to be tested thoroughly by you, including protecting the table from any changes.
    I do not believe that this can save any effort compared to implementing advanced migration techniques like table splitting.
    On top of that, by not using the official migration procedures you run a high risk of ending up with an unsupported system.
    Hans-Juergen

  • Import data from Excel to a database table

    How do I insert data from Excel into a database table? Any suggestions?

    Hi,
    1. Save your file as CSV (say, with empno, ename, sal and deptno columns).
    2. Create an HTML page with a file browse item.
    3. Create a button.
    4. Create the following process to be executed when the button on your region is pressed.
    BEGIN
    DECLARE
    v_blob_data BLOB;
    v_blob_len NUMBER;
    v_position NUMBER;
    v_raw_chunk RAW(10000);
    v_char CHAR(1);
    c_chunk_len number := 1;
    v_line VARCHAR2 (32767) := NULL;
    v_data_array wwv_flow_global.vc_arr2;
    v_rows number;
    v_sr_no number := 1;
    v_rows_loaded NUMBER;
    BEGIN
    -- Read data from wwv_flow_files
    select blob_content into v_blob_data
    from wwv_flow_files
    where last_updated = (select max(last_updated) from wwv_flow_files where UPDATED_BY = :APP_USER)
    and id = (select max(id) from wwv_flow_files where updated_by = :APP_USER);
    v_blob_len := dbms_lob.getlength(v_blob_data);
    v_position := 1;
    -- Read and convert binary to char
    WHILE ( v_position <= v_blob_len ) LOOP
    v_raw_chunk := dbms_lob.substr(v_blob_data,c_chunk_len,v_position);
    v_char := chr(hex_to_decimal(rawtohex(v_raw_chunk)));
    v_line := v_line || v_char;
    v_position := v_position + c_chunk_len;
    -- When a whole line is retrieved
    IF v_char = CHR(10) THEN
    -- Convert comma to : to use wwv_flow_utilities
    v_line := REPLACE (v_line, ',', ':');
    -- Convert each column separated by : into array of data
    v_data_array := wwv_flow_utilities.string_to_table (v_line);
    --DELETE OLD DATA
    -- Insert data into target table
    IF v_sr_no >1 THEN
    EXECUTE IMMEDIATE 'INSERT INTO EMP(EMPNO, ENAME, SAL, DEPTNO)
    VALUES (:1, :2, :3, :4)'
    USING
    -- v_sr_no,
    v_data_array(1),
    v_data_array(2),
    v_data_array(3),
    v_data_array(4);
    END IF;
    -- Clear out
    v_line := NULL;
    v_sr_no := v_sr_no + 1;
    END IF;
    END LOOP;
    END;
    COMMIT;
    END;
    Hope this helps.
    Regards,
    Zahid

  • Data structure similar to database table

    I need a data structure that is essentially the same as a database table: a "table" that can be queried against and sorted.
    Currently, I am using an interface to abstract this "table" data structure, and implementing the interface via JDBC calls.
    Can someone point me in the right direction?
    In .NET there are the DataTable and DataSet data structures, which do exactly what I require. These objects essentially correspond to a database table and a database, respectively, and are used as an in-memory database.
    I have already tried using HSQL, but was wondering if there was a "better" way (i.e. more lightweight, intuitive, and easier to use).
    Thanks.

    Use an array and put your records in it.
    Efficient querying (for simple queries) requires you to build indices for each element you wish to query on. The type of index depends on the kind of search you want to perform. Exact matches can use hashes; sorting, prefix or postfix matching can be performed using various tree indices.
    Complex queries (i.e. those that query on multiple parameters) generally require a query optimiser to figure out how to use the indices you have in the most efficient manner possible (or at least as close to it as the optimizer can get).
    cheers
    matfud

  • How to view data in tables by selecting the synonym from the database objects tree

    I could not figure out how to view data in tables by selecting the synonym from the database objects navigation tree. I had to first choose the synonym, view the details of the synonym to determine the table name, and then select the table from the database objects tree. Is this the only way available?

    This functionality currently does not exist. I don't see it on the 1.1 statement of direction either, so perhaps someone from Oracle can give some insight as to when this could be expected.
    Eric

  • Populate dates within the date range columns in a table

    Hi ,
    I have a Students table with StudentID, StartDate and EndDate. For one of the requirements, I need to populate all the dates within and including the StartDate and EndDate for all the students. Please find below the DDL for the source table, sample data for the source table columns, and how the output should look.
    create table #Students (ID int,startdate date,Enddate date)
    insert into #Students values (1000, '2014-01-01', '2014-01-10')
    insert into #Students values (1000, '2014-02-22', '2014-02-28')
    insert into #Students values (1001, '2013-07-01', '2013-07-12')
    insert into #Students values (1001, '2013-08-01', '2013-08-03')
    insert into #Students values (1001, '2014-04-01', '2014-04-05')
    --select * from #Students order by id,startdate
    --drop table #students
    Thanks in advance  for your help!

    Hi vskindia,
    A recursive way to achieve the expected output.
    create table #Students (ID int,startdate date,Enddate date)
    insert into #Students values (1000, '2014-01-01', '2014-01-10')
    insert into #Students values (1000, '2014-02-22', '2014-02-28')
    insert into #Students values (1001, '2013-07-01', '2013-07-12')
    insert into #Students values (1001, '2013-08-01', '2013-08-03')
    insert into #Students values (1001, '2014-04-01', '2014-04-05')
    ;WITH cte AS
    (
    SELECT ID,startdate,Enddate FROM #Students
    UNION ALL
    SELECT ID,DATEADD(DAY,1,startdate) AS startdate,Enddate FROM cte
    WHERE startdate<Enddate
    )
    SELECT id,startdate FROM cte ORDER BY ID,startdate
    OPTION (MAXRECURSION 0) -- lift the default limit of 100 recursions for longer date ranges
    If you have any question, feel free to let me know.
    Eric Zhang
    TechNet Community Support

  • How to delete data from the SE11 database table

    Hi Friends,
    Need some urgent help.
    While uploading data to infotype 6 (Addresses), subtype 4 (emergency address), I uploaded the data twice and it has now created a duplicate record in table PA0006.
    I did this on the quality server. Will this affect the user acceptance testing (UAT) in any way, and how can I delete this data from table PA0006?
    Please help me urgently.
    Bye
    Naveen

    Hi Naveen,
    1. Check what the time constraint for the address infotype is.
    2. If it is 2, it would have delimited the record; if it is 3, it would have created another record; if it is 1, there will be no problem, as it would have deleted the record, provided you have taken the dates into consideration.
    3. It will not be a problem for UAT. If the record is not required, do a SCAT for deletion of records.
    Regards
    Gajalakshmi

  • Is there any easy way to compare LIKE Addresses from one table, which contains 3rd party data, to another table, our database source

    We have a 3rd party that is supplying us data, and we need to compare the addressing in the 3rd party data to our source database addressing. I'd like to make it somewhat flexible, meaning I'd like to somehow use a LIKE comparison rather than comparing the exact address values. (I have noticed that the 3rd party addressing sometimes has a leading <space> at the beginning of the address...which is why I'd prefer to use LIKE.)
    Is there any easy way to do this? Or does this dictate using a CURSOR, processing through the CURSOR of 3rd party data and plugging the address into a LIKE as dynamic SQL?
    Please let me know your thoughts on this; I appreciate your review and am hopeful for a reply.

    Yes, it's possible, and there are a variety of options, but it may not be for the faint of heart.
    The last time I did it, I ended up taking several passes at the data.
    1st pass was a straight-up comparison with no modifications or functions.
    2nd pass was the same but with all special characters removed.
    3rd pass involved splitting off the numeric portion of the address and comparing just the street numbers, and used a double metaphone function (kind of like Soundex on steroids) to compare the street names.
    Jason Long
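    As an illustration only (not part of the reply above), a rough T-SQL sketch of the first two passes could look like the following; ThirdPartyAddress(Address) and SourceAddress(Address) are hypothetical table and column names standing in for the real ones:
    -- Pass 1: straight comparison, ignoring leading/trailing spaces
    SELECT t.Address AS ThirdPartyAddress, s.Address AS SourceAddress
    FROM ThirdPartyAddress AS t
    JOIN SourceAddress AS s
      ON LTRIM(RTRIM(t.Address)) = LTRIM(RTRIM(s.Address));
    -- Pass 2: same comparison with a few common special characters removed as well
    SELECT t.Address AS ThirdPartyAddress, s.Address AS SourceAddress
    FROM ThirdPartyAddress AS t
    JOIN SourceAddress AS s
      ON REPLACE(REPLACE(REPLACE(LTRIM(RTRIM(t.Address)), '.', ''), ',', ''), '-', '')
       = REPLACE(REPLACE(REPLACE(LTRIM(RTRIM(s.Address)), '.', ''), ',', ''), '-', '');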

  • Unable to access the data and table fields from handheld

    Hi,
    I've created a Testing.sdf file on the local PC using SQL Server Management Studio, creating a table and fields and inserting some data. On the local PC I can access the data as normal. The problem is that after I moved the file to the handheld device I cannot access the data within the table; it shows the error 'Failed to retrieve data for this request. (Microsoft.SqlServer.SmoEnum)'. I tried to Google it, but still got no solution.
    Thanks,

    'Name Space
    Imports System.Data.SqlServerCe
    'String Connection
    'Data Source = D:\SKUDWN3 .sdf'
    Public Sub CreateDB(ByVal StrConn As String)
            'Declaration
            Dim cn As SqlCeConnection = Nothing
            Dim cm As SqlCeCommand = Nothing
            Dim SQLEngine As SqlCeEngine = Nothing
            Dim rs As SqlCeResultSet = Nothing
            Dim rec As SqlServerCe.SqlCeUpdatableRecord = Nothing
            'Tables -
            Const TB_SKUDWN3 As String = "SKUDWN3"
            'Fields TB_SKUDWN3
            Const FL_SKUDWN3_UPC As String = "UPC"
            Const FL_SKUDWN3_SKU As String = "SKU"
            Const FL_SKUDWN3_LD As String = "LD"
            Const FL_SKUDWN3_SD As String = "SD"
            Const FL_SKUDWN3_AN As String = "AN"
            Const FL_SKUDWN3_Price As String = "Price"
            Const FL_SKUDWN3_GST_FLAG As String = "GSTFLAG"
            'Create Database
            SQLEngine = New SqlCeEngine(StrConn)
            SQLEngine.CreateDatabase()
            SQLEngine.Dispose()
            'Open Connection
            If IsNothing(cn) Then cn = New SqlCeConnection(StrConn)
            If cn.State = Data.ConnectionState.Closed Then cn.Open()
            cm = cn.CreateCommand
            'Create Table, Fields
            cm.CommandText = "CREATE TABLE " & TB_SKUDWN3 & " (" & FL_SKUDWN3_UPC & " NVARCHAR (13)," & _
                " " & FL_SKUDWN3_SKU & " NVARCHAR (9), " & FL_SKUDWN3_LD & " NVARCHAR(30)," & _
                " " & FL_SKUDWN3_SD & " NVARCHAR (18), " & FL_SKUDWN3_AN & " NVARCHAR(15), " & _
                " " & FL_SKUDWN3_Price & " NVARCHAR (10), " & FL_SKUDWN3_GST_FLAG & " BIT)"
            cm.ExecuteNonQuery()
            'Close Connection
            cm = Nothing
            If Not IsNothing(cn) Then
                If cn.State = ConnectionState.Open Then cn.Close()
                cn.Close()
                cn.Dispose()
            End If
        End Sub
    'The DB was successfully created, but after moving to Handheld the fields of table can't be accessed
