Performance Degradation Due to a Change in the Schema

Hi guys,
Initially I had an admin user, ad1, and an application user, ap1. The ad1 user owned the data, created synonyms, and granted DML access on them to ap1. But yesterday we changed the design: instead of one admin user we now have four admin users (ad1, ad2, ad3, ad4) and segregated the objects across all of them. All of a sudden, one scenario in the application that downloads data has started taking a very long time.
Could anyone give any clues as to what the reason for this might be?
Any suggestions are appreciated.
Thanks in advance.

Thanks for the reply.
No, there is no union. Let me take an example:
There is a table, Bank, owned by ad1. We created a synonym named bank in ap1, granting only DML access on the table to ap1.
Second thing: we created the new users. Do we also need to gather statistics for them?
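For context, a minimal sketch of the setup described above (object and user names come from the posts; the exact privilege list is an assumption):

-- As ad1: give the application user DML access on the table
GRANT SELECT, INSERT, UPDATE, DELETE ON ad1.bank TO ap1;
-- As a DBA: let ap1 reference the table simply as "bank"
CREATE SYNONYM ap1.bank FOR ad1.bank;
-- After redistributing objects to ad2..ad4, gather statistics on each new
-- owning schema so the optimizer is not working from missing or stale stats
EXEC DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'AD2');

Missing or stale statistics on the reorganised schemas would explain a sudden plan change, and with it a single query slowing down, so this is one of the first things to rule out.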

Similar Messages

  • HT5834 I received a message saying I must change my iCloud Keychain pass code due to changes in the server. Is this correct? Or is it a scam?

    I received a message saying I must change my iCloud Keychain pass code due to changes in the server. Is this a hoax?

    Dang! And if I already did it before thinking it might be a scam, what can I do about it at this point? I did just reset my password again after realizing it might be a scam, but is that sufficient, and is there anything else I should look into to see if any damage was already done? Thanks.

  • Will Performance degrade due to Column Level Security

    Hi All,
    I have a report with 40 columns, of which more than 20 are restricted for many users on the dashboards.
    This security is controlled by assigning permissions to those columns in the RPD presentation layer,
    and by setting PROJECT_INACCESSIBLE_COLUMNS_AS_NULL to YES in NQSConfig.ini.
    Will the performance of reports degrade due to this type of design?
    Is there any solid evidence?
    Thanks
    Kaushik
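    For reference, a sketch of how the setting above appears in NQSConfig.ini (the exact placement within the file can vary by release):

    # Return columns the user cannot access as NULL instead of raising an error
    PROJECT_INACCESSIBLE_COLUMNS_AS_NULL = YES;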

    Hi,
    I don't see any performance hindrance because of column-level security.
    But remember that in a pivot table you can still see the restricted column, just without values; that is a bug. The design works fine for table views.
    Hope this helped/answered.
    Regards
    MuRam

  • Performance degradation due to table fragmentation

    Dear all,
    We use a table in Oracle to store session IDs for various web applications. This is a very busy table, because several rows are inserted and updated almost every second by the web applications. Apparently the disk space containing this table becomes fragmented as a result, leading to poor performance in our web applications. Whenever this table is freshly rebuilt, the performance of our web applications returns to its normal level.
    Can someone kindly advise whether this is normal behaviour for highly fragmented tables using ASM (Automatic Storage Management)? Should the performance of applications degrade if tables are fragmented? Also, is there a better solution than rebuilding the table every month?
    We use Oracle 10.1.0.4 with Real Application Clusters. Our storage system is based on Automatic Storage Management (ASM).
    Thanks and regards

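    One alternative to the monthly rebuild, sketched under the assumption that the table is in an ASSM tablespace and using web_sessions as a placeholder name:

    -- Online segment shrink (Oracle 10g and later): compacts rows and
    -- resets the high-water mark without the downtime of a full rebuild
    ALTER TABLE web_sessions ENABLE ROW MOVEMENT;
    ALTER TABLE web_sessions SHRINK SPACE CASCADE;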

  • I just received my new phone from Verizon, I turned it on and followed direction for set up...now I lost my connection with ATT....How do I get the service back from ATT...not due to change until the 16th

    I just received my new 6 from Verizon by mail. I was not due to change carriers until the 16th, but I turned on the new phone and did the set-up... now I have lost ATT. How do I get the ATT carrier back on my old phone?

  • Performance degradation due to workflow

    Is there a possibility of performance degradation due to the presence of many workflows in CRM On Demand?

  • Error in loading due to change in the length of the Infoobject in BW

    Hi Experts,
    I have a master data object. I changed the length of one of its attributes from CHAR-1 to CHAR-3.
    Now I want to load data into this master data. The DataSource is a non-SAP source.
    When I try to load the data by scheduling the InfoPackage, the monitor screen shows the following error message:
    "If the source system is a Client Workstation, then it is possible that the file that you wanted to load was being edited at the time of the data request. Make sure that the file is in the specified directory, that it is not being processed at the moment, and restart the request."
    Kindly tell me what I need to do in order to load the data successfully.
    Regards,
    Pavan Raj
    Edited by: PavanRaj_S on May 7, 2010 7:36 AM

    Hi all,
    It is now loading a few records, but it is giving the errors
    "Data records for package 1 selected in PSA - 1 error(s)" and
    "Error: 4 in the update".
    Can anyone please explain this?
    Regards,
    Pavan Raj

  • How to change the Schema Location ??

    Hi,
    I need to change the schema name that was registered through the Deployment Manager. OWB allows all other parameters to be changed except the schema name. Is there any workaround?

    On your mapping, right-click and choose Configure (the Configuration Properties window), open the "Sources and Targets" tab, select the table whose schema you want to change, and in the Schema field enter the schema to be used.
    Hope it helps!
    Regards,
    Vitor

  • Performance Degradation - High Fetches and Parses

    Hello,
    My analysis of a particular job's trace file drew my attention to:
    1) A high rate of parses rather than bind-variable reuse.
    2) High fetch counts with a low number of rows processed.
    Please let me know how this performance degradation can be minimised. Perhaps the high number of SQL*Net client wait events is due to the many fetches and round trips with the client.
    EXPLAIN PLAN FOR SELECT /*+ FIRST_ROWS (1)  */ * FROM  SAPNXP.INOB
    WHERE MANDT = :A0
    AND KLART = :A1
    AND OBTAB = :A2
    AND OBJEK LIKE :A3 AND ROWNUM <= :A4;
    call     count       cpu    elapsed       disk      query    current        rows
    Parse      119      0.00       0.00          0          0          0           0
    Execute    239      0.16       0.13          0          0          0           0
    Fetch      239   2069.31    2127.88          0   13738804          0           0
    total      597   2069.47    2128.01          0   13738804          0           0
    PLAN_TABLE_OUTPUT
    Plan hash value: 1235313998
    | Id  | Operation                    | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT             |        |     2 |   268 |     1   (0)| 00:00:01 |
    |*  1 |  COUNT STOPKEY               |        |       |       |            |          |
    |*  2 |   TABLE ACCESS BY INDEX ROWID| INOB   |     2 |   268 |     1   (0)| 00:00:01 |
    |*  3 |    INDEX SKIP SCAN           | INOB~2 |  7514 |       |     1   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - filter(ROWNUM<=TO_NUMBER(:A4))
       2 - filter("OBJEK" LIKE :A3 AND "KLART"=:A1)
       3 - access("MANDT"=:A0 AND "OBTAB"=:A2)
           filter("OBTAB"=:A2)
    18 rows selected.
    SQL> SELECT INDEX_NAME,TABLE_NAME,COLUMN_NAME FROM DBA_IND_COLUMNS WHERE INDEX_OWNER='SAPNXP' AND INDEX_NAME='INOB~2';
    INDEX_NAME      TABLE_NAME                     COLUMN_NAME
    INOB~2          INOB                           MANDT
    INOB~2          INOB                           CLINT
    INOB~2          INOB                           OBTAB
    Is it possible to maximise the number of rows per fetch?
    call     count       cpu    elapsed       disk      query    current        rows
    Parse      163      0.03       0.00          0          0          0           0
    Execute    163      0.01       0.03          0          0          0           0
    Fetch   174899     55.26      59.14          0    1387649          0     4718932
    total   175225     55.30      59.19          0    1387649          0     4718932
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 27
    Rows     Row Source Operation
      28952  TABLE ACCESS BY INDEX ROWID EDIDC (cr=8505 pr=0 pw=0 time=202797 us)
      28952   INDEX RANGE SCAN EDIDC~1 (cr=1457 pr=0 pw=0 time=29112 us)(object id 202995)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      SQL*Net message to client                  174899        0.00          0.16
      SQL*Net more data to client                155767        0.01          5.69
      SQL*Net message from client                174899        0.11        208.21
      latch: cache buffers chains                     2        0.00          0.00
      latch free                                      4        0.00          0.00
    ********************************************************************************

    user4566776 wrote:
    My analysis on a particular job trace file drew my attention towards:
    1) High rate of Parses instead of Bind variables usage.
    But if you look at the text you are using bind variables.
    The first query is executed 239 times - which matches the 239 fetches. You cut off some of the useful information from the tkprof output, but the figures show that you're executing more than once per parse call. The time is CPU time spent using a bad execution plan to find no data -- this looks like a bad choice of index, possibly a side effect of the first_rows(1) hint.
    2) High fetches and poor number/ low number of rows being processed
    The second query is doing a lot of fetches because in 163 executions it is fetching 4.7 million rows at roughly 25 rows per fetch. You might improve performance a little by increasing the array fetch size - but probably not by more than a factor of 2.
    You'll notice that even though you record 163 parse calls for the second statement the number of " Misses in library cache during parse" is zero - so the parse calls are pretty irrelevant, the cursor is being re-used.
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk
    To post code, statspack/AWR report, execution plans or trace files, start and end the section with the tag {noformat}{noformat} (lowercase, curly brackets, no spaces) so that the text appears in fixed format.
    "Science is more than a body of knowledge; it is a way of thinking"
    Carl Sagan
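    As a concrete illustration of the array-fetch suggestion above (a sketch assuming a SQL*Plus client; other clients expose an equivalent fetch or array size setting):

    -- ARRAYSIZE controls how many rows come back per fetch round trip
    -- (default 15, maximum 5000); larger values cut SQL*Net round trips
    SET ARRAYSIZE 500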

  • Big Performance Degradation in LabVIEW 2012

    Hi all,
    I was expecting a performance increase upgrading to LV2012, as usual; unfortunately, performance seems to have degraded by more than 50% in a simple benchmark I created for the purpose.
    For the time being, I will stick with LV2011 due to this.
    LV2011 running on a sbRIO9606 (steady around 3800 dereferencing/referencings per second):
    LV2012 on the same computer (at about 1300 dereferencing/referencings per second):
    Any takes on the issue? 
    Source attached.
    Br,
    /Roger
    Attachments:
    PerformanceLV2011.zip (406 KB)

    Ben wrote:
    RogerIsaksson wrote:
    "Check for button presses 4 billion times a second".
    So what? It's a benchmark program. It's not intended for any practical use besides showing off the performance degradation that I experience in the newer version of labview.
    "race conditions between locals being a prime example"
    Did you cut'n paste that nonsense from the internetz?
    "Maybe those loops get higher priority in the new compiler"
    What does priority have to do with execution performance?
    You are clearly not understanding the issue here.
    Br,
    /Roger
    Could you please post images of the benchmarking code?
    The machine I use for the forums does not have a modern version of LV so I can only look at pictures.
    There is a chance I may be able to explain your observations.
    No promises!
    Curious,
    Ben
    I wonder why you don't have LV2011. (Oh, you have a personal copy back home?)
    The best solution is the one you find by yourself.

  • SQL Performance Degrades Severely in WAN

    The Oracle server is located in the central LAN and the client in a remote LAN. The two LANs are connected by a 10 Mbps wide-area link; each LAN runs at 100 Mbps internally. If the SQL commands are issued in the same LAN as the Oracle server, they run fast. However, the same commands issued from the remote LAN run severely slower, almost 10 times slower, even though they return only a few rows. My questions are: what is the reason for this performance degradation, and how can performance be improved for the remote client?
    The server is Oracle 8.1.7 with OPS, and the SQL commands are issued from PB programs on the remote client.
    Thanks very much.

    Thank you very much.
    I found another point which might contribute to the performance problem. The server's listener.ora is configured as follows:
    LISTENER =
    (DESCRIPTION_LIST =
    (DESCRIPTION =
    (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC))
    And the client's tnsnames.ora is configured as follows:
    EMIS02.HZJYJ.COM.CN =
    (DESCRIPTION =
    (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 172.26.17.18)(PORT = 1521))
    (CONNECT_DATA =
    (SERVICE_NAME = emis)
    It shows that the listener protocol is set to IPC, while the client is configured for TCP. Could there be network latency from a protocol conversion between IPC and TCP?
    Thanks a lot.
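    For comparison, a listener that accepts remote TCP connections normally lists a TCP endpoint alongside the IPC one. A sketch, reusing the host and port from the tnsnames.ora above (the pasted listener.ora may simply have been truncated):

    LISTENER =
      (DESCRIPTION_LIST =
        (DESCRIPTION =
          (ADDRESS_LIST =
            (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC))
            (ADDRESS = (PROTOCOL = TCP)(HOST = 172.26.17.18)(PORT = 1521))
          )
        )
      )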

  • When a table with a clustered columnstore index is partitioned, performance degrades if data is located in multiple partitions

    Hello,
    Below I provide complete code to reproduce the behavior I am observing.  You can run it in tempdb or any other database; it does not matter.  The test query provided at the top of the script is pretty silly, but I have observed the same performance degradation with about a dozen queries of varying complexity, so this is just the simplest one I use as an example here. Note that I also included approximate run times in the script comments (based on what I observed on my machine).  Here are the steps, with numbers corresponding to the numbers in the script:
    1. Run script from #1 to #7.  This will create the two test tables, populate them with records (40 mln. and 10 mln.) and build regular clustered indexes.
    2. Run test query (at the top of the script).  Here are the execution statistics:
    Table 'Main'. Scan count 5, logical reads 151435, physical reads 0, read-ahead reads 4, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Txns'. Scan count 5, logical reads 74155, physical reads 0, read-ahead reads 7, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Workfile'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
     SQL Server Execution Times:
       CPU time = 5514 ms, elapsed time = 1389 ms.
    3. Run script from #8 to #9. This will replace regular clustered indexes with columnstore clustered indexes.
    4. Run test query (at the top of the script).  Here are the execution statistics:
    Table 'Txns'. Scan count 4, logical reads 44563, physical reads 0, read-ahead reads 37186, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Main'. Scan count 4, logical reads 54850, physical reads 2, read-ahead reads 96862, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Workfile'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
     SQL Server Execution Times:
       CPU time = 828 ms, elapsed time = 392 ms.
    As you can see the query is clearly faster.  Yay for columnstore indexes!.. But let's continue.
    5. Run script from #10 to #12 (note that this might take some time to execute).  This will move about 80% of the data in both tables to a different partition.  You should be able to see that the data has been moved when running step #11.
    6. Run test query (at the top of the script).  Here are the execution statistics:
    Table 'Txns'. Scan count 4, logical reads 44563, physical reads 0, read-ahead reads 37186, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Main'. Scan count 4, logical reads 54817, physical reads 2, read-ahead reads 96862, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Workfile'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
     SQL Server Execution Times:
       CPU time = 8172 ms, elapsed time = 3119 ms.
    And now look: the I/O stats are the same as before, but the performance is the slowest of all our tries!
    I am not going to paste the execution plans or the detailed properties of each operator here.  They show up as expected: columnstore index scan, parallel/partitioned = true, and both the estimated and actual number of rows are lower than during the second run (when all of the data resided in the same partition).
    So the question is: why is it slower?
    Thank you for any help!
    Here is the code to re-produce this:
    --==> Test Query - begin --<===
    DBCC DROPCLEANBUFFERS
    DBCC FREEPROCCACHE
    SET STATISTICS IO ON
    SET STATISTICS TIME ON
    SELECT COUNT(1)
    FROM Txns AS z WITH(NOLOCK)
    LEFT JOIN Main AS mmm WITH(NOLOCK) ON mmm.ColBatchID = 70 AND z.TxnID = mmm.TxnID AND mmm.RecordStatus = 1
    WHERE z.RecordStatus = 1
    --==> Test Query - end --<===
    --===========================================================
    --1. Clean-up
    IF OBJECT_ID('Txns') IS NOT NULL DROP TABLE Txns
    IF OBJECT_ID('Main') IS NOT NULL DROP TABLE Main
    IF EXISTS (SELECT 1 FROM sys.partition_schemes WHERE name = 'PS_Scheme') DROP PARTITION SCHEME PS_Scheme
    IF EXISTS (SELECT 1 FROM sys.partition_functions WHERE name = 'PF_Func') DROP PARTITION FUNCTION PF_Func
    --2. Create partition function
    CREATE PARTITION FUNCTION PF_Func(tinyint) AS RANGE LEFT FOR VALUES (1, 2, 3)
    --3. Partition scheme
    CREATE PARTITION SCHEME PS_Scheme AS PARTITION PF_Func ALL TO ([PRIMARY])
    --4. Create Main table
    CREATE TABLE dbo.Main(
    SetID int NOT NULL,
    SubSetID int NOT NULL,
    TxnID int NOT NULL,
    ColBatchID int NOT NULL,
    ColMadeId int NOT NULL,
    RecordStatus tinyint NOT NULL DEFAULT ((1))
    ) ON PS_Scheme(RecordStatus)
    --5. Create Txns table
    CREATE TABLE dbo.Txns(
    TxnID int IDENTITY(1,1) NOT NULL,
    GroupID int NULL,
    SiteID int NULL,
    Period datetime NULL,
    Amount money NULL,
    CreateDate datetime NULL,
    Descr varchar(50) NULL,
    RecordStatus tinyint NOT NULL DEFAULT ((1))
    ) ON PS_Scheme(RecordStatus)
    --6. Populate data (credit to Jeff Moden: http://www.sqlservercentral.com/articles/Data+Generation/87901/)
    -- 40 mln. rows - approx. 4 min
    --6.1 Populate Main table
    DECLARE @NumberOfRows INT = 40000000
    INSERT INTO Main (
    SetID,
    SubSetID,
    TxnID,
    ColBatchID,
    ColMadeID,
    RecordStatus)
    SELECT TOP (@NumberOfRows)
    SetID = ABS(CHECKSUM(NEWID())) % 500 + 1, -- ABS(CHECKSUM(NEWID())) % @Range + @StartValue,
    SubSetID = ABS(CHECKSUM(NEWID())) % 3 + 1,
    TxnID = ABS(CHECKSUM(NEWID())) % 1000000 + 1,
    ColBatchId = ABS(CHECKSUM(NEWID())) % 100 + 1,
    ColMadeID = ABS(CHECKSUM(NEWID())) % 500000 + 1,
    RecordStatus = 1
    FROM sys.all_columns ac1
    CROSS JOIN sys.all_columns ac2
    --6.2 Populate Txns table
    -- 10 mln. rows - approx. 1 min
    SET @NumberOfRows = 10000000
    INSERT INTO Txns (
    GroupID,
    SiteID,
    Period,
    Amount,
    CreateDate,
    Descr,
    RecordStatus)
    SELECT TOP (@NumberOfRows)
    GroupID = ABS(CHECKSUM(NEWID())) % 5 + 1, -- ABS(CHECKSUM(NEWID())) % @Range + @StartValue,
    SiteID = ABS(CHECKSUM(NEWID())) % 56 + 1,
    Period = DATEADD(dd,ABS(CHECKSUM(NEWID())) % 365, '05-04-2012'), -- DATEADD(dd,ABS(CHECKSUM(NEWID())) % @Days, @StartDate)
    Amount = CAST(RAND(CHECKSUM(NEWID())) * 250000 + 1 AS MONEY),
    CreateDate = DATEADD(dd,ABS(CHECKSUM(NEWID())) % 365, '05-04-2012'),
    Descr = REPLICATE(CHAR(65 + ABS(CHECKSUM(NEWID())) % 26), ABS(CHECKSUM(NEWID())) % 20),
    RecordStatus = 1
    FROM sys.all_columns ac1
    CROSS JOIN sys.all_columns ac2
    --7. Add PK's
    -- 1 min
    ALTER TABLE Txns ADD CONSTRAINT PK_Txns PRIMARY KEY CLUSTERED (RecordStatus ASC, TxnID ASC) ON PS_Scheme(RecordStatus)
    CREATE CLUSTERED INDEX CDX_Main ON Main(RecordStatus ASC, SetID ASC, SubSetId ASC, TxnID ASC) ON PS_Scheme(RecordStatus)
    --==> Run test Query --<===
    --===========================================================
    -- Replace regular indexes with clustered columnstore indexes
    --===========================================================
    --8. Drop existing indexes
    ALTER TABLE Txns DROP CONSTRAINT PK_Txns
    DROP INDEX Main.CDX_Main
    --9. Create clustered columnstore indexes (on partition scheme!)
    -- 1 min
    CREATE CLUSTERED COLUMNSTORE INDEX PK_Txns ON Txns ON PS_Scheme(RecordStatus)
    CREATE CLUSTERED COLUMNSTORE INDEX CDX_Main ON Main ON PS_Scheme(RecordStatus)
    --==> Run test Query --<===
    --===========================================================
    -- Move about 80% the data into a different partition
    --===========================================================
    --10. Update "RecordStatus", so that data is moved to a different partition
    -- 14 min (32002557 row(s) affected)
    UPDATE Main
    SET RecordStatus = 2
    WHERE TxnID < 800000 -- range of values is from 1 to 1 mln.
    -- 4.5 min (7999999 row(s) affected)
    UPDATE Txns
    SET RecordStatus = 2
    WHERE TxnID < 8000000 -- range of values is from 1 to 10 mln.
    --11. Check data distribution
    SELECT
    OBJECT_NAME(SI.object_id) AS PartitionedTable
    , DS.name AS PartitionScheme
    , SI.name AS IdxName
    , SI.index_id
    , SP.partition_number
    , SP.rows
    FROM sys.indexes AS SI WITH (NOLOCK)
    JOIN sys.data_spaces AS DS WITH (NOLOCK)
    ON DS.data_space_id = SI.data_space_id
    JOIN sys.partitions AS SP WITH (NOLOCK)
    ON SP.object_id = SI.object_id
    AND SP.index_id = SI.index_id
    WHERE DS.type = 'PS'
    AND OBJECT_NAME(SI.object_id) IN ('Main', 'Txns')
    ORDER BY 1, 2, 3, 4, 5;
    PartitionedTable  PartitionScheme  IdxName   index_id  partition_number  rows
    Main              PS_Scheme        CDX_Main  1         1                 7997443
    Main              PS_Scheme        CDX_Main  1         2                 32002557
    Main              PS_Scheme        CDX_Main  1         3                 0
    Main              PS_Scheme        CDX_Main  1         4                 0
    Txns              PS_Scheme        PK_Txns   1         1                 2000001
    Txns              PS_Scheme        PK_Txns   1         2                 7999999
    Txns              PS_Scheme        PK_Txns   1         3                 0
    Txns              PS_Scheme        PK_Txns   1         4                 0
    --12. Update statistics
    EXEC sys.sp_updatestats
    --==> Run test Query --<===

    Hello Michael,
    I just simulated the situation and got the same results as in your description. However, I did one more test - I rebuilt the two columnstore indexes after the update (and test run). I got the following details:
    Table 'Txns'. Scan count 8, logical reads 12922, physical reads 1, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Main'. Scan count 8, logical reads 57042, physical reads 1, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Workfile'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    SQL Server Execution Times:
    CPU time = 251 ms, elapsed time = 128 ms.
    As an explanation of the behavior: because an UPDATE against a clustered columnstore index is executed as a DELETE plus an INSERT, you ended up with the original row groups having almost all of their rows marked as deleted, plus roughly the same number of new row groups holding the new data (coming from the update). I suppose scanning the deleted bitmap caused the additional slowness on your end, or something related to that "fragmentation".
    Ivan Donev MCITP SQL Server 2008 DBA, DB Developer, BI Developer
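    For reference, the rebuild described above can be expressed with the index and table names from the script (a sketch of one way to do it; rebuilding individual partitions is also possible):

    -- Rebuilding a clustered columnstore index compacts away the rows
    -- flagged in the deleted bitmap and recreates the row groups
    ALTER INDEX CDX_Main ON Main REBUILD;
    ALTER INDEX PK_Txns ON Txns REBUILD;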

  • The system copy with SAPinst has changed the schema in Oracle

    Hi,
    I have done a homogeneous copy of a system, and after verifying everything was correct I saw that the name of the schema and the name of the tablespace had changed.
    The system is a PI NW2004s.
    Where the original schema is SAPSR3DB, in the target system it is SAPSR4DB, and all the tables are there.
    The tablespace SAPSR3DB in the target system is empty, and the copy has created a tablespace SAPSR4DB.
    Are these changes normal?
    Thanks a lot

    Hi,
    Thanks a lot for the help. I have done a homogeneous system copy with Oracle 10g and NW2004s.
    Here is the information for the original system:
    USERNAME                          USER_ID CREATED
    OPS$ORAGXI                             33 31-JAN-08
    SAPSR3DB                               31 31-JAN-08
    OPS$SAPSERVICEGXI                      30 31-JAN-08
    SAPSR3                                 27 31-JAN-08
    OPS$GXIADM                             26 31-JAN-08
    OPS$SR3ADM                             32 31-JAN-08
    DBSNMP                                 24 31-JAN-08
    TSMSYS                                 21 31-JAN-08
    DIP                                    19 31-JAN-08
    OUTLN                                  11 31-JAN-08
    SYSTEM                                  5 31-JAN-08
    USERNAME                          USER_ID CREATED
    SYS                                     0 31-JAN-08
    And this is the information for the target system:
    USERNAME                          USER_ID CREATED
    SAPSR4DB                               35 18-FEB-08
    SAPSR3DB                               31 18-FEB-08
    OPS$SAPSERVICEGXD                      34 18-FEB-08
    SAPSR3                                 33 18-FEB-08
    OPS$GXDADM                             32 18-FEB-08
    OPS$ORAGXD                             25 18-FEB-08
    DBSNMP                                 24 18-FEB-08
    TSMSYS                                 21 18-FEB-08
    DIP                                    19 18-FEB-08
    OUTLN                                  11 18-FEB-08
    SYSTEM                                  5 18-FEB-08
    USERNAME                          USER_ID CREATED
    SYS                                     0 18-FEB-08
    I have a new schema that contains all the tables of the original SAPSR3DB. The default tablespace of SAPSR4DB is SAPSR4DB (a new tablespace).
    I also get an error when executing, for example, a backup with BRTOOLS:
    BR0925I Public synonym SAP_SDBAH created successfully for table SAPSR4DB.SDBAH
    BR0925I Public synonym SAP_SDBAD created successfully for table SAPSR4DB.SDBAD
    BR0925I Public synonym SAP_DBSTATC created successfully for table SAPSR4DB.DBSTATC
    BR0925I Public synonym SAP_DBSTATTORA created successfully for table SAPSR4DB.DBSTATTORA
    BR0925I Public synonym SAP_DBSTATIORA created successfully for table SAPSR4DB.DBSTATIORA
    BR0925I Public synonym SAP_DBSTATHORA created successfully for table SAPSR4DB.DBSTATHORA
    BR0925I Public synonym SAP_DBSTAIHORA created successfully for table SAPSR4DB.DBSTAIHORA
    BR0925I Public synonym SAP_DBCHECKORA created successfully for table SAPSR4DB.DBCHECKORA
    BR0925I Public synonym SAP_DBMSGORA created successfully for table SAPSR4DB.DBMSGORA
    BR0280I BRBACKUP time stamp: 2008-03-15 00.00.21
    BR0319I Control file copy created: /oracle/GXD/sapbackup/cntrlGXD.dbf 12664832
    BR0280I BRBACKUP time stamp: 2008-03-15 00.00.21
    BR0301W SQL error -980 at location BrDbfInfoGet-32, SQL statement:
    'DELETE FROM SAP_SDBAH WHERE BEG > '10000000000000' AND BEG < '20070209000000''
    ORA-00980: synonym translation is no longer valid
    BR0280I BRBACKUP time stamp: 2008-03-15 00.00.21
    BR0301W SQL error -980 at location BrDbfInfoGet-33, SQL statement:
    'DELETE FROM SAP_SDBAD WHERE BEG > '10000000000000' AND BEG < '20070209000000''
    ORA-00980: synonym translation is no longer valid
    BR0280I BRBACKUP time stamp: 2008-03-15 00.00.21
    BR0301E SQL error -980 at location BrComprDurGet-1, SQL statement:
    'OPEN curs_11 CURSOR FOR'
    'SELECT FUNCT, POS, LINE FROM SAP_SDBAD WHERE BEG = '00000000000001' AND FUNCT IN ('CMP', '   ', '   ', 'DUR', 'DUL') ORDER BY FUNCT, POS'
    ORA-00980: synonym translation is no longer valid
    BR0314E Collection of information on database files failed
    BR0280I BRBACKUP time stamp: 2008-03-15 00.00.21
    BR0301W SQL error -980 at location BrbDbLogOpen-5, SQL statement:
    'INSERT INTO SAP_SDBAH (BEG, FUNCT, SYSID, OBJ, RC, ENDE, ACTID, LINE) VALUES ('20080315000018', 'anf', 'GXD', ' ', '9999', ' ', 'bdxmccba', '7.00 (31)')'
    ORA-00980: synonym translation is no longer valid
    BR0324W Insertion of database log header failed
    BR0056I End of database backup: bdxmccba.anf 2008-03-15 00.00.21
    BR0280I BRBACKUP time stamp: 2008-03-15 00.00.21
    BR0054I BRBACKUP terminated with errors
    BRTOOLS creates public synonyms for the tables of schema SAPSR4DB, but these tables are actually in schema SAPSR3.
    In one note, SAP explains that I performed an installation with an 'independent SCHEMA-ID' as mentioned in note 659509.
    I don't know this type of installation.
    Any idea?
    Thanks a lot
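    For what it's worth, ORA-00980 means a synonym points at an object that no longer resolves. A sketch of repointing one affected synonym at the schema that actually owns the table (assuming SAPSR3 is the real owner, as described above; the remaining SAP_* synonyms would need the same treatment):

    -- Recreate the public synonym against the owning schema so that
    -- BRTOOLS' SAP_SDBAH lookups resolve again
    CREATE OR REPLACE PUBLIC SYNONYM SAP_SDBAH FOR SAPSR3.SDBAH;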

  • Change the Schema in Sql 2005

    Hello Experts,
    We have refreshed our QAS system from the PRD (ABAP + Java) system. The schema on the system is MCOD (i.e. SID). Although the database has come up, in SAP we are not able to access the schema because it currently belongs to the PRD system.
    I have looked at OSS note 551915, but that script is valid for SQL 2000 and we have SQL 2005.
    I have also looked at a couple of scripts, but they have not worked in our scenario to convert the schema.
    Can anyone share the ALTER SCHEMA script to convert all the tables, both for SAPPRDDB and PRD?
    Please Help!
    Thanks,
    Antarpreet

    Hello Antarpreet,
    Please correct me if I am wrong: you have refreshed QAS from PRD, and now the schema owner of QAS is the SID of PRD, and you want to change this to the SID of QAS.
    In that case you will need to perform a schema conversion using the SQL tools mentioned in the note I gave you. I assure you your issues will get resolved; we have done this a lot in our setup without issues, and we also have schema systems.
    Note 551915 also says this: perform the schema upgrade using the SQL tools.
    Also, what makes you say that the scripts mentioned in the note are for SQL Server 2000? That is nowhere mentioned.
    But I strongly suggest you go with the SQL tools; even if it does no good to your system (which is the remotest possibility), it will not do any harm.
    Rohit
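    For reference, the raw T-SQL pattern the original post asks about looks like this on SQL Server 2005 (a sketch; 'prd' and 'qas' are placeholders for the actual schema names, and the SAP-supported route remains the schema-conversion tools mentioned above):

    -- Move a single table between schemas (SomeTable is a placeholder)
    ALTER SCHEMA qas TRANSFER prd.SomeTable;
    -- Generate one such statement for every table owned by the old schema
    SELECT 'ALTER SCHEMA qas TRANSFER prd.' + QUOTENAME(name) + ';'
    FROM sys.tables
    WHERE SCHEMA_NAME(schema_id) = 'prd';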

  • I have just requested my ipod nano 1st generation to be replaced using the scheme, i have entered the wrong postcode on the shipping of the replacement box, will it still come to my address? or how can i change it?

    I have just requested my ipod nano 1st generation to be replaced using the scheme, i have entered the wrong postcode on the shipping of the replacement box, will it still come to my address? or how can i change it?

    Call up AppleCare (08000480408 if you are in the UK) and ask them to change the postcode and to send another replacement packet.
