Query performance on same table with many DML operations

Hi all,
I have a table with 100 rows of data. Since creating it, I have inserted, deleted, and modified data many times.
A select statement now takes much more time after the DML operations than it did before (there is not much difference in the amount of data).
If I create the same table again with the same data and run the same select statement, it takes less time.
My question is: is there any command, like compress or re-indexing or something like that, to improve the performance without creating the table again?
Thanks in advance,
Pal

Try searching "rebuilding indexes" on http://asktom.oracle.com. You will get lots of hits and many lively discussions. Certainly Tom's opinion is that rebuilds are very rarely required.
As far as I know, Oracle has always re-used deleted rows in indexes as long as the new row belongs in that place in the index. The only situation I am aware of where deleted rows do not get re-used is where you have a monotonically increasing key (e.g. one generated by a sequence), and most, but not all, of the older rows are deleted over time.
For example, suppose you had a table like this, where seq_no is populated by a sequence and indexed:

seq_no         NUMBER
processed_flag VARCHAR2(1)
trans_date     DATE

and then did deletes like:

DELETE FROM t
WHERE processed_flag = 'Y' and
      trans_date <= ADD_MONTHS(sysdate, -24);

that deleted 99% of the rows in the time period that were processed, leaving only a few. Then the index leaf blocks would be very sparsely populated (i.e. lots of deleted rows in them), but since the current seq_no values are much larger than those old ones remaining, the space could not be re-used. Any leaf block that had all of its rows deleted would be re-used in another part of the index.
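To answer the original question: yes, the space can be reclaimed without recreating the table. A minimal sketch of the commands usually discussed (assuming an index named T_IDX on table T; as above, a rebuild is rarely actually needed):

-- merge adjacent sparse leaf blocks in place (cheaper than a rebuild)
ALTER INDEX t_idx COALESCE;

-- or rebuild the index completely
ALTER INDEX t_idx REBUILD;

-- to compact the table segment itself, move it; note this marks the
-- table's indexes UNUSABLE, so they must be rebuilt afterwards
ALTER TABLE t MOVE;
ALTER INDEX t_idx REBUILD;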
HTH
John

Similar Messages

  • SELECT query performance : One big table Vs many small tables

    Hello,
    We are using BDB 11g with SQLITE support. I have a question about 'select' query performance when we have one huge table vs. multiple small tables.
    Basically, in our application we need to run the select query multiple times, and today we have one huge table. Do you think breaking it into multiple small tables will help?
    For test purposes we tried creating multiple tables, but the performance of the 'select' query was more or less the same. Would that be because all tables map to a single database in the backend as key/value pairs, so a lookup (select query) on a small table or a big table makes no difference?
    Thanks.

    Hello,
    There is some information on this topic in the FAQ at:
    http://www.oracle.com/technology/products/berkeley-db/faq/db_faq.html#9-63
    If this does not address your question, please just let me know.
    Thanks,
    Sandra

  • How to structure SP with many DML operations

    Hi
    I have a stored procedure that does inserts, updates and deletes. My question is: should these be within one begin/end block, or should each of these DMLs have its own begin, end and exception block? What would be considered good practice?

    I'm with user553641 on this; keep things simple.
    I write my programs as small, simple procedures or functions, that generally do one thing. I name them appropriately, and that helps things to be self documenting.
    I group things together in a package, and there might be one or two main procedures that are exposed in the specification. Inside the package body, those main procedures will call the "sub" procedures (which in turn may call other procedures), etc.
    Doing this has several benefits:
    1. Testing becomes easier - you can test each bit of the code in isolation before running the code altogether - this is much, much easier than trying to test one massive "spaghetti" piece of code.
    2. Easier maintenance - it's easy to see what each procedure does, so if you need to make changes, you have a much better handle on what the potential impacts might be.
    3. Easier to see what the flow of the program is when you're writing it - you can say "I want to promote someone" - so you have a main proc called "promote_person" which in turn calls two procedures: "change_job_title" and "alter_salary" - you don't have to consider the alter_salary part whilst you're working on the change_job_title part, etc (this is called "top down programming")
    4. Promotes code reuse - "alter_salary" and "change_job_title" could be used elsewhere.
    Think of it like building a house - would you build a house by taking one huge, house-sized brick and carve everything out, or would you use lots of small bricks to build up the walls bit by bit?
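    A minimal sketch of that structure in PL/SQL (the emp table and its columns are made up for illustration):

    CREATE OR REPLACE PACKAGE emp_mgmt AS
      -- only the main entry point is exposed in the specification
      PROCEDURE promote_person(p_emp_id IN NUMBER,
                               p_title  IN VARCHAR2,
                               p_salary IN NUMBER);
    END emp_mgmt;
    /
    CREATE OR REPLACE PACKAGE BODY emp_mgmt AS
      -- "sub" procedures live only in the body, so they stay private
      PROCEDURE change_job_title(p_emp_id IN NUMBER, p_title IN VARCHAR2) IS
      BEGIN
        UPDATE emp SET job_title = p_title WHERE emp_id = p_emp_id;
      END change_job_title;

      PROCEDURE alter_salary(p_emp_id IN NUMBER, p_salary IN NUMBER) IS
      BEGIN
        UPDATE emp SET salary = p_salary WHERE emp_id = p_emp_id;
      END alter_salary;

      -- the main procedure just composes the small ones
      PROCEDURE promote_person(p_emp_id IN NUMBER,
                               p_title  IN VARCHAR2,
                               p_salary IN NUMBER) IS
      BEGIN
        change_job_title(p_emp_id, p_title);
        alter_salary(p_emp_id, p_salary);
      END promote_person;
    END emp_mgmt;
    /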

  • MaxDB: Table with many LONG fields does not allow an INSERT: ...?

    Hi,
    I have a table with many LONG fields (28). So far, everything works fine.
    However, if I add another LONG field (making 29 LONG fields), I cannot insert a dataset anymore.
    Does there exist a MaxDB parameter or anything else I can change to make inserts possible again?
    Thanks in advance
    Michael
    appendix:
    - Create and Insert command and error message
    - MaxDB version and its parameters
    Create and Insert command and error message
    CREATE TABLE "DBA"."AZ_Z_TEST02"
         "ZTB_ID"               Integer    NOT NULL,
         "ZTB_NAMEOFREPORT"           Char (400) ASCII DEFAULT '',
         "ZTB_LONG_COMMENT"                LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_00"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_01"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_02"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_03"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_04"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_05"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_06"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_07"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_08"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_09"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_10"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_11"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_12"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_13"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_14"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_15"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_16"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_17"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_18"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_19"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_20"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_21"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_22"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_23"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_24"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_25"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_26"         LONG ASCII DEFAULT '',
         PRIMARY KEY ("ZTB_ID")
    The insert command
    INSERT INTO AZ_Z_TEST02 SET ztb_id = 87
    works fine. If I add the LONG field
    "ZTB_LONG_TEXTBLOCK_27"         LONG ASCII DEFAULT '',
    the following error occurs:
        Auto Commit: On, SQL Mode: Internal, Isolation Level: Committed
        General error;-7032 POS(1) SQL statement not allowed for column of data type LONG
        INSERT INTO AZ_Z_TEST02 SET ztb_id = 88
    MaxDB version and its parameters
    All db params given by
    dbmcli -d myDB -u dbm,dbm param_directgetall > maxdb_params.txt
    are
    KERNELVERSION                         KERNEL    7.5.0    BUILD 026-123-094-430
    INSTANCE_TYPE                         OLTP
    MCOD                                  NO
    RESTART_SHUTDOWN                      MANUAL
    SERVERDBFOR_SAP                     YES
    _UNICODE                              NO
    DEFAULT_CODE                          ASCII
    DATE_TIME_FORMAT                      INTERNAL
    CONTROLUSERID                         DBM
    CONTROLPASSWORD                       
    MAXLOGVOLUMES                         10
    MAXDATAVOLUMES                        11
    LOG_VOLUME_NAME_001                   LOG_001
    LOG_VOLUME_TYPE_001                   F
    LOG_VOLUME_SIZE_001                   64000
    DATA_VOLUME_NAME_0001                 DAT_0001
    DATA_VOLUME_TYPE_0001                 F
    DATA_VOLUME_SIZE_0001                 64000
    DATA_VOLUME_MODE_0001                 NORMAL
    DATA_VOLUME_GROUPS                    1
    LOG_BACKUP_TO_PIPE                    NO
    MAXBACKUPDEVS                         2
    BACKUP_BLOCK_CNT                      8
    LOG_MIRRORED                          NO
    MAXVOLUMES                            22
    MULTIO_BLOCK_CNT                    4
    DELAYLOGWRITER                      0
    LOG_IO_QUEUE                          50
    RESTARTTIME                         600
    MAXCPU                                1
    MAXUSERTASKS                          50
    TRANSRGNS                           8
    TABRGNS                             8
    OMSREGIONS                          0
    OMSRGNS                             25
    OMS_HEAP_LIMIT                        0
    OMS_HEAP_COUNT                        1
    OMS_HEAP_BLOCKSIZE                    10000
    OMS_HEAP_THRESHOLD                    100
    OMS_VERS_THRESHOLD                    2097152
    HEAP_CHECK_LEVEL                      0
    ROWRGNS                             8
    MINSERVER_DESC                      16
    MAXSERVERTASKS                        20
    _MAXTRANS                             288
    MAXLOCKS                              2880
    LOCKSUPPLY_BLOCK                    100
    DEADLOCK_DETECTION                    4
    SESSION_TIMEOUT                       900
    OMS_STREAM_TIMEOUT                    30
    REQUEST_TIMEOUT                       5000
    USEASYNC_IO                         YES
    IOPROCSPER_DEV                      1
    IOPROCSFOR_PRIO                     1
    USEIOPROCS_ONLY                     NO
    IOPROCSSWITCH                       2
    LRU_FOR_SCAN                          NO
    PAGESIZE                            8192
    PACKETSIZE                          36864
    MINREPLYSIZE                        4096
    MBLOCKDATA_SIZE                     32768
    MBLOCKQUAL_SIZE                     16384
    MBLOCKSTACK_SIZE                    16384
    MBLOCKSTRAT_SIZE                    8192
    WORKSTACKSIZE                       16384
    WORKDATASIZE                        8192
    CATCACHE_MINSIZE                    262144
    CAT_CACHE_SUPPLY                      1632
    INIT_ALLOCATORSIZE                    229376
    ALLOW_MULTIPLE_SERVERTASK_UKTS        NO
    TASKCLUSTER01                       tw;al;ut;2000sv,100bup;10ev,10gc;
    TASKCLUSTER02                       ti,100dw;30000us;
    TASKCLUSTER03                       compress
    MPRGN_QUEUE                         YES
    MPRGN_DIRTY_READ                    NO
    MPRGN_BUSY_WAIT                     NO
    MPDISP_LOOPS                        1
    MPDISP_PRIO                         NO
    XP_MP_RGN_LOOP                        0
    MP_RGN_LOOP                           0
    MPRGN_PRIO                          NO
    MAXRGN_REQUEST                        300
    PRIOBASE_U2U                        100
    PRIOBASE_IOC                        80
    PRIOBASE_RAV                        80
    PRIOBASE_REX                        40
    PRIOBASE_COM                        10
    PRIOFACTOR                          80
    DELAYCOMMIT                         NO
    SVP1_CONV_FLUSH                     NO
    MAXGARBAGECOLL                      0
    MAXTASKSTACK                        1024
    MAX_SERVERTASK_STACK                  100
    MAX_SPECIALTASK_STACK                 100
    DWIO_AREA_SIZE                      50
    DWIO_AREA_FLUSH                     50
    FBM_VOLUME_COMPRESSION                50
    FBM_VOLUME_BALANCE                    10
    FBMLOW_IO_RATE                      10
    CACHE_SIZE                            10000
    DWLRU_TAIL_FLUSH                    25
    XP_DATA_CACHE_RGNS                    0
    DATACACHE_RGNS                      8
    XP_CONVERTER_REGIONS                  0
    CONVERTER_REGIONS                     8
    XP_MAXPAGER                           0
    MAXPAGER                              11
    SEQUENCE_CACHE                        1
    IDXFILELIST_SIZE                    2048
    SERVERDESC_CACHE                    73
    SERVERCMD_CACHE                     21
    VOLUMENO_BIT_COUNT                    8
    OPTIM_MAX_MERGE                       500
    OPTIM_INV_ONLY                        YES
    OPTIM_CACHE                           NO
    OPTIM_JOIN_FETCH                      0
    JOIN_SEARCH_LEVEL                     0
    JOIN_MAXTAB_LEVEL4                    16
    JOIN_MAXTAB_LEVEL9                    5
    READAHEADBLOBS                      25
    RUNDIRECTORY                          E:\_mp\u_v_dbs\EVERW_C5
    _KERNELDIAGFILE                       knldiag
    KERNELDIAGSIZE                        800
    _EVENTFILE                            knldiag.evt
    _EVENTSIZE                            0
    _MAXEVENTTASKS                        1
    _MAXEVENTS                            100
    _KERNELTRACEFILE                      knltrace
    TRACE_PAGES_TI                        2
    TRACE_PAGES_GC                        0
    TRACE_PAGES_LW                        5
    TRACE_PAGES_PG                        3
    TRACE_PAGES_US                        10
    TRACE_PAGES_UT                        5
    TRACE_PAGES_SV                        5
    TRACE_PAGES_EV                        2
    TRACE_PAGES_BUP                       0
    KERNELTRACESIZE                       648
    EXTERNAL_DUMP_REQUEST                 NO
    AKDUMP_ALLOWED                      YES
    _KERNELDUMPFILE                       knldump
    _RTEDUMPFILE                          rtedump
    UTILITYPROTFILE                     dbm.utl
    UTILITY_PROTSIZE                      100
    BACKUPHISTFILE                      dbm.knl
    BACKUPMED_DEF                       dbm.mdf
    MAXMESSAGE_FILES                    0
    EVENTALIVE_CYCLE                    0
    _SHAREDDYNDATA                        10280
    _SHAREDDYNPOOL                        3607
    USE_MEM_ENHANCE                       NO
    MEM_ENHANCE_LIMIT                     0
    __PARAM_CHANGED___                    0
    __PARAM_VERIFIED__                    2008-05-13 13:47:17
    DIAG_HISTORY_NUM                      2
    DIAG_HISTORY_PATH                     E:\_mp\u_v_dbs\EVERW_C5\DIAGHISTORY
    DIAGSEM                             1
    SHOW_MAX_STACK_USE                    NO
    LOG_SEGMENT_SIZE                      21333
    SUPPRESS_CORE                         YES
    FORMATTING_MODE                       PARALLEL
    FORMAT_DATAVOLUME                     YES
    HIRES_TIMER_TYPE                      CPU
    LOAD_BALANCING_CHK                    0
    LOAD_BALANCING_DIF                    10
    LOAD_BALANCING_EQ                     5
    HS_STORAGE_DLL                        libhsscopy
    HS_SYNC_INTERVAL                      50
    USE_OPEN_DIRECT                       NO
    SYMBOL_DEMANGLING                     NO
    EXPAND_COM_TRACE                      NO
    OPTIMIZE_OPERATOR_JOIN_COSTFUNC       YES
    OPTIMIZE_JOIN_PARALLEL_SERVERS        0
    OPTIMIZE_JOIN_OPERATOR_SORT           YES
    OPTIMIZE_JOIN_OUTER                   YES
    JOIN_OPERATOR_IMPLEMENTATION          IMPROVED
    JOIN_TABLEBUFFER                      128
    OPTIMIZE_FETCH_REVERSE                YES
    SET_VOLUME_LOCK                       YES
    SHAREDSQL                             NO
    SHAREDSQL_EXPECTEDSTATEMENTCOUNT      1500
    SHAREDSQL_COMMANDCACHESIZE            32768
    MEMORY_ALLOCATION_LIMIT               0
    USE_SYSTEM_PAGE_CACHE                 YES
    USE_COROUTINES                        YES
    MIN_RETENTION_TIME                    60
    MAX_RETENTION_TIME                    480
    MAX_SINGLE_HASHTABLE_SIZE             512
    MAX_HASHTABLE_MEMORY                  5120
    HASHED_RESULTSET                      NO
    HASHED_RESULTSET_CACHESIZE            262144
    AUTO_RECREATE_BAD_INDEXES             NO
    LOCAL_REDO_LOG_BUFFER_SIZE            0
    FORBID_LOAD_BALANCING                 NO

    >
    Lars Breddemann wrote:
    > Hi Michael,
    >
    > this really looks like one of those "Find-the-5-errors-in-the-picture" riddles to me.
    > Really.
    >
    > Ok, first to your question: this seems to be a bug - I could reproduce it with my 7.5 Build 48.
    > Anyhow, when I use
    >
    > insert into "AZ_Z_TEST02"  values (87,'','','','','','','','','','','','','','','',''
    >                                           ,'','','','','','','','','','','','','','','','')
    >
    > it works fine.
    It solves my problem. Thanks a lot. -- I can hardly believe that this is all that was needed to work around the bug. That may be the reason why I had not given it a try.
    >
    > Since explicitly specifying all values for an insert is a good idea anyhow (you can see directly what values the new tuple will have), you may want to change your code to this.
    >
    > Now to the other errors:
    > - 28 Long values per row?
    > What the heck is wrong with the data design here?
    > Honestly, you can save data up to 2 GB in a BLOB/CLOB.
    > Currently, your data design allows 56 GB per row.
    > Moreover 26 of those columns seems to belong together originally - why do you split them up at all?
    >
    > - The "ZTB_NAMEOFREPORT" looks like something the users see -
    > still there is no unique constraint preventing that you get 10000 of reports with the same name...
    You are right, this table looks a bit strange. The story behind it: each crystal report in the application has a few textblocks which are the same for all of the (e.g.) persons the (e.g.) letter is created for. In principle, the textblocks could be added directly to the crystal report. However, as is often the case, these textblocks may change once in a while. Thus, I put the texts of the textblocks into this "strange" db table (one row for each report, one field for each textblock; the name of the report is given by "ztb_nameofreport"), and the application offers a menu by which these textblocks can be changed. Of course, the fields in the table could be of type CHAR, but LONG has the advantage that I do not have to think about the length of the field, since sometimes the texts are short and sometimes they are really long.
    (These texts would blow up the sql select command of the crystal report very much if they were integrated into that select command. Thus it is realized another way: the texts are read before the crystal report is loaded, then the texts are "given" to the crystal report (by its parameters), and finally the crystal report is loaded.)
    >
    > - MaxDB 7.5 Build 26?? Where have you been the last few years?
    > Really - download the 7.6.03 Version [here|https://www.sdn.sap.com/irj/sdn/maxdb-downloads] from SDN and upgrade.
    > With 7.6 I was not able to reproduce your issue at all.
    The customer still has Win98 clients. The MaxDB ODBC driver 7.5.00.26 does not work for them. I got the hint to use ODBC driver 7.3 (see [lists.mysql.com/maxdb/25667|lists.mysql.com/maxdb/25667]). Do MaxDB 7.6 and ODBC driver 7.3 work together?
    All Win98 clients may be replaced by WinXP clients in the near future. Then an upgrade may be reasonable.
    >
    > - Are you really putting your data into the DBA schema? Don't do that, ever.
    > DBM/SUPERDBA (the sysdba-schemas) are reserved for the MaxDB system tables.
    > Create a user/schema for your application data and put your tables into that.
    >
    > KR Lars
    In the first MaxDB version I used, schemas were not available, and I haven't changed it since then. Is there an easy way to "move an existing table into a new schema"?
    Michael

  • How to improve Query performance on large table in MS SQL Server 2008 R2

    I have a table with 20 million records. What is the best option to improve query performance on this table: partitioning the table into filegroups, or splitting the table into multiple smaller tables?

    Hi bala197164,
    First, I want to point out that both partitioning the table into filegroups and splitting the table into multiple smaller tables can improve query performance, and they fit different situations. For example, suppose our table has one hundred columns and some columns are not directly related to the table's object (say, a table named userinfo that stores user information has columns address_street, address_zip, and address_province; in that case we can create a new table named Address and add a foreign key in the userinfo table referencing the Address table). In this situation, by splitting a large table into smaller, individual tables, queries that access only a fraction of the data can run faster because there is less data to scan. The other situation is when the table's records can be grouped easily, for example by a column named year that stores the product release date; in that case we can partition the table into filegroups to improve query performance. Usually we use both methods together. Additionally, we can add indexes to the table to improve query performance. For more detail, please refer to the following documents (a short partitioning sketch follows the links):
    Partitioning:
    http://msdn.microsoft.com/en-us/library/ms178148.aspx
    CREATE INDEX (Transact-SQL):
    http://msdn.microsoft.com/en-us/library/ms188783.aspx
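    A minimal sketch of the filegroup-partitioning case (it assumes filegroups fg2020 through fg2023 already exist in the database; all names are made up):

    -- one partition per release year; RANGE RIGHT puts each boundary
    -- value into the partition on its right
    CREATE PARTITION FUNCTION pf_release_year (int)
        AS RANGE RIGHT FOR VALUES (2021, 2022, 2023);

    -- map the four resulting partitions onto four filegroups
    CREATE PARTITION SCHEME ps_release_year
        AS PARTITION pf_release_year TO (fg2020, fg2021, fg2022, fg2023);

    -- create the table on the scheme, partitioned by release year
    CREATE TABLE Product (
        product_id   int          NOT NULL,
        product_name nvarchar(50) NOT NULL,
        release_year int          NOT NULL
    ) ON ps_release_year (release_year);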
    TechNet Subscriber Support
    If you are a TechNet Subscription user and have any feedback on our support quality, please send your feedback here.
    Allen Li
    TechNet Community Support

  • Sparse table with many columns

    Hi,
    I have a table that contains around 800 columns. The table is sparse, such that many rows contain up to 50 populated columns (the others contain NULL).
    My questions are:
    1. Can a table that contains many columns cause a performance problem? Is there an alternative way to hold a table with many columns efficiently?
    2. Does a row that contains NULL values consume storage space?
    Thanks
    dyahav

    [NULLs Indicate Absence of Value|http://download-west.oracle.com/docs/cd/B14117_01/server.101/b10743/schema.htm#sthref725]
    A null is the absence of a value in a column of a row. Nulls indicate missing, unknown, or inapplicable data. A null should not be used to imply any other value, such as zero. A column allows nulls unless a NOT NULL or PRIMARY KEY integrity constraint has been defined for the column, in which case no row can be inserted without a value for that column.
    Nulls are stored in the database if they fall between columns with data values. In these cases they require 1 byte to store the length of the column (zero).
    Trailing nulls in a row require no storage because a new row header signals that the remaining columns in the previous row are null. For example, if the last three columns of a table are null, no information is stored for those columns. In tables with many columns, the columns more likely to contain nulls should be defined last to conserve disk space.
    Most comparisons between nulls and other values are by definition neither true nor false, but unknown. To identify nulls in SQL, use the IS NULL predicate. Use the SQL function NVL to convert nulls to non-null values.
    Nulls are not indexed, except when the cluster key column value is null or the index is a bitmap index.
    My guess for efficiently storing this information would be to take any columns that are almost always null and place them at the end of the table definition so they don't consume any space.
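    A minimal sketch of that idea with made-up names; the columns that are almost always NULL go last, so their trailing NULLs consume no storage:

    CREATE TABLE sensor_reading (
        reading_id  NUMBER PRIMARY KEY,  -- always populated
        reading_ts  DATE NOT NULL,       -- always populated
        temp_c      NUMBER,              -- usually populated
        rare_attr1  VARCHAR2(100),       -- almost always NULL: kept trailing
        rare_attr2  VARCHAR2(100)        -- almost always NULL: kept trailing
    );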
    HTH!

  • Can I view the same pdf on multiple iPads at the same time with 1 iPad operating as the master (controlling page turns etc)?

    We would like to use four iPads as an autocue system. Can I view the same pdf on multiple iPads at the same time, with one iPad operating as the master (controlling page turns etc.) and the others operating as slaves? Can this be done without the use of 3G or a remote Wifi hub? I would appreciate your input.

    Open the document in Acrobat Reader and use the menu Window > New Window.

  • Query Builder - How to create a link between tables with many fields?

    I have many fields in my tables. When the query builder loads the tables, the tables are expanded to accommodate all the fields. Suppose I want to link Table A's Customer ID (the first field in Table A) with Table B's Customer ID (the last field in Table B). How can I do that if the last field in Table B is not visible on the screen?
    Currently, I create a link from Table A's customer to a random field in Table B, then edit the link to create the proper condition. Is there a more efficient way to do this?
    Thanks.
    Edited by: woro2006 on Apr 19, 2011 9:40 AM

    Hi woro2006 -
    Easiest way is to grab Table A's title bar & drag Table A down the page until the columns you want to link are visible.
    FYI, there is an outstanding bug
    Bug 10215339: 30EA1: MISSING THE 2.1 RIGHT CLICK OPTIONS ON DATA FIELDS TO CREATE A LINK
    to add a context menu on the field for this. That is, Link {context field} to > {other data sources} > {fields from that source}
    It is being considered for 3.1, but I have no idea where it will end up in the priority queue.
    Brian Jeffries
    SQL Developer Team
    P.S.: Argh, unfortunately I just tried it and the diagram does not auto scroll while you drag, so there is some guesswork/repositioning of the view involved.
    Logged Bug 12380154 - QUERY BUILDER DIAGRAM DOES NOT AUTO SCROLL WHEN DRAGGING TABLE

  • Select query for 6 different tables with vbeln as same selction criteria

    Hi,
    I have a query.
    I am using 6 different tables with vbeln being the same primary key, on the basis of which I have to match the data.
    I have assigned vbeln a different name in each table, but the select query gives me the error that vbeln2 is not a correct field.
    Can anyone please suggest how I can use a different field name and read the data from the tables?

    hi,
    Use alias names for the fields/tables in the select query and the problem will be solved. For example:
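    A minimal sketch of the idea in plain SQL, with hypothetical tables t1..t3 (extend the same pattern to all six tables; in ABAP Open SQL the alias is introduced with AS and fields are addressed with ~):

    SELECT t1.vbeln,
           t1.col_a,
           t2.col_b,
           t3.col_c
      FROM t1
           INNER JOIN t2 ON t2.vbeln = t1.vbeln
           INNER JOIN t3 ON t3.vbeln = t1.vbeln
     WHERE t1.vbeln = :p_vbeln;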
    Regards,
    Praveen Savanth.N

  • How to improve query performance of an ODS- with 320 million records

    Issue:
    The reports are giving time-outs during execution.
    Scenario:
    We have an ODS with approximately 320 million records in it.
    The reports are based on
    The ODS and
    InfoSets based on this ODS.
    These reports are giving time-outs during execution.
    A few facts about this ODS:
    There are around 75 restricted and calculated key figures used in the query definition.
    We can't replace this ODS with a cube, as an InfoSet is required on it.
    This is in a BW 3.5 environment.
    A few things we tried:
    Secondary indices were created on the fields which appear in the selection screen of the reports. That did not work.
    The restriction/calculation logic in the query definition can be moved to the backend. Will that make a difference?
    Question:
    Can you suggest ways to improve the query performance of this ODS?
    Your immediate response is highly appreciated. Thanks in advance.

    Hey!
    I think Oliver's questions are good. 320 million records are too many for an ODS. If you can get rid of the InfoSet, that would be helpful. Why exactly do you need it? If you don't need it, you could partition your ODS by a characteristic and report over a MultiProvider.
    Is there a way to delete some data from the ODS?
    Maybe you will make an upgrade to 7.0 some time soon? There you can use InfoSets on InfoCubes.
    You could also try precalculation, as Sam says. This is possible with the reporting agent or Information Broadcasting; then you have it in your cache. Make sure your cache is large enough. Maybe you can use a table or something similar.
    Do you just need to run one or a few special reports at a special time? Maybe you can make an update into another ODS, writing just the result into it. For this you can use update rules, or maybe the analysis process designer (transaction RSANWB) is the better way.
    Maybe it is also possible to increase the parameter for your dialog runtime, rdisp/max_wprun_time (if you don't know it, your Basis team should; otherwise look here: https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/ab254cf2-0c01-0010-c28d-b26d04627e61)
    Best regards,
    Peter

  • LookUp to the same table with multiple conditions

    Hi,
    I need to do a lookup to the same table in the flow, but with different queries; each query contains its own 'where'.
    Can I do it somehow in one lookup, or do I have to use a few?
    select a from table where a=1
    select b from table where c=3
    Thanks

    Hi,
    Using multiple lookups will be a cleaner approach. If you are using multiple lookups on the same table, consider using the Cache transform. Refer to the link below for details on the Cache transform:
    Lookup and Cache Transforms in SQL Server Integration Services
    Alternatively, if you want to go ahead with a single lookup, you may have to modify the SQL statement in the Lookup accordingly to return the proper values. In your case it may be
    select a,b from table where a=1 or c=3
    Note: Consider the above as pseudo code. It needs to be tested and adapted to your requirement.
    Best Regards Sorna

  • Abap query, join between same tables

    Hi,
    I have an ABAP Query (SQ01). I need to create a join of a table to itself (ESLL-ESLL) to obtain the services from a PO. The join goes from packno in ESLL to subpackno in ESLL (the same table), but I don't know how I can do that with ABAP Query, because the InfoSet doesn't allow inserting the table twice.
    Can somebody help me?
    Thanks.
    Victoria

    Hi:
    I was able to create a query to retrieve the service line entries using table ESSR (header, with the service entry sheet number as an input parameter), linked by package number to view ML_ESLL, and then from the view the sub-package number linked to ESLL. That way I was able to retrieve all the service line information from table ESLL using only SQ02 and SQ01, no ABAP.
    I hope this helps.
    Juan
    PS: I know the post is old, but maybe there are people out there with no ABAP access who need to create reports for Service Entry Sheet lines. All the join conditions are:
    Table             Table
    ESSR            EKKO
    ESSR            ML_ESLL
    ML_ESLL      ESLL
    ESLL             ESLH
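    For anyone who does have SQL access, the self-join that the InfoSet cannot express looks roughly like this (a sketch only; verify the ESLL field names, such as SUB_PACKNO, in your system):

    -- outer row: the package line of the PO item; inner rows: its service lines
    SELECT svc.*
      FROM esll pkg
           INNER JOIN esll svc ON svc.packno = pkg.sub_packno
     WHERE pkg.packno = :p_packno;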
    Edited by: Juan Marino on Jan 23, 2012 10:53 PM

  • No records in Azure databrowser viewing tables with many columns.

    Yesterday I encountered an issue while browsing a table created in Azure.
    I created a new database in Azure, and in this database I created and populated several tables, including one very big table.
    The big table has 239 columns.
    I succeeded in populating this table with our in-company table data by means of a dtsx package. No problem this far.
    When I query the table from SQL Server Management Studio, I get correct results.
    However, the data browser on the Azure site itself does not show any data for this table. That's a little disappointing given that there are more than 76,000 records in this table. Refresh didn't help.
    When I browse smaller tables with fewer data columns, I do get data in this data browser.
    Is this a known issue, or do you know a solution for it?
    Kind regards,
    Fred Silven
    AEB Amsterdam
    The Netherlands.

    Hello,
    Based on your description, you want to edit data of a large table in the Management Portal of SQL Database, but it does not return rows in the GUI Design tab. Can you get the data when you select "TOP 200 rows"?
    Since there are 239 columns and 76,000 rows in the table, the Portal may take rather a long time to load all the data in the GUI. Please try using a T-SQL statement to perform the select or update operation, and specify a condition in the WHERE clause to load only the needed data; a sketch follows.
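    A minimal sketch with hypothetical table and column names:

    -- pull a manageable slice instead of letting the browser load all rows
    SELECT TOP (200) *
      FROM dbo.BigTable
     ORDER BY Id;

    -- or restrict to just the rows you need to edit
    SELECT Id, Col1, Col2
      FROM dbo.BigTable
     WHERE Id BETWEEN 1 AND 1000;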
    Regards,
    Fanny Liu
    TechNet Community Support

  • Select count(x) on a table with many column numbers?

    Hi all,
    I have a table of physical data with 850 (!!) columns and
    ~1 million rows.
    The statement select count(cycle) from test_table is very, very slow.
    WHY?
    The same select count(cycle) is very fast on a table with e.g. 10 columns. WHY?
    What does the number of columns have to do with the SELECT count(cycle) statement?
    create table test_table(
      cycle      number primary key,
      stamp      date,
      sensor_1   number,
      sensor_2   number,
      -- ... sensor_3 through sensor_848 ...
      sensor_849 number,
      sensor_850 number);
    on W2K Oracle 9i Enterprise Edition Release 9.2.0.4.0 Production
    Can anybody help me?
    Many Thanks
    Achim

    hi lennert, hi all,
    many thanks for all the answers. I'm not an Oracle expert.
    Sorry for my English.
    Hi Lennert,
    you are right; what must I do to use the index in the query? Can you give me a pointer in the right direction, please?
    Many greetings
    Achim
    select count(*) from w4t.v_tfmc_3_blocktime;
    COUNT(*) ==> Table with 3 columns (very fast)
    306057
    Execution plan
    0 SELECT STATEMENT Optimizer=CHOOSE
    1 0 SORT (AGGREGATE)
    2 1 TABLE ACCESS (FULL) OF 'V_TFMC_3_BLOCKTIME'
    Statistics
    0 recursive calls
    0 db block gets
    801 consistent gets
    794 physical reads
    0 redo size
    388 bytes sent via SQL*Net to client
    499 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    1 rows processed
    select count(*) from w4t.v_tfmc_3_value;
    COUNT(*)==> Table with 850 columns (very slow)
    64000
    Execution plan
    0 SELECT STATEMENT Optimizer=CHOOSE
    1 0 SORT (AGGREGATE)
    2 1 TABLE ACCESS (FULL) OF 'V_TFMC_3_VALUE'
    Statistics
    1 recursive calls
    1 db block gets
    48410 consistent gets
    38791 physical reads
    13068 redo size
    387 bytes sent via SQL*Net to client
    499 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    1 rows processed
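    A sketch of the direction Lennert is pointing in (it assumes statistics have simply never been gathered, so the CHOOSE optimizer falls back to the full scans shown above; verify on your own system). With statistics in place, a count on the NOT NULL primary key column can be satisfied from the narrow index (an INDEX FAST FULL SCAN) instead of reading every block of the wide table:

    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(ownname => USER,
                                    tabname => 'TEST_TABLE',
                                    cascade => TRUE);
    END;
    /
    -- CYCLE is the NOT NULL primary key, so this no longer touches the table
    SELECT COUNT(cycle) FROM test_table;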

  • Selecting specific records out of the same table with PICS

    Post Author: nambi
    CA Forum: Formula
    I have a database table in which we have information for our paint codes.
    When accessed through our software, this database displays the types of tests we do for the products we manufacture; we enter these results manually through the software. When I open this same table in Access, I noticed that each test we do is displayed as a new entry under the same product. Therefore I have several entries for the same product code, and since we have several tests for each, I am unable to specifically pull out the data I need.
    For example, I need to create a data sheet for our customers displaying the bake time for all our products. The product code (formkey) is listed multiple times, but all I need is to record the Bake (TargetAlphaValue) time. From this table I will also need to report on the Gloss. If I use record selection, I am only able to display one type of test, although I will need to specify other types as well. This is the area I have a problem with.
    I have shown in a jpg what I am looking for. Would anyone here know how to pull out only the TargetAlphaValue and associate it to the bake and formkey, and then do the same with the Gloss test and pull up the TargetAlphaValue for that? It would be of great help.

    Can you be more precise, please:
    - which table stores the people identities? (I call this one Identity)
    - which table gives the class the student works in? (I call this one Class_attendees)
    - which table gives the instructor of a class? (I call this one Class)
    If your issue is that you have one table which stores Identities, and you need to display the Student identity and the Instructor identity, you have to call this table twice in your query, using table aliases. I mean:
    Select Stud_iden.name, Instr_iden.name
    From Identity Stud_iden, Identity Instr_iden, Class_attendees, Class
    Where Class.clas_id = class_attendees.class_id
    and class.instructor_id = Instr_iden.people_id
    and class_attendees.student_id = Stud_iden.people_id
    Is this what you need to do?
