Impact of creating a non-clustered index on a huge transaction table?

Hello Everyone,
We have a transaction table containing 10 million records, and every day about a million records are inserted. We don't have a clustered index on this table because it is a transaction table (it takes more than 10 columns to uniquely identify a row). We
do have some stored procedures that generate reports from this table. To improve the performance of one of these SPs, we created a non-clustered index on the table and saw a huge performance gain.
  Here comes my question - will this (the creation of a non-clustered index) impact the performance of my data loads or of the other report runs?
Any suggestions will be appreciated.
Many Thanks!
Rajasekhar.

Hello Rajasekhar, 
First, identify how this table and its columns are used. The sp_depends system procedure can show you the table's dependencies.
Then look at the complex queries and their execution plans; the plans can surface recommendations for appropriate missing indexes (see the sketch below).
Now you can create the appropriate indexes. I always suggest limiting the index count if you are inserting/updating a large volume of records. Also, create a clustered index if possible.
One more option: you can horizontally partition the table and spread the data across multiple filegroups. With range-based partitioning, your query performance can also improve a lot.
To apply partitioning to an existing table, you should take a backup and recreate it from scratch.
Check this link : http://www.mssqltips.com/sqlservertip/2888/how-to-partition-an-existing-sql-server-table/
Best Regards, 
Ashokkumar.
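
A rough sketch (not from the original reply) of the missing-index check mentioned above: SQL Server records missing-index suggestions in DMVs, and you can list them for the table before deciding what to create. The table name below is hypothetical.

SELECT  d.statement                 AS table_name,
        d.equality_columns,
        d.inequality_columns,
        d.included_columns,
        s.user_seeks,
        s.avg_user_impact           -- estimated % improvement if the index existed
FROM    sys.dm_db_missing_index_details      AS d
JOIN    sys.dm_db_missing_index_groups       AS g ON g.index_handle = d.index_handle
JOIN    sys.dm_db_missing_index_group_stats  AS s ON s.group_handle = g.index_group_handle
WHERE   d.statement LIKE '%TransactionTable%';   -- hypothetical table name

Treat these as suggestions only; every extra index adds maintenance cost to the nightly million-row load.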

Similar Messages

  • Non-clustered indexes not working in SQL Server 2005

    I have created two non-clustered indexes on two particular tables. The problem is that they work fine at first, but after restarting my server they stop working, and I then have to delete each index and recreate it.

    try these links 
    http://blog.sqlauthority.com/2009/02/07/sql-server-introduction-to-force-index-query-hints-index-hint/
    http://blog.sqlauthority.com/2009/02/08/sql-server-introduction-to-force-index-query-hints-index-hint-part2/
    http://www.brentozar.com/archive/2013/10/index-hints-helpful-or-harmful/
    Hope it Helps!!
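
    For reference, a minimal sketch of the force-index hint those links discuss; the table, index and column names here are made up for illustration.

    SELECT t.OrderID, t.Amount
    FROM   dbo.Orders AS t WITH (INDEX (IX_Orders_Amount))   -- force this index
    WHERE  t.Amount > 1000;

    Hints like this override the optimizer, so use them sparingly; if the index is later dropped or renamed, the query fails.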

  • Impact of Creating new index ?

    Hi All, I am using 10.2.0.4.0 version of oracle.
    I want expert advice in below scenario.
    I need to create an index, because a particular 'Search' query is running slowly and my customer frequently uses that 'Search' functionality. I have tried all the alternatives for tuning the query, and finally decided to create a new composite index on two columns of a table used in the query.
    So my question/concern is: what will the impact be on other 'Select' queries that use these columns in their predicates? Is there any possibility of a negative impact (slower performance) on them due to this new index? What would be the right way to do an impact analysis in this scenario?
    (Please note that in this case I am least bothered about the impact on 'INSERTs' due to this index creation.)

    Hi again,
    > For now, can you please confirm: as 'rp0428' mentioned, there is a chance of better performance in other 'Select' queries,
    > but is there no chance of performance degradation in other queries due to this new index, because Oracle will evaluate the best plan among all?
    > Does the above statement always hold true, or may there be deviations?
    Hmmm... I can see no good reason why an added index should adversely affect other SELECTs - the optimiser should
    simply ignore the new index where it doesn't help a query - and may make use of it in ways you didn't foresee.
    Having said that, I can't guarantee that this is the case. I would run tests before implementing it in production - at least
    try and test your most common/frequently run queries - remember Pareto's law (http://en.wikipedia.org/wiki/Pareto_principle).
    A small fraction of your queries will have a large impact on the system and some will be "marginal" to performance/usage.
    Sometimes the optimiser behaves in counter-intuitive ways - running at least some tests and then trying it is what I would do if you
    can't, or don't have the time to, test exhaustively.
    HTH,
    Paul...
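
    A minimal sketch of that kind of test, assuming a hypothetical table and composite index; compare the plans before and after the index exists:

    EXPLAIN PLAN FOR
      SELECT * FROM orders WHERE cust_id = :b1 AND status = :b2;
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

    CREATE INDEX orders_cust_status_ix ON orders (cust_id, status);

    EXPLAIN PLAN FOR
      SELECT * FROM orders WHERE cust_id = :b1 AND status = :b2;
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

    Repeating this for the other frequent queries that filter on the same columns shows whether the optimiser starts choosing the new index anywhere you didn't expect.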

  • What is RID in non clustered index and its use

    Hi All,
    I need help regarding following articles on sql server
    1) What is the RID in a non-clustered index, and what is its use?
    2) What are physical and virtual address space? What is the difference between 32-bit and 64-bit virtual address space?
    Regards
    Rahul

    Next time, please ask a single question per thread; you will get a better response.
    1. The RID is the location of a row in a heap. When you create a non-clustered index on a heap and
    a lookup is needed to fetch extra columns, the RID is used to locate the rows. RID is basically the Row ID. That is the basic definition for you; please read
    this thread for more details.
    2. I have not heard of "physical address space"; I know virtual address space (VAS).
    VAS, in simple terms, is the amount of (virtual) memory 'visible' to a process; the process can be the SQL Server process or a Windows process. It depends on the architecture of the operating system. A 32-bit OS has a maximum of 4 GB of VAS; this follows
    from the fact that a process running on a 32-bit system can address at most 2^32 locations (which is equivalent to 4 GB). Similarly, for 64-bit the theoretical maximum VAS is 2^64, which is practically unlimited; to keep things feasible, the maximum VAS for a 64-bit system is kept to 8 TB. Now,
    VAS acts as a layer of abstraction, an intermediate. Instead of every request mapping directly to physical memory, requests first map to VAS and are then mapped to physical memory, so that memory requests can be managed in a more coordinated fashion than letting each process
    do it itself; otherwise a memory crunch would soon follow. Any process created on Windows sees virtual memory according to its VAS limit.
    Please read
    This Article for detailed information
    Please mark this reply as answer if it solved your issue or vote as helpful if it helped so that other forum members can benefit from it
    My Technet Wiki Article
    MVP
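
    A small demo (not from the original reply) of where the RID shows up; the names are made up. The table is a heap, so its non-clustered index stores RIDs rather than clustering keys:

    CREATE TABLE dbo.HeapDemo (Id INT, Val VARCHAR(50));            -- heap: no clustered index
    CREATE NONCLUSTERED INDEX IX_HeapDemo_Id ON dbo.HeapDemo (Id);

    INSERT INTO dbo.HeapDemo (Id, Val) VALUES (1, 'a'), (2, 'b');

    -- Val is not in the index, so on a reasonably large table the plan for this query
    -- typically shows an Index Seek on IX_HeapDemo_Id followed by a RID Lookup on the heap.
    SELECT Id, Val FROM dbo.HeapDemo WHERE Id = 1;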

  • Create an index on a huge table

    hi gurus
    I am going to create an index on a very large table (194 GB), and the temporary tablespace size is 80 GB.
    I am afraid that during the index creation the temporary tablespace will not be enough to hold the data needed
    to create the index, because I only have 80 GB of temporary tablespace. How do I estimate the size I need to create such a large index, and is there an efficient way to do such an index creation?
    Thank you in advance.

    Jozsef wrote:
    Hi there,
    I have done similar things a couple of times before and I have used a pretty good method to obtain a rough (and it was really rough, but enough) estimation.
    Sorry folks, it does not work properly for CREATE INDEX... this method only works for SELECT statements, SO IGNORE IT :(
    It nearly works in 10g:
    | Id  | Operation              | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | CREATE INDEX STATEMENT |       | 10000 | 40000 |    10   (0)| 00:00:01 |
    |   1 |  INDEX BUILD NON UNIQUE| T1_I1 |       |       |            |          |
    |   2 |   SORT CREATE INDEX    |       | 10000 | 40000 |            |          |
    |   3 |    TABLE ACCESS FULL   | T1    | 10000 | 40000 |     6   (0)| 00:00:01 |
    Note
    -----
       - estimated index size: 196K bytes
    The "Note" tells you Oracle's estimate of the final space allocation needed for the index. There are various reasons why the estimate is not very accurate, and why it's not a good estimate of the space requirement in the TEMP tablespace, but it gives you a figure that is probably in the right ballpark (a factor of 2 out, either way, probably).
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk
    "For every expert there is an equal and opposite expert."
    Arthur C. Clarke
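
    For reference, the estimate shown above can be produced like this on 10g and later (the table name is from the example, the column is hypothetical):

    EXPLAIN PLAN FOR
      CREATE INDEX t1_i1 ON t1 (object_id);   -- hypothetical column
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

    The Note section of the output reports the "estimated index size"; as noted above, treat it as a ballpark figure rather than a direct measure of the TEMP space used by the sort during the build.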

  • Which index  I should create  Btree or Bitmap  index?

    I have table with columns c1,c2,c3
    I want to create index on column c1
    which index I should create Btree or Bitmap index
    the column contain 50% unique values and 50% duplicate values
    If Btree why?
    If Bitmap Why?
    I know that
    Btree is used when there more unique values (high cardinality)
    Bitmap is used when there less unique values (low cardinality)

    read this -
    Deadlocks with Bitmap Indexes
    Bitmap indexes were designed to be used solely within data warehouses, i.e. where the vast majority of the database activity is reading data,
    and there's very little (or no) data modification, except for batch processes which occasionally re-populate the warehouse.
    Each "row" in the bitmap index contains references to potentially many different rowids, in contrast to a B*-tree index which references a single rowid.
    It should be obvious, therefore, that since the transactional mechanism is the same for all database operations, any DML on a table which impacts the bitmap index may end up locking (or attempting to lock) many different "rows" within the index.
    This is the key concept with deadlocks in bitmap indexes, you're not being deadlocked on the underlying table, but on the index blocks. Courtesy - http://www.oratechinfo.co.uk/deadlocks.html
    hope u got it now...
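
    A quick sanity check along those lines, with illustrative names; with roughly 50% distinct values the column is closer to the high-cardinality case, and on a table that sees concurrent DML a B*-tree is the usual choice:

    SELECT COUNT(*) AS total_rows,
           COUNT(DISTINCT c1) AS distinct_c1
    FROM   t;

    CREATE INDEX t_c1_ix ON t (c1);             -- B*-tree (the default index type)
    -- CREATE BITMAP INDEX t_c1_bix ON t (c1);  -- only for low-cardinality, read-mostly tables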

  • How can I create a non-EFI USB installation media

    Hi fellows,
    I usually create a non-EFI installation CD via this procedure:
    https://wiki.archlinux.org/index.php/Un … ical_Media
    But now my optical drive is broken, so I need to create a non-EFI USB installation medium.
    How can I do it?

    Hello nTia89,
    I'm in a similar situation to yours. I have an ancient MacBook Pro (1,1) with a broken optical drive, and any attempt to get EFI boot working with Linux has resulted in no graphics.
    My solution has been to first install Mac OS X via USB, then install rEFInd. After that, rEFInd will recognise Arch USB media during boot and you can install.
    I'm only booting Linux, so my solution has been to use MBR on the hard drive. That forces the machine to use the BIOS compatibility mode.
    Don't hesitate to ask if you have any further questions!

  • Primary Key supported by a non-unique index?

    Encountered a weird situation today. A utility we set up, which allows analysts to restore data into their tables, started failing when it attempted to drop an index. The index was supporting a primary key. Makes sense. But our script was only supposed to be attempting to drop/recreate non-unique indexes. It turns out the supporting index on the primary key was non-unique, and to the best of my knowledge it came about as follows:
    SQL> create table junk (f number(1));
    Table created.
    SQL> create index junk_ix on junk(f);
    Index created.
    SQL> select UNIQUENESS from DBA_INDEXES where index_name = 'JUNK_IX';
    UNIQUENES
    NONUNIQUE
    SQL> alter table forbesc.junk add constraint junk_pk primary key (f) using index junk_ix;
    Table altered.
    SQL> select UNIQUENESS from DBA_INDEXES where index_name = 'JUNK_IX';
    UNIQUENES
    NONUNIQUE
    SQL> insert into junk values (1);
    1 row created.
    SQL> insert into junk values (1);
    insert into junk values (1)
    ERROR at line 1:
    ORA-00001: unique constraint (FORBESC.JUNK_PK) violated
    SQL> select index_name from dba_constraints where constraint_name = 'JUNK_PK';
    INDEX_NAME
    JUNK_IX
    What I can't figure out is how a non-unique index is enforcing uniqueness. I thought a unique index was the key piece in that very process. I thought that perhaps an index named something like 'SYS_123456' was getting created, but I couldn't find one:
    SQL> select object_name, object_type from dba_objects order by created desc;
    OBJECT_NAME   OBJECT_TYPE
    JUNK_IX     INDEX
    JUNK     TABLE
    ...
    How is the uniqueness getting enforced in this case? This is in Oracle 11.1.0.7.
    Thanks,
    --=Chuck

    It has always been that way. Oracle can, and will, use a non-unique index to enforce a PK constraint. The existing index just needs to have the PK column(s) as the leading column(s) of the index:
    SQL> create table t (id number, id1 number, descr varchar2(10));
    Table created.
    SQL> create index t_ids on t(id, id1);
    Index created.
    SQL> select index_name from user_indexes
      2  where table_name = 'T';
    INDEX_NAME
    T_IDS
    SQL> alter table t add constraint t_pk
      2  primary key (id);
    Table altered.
    SQL> select index_name from user_indexes
      2  where table_name = 'T';
    INDEX_NAME
    T_IDS
    SQL> insert into t values (1, 1, 'One');
    1 row created.
    SQL> insert into t values (1, 2, 'Two');
    insert into t values (1, 2, 'Two')
    ERROR at line 1:
    ORA-00001: unique constraint (OPS$ORACLE.T_PK) violated
    John

  • How to create a composite nonclustered index using SQL Server Management Studio

    Hi,
    How do I create a composite non-clustered index using the Manage Indexes and Keys dialog box?
    sukai

    [The original reply showed a screenshot of the Indexes/Keys dialog; the image is not reproduced in this archive.]
    Kalman Toth Database & OLAP Architect
    SQL Server 2014 Database Design
    New Book / Kindle: Beginner Database Design & SQL Programming Using Microsoft SQL Server 2014
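
    For reference, the T-SQL that the dialog ends up generating looks roughly like this (not part of the original reply; table and column names are hypothetical):

    CREATE NONCLUSTERED INDEX IX_Orders_CustomerID_OrderDate
        ON dbo.Orders (CustomerID, OrderDate)   -- composite key: column order matters
        INCLUDE (TotalAmount);                  -- optional covering column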

  • Transactional Replication: Non-Clustered Indexes not copying.

    Hello,
    I set up replication on our servers at work to streamline some procedures we run daily/weekly on them.
    This copies around 15 articles from two databases on the "Master" server to another server used for execution purposes. For the most part it was a pretty straightforward task and it seemed to work nicely, but after some investigation I realised that the
    non-clustered indexes weren't copying over to the child server.
    I set the non-clustered indexes property of the published articles to "True" and generated a new snapshot, and this seemed to work, but I've come into work this morning to find that the property has reset to "False" and I have no indexes on the
    table again. Why is this happening, and is there any way I can resolve the matter so the indexes are copied over concurrently?
    Thanks in advance for your advice.
    JB

    I actually solved this.
    You can use a post-replication SQL script to create the indexes. For whatever articles you're publishing, open the Indexes node of each article in Object Explorer, right-click an index and hover over Script Index as, then CREATE To, then click
    New Query Editor Window.
    A new query window will pop up with the resulting index script. Work your way through all the indexes on all the articles of the publication, copying and pasting just the CREATE INDEX statement of each script, and pull them all together into one query window.
    Once you're done, find a safe folder somewhere on your hard drive and save the query as a .sql file with a sensible name.
    Right-click on the publication and go to Properties. Click on the "Snapshot" tab; in there, there should be a section saying "Run additional scripts". Choose the browse button next to "After applying the snapshot, execute this script:".
    Navigate to your script file and choose it. Once done, click OK; it will prompt you that something has changed and ask if you'd like to generate a new snapshot - make sure you do, or it won't work.
    That's it. You'll find that once the publication has bulk copied over to the subscriptions successfully, there are non-clustered indexes on the tables. Pretty simple!
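
    As an optional check (not from the original post): the "Copy nonclustered indexes" article property maps to bit 0x40 of the article's schema_option, which you can inspect with sp_helparticle on the publication database. The database, publication and article names below are hypothetical.

    USE MyPublishedDb;
    EXEC sp_helparticle
         @publication = N'MyPublication',
         @article     = N'MyTable';
    -- In the result set, the schema_option value should have the 0x40 bit set if
    -- nonclustered indexes are to be scripted into the snapshot.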

  • Transactional Replication Not replicating non-clustered Indexes

    Hi
    I have created a transactional replication. I am sure that I have configured it to replicate non-clustered indexes,
    but it does not do so.
    I tried several times to reinitialize the subscription, but still no luck.
    Regards

    It is True for sure.
    But the problem is solved. I had to wait until the initialization process completed.
    But I am sure this problem has happened before. Sometimes it works fine and sometimes not.
    Hi ArashMasroor,
    According to your description, it seems that you are encountering an issue where the non-clustered indexes property is sometimes reset to "False".
    To work around this issue, you can use a post-replication SQL script to create the indexes. For whatever articles you're publishing, open the Indexes node of each article in Object Explorer, right-click an index and hover over Script Index as,
    then CREATE To, then click New Query Editor Window.
    A new query window will pop up with the resulting index script. Work your way through all the indexes on all the articles of the publication, copying and pasting just the CREATE INDEX statement of each script, and pull them all together into one query window.
    Once you're done, find a safe folder somewhere on your hard drive and save the query as a .sql file with a sensible name. Then you can execute this script after applying the snapshot. For more details, please review this
    blog.
    There is also a similar thread for your reference.
    https://social.msdn.microsoft.com/Forums/sqlserver/en-US/b0f3870d-1a65-4384-a17b-96825ec5f098/transactional-replication-nonclustered-indexes-not-copying?forum=sqlreplication
    Thanks,
    Lydia Zhang
    TechNet Community Support

  • Non Unique Indexes on Dimension Table

    Hello ,
    I have a Material dimension that has Product Main Group, Brand, and SPC Code broken out from the Product Hierarchy.
    As per the business requirement, we don't want to use the Product Hierarchy; it needs to be split into these 3 pieces.
    Since the cube's dimensions have already reached 13, we cannot add more dimensions to keep these 3 fields in separate dimensions.
    I heard from a Basis guy that we can have an index on the three fields in the same dimension table, and I have also read about some negative impact on aggregate definition.
    I am not sure which of these is true.
    Which option has the worse impact, and which is more useful?
    Could some experts please throw some light on this?
    Cheers
    Martin

    Martin -
    Not sure what you mean by "read some negative impact on aggregate definition".
    But as far as adding additional indices on other columns of a dimension table, that certainly is doable. I have never done this as part of an actual intentional design, but it seems like a valid approach if you are limited by dimensions.  I'm assuming when you say you already have 13, that the 13 does not include the three standard dimensions for time, request, (drawing a blank, is it currency), so that you really have 16 dimensions, 13 of which are user defined.
    We have added dimension table indices in our shop when we have found large dimension tables that, either from poor initial design or from changes to the data and/or queries, have resulted in full table scans against large dimension tables.  In some cases, the query cost of the full scan of the dimension table was more than the cost of the access of the fact table itself.
    These indices must be added by your DBA, as they cannot be added through the Admin Workbench.  You should also probably keep a record of any of these indices you create, because if for some reason you delete the tables and reactivate the cube, you'll probably lose them.
    Pizzaman

  • Space occupied by clustered index Vs non-clustered index

    I am trying to understand indexes. Does a clustered index occupy more space than a non-clustered index because it carries the information for all of the other columns as well? Could you guys please help me understand this? Thanks in advance.
    svk

    Hi czarvk,
    A clustered index in SQL Server takes up more space than a non-clustered index.
    A clustered index determines how the records are physically stored in the table, putting them in order by the index key; all of the table's data is sorted on the values of the index.
    A non-clustered index is a separate object in the table, containing only a subset of columns and a row locator pointing to the table's rows or to the clustered index's key.
    So a clustered index in SQL Server takes up more space than a non-clustered index.
    If you have any question, please feel free to let me know.
    Regards,
    Donghui Li
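
    If you want to see the difference on a concrete table, here is a rough sketch (the table name is hypothetical) that sums the pages used by each index:

    SELECT  i.index_id,
            i.name                       AS index_name,
            i.type_desc,                                   -- HEAP, CLUSTERED, NONCLUSTERED
            SUM(ps.used_page_count) * 8  AS used_kb        -- pages are 8 KB each
    FROM    sys.dm_db_partition_stats AS ps
    JOIN    sys.indexes AS i
            ON i.object_id = ps.object_id AND i.index_id = ps.index_id
    WHERE   ps.object_id = OBJECT_ID('dbo.MyTable')
    GROUP BY i.index_id, i.name, i.type_desc;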

  • Unique index vs non-unique index

    Hi Gurus,
    I'm getting lots of "TABLE ACCESS FULL" in some queries for columns which have non-unique indexes. So my question is: does the optimizer not pick up non-unique indexes, and only pick up unique indexes on those columns?
    Thanks
    Amitava.

    amitavachatterjee1975 wrote:
    Hi Gurus,
    I'm getting lots of "TABLE ACCESS FULL" for lots of columns which have non-unique indexes is some queries. So my question is does optimizer does not pick up non-unique index but only pickes up unique indexes for those columns.
    Thanks
    Amitava.
    WHY MY INDEX IS NOT BEING USED
    http://communities.bmc.com/communities/docs/DOC-10031
    http://searchoracle.techtarget.com/tip/Why-isn-t-my-index-getting-used
    http://www.orafaq.com/tuningguide/not%20using%20index.html
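
    Two quick checks in the spirit of those links (object names are illustrative): refresh the statistics, then look at the plan and at the selectivity of the predicate.

    EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => USER, tabname => 'MY_TABLE', cascade => TRUE);

    EXPLAIN PLAN FOR
      SELECT * FROM my_table WHERE status = 'OPEN';
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

    If the predicate matches a large fraction of the rows, a full table scan is usually cheaper than using the non-unique index, and the optimizer is right to ignore it.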

  • Help with setting up a Data Source: can't be created with non-existent Pool

    I am wanting to use an Oracle 9i with WebLogic 7
    I have the following in my config.xml:
    <JDBCConnectionPool DriverName="oracle.jdbc.driver.OracleDriver"
    Name="Thin.Pool" Password="{3DES}C3xDZIWIABA="
    Properties="user=SYSTEM" TestTableName="OID"
    URL="jdbc:oracle:thin:@localhost:1521:DB_SID"/>
    <JDBCDataSource JNDIName="DB_DS" Name="DB_DS" PoolName="Thin.Pool"/>
    The console seems happy, no error messages, but in the log I get:
    ####<Mar 31, 2003 6:33:45 PM MST> <Info> <HTTP> <blue> <GameServe>
    <ExecuteThread: '1' for queue: '__weblogic_admin_html_queue'> <kernel
    identity> <> <101047>
    <[ServletContext(id=4110316,name=console,context-path=/console)]
    FileServlet: Using standard I/O>
    ####<Mar 31, 2003 6:35:37 PM MST> <Info> <JDBC> <blue> <GameServe>
    <ExecuteThread: '1' for queue: '__weblogic_admin_html_queue'> <kernel
    identity> <> <001082> <Creating Data Source named DB_DS for pool
    Thin.Pool>
    ####<Mar 31, 2003 6:35:37 PM MST> <Error> <JDBC> <blue> <GameServe>
    <ExecuteThread: '1' for queue: '__weblogic_admin_html_queue'> <kernel
    identity> <> <001059> <Error during Data Source creation:
    weblogic.common.ResourceException: DataSource(DB_DS) can't be created
    with non-existent Pool (connection or multi) (Thin.Pool)
         at weblogic.jdbc.common.internal.JdbcInfo.validateConnectionPool(JdbcInfo.java:127)
         at weblogic.jdbc.common.internal.JdbcInfo.startDataSource(JdbcInfo.java:260)
         at weblogic.jdbc.common.internal.JDBCService.addDeploymentx(JDBCService.java:293)
         at weblogic.jdbc.common.internal.JDBCService.addDeployment(JDBCService.java:270)
         at weblogic.management.mbeans.custom.DeploymentTarget.addDeployment(DeploymentTarget.java:375)
         at weblogic.management.mbeans.custom.DeploymentTarget.addDeployment(DeploymentTarget.java:154)
         at java.lang.reflect.Method.invoke(Native Method)
         at weblogic.management.internal.DynamicMBeanImpl.invokeLocally(DynamicMBeanImpl.java:732)
         at weblogic.management.internal.DynamicMBeanImpl.invoke(DynamicMBeanImpl.java:714)
         at weblogic.management.internal.ConfigurationMBeanImpl.invoke(ConfigurationMBeanImpl.java:417)
         at com.sun.management.jmx.MBeanServerImpl.invoke(MBeanServerImpl.java:1557)
         at com.sun.management.jmx.MBeanServerImpl.invoke(MBeanServerImpl.java:1525)
         at weblogic.management.internal.RemoteMBeanServerImpl.invoke(RemoteMBeanServerImpl.java:952)
         at weblogic.management.internal.ConfigurationMBeanImpl.updateConfigMBeans(ConfigurationMBeanImpl.java:578)
         at weblogic.management.internal.ConfigurationMBeanImpl.invoke(ConfigurationMBeanImpl.java:419)
         at com.sun.management.jmx.MBeanServerImpl.invoke(MBeanServerImpl.java:1557)
         at com.sun.management.jmx.MBeanServerImpl.invoke(MBeanServerImpl.java:1525)
         at weblogic.management.internal.RemoteMBeanServerImpl.invoke(RemoteMBeanServerImpl.java:952)
         at weblogic.management.internal.MBeanProxy.invoke(MBeanProxy.java:470)
         at weblogic.management.internal.MBeanProxy.invoke(MBeanProxy.java:198)
         at $Proxy16.addDeployment(Unknown Source)
         at weblogic.management.internal.DynamicMBeanImpl.unprotectedUpdateDeployments(DynamicMBeanImpl.java:1784)
         at weblogic.management.internal.DynamicMBeanImpl.access$0(DynamicMBeanImpl.java:1737)
         at weblogic.management.internal.DynamicMBeanImpl$1.run(DynamicMBeanImpl.java:1715)
         at weblogic.security.service.SecurityServiceManager.runAs(SecurityServiceManager.java:780)
         at weblogic.management.internal.DynamicMBeanImpl.updateDeployments(DynamicMBeanImpl.java:1711)
         at weblogic.management.internal.DynamicMBeanImpl.setAttribute(DynamicMBeanImpl.java:1035)
         at weblogic.management.internal.ConfigurationMBeanImpl.setAttribute(ConfigurationMBeanImpl.java:353)
         at com.sun.management.jmx.MBeanServerImpl.setAttribute(MBeanServerImpl.java:1358)
         at com.sun.management.jmx.MBeanServerImpl.setAttribute(MBeanServerImpl.java:1333)
         at weblogic.management.internal.RemoteMBeanServerImpl.setAttribute(RemoteMBeanServerImpl.java:898)
         at weblogic.management.internal.MBeanProxy.setAttribute(MBeanProxy.java:324)
         at weblogic.management.internal.MBeanProxy.invoke(MBeanProxy.java:193)
         at $Proxy13.setTargets(Unknown Source)
         at java.lang.reflect.Method.invoke(Native Method)
         at weblogic.management.console.info.FilteredMBeanAttribute.doSet(FilteredMBeanAttribute.java:92)
         at weblogic.management.console.actions.mbean.DoEditMBeanAction.perform(DoEditMBeanAction.java:145)
         at weblogic.management.console.actions.internal.ActionServlet.doAction(ActionServlet.java:171)
         at weblogic.management.console.actions.internal.ActionServlet.doPost(ActionServlet.java:85)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:760)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:853)
         at weblogic.servlet.internal.ServletStubImpl$ServletInvocationAction.run(ServletStubImpl.java:1058)
         at weblogic.servlet.internal.ServletStubImpl.invokeServlet(ServletStubImpl.java:401)
         at weblogic.servlet.internal.ServletStubImpl.invokeServlet(ServletStubImpl.java:306)
         at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:5445)
         at weblogic.security.service.SecurityServiceManager.runAs(SecurityServiceManager.java:780)
         at weblogic.servlet.internal.WebAppServletContext.invokeServlet(WebAppServletContext.java:3105)
         at weblogic.servlet.internal.ServletRequestImpl.execute(ServletRequestImpl.java:2588)
         at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:213)
         at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:189)
    Why does it say:
    can't be created with non-existent Pool
    Thanks,

    Add "Targets" attribute to the connection pool. You may
    get an idea how it looks like by searching config.xml
    for "targets". If target servers are not set, the pool won't be
    deployed and can not be used to a create datasource.
    Regards,
    Slava Imeshev
    "BBaker" <[email protected]> wrote in message
    news:[email protected]...
    I am wanting to use an Oracle 9i with WebLogic 7
    I have the following in my config.xml:
    <JDBCConnectionPool DriverName="oracle.jdbc.driver.OracleDriver"
    Name="Thin.Pool" Password="{3DES}C3xDZIWIABA="
    Properties="user=SYSTEM" TestTableName="OID"
    URL="jdbc:oracle:thin:@localhost:1521:DB_SID"/>
    <JDBCDataSource JNDIName="DB_DS" Name="DB_DS" PoolName="Thin.Pool"/>
    The console seems happy, no error mesages but in the log I get:
    ####<Mar 31, 2003 6:33:45 PM MST> <Info> <HTTP> <blue> <GameServe>
    <ExecuteThread: '1' for queue: '__weblogic_admin_html_queue'> <kernel
    identity> <> <101047>
    <[ServletContext(id=4110316,name=console,context-path=/console)]
    FileServlet: Using standard I/O>
    ####<Mar 31, 2003 6:35:37 PM MST> <Info> <JDBC> <blue> <GameServe>
    <ExecuteThread: '1' for queue: '__weblogic_admin_html_queue'> <kernel
    identity> <> <001082> <Creating Data Source named DB_DS for pool
    Thin.Pool>
    ####<Mar 31, 2003 6:35:37 PM MST> <Error> <JDBC> <blue> <GameServe>
    <ExecuteThread: '1' for queue: '__weblogic_admin_html_queue'> <kernel
    identity> <> <001059> <Error during Data Source creation:
    weblogic.common.ResourceException: DataSource(DB_DS) can't be created
    with non-existent Pool (connection or multi) (Thin.Pool)
    atweblogic.jdbc.common.internal.JdbcInfo.validateConnectionPool(JdbcInfo.java:
    127)
    atweblogic.jdbc.common.internal.JdbcInfo.startDataSource(JdbcInfo.java:260)
    atweblogic.jdbc.common.internal.JDBCService.addDeploymentx(JDBCService.java:29
    3)
    atweblogic.jdbc.common.internal.JDBCService.addDeployment(JDBCService.java:270
    atweblogic.management.mbeans.custom.DeploymentTarget.addDeployment(DeploymentT
    arget.java:375)
    atweblogic.management.mbeans.custom.DeploymentTarget.addDeployment(DeploymentT
    arget.java:154)
    at java.lang.reflect.Method.invoke(Native Method)
    atweblogic.management.internal.DynamicMBeanImpl.invokeLocally(DynamicMBeanImpl
    .java:732)
    atweblogic.management.internal.DynamicMBeanImpl.invoke(DynamicMBeanImpl.java:7
    14)
    atweblogic.management.internal.ConfigurationMBeanImpl.invoke(ConfigurationMBea
    nImpl.java:417)
    atcom.sun.management.jmx.MBeanServerImpl.invoke(MBeanServerImpl.java:1557)
    atcom.sun.management.jmx.MBeanServerImpl.invoke(MBeanServerImpl.java:1525)
    atweblogic.management.internal.RemoteMBeanServerImpl.invoke(RemoteMBeanServerI
    mpl.java:952)
    atweblogic.management.internal.ConfigurationMBeanImpl.updateConfigMBeans(Confi
    gurationMBeanImpl.java:578)
    atweblogic.management.internal.ConfigurationMBeanImpl.invoke(ConfigurationMBea
    nImpl.java:419)
    atcom.sun.management.jmx.MBeanServerImpl.invoke(MBeanServerImpl.java:1557)
    atcom.sun.management.jmx.MBeanServerImpl.invoke(MBeanServerImpl.java:1525)
    atweblogic.management.internal.RemoteMBeanServerImpl.invoke(RemoteMBeanServerI
    mpl.java:952)
    at weblogic.management.internal.MBeanProxy.invoke(MBeanProxy.java:470)
    at weblogic.management.internal.MBeanProxy.invoke(MBeanProxy.java:198)
    at $Proxy16.addDeployment(Unknown Source)
    atweblogic.management.internal.DynamicMBeanImpl.unprotectedUpdateDeployments(D
    ynamicMBeanImpl.java:1784)
    atweblogic.management.internal.DynamicMBeanImpl.access$0(DynamicMBeanImpl.java
    :1737)
    atweblogic.management.internal.DynamicMBeanImpl$1.run(DynamicMBeanImpl.java:17
    15)
    atweblogic.security.service.SecurityServiceManager.runAs(SecurityServiceManage
    r.java:780)
    atweblogic.management.internal.DynamicMBeanImpl.updateDeployments(DynamicMBean
    Impl.java:1711)
    atweblogic.management.internal.DynamicMBeanImpl.setAttribute(DynamicMBeanImpl.
    java:1035)
    atweblogic.management.internal.ConfigurationMBeanImpl.setAttribute(Configurati
    onMBeanImpl.java:353)
    atcom.sun.management.jmx.MBeanServerImpl.setAttribute(MBeanServerImpl.java:135
    8)
    atcom.sun.management.jmx.MBeanServerImpl.setAttribute(MBeanServerImpl.java:133
    3)
    atweblogic.management.internal.RemoteMBeanServerImpl.setAttribute(RemoteMBeanS
    erverImpl.java:898)
    atweblogic.management.internal.MBeanProxy.setAttribute(MBeanProxy.java:324)
    at weblogic.management.internal.MBeanProxy.invoke(MBeanProxy.java:193)
    at $Proxy13.setTargets(Unknown Source)
    at java.lang.reflect.Method.invoke(Native Method)
    atweblogic.management.console.info.FilteredMBeanAttribute.doSet(FilteredMBeanA
    ttribute.java:92)
    atweblogic.management.console.actions.mbean.DoEditMBeanAction.perform(DoEditMB
    eanAction.java:145)
    atweblogic.management.console.actions.internal.ActionServlet.doAction(ActionSe
    rvlet.java:171)
    atweblogic.management.console.actions.internal.ActionServlet.doPost(ActionServ
    let.java:85)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:760)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:853)
    atweblogic.servlet.internal.ServletStubImpl$ServletInvocationAction.run(Servle
    tStubImpl.java:1058)
    atweblogic.servlet.internal.ServletStubImpl.invokeServlet(ServletStubImpl.java
    :401)
    atweblogic.servlet.internal.ServletStubImpl.invokeServlet(ServletStubImpl.java
    :306)
    atweblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(W
    ebAppServletContext.java:5445)
    atweblogic.security.service.SecurityServiceManager.runAs(SecurityServiceManage
    r.java:780)
    atweblogic.servlet.internal.WebAppServletContext.invokeServlet(WebAppServletCo
    ntext.java:3105)
    atweblogic.servlet.internal.ServletRequestImpl.execute(ServletRequestImpl.java
    :2588)
    at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:213)
    at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:189)
    Why does it say:
    can't be created with non-existent Pool
    Thanks,
