Index-building strategy for multi-terabyte database

We are running Oracle 11g.
We have about 17 million XML files to load into a brand new database. They are to be indexed with a context index. After the 17 million records are imported, there will be future imports of approximately 30,000 records every two weeks, loaded in batch.
1. What is the best way to load the 17 million? Should they be loaded in chunks, say 1 million at a time, and then indexed? Or should we load them all, then index them all at once? (Based on preliminary tests, the initial load will take 9 days and the indexing a little under 7 days.)
I vote for doing it in chunks, since the developers will want access to the system periodically during the data load and I want to be able to start and stop the process without causing a catastrophe.
But I have read that this can introduce fragmentation into the index. Is this really something to worry about, given that my chunks are so large? Would there be any real benefit from doing the entire index operation in one go?
2. After each of the bi-weekly 30,000-record imports, we will run CTX_DDL.SYNC_INDEX, which I estimate will take about 20 minutes. Will this cause fragmentation over time? I have read that it is advisable to perform a full index rebuild occasionally, but obviously we won't want to do that; it would take days and days. I guess the alternative is to run OPTIMIZE_INDEX:
http://download.oracle.com/docs/cd/B28359_01/text.111/b28304/cddlpkg.htm#i998200
...any advice on how often to run this, considering the hugeness of my dataset? Fortunately the data will be read-only (apart from the bi-weekly load), so I'm thinking that there won't be very much fragmentation occurring at all. Am I correct?

There are two types of fragmentation: one from index creation and one from additional loads. The first can be minimised with some tweaking; the second should not be a big issue because there are only 26 additional loads in a year.
1)
You will not have any issues loading the XML and indexing it with Oracle Text as long as you use sensible partitioning. Index the whole dataset in one go; the PARALLEL clause works very well during indexing. Have a look at the initial memory allocation for every indexing thread that is created - I found the default values in 10g and 9i far too small. When you use 100 partitions, a LOCAL index creates 100 Oracle Text indexes, which is nothing to be scared of. The more partitions you use, the less index memory each thread requires, but more partitions add to fragmentation.
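As a minimal sketch of that combination (table, column and memory figure are illustrative; datastore/stoplist preferences are omitted):

CREATE INDEX docs_ctx_ix ON docs (doc_xml)
  INDEXTYPE IS CTXSYS.CONTEXT
  LOCAL
  PARAMETERS ('memory 500M')
  PARALLEL 6;

LOCAL gives one index partition per table partition, so each parallel slave works on its own partition with its own slice of index memory.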
You can reduce the initial indexing time and fragmentation in various ways:
Use parallel indexing and partitioning: I used 6 threads over 25 partitions, which reduced the time to index 8 million XML documents (stored as CLOBs using a USER_DATASTORE) from 2 days to less than 12 hours.
Tweak the indexing memory to your requirements; indexing is more memory-bound than CPU-bound, and the more memory you give it, the less fragmentation occurs and the earlier it finishes.
Use a representative list of stop words. To find candidates I usually query the index's DR$...$I token table directly (this query is from memory):
SELECT token_text, SUM(token_count) AS total
FROM dr$<your_index>$i   -- the $I token table behind the context index
WHERE UPPER(token_text) = token_text   -- filters out mixed-case tokens such as XML tags
GROUP BY token_text
ORDER BY total DESC;
Have a look at the top entries and see whether you can include them in the stop word list; very frequent tokens will cause trouble later on when querying the index.
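If you keep the stop words in a custom stoplist, the CTX_DDL calls look roughly like this (names and words are illustrative; the stoplist is attached at index creation with PARAMETERS ('stoplist my_stoplist')):

BEGIN
  ctx_ddl.create_stoplist('my_stoplist', 'BASIC_STOPLIST');
  ctx_ddl.add_stopword('my_stoplist', 'the');
  ctx_ddl.add_stopword('my_stoplist', 'said');
END;
/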
2) We found an index rebuild (drop/create) useful for the following scenarios, though this is more a feeling than solid science:
1 million XML records, daily loads of around 1 thousand records with many updates => quarterly rebuilds
8 million XML records, bi-weekly loads of around 20 thousand records with very few updates => once a year or so.
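For the bi-weekly maintenance itself, a sketch (index name and figures are illustrative): SYNC_INDEX after each load, plus OPTIMIZE_INDEX in FULL mode with a time cap, which can be run in bounded slices instead of ever needing a multi-day rebuild:

BEGIN
  ctx_ddl.sync_index('my_ctx_ix', memory => '200M');
END;
/

BEGIN
  -- maxtime caps the run at 60 minutes; a later run picks up where this one stopped
  ctx_ddl.optimize_index('my_ctx_ix', optlevel => 'FULL', maxtime => 60);
END;
/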

Similar Messages

  • Mkfs: bad value for nbpi: must be at least 1048576 for multi-terabyte, nbpi

    Hi, guys!
    1. I have a big FS (8 TB) on UFS which contains a lot of small files, ~64B-1MB.
    -bash-3.00# df -h /mnt
    Filesystem Size Used Avail Use% Mounted on
    /dev/dsk/c10t600000E00D000000000201A400020000d0s0
    8.0T 4.3T 3.7T 54% /mnt
    2. But today I noticed these errors in dmesg: "ufs: [ID 682040 kern.notice] NOTICE: /mnt: out of inodes"
    -bash-3.00# df -i /mnt
    Filesystem Inodes IUsed IFree IUse% Mounted on
    /dev/dsk/c10t600000E00D000000000201A400020000d0s0
    8753024 8753020 4 100% /mnt
    3. So I decided to remake the file system with new parameters:
    -bash-3.00# mkfs -m /dev/rdsk/c10t600000E00D000000000201A400020000d0s0
    mkfs -F ufs -o nsect=128,ntrack=48,bsize=8192,fragsize=8192,cgsize=143,free=1,rps=1,nbpi=997778,opt=t,apc=0,gap=0,nrpos=1,maxcontig=128,mtb=y /dev/rdsk/c10t600000E00D000000000201A400020000d0s0 17165172656
    -bash-3.00#
    -bash-3.00# mkfs -F ufs -o nsect=128,ntrack=48,bsize=8192,fragsize=1024,cgsize=143,free=1,rps=1,nbpi=512,opt=t,apc=0,gap=0,nrpos=1,maxcontig=128,mtb=f /dev/rdsk/c10t600000E00D000000000201A400020000d0s0 17165172656
    4. I got some warnings about the inode threshold:
    -bash-3.00# mkfs -F ufs -o nsect=128,ntrack=48,bsize=8192,fragsize=1024,cgsize=143,free=1,rps=1,nbpi=512,opt=t,apc=0,gap=0,nrpos=1,maxcontig=128,mtb=n /dev/rdsk/c10t600000E00D000000000201A400020000d0s0 17165172656
    mkfs: bad value for nbpi: must be at least 1048576 for multi-terabyte, nbpi reset to default 1048576
    Warning: 2128 sector(s) in last cylinder unallocated
    /dev/rdsk/c10t600000E00D000000000201A400020000d0s0: 17165172656 sectors in 2793811 cylinders of 48 tracks, 128 sectors
    8381432.0MB in 19538 cyl groups (143 c/g, 429.00MB/g, 448 i/g)
    super-block backups (for fsck -F ufs -o b=#) at:
    32, 878752, 1757472, 2636192, 3514912, 4393632, 5272352, 6151072, 7029792,
    7908512,
    Initializing cylinder groups:
    super-block backups for last 10 cylinder groups at:
    17157145632, 17158024352, 17158903072, 17159781792, 17160660512, 17161539232,
    17162417952, 17163296672, 17164175392, 17165054112
    5. And my inode count didn't change:
    -bash-3.00# df -i /mnt
    Filesystem Inodes IUsed IFree IUse% Mounted on
    /dev/dsk/c10t600000E00D000000000201A400020000d0s0
    8753024 4 8753020 1% /mnt
    I found http://wesunsolve.net/bugid.php/id/6595253 which says this is a bug in mkfs with no workaround. Is ZFS what I need now?

    Well, to fix the bug you referred to you can apply patch 141444-01 or 141445-01.
    However, that bug is only about a misleading error message from mkfs; fixing it will not solve your problem as such.
    It seems the minimum value for nbpi on a multi-terabyte filesystem is 1048576, hence you won't be able to create a filesystem with more inodes.
    The things to try would be to either create two UFS filesystems, or go with ZFS, which is the future anyway ;-)
    .7/M.

  • What's your strategy for in-memory databases?

    The recent Lattanze Center research study identified trends that you cannot afford to ignore. Dr. Elliot King will present the latest research during an informative webinar entitled: "In-Memory Database Adoption is Gaining Momentum. What's your strategy?"
    For more information: http://bit.ly/kDpsfH 
    When: June 2 at 10:30 AM PDT
    Duration: 45 minutes
    Cost: Free
    Location: At your computer
    Space is limited, so register now @ http://bit.ly/ilgFot
    Besides learning about the latest trends within In-Memory Database technologies, you'll also receive a free copy of a new White Paper, "Making the Business Case for In-Memory Databases", and the Executive Summary of the recent research from the Lattanze Center for Business Excellence.
    Thursday, June 2, 10:30 AM PDT. Register now @ http://bit.ly/ilgFot  Space is limited!

    Well, it seems like most of you simply read the
    various texts and try the vendors' examples. I'm
    surprised that no one mentioned ever having bought a
    prototype application from the onset. "bought"? What's that mean? You don't buy prototypes. You download evaluation versions, maybe.
    I try to find sample code and tweak it to see the effects. Otherwise, I start writing small sample code and build on that.
    I consider myself a reasonably competent core Java
    programmer, but I had serious difficulty configuring
    and merging its related technologies. There were so
    many disjointed pieces of instructional information
    that the additional research time really hurt our
    budget severely. Not an uncommon thing, I'm sure. There's a lot of stuff. But don't bother learning all of it. Not in detail, at least. It's a good idea to familiarize yourself with the names of packages/libraries and what they do. But only really learn what you need to learn for what you need to do. The next project you will probably need other things, so you learn them then.
    bsampieri,
    I've setup Tomcat and tried the examples--in fact, I
    normally follow tutorials for all products I hope to
    use. Problem is, the examples and tutorials never
    address my specific needs. So, I usually inch toward
    my goal by spending weeks or months in forums to
    continue where the tutorials leave off. Anything complex is not going to be there... the trick is to identify pieces that you can pull out to build more complex apps. And the fact that JSP/servlets are compounded by all the HTML/CSS/JS and the HTTP protocol... I don't want to say limitations, exactly... Well, it just makes things more complex and harder to know what you need.
    Perhaps you guys are much faster and smarter than
    I... or you have a much bigger budget :)
    Probably not... on either account.

  • Build gui for existing oracle database tables with webdynpro java?

    hi
    I want to build a GUI to maintain existing Oracle tables.
    So far we have used Oracle Forms to do so.
    Is there a good approach for Web Dynpro Java, or do you recommend other SAP tools?
    Can we generate the GUI with a wizard, based on the fields in the table?
    Do we have to generate SQL statements, or type them in manually?
    regards
    joerg

    Hi Joerg,
    generally that is possible, but you'll have to implement the data access yourself, by means of EJB or another Java persistence framework such as JDO, SQLMaps, Hibernate, whatever...
    Web Dynpro allows you to build a GUI based upon a model - in this case this could be some POJOs (DTOs) representing your database tables, which are communicated to the GUI by your data access layer. Consider a model as a simple Java bean representing database data.
    This approach would require you to build a data access layer which incorporates manually written SQL statements, so you'll need expert database and Java knowledge.
    There might be other approaches, this is just to demonstrate one working possibility.
    regards,
    Christian

  • DAC Execution plan build error - for multi-sources

    We are implementing BI Apps 7.9.6.1 Financial & HR Analytics. We have multiple sources (PeopleSoft (Oracle) 8.9 & 9.0, DB2, flat file, ...) and built 4 containers, one for each source. We can build the execution plan successfully, but we get the error below when trying to reset the sources. The message is very sporadic. As a workaround we run the build and reset the sources multiple times; this works when we have 3 containers in the execution plan, but not with 4 containers. Has anybody come across this issue?
    Thank you in advance for your help.
    DAC ver 10.1.3.4.1 patch .20100105.0812 Build date: Jan 5 2010
    ANOMALY INFO::: Failure
    MESSAGE:::com.siebel.analytics.etl.execution.NoSuchDatabaseException: No physical database mapping for the logical source was found for :DBConnection_OLTP_ELM as used in TASK_GROUP_Extract_CodeDimension(null->null)
    EXCEPTION CLASS::: com.siebel.analytics.etl.execution.ExecutionPlanInitializationException
    com.siebel.analytics.etl.execution.ExecutionPlanDiscoverer.<init>(ExecutionPlanDiscoverer.java:62)
    com.siebel.analytics.etl.client.view.table.EtlDefnTable.doOperation(EtlDefnTable.java:189)
    com.siebel.etl.gui.view.dialogs.WaitDialog.doOperation(WaitDialog.java:53)
    com.siebel.etl.gui.view.dialogs.WaitDialog$WorkerThread.run(WaitDialog.java:85)
    ::: CAUSE :::
    MESSAGE:::No physical database mapping for the logical source was found for :DBConnection_OLTP_ELM as used in TASK_GROUP_Extract_CodeDimension(null->null)
    EXCEPTION CLASS::: com.siebel.analytics.etl.execution.NoSuchDatabaseException
    com.siebel.analytics.etl.execution.ExecutionParameterHelper.substituteNodeTables(ExecutionParameterHelper.java:176)
    com.siebel.analytics.etl.execution.ExecutionPlanDesigner.retrieveExecutionPlanTasks(ExecutionPlanDesigner.java:420)
    com.siebel.analytics.etl.execution.ExecutionPlanDiscoverer.<init>(ExecutionPlanDiscoverer.java:60)
    com.siebel.analytics.etl.client.view.table.EtlDefnTable.doOperation(EtlDefnTable.java:189)
    com.siebel.etl.gui.view.dialogs.WaitDialog.doOperation(WaitDialog.java:53)
    com.siebel.etl.gui.view.dialogs.WaitDialog$WorkerThread.run(WaitDialog.java:85)

    Hi, In reference to this message
    MESSAGE:::No physical database mapping for the logical source was found for :DBConnection_OLTP_ELM as used in TASK_GROUP_Extract_CodeDimension(null->null)
    1. I notice that you are using the custom DAC logical name DBConnection_OLTP_ELM.
    2. When you Generate Parameters before building the execution plan, can you verify in which Source System container you are using the logical name DBConnection_OLTP_ELM as a source, and what value is assigned to it?
    3. Are you building the execution plan with Subject Areas from all 4 containers? Did you Generate Parameters before building the execution plan?
    4. Also verify, at the DAC task level for the 4th container, what the Primary Source value is for all the tasks (TASK_GROUP_Extract_CodeDimension).

  • Selective XML Index feature is not supported for the current database version , SQL Server Extended Events , Optimizing Reading from XML column datatype

    Team, thanks for looking into this.
    As a last resort to optimize my stored procedure (below) I wanted to create a Selective XML Index (normal XML indexes don't seem to improve performance as needed), but I keep getting this error within my stored proc: "Selective XML Index feature is not supported for the current database version." However,
    EXECUTE sys.sp_db_selective_xml_index returns 1, stating that Selective XML Indexes are enabled on my current database.
    Is there ANY alternative way I can optimize the stored proc below?
    Thanks in advance for your response(s) !
    /****** Object: StoredProcedure [dbo].[MN_Process_DDLSchema_Changes] Script Date: 3/11/2015 3:10:42 PM ******/
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    -- EXEC [dbo].[MN_Process_DDLSchema_Changes]
    ALTER PROCEDURE [dbo].[MN_Process_DDLSchema_Changes]
    AS
    BEGIN
    SET NOCOUNT ON -- Doesn't have an impact (maybe this won't on SQL Server Extended Events sessions being created on server(s), DBs)
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
    select getdate() as getdate_0
    DECLARE @XML XML , @Prev_Insertion_time DATETIME
    -- Staging Previous Load time for filtering purpose ( Performance optimize while on insert )
    SET @Prev_Insertion_time = (SELECT MAX(EE_Time_Stamp) FROM dbo.MN_DDLSchema_Changes_log ) -- Perf Optimize
    -- PRINT '1'
    CREATE TABLE #Temp
    (
    EventName VARCHAR(100),
    Time_Stamp_EE DATETIME,
    ObjectName VARCHAR(100),
    ObjectType VARCHAR(100),
    DbName VARCHAR(100),
    ddl_Phase VARCHAR(50),
    ClientAppName VARCHAR(2000),
    ClientHostName VARCHAR(100),
    server_instance_name VARCHAR(100),
    ServerPrincipalName VARCHAR(100),
    nt_username varchar(100),
    SqlText NVARCHAR(MAX)
    )
    CREATE TABLE #XML_Hold
    (
    ID INT NOT NULL IDENTITY(1,1) PRIMARY KEY , -- PK necessity for Indexing on XML Col
    BufferXml XML
    )
    select getdate() as getdate_01
    INSERT INTO #XML_Hold (BufferXml)
    SELECT
    CAST(target_data AS XML) AS BufferXml -- Buffer Storage from SQL Extended Event(s) , Looks like there is a limitation with xml size ?? Need to re-search .
    FROM sys.dm_xe_session_targets xet
    INNER JOIN sys.dm_xe_sessions xes
    ON xes.address = xet.event_session_address
    WHERE xes.name = 'Capture DDL Schema Changes' --Ryelugu : 03/05/2015 Session created within SQL Server Extended Events
    --RETURN
    --SELECT * FROM #XML_Hold
    select getdate() as getdate_1
    -- 03/10/2015 RYelugu : Error while creating XML Index : Selective XML Index feature is not supported for the current database version
    CREATE SELECTIVE XML INDEX SXI_TimeStamp ON #XML_Hold(BufferXml)
    FOR
    (
    PathTimeStamp = '/RingBufferTarget/event/timestamp' AS XQUERY 'node()'
    )
    --RETURN
    --CREATE PRIMARY XML INDEX [IX_XML_Hold] ON #XML_Hold(BufferXml) -- Ryelugu 03/09/2015 - Primary Index
    --SELECT GETDATE() AS GETDATE_2
    -- RYelugu 03/10/2015 -Creating secondary XML index doesnt make significant improvement at Query Optimizer , Instead creation takes more time , Only primary should be good here
    --CREATE XML INDEX [IX_XML_Hold_values] ON #XML_Hold(BufferXml) -- Ryelugu 03/09/2015 - Primary Index , --There should exists a Primary for a secondary creation
    --USING XML INDEX [IX_XML_Hold]
    ---- FOR VALUE
    -- --FOR PROPERTY
    -- FOR PATH
    --SELECT GETDATE() AS GETDATE_3
    --PRINT '2'
    -- RETURN
    SELECT GETDATE() GETDATE_3
    INSERT INTO #Temp
    (
    EventName ,
    Time_Stamp_EE ,
    ObjectName ,
    ObjectType,
    DbName ,
    ddl_Phase ,
    ClientAppName ,
    ClientHostName,
    server_instance_name,
    nt_username,
    ServerPrincipalName ,
    SqlText
    )
    SELECT
    p.q.value('@name[1]','varchar(100)') AS eventname,
    p.q.value('@timestamp[1]','datetime') AS timestampvalue,
    p.q.value('(./data[@name="object_name"]/value)[1]','varchar(100)') AS objectname,
    p.q.value('(./data[@name="object_type"]/text)[1]','varchar(100)') AS ObjectType,
    p.q.value('(./action[@name="database_name"]/value)[1]','varchar(100)') AS databasename,
    p.q.value('(./data[@name="ddl_phase"]/text)[1]','varchar(100)') AS ddl_phase,
    p.q.value('(./action[@name="client_app_name"]/value)[1]','varchar(100)') AS clientappname,
    p.q.value('(./action[@name="client_hostname"]/value)[1]','varchar(100)') AS clienthostname,
    p.q.value('(./action[@name="server_instance_name"]/value)[1]','varchar(100)') AS server_instance_name,
    p.q.value('(./action[@name="nt_username"]/value)[1]','varchar(100)') AS nt_username,
    p.q.value('(./action[@name="server_principal_name"]/value)[1]','varchar(100)') AS serverprincipalname,
    p.q.value('(./action[@name="sql_text"]/value)[1]','Nvarchar(max)') AS sqltext
    FROM #XML_Hold
    CROSS APPLY BufferXml.nodes('/RingBufferTarget/event')p(q)
    WHERE -- Ryelugu 03/05/2015 - Perf Optimize - Filtering the buffered XML so as not to look up previously loaded records in the stage table
    p.q.value('@timestamp[1]','datetime') >= ISNULL(@Prev_Insertion_time ,p.q.value('@timestamp[1]','datetime'))
    AND p.q.value('(./data[@name="ddl_phase"]/text)[1]','varchar(100)') ='Commit' --Ryelugu 03/06/2015 - Every event records a begin version and a commit version into the buffer (XML); we need the committed version
    AND p.q.value('(./data[@name="object_type"]/text)[1]','varchar(100)') <> 'STATISTICS' --Ryelugu 03/06/2015 - SQL Server may internally create statistics for #Temp tables; we do not want CREATE STATISTICS statements to be logged
    AND p.q.value('(./data[@name="object_name"]/value)[1]','varchar(100)') NOT LIKE '%#%' -- Any stored proc which creates a temp table within it: the Extended Event captures that creation statement SQL as well; we don't need it though
    AND p.q.value('(./action[@name="client_app_name"]/value)[1]','varchar(100)') <> 'Replication Monitor' --Ryelugu : 03/09/2015 We do not want any records captured by Replication Monitor
    SELECT GETDATE() GETDATE_4
    -- SELECT * FROM #TEMP
    -- SELECT COUNT(*) FROM #TEMP
    -- SELECT GETDATE()
    -- RETURN
    -- PRINT '3'
    --RETURN
    INSERT INTO [dbo].[MN_DDLSchema_Changes_log]
    (
    [UserName]
    ,[DbName]
    ,[ObjectName]
    ,[client_app_name]
    ,[ClientHostName]
    ,[ServerName]
    ,[SQL_TEXT]
    ,[EE_Time_Stamp]
    ,[Event_Name]
    )
    SELECT
    CASE WHEN T.nt_username IS NULL OR LEN(T.nt_username) = 0 THEN t.ServerPrincipalName
    ELSE T.nt_username
    END
    ,T.DbName
    ,T.objectname
    ,T.clientappname
    ,t.ClientHostName
    ,T.server_instance_name
    ,T.sqltext
    ,T.Time_Stamp_EE
    ,T.eventname
    FROM
    #TEMP T
    /* -- RYelugu 03/06/2015 - Filters are now being applied directly while retrieving records from BUFFER or on XML
    -- Ryelugu 03/15/2015 - More filters are likely to be added on further testing
    WHERE ddl_Phase ='Commit'
    AND ObjectType <> 'STATISTICS' --Ryelugu 03/06/2015 - SQL Server may internally create statistics for #Temp tables; we do not want CREATE STATISTICS statements to be logged
    AND ObjectName NOT LIKE '%#%' -- Any stored proc which creates a temp table within it: the Extended Event captures that creation statement SQL as well; we don't need it though
    AND T.Time_Stamp_EE >= @Prev_Insertion_time --Ryelugu 03/05/2015 - Performance Optimize
    AND NOT EXISTS ( SELECT 1 FROM [dbo].[MN_DDLSchema_Changes_log] MN
    WHERE MN.[ServerName] = T.server_instance_name -- Ryelugu: server name needs to be added to the XML (events in session)
    AND MN.[DbName] = T.DbName
    AND MN.[Event_Name] = T.EventName
    AND MN.[ObjectName]= T.ObjectName
    AND MN.[EE_Time_Stamp] = T.Time_Stamp_EE
    AND MN.[SQL_TEXT] = T.SqlText -- Ryelugu 03/05/2015 This is a comparison metric as well, but needs a decision on
    -- performance here; will take advice from Lance on whether comparison on varchar(max) is a viable idea
    */
    --SELECT GETDATE()
    --PRINT '4'
    --RETURN
    SELECT
    top 100
    [EE_Time_Stamp]
    ,[ServerName]
    ,[DbName]
    ,[Event_Name]
    ,[ObjectName]
    ,[UserName]
    ,[SQL_TEXT]
    ,[client_app_name]
    ,[Created_Date]
    ,[ClientHostName]
    FROM
    [dbo].[MN_DDLSchema_Changes_log]
    ORDER BY [EE_Time_Stamp] desc
    -- select getdate()
    -- ** DELETE EVENTS after logging into Physical table
    -- NEED TO Identify if this @XML can be updated into physical system table such that previously loaded events are left untoched
    -- SET @XML.modify('delete /event/class/.[@timestamp="2015-03-06T13:01:19.020Z"]')
    -- SELECT @XML
    SELECT GETDATE() GETDATE_5
    END
    GO
    Rajkumar Yelugu

    @@Version:
    Microsoft SQL Server 2012 - 11.0.5058.0 (X64)
        May 14 2014 18:34:29
        Copyright (c) Microsoft Corporation
        Developer Edition (64-bit) on Windows NT 6.2 <X64> (Build 9200: ) (Hypervisor)
    (1 row(s) affected)
    The compatibility level is set to 110.
    One of the documented limitations states: XML columns with a depth of more than 128 nested nodes are not supported.
    How do i verify this ? Thanks .
    Rajkumar Yelugu

  • List of indexes and columns for a database.

    Hi
    Do you know the SQL command to get the list of indexes and associated columns for all tables for a given database ?
    The following only shows me the table and index name, but I would also like to get the columns for each index:
    SELECT o.name,       i.name  FROM sysobjects o  JOIN sysindexes i    ON (o.id = i.id)
    Can you pls help
    Thanks
    H.

    There isn't a single command that will do that.
    There is the sp_helpindex stored procedure which will give you the information on indexes one table at a time, you could call it in a loop, but there is other information in there as well, so the output would be messy.
    You can look at the source code for sp_helpindex to find out how it decodes the key column names. 
    use sybsystemprocs
    go
    sp_helptext sp_helpindex
    go
    The core of it is this loop, which builds up a list of the column names in @keys, a varchar(1024) declared earlier.
            /*
            **  First we'll figure out what the keys are.
            */
            declare @i int
            declare @thiskey varchar(255)
            declare @sorder char(4)
            declare @lastindid int
            declare @indname varchar(255)
            select @keys = "", @i = 1
            set nocount on
            while @i <= 31
            begin
                    select @thiskey = index_col(@objname, @indid , @i)
                    if (@thiskey is NULL)
                    begin
                            goto keysdone
                    end
                    if @i > 1
                    begin
                            select @keys = @keys + ", "
                    end
                    /*select @keys = @keys + index_col(@objname, @indid, @i)*/
                    select @keys = @keys + @thiskey
                    /*
                    ** Get the sort order of the column using index_colorder()
                    ** This support is added for handling descending keys.
                    */
                    select @sorder = index_colorder(@objname, @indid, @i)
                    if (@sorder = "DESC")
                            select @keys = @keys + " " + @sorder
                    /*
                    **  Increment @i so it will check for the next key.
                    */
                    select @i = @i + 1
            end
            /*
            **  When we get here we now have all the keys.
            */
            keysdone:
                    set nocount off
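    If a quick one-off listing is enough, you can also call index_col() directly from a query rather than going through the procedure - a sketch covering just the first three key columns (index_col() returns NULL past the last key, so extend the pattern as needed):
    select o.name as table_name, i.name as index_name,
           index_col(o.name, i.indid, 1) as key1,
           index_col(o.name, i.indid, 2) as key2,
           index_col(o.name, i.indid, 3) as key3
    from sysobjects o join sysindexes i on (o.id = i.id)
    where o.type = 'U' and i.indid between 1 and 254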
    -bret

  • Steps for creating a database index

    Do we just create it from SE11? Does Basis need to be involved for any further steps?

    Hi Amrutha,
    Indexes speed up data selection from the database. They consist of selected fields of a table, of which a copy is then made in sorted order. If you specify the index fields correctly in a condition in the WHERE or HAVING clause, the system only searches part of the index (index range scan). The primary index is always created automatically in the R/3 System. It consists of the primary key fields of the database table. This means that for each combination of fields in the index, there is a maximum of one line in the table. This kind of index is also known as UNIQUE.

    If you cannot use the primary index to determine the result set because, for example, none of the primary index fields occur in the WHERE or HAVING clause, the system searches through the entire table (full table scan). For this case, you can create secondary indexes, which can restrict the number of table entries searched to form the result set. You specify the fields of secondary indexes using the ABAP Dictionary. You can also determine whether the index is unique or not.

    However, you should not create secondary indexes to cover all possible combinations of fields. Only create one if you select data by fields that are not contained in another index, and the performance is very poor. Furthermore, you should only create secondary indexes for database tables from which you mainly read, since indexes have to be updated each time the database table is changed. As a rule, secondary indexes should not contain more than four fields, and you should not have more than five indexes for a single database table. If a table has more than five indexes, you run the risk of the optimizer choosing the wrong one for a particular operation. For this reason, you should avoid indexes with overlapping contents.

    Secondary indexes should contain columns that you use frequently in a selection, and that are as highly selective as possible. The fewer table entries that can be selected by a certain column, the higher that column's selectivity. Place the most selective fields at the beginning of the index. Your secondary index should be so selective that each index entry corresponds to at most five percent of the table entries. If this is not the case, it is not worth creating the index. You should also avoid creating indexes for fields that are not always filled, where their value is initial for most entries in the table.

    If all of the columns in the SELECT clause are contained in the index, the system does not have to search the actual table data after reading from the index. If you have a SELECT clause with very few columns, you can improve performance dramatically by including these columns in a secondary index.
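    As a generic SQL illustration of that last point (table and field names here are made up): if a frequent query reads only a handful of fields, an index containing exactly those fields can answer it without touching the table:

    CREATE INDEX zorders_z01 ON zorders (status, customer_id, order_date);

    SELECT customer_id, order_date
      FROM zorders
     WHERE status = 'OPEN';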
    Index:
    http://help.sap.com/saphelp_nw04/helpdata/en/cf/21eb20446011d189700000e8322d00/content.htm
    Creating Secondary Index
    http://help.sap.com/saphelp_nw04/helpdata/en/cf/21eb47446011d189700000e8322d00/content.htm
    regards,
    keerthi.

  • Strategy for database exceptions

    I am designing our strategy for handling database exceptions which may occur upon saving for instance.
    For example, if a user tries to save an object which violates a unique key constraint, I get that error wrapped by the DatabaseException class as documented.
    The wrinkle is that I would really like to report to the user which fields caused this violation. We are using an Oracle database for the foreseeable future, so portability is not critical.
    Has anyone designed something which can get this metadata and attach it to the exception? Any tips would be much appreciated.
    Cheers,
    craig

    I have found it very helpful to separate the database / recordset from the rest of the project. If you create objects that model your records then the rest of your program doesn't know or care where the data came from. You could easily write store / retrieve methods to deal with the data from a file, over the web, from a socket connection, etc. without having to alter your entire app. My advice is to do as little manipulation on the resultsets as possible and focus on object manipulation instead. Mind you, if you're writing some sort of generic recordset "explorer" then this doesn't apply. You can't possibly model objects from some random recordset that you have no previous knowledge of. In that case you'd have to examine the metadata of the recordset to get field names, data types, etc.
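    On the original question of identifying the offending fields: one possible sketch for Oracle (assuming you first parse the constraint name out of the ORA-00001 message text) is to look up the constraint's columns in the data dictionary and attach them to your exception:

    SELECT column_name
      FROM user_cons_columns
     WHERE constraint_name = :violated_constraint
     ORDER BY position;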

  • Wireless site survey planner tool for multi floor office building

    Hi,
    I'm looking for a wireless site survey planner tool for a multi-floor office building, to estimate the number of APs and their locations - not the post-install site survey.
    AirMagnet is excellent for a single floor, but its planner feature cannot handle a multi-floor scenario, even though it can merge planner output with a post-install survey.
    Are there any tools that provide a planner for a multi-floor office building?
    Thanks in advance,

    Cj
    If you are looking for an AP estimate, you can simply divide the square footage by the per-AP coverage for the grade of wireless service. For example:
    You have a 100,000 sq ft building. All 100,000 sq ft will be wireless.
    Data: 5,000-6,000 sq ft per AP, so 100,000 / 5,000 = 20 access points
    VoIP: 2,500 sq ft per AP, so 100,000 / 2,500 = 40 access points
    I always like to pad it by another 10%.
    I hope this helps …

  • Configuring raid 0+1 for an oltp database of 1 terabyte size on centos 4.5

    Hi all,
    Configuring RAID 0+1 for an OLTP database on CentOS 4.5.
    I have to configure RAID 0+1 for an OLTP database of 1 terabyte size on CentOS 4.5.
    Can anyone please suggest a step-by-step configuration or a link?
    Thanks and Regards
    Edited by: DBA24by7 on Mar 15, 2009 2:20 PM

    > it is centos 4.5 which is almost like redhat linux.
    And thus completely unsupported by Oracle - which begs the question as to why anyone would bother to go to the expense of setting up a RAID configuration for an unsupported database?
    Anyway, you should be using RAID 1+0
    see here: http://www.acnc.com/04_01_10.html
    Paul... (lots of RAID questions today!)

  • How we build Java Database Connectivity for Oracle 8i Database

    Can anyone send me sample code for Java Database Connectivity to an Oracle 8i database?
    It would be a great help.
    Thanks & Regards
    Rasika

    You don't need a DSN if you use Oracle's JDBC driver.
    You didn't read ANY of the previous replies. What makes you think this one will help? Or any instruction, for that matter?
    Sounds like you just want someone to give it to you. OK, I'll bite, but you have to figure out the rest:
    import java.sql.*;
    import java.util.*;

    /**
     * Command line app that allows a user to connect with a database and
     * execute any valid SQL against it
     */
    public class DataConnection
    {
        public static final String DEFAULT_DRIVER   = "sun.jdbc.odbc.JdbcOdbcDriver";
        public static final String DEFAULT_URL      = "jdbc:odbc:DRIVER={Microsoft Access Driver (*.mdb)};DBQ=c:\\Edu\\Java\\Forum\\DataConnection.mdb";
        public static final String DEFAULT_USERNAME = "admin";
        public static final String DEFAULT_PASSWORD = "";

        // Alternative defaults for MySQL -- swap in for the set above:
        // public static final String DEFAULT_DRIVER   = "com.mysql.jdbc.Driver";
        // public static final String DEFAULT_URL      = "jdbc:mysql://localhost:3306/hibernate";
        // public static final String DEFAULT_USERNAME = "admin";
        // public static final String DEFAULT_PASSWORD = "";

        /** Database connection */
        private Connection connection;

        /**
         * Driver for the DataConnection
         * @param args command line arguments
         * <ol start='0'>
         * <li>SQL query string</li>
         * <li>JDBC driver class</li>
         * <li>database URL</li>
         * <li>username</li>
         * <li>password</li>
         * </ol>
         */
        public static void main(String[] args)
        {
            DataConnection db = null;
            try
            {
                if (args.length > 0)
                {
                    String sql      = args[0];
                    String driver   = ((args.length > 1) ? args[1] : DEFAULT_DRIVER);
                    String url      = ((args.length > 2) ? args[2] : DEFAULT_URL);
                    String username = ((args.length > 3) ? args[3] : DEFAULT_USERNAME);
                    String password = ((args.length > 4) ? args[4] : DEFAULT_PASSWORD);
                    System.out.println("sql     : " + sql);
                    System.out.println("driver  : " + driver);
                    System.out.println("url     : " + url);
                    System.out.println("username: " + username);
                    System.out.println("password: " + password);
                    db = new DataConnection(driver, url, username, password);
                    System.out.println("Connection established");
                    Object result = db.executeSQL(sql);
                    System.out.println(result);
                }
                else
                {
                    System.out.println("Usage: db.DataConnection <sql> <driver> <url> <username> <password>");
                }
            }
            catch (SQLException e)
            {
                System.err.println("SQL error: " + e.getErrorCode());
                System.err.println("SQL state: " + e.getSQLState());
                e.printStackTrace(System.err);
            }
            catch (Exception e)
            {
                e.printStackTrace(System.err);
            }
            finally
            {
                if (db != null)
                {
                    db.close();
                }
                db = null;
            }
        }

        /**
         * Create a DataConnection with the default settings
         * @throws SQLException if the database connection fails
         * @throws ClassNotFoundException if the driver class can't be loaded
         */
        public DataConnection() throws SQLException, ClassNotFoundException
        {
            this(DEFAULT_DRIVER, DEFAULT_URL, DEFAULT_USERNAME, DEFAULT_PASSWORD);
        }

        /**
         * Create a DataConnection
         * @throws SQLException if the database connection fails
         * @throws ClassNotFoundException if the driver class can't be loaded
         */
        public DataConnection(final String driver,
                              final String url,
                              final String username,
                              final String password)
            throws SQLException, ClassNotFoundException
        {
            Class.forName(driver);
            this.connection = DriverManager.getConnection(url, username, password);
        }

        /**
         * Get driver properties
         * @param url database URL
         * @return list of driver properties
         * @throws SQLException if the query fails
         */
        public List getDriverProperties(final String url) throws SQLException
        {
            List driverProperties = new ArrayList();
            Driver driver         = DriverManager.getDriver(url);
            if (driver != null)
            {
                DriverPropertyInfo[] info = driver.getPropertyInfo(url, null);
                if (info != null)
                {
                    driverProperties = Arrays.asList(info);
                }
            }
            return driverProperties;
        }

        /**
         * Clean up the connection
         */
        public void close()
        {
            close(this.connection);
        }

        /**
         * Execute ANY SQL statement
         * @param sql SQL statement to execute
         * @return list of row values if a ResultSet is returned,
         * OR an altered row count object if not
         * @throws SQLException if the query fails
         */
        public Object executeSQL(final String sql) throws SQLException
        {
            Object returnValue;
            Statement statement = null;
            ResultSet rs        = null;
            try
            {
                statement = this.connection.createStatement();
                boolean hasResultSet = statement.execute(sql);
                if (hasResultSet)
                {
                    rs                     = statement.getResultSet();
                    ResultSetMetaData meta = rs.getMetaData();
                    int numColumns         = meta.getColumnCount();
                    List rows              = new ArrayList();
                    while (rs.next())
                    {
                        Map thisRow = new LinkedHashMap();
                        for (int i = 1; i <= numColumns; ++i)
                        {
                            String columnName = meta.getColumnName(i);
                            Object value      = rs.getObject(columnName);
                            thisRow.put(columnName, value);
                        }
                        rows.add(thisRow);
                    }
                    returnValue = rows;
                }
                else
                {
                    int updateCount = statement.getUpdateCount();
                    returnValue     = new Integer(updateCount);
                }
            }
            finally
            {
                close(rs);
                close(statement);
            }
            return returnValue;
        }

        /**
         * Close a database connection
         * @param connection the connection to close
         */
        public static final void close(Connection connection)
        {
            try
            {
                if (connection != null)
                {
                    connection.close();
                    connection = null;
                }
            }
            catch (SQLException e)
            {
                e.printStackTrace();
            }
        }

        /**
         * Close a statement
         * @param statement the statement to close
         */
        public static final void close(Statement statement)
        {
            try
            {
                if (statement != null)
                {
                    statement.close();
                    statement = null;
                }
            }
            catch (SQLException e)
            {
                e.printStackTrace();
            }
        }

        /**
         * Close a result set
         * @param rs the result set to close
         */
        public static final void close(ResultSet rs)
        {
            try
            {
                if (rs != null)
                {
                    rs.close();
                    rs = null;
                }
            }
            catch (SQLException e)
            {
                e.printStackTrace();
            }
        }

        /**
         * Close a database connection and statement
         */
        public static final void close(Connection connection, Statement statement)
        {
            close(statement);
            close(connection);
        }

        /**
         * Close a database connection, statement, and result set
         */
        public static final void close(Connection connection,
                                       Statement statement,
                                       ResultSet rs)
        {
            close(rs);
            close(statement);
            close(connection);
        }
    }
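    For the Oracle 8i case you asked about, the values to swap in would be Oracle's thin driver class oracle.jdbc.driver.OracleDriver and a URL of the form jdbc:oracle:thin:@host:port:SID (host, port and SID being placeholders for your own instance), with Oracle's JDBC classes (classes12.zip in the 8i era) on the classpath.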

  • Index creation online - performance impact on database

    hi,
    I have an Oracle 11.1.0.7 database running on Linux as a 3-node RAC.
    I have a huge table which has more than 255 columns and is about 400GB in size; it is also highly fragmented because of constant DML activity.
    Questions:
    1. For now i am trying to create an index Online while the business applications are running.
    Will there be any performance impact on the database when creating an index online on a single column of table 'TBL' while applications are active against the same table? So basically: does index creation on an object during DML operations on the same object have a performance impact, and is there a major difference between creating the index online and offline?
    2. I tried to build an index on a column which has NULL values on this same table 'TBL', which has more than 255 columns, is about 400GB in size, is highly fragmented, and has about 140 million rows.
    I requested that the applications be shut down, but the index creation with a parallel degree of 4 took at least 6 hours to complete.
    We have a pre-prod database which holds an exported and imported copy of the prod data, so pre-prod is a highly defragmented copy of prod.
    When I created the same index on the same NULL column there, it only took 15 minutes to complete.
    I am not sure why it took more than 6 hours on the highly fragmented copy in prod, compared to 15 minutes on the highly defragmented copy in pre-prod.
    Any thoughts would be helpful.
    Thanks.
    Phil.

    How are you measuring the "fragmentation" of the table ?
    Is the pre-prod database running single instance or RAC ?
    Did you collect any workload stats (AWR / Statspack) on the pre-prod and production systems while creating (or failing to create) the index ?
    Did you check whether the index creation ended up in-memory, single pass or multi pass in the two environments?
    The commonest explanation for this type of difference is two-fold:
    a) the older data needs a lot of delayed block cleanout, which results in a lot of random I/O to the undo tablespace - slowing down I/O generally
    b) the newer end of the table is subject to lots of change, so needs a lot of work relating to read-consistency - which also means I/O on the undo system
      --  UPDATED:  but you did say that you had stopped the application so this bit wouldn't have been relevant.
    On top of this, an online (re)build has to lock the table briefly at the start and end of the build, and in a busy system you can wait a long time for the locks to be acquired - and if the system has been busy while the build has been going on it can take quite a long time to apply the journal file to finish the index build.
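    As a rough check on the in-memory / one-pass / multi-pass question - assuming you can identify the SQL_ID of the CREATE INDEX statement - a query along these lines against v$sql_workarea will tell you how the sort executed:

    select operation_type, policy, last_memory_used,
           last_execution    -- OPTIMAL, ONE PASS, or MULTI-PASSES
    from   v$sql_workarea
    where  sql_id = '&create_index_sql_id';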
    Regards
    Jonathan Lewis

  • Strategy for array of tests on front panel

    I'm building a test program that has a lot of tests. The basic strategy
    that I'm trying is to have an array of tests on the front panel which would
    have an on/off button, result, spec, and pass/fail indicator showing. The
    array would also have all other information needed for that test hidden.
    This array gets indexed into a for loop that does all the tests. So the
    elements of the array are used in a lot of sub-vi's. The problem is how do
    I handle changes to the format of the element during the programming
    process? I keep finding things I need to add to the element that I didn't
    think of before. When I first started the project my idea was to make a
    custom variable of that element so that when I made a change
    it would update
    all at once. The problem with that is the array in the main vi loses the
    information it's storing when its elements update from the custom control.
    Then I'd have to retype everything back in. So I quit using a custom
    control and now when I make a change to the format of the element I need to
    copy the array/element onto all subvi's that use it. Comments?

    Isn't it humorous how simple projects can turn into monsters? I will suggest an answer to the question/problem of "losing information stored upon updating your elements in the array".
    As long as you are in development mode, you may find it helpful to spend a few minutes and build a VI (Write/Read to/from files VIs) that records the information/data for quick uploading after you update your code. Another option (if you only need the last set of values) is setting the last values to Default (use property nodes). I don't believe shift registers will fully handle your situation. Just some ideas to jog you along - Good Luck - Doug

  • An application for multi-channel measurements

    Does NI have a software solution for multi-channel measurements? I mean systems for measurements, tests and monitoring which contain numerous DAQ devices with thousands of sensors.
    I suppose the software for such system should have the following features:
    Instrument control
    Sensor management (type, s/n, accuracy, calibration data, next calibration date, measurement limits, etc.)
    Data acquisition
    Storing data in databases
    Data visualisation and analysis
    Report generation
    Tools for creating custom user interfaces / data visualisations for monitoring
    As far as I know, DIAdem is great for data analysis, visualisation and report generation, but it's not suitable for the other tasks. With LabVIEW you can do anything, but it's not an "out-of-the-box" solution.
    Just to clarify what I'm talking about, here's an application that seems to fit the description. It's the HBM catman. Maybe someone worked with it? Do you know any analogues for it?

    Just to add to Hooovahh's comments.
    NI has flat out stated that they do not want to make turn-key solutions.  That would take away from them being able to make tools for people to create the solutions.  That is why they have alliance partners.  These partners take the tools made by NI and make really cool stuff.  My latest project was a software package that helped a technician build a jet engine correctly so that the turbine blades do not come out and destroy the engine (just slightly important).  I have also done some test systems for space craft avionics.
    So if you are really serious about this, I highly recommend finding an Alliance Partner to help you out.  If you want, give me a PM and I can work on getting you and a few people on my side to discuss your requirements and proceed from there.
    There are only two ways to tell somebody thanks: Kudos and Marked Solutions
    Unofficial Forum Rules and Guidelines
