Queries on oracle hang after index creation

Hi,
I have a problem where queries issued on an Oracle database hang after I create indexes on some of the tables used by those queries.
I have a script where I drop indexes:
DROP INDEX EVP_PCON00_CDSITC_IX;
DROP INDEX EVP_PCS202_STIMP1_IX;
DROP INDEX EVP_PECT00_ONAECT_IX;
DROP INDEX EVP_PVAL01_ONAVAL_IX;
After this script I load data into those tables.
Then I launch another script where I recreate the indexes I just dropped:
CREATE INDEX EVP_PCON00_CDSITC_IX  ON EVP_PCON00(CDSITC) TABLESPACE TS_ODS_INDEX;
COMMIT;
CREATE INDEX EVP_PCS202_STIMP1_IX  ON EVP_PCS202(STIMP1) TABLESPACE TS_ODS_INDEX;
COMMIT;
CREATE INDEX EVP_PVAL01_CDORIV_IX  ON EVP_PVAL01(CDORIV) TABLESPACE TS_ODS_INDEX;
COMMIT;
When the script ends, I try to execute a query using some of the tables I created indexes on:
SELECT ...
FROM ....
WHERE ....
The query never returns a result set; it just hangs. Looking at the session browser in Toad, the scan of the first table used in the query never progresses.
When I drop those indexes and re-execute the query, everything works fine.
Thanks
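One hedged way to see what such a hung query is actually doing is to check, from another session, its current wait event and any long-running operations (standard dynamic performance views; the filters are assumptions):
SELECT sid, sql_id, event, wait_class, seconds_in_wait
FROM   v$session
WHERE  status = 'ACTIVE' AND type = 'USER';
SELECT sid, opname, target, sofar, totalwork, time_remaining
FROM   v$session_longops
WHERE  sofar < totalwork;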

This is the execution plan I get:
Execution Plan
Plan hash value: 1198043594
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 1343 | 12 (17)| 00:00:01 |
| 1 | HASH UNIQUE | | 1 | 1343 | 12 (17)| 00:00:01 |
|* 2 | HASH JOIN OUTER | | 1 | 1343 | 11 (10)| 00:00:01 |
| 3 | NESTED LOOPS OUTER | | 1 | 1257 | 9 (12)| 00:00:01 |
| 4 | NESTED LOOPS OUTER | | 1 | 1033 | 8 (13)| 00:00:01 |
| 5 | NESTED LOOPS OUTER | | 1 | 809 | 7 (15)| 00:00:01 |
|* 6 | HASH JOIN | | 1 | 585 | 6 (17)| 00:00:01 |
| 7 | NESTED LOOPS | | | | | |
| 8 | NESTED LOOPS | | 1 | 490 | 3 (0)| 00:00:01 |
| 9 | TABLE ACCESS FULL | EVP_PTIP00 | 1 | 452 | 2 (0)| 00:00:01 |
|* 10 | INDEX RANGE SCAN | EVP_PTIC00_CDROLE_IX | 1 | | 1 (0)| 00:00:01 |
|* 11 | TABLE ACCESS BY INDEX ROWID| EVP_PTIC00 | 1 | 38 | 1 (0)| 00:00:01 |
| 12 | TABLE ACCESS FULL | EVP_PTIF00 | 1 | 95 | 2 (0)| 00:00:01 |
|* 13 | TABLE ACCESS BY INDEX ROWID | EVP_PTAB00 | 1 | 224 | 1 (0)| 00:00:01 |
|* 14 | INDEX RANGE SCAN | EVP_PTAB00_CLTABL_IX | 1 | | 1 (0)| 00:00:01 |
|* 15 | TABLE ACCESS BY INDEX ROWID | EVP_PTAB00 | 1 | 224 | 1 (0)| 00:00:01 |
|* 16 | INDEX RANGE SCAN | EVP_PTAB00_CLTABL_IX | 1 | | 1 (0)| 00:00:01 |
|* 17 | TABLE ACCESS BY INDEX ROWID | EVP_PTAB00 | 1 | 224 | 1 (0)| 00:00:01 |
|* 18 | INDEX RANGE SCAN | EVP_PTAB00_CLTABL_IX | 1 | | 1 (0)| 00:00:01 |
| 19 | TABLE ACCESS FULL | EVP_PRIB00 | 1 | 86 | 2 (0)| 00:00:01 |
Predicate Information (identified by operation id):
2 - access("EVP_PRIB00"."NOTIER"(+)="EVP_PTIP00"."NOTIER")
6 - access("EVP_PTIF00"."NOTIER"="EVP_PTIP00"."NOTIER")
10 - access("EVP_PTIC00"."CDROLE"='EMP')
11 - filter("EVP_PTIC00"."CDDDRX"='CL' AND "EVP_PTIP00"."NOTIER"="EVP_PTIC00"."NOTIER")
13 - filter("EVP_PTAB00_SITF"."CDTABL"(+)="EVP_PTIP00"."CDSITF")
14 - access("EVP_PTAB00_SITF"."CLTABL"(+)='SITF')
15 - filter("EVP_PTAB00_TITR"."CDTABL"(+)="EVP_PTIP00"."CDTITR")
16 - access("EVP_PTAB00_TITR"."CLTABL"(+)='TITR')
17 - filter("EVP_PTAB00_RMAT"."CDTABL"(+)="EVP_PTIP00"."CDRMAT")
18 - access("EVP_PTAB00_RMAT"."CLTABL"(+)='RMAT')
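Almost every cardinality estimate in this plan is 1, which often points to missing or stale optimizer statistics on freshly loaded tables; a minimal sketch of regathering them after the load and index build (table name taken from the drop script, schema assumed to be the current user):
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => USER,
    tabname => 'EVP_PCON00',
    cascade => TRUE);  -- also gathers statistics on the new indexes
END;
/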

Similar Messages

  • After index creation infocube is very slow! (doesn't produce a result)

    Hello all,
    I have the following problem: on a single cube, ZWLPIANIF, after the index creation from the Manage screen of the cube, queries on this cube are totally unusable; I wait over 30 minutes and then have to stop the execution. When I delete the indexes, the query works fine, only a little slower than another cube that has indexes.
    ZWLPIANIF has 1,200,000 records loaded in FULL.
    From RSPC we run the following steps:
    deletion of indexes
    loading
    creation of indexes; the creation job completes successfully (no log in SM21, no dump in ST22, status OK in SM37).
    These are the details of our SAP NetWeaver 2004s system:
    SAP_ABA     700     0016     SAPKA70016
    SAP_BASIS     700     0016     SAPKB70016
    ST-PI     2008_1_700     0000          -
    PI_BASIS     2006_1_700     0006     SAPKIPYM06
    SAP_BW     700     0018     SAPKW70018
    FINBASIS     600     0013     SAPK-60013INFINBASIS
    SEM-BW     600     0013     SAPKGS6013
    BI_CONT     703     0011     SAPKIBIIQ1
    ST-A/PI     01L_BCO700     0000          -
    Does anyone have any ideas?
    REGARDS
    Gianvito

    Hi,
    You can analyse/debug the query through RSRT in execute+debug mode, and check whether the database optimiser chooses the right index, i.e. whether the statistics are up to date.
    You can also reorganise the indexes.
    Thank you,
    Tilak
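    If the statistics on the cube do turn out to be stale, they can also be regathered directly on the fact table; a hedged sketch only (the /BIC/F<cube> name follows the usual BW naming convention, and the owning schema here is an assumption):
         BEGIN
           DBMS_STATS.GATHER_TABLE_STATS(
             ownname => 'SAPR3',            -- schema is hypothetical
             tabname => '/BIC/FZWLPIANIF',
             cascade => TRUE);
         END;
         /
    Normally this is handled from the cube's Manage screen (Performance tab) or via BRCONNECT rather than directly in SQL.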

  • Delete cube request - before or after index creation?

    Hi Folks,
    a quick question: I plan to delete, for one cube, requests that are older than 3 days (the cube only hosts short-term reporting). Now I wonder if this should be done after the new load but before the index is created, or after the index is created.
    I guess it is after the index is created, otherwise it would take longer to find the requests that should be deleted. The index will be slightly degenerated due to the deletion, but that should be only marginal.
    Am I right or wrong?
    Thanks for all replies in advance,
    Axel

    hi,
    a quick question: I plan to delete, for one cube, requests that are older than 3 days (the cube only hosts short-term reporting). Now I wonder if this should be done after the new load but before the index is created, or after the index is created.
    The delete should be done before index creation: once the indexes are created, even though the data corresponding to those index entries is deleted, the index entries remain. This unnecessarily increases the index table size.
    regards,
    Arvind.

  • Importing HDV Footage: System Hang After Indexing Finished

    Running Premiere CS6 with updates.
    Generally solid application, but the first time I crashed it this year, was during import of HDV footage. After the footage indexes, and the thumbnail pic changes from 'media pending' to the actual footage frame, the mouse pointer freezes. The keyboard is frozen too. Can't even bring up the Task Manager in Windows 7 64. Is HDV even supported anymore? Maybe it's an invalid format? The file plays fine in VLC, so I don't think it's corrupted. But egads, I've never seen an app hang Windows 7 before!

    I've confirmed that the hang does not happen with Canon HV20 footage, only Sony HVR-V1U footage. Any footage shot by that camera will hang not only Premiere, but the entire OS. I have to power down the machine to regain control.
    What's so different about Sony's implementation of HDV that it is incompatible with Premiere?

  • ORA-1652: Index creation error

    During a table reorganisation, the creation of a secondary index was aborted. We then continued our activity without the index because of the short system downtime window.
    We ran the SQL statement again once the SAP system was up, and it threw an error:
    ORA-1652: unable to extend temp segment by 128 in tablespace PSAPTEMP.
    On the current system we are not prepared to add or resize datafiles, since that would increase the size of the database.
    One option is to create a new temp tablespace and drop it after the index creation, but while doing this, how do we make the index creation use the new tablespace created for this activity?
    SAP 4.6C, Oracle 9, Sun Solaris.
    Kindly advise what can be done. Please reply ASAP.

    Martin,
    We have discussed this with the team:
    1. Create a new temporary tablespace of the desired size, which should be a minimum of 25GB:
               CREATE temporary tablespace PSAPTEMP1......
    2. If the original tablespace is the default temporary tablespace, set the new tablespace as the default temporary tablespace of the database:
               SQL> alter database default temporary tablespace PSAPTEMP1;
    3. Perform the index creation.
    4. Make the old tablespace PSAPTEMP the default temporary tablespace again:
               SQL> alter database default temporary tablespace PSAPTEMP;
    5. Drop the new tablespace:
               SQL> drop tablespace PSAPTEMP1 including contents;
    (A consolidated sketch of these steps is shown below.)
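    For illustration only, the whole sequence might look like this end to end (the tempfile path and the 25600M size are assumptions, not values from this thread):
               CREATE TEMPORARY TABLESPACE PSAPTEMP1
                 TEMPFILE '/oracle/SID/sapdata1/psaptemp1.data1' SIZE 25600M;
               ALTER DATABASE DEFAULT TEMPORARY TABLESPACE PSAPTEMP1;
               -- run the index creation here
               ALTER DATABASE DEFAULT TEMPORARY TABLESPACE PSAPTEMP;
               DROP TABLESPACE PSAPTEMP1 INCLUDING CONTENTS AND DATAFILES;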
    Here I have a question: while switching the default temporary tablespace from PSAPTEMP to the much bigger new PSAPTEMP1 tablespace, will this affect running transactions?
    Is there any impact from switching the tablespace online?
    We are performing this activity on the online system (running SAP system).
    Thanks,

  • Oracle 11g R2 client installation on Windows 7 64 bit hangs after performing prerequisite checks

    Windows 7 Professional SP1 64bit OS
    6 GB RAM
    380 GB free space
    Extracted client to C:\Stage\win64_11gR2_client\client
    Ran setup.exe as administrator, but the installation always hangs after step 5 of 7 is completed (Perform Prerequisite checks). Install log ends with entry "INFO: Get view named [SummaryUI]"
    What am I missing here? Any help would be greatly appreciated.
    C:\Stage\win64_11gR2_client\client>dir
    Directory of C:\Stage\win64_11gR2_client\client
    06/15/2013  02:56 PM    <DIR>          .
    06/15/2013  02:56 PM    <DIR>          ..
    06/15/2013  02:56 PM    <DIR>          doc
    06/15/2013  02:56 PM    <DIR>          install
    06/15/2013  02:56 PM    <DIR>          response
    06/15/2013  02:55 PM           341,304 setup.exe
    06/15/2013  02:55 PM                56 setup.ini
    06/15/2013  02:59 PM    <DIR>          stage
    06/15/2013  02:55 PM             4,327 welcome.html
                   3 File(s)        345,687 bytes
                   6 Dir(s)  412,474,404,864 bytes free
    Last few lines of install log:
    WARNING: Active Help Content for InstallLocationPane.cbxOracleBases do not exist. Error :Can't find resource for bundle oracle.install.ivw.client.resource.ContextualHelpResource, key InstallLocationPane.cbxOracleBases.conciseHelpText
    WARNING: Active Help Content for InstallLocationPane.cbxSoftwareLoc do not exist. Error :Can't find resource for bundle oracle.install.ivw.client.resource.ContextualHelpResource, key InstallLocationPane.cbxSoftwareLoc.conciseHelpText
    INFO: View for [InstallLocationUI] is oracle.install.ivw.client.view.InstallLocationGUI@75f0f8ff
    INFO: Initializing view <InstallLocationUI> at state <getOracleHome>
    INFO: inventory location isC:\Program Files\Oracle\Inventory
    INFO: Completed initializing view <InstallLocationUI> at state <getOracleHome>
    INFO: Displaying view <InstallLocationUI> at state <getOracleHome>
    INFO: Completed displaying view <InstallLocationUI> at state <getOracleHome>
    INFO: Loading view <InstallLocationUI> at state <getOracleHome>
    INFO: Completed loading view <InstallLocationUI> at state <getOracleHome>
    INFO: Localizing view <InstallLocationUI> at state <getOracleHome>
    INFO: Completed localizing view <InstallLocationUI> at state <getOracleHome>
    INFO: Waiting for completion of background operations
    INFO: Completed background operations
    INFO: Executing action at state getOracleHome
    INFO: Completed executing action at state <getOracleHome>
    INFO: Waiting for completion of background operations
    INFO: Completed background operations
    INFO: Moved to state <getOracleHome>
    INFO: inventory location isC:\Program Files\Oracle\Inventory
    INFO: Waiting for completion of background operations
    INFO: Completed background operations
    INFO: Validating view at state <getOracleHome>
    INFO: Completed validating view at state <getOracleHome>
    INFO: Validating state <getOracleHome>
    INFO: custom prereq file name: oracle.client_Administrator.xml
    INFO: refDataFile: C:\Stage\win64_11gR2_client\client\stage\cvu\oracle.client_Administrator.xml
    INFO: isCustomRefDataFilePresent: false
    INFO: InstallAreaControl exists: false
    INFO: Checking:NEW_HOME
    INFO: Checking:COMP
    INFO: Checking:COMP
    INFO: Checking:COMP
    INFO: Checking:COMP
    INFO: Checking:COMP
    INFO: Checking:COMP
    INFO: Checking:COMP
    INFO: Checking:COMP
    INFO: Checking:ORCA_HOME
    INFO: Reading shiphome metadata from c:\Stage\win64_11gR2_client\client\install\..\stage\shiphomeproperties.xml
    INFO: Loading beanstore from file:/c:/Stage/win64_11gR2_client/client/install/../stage/shiphomeproperties.xml
    INFO: Translating external format into raw format
    INFO: Restoring class oracle.install.driver.oui.ShiphomeMetadata from file:/c:/Stage/win64_11gR2_client/client/install/../stage/shiphomeproperties.xml
    INFO: inventory location isC:\Program Files\Oracle\Inventory
    INFO: inventory location isC:\Program Files\Oracle\Inventory
    INFO: size estimation for Administratorinstall is 1068.0003070831299
    INFO: PATH has :==>C:\Temp\OraInstall2013-06-17_02-55-15PM\jdk\jre\bin;.;C:\windows\system32;C:\windows;C:\Perl\site\bin;C:\Perl\bin;C:\Program Files\Common Files\Microsoft Shared\Windows Live;C:\Program Files (x86)\Common Files\Microsoft Shared\Windows Live;C:\windows\system32;C:\windows;C:\windows\System32\Wbem;C:\windows\System32\WindowsPowerShell\v1.0\;C:\Program Files (x86)\Windows Live\Shared;C:\APPS\PuTTY\;C:\Program Files (x86)\IDM Computer Solutions\UltraEdit\
    INFO: Completed validating state <getOracleHome>
    INFO: InstallLocationAction to INVENTORY_NO
    INFO: Verifying route INVENTORY_NO
    INFO: Waiting for completion of background operations
    INFO: Completed background operations
    INFO: Executing action at state prereqExecutionDecider
    INFO: Completed executing action at state <prereqExecutionDecider>
    INFO: Waiting for completion of background operations
    INFO: Completed background operations
    INFO: Moved to state <prereqExecutionDecider>
    INFO: Waiting for completion of background operations
    INFO: Completed background operations
    INFO: Validating view at state <prereqExecutionDecider>
    INFO: Completed validating view at state <prereqExecutionDecider>
    INFO: Validating state <prereqExecutionDecider>
    WARNING: Validation disabled for the state prereqExecutionDecider
    INFO: Completed validating state <prereqExecutionDecider>
    INFO: Verifying route executeprereqs
    INFO: Get view named [PrereqUI]
    INFO: View for [PrereqUI] is [email protected]e4b
    INFO: Initializing view <PrereqUI> at state <checkPrereqs>
    INFO: Completed initializing view <PrereqUI> at state <checkPrereqs>
    INFO: Displaying view <PrereqUI> at state <checkPrereqs>
    INFO: Completed displaying view <PrereqUI> at state <checkPrereqs>
    INFO: Loading view <PrereqUI> at state <checkPrereqs>
    INFO: Completed loading view <PrereqUI> at state <checkPrereqs>
    INFO: Localizing view <PrereqUI> at state <checkPrereqs>
    INFO: Completed localizing view <PrereqUI> at state <checkPrereqs>
    INFO: Waiting for completion of background operations
    INFO: Completed background operations
    INFO: Executing action at state checkPrereqs
    INFO: custom prereq file name: oracle.client_Administrator.xml
    INFO: refDataFile: C:\Stage\win64_11gR2_client\client\stage\cvu\oracle.client_Administrator.xml
    INFO: isCustomRefDataFilePresent: false
    INFO: Completed executing action at state <checkPrereqs>
    INFO: Waiting for completion of background operations
    INFO: Finishing all forked tasks at state checkPrereqs
    INFO: Waiting for completion all forked tasks at state checkPrereqs
    INFO: Creating PrereqChecker Job for leaf task Physical Memory
    INFO: Creating CompositePrereqChecker Job for container task Free Space
    INFO: Creating PrereqChecker Job for leaf task Free Space: PN-PC:C:\Temp
    INFO: Creating PrereqChecker Job for leaf task Architecture
    INFO: Creating PrereqChecker Job for leaf task Environment variable: "PATH"
    INFO: CVU tracingEnabled = false
    INFO: Nodes are prepared for verification.
    INFO: *********************************************
    INFO: Physical Memory: This is a prerequisite condition to test whether the system has at least 128MB (131072.0KB) of total physical memory.
    INFO: Severity:IGNORABLE
    INFO: OverallStatus:SUCCESSFUL
    INFO: -----------------------------------------------
    INFO: Verification Result for Node:PN-PC
    INFO: Expected Value:128MB (131072.0KB)
    INFO: Actual Value:5.9491GB (6238064.0KB)
    INFO: -----------------------------------------------
    INFO: *********************************************
    INFO: Free Space: PN-PC:C:\Temp: This is a prerequisite condition to test whether sufficient free space is available in the file system.
    INFO: Severity:IGNORABLE
    INFO: OverallStatus:SUCCESSFUL
    INFO: -----------------------------------------------
    INFO: Verification Result for Node:PN-PC
    INFO: Expected Value:130MB
    INFO: Actual Value:384.1495GB
    INFO: -----------------------------------------------
    INFO: *********************************************
    INFO: Architecture: This is a prerequisite condition to test whether the system has a certified architecture.
    INFO: Severity:CRITICAL
    INFO: OverallStatus:SUCCESSFUL
    INFO: -----------------------------------------------
    INFO: Verification Result for Node:PN-PC
    INFO: Expected Value:64-bit
    INFO: Actual Value:64-bit
    INFO: -----------------------------------------------
    INFO: *********************************************
    INFO: Environment variable: "PATH": This test checks whether the length of the environment variable "PATH" does not exceed the recommended length.
    INFO: Severity:CRITICAL
    INFO: OverallStatus:SUCCESSFUL
    INFO: -----------------------------------------------
    INFO: Verification Result for Node:PN-PC
    INFO: Expected Value:1023
    INFO: Actual Value:369
    INFO: -----------------------------------------------
    INFO: All forked task are completed at state checkPrereqs
    INFO: Completed background operations
    INFO: Moved to state <checkPrereqs>
    INFO: Waiting for completion of background operations
    INFO: Completed background operations
    INFO: Validating view at state <checkPrereqs>
    INFO: Completed validating view at state <checkPrereqs>
    INFO: Validating state <checkPrereqs>
    INFO: Using default Validator configured in the Action class oracle.install.ivw.client.action.PrereqAction
    INFO: Completed validating state <checkPrereqs>
    INFO: Verifying route success
    INFO: Get view named [SummaryUI]

    Yes, I've tried using the -jreLoc option but it doesn't seem to like the path.
    Version 6.0.200.2 is at C:\Program Files (x86)\Java\jre6\bin
    C:\Stage\win64_11gR2_client\client>setup.exe -jreLoc "C:\Program Files (x86)\Java\jre6"
    Starting Oracle Universal Installer...
    Checking monitor: must be configured to display at least 256 colors Higher than 256 .    Actual 4294967296     Passed
    Preparing to launch Oracle Universal Installer from C:\Temp\OraInstall2013-06-17_04-47-16PM. Please wait ...
    The Java RunTime Environment was not found at "C:\Program Files (x86)\Java\jre6"\bin\javaw.exe. Hence, the Oracle Universal Installer cannot be run.
    Please visit http://www.javasoft.com and install JRE version 1.3.1 or higher and try again
    I downloaded a newer version to C:\APPS\Java, thinking the space in the path for Program Files (x86) might be a problem, but that also fails with the same error message.
    Version 7.0.210.11 is at C:\APPS\Java\bin
    C:\Stage\win64_11gR2_client\client>setup.exe -jreLoc "C:\APPS\Java\"
    Starting Oracle Universal Installer...
    Checking monitor: must be configured to display at least 256 colors Higher than 256 .    Actual 4294967296     Passed
    Preparing to launch Oracle Universal Installer from C:\Temp\OraInstall2013-06-17_05-09-13PM. Please wait ...
    The Java RunTime Environment was not found at C:APPS\Java"\bin\javaw.exe. Hence, the Oracle Universal Installer cannot be run.
    Please visit http://www.javasoft.com and install JRE version 1.3.1 or higher and try again
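    One hedged observation: with -jreLoc "C:\APPS\Java\", the trailing backslash just before the closing quote can be consumed by the argument parsing, which would explain the mangled path C:APPS\Java" in the error message. It may be worth retrying without the trailing backslash:
    C:\Stage\win64_11gR2_client\client>setup.exe -jreLoc C:\APPS\Java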

  • Oracle AS control hangs after user login - AIX 5.3

    Hi,
    We are engaged with a customer who deploys OracleAS 10.1.2.0.2 on AIX 5.3. After installation, AS control hangs after user login. This issue is addressed in the release notes and in MetaLink doc 365725.1. We followed the guide in MetaLink to make the modification, and we could access AS control successfully after that. However, for other reasons we restarted AIX, and after that AS control hangs again. This is quite strange. Has anyone seen this symptom before? How can we resolve this issue?
    Thanks in advance.
    Sindhiya V.

    Thanks everyone for the response and help.
    I completed the task yesterday with the permission method. However, I think I found the solution after completion. The trick lies in the oraInst.loc file, which in my case (AIX 5.3) is in the /etc directory.
    If you do an opatch lsinventory, it shows you the Central Inventory location and the location of oraInst.loc. In my case it was pointing to /oracle/DE1/920_64/inventory. Therefore, even though I am logging in as oraqa1 with all environment and Oracle settings correct, it still points to the DE1 inventory.
    I think modifying the Central Inventory location in oraInst.loc will solve this problem.
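    For reference, oraInst.loc is normally just a two-line file; a sketch with an illustrative path (not the poster's actual value):
    inventory_loc=/oracle/oraInventory
    inst_group=dba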

  • Parallel Index creation takes more time...!!

    OS - Windows 2008 Server R2
    Oracle - 10.2.0.3.0
    My table size is - 400gb
    Number of records - 657,45,95,123
    my column definition first_col varchar2(22) ; -> I am creating index on this column
    first_col -> actual average size of column value is 10
    I started to create index on this column by following command
    CREATE INDEX CALL_GROUP1_ANO ON CALL_GROUP1(A_NO) LOCAL PARALLEL 8 NOLOGGING COMPRESS ;
    -> In my first attempt, after three hours I got an error:
    ORA-01652: unable to extend temp segment by 128 in tablespace TEMP
    So I increased the size of the temp tablespace to 380GB, because I expect the first_col index to be about that size.
    -> In my second attempt the index creation is still running even after 17 hours...!!
    Now the usage of temp space is 162 GB ... and it is still growing.
    -> I checked EM Advisor Central ADDM:
    It says: The PGA was inadequately sized, causing additional I/O to temporary tablespaces to consume significant database time.
    1. Why does this take so much temp space?
    2. Is this much time (more than 17 hrs) really required to CREATE INDEX with parallel processing?
    3. How do I calculate and set the size of the PGA?

    OraFighter wrote:
    Oracle - 10.2.0.3.0
    My table size is - 400gb
    Number of records - 657,45,95,123
    my column definition first_col varchar2(22) ; -> I am creating index on this column
    first_col -> actual average size of column value is 10
    I started to create index on this column by following command
    CREATE INDEX CALL_GROUP1_ANO ON CALL_GROUP1(A_NO) LOCAL PARALLEL 8 NOLOGGING COMPRESS ;
    Now the usage of temp space is 162 GB ... and it is still growing.
    The entire data set has to be sorted - and the space needed doesn't really vary with the degree of parallelism.
    6,574,595,123 index entries with a key size of 10 bytes each (assuming that in your choice of character set one character = one byte) requires per row approximately
    4 bytes row overhead 10 bytes data, 2 bytes column overhead for data, 6 bytes rowid, 2 bytes column overhead for rowid = 24 bytes.
    For the sorting overheads, using the version 2 sort, you need approximately 1 pointer per row, which is 8 bytes (I assumed you're on 64 bit Oracle on this platform) - giving a total of 32 bytes per row.
    32 * 6,574,595,123 / 1073741824 = 196 GB
    You haven't said how many partitions you have, but you might want to consider creating the index unusable, then issuing a rebuild command on each partition in turn. From "Practical Oracle 8i":
    In the absence of partitioned tables, what would you do if you needed to create a new index on a massive data set to address a new user requirement? Can you imagine the time it would take to create an index on a 450M row table, not to mention the amount of space needed in the temporary segment. It's the sort of job that you schedule for Christmas or Easter and buy a couple of extra discs to add to the temporary tablespace.
    With suitably partitioned tables, and perhaps a suitably friendly application, the scale of the problems isn't really that great, because you can build the index on each partition in turn. This trick depends on a little SQL feature that appears to be legal even though I haven't managed to find it in the SQL reference manual:
         create index big_new_index on partitioned_table (colX)
         local
         UNUSABLE
         tablespace scratchpad
    The key word is UNUSABLE. Although the manual states that you can 'alter' an index to be unusable, it does not suggest that you can create it as initially unusable, nevertheless this statement works. The effect is to put the definition of the index into the data dictionary, and allocate all the necessary segments and partitions for the index - but it does not do any of the real work that would normally be involved in building an index on 450M rows.
    (The trick was eventually documented a couple of years after I wrote the book.)
    Regards
    Jonathan Lewis
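    For illustration, the per-partition rebuild that follows such an UNUSABLE create might look like this (index, table and partition names are hypothetical):
         create index big_new_index on partitioned_table (colX) local unusable tablespace scratchpad;
         alter index big_new_index rebuild partition p01 nologging;
         alter index big_new_index rebuild partition p02 nologging;
         -- ...and so on for each partition in turn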

  • Error in index creation: null

    Hi,
    I have a problem when I try to create an index.
    The message I get is: Could not create the index: null
    But apparently the index is created. I try to add data sources, but then I get a portal runtime error.
    Looking in the TREX monitor, the queue is created (obviously without any entries).
    And finally, when I try to create a taxonomy (after choosing the TREX Search and Classification service in index creation), the New button doesn't appear!
    Could anyone give me a clue?
    Thanks in advance.
    Guillermo.

    Please visit Google and issue
    ora-29833 site:oracle.com
    Kindly do this with every error you get.
    Sybrand Bakker
    Senior Oracle DBA

  • Fast index creation suggestions wanted

    Hi:
    I've loaded a table with a little over 100,000,000 records. The table has several indexes which I must now create, and I need to do this as fast as possible.
    I've read the excellent article by Don Burleson (http://www.dba-oracle.com/oracle_tips_index_speed.htm) but still have a few questions.
    1) If the table is not partitioned, does it still make sense to use "parallel"?
    2) The digit(s) following "compress" indicate the number of consecutive columns at the head of the index that have duplicates. True?
    3) How will the compressed index affect query performance (vs. not compressed) down the line?
    4) In the future I will be doing lots and lots of updates on the indexed columns, as well as lots of record deletes and inserts into/out of the table. Will these updates/inserts/deletes run faster or slower given that the indexes are compressed vs. if they were not?
    5) In order to speed up the sorting, is it better to add datafiles to the TEMP tablespace or to create more TEMP tablespaces (remember, running "parallel")?
    Thanks in Advance

    There are people who would argue that excellent and Mr. Burleson do not belong in the same sentence.
    1) Yes, you can still use parallel (and nologging) to create the index, but don't expect 20 - 30 times faster index creation.
    2) It is the number of columns to compress by; they do not necessarily have to have duplicates. For a unique index the default is the number of columns - 1, for a non-unique index the default is the number of columns.
    3) If you do a lot of range scans or fast full index scans on that index, then you may see some performance benefit from reading fewer blocks. If the index is mostly used in equality predicates, then the performance benefit will likely be minimal.
    4) It really depends on too many factors to predict. The performance of inserts, updates and deletes will be either
    A) Slower
    B) The same
    C) Faster
    5) If you are on 10G, then I would look at temporary tablespace groups which can be beneficial for parallel operations. If not, then allocate as much memory as possible to sort_area_size to minimize disk sorts, and add space to your temporary tablespace to avoid unable to extend. Adding additional temporary tablespaces will not help because a user can only use one temporary tablespace at a time, and parallel index creation is only one user.
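    For illustration, a temporary tablespace group on 10g can be set up along these lines (names, paths and sizes are hypothetical):
         CREATE TEMPORARY TABLESPACE temp1 TEMPFILE '/u01/oradata/temp1_01.dbf' SIZE 10G TABLESPACE GROUP temp_grp;
         CREATE TEMPORARY TABLESPACE temp2 TEMPFILE '/u01/oradata/temp2_01.dbf' SIZE 10G TABLESPACE GROUP temp_grp;
         ALTER USER scott TEMPORARY TABLESPACE temp_grp;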
    You might want to do some searching at Tom Kyte's site http://asktom.oracle.com for some more responsible answers. Tom and Don have had their disagreements in the past, and in most of them my money would be on Tom to be correct.
    HTH
    John

  • Error while running index creation

    Hi All ,
    I am running the index creation step as below.
    alter session enable parallel DDL;
    alter session set workarea_size_policy=manual;
    alter session set sort_area_size=250000000;
    alter session set db_file_multiblock_read_count=128;
    CREATE INDEX "CUST_ROUTE_IDX" ON "CUST_2_K2K" ("PP2S", "KK2S")
    PCTFREE 10 INITRANS 2 MAXTRANS 255 NOLOGGING
    STORAGE( INITIAL 5M
    NEXT 16777216
    MINEXTENTS 1
    MAXEXTENTS 2147483645
    PCTINCREASE 0
    FREELISTS 1
    FREELIST GROUPS 1
    BUFFER_POOL DEFAULT)
    TABLESPACE "INDX" PARALLEL 2;
    When I run it as above, I get the following error:
    CREATE INDEX "CUST_ROUTE_IDX" ON "CUST_ROUTE_IDX" ("PP2S", "KK2S")
    **ERROR at line 1:**
    **ORA-00604: error occurred at recursive SQL level 1**
    **ORA-04031: unable to allocate 38410272 bytes of shared memory ("large**
    **pool","SEQ$","QERHJ hash-joi","kllcqc:kllcqslt")**
    The DB is Oracle 10gR2 installed on an 8-CPU machine with 32 GB of RAM. From the Unix top command I can see that 18GB of RAM is free.
    When I remove "alter session set sort_area_size=250000000;" and run the above script, no error occurs.
    But enough memory is available on the machine according to Unix. Can anyone help me understand why this error occurs?
    Also, when I do "show parameter sort_area_size" in any new SQL*Plus session it displays 65536.
    Is this value in bytes, KB or MB? Please advise.
    Thanks in Advance

    Please read any error message carefully, prior to posting.
    **ORA-04031: unable to allocate 38410272 bytes of shared memory (*"large**
    **pool"*,"SEQ$","QERHJ hash-joi","kllcqc:kllcqslt")**
    Oracle is trying to allocate memory in the large pool, and the large pool is too small.
    The sort_area_size has nothing to do with it.
    Also there is no gain in setting workarea_size_policy to manual, as Oracle will allocate memory beyond pga_aggregate_target automagically.
    There is also no gain in setting db_file_multiblock_read_count, as Oracle will adjust that automagically.
    Sort_area_size is in bytes, as documented, and it is your value in the spfile/init.ora which is the default or which you have set.
    You can read about this in the documentation which is online at http://tahiti.oracle.com
    Sybrand Bakker
    Senior Oracle DBA
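    For illustration, the current size and usage of the large pool can be checked, and (if it really is undersized) increased, along these lines; the 200M figure is only an example and SCOPE=BOTH assumes an spfile:
         SELECT pool, name, bytes FROM v$sgastat WHERE pool = 'large pool';
         show parameter large_pool_size
         ALTER SYSTEM SET large_pool_size = 200M SCOPE = BOTH;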

  • CONTEXT index creation - performance!

    Hi,
    I have a table with about 5Million rows. The content that needs to be indexed is of RAW datatype. The average size (length) of this field is about 50 characters (it could be more).
    I am trying to index this column to perform a keyword search. Details are furnished below.
    table:
    SQL> desc kwtai
    Name Null? Type
    TSD_HH24 DATE
    COUNTRY_CODE_ALPHA_2 VARCHAR2(2)
    ONETWORK NUMBER(6)
    OADDRESS VARCHAR2(25)
    DNETWORK NUMBER(6)
    DADDRESS VARCHAR2(25)
    MESSAGE_LENGTH NUMBER
    MESSAGE_CONTENT RAW(2000)
    Preferences:-
    begin
    Ctx_Ddl.Create_Preference('mc_storage', 'BASIC_STORAGE');
    ctx_ddl.set_attribute('mc_storage','I_TABLE_CLAUSE',
    'tablespace large_index storage (initial 10M next 10M)');
    ctx_ddl.set_attribute('mc_storage', 'K_TABLE_CLAUSE',
    'tablespace large_index storage (initial 10M next 10M)');
    ctx_ddl.set_attribute('mc_storage', 'R_TABLE_CLAUSE',
    'tablespace large_index storage (initial 1M) lob (data) store as (cache)');
    ctx_ddl.set_attribute('mc_storage', 'N_TABLE_CLAUSE',
    'tablespace large_index storage (initial 1M)');
    ctx_ddl.set_attribute('mc_storage', 'I_INDEX_CLAUSE',
    'tablespace large_index storage (initial 1M) compress 2');
    ctx_ddl.create_preference('mc_lex', 'BASIC_LEXER');
    ctx_ddl.set_attribute('mc_lex', 'skipjoins', '_-"''`~!@#$%^&*()+=|}{[]\:;<>?/.,');
    ctx_ddl.set_attribute('mc_lex', 'INDEX_STEMS','NONE');
    end;
    create index kwtaidx on kwtai (message_content) indextype is ctxsys.context
    parameters (' lexer mc_lex storage mc_storage memory 500M ')
    parallel 16;
    This create index takes about 4 hours to complete on an 8-CPU dual-core machine.
    This is on Oracle 10g (10.2.0.4).
    The reason I am creating the index, as opposed to syncing it, is that the data gets loaded into this table only once a day and it gets cleared once my keyword analysis is done.
    Any pointers to speed up the index creation would be really appreciated! Thanks in advance!

    My base table has the text that needs to be indexed stored in the "MESSAGE_CONTENT" column, which for now is of RAW data type. The data stored in this table is in hex representation.
    Some examples -
    MESSAGE_CONTENT
    616C70686120626574612067616D6D612064656C746120657073696C6F6E207A657461206E69F16F
    616C70686120626574612067616D6D612064656C746120657073696C6F6E207A657461
    616C70686120626574612067616D6D612064656C746120657073696C6F6E207A657461206E69C3B16F
    6865792E2C2C77686174277320676F696E67206F6E2E2E2E7066206368616E67277320697320736F6D652072657374617572616E742E2074686579206172652070736564756F2D636F6F6C
    54686520477265656B20616C7068616265742069732074686520736372697074207468617420686173206265656E
    54686520477265656B20616C7068616265742069732074686520736372697074207468617420686173206265656E20706F73742D64617461
    Now, with your suggestion, I tried to bypass this. What I did was add a format column to my base table and update it to "TEXT". My database is in UTF8.
    Now when I create the index with the following preferences it takes less than a minute.
    begin
    Ctx_Ddl.Create_Preference('kwta_storage', 'BASIC_STORAGE');
    ctx_ddl.set_attribute('kwta_storage','I_TABLE_CLAUSE',
    'tablespace TEXT_INDEX storage (initial 10M next 10M)');
    ctx_ddl.set_attribute('kwta_storage', 'K_TABLE_CLAUSE',
    'tablespace TEXT_INDEX storage (initial 10M next 10M)');
    ctx_ddl.set_attribute('kwta_storage', 'R_TABLE_CLAUSE',
    'tablespace TEXT_INDEX storage (initial 1M) lob (data) store as (cache)');
    ctx_ddl.set_attribute('kwta_storage', 'N_TABLE_CLAUSE',
    'tablespace TEXT_INDEX storage (initial 1M)');
    ctx_ddl.set_attribute('kwta_storage', 'I_INDEX_CLAUSE',
    'tablespace TEXT_INDEX storage (initial 1M) compress 2');
    ctx_ddl.create_preference('mylex', 'BASIC_LEXER');
    ctx_ddl.set_attribute('mylex', 'skipjoins', '_-"''`~!@#$%^&*()+=|}{[]\:;<>?/,');
    ctx_ddl.set_attribute('mylex','punctuations','.?!');
    ctx_ddl.set_attribute('mylex', 'INDEX_STEMS','NONE');
    ctx_ddl.set_attribute('mylex', 'continuation','\-');
    Ctx_Ddl.Create_Stoplist ( 'mystop' );
    Ctx_Ddl.Add_Stopword ( 'mystop', 'is' );
    Ctx_Ddl.Add_Stopword ( 'mystop', 'has' );
    Ctx_Ddl.Add_Stopword ( 'mystop', 'the' );
    Ctx_Ddl.Add_Stopword ( 'mystop', 'that' );
    end;
    create index kwtaidx on kwtai (message_content) indextype is ctxsys.context
    parameters ('filter ctxsys.auto_filter format column fmt stoplist mystop lexer mylex storage kwta_storage memory 500M')
    parallel 16;
    When I select distinct tokens from the $I table I get the following:
    TOKEN_TEXT
    RESTAURANT
    GREEK
    WHATS
    ARE
    DELTA
    ZETA
    ALPHA
    ALPHABET
    EPSILON
    PF
    PSEDUOCOOL
    SOME
    CHANGS
    NIÃO
    ON
    POSTDATA
    SCRIPT
    BEEN
    GAMMA
    GOING
    HEY
    NI
    O
    THEY
    BETA
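    For reference, the $I token table of a CONTEXT index is normally named DR$<index_name>$I, so a token list like the one above typically comes from a query along these lines (index name taken from the post):
    SELECT DISTINCT token_text FROM dr$kwtaidx$i ORDER BY token_text;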
    Now what I am also wondering is whether the text (message_content column) is being converted to UTF8 (the database character set) by AUTO_FILTER. Is my assumption correct? I'm not sure how to validate this.
    And would you kindly share why RAW must not be used in this case?
    Thanks for all your pointers!

  • Index creation online - performance impact on database

    hi,
    I have oracle 11.1.0.7 database running on Linux as 3 node RAC.
    I have a huge table which has more than 255 columns and is about 400GB in size; it is also highly fragmented because of constant DML activity.
    Questions:
    1. For now I am trying to create an index ONLINE while the business applications are running.
    Will there be any performance impact on the database when creating an index online on a single column of table 'TBL' while applications are active against the same table? Basically, does index creation on an object during DML operations on that same object have a performance impact on the database? And is there a major difference in impact between creating the index online and not online?
    2. I tried to build an index on a column which contains NULL values on this same table 'TBL', which has more than 255 columns, is about 400GB in size, is highly fragmented, and has about 140 million rows.
    I asked for the applications to be shut down, but the index creation with a parallel degree of 4 still took more than 6 hours to complete.
    We have a pre-prod database which holds an exported and imported copy of the prod data, so pre-prod is a highly defragmented copy of prod.
    When I created the same index on the same column with NULLs there, it only took 15 minutes to complete.
    I'm not sure why it took more than 6 hours on the highly fragmented copy in prod, compared to 15 minutes on the highly defragmented copy in pre-prod.
    Any thoughts would be helpful.
    Thanks.
    Phil.

    How are you measuring the "fragmentation" of the table ?
    Is the pre-prod database running single instance or RAC ?
    Did you collect any workload stats (AWR / Statspack) on the pre-prod and production systems while creating (or failing to create) the index ?
    Did you check whether the index creation ended up in-memory, one-pass or multi-pass in the two environments?
    The commonest explanation for this type of difference is two-fold:
    a) the older data needs a lot of delayed block cleanout, which results in a lot of random I/O to the undo tablespace - slowing down I/O generally
    b) the newer end of the table is subject to lots of change, so needs a lot of work relating to read-consistency - which also means I/O on the undo system
      --  UPDATED:  but you did say that you had stopped the application so this bit wouldn't have been relevant.
    On top of this, an online (re)build has to lock the table briefly at the start and end of the build, and in a busy system you can wait a long time for the locks to be acquired - and if the system has been busy while the build has been going on it can take quite a long time to apply the journal file to finish the index build.
    Regards
    Jonathan Lewis
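    For reference, whether the sort workareas ran optimally, one-pass or multi-pass can be read from the instance statistics, for example:
         SELECT name, value FROM v$sysstat WHERE name LIKE 'workarea executions%';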

  • Not able to view Forms Server version in Help: About Oracle Applications after the forms upgrade 10.1.2.3.0

    Hi all,
    DB:11.2.0.3.0
    EBS:12.1.3
    O/S: Sun Solaris SPARC 64 bits
    I am not able to view the Forms Server version in Help: About Oracle Applications after the forms upgrade to 10.1.2.3.0, done as per note: Upgrading OracleAS 10g Forms and Reports to 10.1.2.3 (437878.1).
    Java/JRE was upgraded to 1.7.0.45 and the JAR files were regenerated (without the force option). I am able to open forms without any issues.
    A)
    $ORACLE_HOME/bin/frmcmp help=y
    FRM-91500: Unable to start/complete the build.
    B)
    $ORACLE_HOME/bin/rwrun ?|grep Release
    Report Builder: Release 10.1.2.3.0 - Production on Thu Nov
    28 14:20:45 2013
    Is this an issue? Could anyone please share the fix if you have faced a similar issue earlier.
    Thank You for your time
    Regards,

    Hi Hussein,
    You mean reboot the Solaris server and then start the database and application services? We have two databases running on this Solaris server.
    DBWR Trace file shows:
    Read of datafile '+ASMDG002/test1/datafile/system.823.828585081' (fno 1) header failed with ORA-01206
    Rereading datafile 1 header failed with ORA-01206
    V10 STYLE FILE HEADER:
            Compatibility Vsn = 186646528=0xb200000
            Db ID=0=0x0, Db Name='TEST1'
            Activation ID=0=0x0
            Control Seq=31739=0x7bfb, File size=230400=0x38400
            File Number=1, Blksiz=8192, File Type=3 DATA
    Tablespace #0 - SYSTEM  rel_fn:1
    Creation   at   scn: 0x0000.00000004 04/27/2000 23:14:44
    Backup taken at scn: 0x0001.db8e5a1a 04/17/2010 04:16:14 thread:1
    reset logs count:0x316351ab scn: 0x0938.0b32c3b1
    prev reset logs count:0x31279a4c scn: 0x0938.08469022
    recovered at 11/28/2013 19:43:22
    status:0x2004 root dba:0x00c38235 chkpt cnt: 364108 ctl cnt:364107
    begin-hot-backup file size: 230400
    Checkpointed at scn:  0x0938.0cb9fe5a 11/28/2013 15:04:52
    thread:1 rba:(0x132.49a43.10)
    enabled  threads:  01000000 00000000 00000000 00000000 00000000 00000000
    Hot Backup end marker scn: 0x0000.00000000
    aux_file is NOT DEFINED
    Plugged readony: NO
    Plugin scnscn: 0x0000.00000000
    Plugin resetlogs scn/timescn: 0x0000.00000000 01/01/1988
    00:00:00
    Foreign creation scn/timescn: 0x0000.00000000 01/01/1988
    00:00:00
    Foreign checkpoint scn/timescn: 0x0000.00000000 01/01/1988
    00:00:00
    Online move state: 0
    DDE rules only execution for: ORA 1110
    ----- START Event Driven Actions Dump ----
    ---- END Event Driven Actions Dump ----
    ----- START DDE Actions Dump -----
    Executing SYNC actions
    ----- START DDE Action: 'DB_STRUCTURE_INTEGRITY_CHECK' (Async) -----
    Successfully dispatched
    ----- END DDE Action: 'DB_STRUCTURE_INTEGRITY_CHECK'
    (SUCCESS, 0 csec) -----
    Executing ASYNC actions
    ----- END DDE Actions Dump (total 0 csec) -----
    ORA-01186: file 1 failed verification tests
    ORA-01122: database file 1 failed verification check
    ORA-01110: data file 1:
    '+ASMDG002/test1/datafile/system.823.828585081'
    ORA-01206: file is not part of this database - wrong
    database id
    Thanks,

  • Database hangs after retrieving some records....

    hi,
    I created a two-level B+-tree index: for the first level I'm using the logical database name as the key, and for the second-level B+-tree I'm using some other field in my data. I retrieve records based on the logical database name. I'm using C++ to implement my index.
    In the following code I'm retrieving records from the database. The program behaves differently for different logical database names. For example, in my database I have around 4 lakh (400,000) records with logical database name 'A' and around 1 lakh (100,000) records with logical database name 'B'. The following code displays all the 'B' records, but the program hangs after retrieving some of the 'A' records.
    I'm using PAGE_SIZE=8192, numBuffers=2048, and runnig in Fedora Core3.
    DbEnv myEnv(0);
    myEnv.set_cachesize(0, PAGE_SIZE * numBuffers, 0);
    myEnv.set_data_dir("/home/raviov/new/database");
    try {
        myEnv.open("./", DB_CREATE | DB_INIT_MPOOL, 0);
    } catch (DbException &e) {
        cerr << "Exception occurred: " << e.what() << endl;
        exit(1);
    }
    db = new Db(&myEnv, 0);
    db->set_pagesize(PAGE_SIZE);
    db->set_bt_compare(compare_int);
    dbname = itoa(p);
    try {
        db->open(NULL,
                 "treedb.db",
                 dbname,
                 DB_BTREE,
                 0,
                 0);
    } catch (DbException &e) {
        cerr << "Failed to open DB object" << endl;
        exit(1);
    }
    db->cursor(NULL, &cursorp, 0);
    key = new Dbt();
    data = new Dbt();
    while ((ret = cursorp->get(key, data, DB_NEXT)) == 0) {
        j = *((int *) key->get_data());
        q = (*((struct tuple *) data->get_data())).startPos;
        r = (*((struct tuple *) data->get_data())).endPos;
        s = (*((struct tuple *) data->get_data())).level;
        cout << "position : " << j << "\t";
        cout << "start : " << q << "\t";
        cout << "end : " << r << "\t";
        cout << "level : " << s << "\n";
    }
    if (ret == DB_NOTFOUND) {
        cout << "no records";
        exit(1);
    }
    if (cursorp != NULL)
        cursorp->close();
    try {
        db->close(0);
    } catch (DbException &e) {
        cerr << "Failed to close DB object" << endl;
        exit(1);
    }
    myEnv.close(0);

    Hi Andrei,
    thank you for the reply. I'm not using any secondary indices, subdatabases, or threads.
    The following code builds the index...
    #include <stdio.h>
    #include <db_cxx.h>
    #include <iostream.h>
    #include <db.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/types.h>
    int PAGE_SIZE=8192;
    int numBuffers=2048;
    char *itoa(const int x)
    {
        char buf[100];
        snprintf(buf, sizeof(buf), "%d", x);
        return strdup(buf);
    }
    int compare_int(Db *dbp, const Dbt *a, const Dbt *b)
    {
        int ai;
        int bi;
        memcpy(&ai, a->get_data(), sizeof(int));
        memcpy(&bi, b->get_data(), sizeof(int));
        return (ai - bi);
    }
    struct tuple
    {
        int startPos;
        int endPos;
        int level;
    };
    main()
         FILE *fp;
         int i,j,k,l,m,n,total=0;
         char dbname[500],filename[500],str [500];
         tuple t;
         Db *db;
         Dbt key,data;
         DbEnv myEnv(0);
         myEnv.set_cachesize(0, PAGE_SIZE * numBuffers, 0);
         try {
                myEnv.open("/home/raviov/Desktop/example", DB_CREATE | DB_INIT_MPOOL, 0);
           catch (DbException &e) {
                  cerr << "Exception occurred: " << e.what() << endl;
                  exit(1);
         for(n=0;n<=84;n++)
              db = new Db(&myEnv, 0);     // Instantiate the Db object
                    db->set_bt_compare(compare_int);
               db->set_pagesize(PAGE_SIZE);
              strcpy(filename,"/home/raviov/Desktop/GTCReport/code/sequence/splitter/tree/");
              strcat(filename,itoa(n));
              fp=fopen(filename,"r");
              if(fp==NULL)
                   cout<<"error in opening the file";
                   exit(0);
              while(fgets (str , 500 , fp)!=NULL)
                   sscanf(str,"%d(%d,%d,%d,%d)",&i,&j,&k,&l,&m);
                   key=new Dbt(&i,sizeof(int));
                   if(total==0)
                        strcpy(dbname,itoa(j));
                   t.startPos=k;
                   t.endPos=l;
                   t.level=m;
                   data=new Dbt(&t,sizeof(t));     
                   if(total==0)
                        try
                             db->open(NULL,
                             "tree.db",
                                       dbname,
                              DB_BTREE,
                                DB_CREATE,
                                0);
                        catch(DbException &e)
                               cerr << "Failed to create DB object" << endl;
                             exit(1);
                        total=99;
                   int ret=db->put(NULL,key,data,DB_NOOVERWRITE);
                   if(ret==DB_KEYEXIST)
                        cout<<"key already exist\n";
                        exit(1);
                   delete key;
                   delete data;
              total=0;
              fclose(fp);
              try
                   db->close(0);
              catch(DbException &e)
                     cerr << "Failed to close DB object" << endl;
                   exit(1);
         myEnv.close(0);
    }
    The following code retrieves the records from the database that we built above...
    #include <stdio.h>
    #include <db_cxx.h>
    #include <iostream.h>
    #include <db.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/types.h>
    int PAGE_SIZE=8192;
    int numBuffers=2048;
    char *itoa(const int x)
    {
        char buf[100];
        snprintf(buf, sizeof(buf), "%d", x);
        return strdup(buf);
    }
    int compare_int(Db *dbp, const Dbt *a, const Dbt *b)
    {
        int ai;
        int bi;
        memcpy(&ai, a->get_data(), sizeof(int));
        memcpy(&bi, b->get_data(), sizeof(int));
        return (ai - bi);
    }
    struct tuple
    {
        int startPos;
        int endPos;
        int level;
    };
    main()
         FILE *fp;
         int i,j,k,l,m,n,total=0;
         char *dbname;
         char filename[200],str [100];
         tuple t;
         Db *db;
         Dbt key,data;
         Dbc *cursorp=NULL;
         int ret,x=4;
         char *ravi;
         int p=84763;
         int y=2134872;
         int r,s,q,count=0;
         DbEnv myEnv(0);
         myEnv.set_cachesize(0, PAGE_SIZE * numBuffers, 0);
         myEnv.set_data_dir("/home/raviov/new/database");
              try {
                      myEnv.open("./", DB_CREATE | DB_INIT_MPOOL, 0);
               catch (DbException &e) {
                      cerr << "Exception occurred: " << e.what() << endl;
                      exit(1);
              db=new Db(&myEnv,0);
              db->set_pagesize(PAGE_SIZE);     
              db->set_bt_compare(compare_int);
         dbname=itoa(p);
         try
              db->open(NULL,
                    "tree.db",
                     dbname,
                     DB_BTREE,
                     0,
                     0);
         catch(DbException &e)
                cerr << "Failed to open DB object" << endl;
              exit(1);
         db->cursor(NULL,&cursorp,0);
         key=new Dbt();
         data=new Dbt();
         while(ret=(cursorp->get(key,data,DB_NEXT))==0)
              j=*((int*)key->get_data());
              q=(*((struct tuple*)data->get_data())).startPos;
              r=(*((struct tuple*)data->get_data())).endPos;
              s=(*((struct tuple*)data->get_data())).level;
              cout<<"position   : "<<j<<"\t";
              cout<<"start : "<<q<<"\t";
              cout<<"end   : "<<r<<"\t";
              cout<<"level   : "<<s<<"\n";
              delete key;
              delete data;
         if(ret==DB_NOTFOUND)
              cout<<"no records";
              exit(1);
         if(cursorp!=NULL)
              cursorp->close();
         try
              db->close(0);
         catch(DbException &e)
                cerr << "Failed to close DB object" << endl;
              exit(1);
         myEnv.close(0);     
    }
