Expdp table with grants, indexes, constraints etc.
Hi,
I need to export a table (and sometimes table partitions) including grants, constraints, and indexes.
This is what I'm using:
expdp parfile=par.exp directory=expdir dumpfile=exp1.dmp tables=hr.employee
par.exp is:
userid="/ as sysdba"
JOB_NAME=dbajob
Do I need to include more options?
Thanks in advance.
Hi. As far as I know, the indexes and the constraints need to be created with a different name.
Try DBMS_METADATA.GET_DDL to extract the DDL of your constraints and indexes.
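For what it's worth, a table-mode Data Pump export already carries the table's object grants, indexes, constraints (and triggers and statistics) along by default, so no extra options should be needed. If you prefer to keep everything in one place, here is a minimal sketch of a self-contained par.exp using the names from your post; the EXCLUDE line is optional and only shows how to drop an object type you do not want (statistics in this case):
userid="/ as sysdba"
JOB_NAME=dbajob
DIRECTORY=expdir
DUMPFILE=exp1.dmp
TABLES=hr.employee
EXCLUDE=STATISTICS
The command line then shrinks to: expdp parfile=par.exp
You can check afterwards what went into the dump with impdp ... SQLFILE=ddl.sql, which writes out the DDL (grants, indexes, and constraints included) without importing anything.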
Similar Messages
-
Drop parent table with disabled child constraints?
I have a table I would like to drop and recreate. I do not want to have to recreate all the foreign key constraints again. Can I disable the foreign key constraints, drop the table and then recreate the table and enable the foreign key constraints?
Thanks,
John E.
No.
The triggers, indexes, constraints, etc. associated with a table are bound to the table they are defined on.
They owe their existence to the existence of the table.
If the table goes, so do the constraints.
If you have Oracle9i, you can use DBMS_METADATA to generate the DDL script of the table before dropping it.
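A minimal sketch of that DBMS_METADATA approach (owner and table names are placeholders; GET_DEPENDENT_DDL with REF_CONSTRAINT also captures the foreign keys on the child tables that reference this one, so they can be re-created afterwards):
SET LONG 100000 PAGESIZE 0
-- DDL of the table itself, including its own constraints and indexes
SELECT DBMS_METADATA.GET_DDL('TABLE', 'MY_TABLE', 'MY_OWNER') FROM dual;
-- DDL of the foreign keys on other tables that point at it
SELECT DBMS_METADATA.GET_DEPENDENT_DDL('REF_CONSTRAINT', 'MY_TABLE', 'MY_OWNER') FROM dual;
Spool the output to a file, drop and recreate the table, then run the spooled script to restore the referencing constraints.
-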
Select count from large fact tables with bitmap indexes on them
Hi..
I have several large fact tables with bitmap indexes on them, and when I do a select count(*) from these tables, I get a different result than when I do a select count(*), column one from the table, group by column one. I don't have any null values in these columns. Is there a patch or a one-off that can rectify this?
Thx
You may have corruption in the index if the queries ...
Select /*+ full(t) */ count(*) from my_table t
... and ...
Select /*+ index_combine(t my_index) */ count(*) from my_table t;
... give different results.
Look at Metalink for patches, and in the meantime drop and recreate the indexes, or make them unusable and then rebuild them.
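A sketch of the unusable-then-rebuild route (the index name is a placeholder; while the index is unusable the optimizer ignores it, so queries fall back to full scans in the meantime):
ALTER INDEX my_bitmap_index UNUSABLE;
-- queries ignore the index while it is unusable
ALTER INDEX my_bitmap_index REBUILD;
-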
ORA-00604 ORA-00904 When query partitioned table with partitioned indexes
Got ORA-00604 and ORA-00904 when querying a partitioned table with partitioned indexes in the data warehouse environment.
The query runs fine when querying the partitioned table without partitioned indexes.
Here is the query.
SELECT al2.vdc_name, al7.model_series_name, COUNT (DISTINCT (al1.vin)),
al27.accessory_code
FROM vlc.veh_vdc_accessorization_fact al1,
vlc.vdc_dim al2,
vlc.model_attribute_dim al7,
vlc.ppo_list_dim al18,
vlc.ppo_list_indiv_type_dim al23,
vlc.accy_type_dim al27
WHERE ( al2.vdc_id = al1.vdc_location_id
AND al7.model_attribute_id = al1.model_attribute_id
AND al18.mydppolist_id = al1.ppo_list_id
AND al23.mydppolist_id = al18.mydppolist_id
AND al23.mydaccytyp_id = al27.mydaccytyp_id
AND ( al7.model_series_name IN ('SCION TC', 'SCION XA', 'SCION XB')
AND al2.vdc_name IN
('PORT OF BALTIMORE',
'PORT OF JACKSONVILLE - LEXUS',
'PORT OF LONG BEACH',
'PORT OF NEWARK',
'PORT OF PORTLAND')
AND al27.accessory_code IN ('42', '43', '44', '45')
)
)
GROUP BY al2.vdc_name, al7.model_series_name, al27.accessory_code
I would recommend that you post this at the following OTN forum:
Database - General
General Database Discussions
and perhaps at:
Oracle Warehouse Builder
Warehouse Builder
The Oracle OLAP forum typically does not cover general data warehousing topics. -
Constantly inserting into large table with unique index... Guidance?
Hello all;
So here is my world. Central to our data monitoring system we have an Oracle database running Standard Edition One licensing (please don't laugh... I understand it is comical).
This DB is about 1.7 TB of small record data.
One table in particular (the raw incoming data, 350gb, 8 billion rows, just in the table) is fed millions of rows each day in real time by two to three main "data collectors" or what have you. Data must be available in this table "as fast as possible" once it is received.
This table has 6 columns (one varchar usually empty, a few numerics including a source id, a timestamp and a create time).
The data is collected in chronological order (increasing timestamp) 90% of the time (though sometimes the timestamp may be very old and catch up to current). The other 10% of the time the data can be out of order according to the timestamp.
This table has two indexes, unique (sourceid, timestamp), and a non unique (create time). (FYI, this used to be an IOT until we had to add the second index on create time, at which point a secondary index on create time slowed the IOT to a crawl)
About 80% of this data is removed after it ages beyond 3 months; 20% is retained as "special" long term data (customer pays for longer raw source retention). The data is removed using delete statements. This table is never (99.99% of the time) updated. The indexes are not rebuilt... ever... as a rebuild is about a 20+ hour process, and without online rebuilds since we are standard one, this is just not possible.
Now what we are observing is that for the inserts into this table:
- Inserts are much slower with a "wider" cardinality of the sourceid of the data being inserted. What I mean is that 10,000 inserts for 10,000 sourceids (regardless of timestamp) is MUCH, MUCH slower than 10,000 inserts for a single sourceid. This makes sense to me, as I understand that Oracle must inspect more branches of the index for uniqueness, and more distinct physical blocks will be used to store the new index data. There are about 2 million unique sourceids across our system.
- Over time, Oracle is requesting more and more RAM to satisfy these inserts in a timely manner. My understanding here is that Oracle is attempting to hold the leaf blocks of these indexes perpetually in the buffer cache. Our system does have a 99% cache hit rate. However, we are seeing Oracle requiring roughly 10GB extra RAM per quarter to 6 months; we're at about 50GB of RAM just for Oracle already.
- If I emulate our production load on a brand new, empty table / indexes, performance is easily 10x to 20x faster than what I see when I do the same tests with the large production copies of data.
We have the following assumption: partitioning this table based on a good logical grouping of sourceid, and then timestamp, will help reduce the work required by Oracle to verify uniqueness of data, reduce the amount of data that must be cached by Oracle, and allow us to handle our "older than 3 months" purge at a partition level, greatly reducing table and index fragmentation.
Based on our hardware, it's going to be about a million dollar hit to upgrade to Enterprise (with partitioning), plus a couple hundred thousand a year in support. Currently I think we pay a whopping 5 grand a year in support, if that, in total Oracle costs. This is going to be a huge pill for our company to swallow.
What I am looking for guidance/help on: should we really expect partitioning to make a difference here? I want to get back that 10x performance difference we see between a fresh empty system and our current production system. I also want to limit Oracle's 10GB/quarter growing need for more buffer cache (the cardinality of sourceid does NOT grow by that much per quarter... maybe 1000s per quarter, out of 2 million).
Also, please, I'd appreciate it if there were no mocking comments about using Standard One up to this point :) I know it is risky and insane and maybe more than a bit silly, but we make do with what we have. And all the credit in the world to Oracle that their "entry" level system has been able to handle everything we've thrown at it so far! :)
Alright all, thank you very much for listening, and I look forward to hearing the opinions of the experts.
Hello,
Here is a link to a blog article that will give you the right questions and answers which apply to your case:
http://jonathanlewis.wordpress.com/?s=delete+90%25
Since you are deleting 80% of your data (old data) based on a timestamp, don't think at all about using the direct path insert /*+ append */ as suggested by one of the contributors to this thread. A direct path load will not re-use any free space made by the delete. You have two indexes:
(a) unique index (sourceid, timestamp)
(b) index(create time)
Your delete logic (based on arrival time) will smash your indexes, since you are always deleting from the left hand side of the index; it means you will have what we call a right hand index. In other words, the scattering of the index keys per leaf block is certainly catastrophic (there is an Oracle internal function named sys_op_lbid that will allow you to verify this index information). There is a fair chance that your two indexes will benefit from a coalesce, as already suggested:
ALTER INDEX indexname COALESCE;
This coalesce should be investigated to be done on a regular basis (maybe after each 80% delete). You also seem to have several timestamps for one sourceid. If that is the case, you should think about compressing this index:
CREATE INDEX indexname ON yourtable (sourceid, timestamp) COMPRESS;
or
ALTER INDEX indexname REBUILD COMPRESS;
You will do it only once. Your index will have a smaller size and may be more efficient than it is now. The index compression will add some extra CPU work during an insert, but it might help improve the overall insert process.
Best Regards
Mohamed Houri
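For reference, a sketch of the sys_op_lbid check mentioned above (it is an undocumented internal function, so treat the output as indicative only; the names and the object_id here are placeholders). It counts index keys per leaf block; a long tail of leaf blocks holding only a handful of keys is the scattering being described:
-- 1) find the index's object_id
SELECT object_id FROM dba_objects
WHERE owner = 'MY_SCHEMA' AND object_name = 'MY_UNIQUE_IDX';
-- 2) keys per leaf block (replace 123456 with the object_id from step 1)
SELECT sys_op_lbid(123456, 'L', t.rowid) AS leaf_block,
       COUNT(*) AS keys_per_leaf
FROM my_table t
WHERE sourceid IS NOT NULL AND timestamp IS NOT NULL
GROUP BY sys_op_lbid(123456, 'L', t.rowid)
ORDER BY keys_per_leaf;
-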
Export with expdp tables with name like 'name1%' or like 'name2%'
How can I export data and metadata only from tables with a name like 'name1%' or like 'name2%'?
What value must the INCLUDE parameter have?
For one match it is:
INCLUDE=TABLE:"LIKE 'REF%' "
This exports tables whose name begins with REF, but I need tables REF and REF1.
if I write like this:
INCLUDE=TABLE:"LIKE 'REF%' ",TABLE:"LIKE 'REF1%' "
or
INCLUDE=TABLE:"LIKE 'REF%' "
INCLUDE=TABLE:"LIKE 'REF1%' "
it says that
Total estimation using BLOCKS method: 0 KB
ORA-39168: Object path TABLE was not found.
ORA-31655: no data or metadata objects selected for job
I suppose such constructions are combined with a logical AND.
I need an OR construction.
C:\>EXPDP SCOTT/TIGER DIRECTORY=DATA_DIR DUMPFILE=EXPDATA.DMP INCLUDE=TABLE:"LIKE'REF%'"
Export: Release 10.1.0.2.0 - Production on Saturday, 07 October, 2006 9:38
Copyright (c) 2003, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - Production
With the Partitioning, OLAP and Data Mining options
FLASHBACK automatically enabled to preserve database integrity.
Starting "SCOTT"."SYS_EXPORT_SCHEMA_01": SCOTT/******** DIRECTORY=DATA_DIR DUMPFILE=EXPDATA.DMP INCLUDE=TABLE:LIKE'REF%'
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 192 KB
Processing object type SCHEMA_EXPORT/TABLE/TABLE
. . exported "SCOTT"."REF123" 9.406 KB 10 rows
. . exported "SCOTT"."REF12345" 9.414 KB 10 rows
. . exported "SCOTT"."REF1ABC" 9.406 KB 10 rowsMaster table "SCOTT"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
Dump file set for SCOTT.SYS_EXPORT_SCHEMA_01 is:
C:\EXPDATA.DMP
Job "SCOTT"."SYS_EXPORT_SCHEMA_01" successfully completed at 09:38
C:\>
INCLUDE=TABLE:"LIKE 'REF1%'
no need to specify "ref1%" bcoz if u specify "ref%" it means all tables after "ref" will be export. -
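That works here because 'REF%' happens to cover 'REF1%' as well. For two genuinely unrelated patterns, as in the original 'name1%' / 'name2%' question, one workaround for the implicit AND between multiple INCLUDE filters is to fold the OR into a single name clause via a subquery. A sketch, assuming the export runs as the schema owner (worth testing on your version, and a parfile is the safer place for it since the clause must survive quoting):
INCLUDE=TABLE:"IN (SELECT table_name FROM user_tables WHERE table_name LIKE 'NAME1%' OR table_name LIKE 'NAME2%')"
-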
Error dialog when open index dialog on tables with spatial index
Hi all,
When I open the index dialog in the properties of a table with a spatial index, the following message appears:
"Index <myIndex> column GEOMETRY, datatype SDO_GEOMETRY is not a valid column type for use in a text index".
I can only click the OK button, and I am not able to modify any of my existing indexes.
Does anyone else have the same problem?
regards markus
Version:
Java: 1.6.0.16
Oracle IDE: 2.1.1.64.39
OS: Linux, Ubuntu 9.10
Edited by: markusin on Mar 3, 2010 12:32 AM
I have the same problem on SQLDev 2.1.1 for Windows. I didn't have this problem in 1.5.
I must use a normal SQL script to create the spatial index.
Vittorio
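For reference, a minimal sketch of such a script (table, column, and index names are placeholders, and the USER_SDO_GEOM_METADATA row for the layer is assumed to exist already):
CREATE INDEX my_table_sidx ON my_table (geometry)
INDEXTYPE IS MDSYS.SPATIAL_INDEX
PARAMETERS ('layer_gtype=point');
-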
Sort a table with no index or key defined?
Greetz!
I've got a script that builds a table, table A, by doing a select into. Later, a select is done on that table and an order by added. However, the sort order is not persisting. Could this be due to there being no indexes or primary keys defined on table A? Can a sort be done on a table without an index of any kind?
Thanks!
Love them all...regardless. - Buddha
To add to Erland's comment:
You cannot "look" directly at the contents of a table - you must use a query to generate a resultset. And a resultset, just like a table, has no defined order unless it was generated using an order by clause. Any order you might observe
is simply an artifact of your data, the load on the db engine, the currently cached data, and many other factors. If you see a consistent order, you might be tempted to assume such an order will always exist. Don't be tempted. Many others
have been so tempted and have discovered the incorrectness of this assumption at a later date - often at an inconvenient time. -
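In other words, the ORDER BY belongs on every query that reads the table, not on the SELECT INTO that populates it. A minimal sketch with placeholder names:
-- build the table; any ordering applied here is NOT persisted
SELECT col1, col2 INTO tableA FROM sourceTable;
-- the only reliable way to get ordered results back out
SELECT col1, col2 FROM tableA ORDER BY col1;
-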
Generate table with dimensions of BPC etc in BW
Hi All,
Is it possible to generate a big table in BW which contains all dimensions and the dimension members of the different dimensions, and to have this table per application set in BPC?
I am looking for a table with column headers like this:
AppSet - Application - Dimension - Dimension Member
Note: When I look in the backend of the BPC 7.5 NW we are using over here, i.e. in BW, I can see there are different tables: tables for the applications per AppSet, tables containing the dimensions for the different applications, and tables for the dimension members. What I'd like to do is combine all these into one big file. Is it possible to generate a master data list in BW?
Thank you in advance for your attention.
Hi,
That's the beauty of the [StarSchema|http://help.sap.com/saphelp_nw70ehp1/helpdata/en/4c/89dc37c7f2d67ae10000009b38f889/content.htm].
The star schema is the simplest data warehouse schema. It is called a star schema because its diagram resembles a star: the center (one or more fact tables) is directly joined to its points - the dimension tables.
A star schema is characterized by one or more very large fact tables that contain the primary information in the data warehouse and a number of much smaller dimension tables (or lookup tables), each of which contains information about the entries for a particular attribute in the fact table.
A star query is a join between a fact table and a number of lookup tables. Each lookup table is joined to the fact table using a primary-key to foreign-key join, but the lookup tables are not joined to each other.
A typical fact table contains keys and measures. For example, a simple fact table might contain the measure Sales, and keys Time, Product, and Market. In this case, there would be corresponding dimension tables for Time, Product, and Market. The Product dimension table, for example, would typically contain information about each product entry that appears in the fact table. A measure is typically a numeric or character column, and can be taken from a specified column from the fact table or calculated from two or more columns in one or a few fact tables.
A star join is a primary-key to foreign-key join between a fact table and dimension tables. The fact table normally has a primary-key composed of a few columns.
The main advantages of star schemas are that they:
• Provide a direct and intuitive mapping between the business entities analyzed by end users and the schema design.
• Provide highly optimized performance for typical data warehouse queries.
Hope this helps you understand the concept behind storing data in different tables instead of in a single table.
Thanks,
Raju
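To make the star join concrete, a generic SQL sketch with hypothetical fact and dimension tables (not BPC-specific): each dimension joins to the fact table on a primary-key/foreign-key pair, and the dimension tables are never joined to each other.
SELECT t.calendar_month,
       p.product_name,
       SUM(f.sales) AS total_sales
FROM sales_fact f
JOIN time_dim t ON t.time_id = f.time_id
JOIN product_dim p ON p.product_id = f.product_id
GROUP BY t.calendar_month, p.product_name;
-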
Script will create database, 3 database objects and publish.
The error is due to the generation script that creates the conflict tables not stripping out default constraints that reference a UDF.
As you can see below, the failure is on the generation script for the conflict table.
The conflict table should be a bucket table that shouldn’t enforce data integrity.
See how the default constraints for the columns someint and somestring were stripped out of the generation logic; however, the default constraint that utilizes a UDF persists and uses the same object name that was used on the production table (the bold line). This occurs whether I explicitly name the constraint or let the system generate the name for me, as in the example posted.
The only way I could see getting around this right now is to drop all default constraints in the system that use a UDF, publish, then add the constraints back, which is vulnerable to invalid data and a lot of moving steps. This all worked with SQL 2000, 2005, 2008, 2008R2; it stopped working in SQL 2012 and continues to not work in SQL 2014.
Error messages:
Message: There is already an object named 'DF__repTable__id__117F9D94' in the database.
Could not create constraint. See previous errors.
Command Text: CREATE TABLE [dbo].[MSmerge_conflict_MergeRepFailurePublication_repTable](
[id] [varchar](8) NULL CONSTRAINT [DF__repTable__id__117F9D94] DEFAULT ([dbo].[repUDF]()),
[somedata] [varchar](64) NULL,
[rowguid] [uniqueidentifier] ROWGUIDCOL NULL,
[someint] [int] NULL,
[somestring] [varchar](64) NULL
Parameters:
Stack: at Microsoft.SqlServer.Replication.AgentCore.ReMapSqlException(SqlException e, SqlCommand command)
at Microsoft.SqlServer.Replication.AgentCore.AgentExecuteNonQuery(SqlCommand command, Int32 queryTimeout)
at Microsoft.SqlServer.Replication.AgentCore.ExecuteDiscardResults(CommandSetupDelegate commandSetupDelegate, Int32 queryTimeout)
at Microsoft.SqlServer.Replication.Snapshot.YukonMergeConflictTableScriptingManager.ApplyBaseConflictTableScriptToPublisherIfNeeded(String strConflictScriptPath)
at Microsoft.SqlServer.Replication.Snapshot.BaseMergeConflictTableScriptingManager.DoConflictTableScriptingTransaction(SqlConnection connection)
at Microsoft.SqlServer.Replication.RetryableSqlServerTransactionManager.ExecuteTransaction(Boolean bLeaveTransactionOpen)
at Microsoft.SqlServer.Replication.Snapshot.BaseMergeConflictTableScriptingManager.DoConflictTableScripting()
at Microsoft.SqlServer.Replication.Snapshot.MergeSmoScriptingManager.GenerateTableArticleCftScript(Scripter scripter, BaseArticleWrapper articleWrapper, Table smoTable)
at Microsoft.SqlServer.Replication.Snapshot.MergeSmoScriptingManager.GenerateTableArticleScripts(ArticleScriptingBundle articleScriptingBundle)
at Microsoft.SqlServer.Replication.Snapshot.MergeSmoScriptingManager.GenerateArticleScripts(ArticleScriptingBundle articleScriptingBundle)
at Microsoft.SqlServer.Replication.Snapshot.SmoScriptingManager.GenerateObjectScripts(ArticleScriptingBundle articleScriptingBundle)
at Microsoft.SqlServer.Replication.Snapshot.SmoScriptingManager.DoScripting()
at Microsoft.SqlServer.Replication.Snapshot.SqlServerSnapshotProvider.DoScripting()
at Microsoft.SqlServer.Replication.Snapshot.MergeSnapshotProvider.DoScripting()
at Microsoft.SqlServer.Replication.Snapshot.SqlServerSnapshotProvider.GenerateSnapshot()
at Microsoft.SqlServer.Replication.SnapshotGenerationAgent.InternalRun()
at Microsoft.SqlServer.Replication.AgentCore.Run() (Source: MSSQLServer, Error number: 2714)
Get help: http://help/2714
Server COL-PCANINOW540\SQL2012, Level 16, State 0, Procedure , Line 1
Could not create constraint. See previous errors. (Source: MSSQLServer, Error number: 1750)
Get help: http://help/1750
Server COL-PCANINOW540\SQL2012, Level 16, State 0, Procedure , Line 1
Could not create constraint. See previous errors. (Source: MSSQLServer, Error number: 1750)
Get help: http://help/1750
Pauly C
USE [master]
GO
CREATE DATABASE [MergeRepFailure]
ALTER DATABASE [MergeRepFailure] SET COMPATIBILITY_LEVEL = 110
GO
USE [MergeRepFailure]
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
create view
[dbo].[repView] as select right(newid(),8) as id
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE FUNCTION [dbo].[repUDF]()
RETURNS varchar(8)
BEGIN
declare @val varchar(8)
select top 1 @val = id from [repView]
return @val
END
GO
create table repTable (
id varchar(8) default([dbo].[repUDF]()),
somedata varchar(64) null,
rowguid uniqueidentifier ROWGUIDCOL default(newid()),
someint int default(1),
somestring varchar(64) default('somestringvalue')
)
GO
insert into reptable (somedata) values ('whatever1')
insert into reptable (somedata) values ('whatever2')
go
/*test to make sure function is working*/
select * from reptable
GO
/*Publish database*/
use [MergeRepFailure]
exec sp_replicationdboption @dbname = N'MergeRepFailure', @optname = N'merge publish', @value = N'true'
GO
declare @Descrip nvarchar(128)
select @Descrip = 'Merge publication of database ''MergeRepFailure'' from Publisher ''' + @@servername +'''.'
print @Descrip
-- Adding the merge publication
use [MergeRepFailure]
exec sp_addmergepublication @publication = N'MergeRepFailurePublication', @description = N'@Descrip',
@sync_mode = N'native', @retention = 14, @allow_push = N'true', @allow_pull = N'true', @allow_anonymous = N'true',
@enabled_for_internet = N'false', @snapshot_in_defaultfolder = N'true', @compress_snapshot = N'false', @ftp_port = 21,
@ftp_subdirectory = N'ftp', @ftp_login = N'anonymous', @allow_subscription_copy = N'false', @add_to_active_directory = N'false',
@dynamic_filters = N'false', @conflict_retention = 14, @keep_partition_changes = N'false', @allow_synctoalternate = N'false',
@max_concurrent_merge = 0, @max_concurrent_dynamic_snapshots = 0, @use_partition_groups = null, @publication_compatibility_level = N'100RTM',
@replicate_ddl = 1, @allow_subscriber_initiated_snapshot = N'false', @allow_web_synchronization = N'false', @allow_partition_realignment = N'true',
@retention_period_unit = N'days', @conflict_logging = N'both', @automatic_reinitialization_policy = 0
GO
exec sp_addpublication_snapshot @publication = N'MergeRepFailurePublication', @frequency_type = 4, @frequency_interval = 14, @frequency_relative_interval = 1,
@frequency_recurrence_factor = 0, @frequency_subday = 1, @frequency_subday_interval = 5, @active_start_time_of_day = 500, @active_end_time_of_day = 235959,
@active_start_date = 0, @active_end_date = 0, @job_login = null, @job_password = null, @publisher_security_mode = 1
use [MergeRepFailure]
exec sp_addmergearticle @publication = N'MergeRepFailurePublication', @article = N'repTable', @source_owner = N'dbo', @source_object = N'repTable', @type = N'table',
@description = null, @creation_script = null, @pre_creation_cmd = N'drop', @schema_option = 0x000000010C034FD1, @identityrangemanagementoption = N'manual',
@destination_owner = N'dbo', @force_reinit_subscription = 1, @column_tracking = N'false', @subset_filterclause = null, @vertical_partition = N'false',
@verify_resolver_signature = 1, @allow_interactive_resolver = N'false', @fast_multicol_updateproc = N'true', @check_permissions = 0, @subscriber_upload_options = 0,
@delete_tracking = N'true', @compensate_for_errors = N'false', @stream_blob_columns = N'false', @partition_options = 0
GO
use [MergeRepFailure]
exec sp_addmergearticle @publication = N'MergeRepFailurePublication', @article = N'repView', @source_owner = N'dbo', @source_object = N'repView',
@type = N'view schema only', @description = null, @creation_script = null, @pre_creation_cmd = N'drop', @schema_option = 0x0000000008000001,
@destination_owner = N'dbo', @destination_object = N'repView', @force_reinit_subscription = 1
GO
use [MergeRepFailure]
exec sp_addmergearticle @publication = N'MergeRepFailurePublication', @article = N'repUDF', @source_owner = N'dbo', @source_object = N'repUDF',
@type = N'func schema only', @description = null, @creation_script = null, @pre_creation_cmd = N'drop', @schema_option = 0x0000000008000001,
@destination_owner = N'dbo', @destination_object = N'repUDF', @force_reinit_subscription = 1
GO
More information: after running a profiler trace of the following 2 statements, the one for the column with the UDF default returns a row while the one for the other default does not. This might be the cause of this bug. Is the same logic that generates the object on the subscriber also used to generate the conflict table?
exec sp_executesql N'
select so.name, schema_name(so.schema_id)
from sys.sql_dependencies d
inner join sys.objects so
on d.referenced_major_id = so.object_id
where so.type in (''FN'', ''FS'', ''FT'', ''TF'', ''IF'')
and d.class in (0,1)
and d.referenced_major_id <> object_id(@base_table, ''U'')
and d.object_id = object_id(@constraint, ''D'')',N'@base_table nvarchar(517),@constraint nvarchar(517)',@base_table=N'[dbo].[repTable]',@constraint=N'[dbo].[DF__repTable__id__117F9D94]'
exec sp_executesql N'
select so.name, schema_name(so.schema_id)
from sys.sql_dependencies d
inner join sys.objects so
on d.referenced_major_id = so.object_id
where so.type in (''FN'', ''FS'', ''FT'', ''TF'', ''IF'')
and d.class in (0,1)
and d.referenced_major_id <> object_id(@base_table, ''U'')
and d.object_id = object_id(@constraint, ''D'')',N'@base_table nvarchar(517),@constraint nvarchar(517)',@base_table=N'[dbo].[repTable]',@constraint=N'[dbo].[DF__repTable__somein__1367E606]'
Pauly C -
Insert problem using a SELECT from a table with a function-based index on TRUNC
I came across this problem when trying to insert from a select statement. The select returns the correct results, however when trying to insert the results into a table, the results differ. I have found a workaround by forcing an order by on the select, but surely this is an Oracle bug: how can the select statement's values differ from what is inserted?
Platform: Windows Server 2008 R2
Oracle 11.2.0.3 Enterprise Edition
(I have not tried to replicate this on other versions)
Here are the scripts to create the two tables and source data:
CREATE TABLE source_data (
ID NUMBER(2),
COUNT_DATE DATE
);
CREATE INDEX IN_SOURCE_DATA ON SOURCE_DATA (TRUNC(count_date, 'MM'));
INSERT INTO source_data VALUES (1, TO_DATE('20120101', 'YYYYMMDD'));
INSERT INTO source_data VALUES (1, TO_DATE('20120102', 'YYYYMMDD'));
INSERT INTO source_data VALUES (1, TO_DATE('20120103', 'YYYYMMDD'));
INSERT INTO source_data VALUES (1, TO_DATE('20120201', 'YYYYMMDD'));
INSERT INTO source_data VALUES (1, TO_DATE('20120202', 'YYYYMMDD'));
INSERT INTO source_data VALUES (1, TO_DATE('20120203', 'YYYYMMDD'));
INSERT INTO source_data VALUES (1, TO_DATE('20120301', 'YYYYMMDD'));
INSERT INTO source_data VALUES (1, TO_DATE('20120302', 'YYYYMMDD'));
INSERT INTO source_data VALUES (1, TO_DATE('20120303', 'YYYYMMDD'));
CREATE TABLE result_data (
ID NUMBER(2),
COUNT_DATE DATE
);
Now run the select statement:
SELECT id, TRUNC(count_date, 'MM')
FROM source_data
GROUP BY id, TRUNC(count_date, 'MM');
You should get the following:
1 2012/02/01
1 2012/03/01
1 2012/01/01
Now insert into the results table:
INSERT INTO result_data
SELECT id, TRUNC(count_date, 'MM')
FROM source_data
GROUP BY id, TRUNC(count_date, 'MM');
Select from that table and you get:
1 2012/03/01
1 2012/03/01
1 2012/03/01
The most recent month is repeated for each row.
Truncate your table and insert with the following statement and the results should now be correct:
INSERT INTO result_data
SELECT id, TRUNC(count_date, 'MM')
FROM source_data
GROUP BY id, TRUNC(count_date, 'MM')
ORDER BY 1, 2;
If anyone has encountered this behavior before, could you please let me know. I can't see that I am making a mistake: the select's results are correct, so they should not differ from what is being inserted.
Edited by: user11285442 on May 13, 2013 5:16 AM
Edited by: user11285442 on May 13, 2013 6:15 AM
Hi,
welcome to the forum. I cannot reproduce the same behavior.
Could you please post the SQLPlus output while executing all commands, like it has been done by S10390?
Also post the output of the following command:
SELECT * FROM v$version;
When you put some code or output, please enclose it between two lines starting with {noformat}{noformat},
i.e.:
{noformat}{noformat}
SELECT ...
{noformat}{noformat}
Formatted code is easier to read.
Regards.
Al -
How to optimize massive insert on a table with spatial index ?
Hello,
I need to implement a load process for saving up to 20,000 points per minute in Oracle 10g R2.
These points represent car locations tracked by GPS, and I need to store at least all positions from the past 12 hours.
My problem is that the spatial index is very costly during insert (For the moment I do only insertion).
I have tried several approaches for the insertion:
- Java and PreparedStatement.executeBatch
- Java and generating a SQL*Loader file
- Java and insertion into a view with an "instead of" trigger
All give me the same results... (not so good)
For the moment, I work with DROP INDEX, INSERT, CREATE INDEX phases.
But is there a way to only DISABLE the index and then REBUILD it just for the inserted rows?
I used the APPEND option for insertion :
INSERT /*+ APPEND */ INTO MY_TABLE (ID, LOCATION) VALUES (?, MDSYS.SDO_GEOMETRY(2001,NULL,MDSYS.SDO_POINT_TYPE(?, ?, NULL), NULL, NULL))
My spatial index is created with the following options :
'sdo_indx_dims=2,layer_gtype=point'
Is there a way to optimize this heavy load?
What about the PARALLEL option and how does it work? (Not so clear to me from the documentation... I am not a DBA.)
Thanks in advance
It is possible to insert + commit 20,000 points in 16 seconds.
select * from v$version;
BANNER
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Prod
PL/SQL Release 10.2.0.1.0 - Production
CORE 10.2.0.1.0 Production
TNS for 32-bit Windows: Version 10.2.0.1.0 - Production
NLSRTL Version 10.2.0.1.0 - Production
drop table testpoints;
create table testpoints
( point mdsys.sdo_geometry);
delete user_sdo_geom_metadata
where table_name = 'TESTPOINTS'
and column_name = 'POINT';
insert into user_sdo_geom_metadata values
('TESTPOINTS'
,'POINT'
,sdo_dim_array(sdo_dim_element('X',0,1000,0.01),sdo_dim_element('Y',0,1000,0.01))
,null);
create index testpoints_i on testpoints (point)
indextype is mdsys.spatial_index parameters ('sdo_indx_dims=2,layer_gtype=point');
insert /*+ append */ into testpoints
select (sdo_geometry(2001,null,sdo_point_type(1+ rownum / 20, 1 + rownum / 50, null),null,null))
from all_objects where rownum < 20001;
Duration: 00:00:10.68 seconds
commit;
Duration: 00:00:04.96 seconds
select count(*) from testpoints;
COUNT(*)
20000
The insert of 20,000 rows takes 11 seconds, the commit takes 5 seconds.
In this example there is no data traffic between the Oracle database and a client, but you have 60 - 16 = 44 seconds to upload your points into a temporary table. After uploading into a temporary table you can do:
insert /*+ append */ into testpoints
select (sdo_geometry(2001,null,sdo_point_type(x,y, null),null,null))
from temp_table;
commit;
Your insert ... values is slow; do some bulk processing.
I think it can be done, my XP computer that runs my database isn't state of the art. -
How to expdp table with a BLOB field when table is larger than UNDO tbs?
We have a 4-node RAC instance and are at 11.1. We have a 100 gig schema with a few hundred tables. One table contains about 80 gig of data; the table has pictures in it (BLOB column). Our 4-node RAC has four 12 gig undo tablespaces.
We run out of undo when exporting the schema or just this table, due to the size of the table.
According to metalink note ID 1086414.1 this can happen on fragmented tables. According to segment advisor, we are all good and not fragmented at all.
I also followed the troubleshooting advice in ID 833635.1 and ID 846079.1, but everything turned out ok.
LOBs and ORA-01555 troubleshooting [ID 846079.1]
Export Fails With ORA-02354 ORA-01555 ORA-22924 and How To Confirm LOB Segment Corruption Using Export Utility? [ID 833635.1]
Initially we tried just to export it without special parameters:
expdp MY_SCHEMA/********@RACINSTANC DUMPFILE=MYFILE.dmp PARALLEL=8 directory=DATA_PUMP_DIR SCHEMAS=MY_SCHEMA
ORA-31693: Table data object "MY_SCHEMA"."BIGLOBTABLE" failed to load/unload and is being skipped due to error:
ORA-02354: error in exporting/importing data
ORA-01555: snapshot too old: rollback segment number 71 with name "_SYSSMU71_1268406335$" too small
Then we tried to export just the table into 8 files of 8G each (the failing table is about 90% of the schema size):
expdp MY_SCHEMA/******@RACINSTANCE DUMPFILE=MYFILE_%U.dmp PARALLEL=8 FILESIZE=8G directory=DATA_PUMP_DIR INCLUDE=TABLE:\"IN ('BIGLOBTABLE') \"
ORA-31693: Table data object "MY_SCHEMA"."BIGLOBTABLE" failed to load/unload and is being skipped due to error:
ORA-02354: error in exporting/importing data
ORA-01555: snapshot too old: rollback segment number 71 with name "_SYSSMU71_1268406335$" too small
We eventually resorted to exporting chunks out of the table by using the QUERY parameter:
QUERY=BIGLOBTABLE:"WHERE BIGLOBTABLEPK > 1 AND BIGLOBTABLEPK <=100000"
That worked, but it is a kludge.
Since we will have to export this again down the road, I was wondering if there is an easier way to export.
Any suggestions are appreciated.
Note that undo data for LOBs is not stored in the UNDO tablespace but in the LOB segments, so I am not sure ORA-1555 is directly linked to the LOB data.
What is your undo_retention parameter ?
How long does EXPDP run before getting ORA-1555 ?
You could try to increase undo_retention parameter to avoid ORA-1555.
Are you running Enterprise Edition? If yes, trying to transport the tablespace storing the table could be a solution.
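A sketch of the retention settings mentioned above (the values are illustrative only; BIGLOBTABLE is the table from the post, and the LOB column name is a placeholder). For BASICFILE LOBs, the version retention inside the LOB segment only follows undo_retention when the LOB is set to RETENTION, so both knobs may matter:
SHOW PARAMETER undo_retention
-- raise retention to e.g. one day (value is in seconds)
ALTER SYSTEM SET undo_retention = 86400 SCOPE=BOTH;
-- switch the LOB from PCTVERSION to RETENTION so it honours undo_retention
ALTER TABLE my_schema.biglobtable MODIFY LOB (picture_blob) (RETENTION);
-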
Insert in table with unique index
Hi
I created a table to save a factor used for date calculations; the other 2 columns are the table's key.
CREATE TABLE TMP_FATOR (
SETID VARCHAR2(5 BYTE) NOT NULL,
COMPANYID VARCHAR2(15 BYTE) NOT NULL,
FATOR NUMBER
);
CREATE UNIQUE INDEX IDX_TMP_FATOR ON TMP_FATOR
(SETID, COMPANYID)
NOLOGGING;
I want to insert into the table but skip errors. I tried with:
declare
i number;
begin
i:=1;
EXECUTE IMMEDIATE 'TRUNCATE TABLE SYSADM.TMP_FATOR';
BEGIN
INSERT INTO /*+ APPEND*/ SYSADM.TMP_FATOR
SELECT T1.SETID,
T1.COMPANYID,
SYSADM.pkg_ajusta_kenan.fnc_fator_dias_desconto(T1.SETID,T1.COMPANYID) fator
FROM SYSADM.PS_LOC_ITEM_SN T1;
EXCEPTION
WHEN DUP_VAL_ON_INDEX THEN
NULL;
WHEN OTHERS THEN
DBMS_OUTPUT.PUT_LINE(SQLERRM);
END;
COMMIT;
end;
But it did not work.
Why?
The deterministic keyword is just part of the declaration, whether declaring a standalone function or a packaged function.
SCOTT @ nx102 Local> create package test_pkg
2 as
3 function determin_foo( p_arg in number )
4 return number
5 deterministic;
6 end;
7 /
Package created.
Elapsed: 00:00:00.34
1 create or replace package body test_pkg
2 as
3 function determin_foo( p_arg in number )
4 return number
5 deterministic
6 is
7 begin
8 return p_arg - 1;
9 end;
10* end;
SCOTT @ nx102 Local> /
Package body created.
Elapsed: 00:00:00.14
Justin
Can I have other procedures and functions inside the package?
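Coming back to the original question: the INSERT ... SELECT raises DUP_VAL_ON_INDEX once and the whole statement is rolled back, so the exception handler cannot skip individual rows. One standard way to skip duplicates row by row is DML error logging (10gR2+); a sketch using the tables from the post (ERR$_TMP_FATOR is just a name choice). Note it must be a conventional insert - with a direct-path /*+ APPEND */ insert, a unique violation still aborts the statement:
-- one-off setup of the error log table
EXEC DBMS_ERRLOG.CREATE_ERROR_LOG('TMP_FATOR', 'ERR$_TMP_FATOR');
INSERT INTO sysadm.tmp_fator
SELECT t1.setid,
       t1.companyid,
       sysadm.pkg_ajusta_kenan.fnc_fator_dias_desconto(t1.setid, t1.companyid)
FROM sysadm.ps_loc_item_sn t1
LOG ERRORS INTO err$_tmp_fator ('skip dups') REJECT LIMIT UNLIMITED;
-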
Importing data into tables with grant access (sql developer 3.2)
Hello,
I want to import data into a table PAY_BALANCE_BATCH_LINES which is an interface table. I'm logged in to a schema (APPS) and this table belongs to the HR schema. However, if you look at the grants, the APPS schema has all access to this particular table. In TOAD, this used to work great.
But in SQL Developer, when I filter the tables dropdown, I am not able to find this table. Since this is my primary way of uploading data, I'm not sure how else I can get access to upload data into this table. I don't know the password for the HR schema, by the way.
Is there a way out?
Many Thanks
Scroll down the tree to the 'Other Users' node, expand it, and then drill down into HR > Tables. Then do your import.
For an alternative browser, right-click on your connection in the tree and open a Schema Browser.
Maybe you are looking for
-
Hi Experts, We are trying to do Depot sales configuration. we have configured stock transfer first . scenario i would explain below. we created Purchase order with the help of ME21N then we have done the delivery with the help of T code VL10B, VL02N
-
Inserting multiple rows using a single Insert statement without using dual
Hi all, i am trying to insert multiple rows using a single insert statement like the below one. The below one works fine.. But is there any other change that can be done in the below one without using dual... insert all into ps_hd_samp (num1,num2) va
-
I have an older iPod running v1.1.2 and when I connect to iTunes it says it needs to upgrade. I expected that. Unfortunately it errors every time (an unfortunately there is no error code other than (1) which doesn't seem to be a real code. It puts
-
Edwardian Script for Adobe Photoshop Lightroom 3
I have Adobe Photoshop Lightroom 3, I'm having trouble finding the font Edwardian Script in this version of lightroom. Is there a way to get this font in this version of lightroom?
-
I used to have a LOGITECH but reverted to the wired APPLE mouse that came with my computer. My problem is when I try to click on a file ONCE to : let's say change a file name or edit a file ...it'll open a file in a program I didn't command it to. A