DB2 backup excluding one table
Hello all,
We have AIX 5.3 and DB2 9.1 for our CRM 2007 server. There is a table which contains CVV numbers. Is it possible to take a backup that excludes that table? Are there any tools that would help in taking a backup excluding the table containing CVV numbers?
Best Regards,
Ninad
a) You can encrypt the CVV column only - either using DB2's ENCRYPT function or encrypting the data before it hits the database (a sketch follows after this list).
b) If you have to exclude only the CVV numbers, instead of dropping the entire table you can consider setting the CVV value to null after exporting the data. After the backup completes, you can re-import the CVV data.
This approach is expensive and may take a long time. In addition, you will not be able to insert new data into the table for the duration of the backup.
c) You can consider setting up a nickname (a reference to a remote table in a different database) for the entire table or the CVV column only. You then need not take a backup of that remote database.
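As a rough sketch of option (a), assuming a hypothetical table CARD_DATA whose CVV column is declared VARCHAR(24) FOR BIT DATA (the table, column, and passphrase below are placeholders, not from the original post), DB2's built-in functions could be used like this:
-- store the CVV encrypted; ENCRYPT returns VARCHAR FOR BIT DATA, so the column must be FOR BIT DATA and long enough for roughly 8-16 bytes of overhead
UPDATE CARD_DATA SET CVV = ENCRYPT(CVV, 'MySecretPhrase');
-- read the clear value back only where the application really needs it
SELECT DECRYPT_CHAR(CVV, 'MySecretPhrase') AS CVV_CLEAR FROM CARD_DATA WHERE CARD_ID = 1;
Note that this protects the column inside the backup image, but handling the passphrase then becomes the sensitive part.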
HTH
Similar Messages
-
Exclude a table while exporting
Hi ALL
I want to take an export of a DB excluding one table on Oracle 11g. Please help me.
Regards
Mokarem
There is no such option with the old export utility. You can only list the tables you want to include in your export.
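For example, with the classic exp utility you have to enumerate everything you want to keep (user, password, and table names below are placeholders, not from the original post):
exp scott/tiger FILE=partial.dmp LOG=partial.log TABLES=(EMP,DEPT,BONUS)
With a couple of hundred tables this gets unwieldy, which is why Data Pump's EXCLUDE parameter (available since 10g) is usually the better route on 11g.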
Are you on a production environment? Is it a very big table? Are you exporting for a further import into 9i?
Nicolas. -
How to exclude records from one table that are contained in a second table
I am trying to create a Crystal report that excludes records from one Table that is contained in a second table using the != link option and it is not working. I've tried all of the different enforce options, and it is still not excluding those records. Does anyone have any suggestions of what I'm doing wrong or any other suggestions how I can obtain the results I need?
Thanks in advance!
Have you tried using a Command?
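If you go the Command route, a rough sketch of the kind of SQL it could contain (table and key names are made up for illustration, not taken from the report):
SELECT a.*
FROM MainTable a
WHERE NOT EXISTS (SELECT 1 FROM ExcludeTable b WHERE b.KeyField = a.KeyField)
A NOT EXISTS (or a LEFT JOIN with WHERE b.KeyField IS NULL) in the Command usually behaves more predictably than the != link option in the visual linking expert.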
Thanks,
Gordon -
How to restore one table from the previous backup in 9.2.0.8 version.
Hi,
How to restore one table from the previous backup in 9.2.0.8 version.
Thanks
-Ganga
Hi,
What is the table you want to restore?
Using export/import is supported with Oracle Apps database (for full database exp/imp, and certain schemas like custom ones). For the Apps schema, I believe it is not supported due to object dependencies and integrity constraints.
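If export/import is the route you take, a minimal sketch with the classic 9.2 utilities, assuming you can open a copy of the old database (for example, the backup restored to a test instance); owner, table, and file names are placeholders:
exp system/manager FILE=one_table.dmp LOG=one_table.log TABLES=scott.emp
imp system/manager FILE=one_table.dmp LOG=one_table_imp.log FROMUSER=scott TOUSER=scott TABLES=emp IGNORE=Y
IGNORE=Y lets the rows load even if the table already exists in the target; constraints and dependent objects still need checking by hand.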
Regards,
Hussein -
How to delete duplicate records connected to one or more tables in SQL Server 2008?
Hi
Can anyone please help me with the SQL query? I have a table called People with columns personno, lastname, firstname and so on. The personno column has duplicate records, so I have prefixed all the duplicate records with "double" at the
beginning of the numbers. I tried deleting these duplicate records, but they are linked to one or more other tables. I have to find all the tables blocking the deletion of the duplicate persons, and then create select statements which generate update statements
in order to replace the current id of the duplicate person with a substitute id. (The personno is used as an id in the database.)
Thanks
You should not append "double" to the personno. When we append it, the column can no longer be joined or related to the other tables. Keep the id as it is and use another field (STATUS) to mark the row as a duplicate. We will also require another field (PRIMARYID) against
those duplicate rows, i.e. the main or primary personno.
SELECT * FROM OtherTable a INNER JOIN
(SELECT personno, status, primaryid FROM PEOPLE WHERE status = 'Duplicate') b
ON a.personno = b.personno
UPDATE a SET personno = b.primaryid
FROM OtherTable a INNER JOIN
(SELECT personno, status, primaryid FROM PEOPLE WHERE status = 'Duplicate') b
ON a.personno = b.personno
NOTE: Please take a backup before applying the query. This is not tested.
Regards, RSingh -
Requirement for taking an incremental backup of particular tables in Oracle 11g
Dear All ,
We have a requirement for an incremental backup of particular tables in SAP. Is there any way in SAP or Oracle to take an incremental backup of particular tables?
If anybody knows, please share your valuable points; it would be a help for me and others as well.
For example, I have a list of tables and I want to take a backup of only those tables; I don't need a full DB backup. If someone knows, please share your idea, and also how to run it daily, if that is possible at the SAP level or the DB level.
Tables
KONV
QAVE
QALS
BSAD
BSAK
BSID
BSIK
VBAP
LIKP
LIPS
VBAK
VBRK
VBRP
ZPLAN
MSEG
MKPF
KNC1
T023T
TWEWT
LFA1
T001W
ZCHASSIS
Thanks
Regards
Arpit
Hello,
You can use a flashback query if you are using an Oracle database.
Something like this:
Create table BACKUP02102014 as select * from SAPSR3.YOURTABLE
AS OF TIMESTAMP
TO_TIMESTAMP('02-10-2014 00:00:00','DD-MM-YYYY HH24:MI:SS');
and tomorrow you do:
Create table BACKUP03102014 as select * from SAPSR3.YOURTABLE
AS OF TIMESTAMP
TO_TIMESTAMP('03-10-2014 00:00:00','DD-MM-YYYY HH24:MI:SS');
That covers a specific day. But you can also do other things, like this:
Only the differences:
select * from BACKUP03102014
minus
select * from BACKUP02102014 ;
or export with the expdp Oracle utility, or make a backup
r3trans -w -> you can export data from the backup table
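A sketch of the expdp alternative for a list of tables (the directory object, credentials, and the SAPSR3 schema are assumptions to adjust for your system); this could be scheduled daily with cron at the OS level or an SAP background job that calls a small script:
expdp system/password DIRECTORY=backup_dir DUMPFILE=tables_%U.dmp LOGFILE=tables.log TABLES=SAPSR3.KONV,SAPSR3.QAVE,SAPSR3.VBAK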
Regards
Sandro Lobo
Message was edited by: Sandro Lobo -
Import of one table taking too much time
Hi All,
I have a 10g database running with ASM under HP-UX, 4 processors, with 8 GB of RAM.
I have a problem importing one table with 6M rows; it takes too much time. Even after 2 days the data is still not imported, and I don't know where the problem is.
I also tried to use Data Pump, but when exporting the table from the other server in RAC it gives the error below:
ORA-39014: One or more workers have prematurely exited.
ORA-39029: worker 2 with process name "DW04" prematurely terminated
ORA-31671: Worker process DW04 had an unhandled exception.
ORA-12801: error signaled in parallel query server P029, instance ab-db2:abdb2 (2)
ORA-29913: error in executing ODCIEXTTABLEOPEN callout
ORA-31626: job does not exist
ORA-06512: at "SYS.ORACLE_DATAPUMP", line 19
ORA-06512: at "SYS.KUPW$WORKER", line 1342
ORA-06512: at line 2
Thanks for your help with this problem.
regards
raitsarevo
This is the command I used:
time expdp abillity/4dd1ct3d dumpfile=cb_coupons.dmp logfile=cb_coupons.log directory=exp_dir parallel=4 tables=cb_coupons
and this is a portion of my alert log file, no error signaled:
Fri Apr 18 15:57:35 2008
ALTER SYSTEM SET service_names='abdb','SYS$SYS.KUPC$C_1_20080418155733.ABDB' SCOPE=MEMORY SID='abdb1';
Fri Apr 18 15:57:35 2008
ALTER SYSTEM SET service_names='SYS$SYS.KUPC$C_1_20080418155733.ABDB','abdb','SYS$SYS.KUPC$S_1_20080418155733.ABDB' SCOPE=MEMORY SID='abdb1';
kupprdp: master process DM00 started with pid=212, OS id=10470
to execute - SYS.KUPM$MCP.MAIN('SYS_EXPORT_TABLE_09', 'ABILLITY', 'KUPC$C_1_20080418155733', 'KUPC$S_1_20080418155733', 0);
kupprdp: worker process DW01 started with worker id=1, pid=215, OS id=10621
to execute - SYS.KUPW$WORKER.MAIN('SYS_EXPORT_TABLE_09', 'ABILLITY');
kupprdp: worker process DW02 started with worker id=2, pid=219, OS id=11777
to execute - SYS.KUPW$WORKER.MAIN('SYS_EXPORT_TABLE_09', 'ABILLITY');
Fri Apr 18 15:59:41 2008
ALTER SYSTEM SET service_names='SYS$SYS.KUPC$S_1_20080418155733.ABDB','abdb' SCOPE=MEMORY SID='abdb1';
Fri Apr 18 15:59:41 2008
ALTER SYSTEM SET service_names='abdb' SCOPE=MEMORY SID='abdb1';
Fri Apr 18 16:01:16 2008
Thread 1 advanced to log sequence 62141
Current log# 5 seq# 62141 mem# 0: +ASM_DG2/abdb/onlinelog/group_5.286.618685253
Current log# 5 seq# 62141 mem# 1: +ASM_DG1/abdb/onlinelog/group_5.10591.618685257
Thanks for your help -
How to improve speed of queries that use ORM one table per concrete class
Hi,
Many tools that do ORM (Object Relational Mapping), like Castor, Hibernate, TopLink, JPOX, etc., have a one-table-per-concrete-class feature that maps objects to the following structure:
CREATE TABLE ABSTRACTPRODUCT (
ID VARCHAR(8) NOT NULL,
DESCRIPTION VARCHAR(60) NOT NULL,
PRIMARY KEY(ID)
);
CREATE TABLE PRODUCT (
ID VARCHAR(8) NOT NULL REFERENCES ABSTRACTPRODUCT(ID),
CODE VARCHAR(10) NOT NULL,
PRICE DECIMAL(12,2),
PRIMARY KEY(ID)
);
CREATE UNIQUE INDEX iProduct ON Product(code);
CREATE TABLE BOOK (
ID VARCHAR(8) NOT NULL REFERENCES PRODUCT(ID),
AUTHOR VARCHAR(60) NOT NULL,
PRIMARY KEY (ID)
);
CREATE TABLE COMPACTDISK (
ID VARCHAR(8) NOT NULL REFERENCES PRODUCT(ID),
ARTIST VARCHAR(60) NOT NULL,
PRIMARY KEY(ID)
);
Is there a way to improve queries like
SELECT
pd.code CODE,
abpd.description DESCRIPTION,
DECODE(bk.id,NULL,cd.artist,bk.author) PERSON
FROM
ABSTRACTPRODUCT abpd,
PRODUCT pd,
BOOK bk,
COMPACTDISK cd
WHERE
pd.id = abpd.id AND
bk.id(+) = abpd.id AND
cd.id(+) = abpd.id AND
pd.code like '101%'
or like this:
SELECT
pd.code CODE,
abpd.description DESCRIPTION,
DECODE(bk.id,NULL,cd.artist,bk.author) PERSON
FROM
ABSTRACTPRODUCT abpd,
PRODUCT pd,
BOOK bk,
COMPACTDISK cd
WHERE
pd.id = abpd.id AND
bk.id(+) = abpd.id AND
cd.id(+) = abpd.id AND
abpd.description like '%STARS%' AND
pd.price BETWEEN 1 AND 10
Think of a table with many rows: does something exist inside MaxDB to improve this type of query? Some annotations in SQL, or declaring tables that extend another by PK? On other databases I managed this using materialized views, but I think this can be faster just using the PK; am I wrong? Is it better to consolidate all the tables into one table? What is the impact on database size of this consolidation?
Note: with consolidation I will lose the NOT NULL constraints on the database side.
thanks for any insight.
Clóvis
Hi Lars,
I don't understand why the optimizer picks that index for TM in the execution plan, and why it doesn't use the join via the KEY column. Note the WHERE clause is "TM.OID = MF.MY_TIPO_MOVIMENTO", i.e. on the key column, yet the optimizer uses an index whose indexed column is ID_SYS, which isn't and can't be a primary key because it is not UNIQUE. The index columns follow:
indexes of TipoMovimento
INDEXNAME COLUMNNAME SORT COLUMNNO DATATYPE LEN INDEX_USED FILESTATE DISABLED
ITIPOMOVIMENTO TIPO ASC 1 VARCHAR 2 220546 OK NO
ITIPOMOVIMENTO ID_SYS ASC 2 CHAR 6 220546 OK NO
ITIPOMOVIMENTO MY_CONTA_DEBITO ASC 3 CHAR 8 220546 OK NO
ITIPOMOVIMENTO MY_CONTA_CREDITO ASC 4 CHAR 8 220546 OK NO
ITIPOMOVIMENTO1 ID_SYS ASC 1 CHAR 6 567358 OK NO
ITIPOMOVIMENTO2 DESCRICAO ASC 1 VARCHAR 60 94692 OK NO
After I created the index iTituloCobrancaX7 on TituloCobranca(OID,DATA_VENCIMENTO) in a backup instance, I was surprised by the following explain plan:
OWNER TABLENAME COLUMN_OR_INDEX STRATEGY PAGECOUNT
TC ITITULOCOBRANCA1 RANGE CONDITION FOR INDEX 5368
DATA_VENCIMENTO (USED INDEX COLUMN)
MF OID JOIN VIA KEY COLUMN 9427
TM OID JOIN VIA KEY COLUMN 22
TABLE HASHED
PS OID JOIN VIA KEY COLUMN 1350
BOL OID JOIN VIA KEY COLUMN 497
NO TEMPORARY RESULTS CREATED
JDBC_CURSOR_19 RESULT IS COPIED , COSTVALUE IS 988
Note that now the optimizer uses the index ITITULOCOBRANCA1 as I expected; if I drop the new index iTituloCobrancaX7 the optimizer still produces this execution plan, and with it the query executes in 110 ms. With that great news I did the same thing in the production system, but the execution plan doesn't change and I still get a long execution time, this time 413516 ms. Maybe the problem is how the optimizer measures my tables.
I checked in DBAnalyser that the problem is the catalog cache hit rate (we discussed this at [catalog cache hit rate, how to increase?|;
) and the low selectivity of this SQL command. Because of this, to achieve better selectivity I would need an index on MF.MY_SACADO, MF.TIPO and TC.DATA_VENCIMENTO, as explained in previous posts; since this type of index isn't possible inside MaxDB, I have no way to speed up this type of query without changing the table structure.
Could the MaxDB developers implement this type of index, or are there no plans for such a feature?
If not, I must create another schema and consolidate tables to speed up queries on my system, but with this consolidation I will get more overhead. I must solve the low selectivity, because I think that if the data in the tables grows the query becomes impossible. I see that CREATE INDEX supports FUNCTION; maybe a function that joins data from two tables could solve this?
About the instance configuration, it is:
Machine:
Version: '64BIT Kernel'
Version: 'X64/LIX86 7.6.03 Build 007-123-157-515'
Version: 'FAST'
Machine: 'x86_64'
Processors: 2 ( logical: 8, cores: 8 )
data volumes:
ID MODE CONFIGUREDSIZE USABLESIZE USEDSIZE USEDSIZEPERCENTAGE DROPVOLUME TOTALCLUSTERAREASIZE RESERVEDCLUSTERAREASIZE USEDCLUSTERAREASIZE PATH
1 NORMAL 4194304 4194288 379464 9 NO 0 0 0 /db/SPDT/data/data01.dat
2 NORMAL 4194304 4194288 380432 9 NO 0 0 0 /db/SPDT/data/data02.dat
3 NORMAL 4194304 4194288 379184 9 NO 0 0 0 /db/SPDT/data/data03.dat
4 NORMAL 4194304 4194288 379624 9 NO 0 0 0 /db/SPDT/data/data04.dat
5 NORMAL 4194304 4194288 380024 9 NO 0 0 0 /db/SPDT/data/data05.dat
log volumes:
ID CONFIGUREDSIZE USABLESIZE PATH MIRRORPATH
1 51200 51176 /db/SPDT/log/log01.dat ?
parameters:
KERNELVERSION KERNEL 7.6.03 BUILD 007-123-157-515
INSTANCE_TYPE OLTP
MCOD NO
_SERVERDB_FOR_SAP YES
_UNICODE NO
DEFAULT_CODE ASCII
DATE_TIME_FORMAT ISO
CONTROLUSERID DBM
CONTROLPASSWORD
MAXLOGVOLUMES 2
MAXDATAVOLUMES 11
LOG_VOLUME_NAME_001 /db/SPDT/log/log01.dat
LOG_VOLUME_TYPE_001 F
LOG_VOLUME_SIZE_001 6400
DATA_VOLUME_NAME_0005 /db/SPDT/data/data05.dat
DATA_VOLUME_NAME_0004 /db/SPDT/data/data04.dat
DATA_VOLUME_NAME_0003 /db/SPDT/data/data03.dat
DATA_VOLUME_NAME_0002 /db/SPDT/data/data02.dat
DATA_VOLUME_NAME_0001 /db/SPDT/data/data01.dat
DATA_VOLUME_TYPE_0005 F
DATA_VOLUME_TYPE_0004 F
DATA_VOLUME_TYPE_0003 F
DATA_VOLUME_TYPE_0002 F
DATA_VOLUME_TYPE_0001 F
DATA_VOLUME_SIZE_0005 524288
DATA_VOLUME_SIZE_0004 524288
DATA_VOLUME_SIZE_0003 524288
DATA_VOLUME_SIZE_0002 524288
DATA_VOLUME_SIZE_0001 524288
DATA_VOLUME_MODE_0005 NORMAL
DATA_VOLUME_MODE_0004 NORMAL
DATA_VOLUME_MODE_0003 NORMAL
DATA_VOLUME_MODE_0002 NORMAL
DATA_VOLUME_MODE_0001 NORMAL
DATA_VOLUME_GROUPS 1
LOG_BACKUP_TO_PIPE NO
MAXBACKUPDEVS 2
LOG_MIRRORED NO
MAXVOLUMES 14
LOG_IO_BLOCK_COUNT 8
DATA_IO_BLOCK_COUNT 64
BACKUP_BLOCK_CNT 64
_DELAY_LOGWRITER 0
LOG_IO_QUEUE 50
_RESTART_TIME 600
MAXCPU 8
MAX_LOG_QUEUE_COUNT 0
USED_MAX_LOG_QUEUE_COUNT 8
LOG_QUEUE_COUNT 1
MAXUSERTASKS 500
_TRANS_RGNS 8
_TAB_RGNS 8
_OMS_REGIONS 0
_OMS_RGNS 7
OMS_HEAP_LIMIT 0
OMS_HEAP_COUNT 8
OMS_HEAP_BLOCKSIZE 10000
OMS_HEAP_THRESHOLD 100
OMS_VERS_THRESHOLD 2097152
HEAP_CHECK_LEVEL 0
_ROW_RGNS 8
RESERVEDSERVERTASKS 16
MINSERVERTASKS 28
MAXSERVERTASKS 28
_MAXGARBAGE_COLL 1
_MAXTRANS 4008
MAXLOCKS 120080
_LOCK_SUPPLY_BLOCK 100
DEADLOCK_DETECTION 4
SESSION_TIMEOUT 180
OMS_STREAM_TIMEOUT 30
REQUEST_TIMEOUT 5000
_IOPROCS_PER_DEV 2
_IOPROCS_FOR_PRIO 0
_IOPROCS_FOR_READER 0
_USE_IOPROCS_ONLY NO
_IOPROCS_SWITCH 2
LRU_FOR_SCAN NO
_PAGE_SIZE 8192
_PACKET_SIZE 131072
_MINREPLY_SIZE 4096
_MBLOCK_DATA_SIZE 32768
_MBLOCK_QUAL_SIZE 32768
_MBLOCK_STACK_SIZE 32768
_MBLOCK_STRAT_SIZE 16384
_WORKSTACK_SIZE 8192
_WORKDATA_SIZE 8192
_CAT_CACHE_MINSIZE 262144
CAT_CACHE_SUPPLY 131072
INIT_ALLOCATORSIZE 262144
ALLOW_MULTIPLE_SERVERTASK_UKTS NO
_TASKCLUSTER_01 tw;al;ut;2000*sv,100*bup;10*ev,10*gc;
_TASKCLUSTER_02 ti,100*dw;63*us;
_TASKCLUSTER_03 equalize
_DYN_TASK_STACK NO
_MP_RGN_QUEUE YES
_MP_RGN_DIRTY_READ DEFAULT
_MP_RGN_BUSY_WAIT DEFAULT
_MP_DISP_LOOPS 2
_MP_DISP_PRIO DEFAULT
MP_RGN_LOOP -1
_MP_RGN_PRIO DEFAULT
MAXRGN_REQUEST -1
_PRIO_BASE_U2U 100
_PRIO_BASE_IOC 80
_PRIO_BASE_RAV 80
_PRIO_BASE_REX 40
_PRIO_BASE_COM 10
_PRIO_FACTOR 80
_DELAY_COMMIT NO
_MAXTASK_STACK 512
MAX_SERVERTASK_STACK 500
MAX_SPECIALTASK_STACK 500
_DW_IO_AREA_SIZE 50
_DW_IO_AREA_FLUSH 50
FBM_VOLUME_COMPRESSION 50
FBM_VOLUME_BALANCE 10
_FBM_LOW_IO_RATE 10
CACHE_SIZE 262144
_DW_LRU_TAIL_FLUSH 25
XP_DATA_CACHE_RGNS 0
_DATA_CACHE_RGNS 64
XP_CONVERTER_REGIONS 0
CONVERTER_REGIONS 8
XP_MAXPAGER 0
MAXPAGER 64
SEQUENCE_CACHE 1
_IDXFILE_LIST_SIZE 2048
VOLUMENO_BIT_COUNT 8
OPTIM_MAX_MERGE 500
OPTIM_INV_ONLY YES
OPTIM_CACHE NO
OPTIM_JOIN_FETCH 0
JOIN_SEARCH_LEVEL 0
JOIN_MAXTAB_LEVEL4 16
JOIN_MAXTAB_LEVEL9 5
_READAHEAD_BLOBS 32
CLUSTER_WRITE_THRESHOLD 80
CLUSTERED_LOBS NO
RUNDIRECTORY /var/opt/sdb/data/wrk/SPDT
OPMSG1 /dev/console
OPMSG2 /dev/null
_KERNELDIAGFILE knldiag
KERNELDIAGSIZE 800
_EVENTFILE knldiag.evt
_EVENTSIZE 0
_MAXEVENTTASKS 2
_MAXEVENTS 100
_KERNELTRACEFILE knltrace
TRACE_PAGES_TI 2
TRACE_PAGES_GC 20
TRACE_PAGES_LW 5
TRACE_PAGES_PG 3
TRACE_PAGES_US 10
TRACE_PAGES_UT 5
TRACE_PAGES_SV 5
TRACE_PAGES_EV 2
TRACE_PAGES_BUP 0
KERNELTRACESIZE 5369
EXTERNAL_DUMP_REQUEST NO
_AK_DUMP_ALLOWED YES
_KERNELDUMPFILE knldump
_RTEDUMPFILE rtedump
_UTILITY_PROTFILE dbm.utl
UTILITY_PROTSIZE 100
_BACKUP_HISTFILE dbm.knl
_BACKUP_MED_DEF dbm.mdf
_MAX_MESSAGE_FILES 0
_SHMKERNEL 44601
__PARAM_CHANGED___ 0
__PARAM_VERIFIED__ 2008-05-03 23:12:55
DIAG_HISTORY_NUM 2
DIAG_HISTORY_PATH /var/opt/sdb/data/wrk/SPDT/DIAGHISTORY
_DIAG_SEM 1
SHOW_MAX_STACK_USE NO
SHOW_MAX_KB_STACK_USE NO
LOG_SEGMENT_SIZE 2133
_COMMENT
SUPPRESS_CORE YES
FORMATTING_MODE PARALLEL
FORMAT_DATAVOLUME YES
OFFICIAL_NODE
UKT_CPU_RELATIONSHIP NONE
HIRES_TIMER_TYPE CPU
LOAD_BALANCING_CHK 30
LOAD_BALANCING_DIF 10
LOAD_BALANCING_EQ 5
HS_STORAGE_DLL libhsscopy
HS_SYNC_INTERVAL 50
USE_OPEN_DIRECT YES
USE_OPEN_DIRECT_FOR_BACKUP NO
SYMBOL_DEMANGLING NO
EXPAND_COM_TRACE NO
JOIN_TABLEBUFFER 128
SET_VOLUME_LOCK YES
SHAREDSQL YES
SHAREDSQL_CLEANUPTHRESHOLD 25
SHAREDSQL_COMMANDCACHESIZE 262144
MEMORY_ALLOCATION_LIMIT 0
USE_SYSTEM_PAGE_CACHE YES
USE_COROUTINES YES
FORBID_LOAD_BALANCING YES
MIN_RETENTION_TIME 60
MAX_RETENTION_TIME 480
MAX_SINGLE_HASHTABLE_SIZE 512
MAX_HASHTABLE_MEMORY 5120
ENABLE_CHECK_INSTANCE YES
RTE_TEST_REGIONS 0
HASHED_RESULTSET YES
HASHED_RESULTSET_CACHESIZE 262144
CHECK_HASHED_RESULTSET 0
AUTO_RECREATE_BAD_INDEXES NO
AUTHENTICATION_ALLOW
AUTHENTICATION_DENY
TRACE_AK NO
TRACE_DEFAULT NO
TRACE_DELETE NO
TRACE_INDEX NO
TRACE_INSERT NO
TRACE_LOCK NO
TRACE_LONG NO
TRACE_OBJECT NO
TRACE_OBJECT_ADD NO
TRACE_OBJECT_ALTER NO
TRACE_OBJECT_FREE NO
TRACE_OBJECT_GET NO
TRACE_OPTIMIZE NO
TRACE_ORDER NO
TRACE_ORDER_STANDARD NO
TRACE_PAGES NO
TRACE_PRIMARY_TREE NO
TRACE_SELECT NO
TRACE_TIME NO
TRACE_UPDATE NO
TRACE_STOP_ERRORCODE 0
TRACE_ALLOCATOR 0
TRACE_CATALOG 0
TRACE_CLIENTKERNELCOM 0
TRACE_COMMON 0
TRACE_COMMUNICATION 0
TRACE_CONVERTER 0
TRACE_DATACHAIN 0
TRACE_DATACACHE 0
TRACE_DATAPAM 0
TRACE_DATATREE 0
TRACE_DATAINDEX 0
TRACE_DBPROC 0
TRACE_FBM 0
TRACE_FILEDIR 0
TRACE_FRAMECTRL 0
TRACE_IOMAN 0
TRACE_IPC 0
TRACE_JOIN 0
TRACE_KSQL 0
TRACE_LOGACTION 0
TRACE_LOGHISTORY 0
TRACE_LOGPAGE 0
TRACE_LOGTRANS 0
TRACE_LOGVOLUME 0
TRACE_MEMORY 0
TRACE_MESSAGES 0
TRACE_OBJECTCONTAINER 0
TRACE_OMS_CONTAINERDIR 0
TRACE_OMS_CONTEXT 0
TRACE_OMS_ERROR 0
TRACE_OMS_FLUSHCACHE 0
TRACE_OMS_INTERFACE 0
TRACE_OMS_KEY 0
TRACE_OMS_KEYRANGE 0
TRACE_OMS_LOCK 0
TRACE_OMS_MEMORY 0
TRACE_OMS_NEWOBJ 0
TRACE_OMS_SESSION 0
TRACE_OMS_STREAM 0
TRACE_OMS_VAROBJECT 0
TRACE_OMS_VERSION 0
TRACE_PAGER 0
TRACE_RUNTIME 0
TRACE_SHAREDSQL 0
TRACE_SQLMANAGER 0
TRACE_SRVTASKS 0
TRACE_SYNCHRONISATION 0
TRACE_SYSVIEW 0
TRACE_TABLE 0
TRACE_VOLUME 0
CHECK_BACKUP NO
CHECK_DATACACHE NO
CHECK_KB_REGIONS NO
CHECK_LOCK NO
CHECK_LOCK_SUPPLY NO
CHECK_REGIONS NO
CHECK_TASK_SPECIFIC_CATALOGCACHE NO
CHECK_TRANSLIST NO
CHECK_TREE NO
CHECK_TREE_LOCKS NO
CHECK_COMMON 0
CHECK_CONVERTER 0
CHECK_DATAPAGELOG 0
CHECK_DATAINDEX 0
CHECK_FBM 0
CHECK_IOMAN 0
CHECK_LOGHISTORY 0
CHECK_LOGPAGE 0
CHECK_LOGTRANS 0
CHECK_LOGVOLUME 0
CHECK_SRVTASKS 0
OPTIMIZE_AGGREGATION YES
OPTIMIZE_FETCH_REVERSE YES
OPTIMIZE_STAR_JOIN YES
OPTIMIZE_JOIN_ONEPHASE YES
OPTIMIZE_JOIN_OUTER YES
OPTIMIZE_MIN_MAX YES
OPTIMIZE_FIRST_ROWS YES
OPTIMIZE_OPERATOR_JOIN YES
OPTIMIZE_JOIN_HASHTABLE YES
OPTIMIZE_JOIN_HASH_MINIMAL_RATIO 1
OPTIMIZE_OPERATOR_JOIN_COSTFUNC YES
OPTIMIZE_JOIN_PARALLEL_MINSIZE 1000000
OPTIMIZE_JOIN_PARALLEL_SERVERS 0
OPTIMIZE_JOIN_OPERATOR_SORT YES
OPTIMIZE_QUAL_ON_INDEX YES
DDLTRIGGER YES
SUBTREE_LOCKS NO
MONITOR_READ 2147483647
MONITOR_TIME 2147483647
MONITOR_SELECTIVITY 0
MONITOR_ROWNO 0
CALLSTACKLEVEL 0
OMS_RUN_IN_UDE_SERVER NO
OPTIMIZE_QUERYREWRITE OPERATOR
TRACE_QUERYREWRITE 0
CHECK_QUERYREWRITE 0
PROTECT_DATACACHE_MEMORY NO
LOCAL_REDO_LOG_BUFFER_SIZE 0
FILEDIR_SPINLOCKPOOL_SIZE 10
TRANS_HISTORY_SIZE 0
TRANS_THRESHOLD_VALUE 60
ENABLE_SYSTEM_TRIGGERS YES
DBFILLINGABOVELIMIT 70L80M85M90H95H96H97H98H99H
DBFILLINGBELOWLIMIT 70L80L85L90L95L
LOGABOVELIMIT 50L75L90M95M96H97H98H99H
AUTOSAVE 1
BACKUPRESULT 1
CHECKDATA 1
EVENT 1
ADMIN 1
ONLINE 1
UPDSTATWANTED 1
OUTOFSESSIONS 3
ERROR 3
SYSTEMERROR 3
DATABASEFULL 1
LOGFULL 1
LOGSEGMENTFULL 1
STANDBY 1
USESELECTFETCH YES
USEVARIABLEINPUT NO
UPDATESTAT_PARALLEL_SERVERS 0
UPDATESTAT_SAMPLE_ALGO 1
SIMULATE_VECTORIO IF_OPEN_DIRECT_OR_RAW_DEVICE
COLUMNCOMPRESSION YES
TIME_MEASUREMENT NO
CHECK_TABLE_WIDTH NO
MAX_MESSAGE_LIST_LENGTH 100
SYMBOL_RESOLUTION YES
PREALLOCATE_IOWORKER NO
CACHE_IN_SHARED_MEMORY NO
INDEX_LEAF_CACHING 2
NO_SYNC_TO_DISK_WANTED NO
SPINLOCK_LOOP_COUNT 30000
SPINLOCK_BACKOFF_BASE 1
SPINLOCK_BACKOFF_FACTOR 2
SPINLOCK_BACKOFF_MAXIMUM 64
ROW_LOCKS_PER_TRANSACTION 50
USEUNICODECOLUMNCOMPRESSION NO
About sending you the data from the tables: I don't have permission to do that, since all the data is in a production system and the customer does not give me the rights to send any information. Sorry about that.
best regards
Clóvis -
Exclude a table from time-based reduction
Hi,
I'd like to exclude a table from time-based reduction. How can I do this? Is there any manual on how to do the customizing in TDMS?
Regards
p121848
Thank you Markus for your annotation.
AUFK is technically declared as a master data table, but it stores orders. Standard
TDMS provides a reduction of this file, and in the client copies we did via TDMS a lot of records disappeared when we selected time-based reduction.
Now we found out that some transactions such as OKB9 or KA03 refer to old internal orders. So we would like to maintain the customizing to exclude AUFK from the reduction. But this is not possible in activity TD02P_TABLEINFO, because no changes can be made to tables that have transfer_status 1 = Reduce.
You can manipulate the transfer status in file CNVTDMS_02_STEMP before getting to activity TD02P_TABLEINFO, but I wonder whether this is the way one should do it.
Any idea ?
Regards p121848 -
I have a DMP file and I want to import all tables except one (it's a large table). I know I can do that with a parameter file, but my question is, can I.........
import all but the 1 table (the large one), then start the import of that 1 table and have people start using the DB while that one table is importing? This large table does not have critical data they need to access right away, so my thought was that I could import everything else first (a small amount of data), start the import of the large table, and the users could access the DB while that 1 table is importing.
Due to special circumstances IMP/EXP is their only backup solution (please no lectures on that, I KNOW, I know....., but it is what it is)
I can't think of anything that would prevent this from working. You just need to make sure that the large table does not have any ref constraints, or other associations with the other tables, that may get screwed up while the other users are using the database.
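A sketch of that two-step sequence with the classic imp utility (user, file, and table names are placeholders; BIG_TABLE is not from the original post):
imp scott/tiger FILE=full.dmp LOG=small_tables.log PARFILE=small_tables.par
# small_tables.par lists TABLES=(...) with every table except BIG_TABLE
imp scott/tiger FILE=full.dmp LOG=big_table.log TABLES=(BIG_TABLE)
# the second import can run while users already work with the rest of the schema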
Dean -
EXPORT all but one table?
Is there a simple way to exclude one (and only one) table from an EXPORT and/or IMPORT script?
I have over 200 tables and I want to export them all except one.
If you are on 10g then you have it through Data Pump.
It has INCLUDE and EXCLUDE options, with which you can specify the name and type of the objects to include or exclude.
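For instance, a rough sketch (schema, directory, and table names are placeholders, and the quoting typically needs escaping on the shell or a parfile):
expdp scott/tiger DIRECTORY=dump_dir DUMPFILE=all_but_one.dmp SCHEMAS=scott EXCLUDE=TABLE:"='BIG_TABLE'"
-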
Hi,
We are running SAP on AIX 5.3 and DB2 UDB 8.2.3. We have taken a full offline backup at the OS level, e.g. /usr, /sapmnt, /usr/sap/trans, /home, /db2/<SID>, /db2/log_dir, etc. May I restore the backup on another server without installing the DB2 software and run SAP? Or do we need to re-install DB2 & SAP and then restore the database backup?
Please guide.
-SN
Hi Jennifer,
it is much easier and less error prone to do a system copy with the built-in DB2 backup and restore commands. Copying DB2 by copying files from one server to the other should only be done by experienced DB2 administrators, and in most cases there is no need to do this. For example, using the wrong OS tools may create sparsely allocated DB2 container files and can create unexpected error situations.
The DB2 V8 software installer on AIX uses the OS installer. The DB2 V8 software is installed under /opt, and the /db2/db2<sid>/sqllib directory contains links to this directory. Therefore it is not enough to copy the /usr/sap, /sapmnt and /db2 directories. With DB2 V8 the software installation directory is fixed. With DB2 V9+ the software can be installed in any directory, and by default new SAPINST versions install the DB2 software under /db2/db2<sid>.
To do a cold copy of DB2 you may also need to use the db2inidb and db2relocatedb tools (mostly used for split mirror copies), especially if you want to rename your dbsid. You can find more information about those tools in the DB2 documentation. The SAP utility brdb6brt (note 867914) may also provide some help.
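A minimal sketch of the backup/restore route (the database name PRD, the paths, and the timestamp are placeholders; a real SAP homogeneous system copy involves more steps than this):
db2 backup db PRD to /backup/PRD
Then, on the target host, with the DB2 software and instance already installed:
db2 restore db PRD from /backup/PRD taken at 20080101120000 into PRD
If the container paths differ on the target, use a redirected restore (RESTORE ... REDIRECT, followed by SET TABLESPACE CONTAINERS and RESTORE ... CONTINUE).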
Regards
Frank -
Dear Expert,
We tried backup verification using a TSM script and got the result successfully. Now we plan to perform backup verification using a DB2 command. Our questions are:
1. Which command and options can we rely on?
2. Which is the valid way to verify a backup: the TSM or the DB2 command?
Thanks in advance.
Kind Regards,
Rudi
Hi,
I tried using the db2adutl command as follows:
- db2adutl verify full taken at 20090405103108 database <dbname>, showed the following:
Warning: There are no file spaces created by DB2 on the ADSM server
Warning: No DB2 backup images found in ADSM for any alias.
- db2adutl query full db <dbname> nodename <nodename>, showed the same as above:
Warning: There are no file spaces created by DB2 on the ADSM server
Warning: No DB2 backup images found in ADSM for any alias.
Please advise.
Thanks and Regards,
Rudi -
How to exclude some tables from schema-level replication????
Hi,
I am working on Oracle 10g Streams replication.
My replication type is "Schema Based".
So can anyone assist me in understanding how to exclude some tables from schema-based replication?
Thanks,
Faziarain
You can use rules and include them in the rule set. Let's say you don't want LCRs to be queued for TABLE_1 in schema SALES: write two rules, one for DDL and another for DML, with a NOT logical condition.
BEGIN
  DBMS_RULE_ADM.CREATE_RULE(
    rule_name => 'admin.sales_not_table_1_dml',
    condition => ' (:dml.get_object_owner() = ''SALES'' AND NOT ' ||
                 ' :dml.get_object_name() = ''TABLE_1'') AND ' ||
                 ' :dml.is_null_tag() = ''Y'' ');
  DBMS_RULE_ADM.CREATE_RULE(
    rule_name => 'admin.sales_not_table_1_ddl',
    condition => ' (:ddl.get_object_owner() = ''SALES'' AND NOT ' ||
                 ' :ddl.get_object_name() = ''TABLE_1'') AND ' ||
                 ' :ddl.is_null_tag() = ''Y'' ');
END;
/
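Once created, the rules still have to be added to the rule set used by your capture (or propagation/apply) process before they take effect; a sketch, where the rule set name is a placeholder you would look up in DBA_RULE_SETS / DBA_STREAMS_RULES:
BEGIN
  DBMS_RULE_ADM.ADD_RULE(rule_name => 'admin.sales_not_table_1_dml', rule_set_name => 'strmadmin.my_rule_set');
  DBMS_RULE_ADM.ADD_RULE(rule_name => 'admin.sales_not_table_1_ddl', rule_set_name => 'strmadmin.my_rule_set');
END;
/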
Just go through this document once: http://download.oracle.com/docs/cd/B28359_01/server.111/b28321/strms_rules.htm#i1017376
Edited by: user8710159 on Sep 16, 2009 5:21 PM -
How to exclude some tables in import
Hi all,
How to exclude some tables? For example, I have an Oracle
export file which contains a hundred tables, but I want to import all the tables except one, i.e. (some table name). Can I achieve this goal?
Thanks in advance.
Hello,
It depends on your Oracle Release.
Up to Oracle *9.2* you have just the classical export/import utility.
So, you'll have to list the tables you want to import with the following parameter:
TABLES=(
<table_1>,
<table_n>
)
Starting with *10.1* you have Data Pump (expdp / impdp). With this new utility you have the very useful parameter EXCLUDE. It works like this:
EXCLUDE=TABLE:"='<table>'"
Please find a link about this topic:
http://www.oraclefaq.net/2007/03/09/expdp-datapump-excludeinclude-parameters/
Hope this helps.
Best regards,
Jean-Valentin