MaxDB and datafiles
Hello Everyone,
I am facing a problem with file system design for an ECC 6.0 system. We are working with MaxDB on the Red Hat Linux platform. We have created a file system "/sapdb/sid/sapdata1" for the datafiles. We got information from the Linux administrator that this file system cannot be extended to a higher size in the future.
So if I come across a situation where the file system "/sapdb/sid/sapdata1" is full, can I create a new file system "/sapdb/sid/sapdata2" and use it for new MaxDB datafiles, as we do in Oracle? Will this work, or should I give more space to "/sapdb/sid/sapdata1"? Currently the size given is 80 GB. We are having problems with space availability on the local hard disk. In the future we will get NAS or SAN storage, so we could use that for creating the new file system "/sapdb/sid/sapdata2".
Please help us in this regard
Thanks
Hany
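For reference, a MaxDB database is extended by adding new data volumes, and a new volume can be placed in a different file system such as /sapdb/sid/sapdata2. A hedged dbmcli sketch of the approach described above (the database name, credentials, volume path, and size in pages are assumptions, not values from this thread):

```
dbmcli -d SID -u control,password
dbmcli on SID>db_addvolume DATA /sapdb/sid/sapdata2/DISKD0002 F 1310720
dbmcli on SID>exit
```

The new volume is registered in the database configuration, so the file system layout underneath can differ from the existing sapdata1.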
Hello All,
We are using MaxDB for our content management system. We have MaxDB data volumes of 200 MB each.
We can add more data volumes, but we are not able to increase the size of an existing data volume (we want data volumes of 1 GB each).
Can you please advise how we can increase the size of a data volume?
Thanks in advance and best regards.
Avijit
Similar Messages
-
MaxDB and LiveCache installation
Hello All,
Trying to install liveCache 7.7 as part of SCM 5.1 (= SCM 2007).
I am trying to install liveCache on SuSE 10 (Linux) with MaxDB. I would like to know whether I need to install MaxDB before I install the liveCache server using sapinst. Please advise!
I thought sapinst would ask for the MaxDB files during the installation process. I started sapinst without installing MaxDB, and now I am at the step (Instance parameters) --> I checked volume medium type 'Raw Devices' --> clicked Next, entered /sapdb as the raw device according to the SAP installation manual, clicked Next, and got this error message: "Selected item is not a valid Raw Device: /sapdb. SOLUTION: Select a valid raw device."
Can someone please tell me what exactly a valid raw device is?
Thanks in Advance
Kumar
Hello Kumar,
-> Please run /nLC10 -> set LCA -> click on Integration
=> You will get the screen where you could see the User Data settings.
You have to set the DBM user and the standard liveCache user <LCUSER> with the correct
passwords. To run the liveCache application you have to connect first from the application
as the LCUSER.
-> Please log on to the liveCache server as a user in the 'sdba' group and run:
dbmcli -d <LC-name> -u control,<pwd>
<enter>
dbmcli on <LC-name>>db_state
< if the liveCache is offline =>next step, skip it if it's already 'online' >
dbmcli on <LC-name>>db_online
dbmcli on <LC-name>>sql_connect superdba,admin
dbmcli on <LC-name>>sql_execute select * from users
dbmcli on <LC-name>>sql_release
dbmcli on <LC-name>>exit
-> Please follow the liveCache installation guide. If the liveCache was installed, there are also post-installation steps.
-> For SAP liveCache documentation in English < See the SAP note 767598 >:
http://help.sap.com/saphelp_nw70/helpdata/en/b6/cbdda6248ff648ae9f39f8e28eb24f/frameset.htm
-> Database Administration in CCMS: SAP liveCache Technology
-> liveCache Assistant
-> Integration
-> <User Data>
Thank you and best regards, Natalia Khlopina -
Taking rollback segment and datafile offline caused application error
One of the DBAs added a large rollback segment, which caused the RMAN backup to abort. The rollback segment was taken offline and its datafile was taken offline; all went normally, with no errors. An application then started getting errors; the database and the application were taken down and back up, with no errors on either, but the problem was still there. The RBS datafile and the RBS were placed back online, and the application worked properly. It looks as if Oracle let us take the RBS and its datafile offline with active segments in the RBS. Is this possible? If so, it means you can pull the rug out from under Oracle and it doesn't even complain.
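For reference, one way to check whether a rollback segment still has active transactions before taking it offline is to join V$TRANSACTION to V$ROLLNAME; a sketch, with a hypothetical segment name:

```sql
-- Count active transactions per rollback segment; RBS_BIG is a hypothetical name.
SELECT r.name AS rbs_name, COUNT(*) AS active_txns
FROM   v$transaction t, v$rollname r
WHERE  r.usn = t.xidusn
AND    r.name = 'RBS_BIG'
GROUP  BY r.name;
```

If this returns rows, the segment still carries uncommitted work and should not be taken offline.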
Please check for any transactions written at the application level that are
explicitly assigned to the RBS which was taken offline. -
Tablespace and Datafiles... HELP
Hi, I'm new to Oracle. I'm trying to install Oracle 8.1.5 under
Tru64 UNIX, and I'm having a hard time creating the tablespaces and
datafiles for the oracle user. I cannot find documentation
about it; can somebody here tell me how to do it? It's
urgent, please help me!
Hi Ruben,
I'm an Ora admin working under NT.
Use Oracle Server Manager to create datafiles and tablespaces.
You can find the executable file name for Oracle Server Manager
(svrmgr??.exe for NT) under the BIN directory of ORACLE_HOME.
Hope this tip helps you.
Regards
Sukumar
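In Server Manager (or SQL*Plus) the statements themselves look roughly like this; a hedged sketch with hypothetical names, paths, and sizes:

```sql
-- Create a tablespace with an initial datafile (path and sizes are examples only).
CREATE TABLESPACE app_data
  DATAFILE '/u01/oradata/ORCL/app_data01.dbf' SIZE 100M
  AUTOEXTEND ON NEXT 10M MAXSIZE 500M;

-- Give the user a quota on it so objects can be created there.
ALTER USER scott QUOTA UNLIMITED ON app_data;
```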
Ruben Gomez (guest) wrote:
: Hi, I'm new to Oracle. I'm trying to install Oracle 8.1.5 under
: Tru64 UNIX, and I'm having a hard time creating the tablespaces and
: datafiles for the oracle user. I cannot find documentation
: about it; can somebody here tell me how to do it? It's
: urgent, please help me!
-
I have 3 test databases with Oracle 11.2.3 on the same box
data.dbf and index.dbf were deleted from 2 of the databases in SQL, and now EP Manager can't get to any of the tablespaces. When clicking on Server - Tablespace, I get the following error:
java.sql.SQLException: ORA-01116: error in opening database file 2 ORA-01110: data file 2: '/u02/oradata/LAWTEST/dev_data01.dbf' ORA-27041: unable to open file IBM AIX RISC System/6000 Error: 2: No such file or directory Additional information: 3 Additional information: 4 Additional information: 4194304
I have tried dropping the tablespaces and refreshing the database from a Prod backup, but no luck; I have been searching the web for any info.
Any help would be greatly appreciated.
Hi,
I would like to ask where the information regarding tablespaces and
datafiles is stored in the database (for example, which datafiles
belong to a given tablespace). Is it in the controlfile? I think not.
Where is it?
Thank you,
Mihaela
Hi,
You may query the DBA_TABLESPACES or V$TABLESPACE views to gain information on tablespaces. Similarly, the DBA_DATA_FILES and V$DATAFILE views will list the details related to the data files, such as tablespace name/id, file location, etc.
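For example, the mapping of datafiles to tablespaces can be read straight from DBA_DATA_FILES; a small sketch:

```sql
-- Which datafiles belong to which tablespace, with sizes in MB.
SELECT tablespace_name,
       file_name,
       ROUND(bytes / 1024 / 1024) AS size_mb
FROM   dba_data_files
ORDER  BY tablespace_name, file_id;
```

V$DATAFILE reads the same information from the controlfile, so it is usable while the database is only mounted.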
Regards -
Difference Between Cofiles and Datafiles
Hi
Can anyone please explain the difference between cofiles and datafiles in the trans directory?
I need to know their role and the internal process of transporting a request or a client.
Thanks in Advance
Dan
Hi,
The data file contains the actual change data.
The cofile contains information on the change request (the different steps of a change request and their exit codes). For a request <SID>K900123, for example, the data file is usually data/R900123.<SID> and the cofile is cofiles/K900123.<SID>.
So you will find cofiles of roughly the same, small size for all requests.
Regards
Payal
-
Control file lost and datafile addeed restore/recovery with no data loss
Here is what I have tried:
created a new table called t2 and made sure its data went to a specific tablespace
took a level 0 backup
removed the control files
added a couple of datafiles to the above tablespace and then inserted more data
then went on to restore the control file and the database... but a datafile still could not be opened. What did I do wrong here?
SQL> @datafile
-- list of datafile
Tablespace File Typ Tablespac File Stat Used MB Free MB FILE_MB MAXMB Datafile_name FILE_ID AUT
UNDOTBS1 Datafile ONLINE AVAILABLE 16 84 100 1,024 /data/trgt/undotbs01.dbf 3 YES
USERS Datafile ONLINE AVAILABLE 1153 895 2048 3,072 /data3/trgt/user02.dbf 5 YES
CNT_TST Datafile ONLINE AVAILABLE 1 9 10 0 /data3/trgt/cnt_tst01.dbf 7 NO
SYSAUX Datafile ONLINE AVAILABLE 626 35 660 32,768 /data/trgt/sysaux01.dbf 2 YES
USERS Datafile ONLINE AVAILABLE 2031 17 2048 2,048 /data3/trgt/move/users01.dbf 4 YES
SYSTEM Datafile ONLINE AVAILABLE 712 58 770 32,768 /data/trgt/system01.dbf 1 YES
USERS Datafile ONLINE AVAILABLE 65 35 100 32,768 /data3/trgt/users03.dbf 6 YES
7 rows selected.
-- new table is created called t2 and its going into TS called cnt_tst
SQL> CREATE TABLE TEST.T2
(
C1 DATE,
C2 NUMBER,
C3 NUMBER,
C4 VARCHAR2(300 BYTE)
)
TABLESPACE cnt_tst;
Table created.
-- data inserted
SQL> INSERT INTO
test.T2
SELECT
*
FROM
(SELECT
SYSDATE,
ROWNUM C2,
DECODE(MOD(ROWNUM,100),99,99,1) C3,
RPAD('A',300,'A') C4
FROM
DUAL
CONNECT BY
LEVEL <= 10000)
;
10000 rows created.
SQL> commit;
Commit complete.
-- to check if cnt_tst has any free space or not; as we can see, it's full
SQL> @datafile
Tablespace File Typ Tablespac File Stat Used MB Free MB FILE_MB MAXMB Datafile_name FILE_ID AUT
UNDOTBS1 Datafile ONLINE AVAILABLE 16 84 100 1,024 /data/trgt/undotbs01.dbf 3 YES
USERS Datafile ONLINE AVAILABLE 1153 895 2048 3,072 /data3/trgt/user02.dbf 5 YES
SYSAUX Datafile ONLINE AVAILABLE 626 35 660 32,768 /data/trgt/sysaux01.dbf 2 YES
USERS Datafile ONLINE AVAILABLE 2031 17 2048 2,048 /data3/trgt/move/users01.dbf 4 YES
SYSTEM Datafile ONLINE AVAILABLE 712 58 770 32,768 /data/trgt/system01.dbf 1 YES
USERS Datafile ONLINE AVAILABLE 65 35 100 32,768 /data3/trgt/users03.dbf 6 YES
CNT_TST Datafile ONLINE AVAILABLE 10 0 10 0 /data3/trgt/cnt_tst01.dbf 7 NO
7 rows selected.
SQL> select count(*) from test.t2;
COUNT(*)
10000
1 row selected.
-- to get a count and max on date
SQL> select max(c1) from test.t2;
MAX(C1)
29-feb-12 13:47:52
1 row selected.
SQL> -- AT THIS POINT A LEVEL 0 BACKUP IS TAKEN (using backup database plus archivelog)
SQL> -- now control files are removed
SQL> select name from v$controlfile;
NAME
/ctrl/trgt/control01.ctl
/ctrl/trgt/control02.ctl
2 rows selected.
SQL>
SQL> ! rm /ctrl/trgt/control01.ctl
SQL> ! rm /ctrl/trgt/control02.ctl
SQL> ! ls -ltr /ctrl/trgt/
ls: /ctrl/trgt/: No such file or directory
SQL>
-- new datafile is added to CNT_TST TABLESPACE and new data is added as well
SQL> ALTER TABLESPACE CNT_TST ADD DATAFILE '/data3/trgt/CNT_TST02.dbf' SIZE 100M AUTOEXTEND OFF;
Tablespace altered.
SQL> ALTER SYSTEM CHECKPOINT;
System altered.
SQL> alter system switch logfile;
System altered.
SQL> /
System altered.
SQL> /
System altered.
SQL> ALTER TABLESPACE CNT_TST ADD DATAFILE '/data3/trgt/CNT_TST03.dbf' SIZE 100M AUTOEXTEND OFF;
Tablespace altered.
SQL> INSERT INTO
test.T2
SELECT
*
FROM
(SELECT
SYSDATE,
ROWNUM C2,
DECODE(MOD(ROWNUM,100),99,99,1) C3,
RPAD('A',300,'A') C4
FROM
DUAL
CONNECT BY
LEVEL <= 10000)
;
10000 rows created.
SQL> /
10000 rows created.
SQL> commit;
Commit complete.
SQL> INSERT INTO
test.T2
SELECT
*
FROM
(SELECT
SYSDATE,
ROWNUM C2,
DECODE(MOD(ROWNUM,100),99,99,1) C3,
RPAD('A',300,'A') C4
FROM
DUAL
CONNECT BY
LEVEL <= 40000)
;
40000 rows created.
SQL> commit;
Commit complete.
SQL> @datafile
-- to make sure new datafile has been registered with the DB
Tablespace File Typ Tablespac File Stat Used MB Free MB FILE_MB MAXMB Datafile_name FILE_ID AUT
CNT_TST Datafile ONLINE AVAILABLE 9 91 100 0 /data3/trgt/CNT_TST03.dbf 9 NO
UNDOTBS1 Datafile ONLINE AVAILABLE 16 84 100 1,024 /data/trgt/undotbs01.dbf 3 YES
USERS Datafile ONLINE AVAILABLE 1153 895 2048 3,072 /data3/trgt/user02.dbf 5 YES
CNT_TST Datafile ONLINE AVAILABLE 9 91 100 0 /data3/trgt/CNT_TST02.dbf 8 NO
SYSAUX Datafile ONLINE AVAILABLE 626 35 660 32,768 /data/trgt/sysaux01.dbf 2 YES
USERS Datafile ONLINE AVAILABLE 2031 17 2048 2,048 /data3/trgt/move/users01.dbf 4 YES
SYSTEM Datafile ONLINE AVAILABLE 712 58 770 32,768 /data/trgt/system01.dbf 1 YES
USERS Datafile ONLINE AVAILABLE 65 35 100 32,768 /data3/trgt/users03.dbf 6 YES
CNT_TST Datafile ONLINE AVAILABLE 10 0 10 0 /data3/trgt/cnt_tst01.dbf 7 NO
9 rows selected.
-- now the count and max ... note that the count before the backup was 10000 and max(c1) was different
SQL> select count(*) from test.t2;
COUNT(*)
70000
1 row selected.
SQL> select max(c1) from test.t2;
MAX(C1)
29-feb-12 13:58:25
1 row selected.
SQL> -- now restore starts
SQL> shutdown abort;
ORACLE instance shut down.
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
[oracle@berry trgt]$ rman
Recovery Manager: Release 11.2.0.1.0 - Production on Wed Feb 29 14:01:48 2012
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
RMAN> connect catalog rman/pass@rcat
connected to recovery catalog database
RMAN> connect target /
connected to target database (not started)
RMAN> startup nomount;
Oracle instance started
Total System Global Area 188313600 bytes
Fixed Size 1335388 bytes
Variable Size 125833124 bytes
Database Buffers 58720256 bytes
Redo Buffers 2424832 bytes
RMAN> restore controlfile from autobackup;
Starting restore at 29-FEB-12 14:02:37
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=20 device type=DISK
recovery area destination: /backup/trgt/flash_recovery_area
database name (or database unique name) used for search: TRGT
channel ORA_DISK_1: no AUTOBACKUPS found in the recovery area
channel ORA_DISK_1: looking for AUTOBACKUP on day: 20120229
channel ORA_DISK_1: AUTOBACKUP found: /backup/trgt/backup/cont_c-3405317011-20120229-09
channel ORA_DISK_1: restoring control file from AUTOBACKUP /backup/trgt/backup/cont_c-3405317011-20120229-09
channel ORA_DISK_1: control file restore from AUTOBACKUP complete
output file name=/ctrl/trgt/control01.ctl
output file name=/ctrl/trgt/control02.ctl
Finished restore at 29-FEB-12 14:02:39
RMAN> alter database mount;
database mounted
released channel: ORA_DISK_1
RMAN> recover database;
Starting recover at 29-FEB-12 14:02:55
Starting implicit crosscheck backup at 29-FEB-12 14:02:55
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=20 device type=DISK
Crosschecked 96 objects
Finished implicit crosscheck backup at 29-FEB-12 14:02:57
Starting implicit crosscheck copy at 29-FEB-12 14:02:57
using channel ORA_DISK_1
Finished implicit crosscheck copy at 29-FEB-12 14:02:57
searching for all files in the recovery area
cataloging files...
no files cataloged
using channel ORA_DISK_1
starting media recovery
archived log for thread 1 with sequence 13 is already on disk as file /redo_archive/trgt/online/redo01.log
archived log for thread 1 with sequence 14 is already on disk as file /redo_archive/trgt/online/redo02.log
archived log for thread 1 with sequence 15 is already on disk as file /redo_archive/trgt/online/redo03.log
archived log file name=/redo_archive/trgt/archive/1_10_776523284.dbf thread=1 sequence=10
archived log file name=/redo_archive/trgt/archive/1_11_776523284.dbf thread=1 sequence=11
archived log file name=/redo_archive/trgt/archive/1_12_776523284.dbf thread=1 sequence=12
archived log file name=/redo_archive/trgt/online/redo01.log thread=1 sequence=13
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 02/29/2012 14:02:59
ORA-01422: exact fetch returns more than requested number of rows
RMAN-20505: create datafile during recovery
RMAN-11003: failure during parse/execution of SQL statement: alter database recover logfile '/redo_archive/trgt/online/redo01.log'
ORA-00283: recovery session canceled due to errors
ORA-01244: unnamed datafile(s) added to control file by media recovery
ORA-01110: data file 9: '/data3/trgt/CNT_TST03.dbf'
RMAN> -- went to session 2 and re-created the unnamed datafiles
SQL> select name from v$datafile;
NAME
/data/trgt/system01.dbf
/data/trgt/sysaux01.dbf
/data/trgt/undotbs01.dbf
/data3/trgt/move/users01.dbf
/data3/trgt/user02.dbf
/data3/trgt/users03.dbf
/data3/trgt/cnt_tst01.dbf
/oracle/app/product/11.2.0/dbhome_1/dbs/UNNAMED00008
/oracle/app/product/11.2.0/dbhome_1/dbs/UNNAMED00009
9 rows selected.
SQL> alter database create datafile '/oracle/app/product/11.2.0/dbhome_1/dbs/UNNAMED00008' as '/data3/trgt/CNT_TST02.dbf';
Database altered.
SQL> alter database create datafile '/oracle/app/product/11.2.0/dbhome_1/dbs/UNNAMED00009' as '/data3/trgt/CNT_TST03.dbf';
Database altered.
SQL> select name from v$datafile;
NAME
/data/trgt/system01.dbf
/data/trgt/sysaux01.dbf
/data/trgt/undotbs01.dbf
/data3/trgt/move/users01.dbf
/data3/trgt/user02.dbf
/data3/trgt/users03.dbf
/data3/trgt/cnt_tst01.dbf
/data3/trgt/CNT_TST02.dbf
/data3/trgt/CNT_TST03.dbf
9 rows selected.
After the above was done, I went back to session 1 and tried to recover the DB:
RMAN> recover database;
Starting recover at 29-FEB-12 14:06:16
using channel ORA_DISK_1
starting media recovery
archived log for thread 1 with sequence 13 is already on disk as file /redo_archive/trgt/online/redo01.log
archived log for thread 1 with sequence 14 is already on disk as file /redo_archive/trgt/online/redo02.log
archived log for thread 1 with sequence 15 is already on disk as file /redo_archive/trgt/online/redo03.log
archived log file name=/redo_archive/trgt/online/redo01.log thread=1 sequence=13
archived log file name=/redo_archive/trgt/online/redo02.log thread=1 sequence=14
archived log file name=/redo_archive/trgt/online/redo03.log thread=1 sequence=15
media recovery complete, elapsed time: 00:00:00
Finished recover at 29-FEB-12 14:06:17
RMAN> alter database open resetlogs;
database opened
new incarnation of database registered in recovery catalog
starting full resync of recovery catalog
full resync complete
starting full resync of recovery catalog
full resync complete
RMAN> exit
Recovery Manager complete.
[oracle@berry trgt]$
[oracle@berry trgt]$
[oracle@berry trgt]$ sq
SQL*Plus: Release 11.2.0.1.0 Production on Wed Feb 29 14:07:18 2012
Copyright (c) 1982, 2009, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> alter session set NLS_DATE_FORMAT="dd-mon-yy hh24:mi:ss:
2
SQL>
SQL> alter session set NLS_DATE_FORMAT="dd-mon-yy hh24:mi:ss";
Session altered.
SQL> select count(*) from test.t2;
select count(*) from test.t2
ERROR at line 1:
ORA-00376: file 8 cannot be read at this time
ORA-01110: data file 8: '/data3/trgt/CNT_TST02.dbf'
SQL> select max(c1) from test.t2;
select max(c1) from test.t2
ERROR at line 1:
ORA-00376: file 8 cannot be read at this time
ORA-01110: data file 8: '/data3/trgt/CNT_TST02.dbf'
SQL> alter database datafile 8 online;
alter database datafile 8 online
ERROR at line 1:
ORA-01190: control file or data file 8 is from before the last RESETLOGS
ORA-01110: data file 8: '/data3/trgt/CNT_TST02.dbf'
So what did I do wrong in my recovery that I could not get my data back? How can I avoid this, and how do I restore my DB?
Edited by: user8363520 on Feb 29, 2012 12:24 PM
user8363520 wrote:
SQL> ALTER TABLESPACE CNT_TST ADD DATAFILE '/data3/trgt/CNT_TST02.dbf' SIZE 100M AUTOEXTEND OFF;
Tablespace altered.
SQL> ALTER SYSTEM CHECKPOINT;
System altered.
Up to this point I was clear, but now I can't understand: when you had actually dropped the control files from your database (while it was running), how could you perform "alter system checkpoint" and the other "alter tablespace ..." commands? Once the controlfile is inaccessible, the Oracle database is not supposed to function. -
Database migration to MaxDB and performance problem during R3load import
Hi All Experts,
We want to migrate our SAP landscape from Oracle to MaxDB (SAP DB). We have exported a database of 1.2 TB using the package- and table-level splitting methods in 16 hrs.
Now I am importing into MaxDB, but the import is running very slowly (more than 72 hrs).
Details of the import process are below.
We have been using the distribution monitor to import into the target system with MaxDB database release 7.7. We are using three parallel application servers for the import, with distributed R3load processes on each application server with 8 CPUs.
The database system is configured with 8 CPUs (single core) and 32 GB physical RAM. The MaxDB cache size for the DB instance is 24 GB. As per the SAP recommendation, we are running R3load with 16 parallel processes. Still, the import is too slow at more than 72 hrs (not acceptable).
We have split 12 big tables into small units using table splitting, and we have also split packages into small pieces to run in parallel. We maintained the load order in descending order of table and package size, but we are still not able to improve the import performance.
MAXDB parameters are set as per below.
CACHE_SIZE 3407872
MAXUSERTASKS 60
MAXCPU 8
MAXLOCKS 300000
CAT_CACHE_SUPPLY 262144
MaxTempFilesPerIndexCreation 131072
We are using all required SAP kernel utilities (R3load etc.) at a recent release during this process.
So now I ask all SAP and MaxDB experts to suggest all possible inputs to improve the R3load import performance on the MaxDB database.
Every input will be highly appreciated.
Please let me know if I need to provide more details about import.
Regards
Santosh
Hello,
description of parameter:
MaxTempFilesPerIndexCreation (from version 7.7.0.3)
Number of temporary result files in the case of parallel indexing
The database system indexes large tables using multiple server tasks. These server tasks write their results to temporary files. When the number of these files reaches the value of this parameter, the database system has to merge the files before it can generate the actual index. This results in a decline in performance.
As for the maximum value, I wouldn't exceed it; for a 26 GB cache the value 131072 should be sufficient. I used the same value for a 36 GB CACHE_SIZE.
On the other side, do you know which task is time-consuming? Is it the table import? The index creation?
Maybe you can run migtime on the import directory to find out.
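Checking or adjusting such a parameter with dbmcli might look like this; a hedged sketch where the database name and credentials are assumptions, and note that some parameters only take effect after a restart:

```
dbmcli -d SID -u control,password
dbmcli on SID>param_directget MaxTempFilesPerIndexCreation
dbmcli on SID>param_directput MaxTempFilesPerIndexCreation 131072
dbmcli on SID>exit
```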
Stanislav -
NOARCHIVE mode and datafile DROP OFFLINE
I have a 10gR2 database on Linux that I use for testing. The DB is in NOARCHIVELOG mode.
I had created test tablespaces on a NAS (as opposed to the system tablespaces on the local machine). The NAS crashed, losing the drive mapping on the DB machine. Of course, I couldn't start the database due to the missing datafiles. I did an 'alter database datafile ... offline drop', which put the datafiles in recovery status in the datafiles view.
Amazingly, I was able to recover the NAS several days later, and the datafiles.
Is there any way to restore them after an offline drop? When I try to online them, I get 'recovery needed'; when I try to recover, none of the redo logs have the right checkpoint (the checkpoint needed is older than what is in the redo logs).
I don't care about any data that might have been lost. The data was static for months.
Any suggestions?
Edited by: user10822921 on Nov 4, 2009 5:03 AM
If the database is asking for recovery when you online the files, and you don't have those transactions in the redo logs, you can't recover to a consistent point in time to online the files.
# Note that the following is not supported by Oracle Support. You may want to ask them for a second opinion.
You could try taking a copy of all files to another location, recreating the controlfile, and using the parameter "_allow_resetlogs_corruption=TRUE" to allow "open resetlogs" to ignore the inconsistent SCN. If it works, you should be able to export the data from the copy database, drop the objects in PROD, and re-import. -
SAP MaxDB and EclipseLink/JPA?
Folks;
is anyone out here using MaxDB along with the EclipseLink JPA 2.0 implementation, especially as far as it concerns automatic table creation using predefined @Entity classes? So far I have mainly seen a whole load of different error messages and haven't yet found a way to get things to work cleanly; at the moment, for example:
Call: ALTER TABLE S_CFG_SS_CFG_ATTS ADD CONSTRAINT FK_S_CFG_SS_CFG_ATTS_atts_ID FOREIGN KEY (atts_ID) REFERENCES SS_CFG_ATTS (ID)
Query: DataModifyQuery(sql="ALTER TABLE S_CFG_SS_CFG_ATTS ADD CONSTRAINT FK_S_CFG_SS_CFG_ATTS_atts_ID FOREIGN KEY (atts_ID) REFERENCES SS_CFG_ATTS (ID)")
[EL Warning]: 2009-11-24 12:25:30.376--ServerSession(30633470)--Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.0.0.v20091031-r5713): org.eclipse.persistence.exceptions.DatabaseException
Internal Exception: com.sap.dbtech.jdbc.exceptions.DatabaseException: [-5016] (at 71): Missing delimiter: )
Error Code: -5016
Does anyone around here know how (if so) to get EclipseLink to behave well teaming with MaxDB?
TIA and all the best,
Kristian
Please check for official EclipseLink MaxDB support. You might want to post this question to the Eclipse forum.
[http://wiki.eclipse.org/EclipseLink/Development/DatabasePlatform/MaxDBPlatform]
Harald -
I would like to install SAP NetWeaver 2004s SR2 Developer Workplace on my Windows XP desktop. I am using the official DVD containing the Developer Workplace, but the installation requires MaxDB. Where can I get a copy of MaxDB suitable for my installation? On the internet I found many places from which I can download a copy of MaxDB, but which one is the right one?
Thanks in advance.
It's on the DVD - no need to install anything separately; the installer will automatically install and configure the database.
Markus -
Hi,
I would like to know if the community releases of MaxDB can run in a VMware Windows guest.
I had access to SAP Note 1142243, which mentions that this is the case as of version 7.6.04.07.
I don't quite understand the hierarchy of MaxDB releases, but I read somewhere that the third number is more like a feature number than a release order.
So what about 7.6.3.x and 7.6.5.x?
Thanks,
Guillaume
Hi Guillaume,
basically you may also run a 7.6.3.x MaxDB in a VM host.
I did that, e.g., with Parallels on Mac OS X as the host, with Windows XP and Fedora 8 as guest OSes, and had no issues.
Anyhow, the officially supported versions are 7.6.4.x or higher - so 7.6.5.x from the SDN download will be perfectly supported in VM scenarios.
Concerning the version string, this is the explanation from note "820824 - FAQ: MaxDB / SAP liveCache technology"
4. How can I interpret the MaxDB version string?
Interpretation for Version 7.5 or lower, for example kernel version 7.5:
7.5.0 BUILD 026-123-094-430
Major Number: 7
Minor Number: 5
Correction Level: 0
Build Number: 026
Support Information: 123-094-430
The interpretation of the MaxDB/liveCache version string changes with MaxDB Version 7.6.
Interpretation for Version 7.6 or higher, for example kernel version 7.6:
7.6.01 BUILD 020-123-115-206
Major Number: 7
Minor Number: 6
Support Package: 01.
Patch Level: 020 (Build Number)
Support Information: 123-115-206
As of Version 7.6, a model was introduced with Support Packages and patch levels.
Support Packages:
- are created on a quarterly basis (approximately)
- in addition to the error corrections, Support Packages also include new functions (change requests).
Patch levels:
- are created more often than Support Packages, according to requirements (error situations),
- only high priority errors are corrected,
- corrections for delivered versions always result in a new patch level number.
Best regards,
Lars -
I need to start a heterogeneous copy using R3load (Linux --> HP-UX IA64) of a 2.6 TB MaxDB 7.7 database. I checked some notes beforehand and found the above note stating:
The general R3ta support for MaxDB will be available for SAP Systems
with MaxDB 7.8 and above. At the moment there is only limited MaxDB
R3ta support available.
The source system contains tables with up to 110,000,000 rows, so it's just impossible to copy them sequentially; that would take weeks.
What kind of "limitation" is implied, and is there no chance to get R3ta to work? This would kill our current project.
Markus
Hey Roland,
> just FYI: I'm checking your question with the creator of the mentioned note. I'll get back to you.
Thank you!
We had - quite some time ago - a call about R3ta and its slowness during calculation times (OSS 1039828 / 2008); maybe that is somehow related.
Markus -
SAP Newbie - MaxDB and NetWeaver limitations
We are a new SAP partner (SR) licensing mySAP Business Suite on the MaxDB/Windows platform. What are the limitations regarding further NetWeaver configurations (ordering)? I saw on the Product Availability Matrix that only MaxDB/Linux configurations are available for EP. Does that mean that we will not be able to license EP?
Thanks in advance,
Igor
Hi Igor,
The Product Availability Matrix document is the most up-to-date information on the availability of SAP NetWeaver components. It will be supported with SAP NetWeaver '04 SR1, which should be available later in the year.
If you can't wait, I would suggest that you contact your partner contact at SAP and discuss getting either the Linux version or a supported Windows/DB combination like SQL Server.
I hope this helps,
Mike. -
Database Performance (Tablespaces and Datafiles)
Hi guys!
What is best for database performance: a tablespace with various datafiles distributed across different filesystems, or tablespaces with various datafiles in only one filesystem?
Thanks,
Augusto
It depends on the contents of the tablespaces, tablespace-level LOGGING/NOLOGGING, and the environment: whether it is OLTP or OLAP, the LUN presentation to the server (with or without RAID), and the SAN reads and writes per second.
In general, a tablespace with various datafiles distributed across different filesystems/LUNs is the practice for non-dev/system-test databases.
Moreover, using ASM is better than standard filesystems.
Regards,
Kamalesh