Datafile autoextend
Hi,
How can I check whether the autoextend feature of a datafile is working? Is there an SQL command we can use to compare the actual datafile size with the incremented datafile size?
We have six datafiles of 2 GB each in the PSAPDAT tablespace, with AUTOEXTEND ON NEXT 20M MAXSIZE 10G.
Does that mean the tablespace can grow up to 60 GB? How can I check this? The issue is that I see the datafiles in DBA_DATA_FILES at the same size (i.e. 2 GB).
Regards
Hello Mohammed,
at first just a suggestion:
> We have six datafiles of 2 GB each in the PSAPDAT tablespace, with AUTOEXTEND ON NEXT 20M MAXSIZE 10G.
Please increase the AUTOEXTEND increment to 1 or 2 GB. Your current autoextend size is too small to accommodate a "big" database object extent in an LMT tablespace in one step; in a DMT tablespace it can be much worse.
> Does it mean that the tablespace can grow up to 60 GB?
Yes, your tablespace can grow to 6 * 10 GB = 60 GB.
> Is there any SQL command we can use to compare the actual datafile size and the incremented datafile size?
You can query DBA_DATA_FILES to get the actual size and the maximum size, but you cannot "check" the AUTOEXTEND feature as such - the datafiles are extended only when needed.
> SQL> SELECT FILE_NAME, BYTES, MAXBYTES FROM DBA_DATA_FILES;
But you can compare the initial datafile size at creation time with the current size:
> SQL> SELECT NAME, BYTES, CREATE_BYTES FROM V$DATAFILE;
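To watch autoextend actually happen, you can combine current size, ceiling, and remaining headroom in one query. A sketch, using the tablespace name from the question (all columns are standard DBA_DATA_FILES columns):

```sql
-- Current size vs. growth ceiling per datafile of PSAPDAT.
-- HEADROOM_MB shows how far each file can still autoextend.
SELECT file_name,
       ROUND(bytes    / 1024 / 1024) AS current_mb,
       ROUND(maxbytes / 1024 / 1024) AS max_mb,
       ROUND((maxbytes - bytes) / 1024 / 1024) AS headroom_mb,
       autoextensible
  FROM dba_data_files
 WHERE tablespace_name = 'PSAPDAT'
 ORDER BY file_name;
```

If BYTES never moves above 2 GB, the tablespace simply has not needed to grow yet; autoextend fires only when the existing free space in a file is exhausted.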
Regards
Stefan
Similar Messages
-
Difference between datafile autoextent and tablespace extent management auto
hi guys,
the above got me thinking.
Blocks are arranged into extents; the rest is unclear to me.
flaskvacuum wrote:
> hi guys, the above got me thinking. Blocks are arranged into extents; the rest is unclear to me.
Datafile autoextend (note: "autoextend", not "auto extent") increases the size of the datafile when all space in it is occupied and all its extents are used. So when objects need to grow, Oracle will increase the size of the datafile (if autoextend is ON) and allocate extents to the objects.
Also see http://docs.oracle.com/cd/B28359_01/server.111/b28310/dfiles003.htm
Tablespace extent management auto - a locally managed tablespace keeps track of extents in bitmaps. When creating this type of tablespace you can specify the AUTOALLOCATE clause or the UNIFORM clause. With AUTOALLOCATE, Oracle assigns new extents to objects with sizes determined internally. With UNIFORM, Oracle assigns uniformly sized extents to objects: if you specify a uniform size of 1M, every extent Oracle allocates will be exactly 1M. That is not the case with AUTOALLOCATE.
AUTOALLOCATE determines internally what extent size is best to allocate to a segment.
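The difference can be seen directly in the CREATE TABLESPACE syntax. A minimal sketch (paths, names, and sizes are made up for illustration):

```sql
-- UNIFORM: every extent is exactly the specified size (here 1M).
CREATE TABLESPACE demo_uniform
  DATAFILE '/oradata/demo_uniform01.dbf' SIZE 100M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;

-- AUTOALLOCATE: Oracle chooses extent sizes internally as segments grow.
CREATE TABLESPACE demo_auto
  DATAFILE '/oradata/demo_auto01.dbf' SIZE 100M
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE;
```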
http://docs.oracle.com/cd/B19306_01/server.102/b14231/tspaces.htm -
Why didn't my autoextend datafiles grow for a tablespace?
These are my autoextend datafiles for PSAPSR3700. DB02 reported that the PSAPSR3700 tablespace is 100% utilized. Shouldn't the datafiles autoextend by themselves to solve the 100% utilization problem?
SQL> select file_name,bytes,maxbytes,blocks,maxblocks from dba_data_files where tablespace_name = 'PSAPSR3700';
FILE_NAME
BYTES MAXBYTES BLOCKS MAXBLOCKS
/oracle/TDM/sapdata3/sr3700_1/sr3700.data1
3565158400 1.0486E+10 435200 1280000
/oracle/TDM/sapdata3/sr3700_2/sr3700.data2
3690987520 1.0486E+10 450560 1280000
/oracle/TDM/sapdata3/sr3700_3/sr3700.data3
3670016000 1.0486E+10 448000 1280000
FILE_NAME
BYTES MAXBYTES BLOCKS MAXBLOCKS
/oracle/TDM/sapdata3/sr3700_4/sr3700.data4
1551892480 1.0486E+10 189440 1280000
/oracle/TDM/sapdata4/sr3700_5/sr3700.data5
1488977920 5368709120 181760 655360
/oracle/TDM/sapdata4/sr3700_6/sr3700.data6
1488977920 5368709120 181760 655360
Hi,
You can consider the following factors when planning file system capacity to hold the tablespace's datafiles.
1. Current Size of the Datafiles associated with Tablespace
[Showing Disk Volumes with BR*Tools|http://help.sap.com/saphelp_nw70/helpdata/en/85/0ba86126e3b34091746bcfc2aa8a1d/content.htm]
[Showing Tablespaces with BR*Tools|http://help.sap.com/saphelp_nw70/helpdata/en/1d/9c8d3f7057eb0ce10000000a114084/content.htm]
[ Showing Data Files with BR*Tools|http://help.sap.com/saphelp_nw70/helpdata/en/05/710a4654de0b4b9ad131879a2c68d3/content.htm]
2. Autoextend - if set to yes, the new data file is created autoextensible. In this case, maxsize and incrsize play an important role in capacity planning.
3. maxsize - for autoextensible data files, this parameter specifies the maximum size in MB up to which the data file can grow.
4. incrsize - for autoextensible data files, this parameter specifies the size in MB by which the data file is automatically increased when necessary.
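Note that autoextend fires only at the moment a segment actually requests a new extent and no free space remains in the file, so a high used percentage alone does not trigger growth. To see whether each file above can still grow, compare BYTES with MAXBYTES (a sketch; the tablespace name comes from the post, the 100 MB threshold is arbitrary):

```sql
-- Flags files that have little or no autoextend headroom left.
SELECT file_name,
       ROUND(bytes    / 1024 / 1024) AS current_mb,
       ROUND(maxbytes / 1024 / 1024) AS max_mb,
       autoextensible,
       CASE WHEN maxbytes - bytes < 100 * 1024 * 1024
            THEN 'NEAR MAXSIZE'
            ELSE 'CAN GROW'
       END AS headroom
  FROM dba_data_files
 WHERE tablespace_name = 'PSAPSR3700';
```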
Regards,
Bhavik G. Shroff -
Problems with autoextend on an 8.1.7.3 64-bit installation
Hi,
two weeks ago I installed an 8.1.7.3 database with 4 tablespaces on a test machine,
two for data and two for indexes. The data tablespaces and the index tablespaces
were each split across 4 datafiles. The datafiles of the data tablespaces had a starting size of
2000 MB each, autoextend on, and a next extent of 500 MB. The datafiles of the index tablespaces
had a starting size of 500 MB each, autoextend on, and a next size of 500 MB.
I had a look at the datafile parameters: every file is set to autoextend, but the
autoextend doesn't work. I analysed this problem completely, and everything is set up correctly.
The only thing I noticed is that the data is loaded with SQL*Loader.
Could this be a known bug in this version, or is it a well-known problem?
Does anyone have a tip for what I can do?
Thank you for your help!
TobiasQ -
1) What is the actual error you are getting? Cut and paste the error/alert log/OS error.
2) Is the table running out of extents? (MAXEXTENTS?)
3) Is the mount point where all the datafiles are stored capable of handling files over 2 GB? (see largefiles)
4) Is your sqlldr using "direct=yes"? If so, check the HWM of the tables as well as the tablespace.
5) Are the tables sized correctly?
6) Are you truncating the tables before the load? If so, are you using the DROP STORAGE clause?
Too many questions - very little detail about your problem.
Not to offend you (apologies), but this should help: http://www.tuxedo.org/~esr/faqs/smart-questions.html
HTH,
V
-
Should Datafile Autoextend feature be turned on or off?
Hi Guys,
I have a question about the datafile autoextend on/off feature. I hear that this feature should be turned off, but no one in my network seems to have a convincing justification for that claim. Is it true that it should be turned off? If so, why? I see no evidence that it should not be turned on. The databases in question are 9i and 10g on Solaris and Linux. Any suggestions/comments are welcome.
--MM
I believe that the autoextend feature should be set to “on” for most tablespaces (their datafiles, to be exact).
First, some thoughts about monitoring (and forecasting) tablespace size. Those important activities should be done in any production system. I even wrote a presentation (NYOUG, Brainsurface VirthaThon) on (among many other things) how to forecast tablespace size using the data in the OEM repository (http://iiotzov.files.wordpress.com/2011/08/iotzov_oem_repository.pdf) . Manual or automatic, simple or complex, all monitoring and forecasting methods have a significant limitation – they cannot handle a sudden change in the growth pattern (a surge). That’s where “autoextend on” comes in. It is very important to use “autoextend on” in addition to proper monitoring/forecasting, not in lieu of it.
Imagine you have an ASM disk group with 50G free space. You do not use “autoextend on”, so one small (3G data + 2G free) tablespace gets a surge of data (around 4G) and runs out of disk space. Your boss asks: “So how did we run out of disk space when there was more than 40G free at all times?” What do you say? “The application team did not inform us”.”It was an unexpected situation”. One can do better than that...
Now, about the valid concern that unchecked “autoextend” growth can fill the disk and cause DB-wide problems. Proper planning can practically eliminate those concerns:
-> TEMP and UNDO tablespaces should not have autoextend on
->All tablespaces should have free space already allocated, so there would be enough to hold the data for some time (3 months for instance) without growing the data files. That would ensure that the system works properly even if there is no disk space to grow the data files.
->Archive log should be in a separate disk group
-> The amount a single tablespace can grow should be less than the available free space at all times. If we have 50G of free space, no single tablespace should be able to grow by more than 45G. The policies governing how much a tablespace can grow can be even stricter - e.g. no tablespace can grow more than 20G, and no two tablespaces together can fill the system.
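The policy above translates into per-file settings like these (a sketch; the file names and limits are illustrative, not from the post):

```sql
-- Bounded autoextend for an application tablespace: it can absorb a surge,
-- but can never take more than its 20G share of the free space.
ALTER DATABASE DATAFILE '/oradata/app_data01.dbf'
  AUTOEXTEND ON NEXT 1G MAXSIZE 20G;

-- Fixed sizes for TEMP and UNDO, per the first recommendation above.
ALTER DATABASE TEMPFILE '/oradata/temp01.dbf' AUTOEXTEND OFF;
ALTER DATABASE DATAFILE '/oradata/undo01.dbf' AUTOEXTEND OFF;
```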
Iordan Iotzov
http://iiotzov.wordpress.com/ -
hello
I am trying to import the migration content, and it is showing the
following error:
ERROR 2007-08-08 17:04:56
MDB-06068 Bad autoextent mode. <br>SOLUTION: Use values ON or OFF.
ERROR 2007-08-08 17:04:56
MDB-06010 Key db operation (read or write) failed. Could not work on table tTablespaces, records WHERE dbSid = 'TRY'.<br>SOLUTION: Check that keydb.xml is writable in the local installation directory (default: sapinst_instdir).
kindly help
Source system:
HP-UX B.11.23
Oracle 9.2.0.6
Destination system:
Win 2K3
Oracle 9.2.0.6
thx
regards
shoeb
Hi,
in your Enterprise Manager, look at your tablespaces/datafiles.
Maybe one of them is full, and you need to enable "autoextend" mode so it can grow bigger.
OR
you don't have enough space on C: (or other partition).
Brad -
Hi Experts,
Please let me know how I can add datafiles using BR*Tools.
Please note the current datafile sizes from DB02:
Table Space Size (kb) Free size Used (%) Tab/ind Extents AutoExt (kb) Used (%) Status Backup
PSAPSR3 40.960.000 832.512 97 155.893 194.35 204.800.000 20 ONLINE NOT ACTIVE
PSAPSR3700 65.024.000 386.752 99 908 10.971 163.840.000 39 ONLINE NOT ACTIVE
PSAPSR3USR 20.48 18.304 10 33 33 10.240.000 0 ONLINE NOT ACTIVE
PSAPTEMP 2.048.000 2.048.000 0 0 0 10.240.000 0 ONLINE NOT ACTIVE
PSAPUNDO 8.949.760 8.475.456 5 17 292 10.240.000 5 ONLINE NOT ACTIVE
SYSAUX 266.24 22.464 91 949 1.888 10.240.000 2 ONLINE NOT ACTIVE
SYSTEM 839.68 2.112 99 1.204 2.739 10.240.000 8 ONLINE NOT ACTIVE
Total 118.108.160 11.785.600 90 159.004 210.273
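For reference, adding a datafile ultimately comes down to SQL like the following (a sketch: the path follows SAP's sapdata naming convention but is invented; in an SAP system prefer BRSPACE/BR*Tools so its naming conventions and logs stay consistent):

```sql
-- Hypothetical new datafile for PSAPSR3, sized and capped like the
-- existing ones in the overview above.
ALTER TABLESPACE PSAPSR3
  ADD DATAFILE '/oracle/TDM/sapdata5/sr3_17/sr3.data17'
  SIZE 2000M AUTOEXTEND ON NEXT 20M MAXSIZE 10000M;
```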
Regards,
Reddy V
Hi Andreas,
Thank you for your detailed information.
As per your information I have checked all the tablespace sizes, and all have enough space.
But I have observed in DB02, under the Tables and Indexes option, that 3 indexes are missing in the database:
Tables Indexes
Total number 70.933 86.319
Total size/kb 32.346.368 40.426.560
More than 1 extent 2.799 3.313
Missing in database 0 3
Missing in R/3 DDIC 0 0
Space-critical objects 0 0
The indexes are
/BIC/FZCF_FLWN-010
/BIC/FZCF_FLWN-020
/BIC/FZCF_FLWN-040
Please let me know your suggestions/solutions on the same.
Regards,
Prakash V -
Error while opening a datafile in dataprep editor
Hi
Error while opening a datafile in dataprep editor
"opening the datafile failed see the message panel for details"
"server.com.DEV.Perf Read data file September 17, 2009 8:31:51 AM EDT Failed : 1030001"
Can anyone help me through this error?
I am trying to load data by creating a rules file, but I could not open the file in the dataprep editor.
Thanks,
Ram
In some cases restarting the EAS service has fixed this error for me.
-
Renaming the Physical Filename for Datafiles in SQL Server 2008
Can anyone tell me how to change the physical filename of the datafiles? There doesn't seem to be any documentation on this, yet it's quite easy to change the logical filename.
There are several ways to make this change; however, to rename the physical database files at the operating system level you will have to take the database offline.
1. Use SSMS to take the database offline (right-click the database, select Tasks > Take Offline), change the names of the files at the OS level, and then bring it online.
2. Detach the database, rename the files, and then attach the database pointing to the renamed files.
3. Back up the database and then restore it, changing the file locations during the restore process.
4. Use T-SQL:
ALTER DATABASE databaseName SET OFFLINE
GO
ALTER DATABASE databaseName MODIFY FILE (NAME = db, FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL.2\MSSQL\Data\db.mdf')
GO
--if changing the log file name
ALTER DATABASE databaseName MODIFY FILE (NAME = db_log, FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL.2\MSSQL\Data\db.ldf')
GO
ALTER DATABASE databaseName SET ONLINE
GO
For more info: http://technet.microsoft.com/en-us/library/ms174269.aspx -
Logical corruption in datafile
What is logical corruption?
How can this occur in a datafile? Is it caused by the disk?
How can it be avoided?
Is it possible to check for this at regular intervals with some job script? Any idea what command to use? Will DBVERIFY do it?
Any good reading/URLs are most welcome.
Thank You Very Much.
user642237 wrote:
what is logical corruption.
How this can occur in datafile , is it related caused due to disk.
how to avoid this.
Is it possible to check the this on regular interval. with some job script .. any idea what command how to do it .. does dbverify will do.
Any good reading/url is most welcomed.
Thank You Very Much.
What's the DB version and OS? Where did you read the term "logical corruption" applied to datafiles? AFAIK, datafiles only get physically corrupted. Logical corruption happens within blocks - for example, an index entry pointing to a null rowid. I am not sure I have come across any situation or reference where this kind of corruption is described for files as well. To check for it, the best tool is RMAN, which can do the job with some simple commands.
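The RMAN check mentioned above can be as simple as this (a sketch; the first command is run at the RMAN prompt, the query from SQL*Plus):

```sql
-- At the RMAN prompt: read every block, checking for both physical and
-- logical corruption; nothing is backed up, findings are recorded in
-- V$DATABASE_BLOCK_CORRUPTION.
-- RMAN> BACKUP VALIDATE CHECK LOGICAL DATABASE;

-- Then, in SQL*Plus, list any corrupt block ranges found:
SELECT file#, block#, blocks, corruption_type
  FROM v$database_block_corruption;
```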
HTH
Aman.... -
How can I clean all data in a database but keep all datafiles with the same names
hi,
How can I easily clean all data in a database but keep all datafiles with the same names and locations, just like a newly created database?
DBCA has two choices: create a template without datafiles, or with all datafiles, but not with empty datafiles.
thanks
What version is your database? DBCA in 10gR2 allows you to create a template from an existing database using only the structure. From DBCA:
From an existing database (structure only)
The template will contain structural information about the source database, including database options, tablespaces, datafiles, and initialization parameters specified in the source database. User-defined schemas and their data will not be part of the created template.
Can I select from a table, skipping extents linked to lost datafiles?
Hi~,
I need your help to recover my database.
I'm using Oracle 9.2.0.8 on Fedora 3 in noarchivelog mode,
and I don't have any backup.
Last night I experienced a hard disk failure.
I tried OS-level recovery, but I lost some datafiles of a tablespace.
Anyway, I wanted to recover my database without the data in the lost datafiles,
so I issued "alter database datafile ... offline drop" and
started the Oracle instance.
But the datafiles were not removed from the DBA_DATA_FILES view, and
extents linked to the lost datafiles were not removed from the DBA_EXTENTS view!
When querying a table containing extents linked to the lost datafiles,
I got "ORA-00376: file xxx cannot be read at this time".
So, my question is that..
HOW CAN I SELECT FROM THAT TABLE WITHOUT SCANNING EXTENTS LINKED WITH LOST DATA FILES?
Thanks.
Hi,
Without being in archivelog mode and without a backup, one can't do any sort of recovery. That's why backups and archivelog mode are so important.
The OFFLINE DROP command never actually drops the datafile. It merely marks the file as offline in the control file, in anticipation of the tablespace being dropped later. It does not update any view to stop the files being used or shown to you.
This is what the documentation says about recovering a database in NOARCHIVELOG mode:
http://download.oracle.com/docs/cd/B19306_01/backup.102/b14191/osrecov.htm#i1007937
You do need a backup in order to get those tables readable. Oracle doesn't have any feature that can offline or skip the missing extents and let you read the data without them.
HTH
Aman.... -
Removing completely datafile.....
I added another datafile to a tablespace, but it was a mistake and I wanted to remove it.
So I ran ALTER TABLESPACE <tablespace> DROP DATAFILE '<datafile>'.
When it finished, I noticed the datafile was still located in the "oradata" folder, but in DBA_DATA_FILES I no longer see the datafile belonging to the tablespace (the datafile is gone from the view).
I tried to delete it manually from the folder, but (of course) it would not delete; it shows that the file is busy.
I know that if I put my database in the mount state I can delete this file, but is there another way to delete it from the folder without stopping the database?
There is no other clause for DROP DATAFILE, because the file should already have been removed according to the ALTER TABLESPACE documentation:
>
Specify DROP to drop from the tablespace an empty datafile or tempfile specified by filename or file_number. This clause causes the datafile or tempfile to be removed from the data dictionary and deleted from the operating system. The database must be open at the time this clause is specified.
>
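As a concrete sketch of that documented clause (the tablespace and file names are illustrative):

```sql
-- Succeeds only while the database is open and the file is empty;
-- removes the file from the dictionary AND deletes it from the OS.
ALTER TABLESPACE users DROP DATAFILE '/oradata/users02.dbf';
```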
The only other possibility would be to use DROP TABLESPACE ... INCLUDING CONTENTS AND DATAFILES but that's different. -
Drop a datafile from physical standby's control file
Hi,
I am trying to create a physical standby database for my production...
1) I have taken cold backup of my primary database on 18-Nov-2013...
2) I added a datafile on 19-nov-2013 ( 'O:\ORADATA\SFMS\SFMS_DATA4.DBF' )
3) The standby control file was generated on 20-Nov-2013 (today) after shutting down and then mounting the primary database...
When i try to recover the newly setup standby using archive files, i am getting the following error (datafile added on 19th Nov is missing)
SQL> recover standby database;
ORA-00283: recovery session canceled due to errors
ORA-01110: data file 39: 'O:\ORADATA\SFMS\SFMS_DATA4.DBF'
ORA-01157: cannot identify/lock data file 39 - see DBWR trace file
ORA-01110: data file 39: 'O:\ORADATA\SFMS\SFMS_DATA4.DBF'
How to overcome this situation...
Can i delete the entry for the newly added datafile from the backup control file ?
When I tried to delete the datafile using "alter tablespace SFMS_BR_DATA drop datafile 'O:\ORADATA\SFMS\SFMS_DATA4.DBF';", it said that the database should be open:
SQL> alter tablespace SFMS_BR_DATA drop datafile 'O:\ORADATA\SFMS\SFMS_DATA4.DBF'
alter tablespace SFMS_BR_DATA drop datafile 'O:\ORADATA\SFMS\SFMS_DATA4.DBF'
ERROR at line 1:
ORA-01109: database not open
SQL> show parameter STANDBY_FILE_MANAGEMENT
NAME TYPE VALUE
standby_file_management string AUTO
SQL> alter system set STANDBY_FILE_MANAGEMENT=manual;
System altered.
SQL> show parameter STANDBY_FILE_MANAGEMENT
NAME TYPE VALUE
standby_file_management string MANUAL
SQL> alter tablespace SFMS_BR_DATA drop datafile 'O:\ORADATA\SFMS\SFMS_DATA4.DBF'
alter tablespace SFMS_BR_DATA drop datafile 'O:\ORADATA\SFMS\SFMS_DATA4.DBF'
ERROR at line 1:
ORA-01109: database not open
Regards,
Jibu
Jibu wrote:
Hi,
I am trying to create a physical standby database for my production...
1) I have taken cold backup of my primary database on 18-Nov-2013...
2) I added a datafile on 19-nov-2013 ( 'O:\ORADATA\SFMS\SFMS_DATA4.DBF' )
3) Standby control file was generated on 20-Nov-2013 (today) after shutting down and then mounting the primary database..
Hi,
What is your version?
If you added a new datafile or created a new tablespace, take a fresh backup to restore the newly created standby database.
If your standby database is running well and the Data Guard configuration succeeded, then this datafile will be created on the standby side, too.
Setting STANDBY_FILE_MANAGEMENT=AUTO is best practice.
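The recommended setting can be applied like this (a sketch; SCOPE=BOTH assumes the instance runs from an spfile - with a pfile use SCOPE=MEMORY and edit the pfile by hand):

```sql
-- With this set on the standby, datafiles added on the primary are
-- created automatically on the standby during redo apply.
ALTER SYSTEM SET standby_file_management = AUTO SCOPE=BOTH;
```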
When i try to recover the newly setup standby using archive files, i am getting the following error (datafile added on 19th Nov is missing)
SQL> recover standby database;
ORA-00283: recovery session canceled due to errors
ORA-01110: data file 39: 'O:\ORADATA\SFMS\SFMS_DATA4.DBF'
ORA-01157: cannot identify/lock data file 39 - see DBWR trace file
ORA-01110: data file 39: 'O:\ORADATA\SFMS\SFMS_DATA4.DBF'
How to overcome this situation...
Can i delete the entry for the newly added datafile from the backup control file ?
There is no need to delete the datafile on the standby side; you must recreate the standby database, or take an RMAN backup and restore it to the standby side again.
When i tried to delete datafile using "alter tablespace SFMS_BR_DATA drop datafile 'O:\ORADATA\SFMS\SFMS_DATA4.DBF';", it is showing that database should be open..
SQL> alter tablespace SFMS_BR_DATA drop datafile 'O:\ORADATA\SFMS\SFMS_DATA4.DBF'
alter tablespace SFMS_BR_DATA drop datafile 'O:\ORADATA\SFMS\SFMS_DATA4.DBF'
ERROR at line 1:
ORA-01109: database not open
SQL> show parameter STANDBY_FILE_MANAGEMENT
NAME TYPE VALUE
standby_file_management string AUTO
SQL> alter system set STANDBY_FILE_MANAGEMENT=manual;
System altered.
SQL> show parameter STANDBY_FILE_MANAGEMENT
NAME TYPE VALUE
standby_file_management string MANUAL
SQL> alter tablespace SFMS_BR_DATA drop datafile 'O:\ORADATA\SFMS\SFMS_DATA4.DBF'
alter tablespace SFMS_BR_DATA drop datafile 'O:\ORADATA\SFMS\SFMS_DATA4.DBF'
ERROR at line 1:
ORA-01109: database not open
This will not work: a physical standby must be bit-for-bit identical to the primary database.
Regards
Mahir M. Quluzade -
Error while doing an expdp on a large datafile
Hello,
I tried an export using expdp in Oracle 10g Express Edition. It was working perfectly until the DB size reached 2.1 GB, when I got the following error message:
---------------- Start of error message ----------------
Connected to: Oracle Database 10g Express Edition Release 10.2.0.1.0 - Production
Starting "SERVICE_2_8"."SYS_EXPORT_SCHEMA_05": service_2_8/******** LOGFILE=3_export.log DIRECTORY=db_pump DUMPFILE=service_2_8.dmp CONTENT=all
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
ORA-39125: Worker unexpected fatal error in KUPW$WORKER.GET_TABLE_DATA_OBJECTS while calling DBMS_METADATA.FETCH_XML_CLOB []
ORA-01116: error in opening database file 5
ORA-01110: data file 5: '/usr/lib/oracle/xe/oradata/service_3_0.dbf'
ORA-27041: unable to open file
Linux Error: 13: Permission denied
Additional information: 3
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPW$WORKER", line 6235
----- PL/SQL Call Stack -----
object line object
handle number name
0x3b3ce18c 14916 package body SYS.KUPW$WORKER
0x3b3ce18c 6300 package body SYS.KUPW$WORKER
0x3b3ce18c 9120 package body SYS.KUPW$WORKER
0x3b3ce18c 1880 package body SYS.KUPW$WORKER
0x3b3ce18c 6861 package body SYS.KUPW$WORKER
0x3b3ce18c 1262 package body SYS.KUPW$WORKER
0x3b0f9758 2 anonymous block
Job "SERVICE_2_8"."SYS_EXPORT_SCHEMA_05" stopped due to fatal error at 03:04:34
---------------- End of error message ----------------
SELinux was disabled completely, and I have set permissions of 0777 on the appropriate datafile.
Still, it is not working.
Can you please tell me how to solve this problem, or do you have any ideas or suggestions?
Hello rgeier,
I cannot access the tablespace service_3_0 (2.1 GB) through a PHP web application or through SQL*Plus. I can access the small tablespace service_2_8 through the web application or through SQL*Plus. When I tried to access service_3_0 through SQL*Plus, the following error message was returned:
---------------- Start of error message ----------------
ERROR at line 1:
ORA-01116: error in opening database file 5
ORA-01110: data file 5: '/usr/lib/oracle/xe/oradata/service_3_0.dbf'
ORA-27041: unable to open file
Linux Error: 13: Permission denied
Additional information: 3
---------------- End of error message ----------------
The following are the last set of entries in the alert_XE.log file in the bdump folder:
---------------- Start of alert log ----------------
db_recovery_file_dest_size of 40960 MB is 9.96% used. This is a
user-specified limit on the amount of space that will be used by this
database for recovery-related files, and does not reflect the amount of
space available in the underlying filesystem or ASM diskgroup.
Wed Aug 20 05:13:59 2008
Completed: alter database open
Wed Aug 20 05:19:58 2008
Shutting down archive processes
Wed Aug 20 05:20:03 2008
ARCH shutting down
ARC2: Archival stopped
The value (30) of MAXTRANS parameter ignored.
kupprdp: master process DM00 started with pid=27, OS id=7463 to execute - SYS.KUPM$MCP.MAIN('SYS_EXPORT_SCHEMA_06', 'SERVICE_2_8', 'KUPC$C_1_20080820054031', 'KUPC$S_1_20080820054031', 0);
kupprdp: worker process DW01 started with worker id=1, pid=28, OS id=7466 to execute - SYS.KUPW$WORKER.MAIN('SYS_EXPORT_SCHEMA_06', 'SERVICE_2_8');
Wed Aug 20 05:40:48 2008
The value (30) of MAXTRANS parameter ignored.
The value (30) of MAXTRANS parameter ignored.
The value (30) of MAXTRANS parameter ignored.
The value (30) of MAXTRANS parameter ignored.
The value (30) of MAXTRANS parameter ignored.
---------------- End of alert log ----------------