Log file utilization problem
I've encountered a problem with log file utilization during a somewhat long transaction in which data is inserted into a StoredMap.
I've set the minUtilization property to 75%. During insertion things seem to go smoothly, but at some point log files are created far more rapidly than the amount of data would call for. The test inserts 750K entries for a total of about 9 MB, yet the total size of the log files is 359 MB. DbSpace shows that the first few log files use approximately 65% of their space, but most use only 1-2%.
I understand that during a transaction the cleaner may not clean the log files involved. What I don't understand is why most of the log files are only 1-2% utilized:
File Size (KB) % Used
00000000 9763 56
00000001 9764 68
00000002 9765 68
00000003 9765 69
00000004 9765 69
00000005 9765 69
00000006 9765 68
00000007 9765 70
00000008 9764 68
00000009 9765 61
0000000a 9763 61
0000000b 9764 25
0000000c 9763 2
0000000d 9763 1
0000000e 9763 2
0000000f 9763 1
00000010 9764 2
00000011 9764 1
00000012 9764 2
00000013 9764 1
00000014 9764 2
00000015 9763 1
00000016 9763 2
00000017 9763 1
00000018 9763 2
00000019 9763 1
0000001a 9765 2
0000001b 9765 1
0000001c 9765 2
0000001d 9763 1
0000001e 9765 2
0000001f 9765 1
00000020 9764 2
00000021 9765 1
00000022 9765 2
00000023 9765 1
00000024 9763 2
00000025 7028 2
TOTALS 368319 21
I've created a test class that reproduces the problem. It might be possible to simplify it further, but I haven't had time to work on it too much.
Executing this test with 500K values does not reproduce the problem. Can someone please help me shed some light on this issue?
I'm using JE 3.2.13 and the following properties file:
je.env.isTransactional=true
je.env.isLocking=true
je.env.isReadOnly=false
je.env.recovery=true
je.log.fileMax=10000000
je.cleaner.minUtilization=75
je.cleaner.lookAheadCacheSize=262144
je.cleaner.readSize=1048576
je.maxMemory=104857600
Test Class
import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.util.Properties;
import com.sleepycat.bind.EntityBinding;
import com.sleepycat.bind.EntryBinding;
import com.sleepycat.bind.tuple.StringBinding;
import com.sleepycat.bind.tuple.TupleBinding;
import com.sleepycat.collections.CurrentTransaction;
import com.sleepycat.collections.StoredMap;
import com.sleepycat.je.Database;
import com.sleepycat.je.DatabaseConfig;
import com.sleepycat.je.DatabaseEntry;
import com.sleepycat.je.DatabaseException;
import com.sleepycat.je.Environment;
import com.sleepycat.je.EnvironmentConfig;
public class LogFileTest3 {

    private long totalSize = 0;
    private Environment env;
    private Database myDb;
    private StoredMap storedMap_ = null;

    public LogFileTest3() throws DatabaseException, FileNotFoundException, IOException {
        Properties props = new Properties();
        props.load(new FileInputStream("test3.properties"));
        EnvironmentConfig envConfig = new EnvironmentConfig(props);
        envConfig.setAllowCreate(true);
        File envDir = new File("test3");
        if (!envDir.exists()) {
            envDir.mkdir();
        }
        env = new Environment(envDir, envConfig);
        DatabaseConfig dbConfig = new DatabaseConfig();
        dbConfig.setAllowCreate(true);
        dbConfig.setTransactional(true);
        dbConfig.setSortedDuplicates(false);
        myDb = env.openDatabase(null, "testing", dbConfig);
        EntryBinding keyBinding = TupleBinding.getPrimitiveBinding(String.class);
        EntityBinding valueBinding = new TestValueBinding();
        storedMap_ = new StoredMap(myDb, keyBinding, valueBinding, true);
    }

    public void cleanup() throws Exception {
        myDb.close();
        env.close();
    }

    private void insertValues(int count) throws DatabaseException {
        CurrentTransaction ct = CurrentTransaction.getInstance(this.env);
        try {
            ct.beginTransaction(null);
            int i = 0;
            while (i < count) {
                TestValue tv = createTestValue(i++);
                storedMap_.put(tv.key, tv);
            }
            System.out.println("Written " + i + " values for a total of " + totalSize + " bytes");
            ct.commitTransaction();
        } catch (Throwable t) {
            System.out.println("Exception " + t);
            t.printStackTrace();
            ct.abortTransaction();
        }
    }

    private TestValue createTestValue(int i) {
        TestValue t = new TestValue();
        t.key = "key_" + i;
        t.value = "value_" + i;
        return t;
    }

    public static void main(String[] args) throws Exception {
        LogFileTest3 test = new LogFileTest3();
        if (args[0].equalsIgnoreCase("clean")) {
            while (test.env.cleanLog() != 0);
        } else {
            test.insertValues(Integer.parseInt(args[0]));
        }
        test.cleanup();
    }

    static private class TestValue {
        String key = null;
        String value = null;
    }

    private class TestValueBinding implements EntityBinding {

        public Object entryToObject(DatabaseEntry key, DatabaseEntry entry) {
            TestValue t = new TestValue();
            t.key = StringBinding.entryToString(key);
            t.value = StringBinding.entryToString(entry); // was entryToString(key), a bug
            return t;
        }

        public void objectToData(Object o, DatabaseEntry entry) {
            TestValue t = (TestValue) o;
            StringBinding.stringToEntry(t.value, entry);
            totalSize += entry.getSize();
        }

        public void objectToKey(Object o, DatabaseEntry entry) {
            TestValue t = (TestValue) o;
            StringBinding.stringToEntry(t.key, entry);
        }
    }
}
Yup, that solves the issue. By doubling the je.maxMemory property, I've made the problem disappear.

Good!
How large is the lock on a 64-bit architecture?

Here's the complete picture for read and write locks. Read locks are taken on get() calls without LockMode.RMW, and write locks are taken on get() calls with RMW and on all put() and delete() calls.
Arch Read Lock Write Lock
32b 96B 128B
64b 176B 216B
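Those per-lock sizes explain the original failure mode: with 750K write locks held by one transaction, lock memory alone exceeds the configured 100 MB cache. A back-of-the-envelope check (the figures come from the table and the properties file above; treating locks as the only per-record memory cost is a simplifying assumption):

```java
public class LockMemoryEstimate {
    public static void main(String[] args) {
        long records = 750_000;          // entries inserted in the single transaction
        long writeLockBytes = 216;       // 64-bit write lock size from the table above
        long lockMemory = records * writeLockBytes;
        long jeMaxMemory = 104_857_600;  // je.maxMemory from the properties file
        System.out.println("Locks alone need ~" + lockMemory / (1024 * 1024) + " MB");
        System.out.println("Cache budget is " + jeMaxMemory / (1024 * 1024) + " MB");
        // 750,000 x 216 bytes = ~154 MB of locks, well above the 100 MB cache,
        // which forces eviction and the rapid log growth described above.
    }
}
```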
I'm setting the je.maxMemory property because I'm dealing with many small JE environments in a single VM. I don't want each opened environment to use 90% of the JVM RAM...

OK, I understand.
I've noticed that the je.maxMemory property is mutable at runtime. Would setting a large value before long transactions (and resetting it after) be a feasible solution to my problem? Do you see any potential issues with doing this?

We made the cache size mutable for just this sort of use case, so this is probably worth trying. Of course, to avoid an OutOfMemoryError you'll have to reduce the cache size of other environments if you don't have enough unused space in the heap.
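A minimal sketch of that grow-then-restore approach, assuming an already-open JE Environment (the helper method and Runnable wrapper are illustrative, not JE API; setCacheSize/setMutableConfig are the JE calls):

```java
import com.sleepycat.je.DatabaseException;
import com.sleepycat.je.Environment;
import com.sleepycat.je.EnvironmentMutableConfig;

public class CacheResizer {
    // Temporarily enlarge the JE cache around a long transaction, then restore it
    // so the other environments in the VM get their memory back.
    static void withLargeCache(Environment env, long tempBytes, Runnable work)
            throws DatabaseException {
        EnvironmentMutableConfig mc = env.getMutableConfig();
        long original = mc.getCacheSize();
        mc.setCacheSize(tempBytes);      // grow before the long transaction
        env.setMutableConfig(mc);
        try {
            work.run();                  // the long transaction
        } finally {
            mc.setCacheSize(original);   // restore afterwards, even on failure
            env.setMutableConfig(mc);
        }
    }
}
```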
Is there a way for me to have JE lock multiple records at the same time? I mean, have it create a lock for an insert batch instead of for every item in the batch...

Not currently. But speaking of possible future changes, there are two things that may be of interest to you:
1) For large transaction support we have discussed the idea of providing a new API that locks an entire Database. While a Database is locked by a single transaction, no individual record locks would be needed. However, all other transactions would be blocked from using the Database. More specifically, a Database read lock would block other transactions from writing and a Database write lock would block all access by other transactions. This is the equivalent of "table locking" in relational DBs. This is not currently high on our priority list, but we are gathering input on this issue. We are interested in whether or not a whole Database lock would work for you -- would it?
2) We see more and more users like yourself that open multiple environments in a single JVM. Although the cache size is mutable, this puts the burden of efficient memory management onto the application. To solve this problem, we intend to add the option of a shared JE cache for all environments in a JVM process. The entire cache would be managed by an LRU algorithm, so if one environment needs more memory than another, the cache dynamically adjusts. This is high on our priority list, although per Oracle policy I can't say anything about when it will be available.
Besides increasing je.maxMemory, do you see any other solution to my problem?

Use smaller transactions. ;-) Seriously, if you have not already ruled this out, you may want to consider whether you really need an atomic transaction. We also support non-transactional access and even a non-locking mode for off-line bulk loads.
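If strict atomicity can be relaxed, the single huge transaction in the test class could be split into smaller commits so locks are released along the way. A hedged sketch reusing the LogFileTest3 fields above (the 10,000 batch size is an illustrative guess, not a tuned value):

```java
// Commit every batchSize puts; each commit releases that batch's write locks,
// keeping lock memory bounded instead of accumulating 750K locks at once.
private void insertValuesBatched(int count, int batchSize) throws DatabaseException {
    CurrentTransaction ct = CurrentTransaction.getInstance(this.env);
    int i = 0;
    while (i < count) {
        ct.beginTransaction(null);
        try {
            int end = Math.min(i + batchSize, count);
            while (i < end) {
                TestValue tv = createTestValue(i++);
                storedMap_.put(tv.key, tv);
            }
            ct.commitTransaction();
        } catch (DatabaseException e) {
            ct.abortTransaction();   // only this batch is lost
            throw e;
        }
    }
}
```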
Thanks a bunch for your help!

You're welcome!

Mark
Similar Messages
-
Hi,
I'm evaluating Berkeley DB Java Edition for my application, and I have the following code:
import java.io.File;
import java.nio.ByteBuffer;
import com.sleepycat.je.Database;
import com.sleepycat.je.DatabaseConfig;
import com.sleepycat.je.DatabaseEntry;
import com.sleepycat.je.DatabaseException;
import com.sleepycat.je.Environment;
import com.sleepycat.je.EnvironmentConfig;
import com.sleepycat.je.OperationStatus;

public class JETest {

    private Environment env;
    private Database myDb;

    public JETest() throws DatabaseException {
        EnvironmentConfig envConfig = new EnvironmentConfig();
        envConfig.setAllowCreate(true);
        envConfig.setTransactional(false);
        envConfig.setCachePercent(1);
        env = new Environment(new File("/tmp/test2"), envConfig);
        DatabaseConfig dbConfig = new DatabaseConfig();
        dbConfig.setDeferredWrite(true);
        dbConfig.setAllowCreate(true);
        dbConfig.setTransactional(false);
        myDb = env.openDatabase(null, "testing", dbConfig);
    }

    public void cleanup() throws Exception {
        myDb.close();
        env.close();
    }

    private void insertDelete() throws DatabaseException {
        int keyGen = Integer.MIN_VALUE;
        byte[] key = new byte[4];
        byte[] data = new byte[1024];
        ByteBuffer buff = ByteBuffer.wrap(key);
        for (int i = 0; i < 20000; i++) {
            buff.rewind();
            buff.putInt(keyGen++);
            myDb.put(null, new DatabaseEntry(key), new DatabaseEntry(data));
        }
        System.out.println("done inserting");
        int count = 0;
        keyGen = Integer.MIN_VALUE;
        OperationStatus status;
        for (int i = 0; i < 20000; i++) {
            buff.rewind();
            buff.putInt(keyGen++);
            count++;
            status = myDb.delete(null, new DatabaseEntry(key));
            if (status != OperationStatus.SUCCESS) {
                System.out.println("Delete failed.");
            }
        }
        System.out.println("called delete " + count + " times");
        env.sync();
    }

    public static void main(String[] args) throws Exception {
        JETest test = new JETest();
        test.insertDelete();
        test.cleanup();
    }
}
After running the above, I expected the log file utilization to be 0%, because I delete each and every record in the database. The status returned by the delete() method was OperationStatus.SUCCESS for all invocations.
I ran the DbSpace utility, and this is what I found:
$ java -cp je-3.2.13/lib/je-3.2.13.jar com.sleepycat.je.util.DbSpace -h /tmp/test2 -d
File Size (KB) % Used
00000000 9765 99
00000001 3236 99
TOTALS 13001 99
Obviously, the cleaner thread won't clean log files that are 99% used.
I did expect the logs to be completely empty, though. What is going on here?
Thanks,
Lior

Lior,
With the default heap size (64m) I was able to reproduce the problem you're seeing. I think I understand what's happening.
First see this note in the javadoc for setDeferredWrite:
http://www.oracle.com/technology/documentation/berkeley-db/je/java/com/sleepycat/je/DatabaseConfig.html#setDeferredWrite(boolean)
Note that although deferred write databases can page to disk if the cache is not large enough to hold the databases, they are much more efficient if the database remains in memory. See the JE FAQ on the Oracle Technology Network site for information on how to estimate the cache size needed by a given database. In the current implementation, a deferred write database which is synced or pages to the disk may also add additional log cleaner overhead to the environment. See the je.deferredWrite.temp property in <JEHOME>/example.properties for tuning information on how to ameliorate this cost.
The statement above about "additional log cleaner overhead" is not quite accurate. Specifically, we do not currently keep track of obsolete information about DW (Deferred Write) databases. We are considering improvements in this area.
Your test has brought out this issue to an extreme degree because of the very small cache size, and the fact that you delete all records. Most usage of DW that we see involves a bulk data load (no deletes), or deletes performed in memory so the record never goes to disk at all. With the tiny cache in your test, the records do go to disk before they are deleted.
You can avoid this situation in a couple of different ways:
1) Use a larger cache. If you insert and delete before calling sync, the inserted records will never be written to disk.
2) If this is a temporary database (no durability is required), then you can set the je.deferredWrite.temp configuration parameter mentioned above. Setting this to true enables accurate utilization tracking for a DW database, for the case where durability is not required.
# If true, assume that deferred write database will never be
# used after an environment is closed. This permits a more efficient
# form of logging of deferred write objects that overflow to disk
# through cache eviction or Database.sync() and reduces log cleaner
# overhead.
# je.deferredWrite.temp=false
# (mutable at run time: false)
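For reference, a hedged sketch of how option 2 might be set programmatically rather than through the properties file (this assumes EnvironmentConfig.setConfigParam accepts je.deferredWrite.temp in JE 3.2; verify against example.properties for your release):

```java
import com.sleepycat.je.EnvironmentConfig;

public class TempDwConfig {
    static EnvironmentConfig make() {
        EnvironmentConfig envConfig = new EnvironmentConfig();
        envConfig.setAllowCreate(true);
        envConfig.setTransactional(false);
        // Option 2: declare the deferred-write database temporary, which
        // enables accurate utilization tracking when durability is not needed.
        envConfig.setConfigParam("je.deferredWrite.temp", "true");
        return envConfig;
    }
}
```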
Will either of these options work for your application?
Mark -
I have written an application that has a Primary and Secondary database. The application creates tens-of-thousands of records in the Primary database, with a 1-to-1 relationship in the Secondary. On subsequent runs it will either update existing Primary records (which should not update the secondary as that element does not change) or it will create new records.
The application actually works correctly, with the right data, the right updates and the right logical processing. The problem is the log files.
The input data I am testing with is originally 2Mb as a CSV file, and with a fresh database it creates almost 20Mb of data. This is about right for the way it splits the information up and indexes it. If I run the application again with exactly the same data, it should just update all the entries and create nothing new. My understanding is that the updated records will be written to the end of the logs, the old ones in the earlier logs will become redundant, and the cleaner thread will clean them up. I am explicitly cleaning as per the examples. The issue is that on each run the data just doubles in size! Logically it is fine; physically it is taking a ridiculous amount of space. Running DbSpace shows that the logs are mostly full (over 90%) where I would expect most to be empty or sparsely occupied as the new updates are written to new files. cleanLog() does nothing. I am at a total loss!
Generally the processing I am doing on the primary is looking up the key, if it is there updating the entry, if not creating one. I have been using a cursor to do this, and using the putCurrent() method for existing updates, and put() for new records. I have even tried using Database.delete() and the full put() in place of putCurrent() - but no difference (except it is slower).
Please help - it is driving me nuts!

Let me provide a little more context for the questions I was asking. If this doesn't lead us further into understanding your log situation, perhaps we should take this offline. When log cleaning doesn't occur, the basic questions are:
a. is the application doing anything that prohibits log cleaning? (in your case, no)
b. has the utilization level fallen to the point where log cleaning should occur? (not on the second run, but it should on following runs)
c. does the log utilization level match what the application expects? (no, it doesn't match what you expect).
1) Ran DbDump with and without -r. I am expecting the data to stay consistent. So, after the first run it creates the data and leaves 20mb in place, 3 log files near 100% used. After the second run it should update the records (which it does, from the application's point of view), but I now have 40mb across 5 log files, all near 100% usage.

I think it's accurate to say that neither of us is surprised that the second run (which updates data but does not change the number of records) creates a second 20MB of log, for a total of 40MB. What we do expect, though, is that the utilization reported by DbSpace should fall closer to 50%. Note that since JE's default minimum utilization level is 50%, we don't expect any automatic log cleaning even after the second run.
Here's the sort of behavior we'd expect from JE if all the basics are taken care of (there are enough log files, there are no open txns, the application stays up long enough for the daemon to run, or the application does batch cleanLog calls itself, etc).
run 1 - creates 20MB of log file, near 100% utilization, no log cleaning
run 2 - updates every record, creates another 20MB of log file, utilization falls, maybe to around 60%. No log cleaning yet, because the utilization is still above the 50% threshold.
run 3 - updates every record, creates another 20MB of log file, utilization falls below 50%, log cleaning starts running, either in the background by the daemon thread, or because the app calls Environment.cleanLog(), without any need to set je.cleaner.forceCleanFiles.
So the question here is (c) from above -- you're saying that your DbSpace utilization level doesn't match what you believe your application is doing. There are three possible answers -- your application has a bug :-), or with secondaries and whatnot, JE is representing your data in a fashion you didn't expect, or JE's disk space utilization calculation is inaccurate.
I suggested using DbDump -r as a first sanity check of what data your application holds. It will dump all the valid records in the environment (with -r, not in key order; without -r it is slower, but dumps in key order). Keys and data show up on different lines, so the number of lines in the dump files should be twice the number of records in the environment. You've done this already in your application, but this is an independent way of checking. It also makes it easier to see what portion of the data is in primary versus secondary databases, because the data is dumped into per-database files. You could also load the data into a new, blank environment to look at it.
I asked you about the size of your records because a customer recently reported a JE disk utilization bug, which we are currently working on. It turns out that if your data records are very different in size (in this case, 4 orders of magnitude) and consistently only the larger or the smaller records are made obsolete, the utilization number gets out of whack. It doesn't really sound like your situation, because you're updating all your records, and they don't sound like they're that different in size. But nevertheless, here's a way of looking at what JE thinks your record sizes are. Run this command:
java -jar je.jar DbPrintLog -h <envhome> -S
and you'll see some output that talks about different types of log entries, and their sizes. Look at the lines that say LN and LN_TX at the top. These are data records. Do they match the sizes you expect? These lines do include JE's per-record headers. How large that is depends on whether your data is transactional or not. Non-transactional data records have a header of about 35 bytes, whereas transactional data records have 60 bytes added to them. If your data is small, that can be quite a large percentage. This is quite a lot more than for BDB (Core), partly because BDB (Core) doesn't have record level locking, and partly because we store a number of internal fields as 64 bit rather than 16 or 32 bit values.
The line that's labelled "key/data" shows what portion JE thinks is the application's data. Note that DbPrintLog, unlike DbSpace, doesn't account for obsoleteness, so while you'll see a more detailed picture of what the records look like in the log, you may see more records than you expect.
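To make the header overhead concrete, here is a rough worked example (the 10-byte key and data sizes are made-up illustration values; the ~60-byte transactional header is the figure quoted above):

```java
public class RecordOverhead {
    public static void main(String[] args) {
        int header = 60;   // approx. per-record header for transactional records
        int key = 10;      // hypothetical application key size in bytes
        int data = 10;     // hypothetical application data size in bytes
        int total = header + key + data;
        System.out.println("Logged size: " + total + " bytes");
        System.out.println("Header overhead: " + (100 * header / total) + "%");
        // For small records like these, the header is 75% of the logged bytes,
        // which is why tiny records look much larger in DbPrintLog output.
    }
}
```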
A last step we can take is to send you a development version of DbSpace that has a new feature to recalculate the utilization level. It runs more slowly than the vanilla DbSpace, but is a way of double checking the utilization level.
In my first response, I suggested trying je.cleaner.forceCleanFiles just to make it clear that the cleaner will run, and to see if the problem is really around the question of what the utilization level should be. Setting that property lets the cleaner bypass the utilization trigger. If using it really reduced the size of your logs, it reinforces that your idea of what your application is doing is correct, and casts suspicion on the utilization calculation.
So in summary, let's try these steps
- use DbDump and DbPrintLog to double check the amount and size of your application data
- make a table of runs, that shows the log size in bytes, number of log files, and the utilization level reported by DbSpace
- run a je.cleaner.forceCleanFiles cleanLog loop on one of the logs that seems to have a high utilization level, and see how much it reduces to, and what the resulting utilization level is
If it all points to JE, we'll probably take it offline, and ask for your test case.
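The forceCleanFiles check might be sketched as follows (a sketch, not a definitive recipe: the value format for je.cleaner.forceCleanFiles, shown here as file number "0", should be verified against your release's example.properties):

```java
import java.io.File;
import com.sleepycat.je.Environment;
import com.sleepycat.je.EnvironmentConfig;

public class ForceCleanCheck {
    public static void main(String[] args) throws Exception {
        EnvironmentConfig envConfig = new EnvironmentConfig();
        // Bypass the utilization trigger for a specific log file.
        envConfig.setConfigParam("je.cleaner.forceCleanFiles", "0");
        Environment env = new Environment(new File(args[0]), envConfig);
        int cleaned;
        while ((cleaned = env.cleanLog()) != 0) {  // batch-clean until done
            System.out.println("cleaned " + cleaned + " file(s)");
        }
        env.close();  // the checkpoint at close lets cleaned files be deleted
    }
}
```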
Regards,
Linda -
APPS 11i: Excessive JVM logging file DISCOGROUP.0.STDOUT is generated
Product: AOL
Date written: 2005-10-17
APPS 11i: Excessive JVM logging file DISCOGROUP.0.STDOUT is generated
======================================================
PURPOSE
Provides a solution for cases where log files are generated excessively in the $IAS_ORACLE_HOME/Apache/Jserv/logs/jvm directory on the Unix server.
Problem Description
The size of the DISCOGROUP.0.STDOUT file in the $IAS_ORACLE_HOME/Apache/Jserv/logs/jvm directory gradually grows excessively and consumes resources.
Workaround
N/A
Solution Description
This is caused by Bug:3132644 and should be addressed as follows:
1. Please apply the latest released TXK Autoconfig rollup patch.
-- As of October 2005, the latest TXK Autoconfig rollup patch is 4104924.
2. Verify that the variable s_jserv_std_log is disabled.
-- This variable can be found in $APPL_TOP/admin/<SID>_<Hostname>.xml (Applications Context File).
Reference Documents
Note. 275567.1 - Excessive JVM Logging Generated In DiscoGroup.0.stdout With Applications 11i

Hi;
Please shut down all services first; after some time passes, say 20 minutes, check whether the apps services are down or not. Please also check alert.log for any error messages.
Also see below links
runaway processes
Re: runaway processes
script issue_urgent
If possible, please restart the server.
Regards,
Helios -
Log File Error when open MDS_Log from MDM Console
Hi Experts,
I am facing a problem when I tried to open MDS_Log file from MDM Console. The error message was showing as below:
Log File Error
Problems in the log file prevent Console from displaying its contents. To view log contents, open the log file in a separate text editor.
Could any of you advise me how to open the log in a separate text editor from the MDM Console?
Thanks very much in advance.
Regards,
Wei Dona

Hi,
Just right-click on the record pane of the log node and select Save. It will save to a user-defined location in .csv format.
Please try at your end.
Thanks,
Sudhanshu -
About unable to initialize log file!
Hi,
I installed the Sun Java Communications Suite 5 on a Linux host.
When I restart web server, I got the following error, would you please give me a hand ?
Thank you in advance.
# /var/opt/sun/webserver7/https-comms.swabplus.com/bin/stopserv
server has been shutdown
# /var/opt/sun/webserver7/https-comms.swabplus.com/bin/startserv
Sun Java System Web Server 7.0 B12/04/2006 08:17
warning: CORE3283: stderr: Java HotSpot(TM) Server VM warning: Can't detect initial thread stack location - find_vma failed
info: CORE5076: Using [Java HotSpot(TM) Server VM, Version 1.5.0_09] from [Sun Microsystems Inc.]
info: WEB0100: Loading web module in virtual server [com5.my.com] at [amserver]
warning: WEB6100: locale-charset-info is deprecated, please use parameter-encoding
info: WEB0100: Loading web module in virtual server [com5.my.com] at [ampassword]
warning: WEB6100: locale-charset-info is deprecated, please use parameter-encoding
info: WEB0100: Loading web module in virtual server [com5.my.com] at [amcommon]
warning: WEB6100: locale-charset-info is deprecated, please use parameter-encoding
info: WEB0100: Loading web module in virtual server [com5.my.com] at [amconsole]
warning: WEB6100: locale-charset-info is deprecated, please use parameter-encoding
info: WEB0100: Loading web module in virtual server [com5.my.com] at [da]
info: WEB0100: Loading web module in virtual server [com5.my.com] at [commcli]
info: url: jar:file:/opt/sun/mfwk/share/lib/mfwk_instrum_tk.jar!/com/sun/mfwk/config/MfConfig.class
info: url: jar:file:/opt/sun/mfwk/share/lib/mfwk_instrum_tk.jar!/com/sun/mfwk/config/MfConfig.class
info: LogFile is: //var/opt/sun/mfwk/logs/instrum.%g
warning: Warning: unable to initialize log file!
warning: Couldn't get lock for //var/opt/sun/mfwk/logs/instrum.%g
info: group = 227.227.227.1, port = 54320
info: Set Time-to-live to 0
info: join Group /227.227.227.1
info: Starting listening thread
info: sends initial RESP message in SDK
info: HTTP3072: http-listener-1: http://com5.my.com:80 ready to accept requests
info: CORE3274: successful server startup

This happens when you install the web server as "non-root" and start the instance as "root".
If you avoid this, you can get around the
warning: CORE3283: stderr: Java HotSpot(TM) Server VM warning: Can't detect initial thread stack location - find_vma failed
The cause of the log file initialization problem appears to be a permissions problem.
Can you get back once you have taken care of what is suggested above? -
Problem with logging in log files.
HI,
Our's is a client/server application.
In our application there are so many clients.
They each have separate page(web page).
When a client download any file from their site it logs(client name, time, file name etc.) into a common log file(access.log) and a client specific log file(access.client_name).
Logging in the common file (access.log) is performing well.
Logging in the client-specific log file works for some clients and doesn't work for others.
For example, there is a client called 'candy'.
Sometimes it logs in the log file access.candy,
and sometimes it doesn't.
Tell me what the problem is.
If you want more information regarding my problem, I will send.
Please give me some solution.
Thank you.

Third Party Client: ((__null) != m_lock && 0 == (*__error())) Can't create semaphore lock
There seems to be something wrong with the handling (m_lock = method of lock) of the semaphore mechanism -- it appears to be a program issue, either on your local machine or in a remote Web site's page(s).
'semaphore' Apple definition (quotation from ADC) :
A programming technique for coordinating activities in which multiple processes compete for the same kernel resources. Semaphores are commonly used to share a common memory space and to share access to files. Semaphores are one of the techniques for interprocess communication in BSD.
In short, it is a flag to terminate a task/thread efficiently, without fail, before another task/thread starts -- a synchronization mechanism among cooperating threads/tasks. (You might need some understanding of the basic concepts of locks and semaphores.)
I would temporarily uninstall any suspect applications to see if the erratic events are still displayed on Console. Perhaps the vlc player?
Fumiaki
Tokyo -
I have one problem with Data Guard. My archive log files are not applied.
I have one problem with Data Guard: my archive log files are not applied, even though I have received all the archive log files on my physical standby db.
I have created a Physical Standby database on Oracle 10gR2 (Windows XP professional). Primary database is on another computer.
In Enterprise Manager on the primary database it looks OK; I get the message "Data Guard status: Normal".
But as I wrote above the archive log files are not applied
After I created the Physical Standby database, I have also done:
1. I connected to the Physical Standby database instance.
CONNECT SYS/SYS@luda AS SYSDBA
2. I started the Oracle instance at the Physical Standby database without mounting the database.
STARTUP NOMOUNT PFILE=C:\oracle\product\10.2.0\db_1\database\initluda.ora
3. I mounted the Physical Standby database:
ALTER DATABASE MOUNT STANDBY DATABASE
4. I started redo apply on Physical Standby database
alter database recover managed standby database disconnect from session
5. I switched the log files on Physical Standby database
alter system switch logfile
6. I verified the redo data was received and archived on Physical Standby database
select sequence#, first_time, next_time from v$archived_log order by sequence#
SEQUENCE# FIRST_TIME NEXT_TIME
3 2006-06-27 2006-06-27
4 2006-06-27 2006-06-27
5 2006-06-27 2006-06-27
6 2006-06-27 2006-06-27
7 2006-06-27 2006-06-27
8 2006-06-27 2006-06-27
7. I verified the archived redo log files were applied on Physical Standby database
select sequence#,applied from v$archived_log;
SEQUENCE# APP
4 NO
3 NO
5 NO
6 NO
7 NO
8 NO
8. on Physical Standby database
select * from v$archive_gap;
No rows
9. on Physical Standby database
SELECT MESSAGE FROM V$DATAGUARD_STATUS;
MESSAGE
ARC0: Archival started
ARC1: Archival started
ARC2: Archival started
ARC3: Archival started
ARC4: Archival started
ARC5: Archival started
ARC6: Archival started
ARC7: Archival started
ARC8: Archival started
ARC9: Archival started
ARCa: Archival started
ARCb: Archival started
ARCc: Archival started
ARCd: Archival started
ARCe: Archival started
ARCf: Archival started
ARCg: Archival started
ARCh: Archival started
ARCi: Archival started
ARCj: Archival started
ARCk: Archival started
ARCl: Archival started
ARCm: Archival started
ARCn: Archival started
ARCo: Archival started
ARCp: Archival started
ARCq: Archival started
ARCr: Archival started
ARCs: Archival started
ARCt: Archival started
ARC0: Becoming the 'no FAL' ARCH
ARC0: Becoming the 'no SRL' ARCH
ARC1: Becoming the heartbeat ARCH
Attempt to start background Managed Standby Recovery process
MRP0: Background Managed Standby Recovery process started
Managed Standby Recovery not using Real Time Apply
MRP0: Background Media Recovery terminated with error 1110
MRP0: Background Media Recovery process shutdown
Redo Shipping Client Connected as PUBLIC
-- Connected User is Valid
RFS[1]: Assigned to RFS process 2148
RFS[1]: Identified database type as 'physical standby'
Redo Shipping Client Connected as PUBLIC
-- Connected User is Valid
RFS[2]: Assigned to RFS process 2384
RFS[2]: Identified database type as 'physical standby'
Redo Shipping Client Connected as PUBLIC
-- Connected User is Valid
RFS[3]: Assigned to RFS process 3188
RFS[3]: Identified database type as 'physical standby'
Primary database is in MAXIMUM PERFORMANCE mode
Primary database is in MAXIMUM PERFORMANCE mode
RFS[3]: No standby redo logfiles created
Redo Shipping Client Connected as PUBLIC
-- Connected User is Valid
RFS[4]: Assigned to RFS process 3168
RFS[4]: Identified database type as 'physical standby'
RFS[4]: No standby redo logfiles created
Primary database is in MAXIMUM PERFORMANCE mode
RFS[3]: No standby redo logfiles created
10. on Physical Standby database
SELECT PROCESS, STATUS, THREAD#, SEQUENCE#, BLOCK#, BLOCKS FROM V$MANAGED_STANDBY;
PROCESS STATUS THREAD# SEQUENCE# BLOCK# BLOCKS
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
ARCH CONNECTED 0 0 0 0
RFS IDLE 0 0 0 0
RFS IDLE 0 0 0 0
RFS IDLE 1 9 13664 2
RFS IDLE 0 0 0 0
10) on Primary database:
select message from v$dataguard_status;
MESSAGE
ARC0: Archival started
ARC1: Archival started
ARC2: Archival started
ARC3: Archival started
ARC4: Archival started
ARC5: Archival started
ARC6: Archival started
ARC7: Archival started
ARC8: Archival started
ARC9: Archival started
ARCa: Archival started
ARCb: Archival started
ARCc: Archival started
ARCd: Archival started
ARCe: Archival started
ARCf: Archival started
ARCg: Archival started
ARCh: Archival started
ARCi: Archival started
ARCj: Archival started
ARCk: Archival started
ARCl: Archival started
ARCm: Archival started
ARCn: Archival started
ARCo: Archival started
ARCp: Archival started
ARCq: Archival started
ARCr: Archival started
ARCs: Archival started
ARCt: Archival started
ARCm: Becoming the 'no FAL' ARCH
ARCm: Becoming the 'no SRL' ARCH
ARCd: Becoming the heartbeat ARCH
Error 1034 received logging on to the standby
Error 1034 received logging on to the standby
LGWR: Error 1034 creating archivelog file 'luda'
LNS: Failed to archive log 3 thread 1 sequence 7 (1034)
FAL[server, ARCh]: Error 1034 creating remote archivelog file 'luda'
11) on primary db:
select name,sequence#,applied from v$archived_log;
NAME SEQUENCE# APP
C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00003_0594204176.001 3 NO
C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00004_0594204176.001 4 NO
Luda 4 NO
Luda 3 NO
C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00005_0594204176.001 5 NO
Luda 5 NO
C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00006_0594204176.001 6 NO
Luda 6 NO
C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00007_0594204176.001 7 NO
Luda 7 NO
C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00008_0594204176.001 8 NO
Luda 8 NO
12) on standby db
select name,sequence#,applied from v$archived_log;
NAME SEQUENCE# APP
C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00004_0594204176.001 4 NO
C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00003_0594204176.001 3 NO
C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00005_0594204176.001 5 NO
C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00006_0594204176.001 6 NO
C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00007_0594204176.001 7 NO
C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00008_0594204176.001 8 NO
13) my init.ora files
On standby db
irina.__db_cache_size=79691776
irina.__java_pool_size=4194304
irina.__large_pool_size=4194304
irina.__shared_pool_size=75497472
irina.__streams_pool_size=0
*.audit_file_dest='C:\oracle\product\10.2.0\admin\luda\adump'
*.background_dump_dest='C:\oracle\product\10.2.0\admin\luda\bdump'
*.compatible='10.2.0.1.0'
*.control_files='C:\oracle\product\10.2.0\oradata\luda\luda.ctl'
*.core_dump_dest='C:\oracle\product\10.2.0\admin\luda\cdump'
*.db_block_size=8192
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_file_name_convert='luda','irina'
*.db_name='irina'
*.db_unique_name='luda'
*.db_recovery_file_dest='C:\oracle\product\10.2.0\flash_recovery_area'
*.db_recovery_file_dest_size=2147483648
*.dispatchers='(PROTOCOL=TCP) (SERVICE=irinaXDB)'
*.fal_client='luda'
*.fal_server='irina'
*.job_queue_processes=10
*.log_archive_config='DG_CONFIG=(irina,luda)'
*.log_archive_dest_1='LOCATION=C:/oracle/product/10.2.0/oradata/luda/ VALID_FOR=(ALL_LOGFILES, ALL_ROLES) DB_UNIQUE_NAME=luda'
*.log_archive_dest_2='SERVICE=irina LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES, PRIMARY_ROLE) DB_UNIQUE_NAME=irina'
*.log_archive_dest_state_1='ENABLE'
*.log_archive_dest_state_2='ENABLE'
*.log_archive_max_processes=30
*.log_file_name_convert='C:/oracle/product/10.2.0/oradata/irina/','C:/oracle/product/10.2.0/oradata/luda/'
*.open_cursors=300
*.pga_aggregate_target=16777216
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=167772160
*.standby_file_management='AUTO'
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS1'
*.user_dump_dest='C:\oracle\product\10.2.0\admin\luda\udump'
On primary db
irina.__db_cache_size=79691776
irina.__java_pool_size=4194304
irina.__large_pool_size=4194304
irina.__shared_pool_size=75497472
irina.__streams_pool_size=0
*.audit_file_dest='C:\oracle\product\10.2.0/admin/irina/adump'
*.background_dump_dest='C:\oracle\product\10.2.0/admin/irina/bdump'
*.compatible='10.2.0.1.0'
*.control_files='C:\oracle\product\10.2.0\oradata\irina\control01.ctl','C:\oracle\product\10.2.0\oradata\irina\control02.ctl','C:\oracle\product\10.2.0\oradata\irina\control03.ctl'
*.core_dump_dest='C:\oracle\product\10.2.0/admin/irina/cdump'
*.db_block_size=8192
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_file_name_convert='luda','irina'
*.db_name='irina'
*.db_recovery_file_dest='C:\oracle\product\10.2.0/flash_recovery_area'
*.db_recovery_file_dest_size=2147483648
*.dispatchers='(PROTOCOL=TCP) (SERVICE=irinaXDB)'
*.fal_client='irina'
*.fal_server='luda'
*.job_queue_processes=10
*.log_archive_config='DG_CONFIG=(irina,luda)'
*.log_archive_dest_1='LOCATION=C:/oracle/product/10.2.0/oradata/irina/ VALID_FOR=(ALL_LOGFILES, ALL_ROLES) DB_UNIQUE_NAME=irina'
*.log_archive_dest_2='SERVICE=luda LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES, PRIMARY_ROLE) DB_UNIQUE_NAME=luda'
*.log_archive_dest_state_1='ENABLE'
*.log_archive_dest_state_2='ENABLE'
*.log_archive_max_processes=30
*.log_file_name_convert='C:/oracle/product/10.2.0/oradata/luda/','C:/oracle/product/10.2.0/oradata/irina/'
*.open_cursors=300
*.pga_aggregate_target=16777216
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=167772160
*.standby_file_management='AUTO'
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS1'
*.user_dump_dest='C:\oracle\product\10.2.0/admin/irina/udump'
Please help me!!!! -
Hi,
After several tries my redo logs are now being applied. I think in my case it had to do with the tnsnames.ora. At the moment I have both databases in both tnsnames.ora files using the SID and not the SERVICE_NAME.
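For reference, a minimal sketch of a SID-based tnsnames.ora entry of the kind described above (the host name and port here are placeholders, not taken from this setup):

```
# using (SID = luda) instead of (SERVICE_NAME = luda) is the change described above
LUDA =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = standby-host)(PORT = 1521))
    (CONNECT_DATA =
      (SID = luda)
    )
  )
```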
Now I want to use DGMGRL. Adding a configuration and a standby database works fine, but when I try to enable the configuration, DGMGRL gives no feedback and appears to hang. The log, however, says that it succeeded.
In another session 'show configuration' results in the following, confirming that the enable succeeded.
DGMGRL> show configuration
Configuration
Name: avhtest
Enabled: YES
Protection Mode: MaxPerformance
Fast-Start Failover: DISABLED
Databases:
avhtest - Primary database
avhtestls53 - Physical standby database
Current status for "avhtest":
Warning: ORA-16610: command 'ENABLE CONFIGURATION' in progress
Is there anybody who has experienced the same problem and/or knows the solution to this?
With kind regards,
Martin Schaap -
Problem about space management of archived log files
Dear friends,
I have a problem with space management of archived log files.
My database is Oracle 10g Release 1 running in archivelog mode. I use OEM (web-based) to configure all the backup and recovery settings.
I configured the Flash Recovery Area to do backup and recovery automatically. My daily backup schedule is every night at 2:00am, and my backup setting is "disk settings"--"compressed backup set". The following is the RMAN script:
Daily Script:
run {
allocate channel oem_disk_backup device type disk;
recover copy of database with tag 'ORA$OEM_LEVEL_0';
backup incremental level 1 cumulative copies=1 for recover of copy with tag 'ORA$OEM_LEVEL_0' database;
}
The retention policy is the second choice, that is, "Retain backups that are necessary for a recovery to any time within the specified number of days (point-in-time recovery)". The recovery window is 1 day.
I assigned enough space for the flash recovery area: my database size is about 2G, and I assigned 20G as the flash recovery area.
Now here is the problem. According to the Oracle online manual, Oracle can manage the flash recovery area automatically; that is, when the space is full it can delete obsolete archived log files. In practice it never works: whenever the space fills up, the database hangs. Besides, the status of the archived log files is very strange; for example, the "obsolete" status can change from "yes" to "no", and then from "no" back to "yes". I really have no idea about this! I know Oracle usually keeps archived files somewhat longer than the retention policy requires, but I don't understand why the obsolete status changes automatically. I could write a scheduled job to delete obsolete archived files every day, but I want to know the reason. My goal is to back up all the files on disk and let Oracle manage them automatically.
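As a stopgap for the scheduled cleanup mentioned above, a daily job can run the usual RMAN maintenance commands by hand. This is a generic sketch, not something taken from the thread's configuration:

```
crosscheck archivelog all;
delete noprompt expired archivelog all;
delete noprompt obsolete;
```

DELETE OBSOLETE applies the configured retention policy, so what it removes should match what the flash recovery area is expected to reclaim automatically.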
There is also another problem related to archive mode. I have two Oracle 10g (Release 1) databases: db1 is more than 20G, db2 is about 2G. Both have the same backup and recovery policy, except that I assign more flash recovery area to db1, and both are in archivelog mode. Almost nobody accesses either of them except the scheduled backup job and my occasional administration through OEM. The strange thing is that the smaller database, db2, produces far more archived log files than the bigger one, and the same goes for the size of the flashback logs for point-in-time recovery. (I enabled flashback logging for fast database point-in-time recovery; the flashback retention time is 24 hours.) I also found that the memory utilization of the smaller database is higher: it stays above 99% nearly all the time, while the bigger one stays around 97%. (I enabled "Automatic Shared Memory Management" on both databases.) CPU and queue load on both databases are very low, and I'm fairly sure nobody has hacked them. So I really have no idea why the same backup and recovery policy produces such different results, especially why the smaller database generates more redo than the bigger one. Does anyone happen to know the reason, or how I should go about diagnosing it?
By the way, I found that web-based OEM doesn't reflect the correct database status when the database shuts down abnormally. For example, if the database hangs because the flash recovery area is full, then after I assign more flash recovery area space and restart the database, OEM usually still shows the old status. I must restart OEM manually for it to reflect the current database status. Does anyone know in which situations I should restart OEM to get the correct database status?
Sorry for the long message; I just wanted to describe things in detail to ease diagnosis.
Any hint will be greatly appreciated!
Sammy -
Thank you very much. In fact, my site's Oracle never managed the archive files automatically, although I tried my best. In the end I made a daily job to check the archive files and delete them.
thanks again. -
Problem to send result from log file, the logfile is too large
Hi SCOM people!
I have a problem when monitoring a log file on a Red Hat system: I get an alert that tells me that the log file is too large to send (see the alert context below). I guess the problem is that the server logs too much between the 5-minute intervals at which SCOM checks.
Any ideas how to solve this?
Date and Time: 2014-07-24 19:50:24
Log Name: Operations Manager
Source: Cross Platform Modules
Event Number: 262
Level: 1
Logging Computer: XXXXX.samba.net
User: N/A
Description:
Error scanning logfile / xxxxxxxx / server.log on values xxxxx.xxxxx.se as user <SCXUser><UserId>xxxxxx</UserId><Elev></Elev></SCXUser>; The operation succeeded and cannot be reversed but the result is too large to send.
Event Data:
< DataItem type =" System.XmlData " time =" 2014-07-24T19:50:24.5250335+02:00 " sourceHealthServiceId =" 2D4C7DFF-BA83-10D5-9849-0CE701139B5B " >
< EventData >
< Data > / xxxxxxxx / server.log </ Data >
< Data > xxxxx.xxxxx.se </ Data >
< Data > <SCXUser><UserId>xxxxxx</UserId><Elev></Elev></SCXUser> </ Data >
< Data > The operation succeeded and cannot be reversed but the result is too large to send. </ Data >
</ EventData >
</ DataItem >
Hi Fredrik,
At any one time, SCX can return 500 matching lines. If you're trying to return > 500 matching lines, then SCX will throttle your limit to 500 lines (that is, it'll return 500 lines, note where it left off, and pick up where it left off next time log files
are scanned).
Now, be aware that Operations Manager will "cook down" multiple regular expressions to a single agent query. This is done for efficiency purposes. What this means: if you have 10 different, unrelated regular expressions against a single log file, all of these will be "cooked down" and presented to the agent as one single request. However, these separate regular expressions, collectively, are limited to 500 matching lines. Hope this makes sense.
This limit is set because (at least at the time) we didn't think Operations Manager itself could handle a larger response on the management server itself. That is, it's not an agent issue as such, it's a management server issue.
So, with that in mind, you have several options:
If you have separate RegEx expressions, you can reconfigure your logging (presumably done via syslog?) to log your larger log messages to a separate log file. This will help "cook down", but ultimately, the limit of 500 RegEx results is still there; you're
just mitigating cook down.
If a single RegEx expression is matching > 500 lines, there is no workaround to this today. This is a hardcoded limit in the agent, and can't be overridden.
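If you're unsure which case you're in, a quick check on the Red Hat box can count how many lines your expression currently matches per scan. This is a generic sketch: the log path and the 'ERROR' pattern below are placeholders for your actual log file and the RegEx configured in the SCOM rule.

```shell
#!/bin/sh
# stand-in log file; replace with your real path (elided as /xxxxxxxx/server.log in the alert)
log=/tmp/sample_server.log
printf 'ERROR disk full\nINFO ok\nERROR timeout\n' > "$log"

# count lines matching the same extended RegEx configured in the SCOM rule
matches=$(grep -cE 'ERROR' "$log")
echo "matches=$matches"

# anything over 500 matches per scan interval hits the hardcoded SCX limit
if [ "$matches" -gt 500 ]; then
  echo "over the 500-line SCX limit"
fi
```

Run this against the real log with the real pattern around the time the alert fires; if the count stays under 500, that supports escalating to Microsoft Support as described below.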
Now, if you're certain that your regular expression is matching < 500 lines, yet you're getting this error, then I'd suggest contacting Microsoft Support Services to open an RFC and have this issue escalated to the product team. Due to a logging issue
within logfilereader, I'm not certain you can enable tracing to see exactly what's going on (although you could use command line queries to see what's happening internally). This is involved enough where it's best to get Microsoft Support involved.
But as I said, this is only useful if you're certain that your regular expression is matching < 500 lines. If you are matching more than this, this is a known restriction today. But with an RFC, even that could at least be evaluated to see exactly the
load > 500 matches will have on the management server.
/Jeff -
Another Install Problem (With Log Files)
Hey there.
I've read many of the install problem threads and have tried numerous things to get this working, but to no avail. This is getting VERY frustrating.. :-E
Machine is a Dell Latitude with 1Gb mem, Running XP Pro SP2.
My login ID is dgault.
I've set my temp directories (temp and tmp) both to point to c:\temp
Here are my log files:
====================================
XE.bat.log -- START
====================================
Instance created.
====================================
XE.bat.log -- END
====================================
====================================
CloneRmanRestore.log -- START
====================================
SQL> startup nomount pfile="C:\oraclexe\app\oracle\product\10.2.0\server\config\scripts\init.ora";
ORA-24324: service handle not initialized
ORA-24323: value not allowed
ORA-28547: connection to server failed, probable Oracle Net admin error
SQL> @C:\oraclexe\app\oracle\product\10.2.0\server\config\scripts\rmanRestoreDatafiles.sql;
SQL> set echo off;
ERROR:
ORA-03114: not connected to ORACLE
ERROR:
ORA-03114: not connected to ORACLE
ERROR:
ORA-03114: not connected to ORACLE
ERROR:
ORA-28547: connection to server failed, probable Oracle Net admin error
SQL> spool C:\oraclexe\app\oracle\product\10.2.0\server\config\log\cloneDBCreation.log
====================================
CloneRmanRestore.log -- END
====================================
====================================
CloneDBCreation.log -- START
====================================
SQL> Create controlfile reuse set database "XE"
2 MAXINSTANCES 8
3 MAXLOGHISTORY 1
4 MAXLOGFILES 16
5 MAXLOGMEMBERS 3
6 MAXDATAFILES 100
7 Datafile
8 'C:\oraclexe\oradata\XE\system.dbf',
9 'C:\oraclexe\oradata\XE\undo.dbf',
10 'C:\oraclexe\oradata\XE\sysaux.dbf',
11 'C:\oraclexe\oradata\XE\users.dbf'
12 LOGFILE GROUP 1 ('C:\oraclexe\oradata\XE\log1.dbf') SIZE 51200K,
13 GROUP 2 ('C:\oraclexe\oradata\XE\log2.dbf') SIZE 51200K,
14 GROUP 3 ('C:\oraclexe\oradata\XE\log3.dbf') SIZE 51200K RESETLOGS;
SP2-0640: Not connected
SQL> exec dbms_backup_restore.zerodbid(0);
SP2-0640: Not connected
SP2-0641: "EXECUTE" requires connection to server
SQL> shutdown immediate;
ORA-24324: service handle not initialized
ORA-24323: value not allowed
ORA-28547: connection to server failed, probable Oracle Net admin error
SQL> startup nomount pfile="C:\oraclexe\app\oracle\product\10.2.0\server\config\scripts\initXETemp.ora";
ORA-24324: service handle not initialized
ORA-01041: internal error. hostdef extension doesn't exist
SQL> Create controlfile reuse set database "XE"
2 MAXINSTANCES 8
3 MAXLOGHISTORY 1
4 MAXLOGFILES 16
5 MAXLOGMEMBERS 3
6 MAXDATAFILES 100
7 Datafile
8 'C:\oraclexe\oradata\XE\system.dbf',
9 'C:\oraclexe\oradata\XE\undo.dbf',
10 'C:\oraclexe\oradata\XE\sysaux.dbf',
11 'C:\oraclexe\oradata\XE\users.dbf'
12 LOGFILE GROUP 1 ('C:\oraclexe\oradata\XE\log1.dbf') SIZE 51200K,
13 GROUP 2 ('C:\oraclexe\oradata\XE\log2.dbf') SIZE 51200K,
14 GROUP 3 ('C:\oraclexe\oradata\XE\log3.dbf') SIZE 51200K RESETLOGS;
SP2-0640: Not connected
SQL> alter system enable restricted session;
SP2-0640: Not connected
SQL> alter database "XE" open resetlogs;
SP2-0640: Not connected
SQL> alter database rename global_name to "XE";
SP2-0640: Not connected
SQL> ALTER TABLESPACE TEMP ADD TEMPFILE 'C:\oraclexe\oradata\XE\temp.dbf' SIZE 20480K REUSE AUTOEXTEND ON NEXT 640K MAXSIZE UNLIMITED;
SP2-0640: Not connected
SQL> select tablespace_name from dba_tablespaces where tablespace_name='USERS';
SP2-0640: Not connected
SQL> select sid, program, serial#, username from v$session;
SP2-0640: Not connected
SQL> alter user sys identified by "&&sysPassword";
SP2-0640: Not connected
SQL> alter user system identified by "&&systemPassword";
SP2-0640: Not connected
SQL> alter system disable restricted session;
SP2-0640: Not connected
SQL> @C:\oraclexe\app\oracle\product\10.2.0\server\config\scripts\postScripts.sql
SQL> connect "SYS"/"&&sysPassword" as SYSDBA
ERROR:
ORA-28547: connection to server failed, probable Oracle Net admin error
SQL> set echo on
SQL> spool C:\oraclexe\app\oracle\product\10.2.0\server\config\log\postScripts.log
====================================
CloneDBCreation.log -- END
====================================
====================================
postScripts.log -- START
====================================
SQL> @C:\oraclexe\app\oracle\product\10.2.0\server\rdbms\admin\dbmssml.sql;
SP2-0310: unable to open file "C:\oraclexe\app\oracle\product\10.2.0\server\rdbms\admin\dbmssml.sql"
SQL> @C:\oraclexe\app\oracle\product\10.2.0\server\rdbms\admin\dbmsclr.plb;
SQL> DROP PUBLIC DATABASE LINK DBMS_CLRDBLINK;
SP2-0640: Not connected
SQL> CREATE PUBLIC DATABASE LINK DBMS_CLRDBLINK USING 'ORACLR_CONNECTION_DATA';
SP2-0640: Not connected
SQL> CREATE OR REPLACE LIBRARY ORACLECLR_LIB wrapped
2 a000000
3 1
4 abcd
5 abcd
6 abcd
7 abcd
8 abcd
9 abcd
10 abcd
11 abcd
12 abcd
13 abcd
14 abcd
15 abcd
16 abcd
17 abcd
18 abcd
19 16
20 51 8d
21 LSqVp2u3D6yxyD42bHCkpHL03/8wg04I9Z7AdBjDpSjA9TNSMjO9GP4I9Qm4dCtp6jfnlRLO
22 EXUFAGLlV0fbBvBjoirfWNdXU3VV0WYkgIWTZhXOjnGHQ2VzowkkIKuoKmprxsHwQ=
23
24 /
SP2-0640: Not connected
SQL> DROP TYPE DBMS_CLRParamTable;
SP2-0640: Not connected
SQL> DROP TYPE DBMS_CLRType;
SP2-0640: Not connected
SQL> CREATE OR REPLACE TYPE DBMS_CLRType wrapped
2 a000000
3 1
4 abcd
5 abcd
6 abcd
7 abcd
8 abcd
9 abcd
10 abcd
11 abcd
12 abcd
13 abcd
14 abcd
15 abcd
16 abcd
17 abcd
18 abcd
19 d
20 4be 207
21 3WAupYEFJyUtDT58GzFPeWkUS6wwgwKJr0hqynRAv7leuFljpGFIgxvNNkagWXCAOYNjnLy1
22 ulbIGu/7Jr4I+E4ghHw/fZT2AjJ43oXGRL90ldDxQSra1CPcaBsAtcpUa02tik8fNqx/KMgr
23 633l8+Va2DhCmvZXp9G7vbOPt7Pl3MM9zMw2e9Y0okY53GpiRO894C9geS1Y7KzzE/IgLaEu
24 32gKwpBN6M0RCm7BYQ+ovzICzvY5VVyfs/mJp4oYS98qQpcbag5dyZAf0OP/aKDRu8nMxkFb
25 i/etbPODbix+jSyOsHVw8+Q+m5vbJnoYgrAEVyEgB3LQctJbF95qK2fWuM+PzvFnTTxAGGzD
26 bbFaBpyXAP09LtZsxHxeICUOFvBRezKHmWrTb5DRlika6Lg9ukf6Rh9Vb+74Kw+dCaqdPNsm
27 BbgD7N+pj3pEKfdUH3CrGeQtEflPW7LZ5wEdk1k/oTs5nee7t70+LOfUmSdFznr3wK/OVfn4
28 KShfwfMR
29
30 /
SP2-0640: Not connected
SQL> CREATE OR REPLACE TYPE BODY DBMS_CLRType wrapped
2 a000000
3 1
4 abcd
5 abcd
6 abcd
7 abcd
8 abcd
9 abcd
10 abcd
11 abcd
12 abcd
13 abcd
14 abcd
15 abcd
16 abcd
17 abcd
18 abcd
19 e
20 41f 191
21 WGxKHaEucYlWwCTtmi+GiJKjekYwgwK3ctxqfHQHrbs+zza9qFIBBo/k3vRdV42GdJcBu7Vv
22 ITu0l2CDDI1d+D9K6+h7yxxZwO9Xtk4x8RFMvTqmcLYXjeAqvfUCO0DbqqDG+0SG03B8N8zU
23 x3CB7ZzBJqbdVlPKP72aumnr8weouKrQT4tmBg3nhDN3+4ve7JkpJVEIEI+T5dJDg3IF2nEb
24 xv03mcyUhyCvDbOazgEBY+LaQTQ99WwuW3WZw4y5xOakbH7mnBiomlFxUQglR1Hft6tRchhS
25 tJTSEuprYV4kbm7IcRmC1LRlilvfcjDmMRWJUyC8NDvKu45v5GiDxx268uhVJTkhTBGaNgPz
26 idKIcZk/6eV4Myw05MkyijGkKIEIpR3Fl0SO
27
28 /
SP2-0640: Not connected
SQL> CREATE OR REPLACE TYPE DBMS_CLRParamTable AS TABLE OF DBMS_CLRType;
2 /
SP2-0640: Not connected
SQL> CREATE OR REPLACE PACKAGE DBMS_CLR wrapped
2 a000000
3 1
4 abcd
5 abcd
6 abcd
7 abcd
8 abcd
9 abcd
10 abcd
11 abcd
12 abcd
13 abcd
14 abcd
15 abcd
16 abcd
17 abcd
18 abcd
19 9
20 3162 65e
21 igQsRO8he8CDCdDl4nWpC6D62Xcwgz0T2UgFey9AmP9euDHhTNtIIypFDhpSVolmshjyUX7k
22 SDMhxRY91oYjSjLiIwWaV61R3iM8yqEjBdxa/QqeVR3pZs7ue/BsPqTYpXW8XRTJmbmDO5
23 y6g6sM26+9djcF+m6Fqq8mC6NyZn6S5/u5YqlKUW6Z0/jFVzc+7lxa51jAi2w83JxUetuepc
24 Egxc0uEGzxAtwztimeUcybwG552DvNxfbRYPmlZcF9ms5bun8tEOU37kSxAxwg78sGNmXyJg
25 Jp+fefVhVk3C9oZaBEqX7v/i8BgyRDcEjUz9lIky1qFGl+LwK6UjnlZNwvaMFeGiVd1F/AUF
26 mHTk3md05YqDaT+DTqV8W1zC30fR3VfRvaLGYXiY3Q7FSir0QtQzyR8EXCMAYA3EXEaUFpex
27 HwxcYAocVlx+EIrX0XzluGgiDXiY3Q4l/lmPizTlkrkJ9LGUPSicGFqTaYHrCe0hotIXVND2
28 F7HUVK9cmOiDrcMQA+iDHp686BzH3ZSlKjFqVM6JTMPDsiJPMkNbw/6M6OgXOuH2yHO9AMlb
29 OziQdfrmRltzw9EUNffiMMtRhoLdqYs1e2XMMqCVgGctzFg7P2tU+kbANpabiyUIvhhaAu7a
30 xyvmPVJnmysL4u823iZM2GqZiZCpKW3Qv4NbJpkxn9LDl13NZ651CmCRtTHYpzbEOxcukq0t
31 lwO08hc0bwA3SconEG/mRIBo82vHgSlwIZu7C4AMzIIYYHFCc85MYN2EANfivUZrD486W1F/
32 gR3t490htjoHcFdVf1DiPqkXdtb79WooM4LoLHkw8U+qpiF2NYvSl6lJgb7BVdDiI3dux9EI
33 z61yE26Ss4Fd8U7cZM56fUJJ7aWLcdeAiNbVenhTe3KFBHHuOq+tP/9upKGieCQXcjKNfxCw
34 +1WK69iQf7XbU9OsMBAoNQ7Bo27SJLPVjEvTtkKuNfMrly1CbKAe9AzUNy5bE5S593CX54xc
35 Vw68Qij7gam+GE04w25o+7JJ3oiAgi8jYYbYD2zZxIWMz4MmrVq3eE390NbSHyo7jwHegxKK
36 f3h+yaUTftrGMN6jT2lokTEy1KiyE7MSEwHBtNF5y79IE8xyVuVpIMIMc0DE/TJ0uJ7SOfLE
37 6SqgfhRxYRnsuAM1/GFNB7fwRPx19omV1+MCt2mBmwWKreim3q4NJgWKrexOr0FoZGET9buf
38 RaRVyXcxl/K3Xu/C19hkaqBibbH9eQf9JAWUOtDPAvh/ThmIIy15+VGDFNmummh9SXftWiSE
39 D0vX9JgmaYFFgfMECrWS664SELEFQKBDY2tyhUXo5a0E6EMyi2X4B+aqeJszH5WuDGcKF+d/
40 7NklyocS0C9rvMWyDj1qV73XI6vfmBdSFS55SOx3O5uzoKk4Vw3sFlLVkwyA3w2fuV/6PcOI
41 mayz9ZGxGT3tryZDopGviZT6Zd+BJdzRDexA9vz6kHEnKqSxtLQws8Nbtzm7e+9X7kd2yDnN
42 zdju2xPRoVlXR/M41DFx8QRY5B1OfryhhCITa25oua0+Yrt8bQJCmke63jDNWP+92nHIEU+e
43 eWu1mrm9oOz5JJXuag+ENbhu
44
45 /
SP2-0640: Not connected
SQL> show errors
SP2-0640: Not connected
SP2-0641: "SHOW ERRORS" requires connection to server
SQL> CREATE OR REPLACE PACKAGE BODY DBMS_CLR wrapped
2 a000000
3 1
4 abcd
5 abcd
6 abcd
7 abcd
8 abcd
9 abcd
10 abcd
11 abcd
12 abcd
13 abcd
14 abcd
15 abcd
16 abcd
17 abcd
18 abcd
19 b
20 933d 1c32
21 LjzBBzQRtLt3jlDfh/c2/PSd1T8wg1VMr0iGl8DXM4HqbvrJkWfzixk0XWxmoBbxAb73ueCM
22 RRbLF4Q2NZ+TRL3Ilc/PFpNhoqGGvhwPEl1/yYy50S2Sbuvp5ZgYt02SeKOCl+i5zJx/KFxp
23 aZ/LBLWh73oUCRg8SdRqDz1a39OEKQKgLDQEZJMtce5ef+zwT5ZUAAEz+DyK3yH1r6W9A6po
24 7D0uukDHeE98+B48WYNUwiLGik+f6u8SGxS1NCqCLEJ2L+t3M70DnS5Hitkt7rbJtWV/mbaY
25 SUf5MnL9HkDmkEmHIjgzBbALmCL5OJiaYZ89pClOS+R5SYmyKWzrsIqf8r3w2E9C7RImcZ/S
26 PpiQK13CjK4xzdtRdwDHc+QzxAc6TEsQl0hJnMUhQ4JSOrEScdGrIg3/vyM+IHMCRPgaVdyW
27 QwNz5BCwH3l7DyS7I9rtz0o42vmIMPki/JV51sHtvfA3KX/YHCrw73K6F3iVIvxALReJLslq
28 D2EfaNl9/jEPJM3UfluFv4B9udP9PIr9vlcV2XlOnFshHFvkM/i7mPMqWyxzU8ItLAPNQXOf
29 A4H5hrHQlWGBGTicoCZTSI2zFvC3BnJxDdSCxCqMbq2nax8YekAYxpnXgFXwEMHX983iJnIF
30 Ts5j/DsoNO5LzewGJJpMeW6xn6Ne2e99xjPoDZmlcmt+O5e/QFVwJD6lwfP9a0v4ds8mjJb+
31 TsGz4AS6uQe5G5v+16q1EoEPtde1/k+1CJ21Tk+qqpq2WjzNMzO6zSKfGblhBsIIE7+ymAqb
32 MI16BXhySREcqDBfg70JTltZSlJ1cGVlgN8YkeGv19z6B50dxsR+PZCbg8GzKuIseoOH4GHG
33 7m409J5hUCL1Vd3BVQAUxMTEvJs0EDBpnYiE2+zEFupuYf95bFiJPPfLee+BGcmafJCGLD/4
34 0tCd8E4WgA8BmMWC0GgEn+5JSeJhv+LJ/IM73/OOFbgktiRFUFUIKzGQXww4iT+5ToDIdyhu
35 KNqYlEroIub+fYYzRYZ4hc58Kl8oKCFo380RfgvrSpFsTzq665o/s1fOvdttC8nl2uL5zX+j
36 185OV4CGkhWj+1w8JQJcoLCMpHOhJrOzIxHTmh6G0MhSs7gMlSS167uqAIsVmgaznSgKW6rc
37 cL7OeQVtIMwIxBIw6OtBZtN10ktKYbeY/o9XopbUaXifH/4P3w0WGyUsHblz0zGydaQrKbm5
38 uPuL+L7kLd3CHT9fH2jwpJiWzwQyJDsIOVO/EGURdMaGsPq0MYyuTsYzlfeGgDuMxcSGZmNd
39 Ae2Z5FdOIy3wMgkfsM0Dhn/EhwNVilWtwCOZ/I7E4CNytJpiHP2fSz/VyH740Zp4YQCaUzJ+
40 mLzH/rRqJPREB7oGsJCfsFkiwbz5TZIkBNqwCMC/KbYppPMw5P3NIUaGXUrk1sTQ7uT5UsAK
41 V9C/11OnxpR4TLP4lBLyOrTPBfINmWUokO9/KHkkofP+XnoQR5jAkHqojfq7m09jiZAHEpGA
42 ePrJmr0Whow8Un6YMdwLLGA/WTKAFYNg/oLuzTOo4vIj2tCHXjDvPmQEUdzfnxlkm5+2Qvcz
43 G4NFjoG5vwPi8hD+0e2x+IYpM4/4XJpzWYcUnSZF0Sm7P7rSe9K/u9kymbsmSQO3pIv+CjT4
44 WDRaQl5MTAkZQXceyBnWs5iUmjE8Tvhcmj/FlvGa9FPRYLwK0w40KEQi83M/qESXT6g1Oh2r
45 NBxzeWIZCtI/lDHtVMCaskLqjrsZA49dnL31ltDAmrJSaz5kFNvwQTQFL3itnqrGqEhuxtnm
46 aPdu0QdTCrMTNBev+mRRV0ItXV0S7AVDxHH6bxk8jf7lvrd6a/4KvlWihxq+9BrRJ7knFXE6
47 SoxxOm02vptjf+Lk3OMF6K+HB2hhQTQFA73CD4aR4G7G2f9sSl31oUgFRzweyAU8t/7FxN75
48 TviNBZ8clvEFLW68bHhjuRiOeCNOQVx4+vKqmhX9sJvgzaTeHvHknzr8sai8n8HZEo0ZoQa1
49 +JQZSGaW8VWiXpyiFygqhLGoNIC/GQozijQGHnP8u1JlliWPWNtBd2sQvt9suZ4hYSwIY/M/
50 /hV64rLkRBreD/l2Uhz1/hp6ao38giE9YUoGnMzezpWRq/lWkECwAiMWi+3LWCLO1uwjVAMN
51 9l1VIpOHxY0/sYiB+DEaHxs8T1q5PjgzCJdGMYIpK1gt939KvMc4HLEGnao4Mwiu84s1wJxG
52 vpb+vMtcuZBCZGV61ZCqnatorkPp4Xr3PKHege67z9V9o5+omgg5XZbCOs4l8MYp6Ib3dzyG
53 gkO8Gkhf980Qc825jJzsJIZCjfeaVg8/FodBp9EsJo+4+qSHPaB1cxowCKVibcY8kFidAB63
54 30Z58Dqw788cxVnmtKsAibcse8sPUhZ4aEp7RApXNZtNWsHG3XriYSNiVnL2URnL/6GU6xyz
55 XlDcNQB3VXME6ICBt2REKZPwhgWoI3GNU1vSNkteetD8QkG9fVKhPPY1Qod4gZ9U3MWQM3BB
56 UTIYi4tNV49YuEgb6RxkRH2LNNOGzS9VWfJJM8hBNZ/oUB+pxSDW5eTDVENm4ptMcKqOdztV
57 HgY6Tkt6xgjaBuQ4AbwiGJu2bEI10JrzhoTsg8eVznXgzifgeqE2z4R/HAn+HNtXNSlxXyTn
58 UTQiGJtOcInHdkPeyiihRXIQhXpdVJ1vyBdYUCBbXVK5mxyFthr/qeQ1Nadk4sabsPotel4L
59 OhoELILFT/TuqP0zPT/aQV3YvO6WxoSnKWq71L3ysAQi6L0itmqEGMH2ODDs4zqfBBxj3Ll/
60 blH1vWoH8LAsNwhSBaUqa3oxjxK6ISgFwICp3MraldLIR4FZotC3CIeZcgOJvsSETlf6edBD
61 vcOwWoMUYilYEYMhaooNpg0MQnAgW+WQkUjNN+2paHivVUlW5Hw0nCXoh6TN3jyFrt34f9eG
62 jggLV530Qs5eZ511mdL8UAdPShDOG87uPtKuJcpB9HNevdFkMBbAqLJDLnJg6PTvB+/xghSd
63 AjP5frWAs7zIDQiDEa7H1RkczcZ+47ag0Pd66fjOjvhYaa4J84eZBZm9HSBbitLjqtD8iOCV
64 ldaSzV6X0ADKnZIDCK4S0SISGyIQHEE0zPjueoGpaEi0rcD+ZOsZ8E3tmwD7+Qa1HUsy7xmd
65 65LTHSTh+DEMYa7cGrA/19BMMGc6MMCIJbTLLn4PG6plCvOS6O0HQ93d6fGn+LX1W5z/2CxD
66 wlv5dWHWX0qHuxDlO/j5Zx8Ziu2qZP6zBTBJ2ByQKT8TtPg16tQeOinOKswSRh79S9oQwX1G
67 j2qITsQ6VfC+ZSNy3Pxk9FUdTSBnuV0y1LZI0Eo0lsgSmhIBoEEXsnG2ZICpvPst6/4N3HVV
68 dqQvDw6fTs4sYXGUvhNOjDP24P4Ed3gOv4IQ/UP1Qz8HcL4JQEOXPqd4i1RBZjo+rMQQ6tTN
69 Kk6Sp24/ErMivuBkyMy+/GOS6B7SBW3S7qn+JWak+OJ590Fu8A89ZhCpm2JvKbMKA9xvKbZG
70 l5RlxbFZJjRssJsuCSgmVpw/20jaZF93A1kO9maBqYv9yHtCJgaJd0lvJ0IQHqA0BgGjvO7F
71 Yp0NWizrz9Glvs2YYXNqt3QmCoMAz6mbYjPLKDqjXiIsXkrRpb7NmGUirgMN4vRygBaaqXKG
72 sbmQCDq4FU6y8mt31+6mFAlFq6MyI+anWj48h75lqrJHxTL0iWan1RQJGP2eYh/LcCYIsLcK
73 d2wJGALHoRMYHiuIWM3IAirHptM+lbICp+4s8SWLuKbTPpWD1eqL/TcfiYda+K9tCOwyuaZU
74 T1cJ8oc8pawlmd4kMH+HAxndF1vnv1xpHraM0Qsc5Q48SdFx+vaWyy+55Q48SdTARO7LMohO
75 aUQNIghZE0jsladaPjyHvmWahXY8SUqJ8ZyBLu6mqm2i8lKEawHOdN50JUCm8av0ieDNjdVO
76 8I9qni729IlmikqV+6m46kQNIghm9wmJ3zZW6s7DV6YvueUOPLCGenhW6loajniOo9F23Qlq
77 mm/LWhqO8Jfdmfl+VjLuqaLys010egwajvCX3W1nMUhLcTVTCqyjT1O4ViplE9+QZLY+lRW+
78 Gi6VMgpfz+zh38em0z6VsgKn7kSL7xeMdYu4ptM+lYPV6ov9Nx+Jh1r4r20I7DK5plRPVwny
79 hzylrCWZ3iQwf4cDGd0XW+e/XGketozRCxzlDjxJ0XH69pbLL7nlDjxJ1MBE7ssyiE5pRA0i
80 CEfPXX4h100MGo72ZHFYIJHLHraA8vrZkui4qZ3pZmNTa1Blz0AhOd6EpqHwovItM5eNOmCl
81 OALoDaLy85/GBqZnUwqso08w91o+8v3hp9XPxXam579caR62lkeA/2B/bAkYVOLIWj5IFqBx
82 V6ljQJ1idshaPquNNx/kMVT8Ffv0iVHMiDCnWj6rjTcfN8JVcL6suOpEDSJS7lQnkd9O9wPM
83 WfSJ4O5xJegGRVxgyYIsrf2VB0cDWB3IWj5IFqB0lnwsrf2V65HtrXapovItM5eNOmCPRGMf
84 JT+l4DTJkNqmPElKifGcKjQO0uaDKIdfsdh4pTH7FUyUd5LbTdl1B8VNgfSZ6puDbR1GmMVN
85 gfQtcTVThEzUesM1O656giyt/ZUh13wMGo72ZHFYIJHLHraA8vrZkui4qZ3pZmNTa1Blz0Ah
86 Od6EpqHwovItM5eNOmClOALoDaLy85/GBqZnUwqso08w91o+8v3hp9XPxXam579caR62lkeA
87 /2B/MZyi8vPZlNM+lbICp5lbS3elxAId0z7pjGWlrCT9DExXyvcJIcUdctM+6YxlpWZpYf1W
88 6i+55Q48SdF2DPBHySLA3xj6vBQJGALHoRMYHiuIWNfPXX6OXkuW9IkGR/taGo54jqNAH3XQ
89 bvDg5sHNy2smFt3Pkc/eRsG4ptM+lYPV6ov9Nx+Jh1r4r20I7DK5plRPVwnyhzylrCWZ3iQw
90 f4cDGd0XW+e/XGketozRCxzlDjxJ0XH69pbLL7nlDjxJ1MBE7ssyiE5pRA0iCEfPXX6OXkuW
91 hGZjERAGrdM+lYPV6ov9WGqS2/cJm8K8PnaPIPdaPkgWoHRPqPA1gy9nCe/3CdjtwAG8889U
92 a95i9wlGVS1Xpi+55Q48SYcUCRglQKlDnk+eW6YVTJR3ktuZTvcDzFmvvBQJGFTiyFo+SBag
93 cVepY0CdYnbIWj6rjTcf5DFU/BX79IlRzIgwp1o+q403HzfCVXC+rLjqRA0iUu5UJ5HfTvcD
94 zFmvvBQJGALHoRMYHiuIWNdtpLBD13apovJShGsBMDcfC1Vp3RT0UstbyvcJ2O3AAbzzMDUp
95 AqAsNARkk8HLqYH072NdAbKWFu3IM9Yfhioess+fqbjqRA0if/xli2ueJxAGFY00Mwam7uKe
96 JxDgVuovuWmcfyhcaR62cjQEHkDmkAei8rOS28cUCRgN6uJHeGUHgfTvY10BEF5X3Zpm9wlo
97 aRGIIoSr1qCoqRpgZmNUEkOXz+/OS+bro2Zj55gLUqbKaFw1O640WhqOY7L72ZQilMumRpjF
98 TYEpVpbT2ZTTPpWyAqeZW0t3pcQCHdM+6Yxlpawk/QxMV8r3CSHFHXLTPumMZaVmaWH9Vuov
99 ueUOPEnRdgzwR4uVVHiO/G5aPm//bHnF4roBkd+W1EVrUMai8lKLphQJGP2eYmqZNAQeQNzT
100 xcG4ptM+lYPV6ov9Nx+Jh1r4r20I7DK5plRPVwnyhzylrCWZ3iQwf4cDGd0XW+e/XGketozR
101 CxzlDjxJ0XH69pbLL7nlDjxJ1MBE7ssyiE5pRA0iCEdtLUgWWYJ8tj6Vg9Xqi/1YapLb9wmb
102 wrw+do8g91o+SBagdE+o8DWDL2cJ7/cJ2O3AAbzzz1Rr3mL3CUZVLVemL7nlDjxJhxQJGCVA
103 qUOeT55bphVMlHeS25mW1EVrUMai8vPZlNM+lbICp5lbS3elxAId0z7pjGWlrCT9DExXyvcJ
104 IcUdctM+6YxlpWZpYf1W6i+55Q48SdF2DPBHi5VUeI6SdVo+b/9secXiugFd3quGR/taGo54
105 jqNAqlm5bmr7WhqO9mRxWCA2h2kjqM1bltT2PL4dcBAGWj5Ck2qZ8pYac4+VMNcjHU+kymhc
106 NTuuX0VfQk2so09TuGxk7VdnUwqso08E2ZRG2ncBR4SUd5LbA3J/VE8wWhqOO656gGZjVBJD
107 l8/vx1RPVwnyhzx4Ah3kYtM+lbICp+6CGn9a/eFaJxQJGA3q4kd4rCjVITtsCRhL2m9bphVM
108 lHeS222i8syDL0tx7l7upmdTCqyjT+g6UVTiyFo+SBagcVepY0CdYnbIWj6rjTcf5DFU/BX7
109 9IlRzIgwp1o+q403HzfCVXC+rLjqRA0iUu5UJ5Fhcn9UT1f3CVZux/+twW7H0d5eUlXF7wM+
110 MvSJBkf7WhqOeI6jQB8WMsdXpvSJ4M2N1U7w3Znylhpzj5Uw13u/ptiudj5slvYwNSkCoCw0
111 BGSTII8DHUaYxU2B9MycfyhcaR62H+JdDR9b579caR62C76suAdFX0JNrKNPMOh3svSJUYH0
112 wVoajvZkcVggkcsetoDy+tmS6LipnelmY1NrUGXPQCE53oSmofCi8i0zl406YKU4AugNovLz
113 n8YGpmdTCqyjTzD3Wj7y/eGn1c/Fdqbnv1xpHraWiUWi8vPZlNM+lbICp5lbS3elxAId0z7p
114 jGWlrCT9DExXyvcJIcUdctM+6YxlpWZpYf1W6i+55Q48SdF2DPBH6HdsCRgCx/++8bGHdsit
115 uAG0aRGIh9g3RnrDvxpYsqamaCe29/g0yMgH7F/yl3oUCRh+2EL4KxYBFAkYftgK0NObaVX3
116 CQHYWM0L7qlHN9VqQObZpeCyFF015dolRzmm3Tvf875ymb1CTfpNGkeEZmOF1OyzVS1l0z7p
117 3zw9gL1CyPxK9U25WiZlPEczLbhni92NOIRrATAtOdObDjKITks417zzaCe29/g0yMgH7F/y
118 l3oUCRh+2IhVpNXv9wmRC0wQichbO9l1B7L2M/DPVm4dFF04+IfRVurIyFs72XUHsvYz8I8e
119 6nd2z9LUGGQPqKgoPaTeyR28P+nXr4Ag2M6SlNObyj2k3snVbgsmbZ34qj5s6s0=
120
121 /
SP2-0640: Not connected
SQL> show errors
SP2-0640: Not connected
SP2-0641: "SHOW ERRORS" requires connection to server
SQL> CREATE OR REPLACE PUBLIC SYNONYM DBMS_CLR FOR DBMS_CLR;
SP2-0640: Not connected
SQL> DECLARE
2 ORCL_HOME_DIR VARCHAR2(1024);
3 BEGIN
4 DBMS_SYSTEM.GET_ENV('ORACLE_HOME', ORCL_HOME_DIR);
5 EXECUTE IMMEDIATE 'CREATE OR REPLACE DIRECTORY ORACLECLRDIR AS ''' || ORCL_HOME_DIR || '\bin\clr''';
6 END;
7 /
SP2-0640: Not connected
SQL> show errors
SP2-0640: Not connected
SP2-0641: "SHOW ERRORS" requires connection to server
SQL> @C:\oraclexe\app\oracle\product\10.2.0\server\rdbms\admin\patch\patch_4659228.sql;
SQL> set echo off
SP2-0640: Not connected
SP2-0640: Not connected
SP2-0640: Not connected
SP2-0640: Not connected
SP2-0640: Not connected
SP2-0640: Not connected
SP2-0640: Not connected
SP2-0640: Not connected
SP2-0640: Not connected
...wwv_flow_help
SP2-0640: Not connected
SP2-0640: Not connected
SP2-0640: Not connected
SP2-0641: "SHOW ERRORS" requires connection to server
SP2-0640: Not connected
SP2-0640: Not connected
SP2-0640: Not connected
SP2-0640: Not connected
SP2-0640: Not connected
timing for: Load Start
Elapsed: 00:00:00.00
SP2-0640: Not connected
SP2-0640: Not connected
SP2-0641: "EXECUTE" requires connection to server
SP2-0640: Not connected
ERROR:
ORA-28547: connection to server failed, probable Oracle Net admin error
SP2-0640: Not connected
SP2-0641: "EXECUTE" requires connection to server
SP2-0640: Not connected
====================================
postScripts.log -- END
====================================
====================================
PostDBCreation.log -- START
====================================
SQL> connect "SYS"/"&&sysPassword" as SYSDBA
ERROR:
ORA-28547: connection to server failed, probable Oracle Net admin error
SQL> set echo on
SQL> //create or replace directory DB_BACKUPS as 'C:\oraclexe\app\oracle\flash_recovery_area';
SP2-0640: Not connected
SQL> begin
2 dbms_xdb.sethttpport('8080');
3 dbms_xdb.setftpport('0');
4 end;
5 /
SP2-0640: Not connected
SQL> create spfile='C:\oraclexe\app\oracle\product\10.2.0\server\dbs/spfileXE.ora' FROM pfile='C:\oraclexe\app\oracle\product\10.2.0\server\config\scripts\init.ora';
SP2-0640: Not connected
SQL> shutdown immediate;
ORA-24324: service handle not initialized
ORA-24323: value not allowed
ORA-28547: connection to server failed, probable Oracle Net admin error
SQL> connect "SYS"/"&&sysPassword" as SYSDBA
ERROR:
ORA-28547: connection to server failed, probable Oracle Net admin error
SQL> startup ;
ORA-24324: service handle not initialized
ORA-01041: internal error. hostdef extension doesn't exist
SQL> select 'utl_recomp_begin: ' || to_char(sysdate, 'HH:MI:SS') from dual;
SP2-0640: Not connected
SQL> execute utl_recomp.recomp_serial();
SP2-0640: Not connected
SP2-0641: "EXECUTE" requires connection to server
SQL> select 'utl_recomp_end: ' || to_char(sysdate, 'HH:MI:SS') from dual;
SP2-0640: Not connected
SQL> alter user hr password expire account lock;
SP2-0640: Not connected
SQL> alter user ctxsys password expire account lock;
SP2-0640: Not connected
SQL> alter user outln password expire account lock;
SP2-0640: Not connected
SQL> spool off;
====================================
PostDBCreation.log -- END
====================================
There were no CORE*.LOG files, so here are the other two:
============================
alert_xe.log START
============================
Dump file c:\oraclexe\app\oracle\admin\xe\bdump\alert_xe.log
Wed Nov 09 10:52:59 2005
ORACLE V10.2.0.1.0 - Beta vsnsta=1
vsnsql=14 vsnxtr=3
Windows XP Version V5.1 Service Pack 2
CPU : 1 - type 586
Process Affinity : 0x00000000
Memory (Avail/Total): Ph:596M/1023M, Ph+PgF:2167M/2459M, VA:1936M/2047M
Wed Nov 09 10:52:59 2005
Starting ORACLE instance (normal)
LICENSE_MAX_SESSION = 0
LICENSE_SESSIONS_WARNING = 0
Shared memory segment for instance monitoring created
Picked latch-free SCN scheme 2
Using LOG_ARCHIVE_DEST_1 parameter default value as C:\oraclexe\app\oracle\product\10.2.0\server\RDBMS
Autotune of undo retention is turned on.
IMODE=BR
ILAT =10
LICENSE_MAX_USERS = 0
SYS auditing is disabled
ksdpec: called for event 13740 prior to event group initialization
Starting up ORACLE RDBMS Version: 10.2.0.1.0.
System parameters with non-default values:
sessions = 49
sga_target = 285212672
control_files = C:\ORACLEXE\ORADATA\XE\CONTROL.DBF
compatible = 10.2.0.1.0
undo_management = AUTO
undo_tablespace = UNDO
remote_login_passwordfile= EXCLUSIVE
dispatchers = (PROTOCOL=TCP) (SERVICE=XEXDB)
shared_servers = 4
local_listener = (ADDRESS=(PROTOCOL=TCP)(HOST=DGAULT.hotsos.com)(PORT=1521))
job_queue_processes = 4
audit_file_dest = C:\ORACLEXE\APP\ORACLE\ADMIN\XE\ADUMP
background_dump_dest = C:\ORACLEXE\APP\ORACLE\ADMIN\XE\BDUMP
user_dump_dest = C:\ORACLEXE\APP\ORACLE\ADMIN\XE\UDUMP
core_dump_dest = C:\ORACLEXE\APP\ORACLE\ADMIN\XE\CDUMP
db_name = XE
open_cursors = 300
pga_aggregate_target = 94371840
PMON started with pid=2, OS id=2948
PSP0 started with pid=3, OS id=3468
MMAN started with pid=4, OS id=3600
DBW0 started with pid=5, OS id=3148
LGWR started with pid=6, OS id=4028
CKPT started with pid=7, OS id=2588
SMON started with pid=8, OS id=3868
RECO started with pid=9, OS id=124
CJQ0 started with pid=10, OS id=1892
MMON started with pid=11, OS id=1732
Wed Nov 09 10:53:08 2005
starting up 1 dispatcher(s) for network address '(ADDRESS=(PARTIAL=YES)(PROTOCOL=TCP))'...
MMNL started with pid=12, OS id=2344
Wed Nov 09 10:53:08 2005
starting up 4 shared server(s) ...
Oracle Data Guard is not available in this edition of Oracle.
============================
alert_xe.log END
============================
============================
xe_ora_3500.trc START (11:04 AM)
============================
Dump file c:\oraclexe\app\oracle\admin\xe\udump\xe_ora_3500.trc
Wed Nov 09 11:04:54 2005
ORACLE V10.2.0.1.0 - Beta vsnsta=1
vsnsql=14 vsnxtr=3
Oracle Database 10g Express Edition Release 10.2.0.1.0 - Beta
Windows XP Version V5.1 Service Pack 2
CPU : 1 - type 586
Process Affinity : 0x00000000
Memory (Avail/Total): Ph:598M/1023M, Ph+PgF:1863M/2459M, VA:1617M/2047M
Instance name: xe
Redo thread mounted by this instance: 0 <none>
Oracle process number: 0
Windows thread id: 3500, image: ORACLE.EXE (SHAD)
opiino: Attach failed! error=-1 ifvp=0000000
============================
xe_ora_3500.trc END
============================
Message was edited by:
Doug Gault -
Problem with java.util.logging - write to multiple log files
Hi guys,
I'm having trouble with logging to multiple files - I am using the constructor for creating multiple files with a size limitation - FileHandler(String pattern, int limit, int count, boolean append).
The problem I encounter is that it writes to the next log file before exceeding the limit. Can it be because of a file lock or something? What can I do in order to fill each log file up to the given limit and only then write to the next?
I thought it is synchronized by definition - I'm just creating loggers that write to the same file(s). When I used one file instead of using the limit and several files, all went well.
Just a small question: all these loggers do use the same FileHandler, don't they? I bet they do, just asking ...
The problem started when I wanted each file to reach its limit before starting to write to a new file. Should I synchronize the log somehow?
That's what I suggested in my previous reply, but IMHO it shouldn't be necessary given what I read from the sources ...
What could be the reason for not reaching the limit before opening a new file?
Sorry I don't have an answer (yet), still thinking though ... it's a strange problem.
kind regards,
Jos (hrrmph ... stoopid problem ;-) -
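For what it's worth, a minimal sketch of the shared-handler setup discussed in this thread (file names, limits, and logger names are made up for illustration). The key point: every logger must be handed the *same* FileHandler instance. If each logger constructs its own FileHandler on the same pattern, java.util.logging finds the first file locked by the other handler and silently moves on to the next file in the rotation before the size limit is reached - which matches the symptom described above.

```java
import java.io.IOException;
import java.util.logging.FileHandler;
import java.util.logging.Logger;
import java.util.logging.SimpleFormatter;

public class SharedHandlerDemo {
    public static void main(String[] args) throws IOException {
        // One FileHandler shared by all loggers: rotate at ~50 KB, keep 3 files.
        // "%g" is the generation number, so files are app0.log, app1.log, app2.log.
        FileHandler handler = new FileHandler("app%g.log", 50 * 1024, 3, true);
        handler.setFormatter(new SimpleFormatter());

        Logger a = Logger.getLogger("module.a");
        Logger b = Logger.getLogger("module.b");
        a.setUseParentHandlers(false);
        b.setUseParentHandlers(false);
        a.addHandler(handler);
        b.addHandler(handler);   // same handler instance, not a second FileHandler

        for (int i = 0; i < 100; i++) {
            a.info("message " + i + " from a");
            b.info("message " + i + " from b");
        }
        handler.close();
    }
}
```

With a single shared handler the rotation honors the limit; the rotate-too-early symptom should only appear when a second FileHandler is opened on the same pattern.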
Problem specifying SQL Loader Log file destination using EM
Good evening,
I am following the example given in the 2 Day DBA document chapter 8 section 16.
In step 5 of 7, EM does not allow me to specify the destination of the SQL Loader log file to be on a mapped network drive.
The question: Does SQL Loader have a limitation that I am not aware of, that prevents placing the log file on a network share or am I getting this error because of something else I am inadvertently doing wrong ?
Note: I have placed the DDL, load file data and steps I follow in EM at the bottom of this post to facilitate reproducing the problem *(drive Z is a mapped drive)*.
Thank you for your help,
John.
DDL (generated using SQL developer, you may want to change the space allocated to be less)
CREATE TABLE "NICK"."PURCHASE_ORDERS"
(
"PO_NUMBER" NUMBER NOT NULL ENABLE,
"PO_DESCRIPTION" VARCHAR2(200 BYTE),
"PO_DATE" DATE NOT NULL ENABLE,
"PO_VENDOR" NUMBER NOT NULL ENABLE,
"PO_DATE_RECEIVED" DATE,
PRIMARY KEY ("PO_NUMBER") USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS NOCOMPRESS LOGGING TABLESPACE "USERS" ENABLE
)
SEGMENT CREATION DEFERRED PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING STORAGE
(INITIAL 67108864)
TABLESPACE "USERS" ;
Load.dat file contents
1, Office Equipment, 25-MAY-2006, 1201, 13-JUN-2006
2, Computer System, 18-JUN-2006, 1201, 27-JUN-2006
3, Travel Expense, 26-JUN-2006, 1340, 11-JUL-2006
Steps I am carrying out in EM
log in, select data movement -> Load Data from User Files
Automatically generate control file
(enter host credentials that work on your machine)
continue
Step 1 of 7 ->
Data file is located on your browser machine
"Z:\Documentation\Oracle\2DayDBA\Scripts\Load.dat"
click next
step 2 of 7 ->
Table Name
nick.purchase_orders
click next
step 3 of 7 ->
click next
step 4 of 7 ->
click next
step 5 of 7 ->
Generate log file where logging information is to be stored
Z:\Documentation\Oracle\2DayDBA\Scripts\Load.LOG
Validation Error
Examine and correct the following errors, then retry the operation:
LogFile - The directory does not exist.
Hi John,
But I didn't get any error when I did the same as you did.
My Oracle version is 10.2.0.1 on Windows XP. Here is what I did, and it worked:
1.I created one table in scott schema :
SCOTT@orcl> CREATE TABLE "PURCHASE_ORDERS"
2 (
3 "PO_NUMBER" NUMBER NOT NULL ENABLE,
4 "PO_DESCRIPTION" VARCHAR2(200 BYTE),
5 "PO_DATE" DATE NOT NULL ENABLE,
6 "PO_VENDOR" NUMBER NOT NULL ENABLE,
7 "PO_DATE_RECEIVED" DATE,
8 PRIMARY KEY ("PO_NUMBER") USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS NOCOMPRESS LOGGING TABLESPACE "USERS" ENABLE
9 )
10 TABLESPACE "USERS";
Table created.
I logged into EM: Maintenance --> Data Movement --> Load Data from User Files --> My Host Credentials
Here there are a total of 3 text boxes:
1.Server Data File : C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\USERS01.DBF
2.Data File is Located on Your Browser Machine : z:\load.dat <--- Here z:\ is another machine's shared documents folder; I selected this option (via the radio button) and created the same load.dat as you mentioned.
3.Temporary File Location : C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\ <--- I didn't enter anything here.
Step 2 of 7 Table Name : scott.PURCHASE_ORDERS
Step 3 of 7 I just clicked Next
Step 4 of 7 I just clicked Next
Step 5 of 7 I just clicked Next
Step 6 of 7 I just clicked Next
Step 7 of 7 Here it is Control File Contents:
LOAD DATA
APPEND
INTO TABLE scott.PURCHASE_ORDERS
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(
PO_NUMBER INTEGER EXTERNAL,
PO_DESCRIPTION CHAR,
PO_DATE DATE,
PO_VENDOR INTEGER EXTERNAL,
PO_DATE_RECEIVED DATE
)
And i just clicked on submit job.
Now i got all 3 rows in purchase_orders :
SCOTT@orcl> select count(*) from purchase_orders;
COUNT(*)
3
So, there is no bug; it worked. Please retry and post back if you get any error/issue.
HTH
Girish Sharma -
Checksum problem in redo log file.
hi,
I have a checksum problem in my redo log file.
Details:
1. I'm unable to open the database, but I am able to mount it.
2. The error says there is a checksum error in the current redo log group 2, so thread 1 can't be recovered.
3. The error is in group 2, which is the active/current group.
In mount stage:
4. I can't switch log groups, as the database is not open.
5. I can't drop the group, as it is active/current.
6. I can't clear the logfile, as thread 1 is not recovered.
7. I can't recover through the "Recover Database" command.
Pls help,
Thanks in adv.
Rup
Try out these steps provided below; this should make your problem go away:
sql > shutdown immediate ;
sql > startup mount ;
sql > recover database until cancel ;
sql > alter database open resetlogs ;
make sure to take database backup
hare krishna
Alok -
CVP log file showing port utilization
Hello,
I need to know where I can find in CVP VXML server the log file that shows VXML ports utilization.
Anyone knows which file is that?
Thank you,
Sahar Hanna
There is a folder called GlobalLogger on the VXML server containing call_logYYYY-MM-DD.txt files. If you convert such a file to CSV you can see the VXML port utilization for all VXML applications in use.
Log file path:
C:\Cisco\CVP\VXMLServer\logs\GlobalCallLogger
If this helps ..please rate it.
Bhushan.