DB2 Log file retrieval during normal operation
Hi,
We recently upgraded our databases to DB2 v9.7 FP1 and started noticing entries in the db2diag.log stating that a log file is being retrieved (after it was archived). We are not performing a restore; this only seems to happen while our online backup is running.
This is one example of an entry in the db2diag.log:
2010-08-18-21.52.43.687630-300 I46847828A425 LEVEL: Warning
PID : 27014 TID : 19 PROC : db2sysc 0
INSTANCE: db2qas NODE : 000
EDUID : 19 EDUNAME: db2logmgr (QAS) 0
FUNCTION: DB2 UDB, data protection services, sqlpgArchiveLogFile, probe:3180
MESSAGE : Completed archive for log file S0052437.LOG to VENDOR chain 1 from
/db2/QAS/log_dir/NODE0000/.
.<other messages>
2010-08-19-03.13.37.185763-300 I46895785A364 LEVEL: Warning
PID : 27014 TID : 19 PROC : db2sysc 0
INSTANCE: db2qas NODE : 000
EDUID : 19 EDUNAME: db2logmgr (QAS) 0
FUNCTION: DB2 UDB, data protection services, sqlpgRetrieveLogFile, probe:4130
MESSAGE : Started retrieve for log file S0052437.LOG.
2010-08-19-03.14.38.221285-300 I46896150A418 LEVEL: Warning
PID : 27014 TID : 19 PROC : db2sysc 0
INSTANCE: db2qas NODE : 000
EDUID : 19 EDUNAME: db2logmgr (QAS) 0
FUNCTION: DB2 UDB, data protection services, sqlpgRetrieveLogFile, probe:4148
MESSAGE : Completed retrieve for log file S0052437.LOG on chain 1 to
/db2/QAS/log_dir/NODE0000/.
It doesn't appear to be causing any harm, but can someone please explain why DB2 would need to retrieve an already archived log file?
Thanks,
Setu
Hi,
By default, an online backup invokes the "include logs" option, and that is why you see the required archived logs being retrieved.
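For reference, a hedged sketch of the commands involved (the database name matches the post, but the backup target path is illustrative):

```sh
# In DB2 9.7 an online backup includes the required log files by
# default, which is what triggers the retrieve messages in db2diag.log:
db2 backup db QAS online to /backup/QAS

# Equivalent explicit form:
db2 backup db QAS online to /backup/QAS include logs

# To avoid the log retrieval (at the cost of a backup image that
# cannot be rolled forward on its own), the logs can be excluded:
db2 backup db QAS online to /backup/QAS exclude logs
```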
Benny
Similar Messages
-
Logical end-of-file reached during read operation
Hi,
Something strange happened to my Logic a couple of days ago - all of a sudden. When I start Logic up it gives me the following error message:
*Error reading/writing file “com.apple.logic.pro.cs”: Logical end-of-file reached during read operation.*
I have re-installed Logic, as well as updated it (8.0.2), I have repaired disk permissions in Disk Utility, and it still keeps popping up. I cannot get on with my projects and I have deadlines!
I found an IT support site; according to it, I don't have enough space on my HD. However, I do... I'm confused, and desperately need help.
Thank you so much, I'm looking forward to hearing from somebody.
Agnes
I am not opening any files, just launching Logic.
I know. And when you launch Logic, it loads its preferences files - and it seems this is failing - most likely due to file corruption. So delete the file, as I said, and Logic should load fine. -
Logic froze while I was working on something so I forced quit. Now every time I open LOGIC a message pops up that says:
"Error reading/writing file
“com.apple.logic.pro.cs”:
Logical end-of-file reached during read operation."
The only button option is cancel so I press it and another message appears that says:
"The Preferences are not loaded completely.
Do not save them, as you would overwrite the Preferences file with incomplete data."
Then when i close logic a box appears saying:
"The Preferences are not loaded completely.
Save them to "com.apple.logic.pro.cs" anyway?"
There are 3 button options to press; ok, cancel or dont save.
I press "don't save" because I don't want to ruin anything.
I found a discussion located here: https://discussions.apple.com/message/9564253#9564253 that says if I delete the file "com.apple.logic.pro.cs" it will resolve the problem. If I do this, will I lose or mess anything up at all - automation, saved channel strip customizations, saved effects, synth, or Ultrabeat customizations, etc.? Or, especially, will I lose any work I've done? I have hundreds of pieces of music files I've created. I'm scared to mess anything up with all the hours and months of work I've done. Is there any way to fix this without losing anything? I'm using Logic Pro 8.
Thank you
You can safely delete this file - it's the preference file for control surface settings. You haven't said whether you're actually using a control surface or not; if so, you will have to set it up again. A new file will be created when you fire Logic up again, but of course it will contain default settings. If you have a complicated control surface setup, remember to keep a backup copy somewhere in case any future problems arise.
Other than that, you really have nothing to be scared about - hopefully your problem is as simple as that and there isn't an underlying problem (a corrupt hard drive, for example). You seem concerned about losing work, so I guess you want to think about backing that up on a regular basis too. If you're saving your projects to your system drive, do get an external one for this. And also check that you have plenty of free space on your system drive - you need to keep about a quarter or a third of it free for your OS and programs to run properly (some temporarily stored files can be quite large). -
Error "logical end of file reached during read operation"
I am trying to burn a DVD of a wedding I did, and this is the error I am getting. After 5 hours of it trying to burn, when I click cancel it says "logical end of file reached during read operation". Does anyone know what this means, and how I can fix it? I am anxious to get this wedding finished and out to the customer. Any help would be greatly appreciated.
How long is the wedding video?
My standard list of things to do first...
Run MacJanitor (free download) to do all the Unix Cron Maintenance scripts.
Run Disk Utility (Applications -> Utilities) and repair disk permissions on your start up drive (typically your internal drive). Also verify any other drives mounted on the system.
Run Preferential Treatment (free download) to check for corrupt/damaged application and system preference files.
Run Cache Out X (free download) to clear all system and application caches.
Reboot your Mac.
If you still can not get it to run correctly, next thing to try is to throw out the iDVD preference file (don't forget to change back those preferences you want different from the defaults next time you run it). If it still doesn't work, then I would suggest you reinstall iDVD.
Patrick -
EXS 24 Logical end-of-file reached during read operation Result code = -39
The EXS kit Trip Hop Remix will not load completely. When I try to load it I get the error "Logical end-of-file reached during read operation. Result code = -39", and the kit ends up lacking samples. It worked at one time but now it will not load. To try to fix it, I used Pacifist to extract two kits with the same name from the install disks, hoping to replace corrupted files; oddly, they are slightly different sizes and have different creation dates, one 2004 and the other 2007, hmmm (one goes in GarageBand for remix tools, the other gets placed in 02 Electronic Drum kits). No good. I also reinstalled all the samples from the LP8 install disk, still no good. Anybody have a solution? Thanks.
To further clarify, the samples are all located in the folder Garage band/.../.../.../Sampler Files/Treated Percussion Sets/Perc-DnB+Triphop. They are used in the EXS24 Trip Hop Remix kit.exs. The 3 samples that seem to be causing the error are, YT2BGLRHTX02_3ue.aif, YT2BGLRHTX03_3ue.aif, YT2BGLRHTX04_3ue.aif. The kit is useless and my song is screwed up now that the kit will not load. Any ideas for a fix out there ?
-
Error: Logical end-of-file reached during read operation. Result Code = -39
Hello all,
Recently, I have been suddenly getting this error message:
"Logical end-of-file reached during read operation. Result Code = -39."
In my case, when it does come up, it always pops up during the recording of an audio take. Almost immediately after hitting the stop button, this error message would pop up. And then when I go to check the take that was just recorded, there is nothing but shrieking noise. As a result, this pop up error trashes any good take I may have had. It's unpredictable as to when this error decides to pop up as well. So now, when I record, I tend to just have my fingers crossed and hope that this error doesn't pop up especially right after recording a good take!
To no avail, I have tried different I/O buffer settings, repaired permissions, rebooted, and even re-formatted my recording hard disk.
Could this be a possible hint that my hard disk is about to physically collapse? I have had it for about 2 years. Could it be some sort of bug in Logic? Or something else perhaps?
Appreciate any feedback! Thanks a lot!
DamonGrant wrote:
Thanks for those suggestions Erik...
It's been happening for a month I guess, and I don't think that anything new has been installed. I will try recording to the start-up disc and maybe to some USB discs I have.
It seems to happen after working on a project for some time.
After say tracking takes for 45 mins or so. Maybe it's just coincidence, but do you think that might indicate a hardware issue?
Now there might be a clue. What happens when you then quit Logic, reboot your Mac, and reopen the active session? It might be that your disk has become a bit messy from all the traffic, and it just needs to 'settle' in OS X again. Click on the *Spotlight icon* top right of your screen and see if any indexing is taking place. Wait for it to finish before launching Logic.
Alternatively, you could exclude your recording disk from Spotlight indexing altogether, by dragging the disk to the Privacy field in the Spotlight pane of System Preferences.
One other thing to do: turn off Journaling, using *Disk Utility.* Select your recording volume in Disk Utility, hold the alt key while dropping down the File menu, and choose *Disable Journaling.*
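If you're comfortable with Terminal, the same journaling toggle can be done with diskutil (the volume name here is just an example):

```sh
# Disable journaling on an HFS+ volume (example volume name):
diskutil disableJournal /Volumes/RecordingDisk

# Re-enable it when you're done tracking:
diskutil enableJournal /Volumes/RecordingDisk
```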
On the other hand, it might also point to a RAM issue. Do you ever suffer kernel panics (= the darkening 'curtain' and the message "You need to restart your Mac... etc.")? Or 'unexplained' crashes? If not, or very rarely, I'd rule out a RAM issue. If daily, it could be. What you can try in this case is to turn off your Mac and physically take out and reinsert your RAM.
And when you trash (=eject/unmount), physically disconnect and reconnect your recording disk? Does that help? (Logic should not be running when you do this) -
DB2 Log file Management is not happening properly
Hi experts,
I am running SAP ERP6.0 EHP4 on DB2 9.5 FP4
I have enabled Archive log by providing First Log Archive Method (LOGARCHMETH1) =DISK: /db2/PRD/log_archive
I brought down SAP and the database, then started the database. After starting the DB, I took a full offline backup.
I didn't change the parameters below, as they are replaced by LOGARCHMETH1 in newer versions, according to the documentation.
Log retain for recovery enabled=OFF
User exit for logging enabled =OFF
/db2/PRD/log_dir (online transaction logs) is 25% full, but I couldn't find even a single file in /db2/PRD/log_archive.
I suspect DB2 log file management is not working properly, as I couldn't find any offline transaction logs in the /db2/PRD/log_archive file system.
Please let me know where it went wrong and what the issue could be.
thanks
Hello,
From your post it seems that there is a space character between "DISK:" and the path you have provided.
Maybe this is only a wrong display here in the forum.
Nevertheless, can you check it in your system? It should rather look like DISK:/db2/PRD/log_archive.
Can you initiate an archive with the "db2 archive log for db prd" command?
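In case it helps, a hedged sketch of the commands one might use to verify and exercise the setting (run as the instance owner; the database name PRD is taken from the post):

```sh
# Check how LOGARCHMETH1 is actually set (watch for stray spaces):
db2 get db cfg for PRD | grep -i LOGARCHMETH

# Correct it if necessary (no space after "DISK:"):
db2 update db cfg for PRD using LOGARCHMETH1 DISK:/db2/PRD/log_archive

# Force an archive of the current log file, then check db2diag.log:
db2 archive log for db PRD
```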
What does the db2diag.log tell for this archiving attempt? -
Pavilion DV7 2120 ed, above 90 degrees C during normal operating, 1 year old
Hello,
My HP Pavilion dv7-2120ed recently had its first birthday. Unfortunately, the fan has started working harder and harder. Yesterday I didn't trust it anymore, so I downloaded SpeedFan. It reported that during normal operation (internet + a music player) the CPU reaches a temperature above 90 degrees Celsius.
Quite a shock!!!
It doesn't seem dirty on the inside.
Does someone have the same problem with the same notebook?
How come?
But more important, what can I do?
Thanks in advance
William
Holland
Even if there is no visible dirt when you just open the keyboard, for example, sometimes there is a dust clot blocking the fan outlet that you cannot see until you open it all the way up. After one year it could use new thermal compound between the processor and heatsink, too.
-
Wait Events "log file parallel write" / "log file sync" during CREATE INDEX
Hello guys,
At my current project I am performing some performance tests for Oracle Data Guard. The question is: how does an LGWR SYNC transfer influence system performance?
To get some performance values that I can compare, I first built up a normal Oracle database.
Now I am performing different tests, like creating "large" indexes, massive parallel inserts/commits, etc., to get the benchmark.
My database is an Oracle 10.2.0.4 with multiplexed redo log files on AIX.
I am creating an index on a "normal" table. I execute "dbms_workload_repository.create_snapshot()" before and after the CREATE INDEX to get an equivalent timeframe for the AWR report.
After the index is built (roughly 9 GB), I run awrrpt.sql to get the AWR report.
And now take a look at these values from the AWR
                                                     Avg
                                   %Time  Total Wait  wait  Waits
Event                       Waits  -outs    Time (s)  (ms)   /txn
log file parallel write    10,019     .0         132    13   33.5
log file sync                 293     .7           4    15    1.0

How can this be possible?
Regarding to the documentation
-> log file sync: http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/waitevents003.htm#sthref3120
Wait Time: The wait time includes the writing of the log buffer and the post.
-> log file parallel write: http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/waitevents003.htm#sthref3104
Wait Time: Time it takes for the I/Os to complete. Even though redo records are written in parallel, the parallel write is not complete until the last I/O is on disk.
This was also my understanding: the "log file sync" wait time should be higher than the "log file parallel write" wait time, because it includes the I/O and the response time to the user session.
I could accept it if the values were close to each other (maybe around 1 second in total), but the difference between 132 seconds and 4 seconds is too noticeable.
Is the behavior of log file sync/write different when performing a DDL like CREATE INDEX (maybe asynchronous, like you can influence with the initialization parameter COMMIT_WRITE)?
Do you have any idea how these values come about?
Any thoughts/ideas are welcome.
Thanks and Regards
Surachart Opun (HunterX) wrote:
Thank you for Nice Idea.
In this case, How can we reduce "log file parallel write" and "log file sync" waited time?
CREATE INDEX with NOLOGGING
NOLOGGING can help, can't it?
Yes - if you create the index nologging then you wouldn't be generating that 10GB of redo log, so the waits would disappear.
Two points on nologging, though:
- It's "only" an index, so you could always rebuild it in the event of media corruption; but if you had lots of indexes created nologging, this might cause an unreasonable delay before the system was usable again - so you should decide on a fallback option, such as taking a new backup of the tablespace as soon as all the nologging operations had completed.
- If the database, or that tablespace, is in "force logging" mode, the nologging will not work.
Don't get too alarmed by the waits, though. My guess is that the "log file sync" waits are mostly from other sessions, and since there aren't many of them the other sessions are probably not seeing a performance issue. The "log file parallel write" waits are caused by your create index, but they are happening to lgwr in the background, which is running concurrently with your session - so your session is not (directly) affected by them and may not be seeing a performance issue.
The other sessions are seeing relatively high sync times because their log file syncs have to wait for one of the large writes that you have triggered to complete, and then the logwriter includes their (little) writes with your next (large) write.
There may be a performance impact, though, from the pure volume of I/O. Apart from the I/O to write the index, you have LGWR writing (N copies of) the redo for the index, and ARCH is reading and writing the completed log files caused by the index build. So the 9GB of index could easily be responsible for vastly more I/O than the initial 9GB.
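For reference, a minimal sketch of the NOLOGGING approach discussed above; the table, column, and index names are placeholders:

```sql
-- Check whether FORCE LOGGING would override NOLOGGING:
SELECT force_logging FROM v$database;

-- Build the index with minimal redo generation:
CREATE INDEX big_tab_ix ON big_tab (col1) NOLOGGING;

-- Fallback: back up the affected tablespace afterwards, since a
-- NOLOGGING index cannot be recovered from the archived redo.
```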
Regards
Jonathan Lewis
http://jonathanlewis.wordpress.com
http://www.jlcomp.demon.co.uk
To post code, statspack/AWR report, execution plans or trace files, start and end the section with the tag {noformat}{noformat} (lowercase, curly brackets, no spaces) so that the text appears in fixed format.
"Science is more than a body of knowledge; it is a way of thinking"
Carl Sagan -
Log file sync during RMAN archive backup
Hi,
I have a small question. I hope someone can answer it.
Our database (cluster) needs to respond within 0.5 seconds. Most of the time it does, except when the RMAN backup is running.
During the week we run a full backup once, an incremental backup every weekday, a controlfile backup every hour, and an archivelog backup every 15 minutes.
During a backup, response time can be much longer than 0.5 seconds.
Below is a typical example of response time.
EVENT: log file sync
WAIT_CLASS: Commit
TIME_WAITED: 10,774
It is obvious that it takes very long to get a commit. This is in seconds; as you can see, this is long. It is clearly related to the RMAN backup, since this kind of response time comes up when the backup is running.
I would like to ask why response times are so high, even if I only back up the archivelog files? We didn't have this problem before, but suddenly for the past two weeks we have it and I can't find the cause.
- We use a 11.2G RAC database on ASM. Redo logs and database files are on the same disks.
- Autobackup of controlfile is off.
- Dataguard: LogXptMode = 'arch'
Greetings,
Hi,
Thank you. I am new here, so I was wondering how I can put things into the right category. It is very obvious I am in the wrong one, so I thank the people who are still responding.
- Actually, the example that I gave is one of many hundreds a day. The response times during the archive backup are most of the time between 2 and 11 seconds. When we back up the controlfile along with it, these response times are guaranteed.
- The autobackup of the controlfile is turned off since we already have a backup of the controlfile every hour. As we have a backup of archive files every 15 minutes, it is not necessary to also back up the controlfile every 15 minutes, especially if that causes even more delay. The controlfile is a lifeline, but if you have properly backed up your archive files, a full restore with at most 15 minutes of data loss is still possible. We turned autobackup off since it is severely in the way of performance at the moment.
As already mentioned, for specific applications the DB has to respond in 0.5 seconds. When that doesn't happen, an entry is written in a table used by that application, so I can compare the time of a failure with the time of something happening. The times from the archivelog backup and the failures match in 95% of the cases. It also shows that log file sync at that moment is part of this performance issue. I built a script that I use to determine, from the application side, the cause of the problem:
select ASH.INST_ID INST,
ASH.EVENT EVENT,
ASH.P2TEXT,
ASH.WAIT_CLASS,
DE.OWNER OWNER,
DE.OBJECT_NAME OBJECT_NAME,
DE.OBJECT_TYPE OBJECT_TYPE,
ASH.TIJD,
ASH.TIME_WAITED TIME_WAITED
from (SELECT INST_ID,
EVENT,
CURRENT_OBJ#,
ROUND(TIME_WAITED / 1000000,3) TIME_WAITED,
TO_CHAR(SAMPLE_TIME, 'DD-MON-YYYY HH24:MI:SS') TIJD,
WAIT_CLASS,
P2TEXT
FROM gv$active_session_history
WHERE PROGRAM IN ('yyyyy', 'xxxxx')) ASH,
(SELECT OWNER, OBJECT_NAME, OBJECT_TYPE, OBJECT_ID FROM DBA_OBJECTS) DE
WHERE DE.OBJECT_id = ASH.CURRENT_OBJ#
AND ASH.TIME_WAITED > 2
ORDER BY 8,6
- Our logfiles are 250M and we have 8 groups of 2 members.
- The large pool is not set since we use memory_max_target and memory_target. I know that Oracle maybe doesn't use memory well with this parameter, so it is truly a thing that I should look into.
- I looked at the size of the log buffer. Our log buffer is actually 28M, which in my opinion is very large, so maybe I should make it even smaller. It is very well possible that the log buffer is causing this problem. Thank you for the tip.
- I will also definitely look into the I/O. Even though we work with ASM on RAID 10, I don't think it is wise to put redo logs and datafiles on the same disks. Then again, it was not installed by me. So, you are right, I have to investigate.
Thank you all very much for still responding even if I put this in the totally wrong category.
Greetings, -
DB2: "Log File has reached its saturation point" DIA8309C Log file was full
Hello Experts,
I have successfully installed an ECC 6.0 system, ABAP + Java (DB2 v9.5, Windows Server 2008 x64).
Kernel: 700 , Patch: 185 ; SP level : rel 700 , level 17.
However, now I suddenly cannot connect to the database and SAP is down.
C:\Users\dsqadm.DUCATI>r3trans -d
This is r3trans version 6.14 (release 700 - 16.10.08 - 16:26:00).
unicode enabled version
2EETW169 no connect possible: "DBMS = DB6 --- DB2DB
DFT = 'DSQ'"
r3trans finished (0012).
db2diag.log:-
ADM1823E The active log is full and is held by application handle "51886". Terminate this application by COMMIT, ROLLBACK or FORCE APPLICATION.
"Log File has reached its saturation point" DIA8309C Log file was full.
"Backup pending. Database has been made recoverable. Backup now required." DIA8168C Backup pending for database .
Also, regarding DB2 licensing,i have a query:
db2licm -l gives the following:
C:\Users\db2dsq.DUCATI>db2licm -l
Product name: "DB2 Enterprise Server Edition"
License type: "CPU Option"
Expiry date: "Permanent"
Product identifier: "db2ese"
Version information: "9.5"
Enforcement policy: "Soft Stop"
Features:
DB2 Database Partitioning: "Licensed"
DB2 Performance Optimization ESE: "Licensed"
DB2 Storage Optimization: "Licensed"
DB2 Advanced Access Control: "Not licensed"
DB2 Geodetic Data Management: "Not licensed"
IBM Homogeneous Replication ESE: "Not licensed"
Product name: "DB2 Connect Server"
License type: "Trial"
Expiry date: "10/19/2009"
Product identifier: "db2consv"
Version information: "9.5"
I have applied both the SAP and the DB2 license. Is everything OK regarding the licensing of DB2 v9.5 for use with SAP?
I am new to DB2 database and looking for expert guidance regarding the above mentioned issues.
Thanks,
Rakesh
C:\Users\db2dsq.DUCATI>db2 get dbm cfg
Database Manager Configuration
Node type = Enterprise Server Edition with local and remote clients
Database manager configuration release level = 0x0c00
Maximum total of files open (MAXTOTFILOP) = 16000
CPU speed (millisec/instruction) (CPUSPEED) = 4,723442e-007
Communications bandwidth (MB/sec) (COMM_BANDWIDTH) = 1,000000e+002
Max number of concurrently active databases (NUMDB) = 8
Federated Database System Support (FEDERATED) = NO
Transaction processor monitor name (TP_MON_NAME) =
Default charge-back account (DFT_ACCOUNT_STR) =
Default database monitor switches
Buffer pool (DFT_MON_BUFPOOL) = ON
Lock (DFT_MON_LOCK) = ON
Sort (DFT_MON_SORT) = ON
Statement (DFT_MON_STMT) = ON
Table (DFT_MON_TABLE) = ON
Timestamp (DFT_MON_TIMESTAMP) = ON
Unit of work (DFT_MON_UOW) = ON
Monitor health of instance and databases (HEALTH_MON) = OFF -
How to Monitor the size of log file (Log4j) During its Generation.
I have made a program using log4j. Now I want to know whether there is any way to restrict the size of the log file: for example, if the generated log file exceeds the size limit, it should be moved to a backup, a new file should be created automatically, and the remaining content should be written to that file.
Is there any way to monitor the size of the file while the log file is being generated?
Waiting for your urgent response
I have written this code:
<?xml version="1.0" encoding="UTF-8" ?>
<!DOCTYPE log4j:configuration SYSTEM "log4j.dtd">
<log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/">
<appender name="appender" class="org.apache.log4j.FileAppender">
<param name="File" value="c:\\abc.txt"/>
<param name="MaxFileSize" value="100B"/>
<param name="MaxBackupIndex" value="3"/>
<param name="Append" value="false"/>
<layout class="org.apache.log4j.SimpleLayout"></layout>
</appender>
<root>
<priority value ="debug"/>
<appender-ref ref="appender"/>
</root>
</log4j:configuration>
When I run it, I get these error messages:
log4j:WARN No such property [maxFileSize] in org.apache.log4j.FileAppender.
log4j:WARN No such property [maxBackupIndex] in org.apache.log4j.FileAppender. -
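Those warnings appear because MaxFileSize and MaxBackupIndex are properties of org.apache.log4j.RollingFileAppender, not of the plain FileAppender. A sketch of the corrected appender element, keeping the original file path (note that a 100-byte limit is impractically small; something like 100KB is more realistic):

```xml
<appender name="appender" class="org.apache.log4j.RollingFileAppender">
  <param name="File" value="c:\\abc.txt"/>
  <param name="MaxFileSize" value="100KB"/>
  <param name="MaxBackupIndex" value="3"/>
  <param name="Append" value="false"/>
  <layout class="org.apache.log4j.SimpleLayout"/>
</appender>
```

When the file reaches MaxFileSize, log4j renames it to abc.txt.1 (keeping up to MaxBackupIndex backups) and starts a fresh file, which is the rollover behavior asked about above.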
Hi,
We are taking the online backup in DB13. The log files are stored in /db2/<sid>/log_archive. How do the log files get stored in this location? I want to see the configuration setting for this. Please help me.
Regards,
Manjini
Hello Manjini
DB2 UDB doesn't have BRTOOLS...
You should study some admin guides for DB2 on SAP; there are a lot of differences between SAP on Oracle and SAP on DB2.
https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/82e754ff-0701-0010-32bd-acb2be34e9ff
or you can find most of them at the following link
http://www-01.ibm.com/support/docview.wss?rs=71&uid=swg27009552
thanks
Bhudev -
113 degrees Fahrenheit during normal operation.
I heard about heating problems on the 2011 MacBook Pro, and I was wondering whether 113 degrees Fahrenheit during normal operation (i.e. idling, Mail, simple Safari sites, etc.) is cause for concern?
Most MBPs run hotter than that most of the time. You are probably using yours very lightly. Don't even begin to worry until it goes over 180°F and stays there with its fans roaring. That may never happen, depending on how you use the machine.
-
Program hangs for a number of seconds before continuing. All operations within the window are arrested.
Attempted a reinstall of Firefox 6.0, and also disabled all plugins. Re-enabled plugins (shown in the info below). Attempted rebooting.
The problem occurs periodically, at random intervals when browsing, typing, or doing any task.
I assume that it isn't caused by an extension and that it still happens with all extensions disabled?
Which security software (firewall, anti-virus) do you have?
Create a new profile as a test to check if your current profile is causing the problems.
See Basic Troubleshooting: Make a new profile:
*https://support.mozilla.com/kb/Basic+Troubleshooting#w_8-make-a-new-profile
There may be extensions and plugins installed by default in a new profile, so check that in "Tools > Add-ons > Extensions & Plugins" in case there are still problems.
If that new profile works then you can transfer some files from the old profile to that new profile (be careful not to copy corrupted files)
See:
*http://kb.mozillazine.org/Transferring_data_to_a_new_profile_-_Firefox