Run multiple statements at a time in a procedure without the scheduler
Dear,
I created a procedure that has many select, insert, and update statements. How can I execute all the SQL statements at once without using the scheduler? Please help me.
create procedure p is
e number;
n varchar2(10);
begin
select 1000,'MWZ' into e,n from dual ;
insert into emp(empno,ename) values (e,n);
select 7788,'Scotty' into e,n from dual;
update emp set ename=n where empno=e;
end;
/
There I can do many selects, inserts, and updates at a time.
create or replace procedure p is
begin
merge into emp using (
select 1000 e,'MWZ' n from dual
union all
select 7788,'Scotty' from dual
) on (empno=e)
when matched then
update set ename=n
when not matched then
insert (empno,ename) values (e,n);
end;
/
But again, why don't you want to use dbms_job? You could maybe do this in Java.
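For what it's worth, if dbms_job were acceptable, submitting the procedure as a one-off background job is short. A sketch, assuming the procedure P from the code above (dbms_job is the pre-10g job API; dbms_scheduler is its successor):

```sql
DECLARE
  l_job BINARY_INTEGER;
BEGIN
  -- Submit P to run once, as soon as a job-queue process picks it up
  dbms_job.submit(job       => l_job,
                  what      => 'P;',
                  next_date => SYSDATE,
                  interval  => NULL);  -- NULL interval: run once, then drop
  COMMIT;  -- the job is not visible to the job queue until you commit
END;
/
```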
Similar Messages
-
Oracle Concurrent Programs remain in Running Normal state for a long time.
Hi All,
We have encountered an issue where Concurrent Programs remain in the Running Normal state.
Importantly, when we try to cancel those requests, a pop-up message is displayed: "*Could not lock request*".
Also, when we click on Diagnostics button; it says
*"Post-processing has completed for this request, but the manager has not yet marked it as Completed", the report is completing normal but the status is not marked as completed by manager."*
Please help me to find the root cause of this issue.
It happened twice this week and we had to bounce the database and then run CMCLEAN.sql.
Thanks in advance.
Regards,
Avadhut
Hi,
We have encountered an issue where Concurrent Programs remain in the state of Running Normal.
Importantly, when we try to cancel those requests, it displays a pop-up message "*Could not lock request*".
Please see (Note: 1076452.1 - Requests Show as Running Forever But Give Could Not Lock Request When Cancelling).
Also, when we click on Diagnostics button; it says
*"Post-processing has completed for this request, but the manager has not yet marked it as Completed", the report is completing normal but the status is not marked as completed by manager."*
Please help me to find the root cause of this issue.
It happened twice this week and we had to bounce the database followed by CMCLEAN.sql.
Please mention the application release.
Is the issue with specific concurrent program?
Was this working before? What changes have been done recently?
Thanks,
Hussein
-
Running multiple queries in parallel in a stored procedure
Hi,
I have two queries one on table1 (group bys) and second on table2 (again group bys) and then I need to compare the results.
Is it possible to execute both of them in parallel, as they have no dependencies? Since it is going to be called from a Java program, right now the only way I know is
to execute them in multiple threads, but it would be great to have ideas on how to do it in a stored procedure, as a series of steps (all SQL queries) has to be carried out and
the data has to be locked for that period.
Just for clarity the steps are :
Step1 : select for update cursor to lock the rows from table1
Step2: open cursor, then select data from table1 --- perform validations
Step3: select data using group bys on table1 and select data using group bys on table2 ---- this needs to be done in parallel (just to gain time)
Step4 : compare data
Step5 : insert statement
Step6 : close cursor then commit and exit
This might be a really silly question, but please pardon my ignorance as I am not much of a SQL guy!
Thanks,
Neetesh
Edited by: user13312817 on 10 Nov, 2011 8:27 AM
Maybe something like this (not tested)?
SELECT t1.referenceno
, t1.col1
, t1.col2
, t1.entityparent
, t1.class
, t1.annual - t2.annual
FROM (
SELECT table1.Referenceno
, table1.col1
, table1.col2
, entitycodes.entityparent
, classes.class
, SUM(nvl(table1.AnnualAmt,0)) as Annual
FROM table1
JOIN entitycodes ON entitycodes.entitycode = table1.fundentity
JOIN classes ON classes.accountcode = table1.account
GROUP BY table1.Referenceno
, table1.col1
, table1.col2
, entitycodes.entityparent
, classes.class
) t1
JOIN (
SELECT table2.Referenceno
, table2.col1
, table2.col2
, entitycodes.entityparent
, classes.class
, SUM(nvl(table2.AnnualAmt,0)) as Annual
FROM table2
JOIN entitycodes ON entitycodes.entitycode = table2.fundentity
JOIN classes ON classes.accountcode = table2.account
GROUP BY table2.Referenceno
, table2.col1
, table2.col2
, entitycodes.entityparent
, classes.class
) t2 ON t2.referenceno = t1.referenceno
AND t2.col1 = t1.col1
AND t2.col2 = t1.col2
AND t2.entityparent = t1.entityparent
AND t2.class = t1.class
-
Multiple rows to be returned from a procedure without a cursor?
We can return multiple rows using a ref cursor as the return type in a procedure/function, but I want to avoid processing a cursor. I would like to use a select statement on the returned rows. I believe a PL/SQL table cannot be used here either, as selects are not permitted on it. Is there some way to get around this deficiency in Oracle? Please help out! My email is [email protected]
Oracle 8i has temporary tables. As far as I know, this is your only option.
I, too, would like to be able to return a true "relation" (or "table") from a pl/sql function. Why can't I just specify the return type of the function as a "table" of "records"?
Note that if PL/SQL were completely orthogonal (neat word, huh?) I should be allowed to use this function that returns a "relation" as a table in the "from" clause of a SQL query.
Speaking of complete orthogonality, why can't I treat a pl/sql table just like any other table -- I should be able to query it, "insert" into it, "update" it, join it to other "tables", etc.
How about this for a slogan: "tables everywhere"? Anything, from scalars to functions to arrays of classes to PL/SQL tables, should be able to participate in a SQL statement as a "relation/table."
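For later readers: Oracle 9i onward added pipelined table functions, which come close to the "tables everywhere" wish above: the function's result can sit in a FROM clause. A minimal sketch (the type and function names are made up for illustration):

```sql
-- SQL-level collection types (required; PL/SQL-only types won't work here)
CREATE TYPE emp_row AS OBJECT (empno NUMBER, ename VARCHAR2(10));
/
CREATE TYPE emp_tab AS TABLE OF emp_row;
/
-- Each PIPE ROW sends one row to the consumer as it is produced
CREATE OR REPLACE FUNCTION get_emps RETURN emp_tab PIPELINED IS
BEGIN
  PIPE ROW (emp_row(1000, 'MWZ'));
  PIPE ROW (emp_row(7788, 'Scotty'));
  RETURN;
END;
/
-- The function can now be queried like a table
SELECT * FROM TABLE(get_emps());
```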
Anyone from Oracle listening out there? :-)
-
SSIS execute single SQL Task for running multiple SQL statements on TeraData DB connection
Hi,
I need to run multiple statements in a Teradata connection (the "Go" command between the statements is not recognized). How can I execute one SQL task referring to those multiple statements? I'm using a file connection as well; all the statements are located in a single file.
I'm working with SSIS 2008, TERADATA 12.0.
Thanks
Sure.
Create a Windows batch file (*.bat).
In the batch file, set this command:
bteq < scriptfilePath.txt > LogsFolderPath\Outputfile.out
(the script file can also have a .bteq extension)
The script file should contain:
.LOGON server/user,pswd
your script.
If you want to set error-handling points, put the following between your command statements:
.IF ERRORCODE <> 0 THEN .GOTO SqlError;
At the end of the script, set:
-- Log successful completion
.LOGOFF;
.QUIT 0;
.LABEL SqlError
.QUIT ERRORLEVEL;
You can view the logs in the output file stated above.
Enjoy.
-
Run multiple reports through SOAP call,
Hi Folks,
We are able to run a single report at a time through a SOAP call, but is it possible to run multiple reports at a time?
Is there any solution for it, or any other way to do it?
Hello,
By default, the reports server is configured with maxEngine=1.
It means that there is only one engine and only one report can be executed at a given time.
You can increase this value in the file rwserver.conf in order to allow several report engines to be launched by the reports server and several reports to be executed at the same time.
regards -
Running multiple instances of program at same time problems
I have a C++ program that uses BerkeleyDB 4.8.26 with transactions. The program works well, and I can run multiple instances of the program and they cooperate on the database very well. The problem arises when I run a small test program that runs 10 instances of the same program at the same time.
Test program opens database with transactions, it does some database reads and writes and quits. This test is run in 10 instances of the same program at the same time. Some instances run and finish well, other instances end up with some of these errors:
unable to allocate memory for the lock table
PANIC: Cannot allocate memory
Error opening environment: DB_RUNRECOVERY: Fatal error, run database recovery
another with:
db_files/__db.001: No such file or directory
db_files: No such file or directory
Error opening environment: Invalid argument
others with this one:
Log file corrupt at LSN: [1][84474]
fileops: close db_files/log.0000000001
PANIC: Invalid argument
unable to join the environment
db_files: No such file or directory
Error opening environment: DB_RUNRECOVERY: Fatal error, run database recovery
I don't understand reason of this error, as I thought (and read) that BerkeleyDB should be able to handle multiple instances accessing database and is thread-safe.
I tried to run the test on both NFS and local disk drive with same results.
Is there anyone with an idea of what could be causing this problem? My platform is x86_64 GNU/Linux 2.6.18-164.el5.
Edited by: Miro Janosik on 21.9.2010 22:42
If there is someone who would like to look at this problem, I'd like to show you the program log files with verbose output turned on. There are 11 log files in the archive: http://bin.mypage.sk/FILES/log.rar
Log files that end up with lines like this one below mean that program finished running ok:
4000|0 4 20100922 07:07:24.094 20100922 07:07:24.094 FLOW_CMD
Here below is output from one of log files:
FilePersistentStorage::open() start
FilePersistentStorage::open() ReadDir
FilePersistentStorage::open() db_env_create
FilePersistentStorage::open() db_env_create ok
FilePersistentStorage::open() set cachesize
FilePersistentStorage::open() set_tx_max
FilePersistentStorage::open() set_timeout
FilePersistentStorage::open() set_lk_detect
FilePersistentStorage::open() exists homedir?
envp->open
FilePersistentStorage::open() envp->open
fileops: stat /var/tmp
fileops: open db_files/__db.rep.init
fileops: close db_files/__db.rep.init
fileops: open db_files/__db.001
fileops: close db_files/__db.001
fileops: open db_files/__db.001
fileops: mmap db_files/__db.001
fileops: close db_files/__db.001
unable to join the environment
fileops: directory list db_files
fileops: unlink db_files/__db.005
fileops: unlink db_files/__db.004
fileops: unlink db_files/__db.003
fileops: unlink db_files/__db.002
fileops: unlink db_files/__db.001
fileops: open db_files/__db.001
fileops: open db_files/__db.001
fileops: read db_files/log.0000000001: 12 bytes at offset 371
Finding last valid log LSN: file: 1 offset 371
fileops: close db_files/log.0000000001
fileops: open db_files/__db.005
fileops: seek db_files/__db.005 to 794624
fileops: write db_files/__db.005: 8192 bytes
fileops: mmap db_files/__db.005
fileops: close db_files/__db.005
fileops: open db_files/__db.006
fileops: seek db_files/__db.006 to 376832
fileops: write db_files/__db.006: 8192 bytes
fileops: mmap db_files/__db.006
fileops: close db_files/__db.006
fileops: open db_files/log.0000000001
fileops: read db_files/log.0000000001: 12 bytes at offset 335
fileops: read db_files/log.0000000001: 371 bytes at offset 0
fileops: directory list db_files
fileops: open db_files/log.0000000001
fileops: read db_files/log.0000000001: 28 bytes
fileops: close db_files/log.0000000001
Recovery starting from [1][243]
fileops: close db_files/log.0000000001
fileops: open db_files/log.0000000001
fileops: read db_files/log.0000000001: 28 bytes
fileops: write db_files/log.0000000001: 92 bytes at offset 371
fileops: flush db_files/log.0000000001
fileops: close db_files/log.0000000001
fileops: open db_files/log.0000000002
fileops: close db_files/log.0000000002
fileops: open db_files/log.00002
fileops: close db_files/log.00002
fileops: open db_files/log.0000000001
fileops: seek db_files/log.0000000001 to 463
fileops: write db_files/log.0000000001: 4096 bytes
fileops: write db_files/log.0000000001: 4096 bytes
fileops: write db_files/log.0000000001: 4096 bytes
fileops: write db_files/log.0000000001: 4096 bytes
fileops: write db_files/log.0000000001: 3633 bytes
fileops: close db_files/log.0000000001
Recovery complete at Wed Sep 22 07:07:23 2010
Maximum transaction ID 80000002 Recovery checkpoint [1][371]
FilePersistentStorage::open() ok
envp->lock_detect
FilePersistentStorage::open() lock_detect
rejected locks count: 0
db_create
dbp->open
fileops: stat db_files/test_aaps.db
fileops: stat db_files/test_aaps.db
fileops: stat db_files/__db.80000001.d9f23b56
fileops: open db_files/log.0000000001
fileops: read db_files/log.0000000001: 28 bytes
fileops: write db_files/log.0000000001: 67 bytes at offset 463
fileops: flush db_files/log.0000000001
fileops: open db_files/__db.80000001.d9f23b56
fileops: stat db_files/__db.80000001.d9f23b56
fileops: seek db_files/__db.80000001.d9f23b56 to 0
fileops: write db_files/__db.80000001.d9f23b56: 4096 bytes
fileops: seek db_files/__db.80000001.d9f23b56 to 4096
fileops: write db_files/__db.80000001.d9f23b56: 4096 bytes
fileops: flush db_files/__db.80000001.d9f23b56
fileops: close db_files/__db.80000001.d9f23b56
fileops: stat db_files/test_aaps.db
fileops: stat db_files/__db.80000001.d9f23b56
fileops: unlink db_files/__db.80000001.d9f23b56
fileops: open db_files/log.0000000001
fileops: read db_files/log.0000000001: 12 bytes at offset 463
DB_LOGC->get: LSN 1/463: invalid log record header
DB_TXN->abort: log undo failed for LSN: 1 463: Input/output error
fileops: close db_files/log.0000000001
PANIC: Input/output error
fileops: open db_files/log.0000000001
fileops: read db_files/log.0000000001: 12 bytes at offset 463
PANIC: fatal region error detected; run recovery
DB_LOGC->get: LSN: 1/463: read: DB_RUNRECOVERY: Fatal error, run database recovery
DB_TXN->abort: log undo failed for LSN: 1 463: DB_RUNRECOVERY: Fatal error, run database recovery
fileops: close db_files/log.0000000001
PANIC: DB_RUNRECOVERY: Fatal error, run database recovery
PANIC: DB_RUNRECOVERY: Fatal error, run database recovery
dbp->open end
dbp->close
PANIC: fatal region error detected; run recovery
envp->close
File handles still open at environment close
Open file handle: db_files/log.0000000001
fileops: close db_files/log.0000000001
PANIC: fatal region error detected; run recovery
Database 'db_files//test_aaps.db' open failed: DB_RUNRECOVERY: Fatal error, run database recovery
Database close failed: DB_RUNRECOVERY: Fatal error, run database recovery
environment close failed:
-
When I run multiple Microsoft PowerPoint files at a time on my MacBook Air, it simply gets hotter and generates a little noise (most probably fan noise). Will this have any effect on the life span of the MacBook Air?
Theoretically, yes.
Heat, and heat cycling certainly do take a toll on even solid state components. I have several "very old" Macintosh computers, and the logic boards exhibit the effects.
The fan too, running at higher speeds, does eventually cause wear. More than anything, the higher airflow leads to a higher propensity for dust accumulation on the fan, and the heat exchangers.
In practical terms though, maybe not so much. If you plan on having your computer exhibit a reasonable useful life before you retire it or upgrade, you may never experience any of the detrimental effects. I rarely use a computer longer than 3 years, before I sell, give it away, or just stop using it.
Personally, I wouldn't worry about it. Your computer is well designed to give you a long life, even in the manner you describe. Take care of it, but use it for what you bought it for. Nothing you mention is in any way abuse.
-
Hi,
I have multiple statements and I have to run each one, one at a time. I was wondering if there is a way I could run them all together?
I have used the below as an example.
select *
from user
where username = 'Bob'
select *
from acc
where account = '11111'
However I would like to run both the select statement at once, I don't want to combine them like this:
select *
from user, acc
where username = 'Bob'
and account = '11111'
Any ideas?
> where username = 'Bob'
> and account = '11111'
returns users named Bob with account number '11111'. What you want is users named Bob OR users with account number '11111':
select *
from user, acc
where username = 'Bob'
OR account = '11111'
SY.
-
Multiple running queries at the same time
Hi!
I looked around (and RTM) for this but didn't find anything, so I'm asking here.
I have quite a few long running queries (data loading and such things, warehousing stuff), and I need to be able to run multiple queries/statements at the same time. In TOAD I can do this, start a procedure and while it is running I can do SQL statements in another session tab (it supports threaded sessions - it starts queries in their own background thread/session).
When I start a long running procedure or query in SQL Developer I can not do anything until the procedure execution finishes. Is there any way (setting/preference) to enable SQL Developer to be able to run multiple queries at the same time?
I really would like to move away from TOAD, but this is a major showstopper for me.
Thanx for any tips.
Alex
Hi!
This post is going to be a little longer, but I have to clarify things out.
I did not mean to throw any wild accusations, because I did my fair share of RTFM and searching the help. I can tell you that if you put any of these in the help search box:
session
non shared
non-shared
connection
concurrent <- I guess this one should yield something
multiple
spawn
you won't find anything useful; the article that comes closest is this:
"Sharing of Connections
By default, each connection in SQL Developer is shared when possible. For example, if you open a table in the Connections navigator and two SQL Worksheets using the same connection, all three panes use one shared connection to the database. In this example, a commit operation in one SQL Worksheet commits across all three panes. If you want a dedicated session, you must duplicate your connection and give it another name. Sessions are shared by name, not connection information, so this new connection will be kept separate from the original."
It does not mention any spawning of non-shared connections from the current one, nor does it mention using an accelerator key combo. But since something could have been written about it, I guess you could call it a documentation bug, because it does not provide any clue to this functionality. The help is definitely of no help in this case. As you can see, I do not throw accusations without trying to find out something first. I guess if someone is not as deep into SQL Developer as you are, there is no way for him/her to know this.
OK, I tried your suggestion, and (sadly) it does not work as I suppose it should.
Here's what I did:
- start a new connection, and enter the following code in SQL Worksheet:
declare
j number;
begin
for i in 1..1000000
LOOP
j := sin(i);
end LOOP;
end;
As you can see, it doesn't do much besides holding the connection busy for a while when executed.
- start a new non-shared connection from the first one using CTRL-SHIFT-N (as you suggested) and put the following statement in the new SQL Worksheet (with "__1" appended to connection name)
select sysdate from dual;
- go to the first SQL Worksheet and execute the procedure
- while the procedure is executing, go to the second SQL Worksheet and hit F9.
The sysdate is returned as soon as the first SQL Worksheet finishes, and not any sooner. It may run in a separate session, but the result is not returned before the other session finishes what it is doing. I guess the correct behaviour would be to return the sysdate immediately.
I verified this behaviour repeating it 3 times starting with a new instance of SQL Developer, each time connecting to another schema and spawning the new non-shared session. The database used was Oracle 10.2.0.3 EE on RHEL 4 UPD3.
The concurrent execution lacks concurrency. The statements might be executed concurrently on the database (I did not go the extra mile to verify this), but the returning of results is just not independent of other sessions. To the end user this is as much concurrent as it is serial execution.
I hope developers get this issue straightened out soon, as I said, I'd love to move away from Toad, but I'll have to wait until they fix this out.
Is there anything else that can be done to make it behave correctly?
Kind regards
Alex
-
PowerShell using start job to run multiple code blocks at the same time
I will be working with many 1000’s of names in a list preforming multiple function on each name for test labs.
I notice that when it is running the functions on each name I am using almost no CPU or memory. That led me to research whether I can run multiple threads at once in a PowerShell program, which led me to articles suggesting Start-Job would do just what I am looking for.
As a test I put this together. It is a simple action as an exercise to see if this is indeed the best approach. However it appears to me as if it is still only running the actions on one name at a time.
Is there a way to run multiple blocks of code at once?
Thanks
Start-Job {
$csv1 = (Import-Csv "C:\Copy AD to test Lab\data\Usergroups1.csv").username
foreach ($name1 in $csv1) { Write-Output "Job1 $name1" }
}
Start-Job {
$csv2 = (Import-Csv "C:\Copy AD to test Lab\data\Usergroups2.csv").username
foreach ($name2 in $csv2) { Write-Output "Job2 $name2" }
}
Get-Job | Receive-Job
Lishron
You say your testing shows that you are using very little CPU or memory in processing each name, which suggests that processing a single name is a relatively trivial task.
You need to understand that using a background job is going to spin up another instance of powershell, and if you're going to do that per name what used to require a relatively insignificant amount of memory is going to take around 60 MB.
Background jobs are not really well suited for multi-threading short-running, trivial tasks. The overhead of setting up and tearing down the job session can be more than the task itself.
Background jobs are good for long-running tasks. For multi-threading short, trivial tasks runspaces or a workflow would probably be a better choice.
-
How can i run multiple instances of Photoshop EXE at the same time on windows 7
Alright. You may ask why do you need multiple ?
Assume that i have 10000 PSD files in 10 different folders
I have a script that just save as them as PNG
And these files are each 3000x3000 px
My computer has 8 cores and 1 photoshop exe is only using 1 cpu core
Also i have SSD raid system it has 750 mb read write per second
So right now I am wasting my time running only 1 Photoshop exe instead of at least 4.
I'm not aware of a way to run multiple instances or different versions of Photoshop at the same time on Windows.
Assuming of course they are all installed on the same operating system.
On Mac you can run two different versions of Photoshop at the same time, but I don't think that's possible on Windows, where it
appears that only one version of Photoshop can run at a time.
JOB in Sql Server Agent should run multiple times.
Hi Guys,
I have a ETL SSIS job in Sql Server Agent, Which should run multiple times.
1. For Example : I scheduled a job at 10:00 PM, If the job fails at 10:00 PM it should run automatically again at 10:10 PM, if the job fails again at 10:10 PM then the job should run at 10:40 PM.
If the job gets success at first attempt i.e 10:00 PM, then it should not run at 10:10 PM.
Note : The time difference between jobs is 10 minutes and 30 minutes. And i know that we can run the job at regular intervals.
Thanks in advance
Just add retry attempts to whatever number you want (2 as per your original explanation) in the Job step properties, as below.
Add logic to include a delay of 10 minutes; you can make use of the WAITFOR statement for that. See:
http://www.mssqltips.com/sqlservertip/1423/create-delays-in-sql-server-processes-to-mimic-user-input/
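The WAITFOR approach from the tip above can be sketched in T-SQL like this (the job name is illustrative):

```sql
-- In a retry/handler step: pause 10 minutes, then start the job again
WAITFOR DELAY '00:10:00';
EXEC msdb.dbo.sp_start_job @job_name = N'MyETLJob';  -- hypothetical job name
```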
Visakh
-
Problem running multiple folder actions at the same time
Hi,
Does anyone have some experience running multiple folder actions at the same time?
I've written a Applescript folder action that processes some files. When adding some files, the script starts and all goes well. The processing of each file takes a few minutes, but all files are processed correctly.
However, when adding some new files to the drop folder while the folder action is already processing other files dropped a few minutes earlier, the process that is already running immediately aborts and the folder action is relaunched on the new files, leaving the old files for what they are...
The same problem occurs with multiple folders, each having a folder action attached. When dropping files in "Folder A", the script starts processing. But if someone drops files in "Folder B", the script of folder A aborts, and the script of folder B starts processing. When adding more files to the folders, the result is the same. In some cases, when the last dropped files are processed, the OS continues where the operation was aborted on the previous files. But this does not always happen. To make things worse, when you remove the items from the folders afterwards, the folder action starts running again on items no longer present in the folder!!!
Does anyone know how I can prevent the folder action being aborted when new files are dropped, placing the new files on "hold" until the previous files are processed? How can I prevent a folder action being aborted when an item is dropped into another folder?
I've written a small script to test this behaviour. Just create one or more folders and attach the script below. Drop an item into the folder, wait a few seconds, and drop another one into the same or another folder. To monitor what happens, please check the console.
on adding folder items to this_folder after receiving added_items
set FolderName to this_folder as string
set ItemName to added_items as string
repeat with theIncrementValue from 1 to 15
delay 2
do shell script ("logger \"Folder: " & FolderName & " - Item: " & ItemName & " - Step: " & theIncrementValue & "\"")
end repeat
do shell script ("logger \"Folder: " & FolderName & " - Item: " & ItemName & " - Done...\"")
end adding folder items to
Thanks for any feedback.
That is pretty much the way Folder Actions work, especially since AppleScript is not multi-threaded. If you are using them as some intermediate step in a workflow, you might rethink the way you are handling the files (for example, use a droplet instead). Other options would be using launchd to watch a path or a shell script on a different thread.
-
hello, my iSight issue happens occasionally: from time to time I run multiple apps that use the iSight camera. When I close each app, the iSight camera's green light is still active. When that happens the only way to stop the iSight camera is to reboot.
Is there a way to use activity monitor to identify and kill the process controlling the isight? My apologies in advance if this sounds windows (task manager) like.
Thanks
tqtclipper wrote:
hello, my eyesight issue happens occasionally, from time to time i run multiple apps that use the isight camera...
http://support.apple.com/kb/HT2411: Your camera can be used by only one application at a time.
(Over time, Apple has changed the built-in camera's name on newer Macs from "iSight" to "FaceTime" and then to "FaceTime HD." Regardless of the name of your Mac's built-in camera, the same info and troubleshooting applies.)
tqtclipper wrote:
... Is there a way to use activity monitor to identify and kill the process controlling the isight? ...
http://support.apple.com/kb/PH5147: You can quit any process listed in the window that opens when you use Activity Monitor's Window > Activity Monitor menu command.
Message was edited by: EZ Jim
Mac OSX 10.8.3