Write Operation to Oracle DB fails
Hi,
One of my provisioning jobs should write user data (name, telephone, mail, etc.) to an Oracle database (Oracle Database 10g Express). This operation worked a few days ago(!).
But now I get an error when I submit this provisioning job:
Requested operation requires an OLE DB Session object, which is not supported by the current provider.
Do you have any ideas?
I'll give you good advice ... never use Oracle and IDM together.
But when it has to be, you need an additional Oracle ODAC installation to use OLE DB.
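For what it's worth, this error message generally means the provider named in the connection string does not support the session object the operation needs. After installing ODAC, make sure the connection string actually names the Oracle-supplied OLE DB provider (OraOLEDB.Oracle) rather than a Microsoft one. A sketch; the data source alias and credentials are placeholders:

```text
Provider=OraOLEDB.Oracle;Data Source=XE;User ID=<user>;Password=<password>;
```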
Similar Messages
-
Oracle Coherence first read/write operation takes more time
I'm currently testing the Oracle Coherence Java and C++ versions, and in both, the first read/write operation against any local, distributed, or near cache takes more time than subsequent read/write operations. Is this because of bootstrap operations happening inside the actual HashMap, serialization, or the memory-mapped implementation? What techniques can we use to improve the performance of this first read/write operation?
Currently I'm doing a single read/write operation after fetching the NamedCache instance. Please let me know whether there are any other techniques for boosting Coherence cache performance.
In which case, why bother using Coherence? You're not really gaining anything, are you?
What I'm trying to explain is that you're probably not going to get that "micro-second" level performance on a fully configured Coherence cluster, running across multiple machines, going via proxies for c++ clients. Coherence is designed to be a scalable, fault-tolerant, distributed caching/processing system. It's not really designed for real-time, guaranteed, nano-second/micro-second level processing. There are much better product stacks out there for that type of processing if that is your ultimate goal, IMHO.
As you say, just writing to a small, local Map (or array, List, Set, etc.) in a local JVM is always going to be very fast - literally as fast as the processor running in the machine. But that's not really the focus of a product like Coherence. It isn't trying to "out gun" what you can achieve on one machine doing simple processing; Coherence is designed for scalability rather than outright performance. Of course, the use of local caches (including Coherence's near caching or replicated caching), can get you back some of the performance you've "lost" in a distributed system, but it's all relative.
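The local-Map point above can be made concrete with a plain-Java sketch (no Coherence involved; the class and method names are made up for illustration). Even locally, the very first operation pays one-time costs such as collection initialization and JIT warm-up, which is part of why timing a single first read/write is a misleading benchmark; a Coherence NamedCache additionally pays cluster-join and serialization setup on first use, which this sketch does not capture:

```java
import java.util.HashMap;
import java.util.Map;

public class FirstOpTiming {
    // Times the first put against a fresh HashMap, then the average of n more puts.
    static long[] timePuts(int n) {
        Map<Integer, String> map = new HashMap<>();
        long t0 = System.nanoTime();
        map.put(0, "v0");
        long first = System.nanoTime() - t0;
        long t1 = System.nanoTime();
        for (int i = 1; i <= n; i++) {
            map.put(i, "v" + i);
        }
        long avg = (System.nanoTime() - t1) / n;
        return new long[] { first, avg };
    }

    public static void main(String[] args) {
        long[] t = timePuts(100_000);
        System.out.println("first put: " + t[0] + " ns; avg of later puts: " + t[1] + " ns");
    }
}
```

Run it a few times: the first-put number is often noticeably larger than the steady-state average, and neither number says anything about distributed performance.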
If you wander over to a few of the CUG presentations and attend a few CUG meetings, one of the first things the support guys will tell you is "benchmark on a proper cluster" and not "on a localised development machine". Why? Because the difference in scalability and performance will be huge. I'm not really trying to deter you from Coherence, but I don't think it's going to meet your requirements when fully configured in a cluster of "1 microsecond for 100000 data collection" on a continuous basis.
Just my two cents.
Cheers,
Steve
NB. I don't work for Oracle, so maybe they have a different opinion. :) -
Requeue Failed cachestore write operation.
I am trying to requeue failed cachestore write operations; to do this I added the <write-requeue-threshold> property to the Coherence cache configuration file, but I am not sure about its behaviour:
1) What happens if the number of failed cachestore operations is greater than the specified threshold? Will the extra objects also be requeued, or simply dropped?
2) Is the sequence of write operations maintained? Let's say:
i) The DB is down at time T1.
ii) Object A is requeued at time T2.
iii) The DB is up again at time T3, and at the same time another object B is also placed in the queue.
Will object A then be written first, or object B?
Gaurav
user8157603 wrote:
I am trying to requeue failed cachestore write operations; to do this I added the <write-requeue-threshold> property to the Coherence cache configuration file, but I am not sure about its behaviour:
1) What happens if the number of failed cachestore operations is greater than the specified threshold? Will the extra objects also be requeued, or simply dropped?
AFAIK, if the number of entries which failed to be written grows above the write-requeue-threshold, then the surplus entries will just not be queued for retries. Coherence would then simply not have the information that those entries are still dirty and should be written out.
2) Is the sequence of write operations maintained? Let's say:
i) The DB is down at time T1.
ii) Object A is requeued at time T2.
iii) The DB is up again at time T3, and at the same time another object B is also placed in the queue.
Will object A then be written first, or object B?
If you mean object A and B are put in the cache for the same key, then, one of the following two scenarios can happen:
As there is no such thing as two events happening at the same time when race conditions are guarded against, let's say that the cache processes the placement of Object B into the cache at time T4, which is very close to T3. The question is how T4 compares to the time when the write-behind thread starts collecting entries to write and passes them to the cache store (T5).
If T5 < T3 and T5 < T4 then the database is still down and the write does not succeed, and afterwards Object B will be enqueued for its first write, so the entry is not retried yet. Object A will not be written to the DB at all.
If T3 < T5 < T4 then Object A will be written and Object B will be enqueued for writing in the next batch write.
If T5 > T4 then Object A will not be found by the write-behind thread at all, and Object B will be written out.
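For reference, <write-requeue-threshold> lives inside the <read-write-backing-map-scheme> element of the cache configuration, next to the cachestore and write-delay settings. A configuration sketch; the scheme name, store class, and values are illustrative:

```xml
<distributed-scheme>
  <scheme-name>example-write-behind</scheme-name>
  <backing-map-scheme>
    <read-write-backing-map-scheme>
      <internal-cache-scheme>
        <local-scheme/>
      </internal-cache-scheme>
      <cachestore-scheme>
        <class-scheme>
          <class-name>com.example.DbCacheStore</class-name>
        </class-scheme>
      </cachestore-scheme>
      <!-- batch writes after this delay (write-behind) -->
      <write-delay>10s</write-delay>
      <!-- cap on entries kept for retry after a failed store;
           0 (the default) means failed writes are not requeued -->
      <write-requeue-threshold>1000</write-requeue-threshold>
    </read-write-backing-map-scheme>
  </backing-map-scheme>
</distributed-scheme>
```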
Best regards,
Robert -
The first binary file write operation for a new file takes progressively longer.
I have an application in which I am acquiring analog data from multiple
PXI-6031E DAQ boards and then writing that data to FireWire hard disks
over an extended time period (14 days). I am using a PXI-8145RT
controller, a PXI-8252 FireWire interface board and compatible FireWire
hard drive enclosures. When I start acquiring data to an empty
hard disk, creating files on the fly as well as the actual file I/O
operations are both very quick. As the number of files on the
hard drive increases, it begins to take considerably longer to complete
the first write to a new binary file. After the first write,
subsequent writes of the same data size to that same file are very
fast. It is only the first write operation to a new file that
takes progressively longer. To clarify, it currently takes 1 to 2
milliseconds to complete the first binary write of a new file when the
hard drive is almost empty. After writing 32, 150 MByte files,
the first binary write to file 33 takes about 5 seconds! This
behavior is repeatable and continues to get worse as the number of
files increases. I am using the FAT32 file system, required for
the Real-Time controller, and 80GB laptop hard drives. The
system works flawlessly until asked to create a new file and write the
first set of binary data to that file. I am forced to buffer lots
of data from the DAQ boards while the system hangs at this point.
The requirements for this data acquisition system do not allow for a
single data file so I can not simply write to one large file.
Any help or suggestions as to why I am seeing this behavior would be
greatly appreciated.

I am experiencing the same problem. Our program periodically monitors data and eventually saves it for post-processing. While it's searching for suitable data, it creates one file for every channel (32 in total) and starts streaming data to these files. If it finds the data is not suitable, it deletes the files and creates new ones.
In our lab, we tested the program on Windows and then on RT, and we did not find any problems.
Unfortunately, when it was time to install the PXI in the field (an electromechanical shovel at a copper mine) and test it, we came to find that saving was taking too long and the program failed, specifically when creating files (i.e., the "New File" function). It could take 5 or more seconds to create a single file.
As you can see, the field startup failed and we will have to modify our programs to work around this problem and return next week to try again, with the additional time and cost involved. Not to mention the bad image we are giving our customer.
I really like LabVIEW, but I am particularly upset because of this problem. LV RT is supposed to run as if it were LV Win32, with the obvious and expected differences, but a developer cannot expect things like this to happen. I remember a few months ago I had another problem: on RT, the Time/Date function gives a wrong value as your program runs when using timed loops. Can you expect something like that when evaluating your development platform? Fortunately, we found the problem before delivering the system to our customer, and there was a relatively easy workaround. Unfortunately, this time we had to hit the wall to find the problem.
On this particular problem I also found that it gets worse when there are more files in the directory. Create a new dir every N hours? I really don't think that's a solution. I would not expect this answer from NI.
I would really appreciate someone from NI giving us a technical explanation of why this problem happens, and not just "trial and error" "solutions".
By the way, we are using a PXI RT controller with the solid-state drive option.
Thank you.
Daniel R.
Message Edited by Daniel_Chile on 06-29-2006 03:05 PM -
Ok... How can I delete this post? ----> I resolved the issues.
Hi, when I create the trusted recon, the connector sends me:
ERROR [ACTIVE] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)' org.apache.struts.actions.DispatchAction - Request[CreateConnector] does not contain handler parameter named method
follow by
ERROR [ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)' XELLERATE.SERVER - Class/Method: tcTableDataObj/setTimestamp encounter some problems: {1}
java.lang.NullPointerException
at com.thortech.xl.dataobj.tcDataSet.setTimestamp(Unknown Source)
at com.thortech.xl.dataaccess.tcDataSet.setTimestamp(Unknown Source)
at com.thortech.xl.dataobj.tcTableDataObj.setTimestamp(Unknown Source)
at com.thortech.xl.ddm.instance.visitor.ImportVisitor.visitStarted(Unknown Source)
===================================================================
Hi, I think I need to reinstall the GTC or OIM, because everything else works fine (recon with AD, Exchange, database provisioning), but when I try to recon with Oracle Database 11g it fails.
I read the manuals but found nothing... OK, I did the following configs:
- Generic Technology Connector
- Create
=========================*Step 1*=================
Name ReconCity
Transport Provider (Reconciliation)................Database Application Tables Reconciliation
Format Provider (Reconciliation).....................Database Application Tables Reconciliation
Trusted Source Reconciliation.........................Selected
========================*Step 2*==================
Database Driver : oracle.jdbc.driver.OracleDriver
Database URL : jdbc:oracle:thin:@192.168.1.101:1521:DFGOB
Database User ID : system
Database Pass......: my_pass
parent Table/View
Name.......................: Recon_Users
Batch Size...............: All
Stop Reconciliation
Threshold..................: None
Stop Threshold........: None
Sourcedate Format: yyyy/MM/dd hh:mm:ss z
Reconcile Deletion
of Multival.................: "Selected"
Reconciliation
Type..........................: Full
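Before debugging the GTC itself, it can help to confirm the Step 2 settings in isolation. A standalone Java sketch; the URL is the one from this post, the class and method names are made up, and the commented-out connection attempt requires the Oracle JDBC driver (ojdbc) on the classpath with whatever credentials you configured:

```java
import java.util.regex.Pattern;

public class GtcDbCheck {
    // The host:port:SID form of a thin-driver URL, as used in Step 2 above.
    static final Pattern THIN_URL = Pattern.compile("jdbc:oracle:thin:@[^:]+:\\d+:\\w+");

    static boolean looksLikeThinUrl(String url) {
        return THIN_URL.matcher(url).matches();
    }

    public static void main(String[] args) {
        String url = "jdbc:oracle:thin:@192.168.1.101:1521:DFGOB"; // value from this post
        System.out.println("URL format ok: " + looksLikeThinUrl(url));
        // With ojdbc on the classpath, a direct connection test would look like
        // this (credentials as configured in the GTC):
        // try (java.sql.Connection c =
        //         java.sql.DriverManager.getConnection(url, "system", "<password>")) {
        //     System.out.println("connected: " + !c.isClosed());
        // }
    }
}
```

If the connection test fails here too, the problem is the database settings rather than the connector configuration.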
========================*Step 3 (try 1)*===========
SOURCE.................*RECON STAGING*.......................*OIM*
USER_ID--------------> USER_ID -------------> User ID
FIRTS_NAME--------> FIRST_NAME ----------> FIRST_NAME
LAST_NAME---------> LAST_NAME ----------> . --> Yes, I map all fields to OIM
PASSWORD----------> PASSWORD ---------> .
EMAIL------------------> EMAIL ----------------> .
STATUS---------------> STATUS --------------> .
..........|Xellerate |--> ORGANIZATION ------------> .
..........|End-User |------> EMPLOYEE TYPE -----> .
..........|Full-Time |-------> USER TYPE ------------> .
- (STATUS values from my database table is: Active or Disabled)
- ( I try (Enabled/Disabled) )
For Fields USER_ID to STATUS
(Edit option)
Dataset.......................: Reconciliation Staging
Child Dataset Name :
Field Name.................: USER_ID
Mapping Action..........: Create mapping Without Transformation
Matching Only............: Not Applicable
Case-Insensitive........: Not Applicable
Data Type *.............: String
Required.....................: Selected
For Fields ORGANIZATION to USER TYPE
(Add option)
Dataset........................: Reconciliation Staging
Child Dataset Name :
Field Name.................: ORGANIZATION
Mapping Action..........: Create mapping Without Transformation
Matching Only.............: Not Applicable
Case-Insensitive........: Not Applicable
Data Type *.............: String
Required.....................: Selected
Input
Literal: Xellerate
===After save on step 4, I go to Resource Manager --> Manage Scheduled Task --> RUN NOW, and the log sends me:
DEBUG QuartzWorkerThread-4 XELLERATE.ADAPTERS - Class/Method: tcStructureUtil/getUserDefinedCols entered.
ERROR QuartzWorkerThread-4 XELLERATE.APIS - Class/Method: tcReconciliationOperationsBean/ignoreEventData encounter some problems: {1}
java.lang.NullPointerException
at com.thortech.xl.dataobj.util.tcAttributeSource.getAttrColumnName(Unknown Source)
at com.thortech.xl.dataobj.util.tcReconciliationUtil.getRuleElementWhere(Unknown Source)
at com.thortech.xl.dataobj.util.tcReconciliationUtil.getRuleWhere(Unknown Source)
at com.thortech.xl.dataobj.util.tcReconciliationUtil.getMatchedUserList(Unknown Source)
at com.thortech.xl.dataobj.util.tcReconciliationUtil.getMatchedUserList(Unknown Source)
at com.thortech.xl.dataobj.util.tcReconciliationUtil.ignoreEvent(Unknown Source)
at com.thortech.xl.ejb.beansimpl.tcReconciliationOperationsBean.ignoreEventData(Unknown Source)
at com.thortech.xl.ejb.beansimpl.tcReconciliationOperationsBean.ignoreEvent(Unknown Source)
at com.thortech.xl.ejb.beans.tcReconciliationOperationsSession.ignoreEvent(Unknown Source)
at com.thortech.xl.ejb.beans.tcReconciliationOperations_gmh3ba_EOImpl.ignoreEvent(tcReconciliationOperations_gmh3ba_EOImpl.java:692)
at Thor.API.Operations.tcReconciliationOperationsClient.ignoreEvent(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at Thor.API.Base.SecurityInvocationHandler$1.run(Unknown Source)
at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
at weblogic.security.service.SecurityManager.runAs(Unknown Source)
at weblogic.security.Security.runAs(Security.java:41)
at Thor.API.Security.LoginHandler.weblogicLoginSession.runAs(Unknown Source)
at Thor.API.Base.SecurityInvocationHandler.invoke(Unknown Source)
at $Proxy62.ignoreEvent(Unknown Source)
at com.thortech.xl.gc.runtime.GCScheduleTask.execute(Unknown Source)
at com.thortech.xl.scheduler.tasks.SchedulerBaseTask.run(Unknown Source)
at com.thortech.xl.scheduler.core.quartz.QuartzWrapper$TaskExecutionAction.run(Unknown Source)
at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
at weblogic.security.service.SecurityManager.runAs(Unknown Source)
at weblogic.security.Security.runAs(Security.java:41)
at Thor.API.Security.LoginHandler.weblogicLoginSession.runAs(Unknown Source)
at com.thortech.xl.scheduler.core.quartz.QuartzWrapper.execute(Unknown Source)
at org.quartz.core.JobRunShell.run(JobRunShell.java:178)
at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:477)
ERROR QuartzWorkerThread-4 XELLERATE.GC.FRAMEWORKRECONCILIATION - Reconciliation Encountered error:
Thor.API.Exceptions.tcAPIException: java.lang.NullPointerException
at com.thortech.xl.ejb.beansimpl.tcReconciliationOperationsBean.ignoreEventData(Unknown Source)
at com.thortech.xl.ejb.beansimpl.tcReconciliationOperationsBean.ignoreEvent(Unknown Source)
at com.thortech.xl.ejb.beans.tcReconciliationOperationsSession.ignoreEvent(Unknown Source)
at com.thortech.xl.ejb.beans.tcReconciliationOperations_gmh3ba_EOImpl.ignoreEvent(tcReconciliationOperations_gmh3ba_EOImpl.java:692)
at Thor.API.Operations.tcReconciliationOperationsClient.ignoreEvent(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at Thor.API.Base.SecurityInvocationHandler$1.run(Unknown Source)
at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
at weblogic.security.service.SecurityManager.runAs(Unknown Source)
at weblogic.security.Security.runAs(Security.java:41)
at Thor.API.Security.LoginHandler.weblogicLoginSession.runAs(Unknown Source)
at Thor.API.Base.SecurityInvocationHandler.invoke(Unknown Source)
at $Proxy62.ignoreEvent(Unknown Source)
at com.thortech.xl.gc.runtime.GCScheduleTask.execute(Unknown Source)
at com.thortech.xl.scheduler.tasks.SchedulerBaseTask.run(Unknown Source)
at com.thortech.xl.scheduler.core.quartz.QuartzWrapper$TaskExecutionAction.run(Unknown Source)
at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
at weblogic.security.service.SecurityManager.runAs(Unknown Source)
at weblogic.security.Security.runAs(Security.java:41)
at Thor.API.Security.LoginHandler.weblogicLoginSession.runAs(Unknown Source)
at com.thortech.xl.scheduler.core.quartz.QuartzWrapper.execute(Unknown Source)
at org.quartz.core.JobRunShell.run(JobRunShell.java:178)
at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:477)
WARN QuartzWorkerThread-4 XELLERATE.GC.FRAMEWORKRECONCILIATION - Though Reconciliation Scheduled task has encountered an error, Reconciliation Transport providers have been "ended" smoothly. Any provider operation that occurs during that "end" or "clean-up" phase would have been executed e.g. Data archival. In case you want that data to be a part of next Reconciliation execution, restore it from Staging. Provider logs must be containing details about storage entities that would have been archived
==========================================
========================*Step 3 (try 2)*==============
SOURCE..................*RECON STAGING*....................*OIM*
USER_ID--------------> USER_ID -------------> User ID
FIRTS_NAME--------> FIRST_NAME ----------> FIRST_NAME
LAST_NAME---------> LAST_NAME ----------> .
PASSWORD----------> PASSWORD ----------> .
EMAIL------------------> EMAIL -----------------> .
........|Xellerate |--> ORGANIZATION -------------> .
........|End-User |------> EMPLOYEE TYPE ------> .
........|Full-Time|-------> USER TYPE --------------> .
........|Active |------> STATUS -----------------> .
- (On STATUS I tried (*Active/Enabled/Disabled*) )
For Fields USER_ID to EMAIL
(Edit option)
Dataset.......................: Reconciliation Staging
Child Dataset Name :
Field Name.................: USER_ID
Mapping Action.........: Create mapping Without Transformation
Matching Only.............: Not Applicable
Case-Insensitive.........: Not Applicable
Data Type *..............: String
Required.......................: Selected
For Fields ORGANIZATION to STATUS
(Add option)
Dataset.......................: Reconciliation Staging
Child Dataset Name :
Field Name................: STATUS
Mapping Action..........: Create mapping Without Transformation
Matching Only............: Not Applicable
Case-Insensitive........: Not Applicable
Data Type *.............: String
Required......................: Selected
Input
Literal: Active
===After save on step 4, I go to Resource Manager --> Manage Scheduled Task --> RUN NOW, and the log sends me the same ERROR posted above.
===============Step 3 (try 3)=======================
SOURCE...................*RECON STAGING*...................... OIM
USER_ID--------------> USER_ID -----------> User ID
FIRTS_NAME--------> FIRST_NAME ----------> FIRST_NAME
LAST_NAME---------> LAST_NAME ----------> .
PASSWORD----------> PASSWORD ----------> .
EMAIL------------------> EMAIL ------------> .
USER_STATUS----| T |--> USER_STATUS ---> .
.....................................|
.........................|Lookup.M4..|
............|Xellerate |--> ORGANIZATION ------------> .
............|End-User |------> EMPLOYEE TYPE ------> .
............|Full-Time |-------> USER TYPE -------------> .
- (USER_STATUS has the following values: True/False)
For Fields USER_ID to EMAIL
(Edit option)
Dataset.......................: Reconciliation Staging
Child Dataset Name :
Field Name.................: USER_ID
Mapping Action..........: Create mapping Without Transformation
Matching Only.............: Not Applicable
Case-Insensitive.........: Not Applicable
Data Type *..............: String
Required......................: Selected
For Fields ORGANIZATION to USER_TYPE
(Add option)
Dataset........................: Reconciliation Staging
Child Dataset Name :
Field Name................: ORGANIZATION
Mapping Action..........: Create mapping Without Transformation
Matching Only............: Not Applicable
Case-Insensitive........: Not Applicable
Data Type *.............: String
Required.....................: Selected
Input
Literal: Xellerate
Dataset.......................: Reconciliation Staging
Child Dataset Name:
Field Name................: USER_STATUS
Mapping Action.........: Create Mapping with Translation
Matching Only...........: Not Applicable
Case-Insensitive.......: Not Applicable
Data Type *............: String
Required....................: Selected
Field Name USER_STATUS
Input
Dataset........:Source
Field Name :USER_STATUS
Lookup Code Name
Literal...........: Lookup.M4.Recon
===============
Lookup Definition
Code : Lookup.M4Recon
Field:
Lookup Type(Selected)
Required (Not Selected)
Group: Object
Lookup Code Info
| Code Key | Decode |
1 | True/False | Active/Disabled|
===After save on step 4, I go to Resource Manager --> Manage Scheduled Task --> RUN NOW, and the log sends me the same ERROR posted above.=============
I hope you can help me, Thanks.....
Edited by: user11296330 on Oct 18, 2009 8:55 PM
Edited by: user11296330 on Oct 18, 2009 10:10 PM
Edited by: user11296330 on Oct 19, 2009 3:33 PM
Edited by: user11296330 on Oct 20, 2009 1:52 AM
Edited by: user11296330 on Oct 21, 2009 9:08 AM
Edited by: user11296330 on Nov 8, 2009 10:16 PM
Edited by: user11296330 on Nov 8, 2009 10:20 PM

Hi amigo, I solved my problem with the following:
OK, all of the above is good, but the manual gets one step wrong.
(All successful with --> ========================*Step 3 (try 2)*==============)
1.- OK, if you got everything like above, the last thing you need is to follow this link:
http://www.oracle.com/technology/obe/fusion_middleware/im1014/oim/obe12_using_gtc_for_reconciliation/using_the_gtc.htm
Go to: "Modifying the GTC" (just do this part).
Restart OIM and enjoy it.
Good day Amigo.
And don't forget on step three:
Click the Edit icon of the User ID field of the OIM - User data set.
b. On the Step 1: Provide Field Information page:
- From the Mapping Action list, select Create Mapping Without Transformation.
- Select Matching Only.
- Click Continue.
c. On the Step 2: Provide Mapping Information page, select Reconciliation
Staging from the Dataset list, select EMPLOYEE_ID (your ID field) from the Field Name
list, and then click Continue.
If something goes wrong, tell me your steps like I did, along with the log... see ya
Edited by: user11296330 on Nov 8, 2009 9:57 PM
Edited by: user11296330 on Nov 8, 2009 10:21 PM -
CRMD_ORDER failed with operating system recv call failed 10054
Hi,
Our functional folks are trying to create a sales order in CRM using the CRMD_ORDER
transaction. In the menu of the sales order creation, after selecting a
product, the GUI abnormally terminates with the error message "CR1: SAP system
message: Work process restarted; session terminated".
In the system log I found that the operating system recv call failed, as
follows:
=========================
A1 0 Initialization complete
Q0 I Operating system call recv failed (error no. 10054)
Q0 Q Start Workproc 0, Pid 6380
R4 7 Delete session 001 after error 023
A1 0 Initialization complete
A1 4 > in program , line ??? , event
================
In the work process trace file I found the following error:
=========
ThJAttachVmContainer2: found eventBits 0x40 for V1
TH_VMC_EVENT_ROLL_IN
ThJAttachVm: vm V1 already attached
ThJAttachAll: return 0
ERROR => VMCErrInfo 1 [thxxvmc.c 6049]
msgArea=14
B dbcalbuf: Buffer CALE (addr: 0000000010D20050, size: 500000, end:
0000000010D9A170)
M CCMS: AlInitGlobals : alert/use_sema_lock = TRUE.
I *** ERROR => OpenProcess PID 3692 failed for checking semaphore 12
ERROR_INVALID_PARAMETER: The parameter is incorrect.
[semnt.c 1920]
S *** init spool environment
S TSPEVJOB updates inside critical section: event_update_nocsec = 0
S initialize debug system
T Stack direction is downwards.
T debug control: prepare exclude for printer trace
T new memory block 000000000D1B0B60
========
I have updated to the latest NW701 kernel, patch no. 48, and tried, but no
luck. Also, as per note 559119, I changed the parameter
gw/gw_disconnect to the value 0, but still no luck.
I also tried the following:
to prevent firewall idle timeouts, set a low value
for the parameter "gw/keepalive" in the instances; and since users are being
disconnected, I tried setting rdisp/keepalive to a low value (value 20).
I also deactivated both parameters with the value 0, but still no luck ...
Appreciate any help ...

Glen,
What OS/DB combination are you on? We are seeing something similar on NW 7.0/CRM 2007 on Windows 2003/Oracle, where the work process dies in a similar fashion but with an error 050.
Take care,
Stephen -
The share operation Blu-ray has failed. QuickTime Error: 0
The share operation Blu-ray has failed. QuickTime Error: 0. Anyone have any idea how I can overcome this?
Are you working with native media? If you are, see if Final Cut will let you transcode a clip to ProRes 422. Do an export test with that single clip. (Write the output to a disk image so you don't waste a disc.)
Russ -
Write data from Oracle 9i 64 bit to Oracle 32 bit through database link
Hi to all:
We are using the following Oracle 64-bit setup:
Operating System: Sun Solaris 5.9
Database: Oracle 9i (64 bit)
Machine: Sun Server Spark V890
Now my question is that I want to move data from the production database running on Oracle 9i 64-bit on the Sun server to Oracle 9i 32-bit on Windows Server 2002 through a database link.
I heard that it is not possible to write data from Oracle 64-bit to Oracle 32-bit, no matter what operating system we are using.
Please confirm whether we can write data through a database link from Oracle 9i 64-bit to Oracle 32-bit.
Please specify every possible solution or any site from where I can have the solution.
Thanks in advance.
Hassan Raza Khan Lodhi

Create a test table (in the 64-bit DB) and add some data.
From the 32-bit DB, create a database link (the user, password, and TNS alias below are placeholders):
create database link dblinkto64
connect to remote_user identified by remote_pwd using 'TNS_ALIAS_TO_64';
Then execute the following query:
create table test_32
as select * from test64@dblinkto64;
Michael -
OFS Verified Oracle group failed
Dear All,
We verified the OFS Oracle group and it failed; we found the error information below:
FS-10890: Oracle Services for MSCS failed during the verifyGroup operation
FS-10111: Failed to verify group ORACLE PIP
FS-10046: Failed to verify group resource OraclePIP11202TNSListenerFslmaspipdb
FS-10151: Unable to open the parameter file D:\oracle\PIP\11202\database\initPIP_OFS.ora
FS-10180: File D:\oracle\PIP\11202\database\initPIP_OFS.ora is not found
0x3: The system cannot find the path specified.
We checked that this file exists in the correct path (D:\oracle\PIP\11202\database).
Can anyone help us? Thank you in advance.
Best Regards,
Kent
More log as below
Versions: client = 3.4.2 server = 3.4.2 OS =
Operation: Verifying group "ORACLE PIP"
Starting Time: Jan 04, 2014 12:37:42
Elapsed Time: 0 minutes, 3 seconds
1 12:37:42 Starting clusterwide operation
2 12:37:42 FS-10371: MASSAP07 : Performing initialization processing
3 12:37:43 FS-10371: MASSAP09 : Performing initialization processing
4 12:37:43 FS-10373: MASSAP07 : Determining owner node of resource
5 12:37:43 FS-10374: MASSAP07 : Gathering cluster information needed to perform the specified operation
6 12:37:43 FS-10374: MASSAP09 : Gathering cluster information needed to perform the specified operation
7 12:37:43 FS-10461: MASSAP07 : Starting verification of group ORACLE PIP
8 12:37:43 FS-10463: MASSAP07 : Verifying group attributes for group ORACLE PIP
9 12:37:43 FS-10464: MASSAP07 : Verifying attributes for resources in group ORACLE PIP
10 12:37:43 > FS-10315: Verifying disk resource Cluster Disk 3
11 12:37:43 > FS-10310: Verifying the IP address resource IP Address 10.236.149.43
12 12:37:43 > FS-10325: Verifying the network name resource Network Name MASPIPDB
13 12:37:43 > FS-10300: Verifying Oracle Net listener resource OraclePIP11202TNSListenerFslmaspipdb
14 12:37:44 > FS-10320: Verifying database resource PIP.WORLD
15 12:37:44 FS-10465: MASSAP07 : Verifying dependencies for resources in group ORACLE PIP
16 12:37:44 > FS-10316: Verifying dependencies for disk resource Cluster Disk 3
17 12:37:44 > FS-10311: Verifying the dependencies for IP address resource IP Address 10.236.149.43
18 12:37:44 > FS-10326: Verifying dependencies for the network name resource Network Name MASPIPDB
19 12:37:44 > FS-10301: Verifying dependencies for Oracle Net listener resource OraclePIP11202TNSListenerFslmaspipdb
20 12:37:44 > FS-10321: Verifying dependencies for database resource PIP.WORLD
21 12:37:44 ** WARNING : FS-10288: Parameter file D:\oracle\PIP\11202\database\initPIP_OFS.ora is not located on a cluster disk
22 12:37:44 FS-10466: MASSAP07 : Verification of group ORACLE PIP completed successfully
23 12:37:45 FS-10461: MASSAP09 : Starting verification of group ORACLE PIP
24 12:37:45 FS-10463: MASSAP09 : Verifying group attributes for group ORACLE PIP
25 12:37:45 FS-10464: MASSAP09 : Verifying attributes for resources in group ORACLE PIP
26 12:37:45 > FS-10315: Verifying disk resource Cluster Disk 3
27 12:37:45 > FS-10310: Verifying the IP address resource IP Address 10.236.149.43
28 12:37:45 > FS-10325: Verifying the network name resource Network Name MASPIPDB
29 12:37:45 > FS-10300: Verifying Oracle Net listener resource OraclePIP11202TNSListenerFslmaspipdb
30 12:37:45 ** ERROR : 0x3: The system cannot find the path specified.
31 12:37:45 ** ERROR : FS-10180: File D:\oracle\PIP\11202\database\initPIP_OFS.ora is not found
32 12:37:45 ** ERROR : FS-10151: Unable to open the parameter file D:\oracle\PIP\11202\database\initPIP_OFS.ora
33 12:37:45 ** ERROR : FS-10046: Failed to verify group resource OraclePIP11202TNSListenerFslmaspipdb
34 12:37:45 ** ERROR : FS-10111: Failed to verify group ORACLE PIP
35 12:37:45 ** ERROR : FS-10890: Oracle Services for MSCS failed during the verifyGroup operation
36 12:37:45 Please check your Windows Application log using the Event Viewer for any additional errors
37 12:37:45 The clusterwide operation failed!

It's just the "too many open files" issue caused by the bug in collecting the health metrics (see Note:430805.1, 563575.1).
Disable the health metrics (Note:379423.1).
If you're worried about doing this, read Note:469227.1 -
Specs:
iMac 10.8.5
3,4 GHz Intel Core i7
32 GB 1600 MHz DDR3
I can't read or write some files, the internet is failing, YouTube won't load, security software I tried to install was in Slovenian, not Dutch or English like in my System Preferences settings, pictures and files won't preview with the spacebar and are randomly corrupted, and when I entered something in the YouTube search bar (when it was still working) it sent me to a site with sex ads.
I tried restarting my iMac, and when I was logged back in, my dock preferences were reset.
I also tried to download some security software to check my Mac for malware, but when I did (I tried several), I got a notification that said something like 'dump files (don't know if this is the right translation...) damaged'.
I'm taking screenshots of all the weird notifications I get, and even three quarters of the screenshots I took in the last three hours are already unreadable.
It started this morning when I tried opening a Premiere Pro file I had worked on the night before.
When I tried opening it, it said the file was damaged and could not be opened.
I tried opening it with AE or importing the file in a new project but nothing helped.
When I tried looking for autosaves, this is the really weird part, there were none.
Even though there are autosaves from my other projects, this one was completely gone.
It looked like the day before never happened on my computer.
Also, when I opened Premiere, all the recent projects had been wiped.
So at first I thought it was a Premiere Pro failure.
But then, later in the day, I tried loading some RAW files from my CompactFlash card.
This is where I got an error (error -36) saying some data could not be read or written.
I tried importing the files with a few different techniques, from dragging to importing via Lightroom, and I succeeded with Image Browser.
But when I tried moving the files to another folder, the same error occurred.
While dealing with this issue I wanted to put on some soothing music on youtube.
This is when the next weird thing occurred: YouTube wasn't fully loading in Chrome. I refreshed a few times and checked the internet connection, but it made no difference.
When I tried in Safari it did work, but when I pressed Enter in the YouTube search bar, a page with sex ads appeared (I hadn't installed AdBlock in Safari...).
I read about "phishing", where you are redirected to a site where a possible malware installation can take place...
I don't know if it's connected to any of the problems I've been having but I just never experienced this on a mac, I have been a Mac user for 10 years now.
On top of it all, the internet started working worse and worse, and now it's not working at all. I had to fill in the password over and over; normally it remembers it.
Just like my System Preferences, all the preferences I had in Chrome were also reset.
Also somewhere in between I got this notification: Mac OS X must restore library to run programs. Type your password to allow.
To me this is all very weird and suspicious. I have clearly no idea what's going on. Could this be another sort of trojan horse or malware?
Some background info which could be helpful for solving this mystery:
Two months ago, the one-year-old Fusion Drive in my iMac just broke out of nowhere.
I got it replaced by a qualified apple repair store.
When I got my computer back, all the files were gone.
I got on the internet without AdBlock installed yet.
A game or whatever it was, can't clearly remember, got installed by accident.
I deleted it immediately.
Only two weeks later, I couldn't log in to my account. It didn't recognize my password and username.
So I brought my mac back to the store.
Here the repair guy said it was a minor thing and he just needed to reconnect my account. He also mentioned he found a downloaded game named Sparta, and that it probably had something to do with the error.
I asked him: could it be a virus? He replied: no way.
I don't know why it couldn't be a virus; just because it's a Mac doesn't mean it cannot happen.
So today I tried installing antivirus software (such as Avast, which was in a weird language that looked like Slovenian, and ClamXav, which was in Slovenian), but I couldn't install them.
PLEASE help me! I don't know what to do anymore. I work full-time and need my computer, and I have no time to bring it in for repair. Are there other, perhaps easier, ways?
Could this be the work of a virus or malware? Or is it a disk permissions issue?
It sounds like you may have multiple problems, but none of them are likely to be caused by malware.
First, the internet-related issues may be related to adware or a network compromise. I tend to lean more towards the latter, based on your description of the problem. See:
http://www.adwaremedic.com/kb/baddns.php
http://www.adwaremedic.com/kb/hackedrouter.php
If investigation shows that this is not a network-specific issue, then it's probably adware. See my Adware Removal Guide for help finding and removing it. Note that you mention AdBlock as if it should have prevented this, but it's important to understand that ad blockers do not protect you against adware in any way. Neither would any kind of anti-virus software, which often doesn't detect adware.
As for the other issues, it sounds like you've got some serious corruption. I would be inclined to say it sounds like a failing drive, except it sounds like you just got it replaced. How did you get all your files back after the new drive was installed?
(Fair disclosure: I may receive compensation from links to my sites, TheSafeMac.com and AdwareMedic.com, in the form of buttons allowing for donations. Donations are not required to use my site or software.) -
The share operation "Master File" has failed. The operation could not be completed because an error occurred creating frame 93164 (error -1). How can I find the frame?
https://discussions.apple.com/thread/6219522
-
I did try all of them and it doesn't work. It appears like this:
The share operation Master File has failed
The operation could not be completed because an error occurred when creating frame 608 (error -1).
and I don't know what to do, please help me.
Re: when I try to share my project to QuickTime, around 50% error -1 appears and I can't share it. What to do? Please help
Double post…
-
Hello
When I try to start the SQl server service i get the following error:
Event id 17058
Could not open error log file ''. Operating system error = 5(failed to retrieve text for this error. Reason: 15105).
As a test I have made sure that the errorlog file, and the entire drive it is on, has Everyone Full Control permissions, but to no avail. Does anyone have any ideas to resolve this issue?
Thank you
Hi,
Try running:
SELECT SERVERPROPERTY('ErrorLogFileName')
Then verify that the account used to run the SQL Server service has access to the path returned above. If possible, try logging on to the server with the same account used to run SQL Server and then navigating to the errorlog folder.
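To make that check concrete, here's a minimal sketch (not Microsoft tooling; the path below is a placeholder for whatever `SERVERPROPERTY('ErrorLogFileName')` returns on your server) that reports which access rights the current account has on the errorlog path:

```python
import os

def check_errorlog_access(path):
    """Report existence and read/write access for the given errorlog path."""
    return {
        "exists": os.path.exists(path),
        "readable": os.access(path, os.R_OK),
        "writable": os.access(path, os.W_OK),
    }

# Placeholder path; substitute the value returned by
# SELECT SERVERPROPERTY('ErrorLogFileName')
print(check_errorlog_access(r"C:\MSSQL\Log\ERRORLOG"))
```

Run it while logged on as the SQL Server service account: if "writable" comes back False, you've found the source of operating system error 5 (access denied).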
Thanks,
Andrew Bainbridge
SQL Server DBA
Please click "Propose As Answer" if a post solves your problem, or "Vote As Helpful" if a post has been useful to you -
File Adapter Write Operation inserts a new line at the end
Hi,
I am using the write operation of the File Adapter from a BPEL process. The data is written successfully, and the file is created in the location specified in the adapter WSDL. But the created file ends with an extra newline, which has to be avoided when writing the data.
Has anyone faced this and solved it?
I am on SOA 10.1.3.4.
Cheers,
-AR
It is a bug in 10g and will be resolved in one of the versions of 11g.
Cheers,
-AR -
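Until you're on a version with the fix, one workaround is to post-process the file after the adapter writes it and strip the trailing newline. A minimal sketch, assuming you can run a script against the output file; `strip_trailing_newline` is a hypothetical helper, not an adapter feature:

```python
def strip_trailing_newline(path):
    """Remove a single trailing newline appended to the end of the file."""
    with open(path, "rb") as f:
        data = f.read()
    # Handle both Windows (\r\n) and Unix (\n) line endings.
    if data.endswith(b"\r\n"):
        data = data[:-2]
    elif data.endswith(b"\n"):
        data = data[:-1]
    with open(path, "wb") as f:
        f.write(data)
```

Reading and writing in binary mode keeps the rest of the file byte-for-byte identical, so only the final newline is affected.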
Oracle Forms equivalent operations in Oracle ADF
I need to know whether there are equivalent operations in ADF corresponding to these operations in Oracle Forms
System.Current_block
System.Current_record
System.Current_form
etc...
As a mechanism for working out how ADF works, a suggested read is the Fusion Dev Guide: http://download.oracle.com/docs/cd/E16162_01/web.1112/e16182/toc.htm
Once you've done this, as an extension, the dev guide has appendixes "F - ADF Equivalents of Common Oracle Forms Triggers" and "G - Performing Common Oracle Forms Tasks in Oracle ADF" for assisting Forms programmers moving to ADF, a feature/mind map as it were.
However as Vinod says, don't make the mistake of thinking ADF is similar to Forms in its implementation. While Forms and ADF attempt to achieve similar results (ie. applications querying data from a db and showing to the user to view or edit), they do it in totally different technologies.
CM.