Cannot Create Data Repository with VM Server (3.1.1) Cluster
Dear All,
I have 3 servers (one for VM Manager, two for VM Servers) and 1 storage array (Sun Storage 6180, Fibre Channel) with a volume mapped as the Default Storage Domain.
I am trying to cluster my 2 VM Servers in VM Manager, but I run into the problems below:
Case 1: I create a server pool with the Clustered Server Pool box checked, select a physical disk, and add the 2 servers. But when I then create a data repository, the Select Physical Disk list on the Create a Data Repository page is blank; no physical disk is shown.
Case 2: When I create the server pool with the Clustered Server Pool box unchecked (non-clustered) and add the 2 VM servers, the physical disk is shown, but creating the data repository fails with:
"OVMRU_002030E Cannot create OCFS2 file system with local file server: Local FS vmserver1. Its server is not in a cluster
Wed Dec 26 01:27:12 ICT 2012"
Please kindly advise on this.
Thanks and regards,
Vandy
1. So you're trying to create a server pool and a storage repository on a single LUN that is direct attached? Doesn't the software on the 6180 allow you to present multiple LUNs from a disk group? Think about it: you create a clustered server pool on a single LUN and then try to use that same LUN for the repository.
2. How are both hosts attached? Are there multiple HBA cards in your 6180? If so, how many? If you have redundant cards you should be using multipathing for redundancy.
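To make the suggestion concrete, here is a hedged sketch of the check being implied: confirm that both OVM servers are presented the same LUNs. The WWID values below are invented stand-ins; on real hosts you would collect them with scsi_id (or inspect multipath -ll once multipathing is enabled).

```shell
# Hypothetical WWID lists as each OVM server would report them; on a live
# host collect the real values with something like:
#   for d in /dev/sd?; do scsi_id --whitelisted "$d"; done | sort -u
SERVER1_WWIDS="3600a0b8000aaaa01 3600a0b8000aaaa02"   # made-up stand-ins
SERVER2_WWIDS="3600a0b8000aaaa01 3600a0b8000aaaa02"   # made-up stand-ins

if [ "$SERVER1_WWIDS" = "$SERVER2_WWIDS" ]; then
  MATCH=yes   # both servers see the same LUNs: clustering can work
else
  MATCH=no    # LUN presentation differs: fix zoning/mapping on the array
fi
echo "LUN visibility match: $MATCH"
```

Device names (sda, sdb, ...) may legitimately differ between hosts; it is the WWIDs that must match for a clustered pool.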
Similar Messages
-
How can I create a Master Repository with a MySQL database?
How can I create a Master Repository with a MySQL database? I need to use MySQL for both the Master and Work repositories.
I tried adding the MySQL library JAR file to the drivers directory, but MySQL does not appear in the database technology list when creating the Master Repository.
Please advise.
Edited by: MadoatZ on Feb 19, 2011 1:47 AM
Creation of an ODI master repository is limited to a few relational databases only. Check the certification matrix for ODI 11g:
Oracle 10.2.0.4+
Oracle 11.1.0.7+
Oracle 11.2.0.1+
Microsoft SQL Server 2005
Microsoft SQL Server 2008
IBM DB2/UDB 9.7 and later FixPaks
IBM DB2/400 (V5R4+)
Hypersonic SQL 1.7.3+
Sybase AS Enterprise 15.0.x
thanks -
How can I create a data store with PermSize = 4096MB on HP-UX 64-bit?
Hi!
I use TimesTen 7.0.2:
TimesTen Release 7.0.2.0.0 (64 bit HPUX/IPF) (tt70_1:17001) 2007-05-02T05:22:15Z
Instance admin: root
Instance home directory: /opt/TimesTen/tt70_1
Daemon home directory: /var/TimesTen/tt70_1
Access control enabled.
I set PermSize = 4096MB for my new data store. Then I tried to create it:
ttIsql -connStr "DSN=tt_rddb1;UID=ttsys;PWD=ttsys;OraclePWD=ttsys;Overwrite=1" -e "exit;"
But the operation failed:
836: Cannot create data store shared-memory segment, error 22.
Can I create a data store of this size on HP-UX 64-bit?
Is largefiles enabled? I believe you can check with fsadm -F vxfs /filesystem
Also please understand that 'PermSize' is not the only attribute affecting the size of the TimesTen shared memory segment. The actual resulting size is
PermSize + TempSize + LogBuffSize + Overhead
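As a rough sketch of that arithmetic (the TempSize, LogBuffSize and overhead figures below are assumed placeholders, not values taken from this DSN):

```shell
# Segment sizing for a 4 GB PermSize store; every non-PermSize number here
# is an illustrative assumption, not a measured TimesTen value.
PERM_MB=4096
TEMP_MB=64        # assumed TempSize
LOGBUFF_MB=64     # assumed LogBuffSize
OVERHEAD_MB=64    # assumed fixed overhead

TOTAL_MB=$((PERM_MB + TEMP_MB + LOGBUFF_MB + OVERHEAD_MB))
TOTAL_BYTES=$((TOTAL_MB * 1024 * 1024))
echo "segment needed: ${TOTAL_MB} MB"

# On HP-UX, compare this against shmmax as reported by kmtune/kctune;
# on Linux you could check: cat /proc/sys/kernel/shmmax
```

The point of the sketch is only that the segment must exceed PermSize alone, so shmmax sized exactly to PermSize will still fail.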
So you would need to configure shmmax to be > 4 GB. Have you tried setting it to, say, 8 GB (just for testing purposes, to see if it eliminates the error)? -
925: Cannot create data store semaphores (Invalid argument)
I'm trying to connect to Timesten, but I'm getting this error.
I have looked at other similar discussions, but yet I could not solve the problem.
[timesten@atd info]$ ttisql "dsn=tpch"
Copyright (c) 1996, 2013, Oracle and/or its affiliates. All rights reserved.
Type ? or "help" for help, type "exit" to quit ttIsql.
connect "dsn=tpch";
925: Cannot create data store semaphores (Invalid argument)
703: Subdaemon connect to data store failed with error TT925
The command failed.
Done.
Here is my information.
[tpch]
Driver=/home_sata/timesten/TimesTen/tt1122/lib/libtten.so
DataStore=/home_sata/timesten/TimesTen/tt1122/tpch/tpch
LogDir=/home_sata/timesten/TimesTen/tt1122/tpch/logs
PermSize=1024
TempSize=512
PLSQL=1
DatabaseCharacterSet=US7ASCII
kernel.sem = 400 32000 512 5029
kernel.shmmax=68719476736
kernel.shmall=16777216
[timesten@atd info]$ cat /proc/meminfo
MemTotal: 297699764 kB
MemFree: 96726036 kB
Buffers: 582996 kB
Cached: 155831636 kB
SwapCached: 0 kB
Active: 115729396 kB
Inactive: 78767560 kB
Active(anon): 44040440 kB
Inactive(anon): 8531544 kB
Active(file): 71688956 kB
Inactive(file): 70236016 kB
Unevictable: 0 kB
Mlocked: 0 kB
SwapTotal: 112639992 kB
SwapFree: 112639992 kB
Dirty: 160 kB
Writeback: 0 kB
AnonPages: 38082348 kB
Mapped: 15352480 kB
Shmem: 14489676 kB
Slab: 3993152 kB
SReclaimable: 3826768 kB
SUnreclaim: 166384 kB
KernelStack: 18344 kB
PageTables: 245352 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 261457104 kB
Committed_AS: 74033552 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 903384 kB
VmallocChunk: 34205870424 kB
HardwareCorrupted: 0 kB
AnonHugePages: 35538944 kB
HugePages_Total: 32
HugePages_Free: 32
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 6384 kB
DirectMap2M: 2080768 kB
DirectMap1G: 299892736 kB
------ Shared Memory Limits --------
max number of segments = 4096
max seg size (kbytes) = 67108864
max total shared memory (kbytes) = 67108864
min seg size (bytes) = 1
The error message suggests that the system is running out of semaphores, although plenty seem to be configured:
kernel.sem = 400 32000 512 5029
Could it be that there are other programs on this machine as this user using semaphores?
Have you made changes to the kernel parameters without making them permanent with
# /sbin/sysctl -p
or a reboot?
If you have run /sbin/sysctl -p, have you recycled the TT daemon
$ ttDaemonAdmin -restart
so TimesTen picks up the new settings?
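For reference, a small sketch that unpacks the four kernel.sem fields (the field order is assumed to be the conventional SEMMSL SEMMNS SEMOPM SEMMNI) so you can compare them with what is actually in use:

```shell
# Stand-in for: sysctl -n kernel.sem (the value quoted in this thread).
SEM_LINE="400 32000 512 5029"

# Assumed field order: SEMMSL SEMMNS SEMOPM SEMMNI.
set -- $SEM_LINE
SEMMSL=$1; SEMMNS=$2; SEMOPM=$3; SEMMNI=$4

echo "max semaphores per set : $SEMMSL"
echo "max semaphores system  : $SEMMNS"
echo "max ops per semop call : $SEMOPM"
echo "max semaphore sets     : $SEMMNI"

# On the live box, "ipcs -s" shows which users currently hold semaphore
# sets; a TT925 despite generous limits often means stale sets, or
# settings that were never applied (sysctl -p + ttDaemonAdmin -restart).
```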
Tim -
Cannot create data store shared-memory segment error
Hi,
Here is some background information:
[ttadmin@timesten-la-p1 ~]$ ttversion
TimesTen Release 11.2.1.3.0 (64 bit Linux/x86_64) (cmttp1:53388) 2009-08-21T05:34:23Z
Instance admin: ttadmin
Instance home directory: /u01/app/ttadmin/TimesTen/cmttp1
Group owner: ttadmin
Daemon home directory: /u01/app/ttadmin/TimesTen/cmttp1/info
PL/SQL enabled.
[ttadmin@timesten-la-p1 ~]$ uname -a
Linux timesten-la-p1 2.6.18-164.6.1.el5 #1 SMP Tue Oct 27 11:28:30 EDT 2009 x86_64 x86_64 x86_64 GNU/Linux
[root@timesten-la-p1 ~]# cat /proc/sys/kernel/shmmax
68719476736
[ttadmin@timesten-la-p1 ~]$ cat /proc/meminfo
MemTotal: 148426936 kB
MemFree: 116542072 kB
Buffers: 465800 kB
Cached: 30228196 kB
SwapCached: 0 kB
Active: 5739276 kB
Inactive: 25119448 kB
HighTotal: 0 kB
HighFree: 0 kB
LowTotal: 148426936 kB
LowFree: 116542072 kB
SwapTotal: 16777208 kB
SwapFree: 16777208 kB
Dirty: 60 kB
Writeback: 0 kB
AnonPages: 164740 kB
Mapped: 39188 kB
Slab: 970548 kB
PageTables: 10428 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
CommitLimit: 90990676 kB
Committed_AS: 615028 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 274804 kB
VmallocChunk: 34359462519 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
Hugepagesize: 2048 kB
extract from sys.odbc.ini
[cachealone2]
Driver=/u01/app/ttadmin/TimesTen/cmttp1/lib/libtten.so
DataStore=/u02/timesten/datastore/cachealone2/cachealone2
PermSize=14336
OracleNetServiceName=ttdev
DatabaseCharacterset=WE8ISO8859P1
ConnectionCharacterSet=WE8ISO8859P1
[ttadmin@timesten-la-p1 ~]$ grep SwapTotal /proc/meminfo
SwapTotal: 16777208 kB
Though we have around 140GB of memory available and 64GB on shmmax, we are unable to increase the PermSize to anything more than 14GB. When I changed it to PermSize=15359, I got the following error.
[ttadmin@timesten-la-p1 ~]$ ttIsql "DSN=cachealone2"
Copyright (c) 1996-2009, Oracle. All rights reserved.
Type ? or "help" for help, type "exit" to quit ttIsql.
connect "DSN=cachealone2";
836: Cannot create data store shared-memory segment, error 28
703: Subdaemon connect to data store failed with error TT836
The command failed.
Done.
I am not sure why this is not working, considering we have got 144GB RAM and 64GB shmmax allocated! Any help is much appreciated.
Regards,
Raj
Those parameters look OK for a 100GB shared memory segment. Also check the following:
ulimit - a mechanism to restrict the amount of system resources a process can consume. Your instance administrator user, the user who installed Oracle TimesTen needs to be allocated enough lockable memory resource to load and lock your Oracle TimesTen shared memory segment.
This is configured with the memlock entry in the OS file /etc/security/limits.conf for the instance administrator.
To view the current setting run the OS command
$ ulimit -l
and to set it to a value dynamically use
$ ulimit -l <value>.
Once changed you need to restart the TimesTen master daemon for the change to be picked up.
$ ttDaemonAdmin -restart
Beware sometimes ulimit is set in the instance administrators "~/.bashrc" or "~/.bash_profile" file which can override what's set in /etc/security/limits.conf
If this is ok then it might be related to Hugepages. If TT is configured to use Hugepages then you need enough Hugepages to accommodate the 100GB shared memory segment. TT is configured for Hugepages if the following entry is in the /u01/app/oracle/EXALYTICS_MWHOME/TimesTen/tt1122/info/ttendaemon.options file:
-linuxLargePageAlignment 2
So if configured for Hugepages please see this example of how to set an appropriate Hugepages setting:
Total the amount of memory required to accommodate your TimesTen database from /u01/app/oracle/EXALYTICS_MWHOME/TimesTen/tt1122/info/sys.odbc.ini
PermSize+TempSize+LogBufMB+64MB Overhead
For example consider a TimesTen database of size:
PermSize=250000 (unit is MB)
TempSize=100000
LogBufMB=1024
Total Memory = 250000+100000+1024+64 = 351088MB
The Hugepages pagesize on the Exalytics machine is 2048KB or 2MB. Therefore divide the total amount of memory required above in MB by the pagesize of 2MB. This is now the number of Hugepages you need to configure.
351088/2 = 175544
As user root edit the /etc/sysctl.conf file
Add/modify vm.nr_hugepages= to be the number of Hugepages calculated.
vm.nr_hugepages=175544
Add/modify vm.hugetlb_shm_group = 600
This parameter is the group id of the TimesTen instance administrator. In the Exalytics system this is oracle. Determine the group id while logged in as oracle with the following command. In this example it’s 600.
$ id
uid=700(oracle) gid=600(oinstall) groups=600(oinstall),601(dba),700(oracle)
As user root edit the /etc/security/limits.conf file
Add/modify the oracle memlock entries so that the fourth field equals the total amount of memory for your TimesTen database. The unit for this value is KB. For example this would be 351088*1024=359514112KB
oracle hard memlock 359514112
oracle soft memlock 359514112
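The Hugepages and memlock arithmetic above, collected into one sketch (the sizes are the worked example's, not a real system's):

```shell
# Worked example from the text: a 250000+100000+1024+64 MB database.
PERM_MB=250000
TEMP_MB=100000
LOGBUF_MB=1024
OVERHEAD_MB=64

TOTAL_MB=$((PERM_MB + TEMP_MB + LOGBUF_MB + OVERHEAD_MB))   # 351088 MB
HUGEPAGE_MB=2                                               # 2048 kB pages
NR_HUGEPAGES=$((TOTAL_MB / HUGEPAGE_MB))
MEMLOCK_KB=$((TOTAL_MB * 1024))

echo "vm.nr_hugepages=$NR_HUGEPAGES"     # for /etc/sysctl.conf
echo "oracle hard memlock $MEMLOCK_KB"   # for /etc/security/limits.conf
echo "oracle soft memlock $MEMLOCK_KB"
```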
THIS IS VERY IMPORTANT: for the above changes to take effect you need to either shut down the BI software environment (including TimesTen) and reboot, or issue the following OS command to make the changes permanent.
$ sysctl -p
Please note that dynamic setting (including using 'sysctl -p') of vm.nr_hugepages while the system is up may not give you the full number of Hugepages that you have specified. The only guaranteed way to get the full complement of Hugepages is to reboot.
Check Hugepages has been setup correctly, look for Hugepages_Total
$ cat /proc/meminfo | grep Huge
Based on the example values above you would see the following:
HugePages_Total: 175544
HugePages_Free: 175544 -
836: Cannot create data store shared-memory segment, error 22
Hi,
I am hoping that there is an active TimesTen user community out there who could help with this, or the TimesTen support team who hopefully monitor this forum.
I am currently evaluating TimesTen for a global investment organisation. We currently have a large data warehouse, where we utilise summary views and query rewrite, but have isolated some data that we would like to store in memory and then report on through a J2EE website.
We are evaluating TimesTen versus developing our own custom cache. Obviously, we would like to go with a packaged solution, but we need to ensure that there are no limits in relation to maximum size. Looking through the documentation, it appears that the only limit on a 64-bit system is the actual physical memory on the box. Sounds good, but we want to prove it, since we would like to see how the application scales when we store about 30gb (the limit on our UAT environment is 32gb). The ultimate goal is to see if we can store about 50-60gb in memory.
Is this correct? Or are there any caveats in relation to this?
We have been able to get our data store to hold 8gb of data, but want to increase this. I am assuming that the following error message is due to us not changing /etc/system on the box:
836: Cannot create data store shared-memory segment, error 22
703: Subdaemon connect to data store failed with error TT836
Can somebody from the user community, or an Oracle TimesTen support person, recommend what should be changed above to fully utilise the 32gb of memory and the 12 processors on the box?
It's quite a big deal for us to bounce the UAT unix box, so I want to be sure that I have factored in all changes that would ensure the following:
* Existing Oracle Database instances are not adversely impacted
* We are able to create a Data Store which is able to fully utilise the physical memory on the box
* We don't need to change these settings for quite some time, and still be able to complete our evaluation
We are currently in discussion with our in-house Oracle team, but need to complete this process before contacting Oracle directly, but help with the above request would help speed this process up.
The current /etc/system settings are below, and I have put the current machine's settings as comments at the end of each line.
Can you please provide the recommended settings to fully utilise the existing 32gb on the box?
Machine
## I have contrasted the minimum prerequisites for TimesTen and then contrasted it with the machine's current settings:
SunOS uatmachinename 5.9 Generic_118558-11 sun4us sparc FJSV,GPUZC-M
FJSV,SPARC64-V
System Configuration: Sun Microsystems sun4us
Memory size: 32768 Megabytes
12 processors
/etc/system
set rlim_fd_max = 1080 # Not set on the machine
set rlim_fd_cur=4096 # Not set on the machine
set rlim_fd_max=4096 # Not set on the machine
set semsys:seminfo_semmni = 20 # machine has 0x42, Decimal = 66
set semsys:seminfo_semmsl = 512 # machine has 0x81, Decimal = 129
set semsys:seminfo_semmns = 10240 # machine has 0x2101, Decimal = 8449
set semsys:seminfo_semmnu = 10240 # machine has 0x2101, Decimal = 8449
set shmsys:shminfo_shmseg=12 # machine has 1024
set shmsys:shminfo_shmmax = 0x20000000 # machine has 8,589,934,590. The hexadecimal translates into 536,870,912
$ /usr/sbin/sysdef | grep -i sem
sys/sparcv9/semsys
sys/semsys
* IPC Semaphores
66 semaphore identifiers (SEMMNI)
8449 semaphores in system (SEMMNS)
8449 undo structures in system (SEMMNU)
129 max semaphores per id (SEMMSL)
100 max operations per semop call (SEMOPM)
1024 max undo entries per process (SEMUME)
32767 semaphore maximum value (SEMVMX)
16384 adjust on exit max value (SEMAEM)
Hi,
I work for Oracle in the UK and I manage the TimesTen pre-sales support team for EMEA.
Your main problem here is that the value for shmsys:shminfo_shmmax in /etc/system is currently set to 8 Gb, thereby limiting the maximum size of a single shared memory segment (and hence a TimesTen datastore) to 8 Gb. You need to increase this to a suitable value (maybe 32 Gb in your case). While you are doing that it would be advisable to increase any of the other kernel parameters that are currently lower than recommended up to the recommended values. There is no harm in increasing them other than possibly a tiny increase in kernel resources, but with 32 GB of RAM I don't think you need be concerned about that...
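For instance, a 32 Gb shmmax works out as follows (a sketch of the arithmetic only; the /etc/system line it prints is illustrative):

```shell
# Express 32 GB in bytes and hex for the shmsys:shminfo_shmmax setting.
GB=32
SHMMAX_BYTES=$((GB * 1024 * 1024 * 1024))
printf 'set shmsys:shminfo_shmmax = 0x%x  # %d bytes (%d GB)\n' \
  "$SHMMAX_BYTES" "$SHMMAX_BYTES" "$GB"
```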
You should also be sure that the system has enough swap space configured to support a shared memory segment of this size. I would recommend that you have at least 48 GB of swap configured.
TimesTen should detect that you have a multi-CPU machine and adjust its behaviour accordingly but if you want to be absolutely sure you can set SMPOptLevel=1 in the ODBC settings for the datastore.
If you want more direct assistance with your evaluation going forward then please let me know and I will contact you directly. Of course, you are free to continue using this forum if you would prefer.
Regards, Chris -
'Cannot begin data load. Analytic Server Error(1042006): Network Error
Hi...
I get an error message when I upload data from a source file into Planning via IKM SQL to Essbase (data).
Some records fail with the following error:
'Cannot begin data load. Analytic Server Error(1042006): Network Error [10061]: Unable To Connect To [localhost:32774]. The client timed out waiting to connect to the Essbase Agent using TCP/IP. Check your network connections. Also please make sure that Server and Port values are correct'
What is this error about? Is the commit interval too large? Currently the value is 1000.
Hi,
You could try the following
1. From the Start menu, click Run.
2. Type regedit and then click OK.
3. In the Registry Editor window, click the following directory:
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Tcpip\Parameters
4. From the Edit menu, click New, DWORD Value.
The new value appears in the list of parameters.
5. Type MaxUserPort and then press Enter.
Double-click MaxUserPort.
6. In the Edit DWORD Value window, do the following:
* Click Decimal.
* Enter 65534.
* Click OK.
7. From the Edit menu, click New, DWORD Value.
The new value appears in the list of parameters.
8. Type TcpTimedWaitDelay and then press Enter.
9. Double-click TcpTimedWaitDelay.
10. In the Edit DWORD Value window, do the following:
* Click Decimal.
* Type 300
* Click OK.
11. Close the Registry Editor window.
12. Reboot essbase server
Let us know how it goes.
Cheers
John
http://john-goodwin.blogspot.com/ -
ORA-02348: cannot create VARRAY column with embedded LOB
Hi
This is the error message I get when I try to create a table from my schema file, which has a (sub-)element of type CLOB.
In my XML document I have an element which needs to be declared as a CLOB (because it's > 4000 bytes); in my schema I define its element node like:
<xs:element name="MocovuState" xdb:SQLType="CLOB">
I can register this Schema file but when I create the table, I get the error:
ORA-02348: cannot create VARRAY column with embedded LOB
Does anybody know how to handle this ?
Marcel
You need to use the xdb:storeVarrayAsTable="true" schema annotation so that unbounded elements are created at schema registration time as nested tables. Varrays cannot contain CLOBs/BLOBs. Use the schema annotation xdb:SQLType="CLOB" to tell Oracle XML DB to use CLOB storage for the element. See your schema below:
P.S. XMLSPY is invaluable as it supports Oracle XML Schema annotations.
<?xml version="1.0"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:xdb="http://xmlns.oracle.com/xdb" targetNamespace="http://www.yourregisteredschemanamespace.com" elementFormDefault="qualified" attributeFormDefault="unqualified" xdb:storeVarrayAsTable="true">
<xs:element name="nRootNode">
<xs:complexType>
<xs:all>
<xs:element name="nID" type="xs:long"/>
<xs:element name="nStringGroup" type="nStringGroup" minOccurs="0"/>
</xs:all>
</xs:complexType>
</xs:element>
<xs:complexType name="nStringGroup">
<xs:sequence>
<xs:element name="nString" type="nString" minOccurs="0" maxOccurs="unbounded"/>
</xs:sequence>
</xs:complexType>
<xs:complexType name="nString" xdb:SQLType="CLOB">
<xs:sequence>
<xs:element name="nValue" type="nValue" minOccurs="0" xdb:SQLType="CLOB"/>
</xs:sequence>
<xs:attribute name="id" type="xs:long" use="required"/>
</xs:complexType>
<xs:simpleType name="nValue">
<xs:restriction base="xs:string">
<xs:minLength value="1"/>
</xs:restriction>
</xs:simpleType>
</xs:schema> -
How can I create a data store with PermSize > 512MB on WIN32?
Hi!
How can I create a data store with PermSize > 512MB on WIN32? If I set PermSize > 512MB on WIN32, the data store becomes invalid.
Thanks for the details. As I mentioned, due to issues with the way Windows manages memory and address space, it is generally not possible to create a datastore larger than around 700 Mb on WIN32. Sometimes you may be lucky and get close to 1 GB, but usually not. The issue is as follows: on Windows, a TimesTen datastore is a shared mapping created from memory backed by the paging file. This shared mapping must be mapped into the process address space as a contiguous range of addresses. So, if you have a 1 GB datastore, your process needs a contiguous 1 GB range of free addresses in order to connect to (map) the datastore. Unfortunately, the default behaviour of Windows is to map DLLs all over the process address space, and any process that uses a significant number of DLLs is very unlikely to have a contiguous free address range larger than 500-700 Mb.
This problem does not exist on other operating systems such as Unix or Linux, nor does it exist on 64-bit Windows. So, if you need a cache or datastore larger than around 700 Mb, you need to use either 64-bit Windows or another O/S. Note that even on 32-bit Linux/Unix, TimesTen datastores are limited to a maximum size of 2 GB. If you need more than 2 GB you need a 64-bit O/S.
Chris -
You cannot create a customer with grouping ZLES
hi friends,
I encountered the message below during BP creation: "You cannot create a customer with grouping ZLES"
ZLES being the account group.
Here is what I have done so far:
I configured BP to Customer (auto creation)
and also BP to Vendor (auto creation).
Only one of the two works at a time: either BP to Customer or BP to Vendor.
When I check the config under Number Range > Define Groupings and Assign Number Ranges, there is a radio button Int.Std.Grping. If it is set for the grouping of Mast tenant with Cust acc, there is no error for that, but the error persists while creating BP to Vendor; and if the radio button is set for Landlord with Vendor account, then the issue occurs for BP to Customer, i.e. the error "You cannot create a customer with grouping ZLES".
How can I enable both of them together (if that is possible)?
Kindly help.
Sorry for the late reply, if your issue isn't solved yet:
In Define Groupings and Assign Number Ranges, don't select the radio button for your grouping. Next, while creating the business partner, follow these steps:
1 bp
2 click on person, group or org as desired
3 select the role and click create, AND
4 at the right-hand corner there is Grouping; select the Grouping (I was missing this step, due to which I used to get the error)
Hope this solves it. Also check the account groups of both BP and Customer, or else there will be a synchronisation error.
regards -
Essbase Analytics Link cannot create data synchronization server database
When I try to create a data synchronization server database using Essbase Analytics Link, the error below occurs; can anyone help? Thanks.
dss.log:
[19 Oct 2011 17:28:55] [dbmgr] ERROR: last message repeated 2 more times
[19 Oct 2011 17:28:55] [dbmgr] removed "C:\oracle\product\EssbaseAnalyticsLink\oem\hfm\Comma\Default\Comma.hdf"
[19 Oct 2011 17:28:55] [dbmgr] removed "C:\oracle\product\EssbaseAnalyticsLink\oem\hfm\Comma\Default\PERIOD.hrd"
[19 Oct 2011 17:28:55] [dbmgr] removed "C:\oracle\product\EssbaseAnalyticsLink\oem\hfm\Comma\Default\VIEW.hrd"
[19 Oct 2011 17:28:55] [dbmgr] removed "C:\oracle\product\EssbaseAnalyticsLink\oem\hfm\Comma\Default\YEAR.hrd"
[19 Oct 2011 17:28:58] [dbmgr] Create metadata: "C:/oracle/product/EssbaseAnalyticsLink/oem/hfm/Comma/Default/Comma.hdf"
[19 Oct 2011 17:28:59] [dbmgr] WARN : HR#03826: Directory "C:\oracle\product\EssbaseAnalyticsLink/Work/XOD/backUp_2" not found. Trying to create
[19 Oct 2011 17:29:15] [dbmgr] ERROR: ODBC: HR#01465: error in calling SQLDriverConnect ([Microsoft][ODBC SQL Server Driver][Shared Memory]Invalid connection. [state=08001 code=14]).
[19 Oct 2011 17:29:15] [dbmgr] ERROR: HR#00364: Cannot open source reader for "ACCOUNT"
[19 Oct 2011 17:29:15] [dbmgr] ERROR: HR#00627: Cannot create dimension: "ACCOUNT".
[19 Oct 2011 17:29:16] [dbmgr] ERROR: HR#07722: Cube 'main_cube' of application 'Comma' is not registered.
eal.log:
[2011-Oct-19 17:28:56] http://localhost/livelink/Default.aspx?command=readYear&server=TestEss64&application=Comma&domain=
[2011-Oct-19 17:28:56] http://localhost/livelink/Default.aspx?command=readPeriod&server=TestEss64&application=Comma&domain=
[2011-Oct-19 17:28:57] http://localhost/livelink/Default.aspx?command=readView&server=TestEss64&application=Comma&domain=
[2011-Oct-19 17:28:57] http://localhost/livelink/Default.aspx?command=getVersion&server=TestEss64&application=Comma&domain=
[2011-Oct-19 17:28:58] DSS Application created
[2011-Oct-19 17:28:58] http://localhost/livelink/Default.aspx?command=getICPWeight&server=TestEss64&application=Comma&domain=
[2011-Oct-19 17:29:15] (-6981) HR#07772: cannot register HDF
[2011-Oct-19 17:29:15] com.hyperroll.jhrapi.JhrapiException: (-6981) HR#07772: cannot register HDF
[2011-Oct-19 17:29:15] at com.hyperroll.jhrapi.JhrapiImpl.updateMetadata(Native Method)
[2011-Oct-19 17:29:15] at com.hyperroll.jhrapi.Application.updateMetadata(Unknown Source)
[2011-Oct-19 17:29:15] at com.hyperroll.hfm2ess.bridge.HyperRollProcess.updateMetadata(Unknown Source)
[2011-Oct-19 17:29:15] at com.hyperroll.hfm2ess.bridge.ws.BridgeOperationManagerImpl.createAggServerApp(Unknown Source)
[2011-Oct-19 17:29:15] at com.hyperroll.hfm2ess.bridge.ws.BridgeOperationManager.createAggServerApp(Unknown Source)
[2011-Oct-19 17:29:15] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
[2011-Oct-19 17:29:15] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
[2011-Oct-19 17:29:15] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
[2011-Oct-19 17:29:15] at java.lang.reflect.Method.invoke(Method.java:597)
[2011-Oct-19 17:29:15] at weblogic.wsee.jaxws.WLSInstanceResolver$WLSInvoker.invoke(WLSInstanceResolver.java:92)
[2011-Oct-19 17:29:15] at weblogic.wsee.jaxws.WLSInstanceResolver$WLSInvoker.invoke(WLSInstanceResolver.java:74)
[2011-Oct-19 17:29:15] at com.sun.xml.ws.server.InvokerTube$2.invoke(InvokerTube.java:151)
[2011-Oct-19 17:29:15] at com.sun.xml.ws.server.sei.EndpointMethodHandlerImpl.invoke(EndpointMethodHandlerImpl.java:268)
[2011-Oct-19 17:29:15] at com.sun.xml.ws.server.sei.SEIInvokerTube.processRequest(SEIInvokerTube.java:100)
[2011-Oct-19 17:29:15] at com.sun.xml.ws.api.pipe.Fiber.__doRun(Fiber.java:866)
[2011-Oct-19 17:29:15] at com.sun.xml.ws.api.pipe.Fiber._doRun(Fiber.java:815)
[2011-Oct-19 17:29:15] at com.sun.xml.ws.api.pipe.Fiber.doRun(Fiber.java:778)
[2011-Oct-19 17:29:15] at com.sun.xml.ws.api.pipe.Fiber.runSync(Fiber.java:680)
[2011-Oct-19 17:29:15] at com.sun.xml.ws.server.WSEndpointImpl$2.process(WSEndpointImpl.java:403)
[2011-Oct-19 17:29:15] at com.sun.xml.ws.transport.http.HttpAdapter$HttpToolkit.handle(HttpAdapter.java:532)
[2011-Oct-19 17:29:15] at com.sun.xml.ws.transport.http.HttpAdapter.handle(HttpAdapter.java:253)
[2011-Oct-19 17:29:15] at com.sun.xml.ws.transport.http.servlet.ServletAdapter.handle(ServletAdapter.java:140)
[2011-Oct-19 17:29:15] at weblogic.wsee.jaxws.WLSServletAdapter.handle(WLSServletAdapter.java:171)
[2011-Oct-19 17:29:15] at weblogic.wsee.jaxws.HttpServletAdapter$AuthorizedInvoke.run(HttpServletAdapter.java:708)
[2011-Oct-19 17:29:15] at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:363)
[2011-Oct-19 17:29:15] at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:146)
[2011-Oct-19 17:29:15] at weblogic.wsee.util.ServerSecurityHelper.authenticatedInvoke(ServerSecurityHelper.java:103)
[2011-Oct-19 17:29:15] at weblogic.wsee.jaxws.HttpServletAdapter$3.run(HttpServletAdapter.java:311)
[2011-Oct-19 17:29:15] at weblogic.wsee.jaxws.HttpServletAdapter.post(HttpServletAdapter.java:336)
[2011-Oct-19 17:29:15] at weblogic.wsee.jaxws.JAXWSServlet.doRequest(JAXWSServlet.java:98)
[2011-Oct-19 17:29:15] at weblogic.servlet.http.AbstractAsyncServlet.service(AbstractAsyncServlet.java:99)
[2011-Oct-19 17:29:15] at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
[2011-Oct-19 17:29:15] at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
[2011-Oct-19 17:29:15] at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125)
[2011-Oct-19 17:29:15] at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:300)
[2011-Oct-19 17:29:15] at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:183)
[2011-Oct-19 17:29:15] at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.wrapRun(WebAppServletContext.java:3717)
[2011-Oct-19 17:29:15] at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3681)
[2011-Oct-19 17:29:15] at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
[2011-Oct-19 17:29:15] at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:120)
[2011-Oct-19 17:29:15] at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2277)
[2011-Oct-19 17:29:15] at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2183)
[2011-Oct-19 17:29:15] at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1454)
[2011-Oct-19 17:29:15] at weblogic.work.ExecuteThread.execute(ExecuteThread.java:207)
[2011-Oct-19 17:29:15] at weblogic.work.ExecuteThread.run(ExecuteThread.java:176)
[2011-Oct-19 17:29:15] LiveLinkException [HR#09746]: Data Synchronization Server database cannot be created
What version of EAL have you installed, and what OS (32-bit or 64-bit) are you installing it on?
What version of the OUI did you use?
Have you gone through all the configuration steps successfully?
Cheers
John
http://john-goodwin.blogspot.com/ -
Problem to create data source with OCI type of driver
Hi Experts
We are creating an XA datasource with an OCI-type driver using an enterprise application with data-sources.xml; the XML is like this:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE data-sources SYSTEM "data-sources.dtd" >
<data-sources>
<data-source>
<data-source-name>OCI_DS_XA</data-source-name>
<driver-name>ORACLE_DRIVER</driver-name>
<init-connections>1</init-connections>
<max-connections>25</max-connections>
<max-time-to-wait-connection>120</max-time-to-wait-connection>
<expiration-control>
<connection-lifetime>60</connection-lifetime>
<run-cleanup-thread>300</run-cleanup-thread>
</expiration-control>
<sql-engine>Vendor_SQL</sql-engine>
<jdbc-2.0>
<xads-class-name>
oracle.jdbc.xa.client.OracleXADataSource
</xads-class-name>
<object-factory>
oracle.jdbc.pool.OracleDataSourceFactory
</object-factory>
<properties>
<property>
<property-name>serverName</property-name>
<property-value><SERVER_NAME></property-value>
</property>
<property>
<property-name>serverPort</property-name>
<property-value>1521</property-value>
</property>
<property>
<property-name>databaseName</property-name>
<property-value>ORCL</property-value>
</property>
<property>
<property-name>driverType</property-name>
<property-value>oci</property-value>
</property>
<property>
<property-name>user</property-name>
<property-value>username</property-value>
</property>
<property>
<property-name>password</property-name>
<property-value>password</property-value>
</property>
</properties>
</jdbc-2.0>
</data-source>
</data-sources>
We have installed the Oracle client on the server machine and configured all the environment variables. While deploying the application from the CE Developer Studio IDE to the CE 7.1 AS, we get the following error:
Description:
1. Exception has been returned while the 'sap.com/DS_TEST_EAR' was starting. Warning/Exception :
[ERROR CODE DPL.DS.6193] Error while ; nested exception is:
com.sap.engine.services.deploy.exceptions.ServerDeploymentException: [ERROR CODE DPL.DS.5030] Clusterwide exception: server ID 7653550:com.sap.engine.services.dbpool.exceptions.BaseDeploymentException: Cannot create DataSource "OCI_DS_XA".
at com.sap.engine.services.dbpool.deploy.ContainerImpl.startDataSources(ContainerImpl.java:1467)
at com.sap.engine.services.dbpool.deploy.ContainerImpl.prepareStart(ContainerImpl.java:468)
at com.sap.engine.services.deploy.server.application.StartTransaction.prepareCommon(StartTransaction.java:219)
at com.sap.engine.services.deploy.server.application.StartTransaction.prepare(StartTransaction.java:179)
at com.sap.engine.services.deploy.server.application.ApplicationTransaction.makeAllPhasesOnOneServer(ApplicationTransaction.java:419)
at com.sap.engine.services.deploy.server.application.ParallelAdapter.makeAllPhasesImpl(ParallelAdapter.java:495)
at com.sap.engine.services.deploy.server.application.StartTransaction.makeAllPhasesImpl(StartTransaction.java:554)
at com.sap.engine.services.deploy.server.application.ParallelAdapter.runInTheSameThread(ParallelAdapter.java:248)
at com.sap.engine.services.deploy.server.application.ParallelAdapter.makeAllPhasesAndWait(ParallelAdapter.java:389)
at com.sap.engine.services.deploy.server.DeployServiceImpl.startApplicationAndWait(DeployServiceImpl.java:3387)
at com.sap.engine.services.deploy.server.DeployServiceImpl.startApplicationAndWait(DeployServiceImpl.java:3373)
at com.sap.engine.services.deploy.server.DeployServiceImpl.startApplicationAndWait(DeployServiceImpl.java:3276)
at com.sap.engine.services.deploy.server.DeployServiceImpl.startApplicationAndWait(DeployServiceImpl.java:3249)
at com.sap.engine.services.dc.lcm.impl.J2EELCMProcessor.doStart(J2EELCMProcessor.java:99)
at com.sap.engine.services.dc.lcm.impl.LifeCycleManagerImpl.start(LifeCycleManagerImpl.java:62)
at com.sap.engine.services.dc.cm.deploy.impl.LifeCycleManagerStartVisitor.visit(LifeCycleManagerStartVisitor.java:34)
at com.sap.engine.services.dc.cm.deploy.impl.DeploymentItemImpl.accept(DeploymentItemImpl.java:83)
at com.sap.engine.services.dc.cm.deploy.impl.DefaultDeployPostProcessor.postProcessLCMDeplItem(DefaultDeployPostProcessor.java:80)
at com.sap.engine.services.dc.cm.deploy.impl.DefaultDeployPostProcessor.postProcess(DefaultDeployPostProcessor.java:56)
at com.sap.engine.services.dc.cm.deploy.impl.DeployerImpl.doPostProcessing(DeployerImpl.java:741)
at com.sap.engine.services.dc.cm.deploy.impl.DeployerImpl.performDeploy(DeployerImpl.java:732)
at com.sap.engine.services.dc.cm.deploy.impl.DeployerImpl.doDeploy(DeployerImpl.java:576)
at com.sap.engine.services.dc.cm.deploy.impl.DeployerImpl.deploy(DeployerImpl.java:270)
at com.sap.engine.services.dc.cm.deploy.impl.DeployerImpl.deploy(DeployerImpl.java:192)
at com.sap.engine.services.dc.cm.deploy.impl.DeployerImplp4_Skel.dispatch(DeployerImplp4_Skel.java:875)
at com.sap.engine.services.rmi_p4.DispatchImpl._runInternal(DispatchImpl.java:351)
at com.sap.engine.services.rmi_p4.server.ServerDispatchImpl.run(ServerDispatchImpl.java:70)
at com.sap.engine.services.rmi_p4.P4Message.process(P4Message.java:62)
at com.sap.engine.services.rmi_p4.P4Message.execute(P4Message.java:37)
at com.sap.engine.services.cross.fca.FCAConnectorImpl.executeRequest(FCAConnectorImpl.java:872)
at com.sap.engine.services.rmi_p4.P4Message.process(P4Message.java:53)
at com.sap.engine.services.cross.fca.MessageReader.run(MessageReader.java:58)
at com.sap.engine.core.thread.execution.Executable.run(Executable.java:108)
at com.sap.engine.core.thread.execution.CentralExecutor$SingleThread.run(CentralExecutor.java:304)
Caused by: com.sap.engine.frame.core.database.DatabaseException: Exception of type java.sql.SQLException occurred: Closed Connection.
at com.sap.engine.core.database.impl.DataSourceAdministratorImpl.createDataSource(DataSourceAdministratorImpl.java:49)
at com.sap.engine.services.dbpool.deploy.ContainerImpl.startDataSources(ContainerImpl.java:1400)
... 33 more
Caused by: java.sql.SQLException: Closed Connection
at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:125)
at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:162)
at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:227)
at oracle.jdbc.driver.GetCharSetError.processError(T2CConnection.java:3082)
at oracle.jdbc.driver.T2CConnection.getCharSetIds(T2CConnection.java:2811)
at oracle.jdbc.driver.T2CConnection.logon(T2CConnection.java:300)
at oracle.jdbc.driver.PhysicalConnection.<init>(PhysicalConnection.java:344)
at oracle.jdbc.driver.T2CConnection.<init>(T2CConnection.java:136)
at oracle.jdbc.driver.T2CDriverExtension.getConnection(T2CDriverExtension.java:79)
at oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:545)
at oracle.jdbc.pool.OracleDataSource.getConnection(OracleDataSource.java:194)
at oracle.jdbc.pool.OracleConnectionPoolDataSource.getPhysicalConnection(OracleConnectionPoolDataSource.java:121)
at oracle.jdbc.xa.client.OracleXADataSource.getXAConnection(OracleXADataSource.java:333)
at oracle.jdbc.xa.client.OracleXADataSource.getXAConnection(OracleXADataSource.java:84)
at com.sap.sql.connect.factory.XADSPooledConnectionFactory.getPooledConnection(XADSPooledConnectionFactory.java:27)
at com.sap.sql.connect.datasource.DBDataSourceImpl.createPooledConnection(DBDataSourceImpl.java:677)
at com.sap.sql.connect.datasource.DBDataSourcePoolImpl.initConnections(DBDataSourcePoolImpl.java:1099)
at com.sap.sql.connect.datasource.DBDataSourcePoolImpl.<init>(DBDataSourcePoolImpl.java:49)
at com.sap.sql.connect.datasource.DataSourceManager.createDataSource(DataSourceManager.java:507)
at com.sap.sql.connect.datasource.DataSourceManager.createDataSource(DataSourceManager.java:136)
at com.sap.sql.manager.OpenSQLManager.createDataSource(OpenSQLManager.java:141)
at com.sap.engine.core.database.impl.DataSourceAdministratorImpl.createDataSource(DataSourceAdministratorImpl.java:42)
... 34 more
But when we create the data source with the thin driver type, it works fine. We have already created a driver named ORACLE_DRIVER on the server from ojdbc14.jar using the NetWeaver Administrator console.
The same thing happens when we create the OCI data source through the NetWeaver Administrator console, specifying all the parameters as above and an initial connection pool size greater than zero.
The same thing also happens for a normal (JDBC 1.x) data source created with the OCI driver type.
Hello,
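Not from the original thread, but worth checking: the thin driver works while OCI fails during character-set negotiation (T2CConnection.getCharSetIds), which usually points at the native Oracle client environment rather than the data source definition. A rough sketch of the checks, with heuristic messages of my own:

```python
def check_oci_env(env):
    """Heuristic checks for an Oracle OCI (type 2) JDBC setup.

    `env` is a dict of environment variables, e.g. dict(os.environ).
    The OCI driver needs a native Oracle client whose version matches
    the ojdbc jar, located via ORACLE_HOME and the shared-library path.
    """
    problems = []
    if not env.get("ORACLE_HOME"):
        problems.append("ORACLE_HOME is not set")
    lib_path = env.get("LD_LIBRARY_PATH", "")  # PATH on Windows
    if env.get("ORACLE_HOME") and env["ORACLE_HOME"] not in lib_path:
        problems.append("LD_LIBRARY_PATH does not include the Oracle client libs")
    return problems

print(check_oci_env({}))  # → ['ORACLE_HOME is not set']
```

If both checks pass, also verify that the installed client version matches ojdbc14.jar; a mismatch can produce exactly this kind of "Closed Connection" during logon.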
I ran a search on SDN related to your error code and found the following:
UnsupportedClassVersionError in NWDS CE
DeploymentWarning-WebService-Collection
Maybe it's useful to you.
Regards,
Siddhesh -
Problem in creating DATA Model from SQL SERVER 2008 in BI PUBLISHER
Dear Team,
I connected BI Publisher to SQL Server 2008, but when creating a report's data model/dataset, I select the tables and when I click on RESULT I get this error:
[Hyperion][SQLServer JDBC Driver][SQLServer]Invalid object name 'DBNAME.DBO.TABLE'.
Please help me resolve this problem.
Thanks,
Him
Edited by: h on Aug 22, 2011 6:31 PM
Hi David,
The things I said are not a fix for this problem.
If your RCU installation worked, then you do not have to worry about modifying the createfr.sql.
Edit:
I've just tracked down the problem. It appears that when using the query builder, BI Publisher puts the " quotes in the wrong places.
For example, this query will give the Hyperion error:
select "table"."field"
from "database.user"."table"
To correct it, write it like this:
select "table"."field"
from "database"."user"."table"
Edited by: EBA on Nov 14, 2011 10:21 AM -
Can't create a repository with a local physical disk
Hi,
I'm using Oracle VM Manager 3.0.3.
I created a non-clustered server pool with one server. That server has two identical 500GB internal SATA drives and one 750GB eSATA drive in AHCI mode. The 750GB eSATA drive is the primary boot drive and hosts the MBR with Windows 7. One of the 500GB internal SATA drives hosts Oracle VM Server; the other has no partition, and I want to use it as a local OVM repository. When I boot the server I can select either Windows 7 or Oracle VM Server with no issue.
After the OVM server boots, OVM Manager 3.0.3 can discover it along with 2 physical disks: the 500GB SATA internal drive (SATA_WDC_WD5000BEKT-_WD-WX41A11X1750) I want to use as a repository and the eSATA 750GB drive. OVM Manager reports these physical disks as SAN type with no file system.
When I create my local repository for the server pool, I select the physical disk SATA_WDC_WD5000BEKT-_WD-WX41A11X1750 and click the "Next" button. But the job always fails with the following details:
Job Construction Phase
begin()
Appended operation 'File System Construct' to object '0004fb0000090000865c3f26b528bc39 (Local FS OracleVS01)'.
Appended operation 'Repository Construct' to object '0004fb0000030000182872191a913bac (SATA_WDC_WD5000BEKT-_WD-WX41A11X1750)'.
commit()
Completed Step: COMMIT
Objects and Operations
Object (IN_USE): [LocalFileServer] 0004fb0000090000865c3f26b528bc39 (Local FS OracleVS01)
Operation: File System Construct
Object (CREATED): [LocalFileSystem] 0004fb0000050000f36710bdcca530d4 (fs_MyLocalRepository)
Object (CREATED): [Repository] 0004fb0000030000182872191a913bac (MyLocalRepository)
Operation: Repository Construct
Object (IN_USE): [StorageElement] 0004fb00001800004e793d6f05fced03 (SATA_WDC_WD5000BEKT-_WD-WX41A11X1750)
Job Running Phase at 12:18 on Tue, Jan 31, 2012
Job Participants: [00:25:22:dc:0a:ee:ff:ff:ff:ff:ff:ff:ff:ff:ff:ff (OracleVS01)]
Actioner
Starting operation 'File System Construct' on object '0004fb0000050000f36710bdcca530d4 (fs_MyLocalRepository)'
Job Internal Error (Operation)com.oracle.ovm.mgr.api.exception.FailedOperationException: OVMAPI_B000E Storage plugin command [storage_plugin_createFileSystem] failed for storage server [0004fb0000090000865c3f26b528bc39] failed with [com.oracle.ovm.mgr.api.exception.FailedOperationException: OVMAPI_4010E Attempt to send command: storage_plugin_createFileSystem to server: OracleVS01 failed. OVMAPI_4004E Server Failed Command: storage_plugin_createFileSystem oracle.ocfs2.OCFS2.OCFS2Plugin 0004fb0000050000f36710bdcca530d4 /dev/mapper/SATA_WDC_WD5000BEKT-_WD-WX41A11X1750, Status: OSCPlugin.FileSystemBusyEx:'An ocfs2 filesystem already exists on /dev/mapper/SATA_WDC_WD5000BEKT-_WD-WX41A11X1750'
Tue Jan 31 12:18:46 CST 2012
Tue Jan 31 12:18:46 CST 2012] OVMAPI_4010E Attempt to send command: storage_plugin_createFileSystem to server: OracleVS01 failed. OVMAPI_4004E Server Failed Command: storage_plugin_createFileSystem oracle.ocfs2.OCFS2.OCFS2Plugin 0004fb0000050000f36710bdcca530d4 /dev/mapper/SATA_WDC_WD5000BEKT-_WD-WX41A11X1750, Status: org.apache.xmlrpc.XmlRpcException: OSCPlugin.FileSystemBusyEx:'An ocfs2 filesystem already exists on /dev/mapper/SATA_WDC_WD5000BEKT-_WD-WX41A11X1750'
Tue Jan 31 12:18:46 CST 2012
Tue Jan 31 12:18:46 CST 2012
Tue Jan 31 12:18:46 CST 2012
at com.oracle.ovm.mgr.action.StoragePluginAction.processException(StoragePluginAction.java:1325)
at com.oracle.ovm.mgr.action.StoragePluginAction.createFileSystem(StoragePluginAction.java:868)
at com.oracle.ovm.mgr.op.physical.storage.FileSystemConstruct.createFileSystem(FileSystemConstruct.java:57)
at com.oracle.ovm.mgr.op.physical.storage.FileSystemConstruct.action(FileSystemConstruct.java:49)
at com.oracle.ovm.mgr.api.job.JobEngine.operationActioner(JobEngine.java:193)
at com.oracle.ovm.mgr.api.job.JobEngine.objectActioner(JobEngine.java:264)
at com.oracle.ovm.mgr.api.job.InternalJobDbImpl.objectCommitter(InternalJobDbImpl.java:1090)
at com.oracle.odof.core.AbstractVessel.invokeMethod(AbstractVessel.java:247)
at com.oracle.odof.core.AbstractVessel.invokeMethod(AbstractVessel.java:207)
at com.oracle.odof.core.BasicWork.invokeMethod(BasicWork.java:136)
at com.oracle.odof.command.InvokeMethodCommand.process(InvokeMethodCommand.java:100)
at com.oracle.odof.core.BasicWork.processCommand(BasicWork.java:81)
at com.oracle.odof.core.TransactionManager.processCommand(TransactionManager.java:751)
at com.oracle.odof.core.WorkflowManager.processCommand(WorkflowManager.java:401)
at com.oracle.odof.core.WorkflowManager.processWork(WorkflowManager.java:459)
at com.oracle.odof.io.AbstractClient.run(AbstractClient.java:42)
at java.lang.Thread.run(Thread.java:662)
Caused by: com.oracle.ovm.mgr.api.exception.FailedOperationException: OVMAPI_4010E Attempt to send command: storage_plugin_createFileSystem to server: OracleVS01 failed. OVMAPI_4004E Server Failed Command: storage_plugin_createFileSystem oracle.ocfs2.OCFS2.OCFS2Plugin 0004fb0000050000f36710bdcca530d4 /dev/mapper/SATA_WDC_WD5000BEKT-_WD-WX41A11X1750, Status: org.apache.xmlrpc.XmlRpcException: OSCPlugin.FileSystemBusyEx:'An ocfs2 filesystem already exists on /dev/mapper/SATA_WDC_WD5000BEKT-_WD-WX41A11X1750'
Tue Jan 31 12:18:46 CST 2012
Tue Jan 31 12:18:46 CST 2012
at com.oracle.ovm.mgr.action.ActionEngine.sendCommandToServer(ActionEngine.java:475)
at com.oracle.ovm.mgr.action.ActionEngine.sendUndispatchedServerCommand(ActionEngine.java:427)
at com.oracle.ovm.mgr.action.ActionEngine.sendServerCommand(ActionEngine.java:369)
at com.oracle.ovm.mgr.action.StoragePluginAction.createFileSystem(StoragePluginAction.java:864)
... 18 more
Caused by: com.oracle.ovm.mgr.api.exception.IllegalOperationException: OVMAPI_4004E Server Failed Command: storage_plugin_createFileSystem oracle.ocfs2.OCFS2.OCFS2Plugin 0004fb0000050000f36710bdcca530d4 /dev/mapper/SATA_WDC_WD5000BEKT-_WD-WX41A11X1750, Status: org.apache.xmlrpc.XmlRpcException: OSCPlugin.FileSystemBusyEx:'An ocfs2 filesystem already exists on /dev/mapper/SATA_WDC_WD5000BEKT-_WD-WX41A11X1750'
Tue Jan 31 12:18:46 CST 2012
at com.oracle.ovm.mgr.action.ActionEngine.sendAction(ActionEngine.java:753)
at com.oracle.ovm.mgr.action.ActionEngine.sendCommandToServer(ActionEngine.java:471)
... 21 more
FailedOperationCleanup
Starting failed operation 'File System Construct' cleanup on object 'fs_MyLocalRepository'
Complete rollback operation 'File System Construct' completed with direction=fs_MyLocalRepository
Rollbacker
Objects To Be Rolled Back
Object (IN_USE): [LocalFileServer] 0004fb0000090000865c3f26b528bc39 (Local FS OracleVS01)
Object (CREATED): [LocalFileSystem] 0004fb0000050000f36710bdcca530d4 (fs_MyLocalRepository)
Object (CREATED): [Repository] 0004fb0000030000182872191a913bac (MyLocalRepository)
Object (IN_USE): [StorageElement] 0004fb00001800004e793d6f05fced03 (SATA_WDC_WD5000BEKT-_WD-WX41A11X1750)
Write Methods Invoked
Class=InternalJobDbImpl vessel_id=5063 method=addTransactionIdentifier accessLevel=6
Class=LocalFileServerDbImpl vessel_id=4954 method=createFileSystem accessLevel=6
Class=LocalFileSystemDbImpl vessel_id=5072 method=setName accessLevel=6
Class=LocalFileSystemDbImpl vessel_id=5072 method=setFoundryContext accessLevel=6
Class=LocalFileSystemDbImpl vessel_id=5072 method=onPersistableCreate accessLevel=6
Class=LocalFileSystemDbImpl vessel_id=5072 method=setLifecycleState accessLevel=6
Class=LocalFileSystemDbImpl vessel_id=5072 method=setRollbackLifecycleState accessLevel=6
Class=LocalFileSystemDbImpl vessel_id=5072 method=setRefreshed accessLevel=6
Class=LocalFileSystemDbImpl vessel_id=5072 method=setBackingDevices accessLevel=6
Class=LocalFileSystemDbImpl vessel_id=5072 method=setUuid accessLevel=6
Class=LocalFileSystemDbImpl vessel_id=5072 method=setPath accessLevel=6
Class=LocalFileSystemDbImpl vessel_id=5072 method=setSimpleName accessLevel=6
Class=LocalFileSystemDbImpl vessel_id=5072 method=addFileServer accessLevel=6
Class=LocalFileSystemDbImpl vessel_id=5072 method=setStorageDevice accessLevel=6
Class=StorageElementDbImpl vessel_id=4972 method=addLayeredFileSystem accessLevel=6
Class=LocalFileSystemDbImpl vessel_id=5072 method=setSimpleName accessLevel=6
Class=LocalFileSystemDbImpl vessel_id=5072 method=createRepository accessLevel=6
Class=RepositoryDbImpl vessel_id=5078 method=setName accessLevel=6
Class=RepositoryDbImpl vessel_id=5078 method=setFoundryContext accessLevel=6
Class=RepositoryDbImpl vessel_id=5078 method=onPersistableCreate accessLevel=6
Class=RepositoryDbImpl vessel_id=5078 method=setLifecycleState accessLevel=6
Class=RepositoryDbImpl vessel_id=5078 method=setRollbackLifecycleState accessLevel=6
Class=RepositoryDbImpl vessel_id=5078 method=setRefreshed accessLevel=6
Class=RepositoryDbImpl vessel_id=5078 method=setDom0Uuid accessLevel=6
Class=RepositoryDbImpl vessel_id=5078 method=setSharePath accessLevel=6
Class=RepositoryDbImpl vessel_id=5078 method=setSimpleName accessLevel=6
Class=RepositoryDbImpl vessel_id=5078 method=setFileSystem accessLevel=6
Class=LocalFileSystemDbImpl vessel_id=5072 method=addRepository accessLevel=6
Class=RepositoryDbImpl vessel_id=5078 method=setManagerUuid accessLevel=6
Class=RepositoryDbImpl vessel_id=5078 method=setVersion accessLevel=6
Class=RepositoryDbImpl vessel_id=5078 method=addJobOperation accessLevel=6
Class=RepositoryDbImpl vessel_id=5078 method=setSimpleName accessLevel=6
Class=RepositoryDbImpl vessel_id=5078 method=setDescription accessLevel=6
Class=InternalJobDbImpl vessel_id=5063 method=setCompletedStep accessLevel=6
Class=InternalJobDbImpl vessel_id=5063 method=setAssociatedHandles accessLevel=6
Class=InternalJobDbImpl vessel_id=5063 method=setFailedOperation accessLevel=6
Class=LocalFileServerDbImpl vessel_id=4954 method=nextJobOperation accessLevel=6
Class=LocalFileSystemDbImpl vessel_id=5072 method=nextJobOperation accessLevel=6
Class=RepositoryDbImpl vessel_id=5078 method=nextJobOperation accessLevel=6
Class=StorageElementDbImpl vessel_id=4972 method=nextJobOperation accessLevel=6
Completed Step: ROLLBACK
Job failed commit (internal) due to OVMAPI_B000E Storage plugin command [storage_plugin_createFileSystem] failed for storage server [0004fb0000090000865c3f26b528bc39] failed with [com.oracle.ovm.mgr.api.exception.FailedOperationException: OVMAPI_4010E Attempt to send command: storage_plugin_createFileSystem to server: OracleVS01 failed. OVMAPI_4004E Server Failed Command: storage_plugin_createFileSystem oracle.ocfs2.OCFS2.OCFS2Plugin 0004fb0000050000f36710bdcca530d4 /dev/mapper/SATA_WDC_WD5000BEKT-_WD-WX41A11X1750, Status: OSCPlugin.FileSystemBusyEx:'An ocfs2 filesystem already exists on /dev/mapper/SATA_WDC_WD5000BEKT-_WD-WX41A11X1750'
Tue Jan 31 12:18:46 CST 2012
Tue Jan 31 12:18:46 CST 2012] OVMAPI_4010E Attempt to send command: storage_plugin_createFileSystem to server: OracleVS01 failed. OVMAPI_4004E Server Failed Command: storage_plugin_createFileSystem oracle.ocfs2.OCFS2.OCFS2Plugin 0004fb0000050000f36710bdcca530d4 /dev/mapper/SATA_WDC_WD5000BEKT-_WD-WX41A11X1750, Status: org.apache.xmlrpc.XmlRpcException: OSCPlugin.FileSystemBusyEx:'An ocfs2 filesystem already exists on /dev/mapper/SATA_WDC_WD5000BEKT-_WD-WX41A11X1750'
Tue Jan 31 12:18:46 CST 2012
Tue Jan 31 12:18:46 CST 2012
Tue Jan 31 12:18:46 CST 2012
com.oracle.ovm.mgr.api.exception.FailedOperationException: OVMAPI_B000E Storage plugin command [storage_plugin_createFileSystem] failed for storage server [0004fb0000090000865c3f26b528bc39] failed with [com.oracle.ovm.mgr.api.exception.FailedOperationException: OVMAPI_4010E Attempt to send command: storage_plugin_createFileSystem to server: OracleVS01 failed. OVMAPI_4004E Server Failed Command: storage_plugin_createFileSystem oracle.ocfs2.OCFS2.OCFS2Plugin 0004fb0000050000f36710bdcca530d4 /dev/mapper/SATA_WDC_WD5000BEKT-_WD-WX41A11X1750, Status: OSCPlugin.FileSystemBusyEx:'An ocfs2 filesystem already exists on /dev/mapper/SATA_WDC_WD5000BEKT-_WD-WX41A11X1750'
Tue Jan 31 12:18:46 CST 2012
Tue Jan 31 12:18:46 CST 2012] OVMAPI_4010E Attempt to send command: storage_plugin_createFileSystem to server: OracleVS01 failed. OVMAPI_4004E Server Failed Command: storage_plugin_createFileSystem oracle.ocfs2.OCFS2.OCFS2Plugin 0004fb0000050000f36710bdcca530d4 /dev/mapper/SATA_WDC_WD5000BEKT-_WD-WX41A11X1750, Status: org.apache.xmlrpc.XmlRpcException: OSCPlugin.FileSystemBusyEx:'An ocfs2 filesystem already exists on /dev/mapper/SATA_WDC_WD5000BEKT-_WD-WX41A11X1750'
Tue Jan 31 12:18:46 CST 2012
Tue Jan 31 12:18:46 CST 2012
Tue Jan 31 12:18:46 CST 2012
at com.oracle.ovm.mgr.action.StoragePluginAction.processException(StoragePluginAction.java:1325)
at com.oracle.ovm.mgr.action.StoragePluginAction.createFileSystem(StoragePluginAction.java:868)
at com.oracle.ovm.mgr.op.physical.storage.FileSystemConstruct.createFileSystem(FileSystemConstruct.java:57)
at com.oracle.ovm.mgr.op.physical.storage.FileSystemConstruct.action(FileSystemConstruct.java:49)
at com.oracle.ovm.mgr.api.job.JobEngine.operationActioner(JobEngine.java:193)
at com.oracle.ovm.mgr.api.job.JobEngine.objectActioner(JobEngine.java:264)
at com.oracle.ovm.mgr.api.job.InternalJobDbImpl.objectCommitter(InternalJobDbImpl.java:1090)
at sun.reflect.GeneratedMethodAccessor867.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at com.oracle.odof.core.AbstractVessel.invokeMethod(AbstractVessel.java:247)
at com.oracle.odof.core.AbstractVessel.invokeMethod(AbstractVessel.java:207)
at com.oracle.odof.core.BasicWork.invokeMethod(BasicWork.java:136)
at com.oracle.odof.command.InvokeMethodCommand.process(InvokeMethodCommand.java:100)
at com.oracle.odof.core.BasicWork.processCommand(BasicWork.java:81)
at com.oracle.odof.core.TransactionManager.processCommand(TransactionManager.java:751)
at com.oracle.odof.core.WorkflowManager.processCommand(WorkflowManager.java:401)
at com.oracle.odof.core.WorkflowManager.processWork(WorkflowManager.java:459)
at com.oracle.odof.io.AbstractClient.run(AbstractClient.java:42)
at java.lang.Thread.run(Thread.java:662)
Caused by: com.oracle.ovm.mgr.api.exception.FailedOperationException: OVMAPI_4010E Attempt to send command: storage_plugin_createFileSystem to server: OracleVS01 failed. OVMAPI_4004E Server Failed Command: storage_plugin_createFileSystem oracle.ocfs2.OCFS2.OCFS2Plugin 0004fb0000050000f36710bdcca530d4 /dev/mapper/SATA_WDC_WD5000BEKT-_WD-WX41A11X1750, Status: org.apache.xmlrpc.XmlRpcException: OSCPlugin.FileSystemBusyEx:'An ocfs2 filesystem already exists on /dev/mapper/SATA_WDC_WD5000BEKT-_WD-WX41A11X1750'
Tue Jan 31 12:18:46 CST 2012
Tue Jan 31 12:18:46 CST 2012
at com.oracle.ovm.mgr.action.ActionEngine.sendCommandToServer(ActionEngine.java:475)
at com.oracle.ovm.mgr.action.ActionEngine.sendUndispatchedServerCommand(ActionEngine.java:427)
at com.oracle.ovm.mgr.action.ActionEngine.sendServerCommand(ActionEngine.java:369)
at com.oracle.ovm.mgr.action.StoragePluginAction.createFileSystem(StoragePluginAction.java:864)
... 18 more
Caused by: com.oracle.ovm.mgr.api.exception.IllegalOperationException: OVMAPI_4004E Server Failed Command: storage_plugin_createFileSystem oracle.ocfs2.OCFS2.OCFS2Plugin 0004fb0000050000f36710bdcca530d4 /dev/mapper/SATA_WDC_WD5000BEKT-_WD-WX41A11X1750, Status: org.apache.xmlrpc.XmlRpcException: OSCPlugin.FileSystemBusyEx:'An ocfs2 filesystem already exists on /dev/mapper/SATA_WDC_WD5000BEKT-_WD-WX41A11X1750'
Tue Jan 31 12:18:46 CST 2012
at com.oracle.ovm.mgr.action.ActionEngine.sendAction(ActionEngine.java:753)
at com.oracle.ovm.mgr.action.ActionEngine.sendCommandToServer(ActionEngine.java:471)
... 21 more
End of Job
I don't understand why I always get the error message 'An ocfs2 filesystem already exists on /dev/mapper/SATA_WDC_WD5000BEKT-_WD-WX41A11X1750' since that disk is blank with no partition.
Please help me.
Daniel.
Hi Avi,
Blkid returns the following:
/dev/sda1: LABEL="/boot" UUID="00b66c82-c4e2-4067-bbc4-f22899fce856" TYPE="ext3"
/dev/sda2: LABEL="/" UUID="b6bc51a3-884a-4d9d-8bc1-8c68792cbe57" TYPE="ext3"
/dev/sda3: TYPE="swap" LABEL="SWAP-sda3" UUID="58f3de58-a13b-4239-9894-8dffa0856a1a"
/dev/sdb1: TYPE="ntfs"
/dev/sdb2: TYPE="ntfs"
/dev/sdc: LABEL="OVSd62418098e5d5" UUID="0004fb00-0005-0000-d3ad-62418098e5d5" TYPE="ocfs2"
/dev/mapper/SATA_ST750LX003-1AC1_W2001LHFp2: TYPE="ntfs"
/dev/mapper/SATA_ST750LX003-1AC1_W2001LHFp1: TYPE="ntfs"
/dev/mapper/SATA_WDC_WD5000BEKT-_WD-WX41A11X1750: LABEL="OVSd62418098e5d5" UUID="0004fb00-0005-0000-d3ad-62418098e5d5" TYPE="ocfs2"
/dev/sdd1: SEC_TYPE="msdos" LABEL="CLE" UUID="827A-C3D5" TYPE="vfat"
"CLE" is an attached USB jump drive.
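That blkid output confirms the manager's complaint: the 500GB drive carries a leftover ocfs2 signature (same label and UUID as /dev/sdc). A small sketch, not an official tool, that pulls the offending devices out of blkid output; once identified, the signature can be wiped (for example with dd over the first megabytes of the disk, which is destructive, so double-check the device first) and the repository job retried:

```python
import re

# Lines taken from the blkid output above (abbreviated).
BLKID_OUTPUT = '''\
/dev/sdc: LABEL="OVSd62418098e5d5" UUID="0004fb00-0005-0000-d3ad-62418098e5d5" TYPE="ocfs2"
/dev/mapper/SATA_ST750LX003-1AC1_W2001LHFp2: TYPE="ntfs"
/dev/mapper/SATA_WDC_WD5000BEKT-_WD-WX41A11X1750: LABEL="OVSd62418098e5d5" UUID="0004fb00-0005-0000-d3ad-62418098e5d5" TYPE="ocfs2"
'''

def stale_ocfs2_devices(blkid_text):
    """Return the devices blkid reports as carrying an ocfs2 signature."""
    devices = []
    for line in blkid_text.splitlines():
        m = re.match(r'(\S+):.*TYPE="ocfs2"', line)
        if m:
            devices.append(m.group(1))
    return devices

print(stale_ocfs2_devices(BLKID_OUTPUT))
```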
Thanks. -
Cannot create data source using custom third-party driver
Hi,
I've just installed WebLogic Server 10.3.6 and I'm having problems creating a generic data source with my own third-party JDBC driver, which I had no problems doing in WebLogic Server 10.0. This is what I did in 10.0:
Before starting the server I put my driver jar file (and any jar files it needed) in the lib folder of the domain user project, which WebLogic appends to the classpath during server startup.
inside weblogic console -
1. first page - provided a data source name and jndi name and selected 'other' for database type
2. second page - selected 'other' for jdbc driver
3. third page - deselected the global transactions
4. fourth page - provided database name, username, host name, port, and password.
5. fifth page - provided driver class, url, database user name and password (didn't bother with the test)
6. Selected target server
I then saved and activated the changes and was done.
This is what I did for 10.3.6:
I did the same thing as for 10.0 before server startup.
inside weblogic console -
1. first page - provided data source name and jndi name and selected 'other' for database type
2. second page - selected 'other' for jdbc driver
3. third page - deselected the support global transactions
4. fourth page - here's where things are different - the page only asked for database username and password, which I gave.
On this page I get a cryptic error saying 'errors must be corrected before proceeding', with no other message as to what these errors might be, either in the console or in the server's cmd window. I tried making changes to the provider authentication of the security realm, but no luck. I tried following the password-creation requirements, and I even tried proceeding to the next page without entering anything on this page, all with no luck. I have no idea what's going on.
Hope someone can help.
Sam.
I've fixed the class issue but am struggling a bit with SQL Server authentication. I'm running in mixed mode (it was originally set to Windows authentication, but I've modified the security setting), I've enabled TCP/IP and Named Pipes, and I've created a user (who can log in to SQL Server Mgmt Studio successfully), but I still get a connection refused error.
Any insight? Thanks.
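"Connection refused" happens below the authentication layer, so mixed-mode and user settings cannot fix it; it means nothing is listening on the host/port the driver uses (TCP/IP still disabled for the instance, a non-default port, or a firewall; SQL Server's default port is 1433). A plain-socket check, nothing WebLogic-specific, separates the two cases:

```python
import socket

def tcp_reachable(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds.

    A JDBC 'connection refused' usually means this low-level check
    fails too; authentication errors only appear once it passes.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demo against a port we control: a listening socket is reachable,
# and the same port refuses connections once the listener is closed.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))       # bind to any free port
srv.listen(1)
port = srv.getsockname()[1]
print(tcp_reachable("127.0.0.1", port))
srv.close()
print(tcp_reachable("127.0.0.1", port))
```

If this returns False for your SQL Server host and port, fix connectivity (SQL Server Configuration Manager, port number, firewall) before touching the security settings.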