How to create a data store with PermSize = 4096MB on HP-UX 64-bit?

Hi!
I use TimesTen 7.0.2:
TimesTen Release 7.0.2.0.0 (64 bit HPUX/IPF) (tt70_1:17001) 2007-05-02T05:22:15Z
Instance admin: root
Instance home directory: /opt/TimesTen/tt70_1
Daemon home directory: /var/TimesTen/tt70_1
Access control enabled.
I set PermSize = 4096MB for my new data store. Then I tried to create it:
ttIsql -connStr "DSN=tt_rddb1;UID=ttsys;PWD=ttsys;OraclePWD=ttsys;Overwrite=1" -e "exit;"
But the operation failed:
836: Cannot create data store shared-memory segment, error 22.
Can I create a data store of this size on HP-UX 64-bit?

Is largefiles enabled? I believe you can check with fsadm -F vxfs /filesystem
Also, please understand that 'PermSize' is not the only attribute affecting the size of the TimesTen shared memory segment. The actual resulting size is
PermSize + TempSize + LogBuffSize + Overhead
So you would need to configure shmmax to be > 4 GB. Have you tried setting it to (say) 8 GB, just for testing purposes, to see if it eliminates the error?
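A rough sketch of how this can be checked, assuming HP-UX 11i v2 or later where the kctune command is available (older releases use kmtune instead); the 8 GB value is only an illustrative test value and, depending on the release, a reboot may be needed before the change takes effect.
To view the current maximum shared memory segment size (in bytes):
$ kctune shmmax
To raise it to 8 GB for testing (as root):
$ kctune shmmax=8589934592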

Similar Messages

  • How to create a data store with PermSize > 512MB on WIN32?

    Hi!
    How do I create a data store with PermSize > 512MB on WIN32? If I set PermSize > 512MB on WIN32, the data store becomes invalid.

    Thanks for the details. As I mentioned, due to issues with the way Windows manages memory and address space, it is generally not possible to create a datastore larger than around 700 MB on WIN32. Sometimes you may be lucky and get close to 1 GB, but usually not. The issue is as follows: on Windows, a TimesTen datastore is a shared mapping created from memory backed by the paging file. This shared mapping must be mapped into the process address space as a contiguous range of addresses. So, if you have a 1 GB datastore then your process needs to have a contiguous 1 GB range of addresses free in order to be able to connect to (map) the datastore. Unfortunately, the default behaviour of Windows is to map DLLs into a process address space all over the place, and any process that uses any significant number of DLLs is very unlikely to have a contiguous free address range larger than 500-700 MB.
    This problem does not exist with other O/S such as Unix or Linux, nor does it exist with 64-bit Windows. So, if you need to use a cache or datastore larger than around 700 MB you need to use either 64-bit Windows or another O/S. Note that even on 32-bit Linux/Unix, TimesTen datastores are limited to a maximum size of 2 GB. If you need more than 2 GB you need to use a 64-bit O/S.
    Chris

  • How to create data stores in ODI?

    Hi all,
    I am new to this ODI part. Can anyone please help me with how to create data stores in ODI?
    A prompt reply will be highly appreciated.
    Thanks
    Saurabh.

    What do you mean by "create datastores"?
    If you mean you want to reverse engineer existing tables from a database, then the phrase used in the ODI docs is "reverse engineering". If you mean to create new tables in a database, then:
    1) ODI is not meant to be a database design tool.
    2) Using the "diagrams" node under a data model, you are able to use the "Common Format Designer" (CFD) tool to design and create the structure. The CFD tool is a simple ER-diagram tool, but importantly, if you drag structures in from one model to another, it remembers where they came from, allowing automatic generation of interfaces, and it automatically translates the data types.

  • Cannot create data store shared-memory segment error

    Hi,
    Here is some background information:
    [ttadmin@timesten-la-p1 ~]$ ttversion
    TimesTen Release 11.2.1.3.0 (64 bit Linux/x86_64) (cmttp1:53388) 2009-08-21T05:34:23Z
    Instance admin: ttadmin
    Instance home directory: /u01/app/ttadmin/TimesTen/cmttp1
    Group owner: ttadmin
    Daemon home directory: /u01/app/ttadmin/TimesTen/cmttp1/info
    PL/SQL enabled.
    [ttadmin@timesten-la-p1 ~]$ uname -a
    Linux timesten-la-p1 2.6.18-164.6.1.el5 #1 SMP Tue Oct 27 11:28:30 EDT 2009 x86_64 x86_64 x86_64 GNU/Linux
    [root@timesten-la-p1 ~]# cat /proc/sys/kernel/shmmax
    68719476736
    [ttadmin@timesten-la-p1 ~]$ cat /proc/meminfo
    MemTotal: 148426936 kB
    MemFree: 116542072 kB
    Buffers: 465800 kB
    Cached: 30228196 kB
    SwapCached: 0 kB
    Active: 5739276 kB
    Inactive: 25119448 kB
    HighTotal: 0 kB
    HighFree: 0 kB
    LowTotal: 148426936 kB
    LowFree: 116542072 kB
    SwapTotal: 16777208 kB
    SwapFree: 16777208 kB
    Dirty: 60 kB
    Writeback: 0 kB
    AnonPages: 164740 kB
    Mapped: 39188 kB
    Slab: 970548 kB
    PageTables: 10428 kB
    NFS_Unstable: 0 kB
    Bounce: 0 kB
    CommitLimit: 90990676 kB
    Committed_AS: 615028 kB
    VmallocTotal: 34359738367 kB
    VmallocUsed: 274804 kB
    VmallocChunk: 34359462519 kB
    HugePages_Total: 0
    HugePages_Free: 0
    HugePages_Rsvd: 0
    Hugepagesize: 2048 kB
    extract from sys.odbc.ini
    [cachealone2]
    Driver=/u01/app/ttadmin/TimesTen/cmttp1/lib/libtten.so
    DataStore=/u02/timesten/datastore/cachealone2/cachealone2
    PermSize=14336
    OracleNetServiceName=ttdev
    DatabaseCharacterset=WE8ISO8859P1
    ConnectionCharacterSet=WE8ISO8859P1
    [ttadmin@timesten-la-p1 ~]$ grep SwapTotal /proc/meminfo
    SwapTotal: 16777208 kB
    Though we have around 140GB of memory available and 64GB for shmmax, we are unable to increase the PermSize to anything more than 14GB. When I changed it to PermSize=15359, I got the following error.
    [ttadmin@timesten-la-p1 ~]$ ttIsql "DSN=cachealone2"
    Copyright (c) 1996-2009, Oracle. All rights reserved.
    Type ? or "help" for help, type "exit" to quit ttIsql.
    connect "DSN=cachealone2";
    836: Cannot create data store shared-memory segment, error 28
    703: Subdaemon connect to data store failed with error TT836
    The command failed.
    Done.
    I am not sure why this is not working, considering we have got 144GB RAM and 64GB shmmax allocated! Any help is much appreciated.
    Regards,
    Raj

    Those parameters look ok for a 100GB shared memory segment. Also check the following:
    ulimit - a mechanism to restrict the amount of system resources a process can consume. Your instance administrator user (the user who installed Oracle TimesTen) needs to be allocated enough lockable memory to load and lock your Oracle TimesTen shared memory segment.
    This is configured with the memlock entry in the OS file /etc/security/limits.conf for the instance administrator.
    To view the current setting run the OS command
    $ ulimit -l
    and to set it to a value dynamically use
    $ ulimit -l <value>.
    Once changed you need to restart the TimesTen master daemon for the change to be picked up.
    $ ttDaemonAdmin -restart
    Beware: sometimes ulimit is set in the instance administrator's ~/.bashrc or ~/.bash_profile, which can override what is set in /etc/security/limits.conf.
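    As a minimal sketch, assuming your instance administrator is ttadmin (as shown in your ttversion output) and rounding the required lockable memory up to 16 GB (a PermSize of roughly 15 GB plus TempSize, log buffer and overhead; the exact figure depends on your DSN settings), the /etc/security/limits.conf entries would look like this, with the value in KB:
    ttadmin hard memlock 16777216
    ttadmin soft memlock 16777216
    After saving the file, log in again so the new limit is picked up, confirm it with ulimit -l, and then restart the daemon with ttDaemonAdmin -restart as described above.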
    If this is ok then it might be related to Hugepages. If TT is configured to use Hugepages then you need enough Hugepages to accommodate the 100GB shared memory segment. TT is configured for Hugepages if the following entry is in the /u01/app/oracle/EXALYTICS_MWHOME/TimesTen/tt1122/info/ttendaemon.options file:
    -linuxLargePageAlignment 2
    So if configured for Hugepages please see this example of how to set an appropriate Hugepages setting:
    Total the amount of memory required to accommodate your TimesTen database from /u01/app/oracle/EXALYTICS_MWHOME/TimesTen/tt1122/info/sys.odbc.ini
    PermSize+TempSize+LogBufMB+64MB Overhead
    For example consider a TimesTen database of size:
    PermSize=250000 (unit is MB)
    TempSize=100000
    LogBufMB=1024
    Total Memory = 250000+100000+1024+64 = 351088MB
    The Hugepages pagesize on the Exalytics machine is 2048KB or 2MB. Therefore divide the total amount of memory required above in MB by the pagesize of 2MB. This is now the number of Hugepages you need to configure.
    351088/2 = 175544
    As user root edit the /etc/sysctl.conf file
    Add/modify vm.nr_hugepages= to be the number of Hugepages calculated.
    vm.nr_hugepages=175544
    Add/modify vm.hugetlb_shm_group = 600
    This parameter is the group id of the TimesTen instance administrator. In the Exalytics system this is oracle. Determine the group id while logged in as oracle with the following command. In this example it’s 600.
    $ id
    uid=700(oracle) gid=600(oinstall) groups=600(oinstall),601(dba),700(oracle)
    As user root edit the /etc/security/limits.conf file
    Add/modify the oracle memlock entries so that the fourth field equals the total amount of memory for your TimesTen database. The unit for this value is KB. For example this would be 351088*1024=359514112KB
    oracle hard memlock 359514112
    oracle soft memlock 359514112
    THIS IS VERY IMPORTANT: in order for the above changes to take effect, you need to either shut down the BI software environment (including TimesTen) and reboot, or issue the following OS command to make the changes permanent.
    $ sysctl -p
    Please note that dynamic setting (including using 'sysctl -p') of vm.nr_hugepages while the system is up may not give you the full number of Hugepages that you have specified. The only guaranteed way to get the full complement of Hugepages is to reboot.
    Check that Hugepages has been set up correctly; look for HugePages_Total:
    $ cat /proc/meminfo | grep Huge
    Based on the example values above you would see the following:
    HugePages_Total: 175544
    HugePages_Free: 175544

  • 925: Cannot create data store semaphores (Invalid argument)

    I'm trying to connect to TimesTen, but I'm getting this error.
    I have looked at other similar discussions, but I still could not solve the problem.
    [timesten@atd info]$ ttisql "dsn=tpch"
    Copyright (c) 1996, 2013, Oracle and/or its affiliates. All rights reserved.
    Type ? or "help" for help, type "exit" to quit ttIsql.
    connect "dsn=tpch";
      925: Cannot create data store semaphores (Invalid argument)
      703: Subdaemon connect to data store failed with error TT925
    The command failed.
    Done.
    Here is my information.
    [tpch]
    Driver=/home_sata/timesten/TimesTen/tt1122/lib/libtten.so
    DataStore=/home_sata/timesten/TimesTen/tt1122/tpch/tpch
    LogDir=/home_sata/timesten/TimesTen/tt1122/tpch/logs
    PermSize=1024
    TempSize=512
    PLSQL=1
    DatabaseCharacterSet=US7ASCII
    kernel.sem = 400 32000 512 5029
    kernel.shmmax=68719476736
    kernel.shmall=16777216
    [timesten@atd info]$ cat /proc/meminfo
    MemTotal:       297699764 kB
    MemFree:        96726036 kB
    Buffers:          582996 kB
    Cached:         155831636 kB
    SwapCached:            0 kB
    Active:         115729396 kB
    Inactive:       78767560 kB
    Active(anon):   44040440 kB
    Inactive(anon):  8531544 kB
    Active(file):   71688956 kB
    Inactive(file): 70236016 kB
    Unevictable:           0 kB
    Mlocked:               0 kB
    SwapTotal:      112639992 kB
    SwapFree:       112639992 kB
    Dirty:               160 kB
    Writeback:             0 kB
    AnonPages:      38082348 kB
    Mapped:         15352480 kB
    Shmem:          14489676 kB
    Slab:            3993152 kB
    SReclaimable:    3826768 kB
    SUnreclaim:       166384 kB
    KernelStack:       18344 kB
    PageTables:       245352 kB
    NFS_Unstable:          0 kB
    Bounce:                0 kB
    WritebackTmp:          0 kB
    CommitLimit:    261457104 kB
    Committed_AS:   74033552 kB
    VmallocTotal:   34359738367 kB
    VmallocUsed:      903384 kB
    VmallocChunk:   34205870424 kB
    HardwareCorrupted:     0 kB
    AnonHugePages:  35538944 kB
    HugePages_Total:      32
    HugePages_Free:       32
    HugePages_Rsvd:        0
    HugePages_Surp:        0
    Hugepagesize:       2048 kB
    DirectMap4k:        6384 kB
    DirectMap2M:     2080768 kB
    DirectMap1G:    299892736 kB
    ------ Shared Memory Limits --------
    max number of segments = 4096
    max seg size (kbytes) = 67108864
    max total shared memory (kbytes) = 67108864
    min seg size (bytes) = 1

    The error message suggests that the system is running out of semaphores although plenty seem to be configured:
    kernel.sem = 400 32000 512 5029
    Could it be that other programs on this machine, running as this user, are also using semaphores?
    Have you made changes to the kernel parameters but not made them permanent with
    # /sbin/sysctl -p
    or a reboot?
    If you have done a # /sbin/sysctl -p, have you recycled the TT daemon
    $ ttDaemonAdmin -restart
    so that TT picks up the new settings?
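    As a quick sketch of how to compare the live kernel settings with what you intended to configure (standard OS commands, nothing TimesTen-specific):
    $ /sbin/sysctl kernel.sem
    shows the values actually in effect (SEMMSL, SEMMNS, SEMOPM, SEMMNI in that order),
    $ ipcs -ls
    shows the semaphore limits as the kernel reports them, and
    $ ipcs -s
    lists the semaphore arrays currently allocated. If the live values differ from what is in /etc/sysctl.conf, the change was never loaded.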
    Tim

  • 836: Cannot create data store shared-memory segment, error 22

    Hi,
    I am hoping that there is an active TimesTen user community out there who could help with this, or the TimesTen support team who hopefully monitor this forum.
    I am currently evaluating TimesTen for a global investment organisation. We currently have a large data warehouse, where we utilise summary views and query rewrite, but we have isolated some data that we would like to store in memory and then report on through a J2EE website.
    We are evaluating TimesTen versus developing our own custom cache. Obviously, we would like to go with a packaged solution, but we need to ensure that there are no limits in relation to maximum size. Looking through the documentation, it appears that the only limit on a 64-bit system is the actual physical memory on the box. That sounds good, but we want to prove it, since we would like to see how the application scales when we store about 30GB (the limit on our UAT environment is 32GB). The ultimate goal is to see if we can store about 50-60GB in memory.
    Is this correct? Or are there any caveats in relation to this?
    We have been able to get our data store to hold 8GB of data, but want to increase this. I am assuming that the following error message is due to us not changing /etc/system on the box:
         836: Cannot create data store shared-memory segment, error 22
         703: Subdaemon connect to data store failed with error TT836
    Can somebody from the user community, or an Oracle TimesTen support person, recommend what should be changed above to fully utilise the 32GB of memory and the 12 processors on the box?
    It is quite a big deal for us to bounce the UAT Unix box, so I want to be sure that I have factored in all changes that would ensure the following:
    * Existing Oracle Database instances are not adversely impacted
    * We are able to create a data store which is able to fully utilise the physical memory on the box
    * We don't need to change these settings for quite some time, and can still complete our evaluation
    We are currently in discussion with our in-house Oracle team and need to complete this process before contacting Oracle directly, but help with the above request would help speed this process up.
    The current /etc/system settings are below, and I have put in the current machine's settings as comments at the end of each line.
    Can you please provide the recommended settings to fully utilise the existing 32gb on the box?
    Machine
    ## I have listed the minimum prerequisites for TimesTen and contrasted them with the machine's current settings:
    SunOS uatmachinename 5.9 Generic_118558-11 sun4us sparc FJSV,GPUZC-M
    FJSV,SPARC64-V
    System Configuration: Sun Microsystems sun4us
    Memory size: 32768 Megabytes
    12 processors
    /etc/system
    set rlim_fd_max = 1080                # Not set on the machine
    set rlim_fd_cur=4096               # Not set on the machine
    set rlim_fd_max=4096                # Not set on the machine
    set semsys:seminfo_semmni = 20           # machine has 0x42, Decimal = 66
    set semsys:seminfo_semmsl = 512      # machine has 0x81, Decimal = 129
    set semsys:seminfo_semmns = 10240      # machine has 0x2101, Decimal = 8449
    set semsys:seminfo_semmnu = 10240      # machine has 0x2101, Decimal = 8449
    set shmsys:shminfo_shmseg=12           # machine has 1024
    set shmsys:shminfo_shmmax = 0x20000000     # machine has 8,589,934,590. The hexadecimal translates into 536,870,912
    $ /usr/sbin/sysdef | grep -i sem
    sys/sparcv9/semsys
    sys/semsys
    * IPC Semaphores
    66 semaphore identifiers (SEMMNI)
    8449 semaphores in system (SEMMNS)
    8449 undo structures in system (SEMMNU)
    129 max semaphores per id (SEMMSL)
    100 max operations per semop call (SEMOPM)
    1024 max undo entries per process (SEMUME)
    32767 semaphore maximum value (SEMVMX)
    16384 adjust on exit max value (SEMAEM)

    Hi,
    I work for Oracle in the UK and I manage the TimesTen pre-sales support team for EMEA.
    Your main problem here is that the value for shmsys:shminfo_shmmax in /etc/system is currently set to 8 GB, thereby limiting the maximum size of a single shared memory segment (and hence TimesTen datastore) to 8 GB. You need to increase this to a suitable value (maybe 32 GB in your case). While you are doing that, it would be advisable to increase any of the other kernel parameters that are currently lower than recommended up to the recommended values. There is no harm in increasing them, other than possibly a tiny increase in kernel resources, but with 32 GB of RAM I don't think you need be concerned about that...
    You should also be sure that the system has enough swap space configured to support a shared memory segment of this size. I would recommend that you have at least 48 GB of swap configured.
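    As a rough sketch only, based on the prerequisite values you listed above and the 32 GB suggestion (please confirm the exact figures against the TimesTen installation guide for your release), the /etc/system entries might look like this:
    set shmsys:shminfo_shmmax = 0x800000000     # 32 GB maximum shared memory segment
    set semsys:seminfo_semmsl = 512
    set semsys:seminfo_semmns = 10240
    set semsys:seminfo_semmnu = 10240
    A reboot is required for /etc/system changes to take effect on Solaris.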
    TimesTen should detect that you have a multi-CPU machine and adjust its behaviour accordingly but if you want to be absolutely sure you can set SMPOptLevel=1 in the ODBC settings for the datastore.
    If you want more direct assistance with your evaluation going forward then please let me know and I will contact you directly. Of course, you are free to continue using this forum if you would prefer.
    Regards, Chris

  • Creating Data Server with DB2

    Hello Experts,
    I am creating data servers with a DB2 database in xMII 12.1.
    This is the details:
    Connector : IDBC
    Connector type : SQL
    JDBC driver : com.inet.tds.TdsDriver
    Server Package : com.sap.xmii.Illuminator.connectors.IDBC
    Server Url : jdbc:inetdae:BGC125:5912?database=MD1&sql7=true
    When I check the status, it shows as stopped.
    I have checked in NWA, and the JDBC driver is green.
    Please help me with this.
    Regards,
    Dipak

    Thanks for the quick reply, John. I am trying to get user and group information, and also doing some complex manipulation. I need this to verify user access and some other things. I will not always have access to HSS and cannot always export the information, thus the need to use the HSS Java API.
    I'm trying to pull it out of HSS and put it into an Oracle table in a standard format where I'll use the information and do reporting off of it. I've developed a Java class that migrates the data over, but it's crude and I'd like to move it into ODI with some of our projects that are currently doing the same thing with other ERPs such as Ebiz and PeopleSoft.
    If I can recode some of the Java in ODI, how would I go about doing that? Where would I include the libraries, etc.? Is there a tutorial or a place where I can get started on that?
    Thanks for all the help.

  • How to create and work with Z output to meet the client requirement?

    Hi gurus,
    I am an SD functional consultant and need your help.
    Please explain to me how to create and work with Z output.
    How do we arrange and change the fields in the header and footer,
    and where and how do we make changes in layout settings and SAP scripts to meet the user requirements?
    Please forward a functional or technical spec for Z output.
    Points will be rewarded.
    Thanks & regards,
    Shabnum

    Hi Shabnum,
    I hope you can do it.
    Go to SE71, enter the form name, and click Change.
    1) Click the Page Window button and identify the header and footer windows.
    2) Single-click on the header window and click the Change button (pencil symbol).
    3) Identify the fields and change the order of the fields.
    I hope this will help solve your issue.
    Regards,
    SaiRam

  • How to create a DLL with a Python module?

    Hello,
    I am a beginner in TestStand,
    I would like to know how to create a DLL with a Python module, because to create a sequence we must have a DLL.
    Thank you for your answer
    Vincent

    Hello,
    I would like to use TestStand with a Python module. I believed we had to use a DLL to execute a test, but I was wrong, because I can use an .exe to run a test.
    Now I know how to use TestStand with a Python module, but I get an error. I wonder what error 255 is?
    When I use an .exe written in C, I get no error.
    Thank you for your answer.
    Vincent
    Attachments:
    Setup_Test.jpg 244 KB
    Report.jpg 258 KB

  • How to create and work with Z output to meet user requirements

    Hi gurus,
    I am an SD functional consultant and need your help.
    Please explain to me how to create and work with Z output.
    How do we arrange and change the fields in the header and footer,
    and where and how do we make changes in layout settings and SAP scripts to meet the user requirements?
    Please forward a functional or technical spec for Z output.
    Points will be rewarded.
    Thanks & regards,
    Shabnum

    Hi,
    From SPRO, do all the steps.
    Go to SD -> Basic Functions -> Output Control -> Output Determination -> Output Determination by Condition Technique -> Maintain Output Determination for Sales Documents.
    Here define everything, such as access sequences, output types, and condition tables, and assign them to the program and forms.
    From SE71, copy the script to a Z script and modify it to suit your requirements; the same Z script has to be assigned to the output type and program in the NACE transaction.
    Reward points if useful.
    Regards,
    Anji

  • How to create 1 DVD with 2 files

    How do I create 1 DVD with 2 files?

    As you probably know,  DVD creation in Compressor and FCPX is very limited – and it only accommodates single tracks.
    But there is no reason why you can't put multiple movies on a single timeline in FCP. Separate them with inserted gaps and add chapter markers to provide some level of navigation among them.
    Good luck.
    Russ

  • How to create a date variable with an interval in VC

    Hi Everyone,
    I have 2 questions:
    My scenario:
    I am using a BI 7.0 query which has some variables. I want the same variables to be displayed in the VC output.
    Question 1:
    I know how to bring these variables into the variable screen, but when we use these queries, don't the variables in the variable screen automatically ask for input?
    I tried it, but it is not happening automatically.
    There are 2 inputs for queries with variables; I tried using both, but it is not working properly. Can anyone tell me whether this is possible?
    Question 2:
    In this variable screen, I have to select a date in interval format, but I don't know how to use a variable with an interval (date with interval format).
    I hope someone might have come across the same scenario. If so, please share the solution with me.
    Regards,
    Chan

    OK, let me be very clear; I think I confused you.
    As you mentioned, I have done everything at the query level.
    I have created a query with a variable which is an interval-based variable (date).
    When I execute the query, it asks for the dates to display the in-between data.
    I gave the inputs and the data is displayed properly. (So far, everything I have mentioned is in Query Designer.)
    In VC, I have used this query and it has two ports named INPUT and VARIABLE. I know that I need to give input in variables. I selected the calendar month variable from the list to display. After this I deployed the model. In the output screen the table is displayed and one text box is also available for the date input. I know that here I have to give the date in the same format as in the Query Designer output. I tried many ways to give input, even selecting other options like a date picker, but it gives the following error: Variable expects interval values; enter an interval.
    The date format which I am using in the query is "MM.YYYY" and I am using the same format in VC too.
    Now I hope that you can understand better.
    I want to know whether there is any other way to enter an interval value, or what mistake I am making in the scenario explained above?
    Regards,
    Chan

  • How to create a Date relationship between Fact and DimDate in a Tabular model?

    I have two tables in a SQL Server 2014 SSAS Tabular model.
    DimDate (defined as a date table) has a date column (format is 1.7.2010 00:00:00). This table has been imported from AdventureWorks.
    FactSales has a SalesDate column (format is 25.03.2015 18:08:05). This table has been imported from Excel. The column is defined as a date in Excel.
    When I set the Data Type as Date in Tabular, I get an error:
    "Datatype conversion failed for table...Value:'25.03.2015 18:08:05'"
    I have tried to create calculated columns like =DATEVALUE([SalesDate]) and =DATE(YEAR([SalesDate]),MONTH([SalesDate]),DAY([SalesDate])), but I am still getting an error.
    What should I do so that I can create a relationship between the date columns?
    Kenny_I

    Hi Kenny_I,
    According to your description, you fail to convert the data type into Date after creating a calculated column with an expression. Right?
    In Analysis Services, the Tabular model will detect the source data type to determine which data type it can be converted to. If you want to convert data into the Date type, no matter whether this data comes from a database or a file, you must make sure this data can be recognized as a date in SQL Server. In this scenario, the format '25.03.2015 18:08:05' can't be recognized as a date even if you apply DAX functions. So please change the format of the column in Excel to something like "03.25.2015 18:08:05".
    PS: For testing, you can create a temp table and insert sample text into a date column, because the date format in SQL Server depends on the locale selected during installation.
    After retrieving data from the data source, once all values within the column are in the correct format, you can change the data type to Date and select the format you expect.
    If you have any question, please feel free to ask.
    Best Regards,
    Simon Hou
    TechNet Community Support

  • Connecting to an AOP data store with multiple NICs

    I have a situation where I need the computer running DIAdem to be connected to two different networks. I have two NICs installed in the machine; call them NIC A and NIC B. The reason for this is that I am using DIAdem to access an AOP server on an AVL PUMA machine. If I connect PUMA to the LAN, PUMA occasionally crashes, probably something to do with the IP address. So, PUMA needs to be on a network that does not have a router. My DIAdem computer needs to communicate with PUMA, and with the local shared drives on our company network.
    Anyway, I have the company LAN hooked up on NIC A, and the PUMA network hooked up on NIC B. DIAdem fails to connect when both networks are connected, but connects to the data store with no problem when only the PUMA network is hooked up. Any ideas how to fix this? It seems DIAdem is looking at NIC A instead of NIC B, as it should.
    For reference, when both NICs are hooked up, I can access shared drives and ping computers on both separate networks without a problem, so it's not a connection issue.
    Any help would be appreciated.
    -Russ

    I assume both NICs have different IP ranges.
    - Did you use the IP address instead of the server name?
    - Do you use RPC or CORBA to access the PUMA server?
    - Which DIAdem version?
    Can you create a log file when the connection fails?
    Greetings
    Andreas

  • Cannot Create Data Repository with VM Server (3.1.1) Cluster

    Dear All,
    I have 3 servers (one for VM Manager, two for VM Servers) and 1 storage array (Sun Storage 6180, Fibre Channel), with a volume mapped as the default Storage Domain.
    I am trying to set up my 2 VM Servers as a cluster in VM Manager, but I run into the problems below:
    Case 1: I can create a clustered server pool by selecting Clustered Server Pool, selecting a physical disk, and adding the 2 servers, but when I then try to create a data repository there is a problem: on the Create a Data Repository page, Select Physical Disk is blank and no physical disk is shown.
    Case 2: When I create the server pool with the Clustered Server Pool box unchecked (no cluster) and add the 2 VM servers, then create a data repository, the physical disk is shown but I get an error message:
    "OVMRU_002030E Cannot create OCFS2 file system with local file server: Local FS vmserver1. Its server is not in a cluster
    Wed Dec 26 01:27:12 ICT 2012"
    Please kindly give advice for this.
    Thanks and regards,
    Vandy

    1. So you're trying to create a server pool and a storage repository on a single LUN that is direct-attached? Doesn't the software on your 6180 allow you to present multiple LUNs from a disk group? Think about it: you create a clustered server pool on a single LUN and then try to use that same LUN for the repository.....
    2. How are both hosts attached? Are there multiple HBA cards in your 6180? If so, how many? If you have redundant cards you should be using multipathing for redundancy.
