Joining Objects

Mea Culpa Mea Culpa (Or whatever it is you yell when you just plain give up)
I'm using Illustrator 10 (CS). I'm not doing anything extensive, basically just isometric line-drawings.
I'm trying to show a hole in a metal plate. The plate is thin enough that both edges of the hole can be seen. I created a circle (circle1), copied it (circle2), and aligned the two. I added anchors so I could delete the 'unseen' portions of circle2, turning it into Arc2. Now I have a circle and an arc that touches it in two places. So far so good, until I try to make them into one shape/object/path/whatever. I ended up cutting up circle1 so I could join Arc2 to Arc1 (to make a crescent I can 'fill'), then nudged the rest of circle1 to APPEAR to touch, giving the appearance of a hole. So I'm not dead in the water, but am I really unable to simply connect, say, the end of a path to the middle of another path?
My concern stems from the fact that, when all is said and done, I need to create complete renderings of individual parts, then 'stack' them for the 'all put together' view.
So, two questions:
1. I figured out how to join the ENDS of two paths; how can I join the end of a path to 'not the end' of another (say, to connect the side of a cylinder to the top)?
2. I've been hunting around without success; is there a tutorial that leans toward simple line art (looks like CAD work)?
Thanks
Shane

Place them in the same group: select both shapes, right-click, and choose Group from the menu.
Andy

Similar Messages

  • "Join" Objects through Webservices

    I'm trying to find a fast way to join objects through webservices.
    What do I need?
    -An extract of the recipients from a campaign joined with contacts
    What do I have?
    -I have an extract of the recipients of the campaign...
    What's left?
    -Now I have the IDs of the contacts, and I could get the information for the contacts one by one. This means that if I have 100 recipients, I end up making 101 webservice calls.
    My question?
    Is there a way to get all the contacts with one webservice call, containing the needed info from the recipients?

    Probably not. Campaign - Contact is a many-to-many relationship. You can query contacts where the 'SourceCampaignName' field equals some campaign, but that is not the same as what you want.

  • Joining object types

    Hi Experts
    I am very new to Crystal Reports. Could anyone suggest how I can fetch values when I have to join two object types (or tables), and also how I can pass the prompted value to the query?
    please suggest
    Edited by: tarun.sharma on Mar 6, 2009 6:49 AM

    Hi Tarun
    When you create a report, you first select the connection type, e.g. ODBC.
    For example, you create a report based on the Xtreme sample database that comes with Crystal Reports.
    You create a DSN that points to the Xtreme sample database.
    Open Crystal Reports -> Create New Report -> In the Database Expert, select the DSN and you get the available tables -> If you select two or more tables, Crystal automatically joins them by name -> However, you can also adjust the joins manually.
    Note: if the join is not correct, you will not get any data.
    You can refer to the Crystal Reports help for more information, or download the Crystal Reports user guide from help.sap.com.
    Hope this helps!!
    Regards
    Sourashree

  • Best way to JOIN with OWB 10.2

    Hi Gurus,
    I'm a newbie with OWB and I apologize for my English. I have the following issue: I'm using Oracle Warehouse Builder and I want to load data into a new table, Table3, from two distinct source tables, Table1 and Table2.
    It should look like this: Table3.col1 = Table1.col2; Table3.col2 = Table1.col3; Table3.col3 = Table2.colb.
    I mean it is a kind of JOIN, but the columns of Table3 have to be exactly these columns from Table1 and Table2 and no other combination.
    Table1 : col1 | col2 | col3
    Table2: cola | colb | colc
    Table3: Table1.col2 | Table1.col3 | Table2.colb
    Can someone help me?
    Thx

    Vinzsanity,
    The problem is that the query you are trying to model requires something more than the simple join you seem to hope for.
    Joining based on simple table row order is not something relational databases are really designed to do well; they join on values. For example, if you simply try to write basic SQL to make your join, you discover the problem:
    select a.col1, b.colb
    from   table1 a,
           table2 b
    where  a.rownum = b.rownum;
    What you get is an ORA-01747 error, because the ROWNUM pseudocolumn is not valid in this context. But if you don't find a way to join on row order, you get the cross product, which is NOT what you want.
    The simplest pure SQL join to get your results as described in your sample data is:
    select a.col1, b.cola
    from   (select rownum rnum, s.* from table1 s) a,
           (select rownum rnum, t.* from table2 t) b
    where  a.rnum(+) = b.rnum;
    But this only implies a possible one-way outer join to meet your described data sample. If you want a full outer join to handle the possibility that either table could have more or fewer rows than the other, then:
    select a.col1, b.cola
    from   (select rownum rnum, s.* from table1 s) a
           full outer join
           (select rownum rnum, t.* from table2 t) b
           on (a.rnum = b.rnum);
    Now, clearly this is NOT going to be modelled as just a simple join in OWB. First, you need to find a way to add the rownum pseudocolumn to both table result sets in order to be able to join on them. For example, you could use an expression object for each table, pass all of the fields straight from the expression input to the expression output, and then add a rownum column to the output as well. Then join the two with a joiner defined as a full outer join on the two rownum fields and map the required columns to the target table.
    Or, you could create a view:
    Create or replace view table1_and_2_outer_join as
    select a.*, b.*
    from   (select rownum rnum1, s.* from table1 s) a
           full outer join
           (select rownum rnum2, t.* from table2 t) b
           on (a.rnum1 = b.rnum2);
    And use that view as the source object in your mapping.
    Or create two views:
    Create or replace view table1_with_rownum as
    select rownum rnum1, s.* from table1 s;
    Create or replace view table2_with_rownum as
    select rownum rnum2, t.* from table2 t;
    And then use the two views as your sources and join them together using a joiner object defined as a full outer join on the rownum values.
    Any of those three options will get you what you seem to want.
    Cheers,
    Mike
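    For reference, here is a minimal sketch of the full-outer-join-on-ROWNUM idea written to return exactly the Table3 columns requested at the top of this thread (Table1.col2, Table1.col3, Table2.colb); the table and column names are taken from the sample data above, and the usual caveat applies that ROWNUM without an ORDER BY gives no guaranteed row order:
    select a.col2 as col1,   -- Table3.col1 = Table1.col2
           a.col3 as col2,   -- Table3.col2 = Table1.col3
           b.colb as col3    -- Table3.col3 = Table2.colb
    from   (select rownum rnum, t1.* from table1 t1) a
           full outer join
           (select rownum rnum, t2.* from table2 t2) b
           on (a.rnum = b.rnum);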

  • Cannot join tables used in the workbook. Join "" not found in the EUL

    Does anybody have any idea on this error?
    We are getting it from our new server. We exported the EUL from the old server and imported it into the new one.
    Thanks,
    Dave

    I was just looking at the Disco Admin documentation on technet for version 9 to see if there's any major difference between v4, v9 and v10 and it looks about the same.
    If you're exporting from the Admin program directly, then I assume you chose to export the entire EUL?
    I usually use the batch (DOS) version, and I've copied the following text from the export command-line help; right at the top you can see how it's supposed to handle joins.
    Did you not receive any kind of warning when importing the .eex file (i.e. something didn't exist for the join to resolve, etc.)?
    Also, was this a new EUL or an existing one?
    If existing, did it warn you that a folder might already exist, so it changed the name of the folder by appending a number (i.e. _1)?
    ==========================================================
    /export (EUL objects)
    This command enables you to export EUL objects to a Discoverer export file (EEX file). You can selectively export individual EUL objects (e.g. folders, business areas, functions) or entire EULs (using the /all modifier). When you import multiple files, Discoverer automatically resolves references between the files. For example, you can export the Emp folder in fileA.eex and the Dept folder in fileB.eex. If Emp and Dept are joined, the join information will actually be in both files, but neither file contains the information for both folders. If you import both files, the join will be recreated when the second file is processed.
    Information Details
    Syntax:
    /export <filename> [<bus_area_name>]
    Or
    /export <filename> <modifier(s)> [identifier]
    Modifiers:
    "/all"
    /asm_policy <asm policy>
    "/audit_info" <audit details>
    "/business_area" <business area>
    "/external_element" <filename> (this filename refers to an xml file,
    not the export filename)
    "/folder" <folder>
    "/function" <function>
    "/hierarchy" <hierarchy>
    "/identifier"
    "/item_class" <item_class>
    "/log" <log file name> [log_only]
    "/summary" <summary>
    /set_created_by <creator name>
    /set_updated_by <updated name>
    "/show_progress"
    "/workbook" <workbook> [XML_workbook]
    "/xmlworkbook" (takes no parameters)
    Notes:
    <filename> - The name of the target *.EEX file. If a directory path is not specified, the target file is created in the default Discoverer folder. To override the default target directory setting, specify a directory path for the file, for example c:\data\sales.eex . Note that the directory path must be an absolute path, not a relative path.
    Wildcards are not allowed for parameters (e.g. business areas, folders); these must be named explicitly.
    [<bus_area_name>] - Use this option to export an entire business area and contents. If you only want to export the business area definition and metadata for the contents, use the /business_area modifier.
    When you export a business area using the /business_area modifier, Discoverer exports only business area definitions and links to the folders in the business area. Discoverer will export the folders and workbooks only when they are specified by name.
    <modifiers> - When specifying parameters, you can use either their Display Name or Identifier.
    To maintain data relationships, you must also export linked (or joined) objects.
    Example:
    To export two business areas named Test BA and Final BA, residing in an EUL named eul31, into a file named export.eex, and write to a log file named export.log, enter:
    dis5adm.exe /connect me/mypassword@database /export export.eex "Test BA, Final BA" /eul eul31 /log export.log
    Message was edited by:
    russ_proudman

  • How to (bulk) link objects in the FIM MA to metaverse objects

    Is there a way to link / join objects which still exist in the FIM MA CS with objects in the metaverse?
    Thanks
    Mik

    Hi Mik,
    In FIM, you can configure the joiner to perform this task.
    More details here :  http://technet.microsoft.com/en-us/library/jj572799(v=ws.10).aspx / http://social.technet.microsoft.com/wiki/contents/articles/1881.how-to-implement-joins-and-data-matching-in-fim.aspx 
    Regards,
    joris
    Joris Faure

  • ODI 12c: no automatic joins in mappings

    Hi,
    Exploring the freshly installed ODI 12c, I'm eager to learn how the OWB mapping concept has been transferred to ODI. Most features appear familiar to me; it's very exciting for an old OWB guy.
    One thing seems buggy to me: when dragging two tables connected by a foreign key into a mapping, I expect a join object to be created automatically, but this doesn't happen. Why not?
    I didn't apply any patch.
    best regards
    Thomas

    Hi Thomas,
    You've got it right there - joins are only created for foreign keys when using a dataset. The idea is that datasets present more of an Entity Relationship type view of your data objects, rather than a purely flow based view. So when you drop your sources into a dataset, ODI helps by using the foreign keys to create joins. When you drop a source into the general flow mapping, ODI assumes that you know what you want to do with it, and allows you to create the joins manually.
    Datasets are good for more than backward compatibility. They can be really useful for defining "islands" of data in your mapping flow, which you can then collapse and consider as a single unit. For example, you may have several datasets that each join numerous tables together, but once you've defined the datasets, you can collapse them so that each just looks like a single object, making your mapping a lot simpler to work with.
    I'm afraid I don't know if there's a mapping tutorial, but I'll get back to you if I can find one.
    Nigel

  • While defining a columnar table, what is the purpose of column store type

    Hi folks
    I have two questions related to columnar table definition.
    1. What is the purpose of the column store type?
    While defining a columnar table, what is the purpose of the column store type (STRING, CS_FIXEDSTRING, CS_INT, etc.)? When I define a table using the UI, I see that the column shows STRING, but when I go to Export SQL it does not appear. Is this mandatory or optional?
    2. VARCHAR vs. CHAR - In the UI, when I create the table I do not see the CHAR option, but I see a lot of discussion where people use CHAR for defining columnar tables. I'm not sure why the UI dropdown does not show it. I also read that we should avoid using VARCHAR because those columns are not compressed. Is that true? I thought the column store compresses all columns. Are there certain columns which cannot be compressed?
    Please let me know where I can find more information about these two questions.
    Poonam

    Hi Poonam
    The CS_ data types are the data types that are used internally in the column store. They can be supplied, but it is not at all required or recommended to do so.
    SAP HANA will automatically use the correct CS_ data type for every SQL data type in your table definitions.
    To be very clear about this: don't use the CS_ data types directly. Just stick to the SQL data types.
    Concerning VARCHAR vs. CHAR: fixed character data types are not supported anymore and no longer show up in the documentation.
    I have no idea why you believe that VARCHAR columns are not compressed; this is just a myth.
    create column table charcompr (fchar char(20), vchar varchar(20));
    insert into charcompr (
        select lpad ('x', to_int (rand()*20), 'y'), null from objects cross join objects);
    -- same data into both columns
    update charcompr set vchar = fchar;
    -- perform the delta merge and force a compression optimization
    merge delta of charcompr;
    update charcompr with parameters ('OPTIMIZE_COMPRESSION' ='FORCE');
    -- check the memory requirements
    select COLUMN_NAME, MEMORY_SIZE_IN_TOTAL, UNCOMPRESSED_SIZE, COUNT, DISTINCT_COUNT, COMPRESSION_TYPE
    from m_cs_columns where table_name = 'CHARCOMPR';
    COLUMN_NAME  MEMORY_SIZE_IN_TOTAL  UNCOMPRESSED_SIZE  COUNT    DISTINCT_COUNT  COMPRESSION_TYPE
    FCHAR        3661                  70285738           6692569  20              RLE
    VCHAR        3661                  70285738           6692569  20              RLE
    We see: compression and memory requirements are the same for both fixed and variable character sizes.
    - Lars
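    As a small illustration of Lars's point that SAP HANA assigns the internal type for you, a query against the TABLE_COLUMNS system view can show the declared SQL data type next to the column-store type HANA chose for the demo table above (sketch only; the CS_DATA_TYPE_NAME column is assumed to be available on your revision, so check the documentation for your release):
    -- Sketch: compare the declared SQL data type with the internally chosen CS_ type
    select column_name, data_type_name, cs_data_type_name
    from   table_columns
    where  schema_name = current_schema
      and  table_name  = 'CHARCOMPR';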

  • How to find the Names of Most costly Views or Most Time consuming views

    Hi All,
    I have a database consisting of almost 200 views. As part of an optimization process, I want to find out the most costly views. How do I do that? Actually, I want the names of the VIEWS so that I can optimize them. Can anyone help me out?
    I have one more doubt. In our database a view (view A) is created by joining several tables. Do I need to add an index on this view separately, or does it use the indexes of the tables that I joined? Also, in our database I created
    a VIEW which is derived from other views (the joined objects are views, not tables), and when I select records from this view it takes a lot of time. Is there any problem with that?

    Please avoid such double posts spread over several forums:
    http://social.msdn.microsoft.com/Forums/en-US/bae4042a-10b8-4d12-aa46-88a05ea37a76/how-to-find-the-names-of-most-costly-views-or-most-time-consuming-views?forum=sqldataaccess
    Olaf Helper
    [ Blog] [ Xing] [ MVP]

  • Nested for loop in the collections

    Hi Experts,
    collection1
    ============
    SELECT o.object_id
          BULK COLLECT INTO l_obj_info
            FROM (SELECT     n.node_id, n.object_id
                        FROM nodes n
                  START WITH n.node_id = 100
                  CONNECT BY PRIOR n.node_id = n.parent_node_id) n
                 INNER JOIN
                 objects o ON n.object_id = o.object_id
           WHERE o.object_type_id = 285;
    collection2
    ============
    SELECT *
          BULK COLLECT INTO l_tab
            FROM ((SELECT     REGEXP_SUBSTR (i_l_text, '[^,]+', 1, LEVEL)
                         FROM DUAL
                   CONNECT BY REGEXP_SUBSTR (i_l_text, '[^,]+', 1, LEVEL) IS NOT NULL));
       END;
    collection3
    ============
    SELECT o.object_id
                   BULK COLLECT INTO l_fin_tab
                     FROM objects o JOIN ATTRIBUTES att
                          ON o.object_id = att.object_id
                    WHERE o.object_id = collection1.object_id
                      --AND att.VALUE = collection2.val;
    Please tell me how to implement a FOR loop for collection3 to get the values from collection1 and collection2.
    I have tried it in the way below:
    CREATE OR REPLACE TYPE LIST_OF_ATTRIBUTES_TYPE AS TABLE OF varchar2(4000);
    CREATE OR REPLACE TYPE LIST_OF_OBJECT_IDS_TYPE AS TABLE OF number(9);
    CREATE OR REPLACE FUNCTION f_get_objects_by_type_id (
       i_object_type_id   IN   NUMBER,
       i_l_text           IN   VARCHAR2,
       i_scope_node_id         NUMBER)
       RETURN list_of_object_ids_type
    AS
       CURSOR objs_info
       IS
          SELECT o.object_id
            FROM (SELECT     n.node_id, n.object_id
                        FROM nodes n
                  START WITH n.node_id = i_scope_node_id
                  CONNECT BY PRIOR n.node_id = n.parent_node_id) n
                 INNER JOIN
                 objects o ON n.object_id = o.object_id
           WHERE o.object_type_id = i_object_type_id;
       l_tab       list_of_attributes_type := list_of_attributes_type ();
       --l_obj_info   list_of_object_ids_type := list_of_object_ids_type ();
       l_fin_tab   list_of_object_ids_type := list_of_object_ids_type ();
    BEGIN
       BEGIN
          SELECT *
          BULK COLLECT INTO l_tab
             FROM ((SELECT     REGEXP_SUBSTR (i_l_text, '[^,]+', 1, LEVEL)
                         FROM DUAL
                   CONNECT BY REGEXP_SUBSTR (i_l_text, '[^,]+', 1, LEVEL) IS NOT NULL));
       END;
       IF l_tab.COUNT > 0
       THEN
          FOR i IN objs_info
          LOOP
             FOR j IN l_tab.FIRST .. l_tab.LAST
             LOOP
                SELECT o.object_id
                BULK COLLECT INTO l_fin_tab
                  FROM objects o JOIN ATTRIBUTES att ON o.object_id =
                                                                     att.object_id
                 WHERE
                                 att.VALUE = l_tab (j) AND o.object_id = i.object_id;
             END LOOP;
          END LOOP;
       END IF;
       RETURN l_fin_tab;
    END f_get_objects_by_type_id;

    Why do you want to do this?
    It looks like you are trying to implement SQL joins in PL/SQL code. Not only does that use up expensive PGA memory by storing the data in collections, but retrieving the data and trying to join it in PL/SQL loops is never going to be as fast as just doing the join in SQL itself.
    Post some example data and your database version, with an example of what the output should look like from that example data.
    Re: 2. How do I ask a question on the forums?
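    For illustration only, here is a rough sketch of how the three collections above might collapse into a single query inside the function (table and column names are taken from the original post; the exact predicates would need to be confirmed against the real requirement):
       -- Sketch: one SQL statement instead of nested PL/SQL loops, combining the
       -- hierarchy query (collection1), the comma-split of i_l_text (collection2)
       -- and the attribute lookup (collection3).
       SELECT o.object_id
         BULK COLLECT INTO l_fin_tab
         FROM objects o
         JOIN attributes att ON o.object_id = att.object_id
        WHERE o.object_type_id = i_object_type_id
          AND o.object_id IN (SELECT n.object_id
                                FROM nodes n
                               START WITH n.node_id = i_scope_node_id
                              CONNECT BY PRIOR n.node_id = n.parent_node_id)
          AND att.VALUE IN (SELECT REGEXP_SUBSTR (i_l_text, '[^,]+', 1, LEVEL)
                              FROM DUAL
                           CONNECT BY REGEXP_SUBSTR (i_l_text, '[^,]+', 1, LEVEL) IS NOT NULL);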

  • Add Server to Server Pool failed

    Hi, All!
    When I try to add a server to the pool, I get an error... Can you tell me what could be wrong?
    Hardware:
    Server: IBM x3550m3 - OVMSERVER1 ip 192.168.1.247, 10.20.2.1
    Server: IBM x3550m3 - OVMSERVER2 ip 192.168.1.248, 10.20.2.2
    Storage: IBM DS3524 - SAS link
    Storage disk:
    HS 1.5Gb
    HC 2.5Gb
    OVMPOOL 100Mb
    Software:
    Oracle VM 3.1.1
    OVMPOOL is mounted as /opt on OVMSERVER1 and exported as an NFS share
    mount:
    /dev/sdd3 on / type ext3 (rw)
    proc on /proc type proc (rw)
    sysfs on /sys type sysfs (rw)
    devpts on /dev/pts type devpts (rw,gid=5,mode=620)
    */dev/sdc1 on /opt type ext3 (rw)*
    /dev/sdd1 on /boot type ext3 (rw)
    tmpfs on /dev/shm type tmpfs (rw)
    debugfs on /sys/kernel/debug type debugfs (rw)
    xenfs on /proc/xen type xenfs (rw)
    none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
    sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
    configfs on /sys/kernel/config type configfs (rw)
    ocfs2_dlmfs on /dlm type ocfs2_dlmfs (rw)
    nfsd on /proc/fs/nfsd type nfsd (rw)
    none on /var/lib/xenstored type tmpfs (rw)
    *192.168.1.247:/opt/OVMS on /nfsmnt/b40bd57f-1ab1-4d03-9254-0151b6f635f7 type nfs (rw,addr=192.168.1.247)*
    /dev/mapper/ovspoolfs on /poolfsmnt/0004fb00000500006352acb0d04c104b type ocfs2 (rw,_netdev,heartbeat=global)
    /dev/mapper/360080e50002d797a0000051e501bc836 on /OVS/Repositories/0004fb00000300008b9920aae151469d type ocfs2 (rw,heartbeat=none)
    /dev/mapper/360080e50002d797a00000523501bc85c on /OVS/Repositories/0004fb00000300007c5a44a941e15d32 type ocfs2 (rw,heartbeat=none)
    exportfs:
    /opt/OVMS 192.168.1.247(rw,sync,no_root_squash,no_subtree_check) 192.168.1.248(rw,sync,no_root_squash,no_subtree_check) 10.20.2.1(rw,sync,no_root_squash,no_subtree_check) 10.20.2.2(rw,sync,no_root_squash,no_subtree_check)
    dmesg OVMSERVER2
    Aug 28 08:18:37 OVMSERVER2 twisted: [-] Log opened.
    Aug 28 08:18:37 OVMSERVER2 twisted: [-] twistd 8.2.0 (/usr/bin/python 2.4.3) starting up.
    Aug 28 08:18:37 OVMSERVER2 twisted: [-] reactor class: twisted.internet.selectreactor.SelectReactor.
    Aug 28 08:18:37 OVMSERVER2 twisted: [monitor] Rescanning all plugins
    Aug 28 08:18:37 OVMSERVER2 twisted: [monitor.plugin.xen_plugin] Starting plugin process /usr/lib/python2.4/site-packages/monitor/plugins/xen_plugin.py
    Aug 28 08:18:37 OVMSERVER2 twisted: [monitor.plugin.xen_plugin] Plugin process /usr/lib/python2.4/site-packages/monitor/plugins/xen_plugin.py launched, PID 5181
    Aug 28 08:18:37 OVMSERVER2 twisted: [monitor.plugin.xen_plugin] Process 5181 started, launching watchdog with gracetime 3600
    Aug 28 08:18:37 OVMSERVER2 twisted: [monitor.plugin.oel] Starting plugin process /usr/lib/python2.4/site-packages/monitor/plugins/oel.py
    Aug 28 08:18:37 OVMSERVER2 twisted: [monitor.plugin.oel] Plugin process /usr/lib/python2.4/site-packages/monitor/plugins/oel.py launched, PID 5182
    Aug 28 08:18:37 OVMSERVER2 twisted: [monitor.plugin.oel] Process 5182 started, launching watchdog with gracetime 3600
    Aug 28 08:18:38 OVMSERVER2 kernel: FS-Cache: Loaded
    Aug 28 08:18:38 OVMSERVER2 kernel: FS-Cache: Netfs 'nfs' registered for caching
    Aug 28 08:18:38 OVMSERVER2 kernel: device-mapper: nfs: version 1.0.0 loaded
    Aug 28 08:18:38 OVMSERVER2 multipathd: dm-4: add map (uevent)
    Aug 28 08:18:39 OVMSERVER2 kernel: OCFS2 DLM 1.8.0
    Aug 28 08:18:39 OVMSERVER2 kernel: ocfs2: Registered cluster interface o2cb
    Aug 28 08:18:39 OVMSERVER2 kernel: OCFS2 DLMFS 1.8.0
    Aug 28 08:18:39 OVMSERVER2 kernel: OCFS2 User DLM kernel interface loaded
    Aug 28 08:18:39 OVMSERVER2 o2cb.init: online a2b7d59e374df2ba
    Aug 28 08:18:39 OVMSERVER2 kernel: o2hb: Heartbeat mode set to global
    Aug 28 08:18:43 OVMSERVER2 kernel: o2net: Connection to node OVMSERVER1 (num 0) at 10.20.2.1:7777 shutdown, state 7
    Aug 28 08:18:51 OVMSERVER2 last message repeated 4 times
    Aug 28 08:18:51 OVMSERVER2 kernel: o2hb: Heartbeat started on region 0004FB00000500006352ACB0D04C104B (dm-4)
    Aug 28 08:18:51 OVMSERVER2 o2hbmonitor: Starting
    Aug 28 08:18:51 OVMSERVER2 kernel: o2cb: This node is not connected to nodes: 0.
    Aug 28 08:18:51 OVMSERVER2 kernel: o2cb: Cluster check failed. Fix errors before retrying.
    Aug 28 08:18:51 OVMSERVER2 kernel: (mount.ocfs2,5633,5):ocfs2_dlm_init:3001 ERROR: status = -22
    Aug 28 08:18:51 OVMSERVER2 kernel: (mount.ocfs2,5633,5):ocfs2_mount_volume:1883 ERROR: status = -22
    Aug 28 08:18:51 OVMSERVER2 kernel: ocfs2: Unmounting device (252,4) on (node 0)
    Aug 28 08:18:51 OVMSERVER2 kernel: (mount.ocfs2,5633,5):ocfs2_fill_super:1240 ERROR: status = -22
    Aug 28 08:18:53 OVMSERVER2 kernel: o2net: Connection to node OVMSERVER1 (num 0) at 10.20.2.1:7777 shutdown, state 7
    Aug 28 08:18:53 OVMSERVER2 kernel: o2hb: Region 0004FB00000500006352ACB0D04C104B (dm-4) is now a quorum device
    Aug 28 08:18:55 OVMSERVER2 kernel: o2net: Connection to node OVMSERVER1 (num 0) at 10.20.2.1:7777 shutdown, state 7
    Aug 28 08:18:56 OVMSERVER2 kernel: o2cb: This node is not connected to nodes: 0.
    Aug 28 08:18:56 OVMSERVER2 kernel: o2cb: Cluster check failed. Fix errors before retrying.
    Aug 28 08:18:56 OVMSERVER2 kernel: (mount.ocfs2,5710,9):ocfs2_dlm_init:3001 ERROR: status = -22
    Aug 28 08:18:56 OVMSERVER2 kernel: (mount.ocfs2,5710,9):ocfs2_mount_volume:1883 ERROR: status = -22
    Aug 28 08:18:56 OVMSERVER2 kernel: ocfs2: Unmounting device (252,4) on (node 0)
    Aug 28 08:18:56 OVMSERVER2 kernel: (mount.ocfs2,5710,9):ocfs2_fill_super:1240 ERROR: status = -22
    Aug 28 08:18:57 OVMSERVER2 kernel: o2net: Connection to node OVMSERVER1 (num 0) at 10.20.2.1:7777 shutdown, state 7
    Aug 28 08:19:01 OVMSERVER2 last message repeated 2 times
    Aug 28 08:19:01 OVMSERVER2 kernel: o2cb: This node is not connected to nodes: 0.
    Aug 28 08:19:01 OVMSERVER2 kernel: o2cb: Cluster check failed. Fix errors before retrying.
    Aug 28 08:19:01 OVMSERVER2 kernel: (mount.ocfs2,5808,5):ocfs2_dlm_init:3001 ERROR: status = -22
    Aug 28 08:19:01 OVMSERVER2 kernel: (mount.ocfs2,5808,5):ocfs2_mount_volume:1883 ERROR: status = -22
    Aug 28 08:19:01 OVMSERVER2 kernel: ocfs2: Unmounting device (252,4) on (node 0)
    Aug 28 08:19:01 OVMSERVER2 kernel: (mount.ocfs2,5808,5):ocfs2_fill_super:1240 ERROR: status = -22
    Aug 28 08:19:02 OVMSERVER2 o2cb.init: offline ocfs2 0
    Aug 28 08:19:03 OVMSERVER2 kernel: o2net: Connection to node OVMSERVER1 (num 0) at 10.20.2.1:7777 shutdown, state 7
    Aug 28 08:19:35 OVMSERVER2 last message repeated 16 times
    Aug 28 08:19:41 OVMSERVER2 last message repeated 3 times
    Aug 28 08:19:43 OVMSERVER2 kernel: o2net: No connection established with node 0 after 60.0 seconds, giving up.
    Aug 28 08:20:43 OVMSERVER2 kernel: o2net: No connection established with node 0 after 60.0 seconds, giving up.
    Oracle VM Manager log:
    Job Construction Phase
    begin()
    Appended operation 'Server Role Update' to object 'ba:16:46:6e:ed:7c:3d:d7:9c:76:35:78:12:36:4f:79 (OVMSERVER2)'.
    Appended operation 'Server Join Server Pool' to object 'ba:16:46:6e:ed:7c:3d:d7:9c:76:35:78:12:36:4f:79 (OVMSERVER2)'.
    Appended operation 'Server Pool Member Update' to object '0004fb0000020000a2b7d59e374df2ba (OVMPOOL)'.
    Appended operation 'Server Cluster Configuration Update' to object 'fa:2d:de:e2:ee:af:3b:1b:a3:cc:ab:74:8a:32:c7:ef (OVMSERVER2)'.
    Appended operation 'Server Cluster Configure' to object 'ba:16:46:6e:ed:7c:3d:d7:9c:76:35:78:12:36:4f:79 (OVMSERVER2)'.
    Appended operation 'Server Cluster Join' to object 'ba:16:46:6e:ed:7c:3d:d7:9c:76:35:78:12:36:4f:79 (OVMSERVER2)'.
    commit()
    Completed Step: COMMIT
    Objects and Operations
    Object (IN_USE): [ServerPool] 0004fb0000020000a2b7d59e374df2ba (OVMPOOL)
    Operation: Server Pool Member Update
    Object (IN_USE): [Server] ba:16:46:6e:ed:7c:3d:d7:9c:76:35:78:12:36:4f:79 (OVMSERVER2)
    Operation: Server Role Update
    Operation: Server Join Server Pool
    Operation: Server Cluster Configure
    Operation: Server Cluster Join
    Object (IN_USE): [Server] fa:2d:de:e2:ee:af:3b:1b:a3:cc:ab:74:8a:32:c7:ef (OVMSERVER2)
    Operation: Server Cluster Configuration Update
    Object (IN_USE): [Cluster] a2b7d59e374df2ba
    Job Running Phase at 16:08 on Tue, Aug 28, 2012
    Job Participants: []
    Actioner
    Starting operation 'Server Pool Member Update' on object '0004fb0000020000a2b7d59e374df2ba (OVMPOOL)'
    Completed operation 'Server Pool Member Update' completed with direction ==> LATER
    Starting operation 'Server Role Update' on object 'ba:16:46:6e:ed:7c:3d:d7:9c:76:35:78:12:36:4f:79 (OVMSERVER2)'
    Completed operation 'Server Role Update' completed with direction ==> DONE
    Starting operation 'Server Join Server Pool' on object 'ba:16:46:6e:ed:7c:3d:d7:9c:76:35:78:12:36:4f:79 (OVMSERVER2)'
    Completed operation 'Server Join Server Pool' completed with direction ==> LATER
    Starting operation 'Server Cluster Configuration Update' on object 'fa:2d:de:e2:ee:af:3b:1b:a3:cc:ab:74:8a:32:c7:ef (OVMSERVER2)'
    Completed operation 'Server Cluster Configuration Update' completed with direction ==> LATER
    Starting operation 'Server Pool Member Update' on object '0004fb0000020000a2b7d59e374df2ba (OVMPOOL)'
    Completed operation 'Server Pool Member Update' completed with direction ==> DONE
    Starting operation 'Server Cluster Configure' on object 'ba:16:46:6e:ed:7c:3d:d7:9c:76:35:78:12:36:4f:79 (OVMSERVER2)'
    Completed operation 'Server Cluster Configure' completed with direction ==> LATER
    Starting operation 'Server Cluster Configuration Update' on object 'fa:2d:de:e2:ee:af:3b:1b:a3:cc:ab:74:8a:32:c7:ef (OVMSERVER2)'
    Completed operation 'Server Cluster Configuration Update' completed with direction ==> LATER
    Starting operation 'Server Cluster Join' on object 'ba:16:46:6e:ed:7c:3d:d7:9c:76:35:78:12:36:4f:79 (OVMSERVER2)'
    Completed operation 'Server Cluster Join' completed with direction ==> LATER
    Starting operation 'Server Cluster Configuration Update' on object 'fa:2d:de:e2:ee:af:3b:1b:a3:cc:ab:74:8a:32:c7:ef (OVMSERVER2)'
    Completed operation 'Server Cluster Configuration Update' completed with direction ==> LATER
    Starting operation 'Server Join Server Pool' on object 'ba:16:46:6e:ed:7c:3d:d7:9c:76:35:78:12:36:4f:79 (OVMSERVER2)'
    Completed operation 'Server Join Server Pool' completed with direction ==> DONE
    Starting operation 'Server Cluster Configure' on object 'ba:16:46:6e:ed:7c:3d:d7:9c:76:35:78:12:36:4f:79 (OVMSERVER2)'
    Completed operation 'Server Cluster Configure' completed with direction ==> LATER
    Starting operation 'Server Cluster Configuration Update' on object 'fa:2d:de:e2:ee:af:3b:1b:a3:cc:ab:74:8a:32:c7:ef (OVMSERVER2)'
    Completed operation 'Server Cluster Configuration Update' completed with direction ==> DONE
    Starting operation 'Server Cluster Join' on object 'ba:16:46:6e:ed:7c:3d:d7:9c:76:35:78:12:36:4f:79 (OVMSERVER2)'
    Completed operation 'Server Cluster Join' completed with direction ==> LATER
    Starting operation 'Server Cluster Configure' on object 'ba:16:46:6e:ed:7c:3d:d7:9c:76:35:78:12:36:4f:79 (OVMSERVER2)'
    Job Internal Error (Operation)com.oracle.ovm.mgr.api.exception.FailedOperationException: OVMAPI_4010E Attempt to send command: dispatch to server: OVMSERVER2 failed. OVMAPI_4004E Server Failed Command: dispatch https://?uname?:[email protected]:8899/api/2 configure_server_for_cluster nfs 192.168.1.247:/opt/OVMS 0004fb00000500006352acb0d04c104b b40bd57f-1ab1-4d03-9254-0151b6f635f7, Status: org.apache.xmlrpc.XmlRpcException: exceptions.RuntimeError:Command: ['mount', '/dev/mapper/ovspoolfs', '/poolfsmnt/0004fb00000500006352acb0d04c104b'] failed (1): stderr: mount.ocfs2: Invalid argument while mounting /dev/mapper/ovspoolfs on /poolfsmnt/0004fb00000500006352acb0d04c104b. Check 'dmesg' for more information on this error.
    stdout:
    Tue Aug 28 08:32:23 MSK 2012
    Tue Aug 28 08:32:23 MSK 2012
    at com.oracle.ovm.mgr.action.ActionEngine.sendCommandToServer(ActionEngine.java:507)
    at com.oracle.ovm.mgr.action.ActionEngine.sendDispatchedServerCommand(ActionEngine.java:444)
    at com.oracle.ovm.mgr.action.ActionEngine.sendServerCommand(ActionEngine.java:378)
    at com.oracle.ovm.mgr.action.ClusterAction.configureServerForCluster(ClusterAction.java:88)
    at com.oracle.ovm.mgr.op.physical.ServerClusterConfigure.configureCluster(ServerClusterConfigure.java:139)
    at com.oracle.ovm.mgr.op.physical.ServerClusterConfigure.action(ServerClusterConfigure.java:58)
    at com.oracle.ovm.mgr.api.collectable.ManagedObjectDbImpl.executeCurrentJobOperationAction(ManagedObjectDbImpl.java:1009)
    at com.oracle.odof.core.AbstractVessel.invokeMethod(AbstractVessel.java:330)
    at com.oracle.odof.core.AbstractVessel.invokeMethod(AbstractVessel.java:290)
    at com.oracle.odof.core.storage.Transaction.invokeMethod(Transaction.java:822)
    at com.oracle.odof.core.Exchange.invokeMethod(Exchange.java:245)
    at com.oracle.ovm.mgr.api.physical.ServerProxy.executeCurrentJobOperationAction(Unknown Source)
    at com.oracle.ovm.mgr.api.job.JobEngine.operationActioner(JobEngine.java:218)
    at com.oracle.ovm.mgr.api.job.JobEngine.objectActioner(JobEngine.java:309)
    at com.oracle.ovm.mgr.api.job.InternalJobDbImpl.objectCommitter(InternalJobDbImpl.java:1140)
    at com.oracle.odof.core.AbstractVessel.invokeMethod(AbstractVessel.java:330)
    at com.oracle.odof.core.AbstractVessel.invokeMethod(AbstractVessel.java:290)
    at com.oracle.odof.core.BasicWork.invokeMethod(BasicWork.java:136)
    at com.oracle.odof.command.InvokeMethodCommand.process(InvokeMethodCommand.java:100)
    at com.oracle.odof.core.BasicWork.processCommand(BasicWork.java:81)
    at com.oracle.odof.core.TransactionManager.processCommand(TransactionManager.java:773)
    at com.oracle.odof.core.WorkflowManager.processCommand(WorkflowManager.java:401)
    at com.oracle.odof.core.WorkflowManager.processWork(WorkflowManager.java:459)
    at com.oracle.odof.io.AbstractClient.run(AbstractClient.java:42)
    at java.lang.Thread.run(Thread.java:662)
    Caused by: com.oracle.ovm.mgr.api.exception.IllegalOperationException: OVMAPI_4004E Server Failed Command: dispatch https://?uname?:[email protected]:8899/api/2 configure_server_for_cluster nfs 192.168.1.247:/opt/OVMS 0004fb00000500006352acb0d04c104b b40bd57f-1ab1-4d03-9254-0151b6f635f7, Status: org.apache.xmlrpc.XmlRpcException: exceptions.RuntimeError:Command: ['mount', '/dev/mapper/ovspoolfs', '/poolfsmnt/0004fb00000500006352acb0d04c104b'] failed (1): stderr: mount.ocfs2: Invalid argument while mounting /dev/mapper/ovspoolfs on /poolfsmnt/0004fb00000500006352acb0d04c104b. Check 'dmesg' for more information on this error.
    stdout:
    Tue Aug 28 08:32:23 MSK 2012
    at com.oracle.ovm.mgr.action.ActionEngine.sendAction(ActionEngine.java:798)
    at com.oracle.ovm.mgr.action.ActionEngine.sendCommandToServer(ActionEngine.java:503)
    ... 30 more
    FailedOperationCleanup
    Starting failed operation 'Server Cluster Configure' cleanup on object 'OVMSERVER2'
    Complete rollback operation 'Server Cluster Configure' completed with direction=OVMSERVER2
    Rollbacker
    Executing rollback operation 'Server Pool Member Update' on object '0004fb0000020000a2b7d59e374df2ba (OVMPOOL)'
    Complete rollback operation 'Server Pool Member Update' completed with direction=LATER
    Executing rollback operation 'Server Cluster Configure' on object 'ba:16:46:6e:ed:7c:3d:d7:9c:76:35:78:12:36:4f:79 (OVMSERVER2)'
    Complete rollback operation 'Server Cluster Configure' completed with direction=DONE
    Executing rollback operation 'Server Join Server Pool' on object 'ba:16:46:6e:ed:7c:3d:d7:9c:76:35:78:12:36:4f:79 (OVMSERVER2)'
    Complete rollback operation 'Server Join Server Pool' completed with direction=LATER
    Executing rollback operation 'Server Cluster Configuration Update' on object 'fa:2d:de:e2:ee:af:3b:1b:a3:cc:ab:74:8a:32:c7:ef (OVMSERVER2)'
    Complete rollback operation 'Server Cluster Configuration Update' completed with direction=DONE
    Executing rollback operation 'Server Pool Member Update' on object '0004fb0000020000a2b7d59e374df2ba (OVMPOOL)'
    Complete rollback operation 'Server Pool Member Update' completed with direction=LATER
    Executing rollback operation 'Server Join Server Pool' on object 'ba:16:46:6e:ed:7c:3d:d7:9c:76:35:78:12:36:4f:79 (OVMSERVER2)'
    Complete rollback operation 'Server Join Server Pool' completed with direction=DONE
    Executing rollback operation 'Server Role Update' on object 'ba:16:46:6e:ed:7c:3d:d7:9c:76:35:78:12:36:4f:79 (OVMSERVER2)'
    Complete rollback operation 'Server Role Update' completed with direction=DONE
    Executing rollback operation 'Server Pool Member Update' on object '0004fb0000020000a2b7d59e374df2ba (OVMPOOL)'
    Complete rollback operation 'Server Pool Member Update' completed with direction=DONE
    Objects To Be Rolled Back
    Object (IN_USE): [ServerPool] 0004fb0000020000a2b7d59e374df2ba (OVMPOOL)
    Object (IN_USE): [Server] ba:16:46:6e:ed:7c:3d:d7:9c:76:35:78:12:36:4f:79 (OVMSERVER2)
    Object (IN_USE): [Server] fa:2d:de:e2:ee:af:3b:1b:a3:cc:ab:74:8a:32:c7:ef (OVMSERVER2)
    Object (IN_USE): [Cluster] a2b7d59e374df2ba
    Write Methods Invoked
    Class=InternalJobDbImpl vessel_id=12038 method=addTransactionIdentifier accessLevel=6
    Class=ServerPoolDbImpl vessel_id=493 method=addServer accessLevel=6
    Class=ServerDbImpl vessel_id=9544 method=lock accessLevel=6
    Class=ServerDbImpl vessel_id=9544 method=addServerRole accessLevel=6
    Class=ServerDbImpl vessel_id=9544 method=addServerRole accessLevel=6
    Class=ServerDbImpl vessel_id=9544 method=addServerRole accessLevel=6
    Class=ServerPoolDbImpl vessel_id=493 method=addServerInternal accessLevel=6
    Class=ServerDbImpl vessel_id=9544 method=setServerPool accessLevel=6
    Class=ClusterDbImpl vessel_id=499 method=allocateSlotForServer accessLevel=6
    Class=ClusterDbImpl vessel_id=499 method=addServer accessLevel=6
    Class=ServerDbImpl vessel_id=9544 method=lock accessLevel=6
    Class=ServerDbImpl vessel_id=420 method=lock accessLevel=6
    Class=InternalJobDbImpl vessel_id=12038 method=setAsset accessLevel=6
    Class=ServerDbImpl vessel_id=420 method=reconfigureCluster accessLevel=6
    Class=ServerDbImpl vessel_id=9544 method=setCluster accessLevel=6
    Class=InternalJobDbImpl vessel_id=12038 method=setAsset accessLevel=6
    Class=InternalJobDbImpl vessel_id=12038 method=setAsset accessLevel=6
    Class=InternalJobDbImpl vessel_id=12038 method=setCompletedStep accessLevel=6
    Class=InternalJobDbImpl vessel_id=12038 method=setAssociatedHandles accessLevel=6
    Class=InternalJobDbImpl vessel_id=12038 method=setTuringMachineFlag accessLevel=6
    Class=ServerPoolDbImpl vessel_id=493 method=setCurrentOperationToLater accessLevel=6
    Class=InternalJobDbImpl vessel_id=12038 method=setTuringMachineFlag accessLevel=6
    Class=ServerDbImpl vessel_id=9544 method=setCurrentJobOperationComplete accessLevel=6
    Class=ServerDbImpl vessel_id=9544 method=nextJobOperation accessLevel=6
    Class=InternalJobDbImpl vessel_id=12038 method=setTuringMachineFlag accessLevel=6
    Class=ServerDbImpl vessel_id=9544 method=setCurrentOperationToLater accessLevel=6
    Class=InternalJobDbImpl vessel_id=12038 method=setTuringMachineFlag accessLevel=6
    Class=InternalJobDbImpl vessel_id=12038 method=setTuringMachineFlag accessLevel=6
    Class=ServerDbImpl vessel_id=420 method=setCurrentOperationToLater accessLevel=6
    Class=InternalJobDbImpl vessel_id=12038 method=setTuringMachineFlag accessLevel=6
    Class=ServerPoolDbImpl vessel_id=493 method=setCurrentJobOperationComplete accessLevel=6
    Class=ServerPoolDbImpl vessel_id=493 method=nextJobOperation accessLevel=6
    Class=InternalJobDbImpl vessel_id=12038 method=setTuringMachineFlag accessLevel=6
    Class=ServerDbImpl vessel_id=9544 method=setCurrentOperationToLater accessLevel=6
    Class=InternalJobDbImpl vessel_id=12038 method=setTuringMachineFlag accessLevel=6
    Class=InternalJobDbImpl vessel_id=12038 method=setTuringMachineFlag accessLevel=6
    Class=ServerDbImpl vessel_id=420 method=setCurrentOperationToLater accessLevel=6
    Class=InternalJobDbImpl vessel_id=12038 method=setTuringMachineFlag accessLevel=6
    Class=InternalJobDbImpl vessel_id=12038 method=setTuringMachineFlag accessLevel=6
    Class=ServerDbImpl vessel_id=9544 method=setCurrentOperationToLater accessLevel=6
    Class=InternalJobDbImpl vessel_id=12038 method=setTuringMachineFlag accessLevel=6
    Class=InternalJobDbImpl vessel_id=12038 method=setTuringMachineFlag accessLevel=6
    Class=ServerDbImpl vessel_id=420 method=setCurrentOperationToLater accessLevel=6
    Class=InternalJobDbImpl vessel_id=12038 method=setTuringMachineFlag accessLevel=6
    Class=ServerDbImpl vessel_id=9544 method=setCurrentJobOperationComplete accessLevel=6
    Class=ServerDbImpl vessel_id=9544 method=nextJobOperation accessLevel=6
    Class=InternalJobDbImpl vessel_id=12038 method=setTuringMachineFlag accessLevel=6
    Class=ServerDbImpl vessel_id=9544 method=setCurrentOperationToLater accessLevel=6
    Class=InternalJobDbImpl vessel_id=12038 method=setTuringMachineFlag accessLevel=6
    Class=InternalJobDbImpl vessel_id=12038 method=setProgressMessage accessLevel=6
    Class=ServerDbImpl vessel_id=420 method=setCurrentJobOperationComplete accessLevel=6
    Class=ServerDbImpl vessel_id=420 method=nextJobOperation accessLevel=6
    Class=InternalJobDbImpl vessel_id=12038 method=setTuringMachineFlag accessLevel=6
    Class=ServerDbImpl vessel_id=9544 method=setCurrentOperationToLater accessLevel=6
    Class=InternalJobDbImpl vessel_id=12038 method=setTuringMachineFlag accessLevel=6
    Class=ServerDbImpl vessel_id=9544 method=nextJobOperation accessLevel=6
    Class=InternalJobDbImpl vessel_id=12038 method=setFailedOperation accessLevel=6
    Class=ServerPoolDbImpl vessel_id=493 method=nextJobOperation accessLevel=6
    Class=ServerDbImpl vessel_id=9544 method=nextJobOperation accessLevel=6
    Class=ServerDbImpl vessel_id=420 method=nextJobOperation accessLevel=6
    Class=ClusterDbImpl vessel_id=499 method=nextJobOperation accessLevel=6
    Class=InternalJobDbImpl vessel_id=12038 method=setTuringMachineFlag accessLevel=6
    Class=InternalJobDbImpl vessel_id=12038 method=setTuringMachineFlag accessLevel=6
    Class=ServerDbImpl vessel_id=9544 method=nextJobOperation accessLevel=6
    Class=InternalJobDbImpl vessel_id=12038 method=setTuringMachineFlag accessLevel=6
    Class=InternalJobDbImpl vessel_id=12038 method=setTuringMachineFlag accessLevel=6
    Class=ServerDbImpl vessel_id=420 method=nextJobOperation accessLevel=6
    Class=InternalJobDbImpl vessel_id=12038 method=setTuringMachineFlag accessLevel=6
    Class=InternalJobDbImpl vessel_id=12038 method=setTuringMachineFlag accessLevel=6
    Class=ServerDbImpl vessel_id=9544 method=nextJobOperation accessLevel=6
    Class=ServerDbImpl vessel_id=9544 method=nextJobOperation accessLevel=6
    Class=ServerPoolDbImpl vessel_id=493 method=nextJobOperation accessLevel=6
    Completed Step: ROLLBACK
    Job failed commit (internal) due to OVMAPI_4010E Attempt to send command: dispatch to server: OVMSERVER2 failed. OVMAPI_4004E Server Failed Command: dispatch https://?uname?:[email protected]:8899/api/2 configure_server_for_cluster nfs 192.168.1.247:/opt/OVMS 0004fb00000500006352acb0d04c104b b40bd57f-1ab1-4d03-9254-0151b6f635f7, Status: org.apache.xmlrpc.XmlRpcException: exceptions.RuntimeError:Command: ['mount', '/dev/mapper/ovspoolfs', '/poolfsmnt/0004fb00000500006352acb0d04c104b'] failed (1): stderr: mount.ocfs2: Invalid argument while mounting /dev/mapper/ovspoolfs on /poolfsmnt/0004fb00000500006352acb0d04c104b. Check 'dmesg' for more information on this error.
    stdout:
    Tue Aug 28 08:32:23 MSK 2012
    Tue Aug 28 08:32:23 MSK 2012
    com.oracle.ovm.mgr.api.exception.FailedOperationException: OVMAPI_4010E Attempt to send command: dispatch to server: OVMSERVER2 failed. OVMAPI_4004E Server Failed Command: dispatch https://?uname?:[email protected]:8899/api/2 configure_server_for_cluster nfs 192.168.1.247:/opt/OVMS 0004fb00000500006352acb0d04c104b b40bd57f-1ab1-4d03-9254-0151b6f635f7, Status: org.apache.xmlrpc.XmlRpcException: exceptions.RuntimeError:Command: ['mount', '/dev/mapper/ovspoolfs', '/poolfsmnt/0004fb00000500006352acb0d04c104b'] failed (1): stderr: mount.ocfs2: Invalid argument while mounting /dev/mapper/ovspoolfs on /poolfsmnt/0004fb00000500006352acb0d04c104b. Check 'dmesg' for more information on this error.
    stdout:
    Tue Aug 28 08:32:23 MSK 2012
    Tue Aug 28 08:32:23 MSK 2012
    at com.oracle.ovm.mgr.action.ActionEngine.sendCommandToServer(ActionEngine.java:507)
    at com.oracle.ovm.mgr.action.ActionEngine.sendDispatchedServerCommand(ActionEngine.java:444)
    at com.oracle.ovm.mgr.action.ActionEngine.sendServerCommand(ActionEngine.java:378)
    at com.oracle.ovm.mgr.action.ClusterAction.configureServerForCluster(ClusterAction.java:88)
    at com.oracle.ovm.mgr.op.physical.ServerClusterConfigure.configureCluster(ServerClusterConfigure.java:139)
    at com.oracle.ovm.mgr.op.physical.ServerClusterConfigure.action(ServerClusterConfigure.java:58)
    at com.oracle.ovm.mgr.api.collectable.ManagedObjectDbImpl.executeCurrentJobOperationAction(ManagedObjectDbImpl.java:1009)
    at sun.reflect.GeneratedMethodAccessor408.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at com.oracle.odof.core.AbstractVessel.invokeMethod(AbstractVessel.java:330)
    at com.oracle.odof.core.AbstractVessel.invokeMethod(AbstractVessel.java:290)
    at com.oracle.odof.core.storage.Transaction.invokeMethod(Transaction.java:822)
    at com.oracle.odof.core.Exchange.invokeMethod(Exchange.java:245)
    at com.oracle.ovm.mgr.api.physical.ServerProxy.executeCurrentJobOperationAction(Unknown Source)
    at com.oracle.ovm.mgr.api.job.JobEngine.operationActioner(JobEngine.java:218)
    at com.oracle.ovm.mgr.api.job.JobEngine.objectActioner(JobEngine.java:309)
    at com.oracle.ovm.mgr.api.job.InternalJobDbImpl.objectCommitter(InternalJobDbImpl.java:1140)
    at sun.reflect.GeneratedMethodAccessor933.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at com.oracle.odof.core.AbstractVessel.invokeMethod(AbstractVessel.java:330)
    at com.oracle.odof.core.AbstractVessel.invokeMethod(AbstractVessel.java:290)
    at com.oracle.odof.core.BasicWork.invokeMethod(BasicWork.java:136)
    at com.oracle.odof.command.InvokeMethodCommand.process(InvokeMethodCommand.java:100)
    at com.oracle.odof.core.BasicWork.processCommand(BasicWork.java:81)
    at com.oracle.odof.core.TransactionManager.processCommand(TransactionManager.java:773)
    at com.oracle.odof.core.WorkflowManager.processCommand(WorkflowManager.java:401)
    at com.oracle.odof.core.WorkflowManager.processWork(WorkflowManager.java:459)
    at com.oracle.odof.io.AbstractClient.run(AbstractClient.java:42)
    at java.lang.Thread.run(Thread.java:662)
    Caused by: com.oracle.ovm.mgr.api.exception.IllegalOperationException: OVMAPI_4004E Server Failed Command: dispatch https://?uname?:[email protected]:8899/api/2 configure_server_for_cluster nfs 192.168.1.247:/opt/OVMS 0004fb00000500006352acb0d04c104b b40bd57f-1ab1-4d03-9254-0151b6f635f7, Status: org.apache.xmlrpc.XmlRpcException: exceptions.RuntimeError:Command: ['mount', '/dev/mapper/ovspoolfs', '/poolfsmnt/0004fb00000500006352acb0d04c104b'] failed (1): stderr: mount.ocfs2: Invalid argument while mounting /dev/mapper/ovspoolfs on /poolfsmnt/0004fb00000500006352acb0d04c104b. Check 'dmesg' for more information on this error.
    stdout:
    Tue Aug 28 08:32:23 MSK 2012
    at com.oracle.ovm.mgr.action.ActionEngine.sendAction(ActionEngine.java:798)
    at com.oracle.ovm.mgr.action.ActionEngine.sendCommandToServer(ActionEngine.java:503)
    ... 30 more
    End of Job
    ----------

    Due to the fact that Oracle VM Release 3.1.1 does support shared SAS storage (SAS SAN),
    I created a partition, shared it via NFS, and placed ovspoolfs.img on it. This partition is accessible from all servers.
    You can mount it manually, but when you try to add a server to the cluster, OVM Manager reports that the partition is mounted on a different path.
    (Sorry... machine translation...)
    Edited by: bignic on 30.08.2012 8:48

  • How to include DSO in composite provider?

    Hi Experts,
    Could anyone let me know the steps to include a DSO in a CompositeProvider? System version 7.31, BWA 7.2.
    Regards,
    Satish.

    Hi,
    This is all available in help.sap.com.
    You are in the BW Modeling tools. In the context menu for your BW project, choose New > CompositeProvider.
    Select an InfoArea. To ensure that you can find your CompositeProvider again without difficulty, we recommend setting the Add to Favorites flag.
    Enter a name and a description. Enter a name for the uppermost node.
    Select the method.
    If you have opted for Union as the assignment method, now press Finish. The editor appears.
    If you have opted for Join, you have to make a few more entries:
    Select the join type. Choose Next.
    As the left-hand join object, select either an InfoProvider from the BW system or, if you have assigned a SAP HANA system to the BW project, select a SAP HANA model as the left-hand join object. Choose Next.
    As the right-hand join object, select a BW InfoProvider or a SAP HANA view. Choose Next.
    Then choose Finish. The wizard now closes, and the editor opens.
    Regards,
    Michael

  • Changing attribute order in a mapping operator

    Is there a way to easily change the position or order of the attributes in a mapping operator or object (e.g. a Joiner object) without having to remove all the attributes from the object and then either re-enter them manually or pull them again from the source into the mapping object (the target)?

    Hi Kurt,
    Unfortunately that is not possible; operator attributes always show in the order in which you created them.
    Only tables, views, snapshots, dimensions and cubes allow you to change the order of their attributes, and then only in their respective editors, not in a mapping.
    Regards, Patrick

  • Using functions from Combo Box 'Others:'

    I am trying to use the DENSE_RANK function in OWB 9.0.3.33.0. Unfortunately I cannot find an appropriate object in the toolbox palette. I tried using JOIN, FILTER and EXPRESSION. Mostly I get the error: PL/SQL: ORA-30483: window functions are not allowed here.
    I am trying to number my records in a set. In SQL I can write:
    SELECT ID, DATE_VAL, OPERATE,
           DENSE_RANK() OVER (PARTITION BY ID ORDER BY ID, DATE_VAL DESC) "MY_LEVEL"
    FROM (
          SELECT ID, DATE_FROM DATE_VAL, 'U' "OPERATE"
          FROM SCH.P1
          WHERE DATE_FROM = (SELECT MAX(DATE_FROM)
                             FROM SCH.P1)
          UNION
          SELECT ID, DATE_VAL, 'I' "OPERATE"
          FROM [email protected] )
    ORDER BY ID, DATE_VAL DESC
    Can anybody tell me how to properly use the DENSE_RANK() function in OWB and get the "MY_LEVEL" pseudo column?
    Thanks very much for any advice.
    Peter.
    -Poland-

    Thank you for the reply.
    I looked at the documentation and saw that DENSE_RANK is in the list of aggregate functions. I understand you are telling me that the version of DENSE_RANK I use is wrong, but my question is about OWB.
    In the "Expression Builder" window you can use some of the functions. If you add a JOIN object to a mapping, you can write the join condition in the Expression Builder window. In the combo box list called "Others", DENSE_RANK appears in the list. After you select it and paste it into the window, OWB generates code like:
    DENSE_RANK() OVER (
    [PARTITION BY <value expression1> [, ...]]
    ORDER BY <value expression2> [collate clause] ASC
    [NULLS FIRST|NULLS LAST] [, ...] )
    I am confused.
    I still do not know which OWB object I can use to get the "MY_LEVEL" pseudo column.
    Regards
    Peter
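    One workaround, echoing the view-based approach suggested in the OWB ROWNUM thread above and offered here only as a sketch (object names are taken from Peter's query; the remote table name is a placeholder because the database link was mangled in the post), is to push the analytic function into a database view and import that view as the mapping source, so that no window function is needed inside the OWB operators:
    CREATE OR REPLACE VIEW p1_with_level AS
    SELECT id, date_val, operate,
           DENSE_RANK() OVER (PARTITION BY id ORDER BY id, date_val DESC) my_level
    FROM  (SELECT id, date_from AS date_val, 'U' AS operate
           FROM   sch.p1
           WHERE  date_from = (SELECT MAX(date_from) FROM sch.p1)
           UNION
           SELECT id, date_val, 'I' AS operate
           FROM   sch.p1_remote);  -- placeholder for the remote table in the original query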

  • Advice on materialized view.

    Hello everybody,
    I have a question and I hope you can advise. Imagine the following (simplified) model.
    CREATE TABLE players (
      player_id NUMBER(15) PRIMARY KEY,
      player_name VARCHAR2(100) NOT NULL,
      creation_date DATE,
      birth_date DATE
      -- around 20 other fields (firstname, different sizes, weights...)
    );
    CREATE TABLE competitions (
      id_competition NUMBER(15) PRIMARY KEY,
      competition_name VARCHAR2(100),
      start_date DATE,
      end_date DATE,
      creator_id NUMBER(15), -- FOREIGN KEY TO PLAYERS
      winner_id NUMBER(15)   -- FOREIGN KEY TO PLAYERS
      -- around 20 other fields
    );
    CREATE TABLE objectives (
      id_objective NUMBER(15) PRIMARY KEY,
      objective_name VARCHAR2(100),
      id_competition NUMBER(15),  -- FOREIGN KEY TO COMPETITIONS
      creator_id NUMBER(15),   -- FOREIGN KEY TO PLAYERS
      winner_id NUMBER(15),    -- FOREIGN KEY TO PLAYERS
      start_date DATE,
      is_public NUMBER(1),
      restrictions VARCHAR2(100)
      -- around 20 other fields
    );
    [Foreign keys are all indexed]
    The table players has around 30 million records.
    The table competitions has around 10 million records.
    The table objectives has around 70 million records.
    I have a materialized view which JOINs all these tables. Something like this:
    SELECT comp.id_competition, comp.competition_name, /* comp.... */
           comp_cre.player_name, /* comp_cre.... */
           comp_win.player_name, /* comp_win.... */
           obj.objective_name,   /* obj.... */
           obj_cre.player_name,  /* obj_cre.... */
           obj_win.player_name   /* obj_win.... */
           -- some analytics rank() OVER, COUNT() OVER...
      FROM competitions comp
      LEFT JOIN players comp_cre
             ON comp.creator_id = comp_cre.player_id
      LEFT JOIN players comp_win
             ON comp.winner_id = comp_win.player_id
     INNER JOIN objectives obj
             ON obj.id_competition = comp.id_competition
      LEFT JOIN players obj_cre
             ON obj.creator_id = obj_cre.player_id
      LEFT JOIN players obj_win
             ON obj.winner_id = obj_win.player_id;
    The materialized view is refreshed every day, but since we have a huge amount of data it takes a while to run. The size of the materialized view is more than 15 GB. It is normal that the plan generates full table scans on these 3 tables, since that is the goal: retrieve all the data to make a big result set.
    I was wondering if any of you has an idea of how I can speed up the refresh. I cannot add materialized view logs on each dependent table. Is it possible to tell Oracle to work in different threads, something like the first 5 million competitions in one thread and another 5 million in another? I have a very robust machine: 128 GB RAM and 64 CPUs. Any suggestions? I'm using Oracle 10g.
    Thank you

    With 64 CPUs, you should certainly look at parallelizing the MV refresh:
    http://www.doug.org/newsletter/march/MV_Refresh_Parallel.pdf
    Iordan Iotzov
    http://iiotzov.wordpress.com/
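    A minimal sketch of one common way to speed up a complete refresh on 10g, along the lines of the parallel-refresh paper linked above (the MV name comp_obj_mv is a placeholder, not from the original post): give the MV a parallel degree, enable parallel DML in the session, and run a non-atomic complete refresh so Oracle can truncate and reload with direct-path, parallel inserts:
    -- Sketch only: comp_obj_mv is a placeholder name for the materialized view.
    ALTER MATERIALIZED VIEW comp_obj_mv PARALLEL 16;   -- let the reload use parallel slaves
    ALTER SESSION ENABLE PARALLEL DML;
    BEGIN
      -- Non-atomic complete refresh: truncate + direct-path insert instead of
      -- delete + conventional insert, which is usually much faster for large MVs.
      DBMS_MVIEW.REFRESH(list => 'COMP_OBJ_MV', method => 'C', atomic_refresh => FALSE);
    END;
    /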
