zfs diff on an old pool?

Hi,
does anyone know if I can export a zpool from a system with an older version of ZFS, temporarily move the disks to a test system running a newer version of ZFS that supports the "zfs diff" command, and then run zfs diff on the old pool's snapshots? In other words, does the pool/dataset need to be upgraded to the version that introduced zfs diff before I can use it, or does the utility work with older datasets?
thanks,
Graham

Hi,
Not sure, but I believe a normal zpool import should work.
thanks,
X A H E E R
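For reference, the whole flow would look something like this (a sketch; "tank" is a placeholder pool name, and the read-only import is only a precaution so nothing on the pool gets written or upgraded):
# on the system with the older ZFS release
zpool export tank
# on the test system with the newer ZFS release
zpool import -o readonly=on tank
zfs diff tank/data@snap1 tank/data@snap2
zfs diff reads the snapshots themselves, so it should not require a pool or filesystem upgrade; just avoid running "zpool upgrade" while the disks are in the test box if you intend to move them back to the old system afterwards.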

Similar Messages

  • Stats differ for vda "pool-show" vs. "pool-desktops"

    Hi,
    has anyone seen this before? Sun VDI 3.1 connected to a vCenter infrastructure. vda pool-show shows a different number of desktops (145) compared to vda pool-desktops (101)? Judging from the VMs in vCenter vda pool-show seems to be correct, but why would pool-desktops differ?
    # vda pool-show POOL
      Assignment Status: Enabled
    Desktop Assignments: personal
       Desktop Provider: POD1
                Cloning: Enabled
           Cloning Jobs: 0
               Template: TEST
     Available Desktops: 9
      Assigned Desktops: 136
         Total Desktops: 145
    # vda pool-desktops -x POOL | wc -l
         101
    Sebastian

    Opened a support case; it turned out to be a known limitation of the command-line tools. 'vda pool-desktops' limits its output to 101 items for performance reasons (but unfortunately doesn't say so). A workaround is to gather the data directly from the database. In our case we needed to obtain the desktop -> user assignments for a specific pool, which we did with the following bash/SQL snippet.
    #!/bin/bash
    [ $# -eq 1 ] || { echo 'Pool name required'; exit 1; }
    POOL=$1
    /opt/SUNWvda/mysql/bin/mysql --defaults-file=/etc/opt/SUNWvda/my.cnf -N -p vda <<EOM
    select d.name, d.state, u.distinguished_name
      from t_desktop as d, t_poolclient as u, t_pool as p
      where p.name = "$POOL" and p.id = d.pool_id and d.pool_client_id = u.id
      order by d.name;
    select d.name, d.state
      from t_desktop as d, t_pool as p
      where p.name = "$POOL" and p.id = d.pool_id and d.state='available'
      order by d.name;
    EOM
    The second SQL query (for the available desktops) might be merged into the first one; I am not much of a SQL guy though.
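    For what it's worth, the two queries could probably be collapsed into one with a LEFT JOIN (an untested sketch; it assumes an available desktop simply has no matching t_poolclient row, so distinguished_name comes back NULL for those):
    #!/bin/bash
    [ $# -eq 1 ] || { echo 'Pool name required'; exit 1; }
    POOL=$1
    /opt/SUNWvda/mysql/bin/mysql --defaults-file=/etc/opt/SUNWvda/my.cnf -N -p vda <<EOM
    select d.name, d.state, u.distinguished_name
      from t_desktop as d
      join t_pool as p on p.id = d.pool_id
      left join t_poolclient as u on u.id = d.pool_client_id
      where p.name = "$POOL"
      order by d.name;
    EOM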

  • New password must differ from old password by at least 9999 characters.

    Hi All,
    When logging into my APEX as admin ( https://server/apex/apex_admin ), I was asked to change the password, which I did, but then I received the error "New password must differ from old password by at least 9999 characters". Apparently I made some wrong settings in the password policies before, but now I am stuck. I have the admin password, but I cannot get past the change-password page.
    Is there a way that I can reset the password policy from the command line ?
    My Apex is 4.2.2 on Linux RHEL 6.3.
    Thanks,
    Vu

    Hi and Welcome to the Forums!
    Even though things switched over via DTM, it sounds like something on your carrier side has not yet fully switched over. I suggest you contact them for formal assistance as they manage your BIS account and need to ensure that things have properly shifted over on their side.
    FYI -- the credentials it is asking for have nothing to do with your email accounts. These credentials are your BIS account credentials (the account I mentioned above). BIS stores all of your email credentials so that it can perform as true PUSH from the perspective of the BB. But, to protect those, it further has credentials to get into the BIS account itself...and these are the credentials it is requesting.
    Good luck!
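    On the original question, there is a command-line route worth trying (a sketch, not verified on 4.2.2): apxchpwd.sql ships in the APEX installation directory and resets the instance administrator from SQL*Plus, and instance-level settings can be adjusted afterwards with the APEX_INSTANCE_ADMIN package; its parameter names vary by release, so the one below is purely illustrative.
    cd /path/to/apex_install_dir
    sqlplus / as sysdba
    SQL> @apxchpwd.sql
    SQL> -- illustrative parameter name; check the APEX_INSTANCE_ADMIN docs for your release
    SQL> exec APEX_INSTANCE_ADMIN.SET_PARAMETER('PASSWORD_HISTORY_DAYS', '0');
    SQL> commit;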

  • Diff. Between Pool & Cluster Table

    Hi,
    Could anyone let me know the basic difference between pooled and cluster tables?
    Thanks & Regards
    Irfan Hussain

    Hi Irfan,
    Pooled Table:
    A pooled table in R/3 has a many-to-one relationship with a table in the database. When you look at a pooled table in R/3, you see a description of a table. R/3 uses table pools to hold a large number (tens to thousands) of very small tables (about 10 to 100 rows each). Table pools reduce the amount of database resources needed when many small tables have to be open at the same time. SAP uses them for system data. You might create a table pool if you need to create hundreds of small tables that each hold only a few rows of data. Pooled tables are primarily used by SAP to hold customizing data.
    Cluster Table:
    A cluster table is similar to a pooled table: it has a many-to-one relationship with a table in the database. Many cluster tables are stored in a single table in the database called a table cluster, which holds many tables within it; the tables it holds are all cluster tables. Like pooled tables, cluster tables are another proprietary SAP construct. They are used to hold data from a few (approximately 2 to 10) very large tables, and are chosen when those tables have part of their primary keys in common and their data is accessed simultaneously. Table clusters contain fewer tables than table pools and, unlike table pools, the primary key of each table within the table cluster begins with the same field or fields. A cluster is advantageous where data is accessed from multiple tables simultaneously and those tables have at least one of their primary key fields in common. Cluster tables reduce the number of database reads and thereby improve performance.
    Pooled and cluster tables are usually used only by SAP and not by customers, probably because of the proprietary format of these tables within the database and because of technical restrictions placed on their use within ABAP/4 programs. A cluster table can only be read and displayed via an ABAP program.
    For further details visit this link:
    http://www.sap-img.com/abap/the-different-types-of-sap-tables.htm
    Best Regards,
    Maheswaran.B
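    To make the last point concrete, here is a small Open SQL sketch (BSEG is a standard SAP cluster table stored in the table cluster RFBLG; the company code value is just an example):
    REPORT zread_cluster.
    * Cluster tables such as BSEG are transparent to Open SQL in ABAP,
    * but cannot be queried directly with native database SQL.
    DATA lt_bseg TYPE STANDARD TABLE OF bseg.
    SELECT * FROM bseg
      INTO TABLE lt_bseg
      UP TO 10 ROWS
      WHERE bukrs = '1000'.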

  • How do I delete old Pools and Servers from my OVM 3.1.1 environment

    I have a 3.1.1 OVM server with two clustered server pools. One works over NFS and the other over fibre to a NetApp SAN. The fibre one is a test environment at the moment. The pool volume and the storage volume for the test pool have been deleted from the SAN side :0(. I have therefore now got a pool with servers and virtual machines that can't be deleted because the storage is not present. How can I remove the OVS servers and delete the pool without affecting my other live pool? I have rebuilt one of the OVS servers and it is still getting picked up as the original, even though it has a different name and IP address. I guess this is because it uses the hardware address somehow to recognise it. Any ideas?

    Hello, macpro3000.
    Thank you for visiting Apple Support Communities. 
    Deleting a station in iTunes Radio is similar to deleting other iTunes media.  To delete a station, click the desired station, then press delete on your keyboard. You can also delete multiple stations by clicking and dragging your cursor over multiple stations. Once you've selected the desired stations, press delete on your keyboard.  
    About iTunes Radio
    http://support.apple.com/kb/HT5848
    Cheers,
    Jason H. 

  • Diff. bet TYPE-POOL & TYPE-POOLS

    Hi,
    Could anyone please tell me the difference between the TYPE-POOL & TYPE-POOLS statements, with some examples?

    Oh, I think I see what you are talking about.  In a report program write this.
    report zrich_0002.
    type-pools slis.
    Now double-click on SLIS.  This takes you to the type pool.  The first statement in the code is TYPE-POOL.   This is just how you define the start of the type pool, sort of like REPORT ZRICH_0002 or FUNCTION ztest.  When developing a type pool you will begin the code by saying TYPE-POOL ZTEST.  Then, to use the type pool in your program, you will say TYPE-POOLS ZTEST.
    Make sense?
    Welcome to SDN. Please reward points for helpful answers and mark your post as solved if answered completely. Thanks.
    Regards,
    Rich Heilman
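    To round that off with a sketch (ZTEST is a made-up name; every type defined in the pool must begin with the pool name followed by an underscore):
    * the type pool itself (created as a type group in SE11):
    TYPE-POOL ztest.
    TYPES: BEGIN OF ztest_s_pair,
             key   TYPE i,
             value TYPE string,
           END OF ztest_s_pair.
    * and a report that uses it:
    REPORT zrich_0003.
    TYPE-POOLS ztest.
    DATA ls_pair TYPE ztest_s_pair.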

  • Password Changes - Are Diffs From Old Tunable?

    Greetings;
    My "Global Password Policy" is setup to always check complexity, where complexity, amongst others,  is:
    pwd-strong-check-require-charset   :  any-three
    Password must contain a min of 8 characters, 3 of 4 must be 1 upper, 1 lower, 1 special &/or 1 numeric.
    While I intend to enforce even stricter rules later, I right now want my users to be able to change their password to a new one that is different by just one character only.
    Right now, my users are getting errors when they attempt to change their password; the error states that their new password was already used/is in history.
    Is this a tunable parameter?  I am at a loss as to why they are getting these errors otherwise.
    Thanks all in advance.

    I am running 11.1.1.7.0 on SPARC.
    I am retaining users' past 12 passwords in history, essentially to guarantee they are not recycling them; this is a policy requirement.
    To make matters worse, I am having trouble replicating my users' claims, which is leading me to believe they are not being truthful :-(
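    For reference, the relevant knobs can at least be inspected from the command line (a sketch using ODSEE's dsconf; property names are assumed from the default policy, so verify them on your instance):
    dsconf get-server-prop -h localhost -p 389 | grep '^pwd-'
    # pwd-in-history controls how many old passwords are kept for the reuse check;
    # the "already in history" error comes from that comparison rather than from
    # any minimum-difference setting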

  • 903/902/BC4J can't get data-sources.xml conn pooling to work in production; help

    I have several BC4J ears deployed to a 903 instance of OC4J being configured as a standalone
    instance. I've had this problem since I started deploying in development on 902. So it's
    some basic problem that I've not mastered.
    I can't get data-sources.xml managed connection pooling to actually pool conns. I want
    to declare my JNDI JDBC source connection pool in j2ee/home/config/data-sources.xml and
    have all BC4J apps get conns from this JNDI JDBC pool. I've removed all data-sources.xml from my BC4J ears,
    and published the jndi jdbc source in my oc4j common data-sources.xml. I've tested that this is
    the place controlling the conn URL/login passwd by commenting it out of config/data-sources.xml
    and my BC4J apps then throw exceptions, can't get conn.
    I've set the oc4j startup cmd line with the BC4J property to enable connection pooling:
    -Djbo.doconnectionpooling=true
    Symptom:
    Connections are created and closed. Instead of being put back into the pool managed by oc4j,
    whatever BC4J or my data-sources.xml is doing, the connections are just being created and
    closed.
    I can verify this via (solaris) lsof and netstat, where I see my oc4j instance under test load
    with only 1 or 2 conns to the db box, and the ephemeral port is tumbling, meaning a new socket is
    being opened for each conn. ;( grrrrrrr
    Does anyone have a clue as to why this is happening?
    Thanks, curt
    my data-sources.xml
    <data-sources>
         <data-source
            class="com.evermind.sql.DriverManagerDataSource"
            connection-driver="oracle.jdbc.driver.OracleDriver"
            ejb-location="jdbc/DEVDS"
            location="jdbc/DEVCoreDS"
            name="DEVDS"
            password="j2train"
            pooled-location="jdbc/DEVPooledDS"
            url="jdbc:oracle:thin:@10.2.1.30:1521:GDOC"
            username="jscribe"
            xa-location="jdbc/xa/DEVXADS"
            inactivity-timeout="300"
            max-connections="50"
            min-connections="40"
        />
    </data-sources>

    I've run another test using local data-source.xml, that's packaged in the .ear. Still
    pooling under BC4J doesn't work??
    A piece of info is that the 903 oc4j release notes state that global conn pooling doesn't
    work, inferring that the j2ee/home/config/data-sources.xml data sources aren't pooled, or ??
    I just tested so called local connection pooling, where I edited the data-sources.xml that
    gets packaged in the ear, to include the min/max params and re-ran my test.
    Still, the AM creates a new conn, it's to a new socket, and closes the conn when done, causing
    each conn to not be pooled but rather opened then closed to the DB box. As verified with lsof and
    netstat, the ephemeral port # on the DB box side always changes, meaning it's a
    new socket and not an old pooled conn socket.
    ???? What the heck??
    Surely if the AM conn check out / return code works properly, OC4J's pooling JDBC driver would
    pool and not close the socket??
    Has anyone gotten JDBC DataSource connections in BC4J to actually be pooled under OC4J??
    Since I couldn't get this to work in my early 902 oc4j testing, and now can't get it to work
    still under 903 OC4J, either it's my config or BC4J AM's code or OC4J?
    Any thoughts on how to figure out what's not configured correctly or has a bug?
    Thanks, curt
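    One thing worth ruling out (an assumption about OC4J's DriverManagerDataSource, not something confirmed in this thread): the location/ejb-location and pooled-location attributes bind different wrappers in JNDI, and only the pooled-location entry is guaranteed to be the pooled one, so an app that looks up jdbc/DEVCoreDS may get raw connections. A quick in-container check of what each name returns (hypothetical; would need to run inside the container, e.g. from a servlet or startup class):
    // JNDI names taken from the data-sources.xml above;
    // prints the concrete DataSource classes bound at each name
    import javax.naming.Context;
    import javax.naming.InitialContext;
    import javax.sql.DataSource;
    public class DsCheck {
        public static void main(String[] args) throws Exception {
            Context ctx = new InitialContext();
            DataSource plain  = (DataSource) ctx.lookup("jdbc/DEVCoreDS");   // location
            DataSource pooled = (DataSource) ctx.lookup("jdbc/DEVPooledDS"); // pooled-location
            System.out.println("location binds:        " + plain.getClass().getName());
            System.out.println("pooled-location binds: " + pooled.getClass().getName());
        }
    }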

  • Pool Master server failover  issue in Oracle VM 2.2.1

    Hello All, we are new to the Oracle VM world. Sorry about the detailed explanation.
    Our current configuration is: server1 - pool master/utility/VM server & server2 - utility/VM server.
    We have guest-VM running on both servers, and serverpool-VIP is configured properly. Below is our OVS-version.
    #rpm -qa | grep -i ovs
    oracle-logos-4.9.17-7ovs
    enterprise-linux-ovs-5-1.0
    ovs-release-2.2-1.0
    ovs-utils-1.0-34
    kernel-ovs-2.6.18-128.2.1.4.25.el5
    ovs-agent-2.3-42
    When we tested HA failover (shutting down server1), it worked fine as expected. The pool master moved from server1 to server2, and the guest VMs restarted on server2 (which were running on server1 earlier).
    Now the pool master is server2.
    When we shut down server2, the pool master is not migrated to server1, the guest VMs (running on server2) all went to power-off mode, and the server pool is in "inactive" status.
    Found the below error in server1's /var/log/messages. It seems like some dead-lock situation: the serverpool-VIP is not moved from server2 to server1 until server2 comes back online. Why is it so? The expected result is that the pool master and the serverpool-VIP should move to server1, but they didn't.
    Anyone experienced this? Any help/ input is appreciated.
    log file from server1's /var/log/ovs-agent/ovs_remaster.log
    2011-01-14 01:47:56 INFO=> run(): release_master_dlm_lock ...
    2011-01-14 01:48:02 INFO=> run(): release_master_dlm_lock ...
    2011-01-14 01:48:08 INFO=> run(): release_master_dlm_lock ...
    2011-01-14 01:48:14 INFO=> run(): release_master_dlm_lock ...
    2011-01-14 01:48:20 INFO=> run(): release_master_dlm_lock ...
    2011-01-14 01:48:26 INFO=> run(): release_master_dlm_lock ...
    ***** At this time its waiting to release the server pool-VIP on server 2
    *** Once server2 came online, serverpool-VIP released and taken by server1***
    2011-01-14 01:54:11 INFO=> cluster_get_next_master: => {"status": "SUCC", "value": "10.24.60.41"}
    2011-01-14 01:54:11 INFO=> run(): cluster_get_next_master: => {"status": "SUCC", "value": "10.24.60.41"}
    2011-01-14 01:54:13 INFO=> run(): clusterm_setup_master_env: => {"status": "SUCC"}
    2011-01-14 01:54:20 INFO=> run(): i am the new master. vip=10.24.60.45
    truncated logs from server1's /var/log/messages
    Jan 14 01:46:40 fwblade1 kernel: ocfs2_dlm: Node 1 leaves domain 70FFE4CF84634F5DB61BEA66E04693A7
    Jan 14 01:46:40 fwblade1 kernel: ocfs2_dlm: Nodes in domain ("70FFE4CF84634F5DB61BEA66E04693A7"): 0
    Jan 14 01:47:59 fwblade1 kernel: ocfs2_dlm: Node 1 leaves domain ovm
    Jan 14 01:47:59 fwblade1 kernel: ocfs2_dlm: Nodes in domain ("ovm"): 0
    Jan 14 01:48:55 fwblade1 kernel: o2net: connection to node fwblade2.wg.kns.com (num 1) at 10.24.60.42:7777 has been idle for 30.0 seconds, shutting it down.
    Jan 14 01:48:55 fwblade1 kernel: (0,0):o2net_idle_timer:1503 here are some times that might help debug the situation: (tmr 1294987705.665702 now 1294987735.663612 dr 1294987705.665695 adv 1294987705.665724:1294987705.665725 func (53ed487f:505) 1294987705.665424:1294987705.665428)
    Jan 14 01:48:55 fwblade1 kernel: o2net: no longer connected to node fwblade2.wg.kns.com (num 1) at 10.24.60.42:7777
    Jan 14 01:48:55 fwblade1 kernel: (5190,0):dlm_send_remote_lock_request:333 ERROR: status = -112
    Jan 14 01:48:55 fwblade1 kernel: (5186,2):dlm_send_remote_lock_request:333 ERROR: status = -107
    Jan 14 01:48:55 fwblade1 kernel: (5190,0):dlm_send_remote_lock_request:333 ERROR: status = -107
    Jan 14 01:48:55 fwblade1 kernel: (5186,2):dlm_send_remote_lock_request:333 ERROR: status = -107
    ** the above messages repeated until server2 came back online ***
    Jan 14 01:48:57 fwblade1 kernel: (4694,2):dlm_drop_lockres_ref:2211 ERROR: status = -107
    Jan 14 01:48:57 fwblade1 kernel: (4694,2):dlm_purge_lockres:206 ERROR: status = -107
    Jan 14 01:48:57 fwblade1 kernel: (4694,2):dlm_drop_lockres_ref:2211 ERROR: status = -107
    Jan 14 01:48:57 fwblade1 kernel: (4694,2):dlm_purge_lockres:206 ERROR: status = -107
    Jan 14 01:49:30 fwblade1 kernel: (4651,0):ocfs2_dlm_eviction_cb:98 device (253,0): dlm has evicted node 1
    Jan 14 01:49:30 fwblade1 kernel: (32373,0):dlm_get_lock_resource:844 78CD07B6D4C34CEAB756BF56E6D9C561:M00000000000000000002182aa14db5: at least one node (1) to recover before lock mastery can begin
    ** Still no sign of server1 taking up the serverpool-VIP, all the guest-VMs are still in power-off status ***
    Jan 14 01:49:35 fwblade1 kernel: (4695,0):dlm_get_lock_resource:844 78CD07B6D4C34CEAB756BF56E6D9C561:$RECOVERY: at least one node (1) to recover before lock mastery can begin
    Jan 14 01:49:35 fwblade1 kernel: (4695,0):dlm_get_lock_resource:878 78CD07B6D4C34CEAB756BF56E6D9C561: recovery map is not empty, but must master $RECOVERY lock now
    Jan 14 01:49:35 fwblade1 kernel: (4695,0):dlm_do_recovery:524 (4695) Node 0 is the Recovery Master for the Dead Node 1 for Domain 78CD07B6D4C34CEAB756BF56E6D9C561
    ** still no luck.. all guest VM are down***
    Jan 14 01:53:59 fwblade1 kernel: (5186,1):dlm_send_remote_lock_request:333 ERROR: status = -92
    Jan 14 01:53:59 fwblade1 kernel: (5190,10):dlm_send_remote_lock_request:333 ERROR: status = -92
    Jan 14 01:53:59 fwblade1 kernel: (5186,1):dlm_send_remote_lock_request:333 ERROR: status = -92
    Jan 14 01:53:59 fwblade1 kernel: (5190,10):dlm_send_remote_lock_request:333 ERROR: status = -92
    Jan 14 01:53:59 fwblade1 kernel: (5186,1):dlm_send_remote_lock_request:333 ERROR: status = -92
    Jan 14 01:54:00 fwblade1 kernel: (5190,10):dlm_send_remote_lock_request:333 ERROR: status = -92
    Jan 14 01:54:00 fwblade1 kernel: (5186,1):dlm_send_remote_lock_request:333 ERROR: status = -92
    Jan 14 01:54:00 fwblade1 kernel: ocfs2_dlm: Node 1 joins domain 78CD07B6D4C34CEAB756BF56E6D9C561
    Jan 14 01:54:00 fwblade1 kernel: ocfs2_dlm: Nodes in domain ("78CD07B6D4C34CEAB756BF56E6D9C561"): 0 1
    Jan 14 01:54:00 fwblade1 kernel: (5190,10):dlmlock_remote:269 ERROR: dlm status = DLM_IVLOCKID
    Jan 14 01:54:00 fwblade1 kernel: (5190,10):dlmlock:747 ERROR: dlm status = DLM_IVLOCKID
    Jan 14 01:54:00 fwblade1 kernel: (5190,10):ocfs2_lock_create:997 ERROR: DLM error DLM_IVLOCKID while calling dlmlock on resource F000000000000000000a50dd279960c: bad lockid
    Jan 14 01:54:00 fwblade1 kernel: (5190,10):ocfs2_file_lock:1584 ERROR: status = -22
    Jan 14 01:54:00 fwblade1 kernel: (5190,10):ocfs2_do_flock:79 ERROR: status = -22
    Jan 14 01:54:00 fwblade1 kernel: (5186,1):dlmlock_remote:269 ERROR: dlm status = DLM_IVLOCKID
    Jan 14 01:54:00 fwblade1 kernel: (5186,1):dlmlock:747 ERROR: dlm status = DLM_IVLOCKID
    Jan 14 01:54:00 fwblade1 kernel: (5186,1):ocfs2_lock_create:997 ERROR: DLM error DLM_IVLOCKID while calling dlmlock on resource F000000000000000000a50dd279960c: bad lockid
    Jan 14 01:54:00 fwblade1 kernel: (5186,1):ocfs2_file_lock:1584 ERROR: status = -22
    Jan 14 01:54:00 fwblade1 kernel: (5186,1):ocfs2_do_flock:79 ERROR: status = -22
    Jan 14 01:54:05 fwblade1 kernel: ocfs2_dlm: Node 1 joins domain 70FFE4CF84634F5DB61BEA66E04693A7
    Jan 14 01:54:05 fwblade1 kernel: ocfs2_dlm: Nodes in domain ("70FFE4CF84634F5DB61BEA66E04693A7"): 0 1
    ** Now server2 came online (the old pool master) and the server-pool VIP moved to server1. All guest VMs were restarted on SERVER2 itself. **
    Thanks
    Prakash

    You might be running into a OCFS2 bug. Check the bug list for bug 1099
    http://oss.oracle.com/bugzilla/show_bug.cgi?id=1099
    Also related to this subject might be bugs 1095 and 1080. You might want to check with the OCFS2 guys at Oracle and participate in resolving this bug. I'm not sure this is the case; however, I think this is a good starting point.
    Please keep us posted.
    Regards,
    Johan Louwers.
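    When a node sits in release_master_dlm_lock like this, two standard OCFS2 commands can at least narrow down the cluster state from the surviving node (a sketch; both ship with ocfs2-tools on OVM 2.2):
    service o2cb status    # state of the o2cb cluster stack
    mounted.ocfs2 -f       # which nodes each OCFS2 volume believes have it mounted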

  • XML diff and merge - free lib?

    I'm using my own XML communication protocol and have hit a problem. I transfer a configuration with this protocol, and the config can grow very large. So I want to create a diff of the old/new configuration and transfer only this diff. On the receiver side the last received config should be patched with the diff.
    I'm using JDOM.
    Is there a free lib for XML diff and merge/patch available? I already googled for it but found only a commercial lib.
    Thanks in advance

    I like ExamDiff for Windows. It was free last time I downloaded it, but I haven't looked in a while.
    I don't know of any good merge tools, streamy. I'll have to crib what comes back from others.
    %

  • More Major Issues with ZFS + Dedup

    I'm having more problems - this time, very, very serious ones, with ZFS and deduplication. Deduplication is basically making my iSCSI targets completely inaccessible to the clients trying to access them over COMSTAR. I have two commands right now that are completely hung:
    1) zfs destroy pool/volume
    2) zfs set dedup=off pool
    The first command I started four hours ago, and it has barely removed 10G of the 50G that were allocated to that volume. It also seems to periodically cause the ZFS system to stop responding to any other I/O requests, which in turn causes major issues on my iSCSI clients. I cannot kill or pause the destroy command, and I've tried renicing it, to no avail. If anyone has any hints or suggestions on what I can do to overcome this issue, I'd very much appreciate that. I'm open to suggestions that will kill the destroy command, or will at least reprioritize it such that other I/O requests have precedence over this destroy command.
    Thanks,
    Nick

    To add some more detail, I've been reviewing iostat and zpool iostat for a couple of hours, and am seeing some very, very strange behavior. There seem to be three distinct patterns going on here.
    The first is extremely heavy writes. Using zpool iostat, I see write bandwidth in the 15MB/s range sustained for a few minutes. I'm guessing this is when ZFS is allowing normal access to volumes and when it is actually removing some of the data for the volume I tried to destroy. This only lasts for two to three minutes at a time before progressing to the next pattern.
    The second pattern is characterized by heavy, heavy read access - several thousand read operations per second, and several MB/s bandwidth. This also lasts for five or ten minutes before proceeding to the third pattern. During this time there is very little, if any, write activity.
    The third and final pattern is characterized by absolutely no write activity (0s in both the write ops/sec and the write bandwidth columns) and very, very small read activity. By small read activity, I mean 100-200 read ops per second, and 100-200K read bandwidth per second. This lasts for 30 to 40 minutes, and then the pattern proceeds back to the first one.
    I have no idea what to make of this, and I'm out of my league in terms of ZFS tools to figure out what's going on. This is extremely frustrating because all of my iSCSI clients are essentially dead right now - this destroy command has completely taken over my ZFS storage, and it seems like all I can do is sit and wait for it to finish, which, as this rate, will be another 12 hours.
    Also, during this time, if I look at the plain iostat command, I see that the read ops for the physical disk and the actv are within normal ranges, as are asvc_t and %w. %b, however, is pegged at 99-100%.
    Edited by: Nick on Jan 4, 2011 10:57 AM
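    For what it's worth, the pattern fits how dedup interacts with destroy (reasoning added here, not from the thread): every freed block must look up and decrement its reference count in the dedup table (DDT), and when the DDT does not fit in ARC those lookups become random reads, which would explain the long read-mostly phases. Two ways to gauge the DDT size (a sketch):
    zdb -DD pool           # DDT entry counts and a dedup histogram
    zpool status -D pool   # DDT summary, including estimated in-core size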

  • 903/902/BC4J can't get OC4J data-sources.xml conn pooling to work in production: help

    [cross posted to the j2ee forum]
    I have several BC4J ears deployed to a 903 instance of OC4J being configured as a standalone
    instance. I've had this problem since I started deploying in development on 902. So it's
    some basic problem that I've not mastered.
    I can't get data-sources.xml managed connection pooling to actually pool conns. I want
    to declare my JNDI JDBC source connection pool in j2ee/home/config/data-sources.xml and
    have all BC4J apps get conns from this JNDI JDBC pool. I've removed all data-sources.xml from
    my BC4J ears, and published the jndi jdbc source in my oc4j common data-sources.xml.
    I've tested that this is the place controlling the conn URL/login passwd by commenting it
    out of config/data-sources.xml and my BC4J apps then throw exceptions, can't get conn.
    I've set the oc4j startup cmd line with the BC4J property to enable connection pooling:
    -Djbo.doconnectionpooling=true
    Symptom:
    Connections are created and closed. Instead of being put back into the pool managed by oc4j,
    whatever BC4J or my data-sources.xml is doing, the connections are just being created and
    closed.
    I can verify this via (solaris) lsof and netstat, where I see my oc4j instance under test load
    with only 1 or 2 conns to the db box, and the ephemeral port is tumbling, meaning a new socket is
    being opened for each conn. ;( grrrrrrr
    Does anyone have a clue as to why this is happening?
    Thanks, curt
    my data-sources.xml
    <data-sources>
         <data-source
            class="com.evermind.sql.DriverManagerDataSource"
            connection-driver="oracle.jdbc.driver.OracleDriver"
            ejb-location="jdbc/DEVDS"
            location="jdbc/DEVCoreDS"
            name="DEVDS"
            password="j2train"
            pooled-location="jdbc/DEVPooledDS"
            url="jdbc:oracle:thin:@10.2.1.30:1521:GDOC"
            username="jscribe"
            xa-location="jdbc/xa/DEVXADS"
            inactivity-timeout="300"
            max-connections="50"
            min-connections="40"
        />
    </data-sources>

    Thanks Leif,
    Yes, set it to the location jndi path.
    A piece of info is that the 903 oc4j release notes state that global conn pooling doesn't
    work, inferring that the j2ee/home/config/data-sources.xml data sources aren't pooled, or ??
    I just tested so called local connection pooling, where I edited the data-sources.xml that
    gets packaged in the ear, to include the min/max params and re-ran my test.
    Still, the AM creates a new conn, it's to a new socket, and closes the conn when done, causing
    each conn to not be pooled but rather opened then closed to the DB box. As verified with lsof and
    netstat, the ephemeral port # on the DB box side always changes, meaning it's a
    new socket and not an old pooled conn socket.
    ???? What the heck??
    Surely if the AM conn check out / return code works properly, OC4J's pooling JDBC driver would
    pool and not close the socket??
    Has anyone gotten JDBC DataSource connections in BC4J to actually be pooled under OC4J??
    Since I couldn't get this to work in my early 902 oc4j testing, and now can't get it to work
    still under 903 OC4J, either it's my config or BC4J AM's code or OC4J?
    Any thoughts on how to figure out what's not configured correctly or has a bug?
    Thanks, curt

  • Problem in new environment and in old environment.

    Hi gurus
    I have a problem with data loading. We have two environments, old and new,
    and the data flow in the new environment differs from the old one.
    The data flow is like this: on the BW side I have an ODS first, and from the ODS the data goes to two cubes.
    Of these two cubes, one is in the old environment and the other is in the new environment.
    My problem: I have a characteristic 0NOTIFICATION and a key figure, and this key figure is filled from a time characteristic.
    In the new environment, for some notification numbers, the key figure brings the same data as in the ODS.
    But in the old environment it brings some dates for the same notification numbers.
    I have checked the transfer rules and update rules for routines; no routines are present.
    How can I resolve this problem?
    And one more thing: I want to check whether the remaining notifications have the same problem. I have a huge amount of data in the cubes to compare, so please let me know a simple way to reconcile the data.
    Thanks in advance

    I couldn't get a LC ES Forms server solution to this problem.  Adobe LC support stated that I had to modify the XDPs that show the bolding problem.  According to the Product Engineering and Escalations group, LC ES 8.2.1 contained changes with respect to font attributes, so we needed to modify the XDP.  There are no server-level changes that can solve the problem.
    Solution for me:
    1.  Open the XDP that showed the bolding problem in LC Designer ES SP3.
    2.  Find the bolded Chinese characters and unbold them.  Save the XDP.
    3.  Drop the updated XDP on to the LC ES Update 1 (8.2.1) Forms server.
    4.  Render the form in LC ES Update 1 (8.2.1).  No unwanted bolded characters for me.  As a plus, I noticed the XDP grew smaller in terms of file size after I saved it in ES Designer SP3.

  • Flex Diff Algorithm

    A guy named Paul Butler made a diff algorithm for PHP.  It compares two strings and shows you the differences between them.  The original PHP code is here:
    http://github.com/paulgb/simplediff/blob/5bfe1d2a8f967c7901ace50f04ac2d9308ed3169/simplediff.php
    I have translated it into Flex and given it an AIR wrapper (the two primary functions, diff and htmlDiff, should work fine in any Flex app, but the wrapper requires AIR since it uses an HTML container).  Just putting this out there, as it is kind of cool and may be useful to some folks.  The code is pretty ugly, so if anyone wants to clean it up, please do so and repost (the comments were meant to help me translate from PHP to ActionScript):
    <?xml version="1.0" encoding="utf-8"?>
    <mx:WindowedApplication xmlns:mx="http://www.adobe.com/2006/mxml" layout="absolute" height="300" width="400" viewSourceURL="srcview/index.html">
        <mx:Label x="10" y="10" text="Old:"/>
        <mx:TextArea x="10" y="36" width="194" height="110" id="oldbox" text="This is the original text before any changes have been made."/>
        <mx:Label x="212" y="10" text="New:"/>
        <mx:TextArea x="212" y="36" width="176" height="110" id="newbox" text="This is the new text after all the changes were made."/>
        <mx:HTML right="10" left="10" top="180" bottom="10" id="result"/>
        <mx:Button x="347" y="8" label="Go" click="go()"/>
        <mx:Script>
            <![CDATA[
                public function diff($old:Array, $new:Array):Array{
                    var $matrix:Array = new Array();
                    var $maxlen:uint = 0;
                    var $omax:uint = 0;
                    var $nmax:uint = 0;
                    var $return:Array = new Array();
                    var $subreturn:Array;
                    //The next two lines imitate foreach($old as $oindex => $ovalue){
                    for (var $oindexstring:String in $old){
                        var $oindex:uint = uint($oindexstring);
                        var $ovalue:String = $old[$oindex];
                        //The next lines imitate $nkeys = array_keys($new, $ovalue);
                        var $nkeys:Array = new Array();
                        for (var $nkey:String in $new){
                            if ($new[$nkey] == $ovalue){
                                $nkeys.push($nkey);
                            }
                        }
                        //The next line imitates foreach($nkeys as $nindex){
                        for each (var $nindex:uint in $nkeys){
                            //The next lines imitate $matrix[$oindex][$nindex] =
                            //  isset($matrix[$oindex - 1][$nindex - 1]) ? $matrix[$oindex - 1][$nindex - 1] + 1 : 1;
                            if ($matrix[$oindex] == null){
                                $matrix[$oindex] = new Array();
                            }
                            if ($matrix[$oindex - 1] != null && $matrix[$oindex - 1][$nindex - 1] != null){
                                $matrix[$oindex][$nindex] = $matrix[$oindex - 1][$nindex - 1] + 1;
                            } else {
                                $matrix[$oindex][$nindex] = 1;
                            }
                            if ($matrix[$oindex][$nindex] > $maxlen){
                                $maxlen = $matrix[$oindex][$nindex];
                                $omax = $oindex + 1 - $maxlen;
                                $nmax = $nindex + 1 - $maxlen;
                            }
                        }
                    }
                    if ($maxlen == 0){
                        $subreturn = new Array();
                        $subreturn["d"] = $old;
                        $subreturn["i"] = $new;
                        $return.push($subreturn);
                        return $return;
                    } else {
                        //Next line is diff(array_slice($old, 0, $omax), array_slice($new, 0, $nmax));
                        $return = $return.concat(diff($old.slice(0, $omax), $new.slice(0, $nmax)));
                        //Next line imitates array_slice($new, $nmax, $maxlen)
                        $return = $return.concat($new.slice($nmax, $nmax + $maxlen));
                        //Next line is diff(array_slice($old, $omax + $maxlen), array_slice($new, $nmax + $maxlen))
                        $return = $return.concat(diff($old.slice($omax + $maxlen), $new.slice($nmax + $maxlen)));
                        return $return;
                    }
                }
                public function go():void{
                    result.htmlText = htmlDiff(oldbox.text, newbox.text);
                }
                public function htmlDiff($old:String, $new:String):String{
                    var $ret:String = "";
                    var $diff:Array = diff($old.split(" "), $new.split(" "));
                    for each (var $k:Object in $diff){
                        if ($k.constructor == Array){
                            //$ret += (!empty($k['d'])?"<del>".implode(' ',$k['d'])."</del> ":'').
                            if ($k["d"] == undefined || $k["d"].length == 0){
                                $ret += "";
                            } else {
                                $ret += "<del style=\"color: red ; text-decoration: line-through\" >"+$k["d"].join(" ")+"</del> ";
                            }
                            //(!empty($k['i'])?"<ins>".implode(' ',$k['i'])."</ins> ":'');
                            if ($k["i"] == undefined || $k["i"].length == 0){
                                $ret += "";
                            } else {
                                $ret += "<ins style=\"color: blue ; text-decoration: underline\" >"+$k["i"].join(" ")+"</ins> ";
                            }
                        } else {
                            $ret += $k + ' ';
                        }
                    }
                    return "<html><body>"+$ret+"</body></html>";
                }
            ]]>
        </mx:Script>
        <mx:Label x="10" y="154" text="Diff Result:"/>
    </mx:WindowedApplication>

    Spend a good amount of time with all the Java building blocks, especially OOP, data structures, collections etc. Ask more specific questions if you encounter problems.
    Also have a look at an open-source Java diff library such as http://code.google.com/p/java-diff-utils; go through its source code and see if you can find something useful.
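    Since java-diff-utils was just mentioned, here is a minimal diff/patch round trip with it (a sketch against the library's 1.x API; it treats input as plain lines, so an XML config would still be diffed in its serialized form):
    import difflib.DiffUtils;
    import difflib.Patch;
    import java.util.Arrays;
    import java.util.List;
    public class DiffRoundTrip {
        public static void main(String[] args) throws Exception {
            List<String> oldText = Arrays.asList("the", "quick", "fox");
            List<String> newText = Arrays.asList("the", "slow", "fox");
            Patch patch = DiffUtils.diff(oldText, newText);       // sender: compute the diff
            List<?> restored = DiffUtils.patch(oldText, patch);   // receiver: apply it
            System.out.println(restored.equals(newText));         // true
        }
    }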

  • ZFS mount points and zones

    folks,
    a little history, we've been running cluster 3.2.x with failover zones (using the containers data service) where the zoneroot is installed on a failover zpool (using HAStoragePlus). it's worked ok, but could be better given the real problems surrounding the lack of agents that work in this config (we're mostly an oracle shop). we've been using the joost manifests inside the zones, which are ok and have worked, but we wouldn't mind giving the oracle data services a go - and escaping the more than a little painful patching process in the current setup...
    we've started to look at failover applications amongst zones on the nodes, so we'd have something like node1:zone and node2:zone as potentials and the apps failing between them on 'node' failure and switchover. this way we'd actually be able to use the agents for oracle (DB, AS and EBS).
    with the current cluster we create various ZFS volumes within the pool (such as oradata) and through the zone boot resource have it mounted where we want inside the zone (in this case $ORACLE_BASE/oradata) with the global zone having the mount point of /export/zfs/<instance>/oradata.
    is there a way of achieving something like this with failover apps inside static zones? i know we can set the volume mountpoint to be what we want but we rather like having the various oracle zones all having a similar install (/app/oracle etc).
    we haven't looked at zone clusters at this stage if for no other reason than time....
    or is there a better way?
    thanks muchly,
    nelson

    i must be missing something...any ideas what and where?
    nelson
    devsun012~> zpool import Zbob
    devsun012~> zfs list|grep bob
    Zbob 56.9G 15.5G 21K /export/zfs/bob
    Zbob/oracle 56.8G 15.5G 56.8G /export/zfs/bob/oracle
    Zbob/oratab 1.54M 15.5G 1.54M /export/zfs/bob/oratab
    devsun012~> zpool export Zbob
    devsun012~> zoneadm -z bob list -v
    ID NAME STATUS PATH BRAND IP
    1 bob running /opt/zones/bob native shared
    devsun013~> zoneadm -z bob list -v
    ID NAME STATUS PATH BRAND IP
    16 bob running /opt/zones/bob native shared
    devsun012~> clrt list|egrep 'oracle_|HA'
    SUNW.HAStoragePlus:6
    SUNW.oracle_server:6
    SUNW.oracle_listener:5
    devsun012~> clrg create -n devsun012:bob,devsun013:bob bob-rg
    devsun012~> clrslh create -g bob-rg -h bob bob-lh-rs
    devsun012~> clrs create -g bob-rg -t SUNW.HAStoragePlus \
    root@devsun012 > -p FileSystemMountPoints=/app/oracle:/export/zfs/bob/oracle \
    root@devsun012 > bob-has-rs
    clrs: devsun013:bob - Entry for file system mount point /export/zfs/bob/oracle is absent from global zone /etc/vfstab.
    clrs: (C189917) VALIDATE on resource bob-has-rs, resource group bob-rg, exited with non-zero exit status.
    clrs: (C720144) Validation of resource bob-has-rs in resource group bob-rg on node devsun013:bob failed.
    clrs: (C891200) Failed to create resource "bob-has-rs".
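    One idea, though it is an assumption rather than something from this thread: the validation error is HAStoragePlus insisting on a vfstab entry for every FileSystemMountPoints entry, whereas a ZFS pool can be handed to the resource wholesale via the Zpools extension property, with the dataset mountpoints (or a lofs mount into the zone) providing the per-zone paths:
    devsun012~> clrs create -g bob-rg -t SUNW.HAStoragePlus \
    root@devsun012 > -p Zpools=Zbob \
    root@devsun012 > bob-has-rs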
