Running BDB 4.7.25 on Solaris 10 U8 with UltraSPARC IV+

Hello,
I have a question about building BDB 4.7.25 on Solaris 10 U8 with UltraSPARC IV+ CPUs.
I actually have two deployments, one on UltraSPARC IV+ and one on UltraSPARC IIIi. The same configuration is applied and both are built the same way, using the default configuration. The database starts correctly on both boxes; however, when I stress them with concurrent read/write/modify operations, the UltraSPARC IV+ box hangs with mutex lock issues, while the UltraSPARC IIIi box handles them with no problems at all.
I can't help but wonder: is there any significant difference between the two CPU versions, and does BDB require any special build for UltraSPARC IV+?
Thanks a lot

Ok thanks,
I am not aware of any difference that would cause such
a problem. Maybe someone else has more information.
One thought would be to run test_mutex (a standalone mutex
test for Berkeley DB mutexes) on both systems. You can find
this in your distribution at: db-4.7.25/mutex/test_mutex.c
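For a quick sanity check outside the full test, here is a rough illustration (a minimal, hedged sketch, not a replacement for test_mutex.c, which is far more thorough) of the DB_ENV mutex calls it exercises, written against the 4.7 API; the build paths in the comment are assumptions:

/*
 * Minimal sketch: allocate, lock, unlock and free one Berkeley DB mutex
 * in a private, throwaway environment.  Build it against the 4.7 library
 * built on each box, e.g. cc sketch.c -I/path/to/db/include
 * -L/path/to/db/lib -ldb-4.7 (paths are assumptions; adjust as needed).
 */
#include <stdio.h>
#include <stdlib.h>
#include <db.h>

int
main(void)
{
    DB_ENV *dbenv;
    db_mutex_t mtx;
    int ret;

    if ((ret = db_env_create(&dbenv, 0)) != 0) {
        fprintf(stderr, "db_env_create: %s\n", db_strerror(ret));
        return (EXIT_FAILURE);
    }
    /* DB_PRIVATE: regions live in process memory, no env directory needed. */
    if ((ret = dbenv->open(dbenv, NULL,
        DB_CREATE | DB_INIT_MPOOL | DB_PRIVATE | DB_THREAD, 0)) != 0) {
        dbenv->err(dbenv, ret, "DB_ENV->open");
        return (EXIT_FAILURE);
    }
    if ((ret = dbenv->mutex_alloc(dbenv, 0, &mtx)) != 0 ||
        (ret = dbenv->mutex_lock(dbenv, mtx)) != 0 ||
        (ret = dbenv->mutex_unlock(dbenv, mtx)) != 0 ||
        (ret = dbenv->mutex_free(dbenv, mtx)) != 0) {
        dbenv->err(dbenv, ret, "mutex alloc/lock/unlock/free");
        return (EXIT_FAILURE);
    }
    (void)dbenv->close(dbenv, 0);
    printf("single mutex cycle OK\n");
    return (EXIT_SUCCESS);
}

If even this trivial cycle misbehaves only on the UltraSPARC IV+ box, that would point at the mutex layer rather than at your application.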
Thanks,
Sandra

Similar Messages

  • HELP: WL 8.1 runs unexpectedly slow on Solaris 9 with low CPU utilization

    Hi All
    I have set up my app to run on WL 8.1 + Solaris 9 + a Sun mid-range server.
    The JVM was configured to use 3 GB of RAM and there is still abundant RAM on the HW server.
    I tried out a use case and it took a long time to get a response (~2 minutes), but the
    CPU utilization has always stayed below 20%. I tried the same test case
    on a Wintel server with 500 MB of RAM allocated to the JVM, and the response time was much quicker
    (less than 30 sec). I did the same on Solaris 8 with 3 GB of RAM using the alternate
    threads library (changing LD_LIBRARY_PATH to include /usr/lib/lwp), which
    is the default mode adopted by Solaris 9. The same use case responded much quicker,
    comparable to the above-mentioned test on Wintel. Can anybody advise on how to
    tune WL 8.1 on Solaris 9 so that it performs best? Is there any special
    trick?
    Thank you very much for any advice in advance
    dso

    "Arjan Kramer" <[email protected]> wrote:
    >
    Hi dso,
    I'm running the same two configs and am running into the same performance issues
    as you do. Please let me know if you get any response on this!
    Regards,
    Arjan Kramer
    There could be many factors that add to performance degradation: database, OS,
    network, app config, etc., so without knowing more it's difficult to tell.
    Can you please supply the startup Java options used to set the heap etc.? Having
    larger heap sizes is not always the best approach to building HA applications... the
    bigger they are, the harder they fall. I'd suggest using many but smaller instances.
    Provide the heap info from NT also.
    BTW, when WebLogic starts, can you tell me how much memory is being used in the
    console... i.e. the footprint of WebLogic + your application.
    Many Thanks

  • V490 running solaris 9 with warning messages:

    I have a V490 running solaris 9 with the full recommended patch cluster and the SFS_base packages (and the additional patches) installed.
    Installed in the V490 is 1x SG-XPCI2FC-QF2-Z (dual 2Gb fibre card). On rebooting the V490 the following warning appears:
    WARNING: fctl: ULP FCSM version mismatch; please upgrade FCSM
    Does anyone know if this is a major issue and if so is there a fix?
    I have never seen this message before as we usually use the X6768A, and maybe there is some difference in the card.

    I am having the same issue after installing the card "SG-XPCI1FC-QF4" on V490.
    I have opened a ticket.
    Is this a major issue, or just missing patches?

  • Solaris x86 with Oracle RAC 10g Enterprise Edition Release 10.2.0.3.0

    Hello,
    Maybe you can help me (I'm new to RMAN backup) with this.
    I have configured a single Oracle 10g database to be backed up with RMAN using the following steps:
    1. $ mkdir $ORACLE_BASE/rman_scripts
    2. $ mkdir $ORACLE_BASE/logs
    3. $ mkdir $ORACLE_BASE/tracking
    4. $ mkdir $ORACLE_BASE/c_backup
    5. $ sqlplus sys/<password> as sysdba
    6. SQL> alter system set db_recovery_file_dest_size = 50G scope=both;
    7. SQL> alter system set db_recovery_file_dest='${ORACLE_BASE}/flash_recovery_area' scope=both;
    8. SQL> alter system set log_archive_dest_10='location=use_db_recovery_file_dest';
    9. SQL> shutdown immediate
    10. SQL> startup nomount
    11. SQL> alter database archivelog;
    12. SQL> alter database open;
    13. SQL> alter database enable block change tracking using file '${ORACLE_BASE}/tracking/rman_change_track.f';
    14. $ rman target /
    15. RMAN> CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK
    TO '/var/opt/oracle/flash_recovery_area/ORCL/c_backup/%F';
    16. RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;
    17. RMAN> CONFIGURE BACKUP OPTIMIZATION ON;
    18. RMAN> CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;
    19. RMAN> exit
    I need to configure incremental backups with RMAN on a two-node Solaris x86 installation of Oracle RAC 10g Enterprise Edition Release 10.2.0.3.0.
    We also use ASM to store database files, and have Oracle software installed on separate file systems (two Oracle roots for Node1 and Node2).
    I have following questions:
    1) where to put Flash Recovery Area (FRA)?
    I saw recommendations to put FRA on the ASM, is this the best way to do it?
    2) Can I put FRA on another file system (not on the ASM) which is available only from Node1? This way I can save space on the ASM.
    3) Is it possible/recommended to run RMAN from Node1 only?
    Below is the script used to run RMAN on a normal (non-RAC) Oracle database, which I need to change:
    =============================================================================================
    2.0 Oracle backup script: /opt/app/oracle/rman_scripts/backup.sh
    Use this for daily backups, possibly as a cron job.
    Once a week run this: /opt/app/oracle/rman_scripts/backup.sh FULL
    All other days of the week: /opt/app/oracle/rman_scripts/backup.sh INCREMENTAL
    Note: You may have to change ORACLE_SID, ORACLE_BASE below to match your database.
    =============================================================================================
    #!/usr/bin/ksh
    ORACLE_SID=orcl
    ORACLE_BASE=/opt/app/oracle
    ORACLE_HOME=${ORACLE_BASE}/product/10.2.0/db_1
    PATH=${ORACLE_HOME}/bin:/usr/bin
    LOGDIR=${ORACLE_BASE}/logs
    LOGFILE=${LOGDIR}/rman.log
    if [[ $# -lt 1 ]]
    then
    echo "usage: backup.sh FULL|INCREMENTAL"
    exit;
    fi
    BACKUPTYPE=${1}
    full='FULL'
    incremental='INCREMENTAL'
    if [[ $BACKUPTYPE == $full ]]
    then
    $ORACLE_HOME/bin/rman target / nocatalog log ${LOGFILE} append << eof
    run {
    backup database;
    SQL 'alter system archive log current';
    backup archivelog all;
    delete noprompt obsolete;
    }
    exit;
    eof
    echo ''
    fi
    if [[ $BACKUPTYPE == $incremental ]]
    then
    $ORACLE_HOME/bin/rman target / nocatalog log ${LOGFILE} append << eof
    run {
    backup database;
    backup incremental level 1 database;
    SQL 'alter system archive log current';
    backup archivelog all;
    delete noprompt obsolete;
    }
    exit;
    eof
    echo ''
    fi

    Hi [email protected],
    Q1) where to put Flash Recovery Area (FRA)?
    A1) With RAC: on the shared storage
    I saw recommendations to put FRA on the ASM, is this the best way to do it?
    If you want your backups to be available to both nodes you have to use shared storage, or tape via an MML (media management library).
    So if you want to use the FRA for rman backups and the database is on ASM just make ASM the standard for the FRA as well.
    Q2) Can I put FRA on another file system (not on the ASM) which is available only from Node1? This way I can save space on the ASM.
    A2) Then you cannot recover in case Node1 is down. Best would be to send your storage admin to a training course so he can manage the clustered raw devices needed for ASM.
    Q3) Is it possible/recommended to run RMAN from Node1 only?
    A3) No, see A2.
    Regards,
    Tycho
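    To make A1 a bit more concrete, here is a hedged sketch (not from this thread; '+FRA' is a hypothetical ASM disk group name and the 100G size is arbitrary) of pointing the recovery area at shared ASM storage, run once from either node against the shared spfile:
    -- Assumes an ASM disk group named +FRA that is visible to both RAC nodes.
    -- SID='*' applies the change to every instance through the shared spfile.
    ALTER SYSTEM SET db_recovery_file_dest_size = 100G SCOPE=BOTH SID='*';
    ALTER SYSTEM SET db_recovery_file_dest = '+FRA' SCOPE=BOTH SID='*';
    With the FRA on shared ASM storage, the backup script can then be run from either node, or scheduled on one node with the other as a fallback.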

  • How to install solaris 8 with a nvidia M64 agp ?

    How to install solaris 8 with a nvidia M64 agp ?

    Use kdmconfig. Make sure you're running MU3 or greater because there are some patches
    specific to the M64 cards that are needed to run in the hi-res modes.
    ---Bob

  • Upgrade to Solaris 8 with Oracle 8.0.6

    Hi forum,
    on a Sparc server with Oracle 8.0.6 we want to upgrade from Solaris 7 to Solaris 8. Do we need to upgrade Oracle first, or is Oracle 8.0.6 supported on Solaris 8?
    Thanks
    Hans-Peter

    > Our organisation has been running SAP R/3 4.6C on Solaris 8 with Oracle 8i database. We are considering an upgrade to SAP ECC 6.0.
    Oh... you're aware of the fact that Oracle 8 as well as Solaris 8 is out of support?
    I would do it like this (if you need/want to stay on the same hardware):
    - upgrade Oracle 8 (I hope you run 8.1.7.4) to Oracle 9.2.0.8
    - upgrade Solaris 8 to Solaris 10
    - upgrade Oracle 9.2.0.8 to Oracle 10.2.0.4 + latest interim patches
    - use kernel 46D_EX2
    - upgrade the 4.6C to ERP 6.0
    Be aware that, if you plan to use Java applications (such as the Enterprise Portal or other NetWeaver-related applications), the ERP 6.0 must be converted to Unicode.
    Markus

  • Problem initializing libsapsecu.so on Solaris 10 with SAP NW2004s.

    We have developed a Java application that we intend to serve with SAP NetWeaver 7.0 (NW2004s). Our NetWeaver system is running on Solaris 10 with an Oracle DB. We want our application to be able to extract usernames from SAP Login Tickets, and to do this we need the external libraries libsapsecu.so and libsapssoext.so. However, when our application tries to initialize these libraries an exception is thrown:
    Mysapinitialize failed. rc=14
    We are using the Windows versions of these libraries on another server, though, and they work.
    Does anybody have experience using these libraries on Solaris, and if so, has anybody had similar problems?
    Any help would be greatly appreciated.

    Standard C++ defines two versions of qsort (and also bsearch): one that takes a pointer to a C function, and one that takes a pointer to a C++ function.
    Recall that in standard C++, a pointer to a C function has a different type than a pointer to a C++ function. This issue is discussed at length in the C++ Migration Guide that comes with the compiler.
    The version of qsort that takes a pointer to a C function is the C version of qsort, and is in libc.so (the basic Solaris runtime library that all programs use).
    The version of qsort that takes a pointer to a C++ function is in the C++ runtime library libCrun.so that all C++ programs use.
    But because it took a while for Solaris headers to be updated to the C++ requirement, early versions of libCrun did not have the C++ version of qsort (or bsearch). If you get the latest C++ runtime library patch (SUNWlibC) for your system, your program should link. You can get patches here:
    http://developers.sun.com/prodtech/cc/downloads/patches/index.html
    Not only the system where you build the program needs updating, but every system that runs the program you build.
    Alternatively, you can declare the comparison function extern "C" so that the C version of qsort will be used.
    extern "C"
    int comp(const void *pv1, const void *pv2);
    But if the comparison function is in a namespace or is a class member function, you cannot usefully declare it extern "C".
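    For illustration, here is a minimal hedged sketch of that workaround (reusing the comp declaration above; the data is made up). Because comp has C language linkage, the C qsort from libc is the one selected:
    #include <stdio.h>
    #include <stdlib.h>

    /* C linkage, so this matches the C qsort's comparison-function type. */
    extern "C" int comp(const void *pv1, const void *pv2)
    {
        int a = *static_cast<const int *>(pv1);
        int b = *static_cast<const int *>(pv2);
        return (a > b) - (a < b);
    }

    int main()
    {
        int v[] = { 3, 1, 2 };
        qsort(v, 3, sizeof v[0], comp);
        printf("%d %d %d\n", v[0], v[1], v[2]);   /* prints: 1 2 3 */
        return 0;
    }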

  • Networking Solaris OS With Windows

    Hi guys, I am currently developing a java web application using the Solaris 10 OS. It is a Chat application so in other words, people using windows OS must be able to use my Chat application.
    My question is how do i network Solaris 10 with Windows so that people using windows OS can use my application which is developed on the Solaris.

    Part of the issue may be how often you log onto Virtual XP.  Since both W7 and XP are separate and independent operating environments, they both need to be updated with all Microsoft and other software updates.  They also operate with separate security applications (e.g., anti-virus).  Depending on your automatic system update settings, when you log into XP, you may automatically install and run the latest version of Microsoft Security Essentials, or AVG Antivirus, or some other such application.  Or, you may be overdue for a scheduled system scan, so the scan is launched as soon as you start Virtual XP.  Systems such as these consume extremely large amounts of CPU time when they launch and do a system scan.  However, when they finish, CPU utilization drops to normal.
    If you don't constantly go back and forth between W7 and XP every day, you should expect that you will probably find very large CPU utilization from such scans if you have not logged into XP for some time.  In my experience, it will all go away.
    Also, remember that you should separately defrag both environments.  The W7 defrag may take hours, as Virtual XP actually exists as three very large files on your basic HDD, and defragging W7 does nothing to defrag all of the tens of thousands of individual Virtual XP files that are embedded in the huge system files (such as .vhd) which hold the XP environment.
    You might also want to check out the Auslogics Disk Defrag package.  It is freeware, and in my experience does a better job than Microsoft's systems.  Again, you would need to install separate copies of this package in each of the W7 and XP environments.
    T410 2522-K4U QuadCore Intel i7 Processor 8GB RAM 320GB SATA HDD
    64-bit Windows7 Pro, with Windows Virtual XP (as included under W7 license)
    D-Link 655 Wireless Network WEP encryption MAC Filtering
    -- Both 802.11b and g adapters on various network components

  • Does Solaris come with

    A management system for non-UNIX experts? Like CP+ in Trustix or Webmin open source?
    Thanks,
    Dave

    While Solaris ships with Webmin and the like, I'd advise you to skip the idea while you can, because if you really need to depend on packages like that I doubt you'll fare well running a Solaris server. Perhaps even worse: you'll end up with others running it for you.
    When it comes to running a server connected to the Internet you really have no choice but to get a good grasp of what you're doing, or hire someone to do it for you. Any other road is IMO bound to lead to compromises.

  • I am running Photoshop CC 2014 on Windows 8.1 with an AMD quad core processor and AMD E1-2100 APU with AMD Radeon HD 8210 Graphics Card. I have installed the latest driver version 13.152.0.0. When I activate puppet warp, the mesh appears over the image an

    I am running Photoshop CC 2014 on Windows 8.1 with an AMD quad core processor and AMD E1-2100 APU with AMD Radeon HD 8210 Graphics Card. I have installed the latest driver version 13.152.0.0. When I activate puppet warp, the mesh appears over the image and I can place the first pin.  However, when I go to add the second pin, Photoshop crashes. I have tried it on a new, blank file with just a few lines on it, but, as soon as I place the first pin, the image disappears, and then, as soon as I click anywhere else in the image, Photoshop crashes. I have followed all the online advice about settings under the Preferences/Performance interface, but nothing fixes the problem. Can anyone please help?

    BOILERPLATE TEXT:
    If you give complete and detailed information about your setup and the issue at hand, such as your platform (Mac or Win), exact versions of your OS, of Photoshop and of Bridge, machine specs, such as total installed RAM, scratch file HDs, video card specs, what troubleshooting steps you have taken so far, what error message(s) you receive, if having issues opening raw files also the exact camera make and model that generated them, etc., someone may be able to help you.
    A screen shot could be very helpful too.
    Please read this FAQ for advice on how to ask your questions correctly for quicker and better answers:
    http://forums.adobe.com/thread/419981?tstart=0
    Thanks!

  • I have just purchased a lightning to 30 pin adaptor to use on an iPad mini and a gen5 iPod touch both devices are running iOS 8.1.3 and come up with The error message "this accessory is not supported by this device "

    I have just purchased a lightning to 30 pin adaptor to use on an iPad mini and a gen5 iPod touch both devices are running iOS 8.1.3 and come up with The error message "this accessory is not supported by this device "
    This means they are not charging on a 30 pin cable, and the touch won't work in my Apple iPod dock - no audio out or charge. I have re-powered both devices to no avail and am wondering if the iOS version is the problem
    Help!

    If it is a lightning to 30 pin adaptor, and you have a 7th Generation Nano it has to fit the Nano.
    This is a lightning to 30 pin adapter: http://www.bestbuy.com/site/Apple%26%23174%3B---Lightning-to-30-Pin-Adapter/6651936.p?id=1218803450821&skuId=6651936#tab=overview
    Is this what you bought?
    You need to contact Sony and see if the model you have is compatible with the docking adapter. It may not be.

  • On Sun fire v490 - Solaris 10 with Oracle 8.1.7.4 & Sybase 12.0

    Hi,
    We are going to upgrade our server with this configuration -
    Sun Fire V490     2 x 1.05 GHz UltraSPARC IV CPU
    8096MB RAM     2 x73GB local disk
    2x FC 2GB Sun/QLogic HBAs
    DAT72
    On one machine we will have Sun Solaris v10 with
    Oracle DB v8.1.7.4 & Second one will be Sun Solaris v10 with Sybase DB v12.0.0.6.
    Now our question is: the Sun Fire has hyper-threaded CPUs - will the O/S and databases (Oracle and Sybase) view the proposed system as a true 4-CPU platform? Will parameters used to tune the database, such as Sybase max online engines, still operate in the same manner as before?
    Our old machine configuration was: Sun E450, 4x 400MHz CPU, 1024MB RAM, 2x 18GB and 8x 36GB disks

    Questions on Oracle and Sybase should be directed to a database forum, this forum is for Sun hardware support.
    Here is a link to a DB forum I look at from time to time:
    http://www.dbforums.com/index.php
    The topic of tuning Oracle or Solaris is way beyond the scope of this forum; I have attempted to go into it before but didn't get any feedback, and I would only like to spend lots of time on it if I was being paid!!! On the memory side, keep in mind that Oracle 9i 64-bit can address a maximum of 2^64 bytes (16,777,216 TB) of memory; prior to that the DBA had to define memory parameters in init.ora. To be honest, the last time I worked with an Oracle 8 database I permanently shut down an HP K-class server that had been migrated to Oracle 9i on Solaris by an Oracle consultant, and I can't remember all the tuning tricks etc.

  • I have a Windows 8.1 and it's even running slow.  I'm a complete novice with computers (I've only had this one for 3 weeks) and I'm probably doing something wrong, but I haven't a clue what....

    I have a Windows 8.1 and it's even running slow.  I'm a complete novice with computers (I've only had this one for 3 weeks) and I'm probably doing something wrong, but I haven't a clue what.
    The tools are not responding when I try to use them.  Some of them work sometimes, but not others and some don't work at all.  I'm in a design class online and I need these tools desperately.  I have an assignment due Monday and I'm losing the whole weekend because Tech Support is only open M-F!
    Any help anyone can give me will be appreciated.
    Thanks,
    Rose Ireland

    Maybe these links provide some pertinent information.
    Optimize performance | Photoshop CS4, CS5, CS6, CC
    Photoshop: Basic Troubleshooting steps to fix most issues

  • I'm told there is a virus associated with Adobe Reader. How can I tell if my machine is affected and if so what is the cure? I'm running OS X 10.6.8 on an iMac with Safari 5.1.5

    I'm told there is a virus associated with Adobe Reader. How can I tell if my machine is affected and if so what is the cure? I'm running OS X 10.6.8 on an iMac with Safari 5.1.5

    There are no viruses currently affecting the Mac but there is something called Flashback which is malware that has been doing the rounds and affected many Mac users. Apple have released some updates to Java that should remove it and improve protection. Run Software Update to see if there is anything for downloading.
    A few precautions that can help prevent your Mac becoming infected:
    If you use Flash only download it directly from Adobe.
    In all web browsers disable Java (but do leave Javascript on as that's something else).
    In all web browsers make sure downloaded files aren't set to automatically open after downloading.
    Consider disabling Java completely (launch Java Preferences in the Utilities folder and disable it - you'll likely rarely, if ever, need it. If you do, just turn it on and off again when you're done).
    You can also check out this link:
    http://lifehacker.com/5900434/how-to-find-out-if-your-mac-was-infected-by-the-flashback-trojan-in-one-click

  • HT4736 Taking too long (2 minutes) to locate and download photos into iPhoto '11 9.5 (902.7 build) running on an older Intel based MacBook Pro with iPhoto libraries on a USB2 External HD.  I checked and repaired HD permissions. Any ideas?

    Regarding iPhoto '11 9.5 (902.7 build) running on an older Intel based MacBook Pro with iPhoto libraries on a USB2 External HD: I am dealing with iPhoto taking too long to download photos.  Specifically, I rechecked and repaired HD permissions. I am running the most current software my five year old Intel MacBook Pro can run.  What happens is that when I connect an external SD card, or my iPhone, the new version of iPhoto takes up to two full minutes to fully acknowledge the device, then locate new photos and be ready to download them to my external HD.  I am kind of concerned about this.  This has never happened before.
    I take 20,000 photos a year.  I really don't want to lose any.  Or is there something I am doing wrong?  Or need to be aware of?  Any experienced suggestions would be appreciated.  Thanks.  Have a great day.
    PS.... The cameras I use are Canon SX-30, Nikon D3100, and my iPhone 4S.  Thanks again for your assistance.

    Hello Old Toad.... Those sound like great ideas. 
    I thought I checked and repaired disk permissions on my main boot HD.  That boot disk is Mac OS Extended (Journaled)  Capacity 749.3 GB.  Available 562.53 GB.   BUT.... now that I think of it.... the Seagate external HD with USB2 interface is: Mac OS Extended (Journaled), Capacity 639.79 GB, Available 36.2 GB with my latest iPhoto Library 517.37 GB that was already scanned & updated to be read by the latest iPhoto version. 
    I'll try your suggestions tonight as far as double checking 'permissions' and setting up a tiny test library.
    Or maybe it's time to fill up another External HD?
    I appreciate your and anyone else's suggestions to try.
    Have a great day. ~~ David in Rochester NY
