JDS and Linux-2.6.6 kernel

http://supportforum.sun.com/sjds/index.php?t=msg&th=765&start=0&rid=5310&SQ=ea5bd8aac21c62bfceb22cbea787cd9c

Installing a 2.6 kernel on JDS is much like installing a 2.6 kernel on any Linux distribution. You can find details at http://www.kernel.org. Be aware of the potential for new hardware issues. For example, some LG CD-ROM drives have a firmware bug that causes them to interpret the standard IDE bus command used by kernel 2.6 when probing hardware as "UPLOAD firmware."

Similar Messages

  • Java applet and Linux RedHat 7.1 kernel 2.4.2-2

    Hello.
    I have a really interesting problem...
    I am trying to run a Java applet in different environments:
    #1. Windows 2000, Sun Java 1.3.1, Web Start 1.0.1;
    #2. Linux RedHat 7.1 kernel 2.4.2-2, Sun Java 1.3.1, Web Start 1.0.1;
    #3. Linux RedHat 7.0, kernel 2.2.16-22, Sun Java 1.3.0-01, Web Start 1.0.1;
    #4. SunOS 5.8, Sun Java 1.3.0, Web Start 1.0.1.
    I found that Web Start can't launch the applet at full size under #2.
    The applet runs, with no error messages in the Java console, but the
    applet frame shrinks to its minimum size. At the same time the application
    runs perfectly, without any changes, under the other environments (#1, #3, #4).
    JNLP file:
    <?xml version="1.0" encoding="utf-8"?>
    <!-- JNLP File for FpEdit.jar Application -->
    <jnlp spec="1.0+"
          codebase="http://www.mycodebase"
          href="FpEdit.jnlp">
      <information>
        <title>Ufp Editor Application</title>
        <vendor>ME</vendor>
        <homepage href="http://www.myhomepage"/>
        <description>Ufp Editor Application</description>
        <description kind="short">Swing Graphical User Interface.</description>
        <offline-allowed/>
      </information>
      <security>
        <all-permissions/>
      </security>
      <resources>
        <j2se version="1.3+" initial-heap-size="30m" max-heap-size="50m"/>
        <jar href="http://www.myhref/SFpEdit.jar"/>
      </resources>
      <applet-desc documentBase="http://www.mydocumentbase"
                   name="FpEdit"
                   main-class="FpEdit"
                   width="950"
                   height="590">
      </applet-desc>
    </jnlp>
    Maybe somebody can explain to me what the problem is?
    Thanks.

    Sorry about the repost, but maybe somebody can help me...
    To me, this looks like a problem between the new
    Linux kernel 2.4.x and Java...

  • Mac OS X and Linux Zeroconf LAN irregularities

    I would like to at least understand, if not remedy,
    an annoyance in establishing the connection between
    my Mac and my Linux box. I have yet to find anything
    constructive relative to the issue via the google
    shuffle or manual pages. Any thoughts, experiences
    and/or further troubleshooting steps would be
    appreciated. Thank you.
    ==============================================
    Installation:
    PMac G5 running OS X Tiger (10.4.4)
    Linux printer is shared with Mac via CUPS,
    not classic AppleTalk.
    P4 with Debian Etch (testing), kernel 2.6.12-1-686,
    Gnome desktop and USB attached printer and scanner.
    With netatalk 2.0.3-4 and task-howl (0.9.5-2), which
    includes the howl mdnsresponder (0.9.5-2), installed.
    Only the netatalk afpd and cnid_metad daemons are being run.
    hardwired ethernet with the Mac and Linux box connected
    to a Belkin 4 port router
    ==============================================
    Trials:
    If I boot up both my Mac and Linux box:
    Linux has autoipd, mDNSResponder and nifd daemons running.
    Mac has mDNSResponder and netinfod daemons running (don't
    know which others are pertinent to note).
    If on my Mac in a Finder pane I click Network -> My Network
    -> debian1 -> Connect then I get the message:
    AFP connection status
    Looking up "debian1.local.."
    which times out with the message:
    Connection failed - server may not exist blah blah
    I can, however, on my Mac in the Finder menu bar select
    Go -> Connect to Server -> enter afp://192.168.2.48/
    (my Linux box) -> enter user and password of shared
    directory -> select share to mount; and I have my
    connection. If I dismount the share though, within a
    session, and try to connect again this method works and
    the Finder pane method still does not.
    Alternatively, on my Linux box I can run the cli
    # /etc/init.d/mdnsresponder restart
    Then back on my Mac the Finder pane method of establishing
    a connection (that failed above) works as it should.
    Further, I can dismount the share and reconnect with this
    method any number of times in the same session, and even
    reconnect with this method after rebooting only my Mac.
    However, if I reboot my Linux box I'm back to Finder menu
    bar or mdnsresponder restart reconnect choices.
    ==============================================
    What the above trials tell me is that if I restart
    mdnsresponder then everything is "peachy" until my Linux
    box is rebooted. What they do not tell me is if there is
    something flaky with mdnsresponder, my Linux installation,
    an incompatibility with Tiger, or even some other
    interference on either box.
    ==============================================
    Lee C
    "Life is judged with all the blindness of life itself."
    -- George Santayana
    (see Backup::Restore article
    http://homepage.mac.com/lee_cullens/Bx3.html )
    (see Artworks sampling
    http://homepage.mac.com/lee_cullens/Artworks.pdf )

    I posted the following to the Debian list for obvious reasons, but I'm also posting it here in the hope that someone might shed some light on my suspicions relative to the Mac log.
    The howl-tools portion of my zeroconf installation is being phased
    out in favor of the newer Avahi as I understand it. However, Avahi
    hasn't made its way down to testing yet so for the time being I'm
    trying to make do with howl-tools. The point in my mentioning
    this here is that much of the information I find is outdated.
    That said, the only thing I've found in the Linux bootup logs yet
    (using dmesg) that I recognize is the line
    autoipd uses obsolete (PFINET,SOCKPACKET)
    So, using autoipd as a starting point for further researching
    this issue I found the following
    (at http://www.porchdogsoft.com/products/howl/InstallUnix.html )
    Ideally, autoipd should run only in the event that the interface has not been statically configured and DHCP fails. Running autoipd this way requires modification to the standard distribution boot scripts for your OS. These modifications vary depending on your version of Linux or FreeBSD. On RedHat Linux, for example, the /sbin/ifup script may be modified to launch autoipd in the event that dhclient fails. On our systems, we added the following line to /sbin/ifup right after dhclient is launched so that autoipd runs whenever DHCP fails:
    elif [ -z "`pidof -x dhclient`" ] && [ -z "`pidof -x dhcpcd`" ] && \
         [ -z "`pidof -x pump`" ] && [ -x /usr/local/bin/autoipd ] && \
         /usr/local/bin/autoipd -i ${DEVICE}; then
        echo $"started autoip."
    Note that since this change causes "$NUMDEFROUTES" to become zero-length, the subsequent code in /sbin/ifup for fixing the duplicate routes generated by DHCP also gets modified in the howl version of ifup.
    You may need to modify your boot scripts differently depending on your platform. We have included the original ifup script for RedHat 9 along with our modified version (howl_ifup) as an example so that you may more easily identify how to modify the boot scripts for your platform.
    A lot of Greek to me but I did look in /etc/init.d and found
    the autoipd script which is the normal boilerplate (same as all
    the others with autoipd substituted for the daemon name).
    Even if I felt comfortable adding the above in the
    /etc/init.d/autoipd script, I have no idea whether it would be
    dealt with after dhclient.
    I also found in the Mac system log after a reboot:
    Jan 26 02:22:48 slpmacg5 mDNSResponder: Adding browse domain local.
    Jan 26 02:22:48 slpmacg5 configd[33]: executing /System/Library/SystemConfiguration/Kicker.bundle/Contents/Resources/enable-net work
    Jan 26 02:22:48 slpmacg5 configd[33]: posting notification com.apple.system.config.network_change
    Jan 26 02:22:48 slpmacg5 lookupd[80]: lookupd (version 369.2) starting - Thu Jan 26 02:22:48 2006
    Jan 26 02:22:49 slpmacg5 configd[33]: target=enable-network: disabled
    Jan 26 02:22:50 slpmacg5 configd[33]: AppleTalk startup complete
    Jan 26 02:23:59 slpmacg5 automount[148]: NSLXResolveDNS will try and resolve [debian1] of type [afpovertcp.tcp.] in location [local.]\n
    so I'm thinking (dangerous at my age) that my Mac isn't "discovering"
    what it needs until I restart (stop and start) mdnsresponder on my Linux box?
    What I'm asking is whether anyone thinks this (autoipd) on Linux is a
    likely cause of my described connection annoyance?
    If so, is the referenced info applicable to Debian today and how
    would one go about applying it?
    Or, alternatively, might it make sense to just disable autoipd
    startup altogether, and how would I best go about that - just something
    like update-rc.d autoipd remove?
    Thank you,
    Lee C

  • Where to download OAS9i V1.0.2.2 for Linux RedHat 6.2 (kernel 2.2.16 or later)?

    So that we may better diagnose problems, please provide the following information.
    - Server name:linux1
    - Filename
    - Date/Time: 2/6/2002
    - Browser + Version: Netscape 6
    - O/S + Version: Linux RedHat 6.2 (Kernel 2.2.19, glibc 2.1.3)
    - Error Msg
    I am a member of OTN, and I tried to download and test OAS9i for my platform, Linux RedHat 6.2 (kernel 2.2.19, glibc 2.1.3). I tried to find the related OAS9i product for Linux on your "Oracle9iAS Download Options" page, but I got confused by the items listed. I found a v1.0.2.2.1 OAS9i image *.cp file there, but its installation guide says it is for V1.0.2.2, that is, for Intel Linux (kernel 2.2.14 or later, glibc 2.1.3); however, the image *.cp file that can actually be downloaded is V1.0.2.2.1, which is for Intel RH7.1 Linux (kernel 2.4.3-12, glibc 2.2.2-10).
    That is very strange, and I want to know where I can download the *.cp files for Intel Linux RH6.2 (kernel 2.2.19 or later, glibc 2.1.3). My platform is RH6.2 because I heard that Intel RedHat versions above 6.2 are not certified by Oracle, so I always use Intel RedHat 6.2 as my base platform. Can anyone tell me where to download the image *.cp files for my platform? Or any suggestion?
    Thanks in advance!
    Best Regards!
    Frank


  • 6140 and Linux woes

    Hi,
    I have a 6140 array attached to two McData 4400s (4 connections - each controller connects to each 4400).
    Attached to the 4400s I have several boxes, namely some x4100s. The x4100s each have a dual-port QLogic QLA2462 (Sun branded).
    Those boxes are intended to be Xen Dom0s (64-bit) and connect to the 6140 through the 4400s.
    If I let one of those servers see just one of the paths to one of the 6140 controllers (there are four, two coming from each 4400), it will see all 6140 LUNs, but on those that belong to the other controller it will throw errors (of the kind "Buffer I/O error on device sdf, logical block 7").
    But it boots. The mess is that I want to use multipath (Sun's provided RDAC driver doesn't seem to compile nicely with a pristine Xen kernel), so I must use device-mapper-multipath, which kind of works. It can see (with all paths exposed) all devices and so on... What it cannot see is that, for each path, only the volumes owned by the controller that terminates that path are actually usable, which makes failover a mess.
    The question is - does anyone actually use dm-multipath with the 6140 and Linux? Am I doing anything blatantly wrong? If so, what is the politically correct setup? If RDAC is the only way to go, where can I find a package that actually works with pristine upstream Linux kernels?
    Thanxs in Advance
    António

    hey,
    only MPP will provide any useful multipathing under Linux with 6140 arrays; dm-multipath doesn't work
    (perhaps someday a special driver for dm-mp will exist?)
    -- randy

  • [WORKED AROUND] Installing Arch Linux with 3.11 kernel on Vaio Pro 13

    Yesterday I received my Sony Vaio Pro 13.
    The WLAN card is only supported since kernel 3.11, which has been released by Linus several days ago.
    The laptop has no ethernet port, so ideally I'd use WLAN during installation.
    Since there is no Arch ISO available with 3.11 yet, I was hoping to create my own, using the arch wiki guide on remastering the ISO: https://wiki.archlinux.org/index.php/Re … nstall_ISO
    I was wondering whether, instead of compiling my own kernel, I could just chroot into the unpacked ISO filesystem and install the 3.11 kernel from the testing repository using pacman.
    Which, as far as I could tell, was the released kernel, and not an earlier RC. Is this possible?
    One other question remains: the author of this blog mentions a diff that is needed to prevent the CPU frequency from being stuck at 800MHz (http://elouisyoung.blogspot.se/2013/07/ … -with.html).
    Does anyone know whether this made it to the mainline?
    # UPDATE
    I explored the latest kernel release and found out that the patch is not yet included in the mainline.
    So I'll have to compile my own kernel. There are however still issues with the CPU freq scaling (won't scale lower than 1.6Ghz). Nasty...
    # UPDATE 2
    Booting and installing from the mainline ISO, installing testing/linux for wifi, then compiling my own kernel and installing it alongside the testing/linux kernel worked fine for me.
    Last edited by A.J.Rouvoet (2013-09-08 17:21:05)


  • ASCII-EBCDIC conversion between z/OS and Linux

    Hi experts, we are migrating our landscape to z/OS (DB+ASCS) and Linux (PAS). We have our GLOBALHOST on z/OS, but we are experiencing some problems when we try to install our application servers because of the conversion between platforms.
    In the planning guide we can see that there is a way to mount NFS file systems exported from z/OS that makes this conversion automatically, but the commands mentioned in the guide are for UNIX and not for Linux.
    Does any of you have this kind of installation and could help us set these parameters correctly?
    Or has any of you faced these problems before?
    Regards
    gustavo

    First, yes, we have z/OS systems programmers and DBAs with specific knowledge of DB2 z/OS. One of the reasons we initially went with the Z platform when we implemented SAP was that our legacy systems ran there for many years and our company had a lot of Z knowledge and experience. zSeries was one of our "core competencies".
    I also need to give you a little more information about our Z setup. We actually had 2 z9 CECs in a sysplex, one in our primary data center and another close by in our DR site and connected by fiber. This allowed us to run SAP as HA on the Z platform. For highly used systems like production ERP we actually ran our DB2 instances active/active. This is one of the few advantages of the Z platform unavailable on other platforms (except Oracle RAC, which is also expensive but can at least be implemented on commodity hardware). Another advantage is that the SAP support personnel for DB2 z/OS are extremely knowledgeable and respond to issues very quickly.
    We also chose the Z platform because of the touted "near-continuous availability" which sounded very good. Let me assure you, however, that although SAP has been making great strides with things like the enhancement pack installer, at present you will never have zero downtime running SAP on any platform. Specifically you will still have planned downtime for SAP kernel updates and support packs or enhancement packs, period. The "near-continuous availability" in this context refers to zero unplanned downtime. In my experience this is not the case either. We had several instances of unplanned downtime; the most recent had to do with issues when the CECs got to 100% CPU utilization for a brief period of time and could not free some asinine small memory area, which caused the entire sysplex to pause all LPARs until it was dealt with (yes, this could be dealt with using system automation but our Z folks would prefer to deal with these manually since each situation can be different). We worked with IBM on a PMR for several months, but our eventual "workaround" was much better. We stopped running our DB2 instances as active/active and never had the problem again. We chose this "workaround" because we knew we were abandoning the platform and any of the test fixes from IBM required a rolling update of z/OS in all LPARs (10 total at the time), which is a major hassle, especially when you do it several times applying several different fixes until the problem is finally solved.
    We also experienced some issues with DB2 z/OS itself. In one case, some data in a table in production got corrupted (yikes!!). SAP support helped us correct the data based on our QA system and IBM delivered a PTF (or maybe it was a ++APAR) to correct the problem. We also had several instances of strange poor performance in ERP or BI that were solved with a PTF or by using some special RUNSTATS output by some IBM DB2 tool our DBAs ran when we gave them the "bad" query. Every time we updated DB2 z/OS with an RSU it felt like a crapshoot. Sometimes there were no issues revealed during testing, other times major issues were uncovered. This made us very hesitant when it came to patching DB2 and also made us stay well behind currently available maintenance so we could let other organizations identify problems.
    Back to the topic of downtime related to DB2 z/OS itself, we know another company which runs SAP on Z that takes several hours of downtime each week (early Sunday morning I think) to REORG some large BLOB tables (if you're not in the monthly conference call for SAP on DB2 z/OS organizations, I suggest you join in). The need for RUNSTATS and REORGs to be dealt with explicitly (typically once a day for RUNSTATS and once a week for REORGs, at least for us) is a major negative of the platform, in my opinion. It is amazing what "proper" RUNSTATS can do to a previously poor-performing query (hours reduced to seconds!). Also, due to the way REORGs are handled in DB2 z/OS, you'll need a lot of extra disk space for the image copies which get created. In our experience you need enough temp disk to hold the shadow copy of the largest table being REORGd and the image copies of the largest tables that are REORGd in the same time period. I recall that the image copies can be migrated to tape or virtual tape to free the image copy space back up using a periodic job, but it was a huge amount of trial and error to properly size this temp disk space, especially when the tables requiring a REORG are not the same week-to-week. We heard that with DB2 z/OS v10 RUNSTATS and REORGs will be dealt with automatically by DB2, but I do not know if it has even been certified for SAP yet (based on recent posts in this forum it would appear not). Even when it is, I would not recommend going to it immediately (we made this mistake when DB2 z/OS v9 was certified and suffered for months getting bugs with SAP and DB2 interoperability fixed). Also, due to the way that REORGs work on BLOB tables, there will be a period of table unavailability. This caused us some issues/headaches.
There are some extra REORG parameters you can set, but these issues are still always a possibility and I think that is why the company mentioned previously just took the weekly downtime to finish the REORGs on their large BLOB tables. They are very smart folks that are very experienced with zSeries and they engaged IBM experts for assistance to try and perform the REORGs online and yet they still take the downtime to perform the BLOB REORGs offline. In contrast, these periodic database tasks do not require our Basis team to do anything with SQLServer and do not cause our end-users grief when a table is unavailable.
    Our reasons for moving platforms (which, let me assure you was a major undertaking and was considered long and hard) were based on 3 things:
    1. Complexity
    2. Performance
    3. Cost
    When I speak of complexity, let me give you some data... There was a time when ~50% of all of the OSS messages the Basis team opened with SAP were in the BC-DB-DB2 category. In contrast, I think we've opened 1 or 2 OSS messages in the BC-DB-MSS category ever. Many of the OSS messages for DB2 z/OS resulted in a fix from either SAP or from IBM. We've had several instances of applying a PTF, ++APAR, or RSU to z/OS and/or DB2 which fixed serious "unable to perform a job function" problems with SAP. We've yet to have to apply a single update to Windows or SQLServer to fix an issue with SAP.
    To summarize... Comparing our previous and current SAP platforms, the performance was slower, the cost higher, and the complexity much higher. I have no doubt (especially with the newer z10 and zEnterprise 196) that we could certainly have built a zSeries SAP solution which performed on par with what we have now, but... I could not even fathom a guess as to the cost. I suspect this is why you don't see any data for the standard SAP SD benchmark on zSeries.
    I suspect you're already committed to the platform since deploying a Z machine, even in a lab/sandbox environment isn't as easy as going down to your local computer dealer and buying a $500 test server to install on, but... If you wanted to run SAP on DB2 I would suggest looking at DB2 LUW on either X86_64 Linux or on IBM's pSeries platform.
    Brian

  • Java RTS 2.1 Beta free evaluation release for Solaris and Linux available

    Hi:
    I would like to notify this forum that a free evaluation release of
    Java Real-Time System (Java RTS) 2.1 Beta is now available for downloading
    at our public web site.
    Supported platforms are Solaris/SPARC, Solaris/x86, and Linux/x86 with
    real-time POSIX APIs. The specific Linux distributions which this release
    has been tested on are: SUSE Linux Enterprise Real Time 10 (released)
    and Red Hat Enterprise MRG 1.0 (beta). As for the Solaris versions,
    both Solaris 10 Update 3 and Update 4 are supported.
    The URL for the web page where to start in order to be able to get to
    the download link is:
    http://java.sun.com/javase/technologies/realtime/rts/
    The download link will be presented to you after you fill out a quick
    survey and agree to a click-through, 90-day e-license.
    The latest version of the Java RTS Beta technical documentation
    bundle included with the product is being separately maintained at
    our public website and can be accessed starting from here:
    http://java.sun.com/javase/technologies/realtime/reference/rts_productdoc.html
    Thanks,
    -Carlos
    Carlos B. Lucasius
    Java SE Embedded and Real-Time Engineering
    Sun Microsystems, Inc.
    http://java.sun.com/javase/technologies/embedded/index.jsp
    http://java.sun.com/javase/technologies/realtime.jsp
    http://java.sun.com/javase/technologies/realtime/faq.jsp
    http://developers.sun.com/learning/javaoneonline/j1sessn.jsp?sessn=TS-1331&yr=2007&track=5
    http://developers.sun.com/learning/javaoneonline/j1sessn.jsp?sessn=TS-2901&yr=2007&track=5
    http://developers.sun.com/learning/javaoneonline/j1lab.jsp?lab=LAB-7250&yr=2007&track=4
    http://www.sun.com/training/catalog/courses/DTJ-4103.xml
    http://www.youtube.com/v/xH1yUXd9krU
    http://blogs.sun.com/jtc/
    http://blogs.sun.com/delsart
    http://blogs.sun.com/bollellaRT

    Hello,
    Just a quick question: can we have an official position from Sun regarding support of earlier releases of Solaris with Java RTS 2.1? Our customer is currently running Solaris 10 Update 3 with current Recommended patches, and the 2.1 beta cyclic driver supported this version of Solaris. However with the official release version of 2.1, support for U3 disappeared (only U4 and U5 are now supported). Aside from "rubber stamping" the Solaris build via /etc/release, is there a technical reason why U3 is no longer supported? As long as our kernel is up to date, can we safely use 2.1? Or is it just a case of being unable to officially support and test so many releases of Solaris?
    Is this a general rule-of-thumb we can expect in future: only supporting the last 2 updates of Solaris 10?
    Your advice is appreciated.
    Thanks,
    Dave.

  • JRockit 8.1 dumps on RH Linux AS 2.1, kernel v2.4.9-e.3smp

    Hi All,
    I am using the JRockit 8.1 SDK and it dumped on RH Linux AS 2.1, kernel v2.4.9-e.3smp.
    The machine configuration is a Xeon 3.06 GHz dual CPU with hyperthreading disabled.
    My program runs multiple threads and each of them tries to do an Inet address lookup.
    I suspect that the Inet address lookup is the problem; the reason is that
    the dump gives me the following message in the thread stack trace.
    Thread Stack Trace:
    at rniInet4AddressImplLookupAllHostAddr+385 (:0)@0x4027b01d
    I was facing an Inet address lookup problem in multiple threads on a
    Linux version unsupported by JRockit, listed in the link below, but the
    version I am using is supported by JRockit.
    http://edocs.bea.com/wljrockit/docs81/certif.html
    Any help on this problem is highly appreciated.
    Thanks,
    Mitul

    Cecilia,
    Here is the attached test case which will help you to reproduce the problem.
    I have tried to simulate my application's behaviour in this unit test, where it
    tries to do the InetAddress lookup of localhost and a remote host for creating a
    socket. We have a highly multithreaded application, i.e. each thread tries to
    do the lookup of the remote and local host.
    The important piece of data is that I have disabled a DNS cache at VM level by
    changing java.security file in <JRockit Home>/jre/lib/security folder. Below
    are the settings for that.
    networkaddress.cache.ttl=0
    networkaddress.cache.negative.ttl=0
    You will also need to add policy in the java.policy file to allow the lookup to
    happen, which is as below.
    permission java.net.SocketPermission "*","resolve";
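    For reference, the same cache settings can also be applied programmatically at VM startup via java.security.Security. This is a sketch of an alternative to editing the java.security file (the class name here is hypothetical; it only affects the running VM, not the installation):

    ```java
    import java.security.Security;

    public class DisableDnsCache {
        public static void main(String[] args) {
            // Disable positive and negative InetAddress caching for this VM,
            // mirroring the networkaddress.cache.* entries in java.security.
            Security.setProperty("networkaddress.cache.ttl", "0");
            Security.setProperty("networkaddress.cache.negative.ttl", "0");

            System.out.println(Security.getProperty("networkaddress.cache.ttl")); // prints "0"
        }
    }
    ```

    Note that these must be set before the first lookup, since the resolver reads them when the InetAddress machinery is initialized.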
    Once you are done with the above settings, use the command line below to run the unit test:
    <jrockit_home>/bin/java InetLookupTest [host] [noOfThreads]
    e.g.
    <jrockit_home>/bin/java InetLookupTest www.bea.com 100
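    For readers without the attachment, here is a minimal sketch of a test that hammers InetAddress lookups from many threads. This is a hypothetical reconstruction in modern Java, not the actual InetLookupTest.java posted in this thread; the class and method names are assumptions:

    ```java
    import java.net.InetAddress;
    import java.net.UnknownHostException;
    import java.util.concurrent.atomic.AtomicInteger;

    public class InetLookupSketch {
        // Spawn `threads` threads that each resolve `host` repeatedly, the
        // pattern described in the report. Returns the number of threads
        // whose lookups all succeeded.
        public static int run(String host, int threads) {
            AtomicInteger ok = new AtomicInteger();
            Thread[] ts = new Thread[threads];
            for (int i = 0; i < threads; i++) {
                ts[i] = new Thread(() -> {
                    try {
                        for (int j = 0; j < 10; j++) {
                            InetAddress.getByName(host); // repeated host lookup
                        }
                        ok.incrementAndGet();
                    } catch (UnknownHostException e) {
                        // failed lookup: this thread is not counted as successful
                    }
                });
                ts[i].start();
            }
            for (Thread t : ts) {
                try {
                    t.join();
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
            return ok.get();
        }

        public static void main(String[] args) {
            String host = args.length > 0 ? args[0] : "localhost";
            int threads = args.length > 1 ? Integer.parseInt(args[1]) : 10;
            System.out.println(run(host, threads) + " threads completed lookups");
        }
    }
    ```

    With DNS caching disabled as described above, every iteration forces a fresh resolver call, which is what makes this kind of test stress the native lookup path.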
    I can always reproduce the problem with the above setup. I hope you will
    also be able to reproduce it. I am hoping for a quick turnaround on this problem.
    My application follows strictly the same pattern of execution. If we cannot get
    a quick turnaround on this problem we might have to consider another option.
    BTW, I ran this same unit test with the Sun VM and it does not core dump.
    Thanks,
    Mitul
    Cecilia,
    I am attaching the context dump herewith. I hope you will find more
    information in it. Unfortunately, I will not be able to share the
    codebase which reproduces this. Have you ever seen this kind of
    behaviour before? Any quick fix to the problem will be greatly
    appreciated.
    Thanks,
    Mitul
    "Cecilia Borg" <[email protected]> wrote:
    Mitul,
    It would be most helpful if you could send us the full JRockit context
    dump.
    Are you able to also provide us with a small test case?
    Thanks, Cecilia
    BEA WebLogic JRockit
    Customer Centric Engineering
    "Mitul Limbachiya" <[email protected]> wrote in message
    news:[email protected]...
    Hi All,
    I am using JRockit 8.1 SDK and it dumped on RH Linux AS 2.1, kernel v2.4.9-e.3smp.
    The machine configuration is Xeon, 3.06 GHz dual CPU with hyperthreading disabled.
    My program runs multiple threads and each of them tries to do InetAddress lookup.
    I am suspecting that the Inet address lookup is the problem, the
    reason
    is that
    the dump gives me following message in the Thread stack trace.
    Thread Stack Trace:
    at rniInet4AddressImplLookupAllHostAddr+385 (:0)@0x4027b01d
    I was facing an Inet address lookup problem in multiple threads on JRockit's unsupported
    linux version listed in below link, but the version I am using is supported by
    JRockit.
    http://edocs.bea.com/wljrockit/docs81/certif.html
    Any help on this problem is highly appreciated.
    Thanks,
    Mitul
    [InetLookupTest.java]

  • RealServer 8 plug-in and linux: a solution

    Hi,
    I have
    Linux RedHat 7.2 kernel 2.4
    RealServer 8
    Oracle 8.1.7
    There was a problem loading the plug-in and RS told me: "invalid library".
    I traced the process execution and found two library problems; you can find them yourself by running "ldd plug_in_library_name".
    The first is a library version problem, and I resolved it simply by making a symbolic link with the right name.
    The second problem was that in the ...oracle/lib dir there was a static library named libskgxp8.a, but RS needs a shared version.
    Then I compiled it:
    gcc -shared -o libskgxp8.so -Wl,--whole-archive libskgxp8.a -Wl,--no-whole-archive
    and ..... it works.
    Bye Bye.
    Matteo Bertazzo

  • Glibc and linux-lts-headers

    Hi,
    I'm using linux-lts in my Arch Linux installation, so I think I don't need linux-api-headers. The thing is, glibc depends on linux-api-headers.
    Can I safely switch the dependency to linux-lts-headers and replace glibc with a custom PKGBUILD from ABS? That package is the only one stopping me from removing linux-api-headers.
    I'll appreciate your help.
    Thanks.

    Really? I always thought linux-lts-headers and linux-api-headers were the same thing but for different kernels :-P.
    Thanks for clearing this up for me. I guess I won't need to modify anything then.
    Regards!
    Last edited by unformatt (2012-07-12 15:47:43)

  • Intel i915 and displayport mst freeze entire kernel

    Hi folks,
    My lenovo laptop (t440s) came with the Ultra dock that has displayport on the back. I use a displayport cable to connect to a monitor. When I connect to the dock, the monitor works just fine, but if I ever try to "xrandr --output DP2-1 --off" it or disconnect from the dock, the entire kernel hard locks. No response, image on the screen is frozen, and from another computer on the same network I can't ssh or ping.
    I put drm.debug=0xf on the boot line, and then repro'd the issue (which repro's every time) and I get the following output right before the crash (see the kernel BUG line)
    Mar 18 12:13:54 nevada kernel: [drm:drm_ioctl] pid=587, dev=0xe200, auth=1, I915_GEM_BUSY
    Mar 18 12:13:54 nevada kernel: [drm:drm_ioctl] pid=587, dev=0xe200, auth=1, I915_GEM_BUSY
    Mar 18 12:13:54 nevada kernel: [drm:drm_ioctl] pid=587, dev=0xe200, auth=1, I915_GEM_MADVISE
    Mar 18 12:13:54 nevada kernel: [drm:drm_ioctl] pid=587, dev=0xe200, auth=1, I915_GEM_BUSY
    Mar 18 12:13:54 nevada kernel: [drm:drm_ioctl] pid=587, dev=0xe200, auth=1, I915_GEM_BUSY
    Mar 18 12:13:54 nevada kernel: [drm:drm_ioctl] pid=587, dev=0xe200, auth=1, I915_GEM_MADVISE
    Mar 18 12:13:54 nevada kernel: [drm:drm_ioctl] pid=587, dev=0xe200, auth=1, I915_GEM_THROTTLE
    Mar 18 12:13:54 nevada kernel: [drm:intel_hpd_irq_handler] hotplug event received, stat 0x00400000, dig 0x00101210
    Mar 18 12:13:54 nevada kernel: [drm:intel_hpd_irq_handler] digital hpd port C - long
    Mar 18 12:13:54 nevada kernel: [drm:intel_hpd_irq_handler] Received HPD interrupt on PIN 5 - cnt: 1
    Mar 18 12:13:54 nevada kernel: [drm:intel_dp_hpd_pulse] got hpd irq on port C - long
    Mar 18 12:13:54 nevada kernel: [drm:intel_hpd_irq_handler] hotplug event received, stat 0x00400000, dig 0x00101210
    Mar 18 12:13:54 nevada kernel: [drm:intel_hpd_irq_handler] digital hpd port C - long
    Mar 18 12:13:54 nevada kernel: [drm:intel_hpd_irq_handler] Received HPD interrupt on PIN 5 - cnt: 2
    Mar 18 12:13:54 nevada kernel: [drm:intel_dp_get_dpcd] DPCD: 12 14 c4 01 00 15 01 83 02 00 00 00 00 00 04
    Mar 18 12:13:54 nevada kernel: [drm:intel_dp_get_dpcd] Displayport TPS3 supported
    Mar 18 12:13:54 nevada kernel: [drm:intel_dp_probe_oui] Sink OUI: 000000
    Mar 18 12:13:54 nevada kernel: [drm:intel_dp_probe_oui] Branch OUI: 90cc24
    Mar 18 12:13:54 nevada kernel: [drm:intel_dp_probe_mst] Sink is MST capable
    Mar 18 12:13:54 nevada kernel: [drm:intel_dp_hpd_pulse] got hpd irq on port C - long
    Mar 18 12:13:54 nevada kernel: [drm:intel_dp_hpd_pulse] MST device may have disappeared 1 vs 1
    Mar 18 12:13:54 nevada kernel: BUG: unable to handle kernel NULL pointer dereference at 000000000000004c
    There are no other lines after that kernel NULL pointer dereference at 0x4c
    I can attach or paste the full dmesg somewhere else if anyone is interested, but I'm confused by the second to last line there, MST device may have disappeared 1 vs 1. From the code (linux-stable at tag v3.18.6):
    From drivers/gpu/drm/i915/intel_dp.c:
    4559 mst_fail:
    4560 /* if we were in MST mode, and device is not there get out of MST mode */
    4561 if (intel_dp->is_mst) {
    4562 DRM_DEBUG_KMS("MST device may have disappeared %d vs %d\n", intel_dp->is_mst, intel_dp->mst_mgr.mst_state);
    4563 intel_dp->is_mst = false;
    4564 drm_dp_mst_topology_mgr_set_mst(&intel_dp->mst_mgr, intel_dp->is_mst);
    4565 }
    4566 put_power:
    4567 intel_display_power_put(dev_priv, power_domain);
    4568
    4569 return ret;
    4570 }
    So it's locking up either in drm_dp_mst_topology_mgr_set_mst (which doesn't have any debug messages in its main path, and does take a lock) or in intel_display_power_put, which ALSO has no debug output. They both acquire mutexes... looking into this some more. If anyone has seen this issue with DisplayPort, MST, and Linux 3.18.6, please let me know.
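    For anyone reproducing this, the drm.debug=0xf logging used above can be enabled without editing the boot line by hand; the GRUB file path below is the common default, not something stated in the thread, so adjust for your bootloader.

    ```shell
    # /etc/default/grub (path assumed): enable verbose DRM logging at boot,
    # then regenerate the config with: grub-mkconfig -o /boot/grub/grub.cfg
    GRUB_CMDLINE_LINUX_DEFAULT="quiet drm.debug=0xf"

    # Or flip the same module parameter at runtime via sysfs (as root),
    # no reboot needed:
    #   echo 0xf > /sys/module/drm/parameters/debug
    ```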

    Ok, writing a patch with debug messages to see where this is exactly locking up hopefully...
    From 05a0a4758a98f47305165befa81eb61154e15676 Mon Sep 17 00:00:00 2001
    From: Jeff Mickey <[email protected]>
    Date: Wed, 18 Mar 2015 14:00:28 -0700
    Subject: [PATCH] Debugging statements for figuring out this dp mst bug
    Signed-off-by: Jeff Mickey <[email protected]>
    drivers/gpu/drm/drm_dp_mst_topology.c | 27 ++++++++++++++++++++++-----
    drivers/gpu/drm/i915/intel_dp.c | 8 ++++++++
    drivers/gpu/drm/i915/intel_pm.c | 3 +++
    3 files changed, 33 insertions(+), 5 deletions(-)
    diff --git a/drivers/gpu/drm/drm_dp_mst_topology.c b/drivers/gpu/drm/drm_dp_mst_topology.c
    index f50d884..b0ff4be 100644
    --- a/drivers/gpu/drm/drm_dp_mst_topology.c
    +++ b/drivers/gpu/drm/drm_dp_mst_topology.c
    @@ -1827,13 +1827,16 @@ int drm_dp_mst_topology_mgr_set_mst(struct drm_dp_mst_topology_mgr *mgr, bool ms
    int ret = 0;
    struct drm_dp_mst_branch *mstb = NULL;
    + DRM_DEBUG_KMS("locking mgr->lock\n");
    mutex_lock(&mgr->lock);
    - if (mst_state == mgr->mst_state)
    + if (mst_state == mgr->mst_state) {
    + DRM_DEBUG_KMS("goto out_unlock 1\n");
    goto out_unlock;
    + }
    mgr->mst_state = mst_state;
    /* set the device into MST mode */
    if (mst_state) {
    + DRM_DEBUG_KMS("inside mst_state\n");
    WARN_ON(mgr->mst_primary);
    /* get dpcd info */
    @@ -1849,9 +1852,11 @@ int drm_dp_mst_topology_mgr_set_mst(struct drm_dp_mst_topology_mgr *mgr, bool ms
    mgr->avail_slots = mgr->total_slots;
    /* add initial branch device at LCT 1 */
    + DRM_DEBUG_KMS("calling drm_dp_add_mst_branch_device\n");
    mstb = drm_dp_add_mst_branch_device(1, NULL);
    if (mstb == NULL) {
    ret = -ENOMEM;
    + DRM_DEBUG_KMS("goto out_unlock 2\n");
    goto out_unlock;
    mstb->mgr = mgr;
    @@ -1864,29 +1869,35 @@ int drm_dp_mst_topology_mgr_set_mst(struct drm_dp_mst_topology_mgr *mgr, bool ms
    struct drm_dp_payload reset_pay;
    reset_pay.start_slot = 0;
    reset_pay.num_slots = 0x3f;
    + DRM_DEBUG_KMS("drm_dp_dpcd_write_payload\n");
    drm_dp_dpcd_write_payload(mgr, 0, &reset_pay);
    + DRM_DEBUG_KMS("drm_dp_dpcd_writeb\n");
    ret = drm_dp_dpcd_writeb(mgr->aux, DP_MSTM_CTRL,
    DP_MST_EN | DP_UP_REQ_EN | DP_UPSTREAM_IS_SRC);
    if (ret < 0) {
    + DRM_DEBUG_KMS("goto out_unlock 3\n");
    goto out_unlock;
    /* sort out guid */
    + DRM_DEBUG_KMS("drm_dp_dpcd_read\n");
    ret = drm_dp_dpcd_read(mgr->aux, DP_GUID, mgr->guid, 16);
    if (ret != 16) {
    DRM_DEBUG_KMS("failed to read DP GUID %d\n", ret);
    goto out_unlock;
    + DRM_DEBUG_KMS("drm_dp_validate_guid\n");
    mgr->guid_valid = drm_dp_validate_guid(mgr, mgr->guid);
    if (!mgr->guid_valid) {
    + DRM_DEBUG_KMS("drm_dp_dpcd_write 2\n");
    ret = drm_dp_dpcd_write(mgr->aux, DP_GUID, mgr->guid, 16);
    mgr->guid_valid = true;
    + DRM_DEBUG_KMS("queue_work\n");
    queue_work(system_long_wq, &mgr->work);
    ret = 0;
    @@ -1895,6 +1906,7 @@ int drm_dp_mst_topology_mgr_set_mst(struct drm_dp_mst_topology_mgr *mgr, bool ms
    mstb = mgr->mst_primary;
    mgr->mst_primary = NULL;
    /* this can fail if the device is gone */
    + DRM_DEBUG_KMS("drm_dp_dpcd_writeb 2\n");
    drm_dp_dpcd_writeb(mgr->aux, DP_MSTM_CTRL, 0);
    ret = 0;
    memset(mgr->payloads, 0, mgr->max_payloads * sizeof(struct drm_dp_payload));
    @@ -1904,11 +1916,16 @@ int drm_dp_mst_topology_mgr_set_mst(struct drm_dp_mst_topology_mgr *mgr, bool ms
    out_unlock:
    + DRM_DEBUG_KMS("unlocking mgr->lock\n");
    mutex_unlock(&mgr->lock);
    - if (mstb)
    + if (mstb) {
    + DRM_DEBUG_KMS("drm_dp_put_mst_branch_device 2\n");
    drm_dp_put_mst_branch_device(mstb);
    - return ret;
    + }
    +
    + DRM_DEBUG_KMS("returning %d\n", ret);
    + return ret;
    EXPORT_SYMBOL(drm_dp_mst_topology_mgr_set_mst);
    diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
    index 4bcd917..5a3c562 100644
    --- a/drivers/gpu/drm/i915/intel_dp.c
    +++ b/drivers/gpu/drm/i915/intel_dp.c
    @@ -4523,24 +4523,29 @@ intel_dp_hpd_pulse(struct intel_digital_port *intel_dig_port, bool long_hpd)
    if (HAS_PCH_SPLIT(dev)) {
    if (!ibx_digital_port_connected(dev_priv, intel_dig_port))
    + DRM_DEBUG_KMS("goto mst_fail 1\n");
    goto mst_fail;
    } else {
    if (g4x_digital_port_connected(dev, intel_dig_port) != 1)
    + DRM_DEBUG_KMS("goto mst_fail 2\n");
    goto mst_fail;
    if (!intel_dp_get_dpcd(intel_dp)) {
    + DRM_DEBUG_KMS("goto mst_fail 3\n");
    goto mst_fail;
    intel_dp_probe_oui(intel_dp);
    if (!intel_dp_probe_mst(intel_dp))
    + DRM_DEBUG_KMS("goto mst_fail 4\n");
    goto mst_fail;
    } else {
    if (intel_dp->is_mst) {
    if (intel_dp_check_mst_status(intel_dp) == -EINVAL)
    + DRM_DEBUG_KMS("goto mst_fail 5\n");
    goto mst_fail;
    @@ -4549,6 +4554,7 @@ intel_dp_hpd_pulse(struct intel_digital_port *intel_dig_port, bool long_hpd)
    * we'll check the link status via the normal hot plug path later -
    * but for short hpds we should check it now
    + DRM_DEBUG_KMS("drm_modeset_lock\n");
    drm_modeset_lock(&dev->mode_config.connection_mutex, NULL);
    intel_dp_check_link_status(intel_dp);
    drm_modeset_unlock(&dev->mode_config.connection_mutex);
    @@ -4566,6 +4572,8 @@ mst_fail:
    put_power:
    intel_display_power_put(dev_priv, power_domain);
    + DRM_DEBUG_KMS("Returning %d as ret\n", ret);
    +
    return ret;
    diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
    index 83c7ecf..c5e6f33 100644
    --- a/drivers/gpu/drm/i915/intel_pm.c
    +++ b/drivers/gpu/drm/i915/intel_pm.c
    @@ -6555,6 +6555,7 @@ void intel_display_power_put(struct drm_i915_private *dev_priv,
    power_domains = &dev_priv->power_domains;
    + DRM_DEBUG_KMS("locking power_domains->lock\n");
    mutex_lock(&power_domains->lock);
    WARN_ON(!power_domains->domain_use_count[domain]);
    @@ -6570,8 +6571,10 @@ void intel_display_power_put(struct drm_i915_private *dev_priv,
    + DRM_DEBUG_KMS("unlocking power_domains->lock\n");
    mutex_unlock(&power_domains->lock);
    + DRM_DEBUG_KMS("intel_runtime_pm_put is being called\n");
    intel_runtime_pm_put(dev_priv);
    2.3.3

  • Lightroom and Linux

    I know that this has already been asked... but I cannot begin to understand why Adobe seems to dismiss Linux users this way. I've read some comments about Linux on this forum, and please be honest: Linux is not for hackers only, and Linux users do not expect only free (as in free beer) software. Many are ready to pay for GOOD software, and Lightroom definitely falls into this category.
    I've tried Lightroom in VirtualBox; it is too slow to be usable!
    I'm really looking forward to a native Linux port. This OS is so much better than Windows (no, I'm not a troll; for example, the file system is a lot faster and you have far less chance of getting a virus) that we all have something to win here.
    Can we hope? There is a port of Lightroom for MacOS, which has a BSD-derived kernel not too far from a Linux one. The hard work is done on that side. I cannot speak for the GUI though...
    I'm ready to pay today for Lightroom on Linux with a price tag 50% higher than the Windows one. I mean it!
    Please, if you want a Linux port, post a message - maybe Adobe will listen!
    Pascal.

    I had actually decided that I wouldn't write in this thread anymore, after Pascal's response to my post earlier. But, alas, I got pulled back in.
    Right, first, this message is written from a workstation running Arch Linux. This serves as a good platform for developing bespoke JEE applications.
    I use both OSX and Linux on a daily basis, and they are both great operating systems. These days Linux is mostly at work, though; at home OSX solves most of my problems more elegantly. But, at the end of the day, I completely agree with Miguel de Icaza (for people not interested in FOSS, he is basically the father of the GNOME desktop and a quite important person in FOSS) when he wrote his infamous "what killed the Linux desktop" blog post --> http://tirania.org/blog/archive/2012/Aug-29.html
    Linux as a desktop operating system is unlikely to happen, for all the reasons he articulates so nicely in that post. Linux as a specialized Unix workstation, of course - but in all fairness, all we need then is a terminal, gcc, git and sed, really.
    robin48gx wrote:
    Steam are already plugged into the Linux community revenue stream....
    This is irrelevant, though: why does it matter that a company making computer games has released a beta client of its store? Their business model is different; they are planning to roll out a console based on Linux, which makes it a no-brainer. With that said, how many recent AAA titles are available for the Steam Linux client at the moment? None. When will they come? Most likely when the Steam box comes out. Hence, this is not driven by Linux being a super tasty, viable platform; rather, it is a by-product of a commercial product Valve is planning to launch.
    This is not the case for Adobe.
    The Linux desktop market is estimated to have a market share of around 1.5%. This is segmented across a wealth of different Linux distributions which are more or less compatible with each other. Right, so we have a small market segment which is fairly non-standardised and a bit of a pain to deploy for (unless you release your source code). On top of this, a lot of these people are fairly fanatical about free (open) software and don't particularly like commercial software. Java is a good example of this: Java is one of the world's most used programming languages, but it is still a second-rate citizen in the desktop Linux ecosystem; the Java language bindings for quite important stuff such as clutter, gst, etc. are terribly far behind the other language bindings.
    So, in summary, you have a small, quite segmented market, where the average user is very religious about software being free and open.
    Of course they haven't ported it - that's a terrible business case.

  • JMF and Linux troubles

    Hi All,
    Perhaps some of you with a bit more experience in this area can help me. Here's the situation:
    I open an RTP session (using jmstudio!) from Windows->Windows
    - result, I hear audio fine
    I open an RTP session from Linux->Linux
    - result, can't hear any audio
    I open an RTP session from Windows->Linux
    - where'd the audio go?
    RTP session from Linux->Windows
    - Audio seems fine
    It seems that my Linux machine cannot play any (arriving) audio sent over an RTP session. The strange part is that it can play a local audio file just fine, and it can transmit an audio file over the network. While transmitting the file it plays ok (it echoes the sound locally).
    Since I can hear sounds on this machine using other tools, I am assuming the sound card is configured correctly. Also I am assuming that the sound is physically being sent over the network (actually you can see it on the network monitor so this is true).
    Are there any possible reasons why I cannot hear the sound? I'm running redhat 8.0 and am using the i810_audio kernel module for my sound card. As far as I can tell everything should be going like blue blazes..

    Just in case anyone cares: I had to upgrade to JDK 1.4.2 (still beta as of the time of writing). There was a crucial JavaSound bug fix involved.
    Does that mean that you've solved the audio problem?
    Alex

  • [SOLVED]Questions about openbsd ffs and linux ext2fs

    Hello everyone.
    First of all, forgive me if the post is not on the correct section.
    I'm going to install arch linux to my thinkpad and i have some questions:
    I have two external hard disks that were once formatted with OpenBSD FFS (now I'm making efforts to convert them to ext2).
    1) Does Arch have read support for OpenBSD FFS filesystems by default, or do I have to build a custom kernel?
    2) I would mostly like to have read and write access from both OSs. I know that ext2 should work for read and write from both OpenBSD and Linux, so this could be a good option. My problem starts when I try, for example, to format the disks under OpenBSD: I successfully created a working ext2 partition, but when I try it with an Ubuntu live CD to see if it works, I see no partitions and have to reformat. On the other hand, if I create it under Linux, then it's not working under OpenBSD.
    So now that I'm migrating to Arch, what are the good practices to make partitions work/cooperate under both OSs?
    Notes: on Linux, I tried with both gparted and Disk Utility with no luck (they work for Linux but not for OpenBSD).
    Has anyone managed to work with ext2 under both Linux and OpenBSD?
    What are the best practices in such cases?
    Last edited by lambda (2012-09-26 12:07:26)

    Well, the filesystem type is one thing; BSD disklabels vs. MBR partition tables are another. For a Linux and BSD dual setup, the best practice is vfat (aka FAT32), with all its limitations (I could be wrong here, 'cos I never tested dual ZFS, for instance).
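    The disklabel-vs-MBR mismatch described above can be seen on a plain file image, no real disk needed; disk.img and the 16M size are made up for illustration. sfdisk writes a DOS/MBR label, which is the partition table both Linux and OpenBSD can read.

    ```shell
    # Create a small image file standing in for the external disk:
    truncate -s 16M disk.img
    # Write an MBR ("dos") label with a single Linux (type 83) partition;
    # the script lines mean: label type, then one partition with default
    # start/size and type L (Linux):
    printf 'label: dos\n,,L\n' | sfdisk -q disk.img
    # Inspect the result - the "Disklabel type" line is what matters here:
    sfdisk -l disk.img
    ```

    If OpenBSD had written its own disklabel instead, Linux tools would see no MBR partitions at all, which matches the "I see no partitions and have to reformat" symptom in the question.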
