Ocrconfig initialization fails

Hi all,
I am using Oracle 11g R1 and am trying to install CRS on HP-UX, using raw devices for the OCR and the voting disk. The installation went fine (status was 100%), and I was then prompted to execute root.sh on each node. On the first node root.sh ran fine, but on the second node it failed with the following error messages:
PROT-1: Failed to initialize ocrconfig
Failed to upgrade Oracle Cluster Registry configuration
I checked the logs and found the following error message:
[ OCRCONF][1]Failed to initialized OCR context. error:[PROC-26: Error while accessing the physical storage]
However, the oracle user can read and write the OCR device from both nodes; I used the dd command to verify that.
Any idea what is wrong here?
Thanks for your help.

Thanks a lot for your reply.
The contents of the ocr_loc file are:
ocrconfig_loc=/dev/oracle/ocr
ocrmirrorconfig_loc=/dev/oracle/ocr_mirror
local_only=FALSE
ocr_loc is the same on both nodes.
The following is the ls -la output for the raw device files:
node1 0 crw-rw---- 1 oracle dba 12 0x000015 Jul 24 07:40 asmdisk1
node1 0 crw-rw---- 1 oracle dba 12 0x000016 Jul 24 07:41 asmdisk2
node1 0 crw-r----- 1 root oinstall 12 0x000019 Jul 27 01:28 ocr
node1 0 crw-r----- 1 root oinstall 12 0x00001c Jul 27 01:28 ocr_mirror
node1 0 crw-r----- 1 oracle oinstall 12 0x000018 Jul 25 21:51 votingdisk
node2 0 crw-rw---- 1 oracle dba 12 0x000015 Jul 24 07:54 asmdisk1
node2 0 crw-rw---- 1 oracle dba 12 0x000016 Jul 24 07:55 asmdisk2
node2 0 crw-r----- 1 root oinstall 12 0x000019 Jul 27 04:39 ocr
node2 0 crw-r----- 1 root oinstall 12 0x00001c Jul 27 04:39 ocr_mirror
node2 0 crw-r----- 1 oracle oinstall 12 0x000018 Jul 27 04:55 votingdisk
Please let me know what's wrong here.
Thanks,

Similar Messages

  • I get a dialog box from Xmarks saying "Firefox service initialization failed". Please advise.

    The Xmarks icon shows a question mark and the message "Xmarks has encountered an error". On clicking the icon, the Xmarks settings dialog box shows the current status as PROBLEMS SYNCING "Firefox service initialization failed" and that "Firefox reports that some services are unavailable".
    Please advise what action to take.

    Yoono. I uninstalled Yoono and the problem went away. It does seem like a nice application, though, since it brings all your social networks together in an active column on the left side of the screen while you do something else, like searching the web or reading your e-mail. When I'm in e-mail, however, I want a fuller screen, and when I try to collapse the Yoono column I get that warning message and it won't let me.

  • Question about bluetooth communication between PC and mobile device

    I am a newbie at Bluetooth communication. I need to set up communication between a PC and a mobile device (mainly a mobile phone) by sending strings; the PC acts as the server and the mobile device as the client.
    For Bluetooth on the PC side I use BlueCove 2.0.1.
    I have already connected the two successfully.
    When I send strings between them, I find that only one cycle of communication works (client -> server -> client).
    For my design, they need to be able to communicate multiple times.
    I simulated the core class of the system and its performance is fine.
    Could anyone look at the code and give me some advice?
    Server Side - ServerBox.java
    import java.io.DataInputStream;
    import java.io.DataOutputStream;
    import java.io.IOException;
    import java.util.Vector;
    import javax.bluetooth.DiscoveryAgent;
    import javax.bluetooth.LocalDevice;
    import javax.bluetooth.ServiceRecord;
    import javax.bluetooth.UUID;
    import javax.microedition.io.Connector;
    import javax.microedition.io.StreamConnection;
    import javax.microedition.io.StreamConnectionNotifier;
    public class ServerBox implements Runnable {
        LocalDevice localDevice;
        StreamConnectionNotifier notifier;
        ServiceRecord record;
        boolean isClosed;
        ClientProcessor processor;
        CMDProcessor cmd;
        MainInterface midlet;
        private static final UUID ECHO_SERVER_UUID = new UUID(
                "F0E0D0C0B0A000908070605040302010", false);
        public ServerBox(MainInterface midlet) {
            this.midlet = midlet;
        }
        public void run() {
            boolean isBTReady = false;
            try {
                localDevice = LocalDevice.getLocalDevice();
                if (!localDevice.setDiscoverable(DiscoveryAgent.GIAC)) {
                    midlet.showInfo("Cannot set to discoverable");
                    return;
                }
                // prepare a URL to create a notifier
                StringBuffer url = new StringBuffer("btspp://");
                url.append("localhost").append(':');
                url.append(ECHO_SERVER_UUID.toString());
                url.append(";name=Echo Server");
                url.append(";authorize=false");
                // create notifier now
                notifier = (StreamConnectionNotifier) Connector.open(url.toString());
                record = localDevice.getRecord(notifier);
                isBTReady = true;
            } catch (Exception e) {
                e.printStackTrace();
            }
            // nothing to do if no bluetooth available
            if (isBTReady) {
                midlet.showInfo("Initialization complete. Waiting for connection");
                midlet.completeInitalization();
            } else {
                midlet.showInfo("Initialization failed. Exit.");
                return;
            }
            // produce client processor
            processor = new ClientProcessor();
            cmd = new CMDProcessor();
            // start accepting connections then
            while (!isClosed) {
                StreamConnection conn = null;
                try {
                    conn = notifier.acceptAndOpen();
                } catch (IOException e) {
                    // wrong client or interrupted - continue anyway
                    continue;
                }
                processor.addConnection(conn);
            }
        }
        // activate the set up of process
        public void publish() {
            isClosed = false;
            new Thread(this).start();
        }
        // stop the service
        public void cancelService() {
            isClosed = true;
            midlet.showInfo("Service terminated.");
            midlet.completeTermination();
        }
        // inner private class that queues connections and handles them on a worker thread
        private class ClientProcessor implements Runnable {
            private Thread processorThread;
            private Vector queue = new Vector();
            private boolean isOk = true;
            ClientProcessor() {
                processorThread = new Thread(this);
                processorThread.start();
            }
            public void run() {
                while (!isClosed) {
                    synchronized (this) {
                        if (queue.size() == 0) {
                            try {
                                // wait for new client
                                wait();
                            } catch (InterruptedException e) { }
                        }
                    }
                    StreamConnection conn;
                    synchronized (this) {
                        if (isClosed) {
                            return;
                        }
                        conn = (StreamConnection) queue.firstElement();
                        queue.removeElementAt(0);
                        processConnection(conn);
                    }
                }
            }
            // add stream connection and notify the thread
            void addConnection(StreamConnection conn) {
                synchronized (this) {
                    queue.addElement(conn);
                    midlet.showInfo("A connection is added.");
                    notify();    // for wait() command in run()
                }
            }
        }
        // receive string
        private String readInputString(StreamConnection conn) {
            String inputString = null;
            try {
                DataInputStream dis = conn.openDataInputStream();
                inputString = dis.readUTF();
                dis.close();
            } catch (Exception e) {
                e.printStackTrace();
            }
            return inputString;
        }
        // send string
        private void sendOutputData(String outputData, StreamConnection conn) {
            try {
                DataOutputStream dos = conn.openDataOutputStream();
                dos.writeUTF(outputData);
                dos.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
        // process connection
        private void processConnection(StreamConnection conn) {
            String inputString = readInputString(conn);
            String outputString = cmd.reactionToCMD(inputString);
            sendOutputData(outputString, conn);
     /*       try {
                conn.close();
            } catch (IOException e) {}*/
            midlet.showInfo("Client input: " + inputString + ", successfully received.");
        }
    }
    For "CMDProcessor", it is the class that processes the message before the feedback is sent to the client.
    Client side - ClientBox.java
    import java.io.DataInputStream;
    import java.io.DataOutputStream;
    import java.io.IOException;
    import javax.bluetooth.BluetoothStateException;
    import javax.bluetooth.DiscoveryAgent;
    import javax.bluetooth.LocalDevice;
    import javax.bluetooth.ServiceRecord;
    import javax.bluetooth.UUID;
    import javax.microedition.io.Connector;
    import javax.microedition.io.StreamConnection;
    import javax.microedition.lcdui.Command;
    import javax.microedition.lcdui.CommandListener;
    import javax.microedition.lcdui.Displayable;
    import javax.microedition.lcdui.Form;
    import javax.microedition.lcdui.Gauge;
    import javax.microedition.lcdui.StringItem;
    public class ClientBox implements Runnable, CommandListener {
        StringItem result = new StringItem("", "");
        private DiscoveryAgent discoveryAgent;
        private String connString;
        private boolean isClosed = false;
        private boolean boxReady = false;
        StreamConnection conn;
        private static final UUID ECHO_SERVER_UUID = new UUID("F0E0D0C0B0A000908070605040302010", false);
        Form process = new Form("Process");
        ClientInterface midlet;
        public ClientBox(ClientInterface mid) {
            this.midlet = mid;
            process.append(result);
            process.addCommand(new Command("Cancel", Command.CANCEL, 1));
            process.setCommandListener(this);
            new Thread(this).start();
        }
        public void commandAction(Command arg0, Displayable arg1) {
            if (arg0.getCommandType() == Command.CANCEL) {
                isClosed = true;
                midlet.notifyDestroyed();
            }
        }
        public synchronized void run() {
            LocalDevice localDevice = null;
            boolean isBTReady = false;
            /* Process Gauge screen */
            midlet.displayPage(process);
            Gauge g = new Gauge(null, false, Gauge.INDEFINITE, Gauge.CONTINUOUS_RUNNING);
            process.append(g);
            showInfo("Initialization...");
            System.gc();
            try {
                localDevice = LocalDevice.getLocalDevice();
                discoveryAgent = localDevice.getDiscoveryAgent();
                isBTReady = true;
            } catch (Exception e) {
                e.printStackTrace();
            }
            if (!isBTReady) {
                showInfo("Bluetooth is not available. Please check the device.");
                return;
            }
            if (!isClosed) {
                try {
                    connString = discoveryAgent.selectService(ECHO_SERVER_UUID, ServiceRecord.NOAUTHENTICATE_NOENCRYPT, false);
                } catch (BluetoothStateException ex) {
                    ex.printStackTrace();
                }
            } else {
                return;
            }
            if (connString == null) {
                showInfo("Cannot find server. Please check the device.");
                return;
            } else {
                showInfo("Found server, standing by for requests.");
            }
            boxReady = true;
        }
        /* True if the clientbox is ready */
        public boolean getBoxReady() {
            return boxReady;
        }
        /* True if the clientbox is closed in run() */
        public boolean getIsClosed() {
            return isClosed;
        }
        /* One request/response cycle: open a connection, send the input, read the reply */
        public String accessService(String input) {
            String output = null;
            try {
                /* Connect to server */
                StreamConnection conn = (StreamConnection) Connector.open(connString);
                /* send string */
                DataOutputStream dos = conn.openDataOutputStream();
                dos.writeUTF(input);
                dos.close();
                /* receive string */
                DataInputStream dis = conn.openDataInputStream();
                output = dis.readUTF();
                dis.close();
            } catch (IOException ex) {
                showInfo("Failed to connect to server.");
            }
            return output;
        }
        private void showInfo(String s) {
            StringBuffer sb = new StringBuffer(result.getText());
            if (sb.length() > 0) { sb.append("\n"); }
            sb.append(s);
            result.setText(sb.toString());
        }
    }
    Client side - ClientInterface.java
    import java.util.Vector;
    import javax.microedition.lcdui.Alert;
    import javax.microedition.lcdui.AlertType;
    import javax.microedition.lcdui.Command;
    import javax.microedition.lcdui.CommandListener;
    import javax.microedition.lcdui.Display;
    import javax.microedition.lcdui.Displayable;
    import javax.microedition.lcdui.Form;
    import javax.microedition.lcdui.List;
    import javax.microedition.midlet.MIDlet;
    public class ClientInterface extends MIDlet implements Runnable, CommandListener {
        private ClientBox cb = new ClientBox(this);
        private Form temp = new Form("Temp");
        private Command select = new Command("Select", Command.OK, 1);
        private Command back = new Command("Back", Command.BACK, 1);
        Alert alert;
        String[] element;
        String out;
        List list;
        public void run() {
            /* Send message and get reply */
            out = cb.accessService("Proglist");
            element = split(out, ",");
            /* Use the reply to make list */
            list = createList(element[0], List.IMPLICIT, out);
            list.addCommand(select);
            list.addCommand(back);
            list.setCommandListener(this);
            Display.getDisplay(this).setCurrent(list);
        }
        public void startApp() {
            System.gc();
            waitForBoxSetUp(); /* Busy-wait until the clientbox is ready */
            new Thread(this).start();
        }
        public void pauseApp() {
        }
        public void destroyApp(boolean unconditional) {
            notifyDestroyed();
        }
        public void displayPage(Displayable d) {
            Display.getDisplay(this).setCurrent(d);
        }
        private void waitForBoxSetUp() {
            while (!cb.getBoxReady()) {
                if (cb.getIsClosed())
                    notifyDestroyed();
            }
        }
        public void commandAction(Command c, Displayable d) {
            if (c.getCommandType() == Command.OK) {
                if (d == list) {
                    /* Send the choice to server */
                    out = cb.accessService(list.getString(list.getSelectedIndex()));
                    alert = new Alert("Output", "selected = " + out, null, AlertType.ALARM);
                    alert.setTimeout(2000);
                    Display.getDisplay(this).setCurrent(alert, list);
                }
            }
            if (c.getCommandType() == Command.BACK) {
                notifyDestroyed();
            }
        }
        public void showWarning(String title, String content) {
            alert = new Alert("Output", "selected = " + list.getString(list.getSelectedIndex()), null, AlertType.ALARM);
            alert.setTimeout(3000);
            Display.getDisplay(this).setCurrent(alert, list);
        }
        private List createList(String name, int type, String message) {
            List temp;
            String[] source = split(message, ",");
            temp = new List(name, type, source, null);
            return temp;
        }
        /* CLDC has no String.split(), so split the server reply manually on the separator */
        private static String[] split(String original, String regex) {
            int startIndex = 0;
            Vector v = new Vector();
            String[] str = null;
            int index = 0;
            startIndex = original.indexOf(regex);
            while (startIndex < original.length() && startIndex != -1) {
                String temp = original.substring(index, startIndex);
                v.addElement(temp);
                index = startIndex + regex.length();
                startIndex = original.indexOf(regex, startIndex + regex.length());
            }
            v.addElement(original.substring(index + 1 - regex.length()));
            str = new String[v.size()];
            for (int i = 0; i < v.size(); i++) {
                str[i] = (String) v.elementAt(i);
            }
            return str;
        }
    }
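    One common way to get multiple request/response cycles is to keep a single connection open on both sides and loop over readUTF()/writeUTF() instead of closing the streams after one exchange. Below is a minimal sketch of such a server for the PC/BlueCove side; the class name MultiCycleEchoServer, the echo reply format, and the exit-on-EOF handling are illustrative assumptions, not code from the original post.
    import java.io.DataInputStream;
    import java.io.DataOutputStream;
    import java.io.IOException;
    import javax.bluetooth.DiscoveryAgent;
    import javax.bluetooth.LocalDevice;
    import javax.bluetooth.UUID;
    import javax.microedition.io.Connector;
    import javax.microedition.io.StreamConnection;
    import javax.microedition.io.StreamConnectionNotifier;
    public class MultiCycleEchoServer {
        private static final UUID ECHO_SERVER_UUID =
                new UUID("F0E0D0C0B0A000908070605040302010", false);
        public static void main(String[] args) throws IOException {
            LocalDevice local = LocalDevice.getLocalDevice();
            local.setDiscoverable(DiscoveryAgent.GIAC);
            String url = "btspp://localhost:" + ECHO_SERVER_UUID
                    + ";name=Echo Server;authorize=false";
            StreamConnectionNotifier notifier =
                    (StreamConnectionNotifier) Connector.open(url);
            while (true) {
                // one client at a time for simplicity
                StreamConnection conn = notifier.acceptAndOpen();
                handle(conn);
            }
        }
        // Keep the streams open and serve requests until the client disconnects.
        private static void handle(StreamConnection conn) {
            try {
                DataInputStream in = conn.openDataInputStream();
                DataOutputStream out = conn.openDataOutputStream();
                while (true) {
                    String request = in.readUTF();   // blocks until the client sends
                    out.writeUTF("echo: " + request);
                    out.flush();
                }
            } catch (IOException eof) {
                // client closed the connection: fall through and clean up
            } finally {
                try {
                    conn.close();
                } catch (IOException ignored) { }
            }
        }
    }
    The matching client would open its DataInputStream and DataOutputStream once after Connector.open() and reuse them for every exchange, rather than opening a new connection per call the way accessService() does above.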

    I haven't worked with real devices, only with the toolkit emulators, but it is definitely possible.
    You have to send the image as a byte stream, receive it at the JSP end, and then reconstruct the image.
    The Stream classes in J2ME and J2SE, plus the Image class, are all you will need.
    I have not done exactly this, but I have successfully sent an image from a JSP and displayed it on the emulator.

  • Lag/Pause believed to be caused by RAID [SOLVED]

    I installed Arch about 4 months ago and have ever since been experiencing a lag/freezing issue that occurs for anywhere from 1 to 30 seconds at a time. I have been looking for solutions and fixing issues I found with my system in order to track it down. It does not seem to be a hardware issue; everything is new, and I have even gone so far as to swap out parts such as the memory and to update the BIOS firmware.
    I believe this is an issue with the way the RAID is set up, as hard drive I/O ceases when this happens. If something else may be causing the issue, I am unsure what it could be.
    I have posted my dmesg, mdstat and mdadm.conf files below. My system is also still showing two bugs. The first is a "RAID initialization failed" message during system initialization, even though the RAID array was assembled fine by the kernel. The other is the LVM2 error message "File-based locking initialisation failed" during initialization; that error has supposedly been fixed by an update, though I am still getting it.
    dmesg
    Initializing cgroup subsys cpuset
    Initializing cgroup subsys cpu
    Linux version 2.6.33-ARCH (thomas@evey) (gcc version 4.5.0 (GCC) ) #1 SMP PREEMPT Mon Apr 26 19:31:00 CEST 2010
    Command line: root=/dev/array/root ro
    BIOS-provided physical RAM map:
    BIOS-e820: 0000000000000000 - 000000000009f800 (usable)
    BIOS-e820: 000000000009f800 - 00000000000a0000 (reserved)
    BIOS-e820: 00000000000f0000 - 0000000000100000 (reserved)
    BIOS-e820: 0000000000100000 - 00000000cfbe0000 (usable)
    BIOS-e820: 00000000cfbe0000 - 00000000cfbe1000 (ACPI NVS)
    BIOS-e820: 00000000cfbe1000 - 00000000cfbf0000 (ACPI data)
    BIOS-e820: 00000000cfbf0000 - 00000000cfc00000 (reserved)
    BIOS-e820: 00000000f4000000 - 00000000f8000000 (reserved)
    BIOS-e820: 00000000fec00000 - 0000000100000000 (reserved)
    BIOS-e820: 0000000100000000 - 0000000130000000 (usable)
    NX (Execute Disable) protection: active
    DMI 2.4 present.
    No AGP bridge found
    last_pfn = 0x130000 max_arch_pfn = 0x400000000
    MTRR default type: uncachable
    MTRR fixed ranges enabled:
    00000-9FFFF write-back
    A0000-BFFFF uncachable
    C0000-CCFFF write-protect
    CD000-EFFFF uncachable
    F0000-FFFFF write-through
    MTRR variable ranges enabled:
    0 base 000000000 mask F00000000 write-back
    1 base 0E0000000 mask FE0000000 uncachable
    2 base 0D0000000 mask FF0000000 uncachable
    3 base 100000000 mask FC0000000 write-back
    4 base 130000000 mask FF0000000 uncachable
    5 disabled
    6 disabled
    7 disabled
    x86 PAT enabled: cpu 0, old 0x7040600070406, new 0x7010600070106
    e820 update range: 00000000d0000000 - 0000000100000000 (usable) ==> (reserved)
    last_pfn = 0xcfbe0 max_arch_pfn = 0x400000000
    e820 update range: 0000000000001000 - 0000000000010000 (usable) ==> (reserved)
    Scanning 1 areas for low memory corruption
    modified physical RAM map:
    modified: 0000000000000000 - 0000000000001000 (usable)
    modified: 0000000000001000 - 0000000000010000 (reserved)
    modified: 0000000000010000 - 000000000009f800 (usable)
    modified: 000000000009f800 - 00000000000a0000 (reserved)
    modified: 00000000000f0000 - 0000000000100000 (reserved)
    modified: 0000000000100000 - 00000000cfbe0000 (usable)
    modified: 00000000cfbe0000 - 00000000cfbe1000 (ACPI NVS)
    modified: 00000000cfbe1000 - 00000000cfbf0000 (ACPI data)
    modified: 00000000cfbf0000 - 00000000cfc00000 (reserved)
    modified: 00000000f4000000 - 00000000f8000000 (reserved)
    modified: 00000000fec00000 - 0000000100000000 (reserved)
    modified: 0000000100000000 - 0000000130000000 (usable)
    initial memory mapped : 0 - 20000000
    found SMP MP-table at [ffff8800000f5630] f5630
    init_memory_mapping: 0000000000000000-00000000cfbe0000
    0000000000 - 00cfa00000 page 2M
    00cfa00000 - 00cfbe0000 page 4k
    kernel direct mapping tables up to cfbe0000 @ 16000-1c000
    init_memory_mapping: 0000000100000000-0000000130000000
    0100000000 - 0130000000 page 2M
    kernel direct mapping tables up to 130000000 @ 1a000-20000
    RAMDISK: 37d4d000 - 37feff2b
    ACPI: RSDP 00000000000f6ff0 00014 (v00 GBT )
    ACPI: RSDT 00000000cfbe1040 00040 (v01 GBT GBTUACPI 42302E31 GBTU 01010101)
    ACPI: FACP 00000000cfbe10c0 00074 (v01 GBT GBTUACPI 42302E31 GBTU 01010101)
    ACPI: DSDT 00000000cfbe1180 042C4 (v01 GBT GBTUACPI 00001000 MSFT 0100000C)
    ACPI: FACS 00000000cfbe0000 00040
    ACPI: HPET 00000000cfbe55c0 00038 (v01 GBT GBTUACPI 42302E31 GBTU 00000098)
    ACPI: MCFG 00000000cfbe5640 0003C (v01 GBT GBTUACPI 42302E31 GBTU 01010101)
    ACPI: EUDS 00000000cfbe5680 00560 (v01 GBT 00000000 00000000)
    ACPI: TAMG 00000000cfbe5be0 00A32 (v01 GBT GBT B0 5455312E BG?? 53450101)
    ACPI: APIC 00000000cfbe54c0 000BC (v01 GBT GBTUACPI 42302E31 GBTU 01010101)
    ACPI: SSDT 00000000cfbe6640 02F8C (v01 INTEL PPM RCM 80000001 INTL 20061109)
    ACPI: Local APIC address 0xfee00000
    (13 early reservations) ==> bootmem [0000000000 - 0130000000]
    #0 [0000000000 - 0000001000] BIOS data page ==> [0000000000 - 0000001000]
    #1 [0001000000 - 00016d6b38] TEXT DATA BSS ==> [0001000000 - 00016d6b38]
    #2 [0037d4d000 - 0037feff2b] RAMDISK ==> [0037d4d000 - 0037feff2b]
    #3 [00016d7000 - 00016d70f6] BRK ==> [00016d7000 - 00016d70f6]
    #4 [00000f5640 - 0000100000] BIOS reserved ==> [00000f5640 - 0000100000]
    #5 [00000f5630 - 00000f5640] MP-table mpf ==> [00000f5630 - 00000f5640]
    #6 [000009f800 - 00000f0d00] BIOS reserved ==> [000009f800 - 00000f0d00]
    #7 [00000f0edc - 00000f5630] BIOS reserved ==> [00000f0edc - 00000f5630]
    #8 [00000f0d00 - 00000f0edc] MP-table mpc ==> [00000f0d00 - 00000f0edc]
    #9 [0000010000 - 0000012000] TRAMPOLINE ==> [0000010000 - 0000012000]
    #10 [0000012000 - 0000016000] ACPI WAKEUP ==> [0000012000 - 0000016000]
    #11 [0000016000 - 000001a000] PGTABLE ==> [0000016000 - 000001a000]
    #12 [000001a000 - 000001b000] PGTABLE ==> [000001a000 - 000001b000]
    [ffffea0000000000-ffffea00043fffff] PMD -> [ffff880001800000-ffff8800051fffff] on node 0
    Zone PFN ranges:
    DMA 0x00000000 -> 0x00001000
    DMA32 0x00001000 -> 0x00100000
    Normal 0x00100000 -> 0x00130000
    Movable zone start PFN for each node
    early_node_map[4] active PFN ranges
    0: 0x00000000 -> 0x00000001
    0: 0x00000010 -> 0x0000009f
    0: 0x00000100 -> 0x000cfbe0
    0: 0x00100000 -> 0x00130000
    On node 0 totalpages: 1047408
    DMA zone: 56 pages used for memmap
    DMA zone: 107 pages reserved
    DMA zone: 3821 pages, LIFO batch:0
    DMA32 zone: 14280 pages used for memmap
    DMA32 zone: 832536 pages, LIFO batch:31
    Normal zone: 2688 pages used for memmap
    Normal zone: 193920 pages, LIFO batch:31
    ACPI: PM-Timer IO Port: 0x408
    ACPI: Local APIC address 0xfee00000
    ACPI: LAPIC (acpi_id[0x00] lapic_id[0x00] enabled)
    ACPI: LAPIC (acpi_id[0x01] lapic_id[0x02] enabled)
    ACPI: LAPIC (acpi_id[0x02] lapic_id[0x04] enabled)
    ACPI: LAPIC (acpi_id[0x03] lapic_id[0x06] enabled)
    ACPI: LAPIC (acpi_id[0x04] lapic_id[0x01] enabled)
    ACPI: LAPIC (acpi_id[0x05] lapic_id[0x03] enabled)
    ACPI: LAPIC (acpi_id[0x06] lapic_id[0x05] enabled)
    ACPI: LAPIC (acpi_id[0x07] lapic_id[0x07] enabled)
    ACPI: LAPIC_NMI (acpi_id[0x00] dfl dfl lint[0x1])
    ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
    ACPI: LAPIC_NMI (acpi_id[0x02] dfl dfl lint[0x1])
    ACPI: LAPIC_NMI (acpi_id[0x03] dfl dfl lint[0x1])
    ACPI: LAPIC_NMI (acpi_id[0x04] dfl dfl lint[0x1])
    ACPI: LAPIC_NMI (acpi_id[0x05] dfl dfl lint[0x1])
    ACPI: LAPIC_NMI (acpi_id[0x06] dfl dfl lint[0x1])
    ACPI: LAPIC_NMI (acpi_id[0x07] dfl dfl lint[0x1])
    ACPI: IOAPIC (id[0x02] address[0xfec00000] gsi_base[0])
    IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-23
    ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
    ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
    ACPI: IRQ0 used by override.
    ACPI: IRQ2 used by override.
    ACPI: IRQ9 used by override.
    Using ACPI (MADT) for SMP configuration information
    ACPI: HPET id: 0x8086a201 base: 0xfed00000
    SMP: Allowing 8 CPUs, 0 hotplug CPUs
    nr_irqs_gsi: 24
    PM: Registered nosave memory: 0000000000001000 - 0000000000010000
    PM: Registered nosave memory: 000000000009f000 - 00000000000a0000
    PM: Registered nosave memory: 00000000000a0000 - 00000000000f0000
    PM: Registered nosave memory: 00000000000f0000 - 0000000000100000
    PM: Registered nosave memory: 00000000cfbe0000 - 00000000cfbe1000
    PM: Registered nosave memory: 00000000cfbe1000 - 00000000cfbf0000
    PM: Registered nosave memory: 00000000cfbf0000 - 00000000cfc00000
    PM: Registered nosave memory: 00000000cfc00000 - 00000000f4000000
    PM: Registered nosave memory: 00000000f4000000 - 00000000f8000000
    PM: Registered nosave memory: 00000000f8000000 - 00000000fec00000
    PM: Registered nosave memory: 00000000fec00000 - 0000000100000000
    Allocating PCI resources starting at cfc00000 (gap: cfc00000:24400000)
    Booting paravirtualized kernel on bare hardware
    setup_percpu: NR_CPUS:16 nr_cpumask_bits:16 nr_cpu_ids:8 nr_node_ids:1
    PERCPU: Embedded 29 pages/cpu @ffff880005400000 s89176 r8192 d21416 u262144
    pcpu-alloc: s89176 r8192 d21416 u262144 alloc=1*2097152
    pcpu-alloc: [0] 0 1 2 3 4 5 6 7
    Built 1 zonelists in Zone order, mobility grouping on. Total pages: 1030277
    Kernel command line: root=/dev/array/root ro
    PID hash table entries: 4096 (order: 3, 32768 bytes)
    Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes)
    Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes)
    Checking aperture...
    No AGP bridge found
    Calgary: detecting Calgary via BIOS EBDA area
    Calgary: Unable to locate Rio Grande table in EBDA - bailing!
    Memory: 4046176k/4980736k available (3449k kernel code, 791104k absent, 142528k reserved, 1818k data, 480k init)
    SLUB: Genslabs=13, HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
    Hierarchical RCU implementation.
    NR_IRQS:768
    Console: colour VGA+ 80x25
    console [tty0] enabled
    allocated 41943040 bytes of page_cgroup
    please try 'cgroup_disable=memory' option if you don't want memory cgroups
    hpet clockevent registered
    Fast TSC calibration using PIT
    Detected 1998.531 MHz processor.
    Calibrating delay loop (skipped), value calculated using timer frequency.. 3998.60 BogoMIPS (lpj=6661770)
    Security Framework initialized
    Mount-cache hash table entries: 256
    Initializing cgroup subsys ns
    Initializing cgroup subsys cpuacct
    Initializing cgroup subsys memory
    Initializing cgroup subsys devices
    Initializing cgroup subsys freezer
    Initializing cgroup subsys net_cls
    CPU: Physical Processor ID: 0
    CPU: Processor Core ID: 0
    mce: CPU supports 9 MCE banks
    CPU0: Thermal monitoring enabled (TM1)
    CPU 0 MCA banks CMCI:2 CMCI:3 CMCI:5 CMCI:6 CMCI:8
    using mwait in idle threads.
    Performance Events: Nehalem/Corei7 events, Intel PMU driver.
    ... version: 3
    ... bit width: 48
    ... generic registers: 4
    ... value mask: 0000ffffffffffff
    ... max period: 000000007fffffff
    ... fixed-purpose events: 3
    ... event mask: 000000070000000f
    ACPI: Core revision 20091214
    Setting APIC routing to flat
    ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
    CPU0: Intel(R) Core(TM) i7 CPU 860 @ 2.80GHz stepping 05
    Booting Node 0, Processors #1
    CPU 1 MCA banks CMCI:2 CMCI:3 CMCI:5 SHD:6 SHD:8
    #2
    CPU 2 MCA banks CMCI:2 CMCI:3 CMCI:5 SHD:6 SHD:8
    #3
    CPU 3 MCA banks CMCI:2 CMCI:3 CMCI:5 SHD:6 SHD:8
    #4
    CPU 4 MCA banks SHD:2 SHD:3 SHD:5 SHD:6 SHD:8
    #5
    CPU 5 MCA banks SHD:2 SHD:3 SHD:5 SHD:6 SHD:8
    #6
    CPU 6 MCA banks SHD:2 SHD:3 SHD:5 SHD:6 SHD:8
    #7 Ok.
    CPU 7 MCA banks SHD:2 SHD:3 SHD:5 SHD:6 SHD:8
    Brought up 8 CPUs
    Total of 8 processors activated (31988.63 BogoMIPS).
    NET: Registered protocol family 16
    ACPI: bus type pci registered
    PCI: MMCONFIG for domain 0000 [bus 00-3f] at [mem 0xf4000000-0xf7ffffff] (base 0xf4000000)
    PCI: MMCONFIG at [mem 0xf4000000-0xf7ffffff] reserved in E820
    PCI: Using configuration type 1 for base access
    bio: create slab <bio-0> at 0
    ACPI: EC: Look up EC in DSDT
    ACPI: Interpreter enabled
    ACPI: (supports S0 S3 S4 S5)
    ACPI: Using IOAPIC for interrupt routing
    ACPI Warning: Incorrect checksum in table [TAMG] - 10, should be 0F (20091214/tbutils-314)
    ACPI: No dock devices found.
    ACPI: PCI Root Bridge [PCI0] (0000:00)
    pci_root PNP0A03:00: ignoring host bridge windows from ACPI; boot with "pci=use_crs" to use them
    pci_root PNP0A03:00: host bridge window [io 0x0000-0x0cf7] (ignored)
    pci_root PNP0A03:00: host bridge window [io 0x0d00-0xffff] (ignored)
    pci_root PNP0A03:00: host bridge window [mem 0x000a0000-0x000bffff] (ignored)
    pci_root PNP0A03:00: host bridge window [mem 0x000c0000-0x000dffff] (ignored)
    pci_root PNP0A03:00: host bridge window [mem 0xcfc00000-0xfebfffff] (ignored)
    pci 0000:00:03.0: PME# supported from D0 D3hot D3cold
    pci 0000:00:03.0: PME# disabled
    pci 0000:00:1a.0: reg 20: [io 0xff00-0xff1f]
    pci 0000:00:1a.1: reg 20: [io 0xfe00-0xfe1f]
    pci 0000:00:1a.2: reg 20: [io 0xfd00-0xfd1f]
    pci 0000:00:1a.7: reg 10: [mem 0xfbfff000-0xfbfff3ff]
    pci 0000:00:1a.7: PME# supported from D0 D3hot D3cold
    pci 0000:00:1a.7: PME# disabled
    pci 0000:00:1b.0: reg 10: [mem 0xfbff8000-0xfbffbfff 64bit]
    pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold
    pci 0000:00:1b.0: PME# disabled
    pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
    pci 0000:00:1c.0: PME# disabled
    pci 0000:00:1c.1: PME# supported from D0 D3hot D3cold
    pci 0000:00:1c.1: PME# disabled
    pci 0000:00:1c.2: PME# supported from D0 D3hot D3cold
    pci 0000:00:1c.2: PME# disabled
    pci 0000:00:1d.0: reg 20: [io 0xfc00-0xfc1f]
    pci 0000:00:1d.1: reg 20: [io 0xfb00-0xfb1f]
    pci 0000:00:1d.2: reg 20: [io 0xfa00-0xfa1f]
    pci 0000:00:1d.3: reg 20: [io 0xf900-0xf91f]
    pci 0000:00:1d.7: reg 10: [mem 0xfbffe000-0xfbffe3ff]
    pci 0000:00:1d.7: PME# supported from D0 D3hot D3cold
    pci 0000:00:1d.7: PME# disabled
    pci 0000:00:1f.2: reg 10: [io 0xf800-0xf807]
    pci 0000:00:1f.2: reg 14: [io 0xf700-0xf703]
    pci 0000:00:1f.2: reg 18: [io 0xf600-0xf607]
    pci 0000:00:1f.2: reg 1c: [io 0xf500-0xf503]
    pci 0000:00:1f.2: reg 20: [io 0xf400-0xf40f]
    pci 0000:00:1f.2: reg 24: [io 0xf300-0xf30f]
    pci 0000:00:1f.3: reg 10: [mem 0xfbffd000-0xfbffd0ff 64bit]
    pci 0000:00:1f.3: reg 20: [io 0x0500-0x051f]
    pci 0000:00:1f.5: reg 10: [io 0xf100-0xf107]
    pci 0000:00:1f.5: reg 14: [io 0xf000-0xf003]
    pci 0000:00:1f.5: reg 18: [io 0xef00-0xef07]
    pci 0000:00:1f.5: reg 1c: [io 0xee00-0xee03]
    pci 0000:00:1f.5: reg 20: [io 0xed00-0xed0f]
    pci 0000:00:1f.5: reg 24: [io 0xec00-0xec0f]
    pci 0000:01:00.0: reg 10: [mem 0xf9000000-0xf9ffffff]
    pci 0000:01:00.0: reg 14: [mem 0xd0000000-0xdfffffff 64bit pref]
    pci 0000:01:00.0: reg 1c: [mem 0xee000000-0xefffffff 64bit pref]
    pci 0000:01:00.0: reg 24: [io 0xcf00-0xcf7f]
    pci 0000:01:00.0: reg 30: [mem 0x00000000-0x0007ffff pref]
    pci 0000:01:00.1: reg 10: [mem 0xfaffc000-0xfaffffff]
    pci 0000:00:03.0: PCI bridge to [bus 01-01]
    pci 0000:00:03.0: bridge window [io 0xc000-0xcfff]
    pci 0000:00:03.0: bridge window [mem 0xf9000000-0xfaffffff]
    pci 0000:00:03.0: bridge window [mem 0xd0000000-0xefffffff 64bit pref]
    pci 0000:02:00.0: reg 10: [io 0xaf00-0xaf07]
    pci 0000:02:00.0: reg 14: [io 0xae00-0xae03]
    pci 0000:02:00.0: reg 18: [io 0xad00-0xad07]
    pci 0000:02:00.0: reg 1c: [io 0xac00-0xac03]
    pci 0000:02:00.0: reg 20: [io 0xab00-0xab0f]
    pci 0000:02:00.0: reg 24: [mem 0xfbcff000-0xfbcff7ff]
    pci 0000:02:00.0: reg 30: [mem 0x00000000-0x0000ffff pref]
    pci 0000:02:00.0: PME# supported from D3hot
    pci 0000:02:00.0: PME# disabled
    pci 0000:00:1c.0: PCI bridge to [bus 02-02]
    pci 0000:00:1c.0: bridge window [io 0xa000-0xafff]
    pci 0000:00:1c.0: bridge window [mem 0xfbc00000-0xfbcfffff]
    pci 0000:03:00.0: reg 10: [io 0xde00-0xdeff]
    pci 0000:03:00.0: reg 18: [mem 0xfbeff000-0xfbefffff 64bit pref]
    pci 0000:03:00.0: reg 20: [mem 0xfbef8000-0xfbefbfff 64bit pref]
    pci 0000:03:00.0: reg 30: [mem 0x00000000-0x0001ffff pref]
    pci 0000:03:00.0: supports D1 D2
    pci 0000:03:00.0: PME# supported from D0 D1 D2 D3hot D3cold
    pci 0000:03:00.0: PME# disabled
    pci 0000:00:1c.1: PCI bridge to [bus 03-03]
    pci 0000:00:1c.1: bridge window [io 0xd000-0xdfff]
    pci 0000:00:1c.1: bridge window [mem 0xfbb00000-0xfbbfffff]
    pci 0000:00:1c.1: bridge window [mem 0xfbe00000-0xfbefffff 64bit pref]
    pci 0000:04:00.0: reg 10: [mem 0xfbdfe000-0xfbdfffff 64bit]
    pci 0000:04:00.0: PME# supported from D0 D3hot D3cold
    pci 0000:04:00.0: PME# disabled
    pci 0000:00:1c.2: PCI bridge to [bus 04-04]
    pci 0000:00:1c.2: bridge window [mem 0xfbd00000-0xfbdfffff]
    pci 0000:05:05.0: reg 10: [io 0xbf00-0xbf07]
    pci 0000:05:05.0: reg 14: [io 0xbe00-0xbe03]
    pci 0000:05:05.0: reg 18: [io 0xbd00-0xbd07]
    pci 0000:05:05.0: reg 1c: [io 0xbc00-0xbc03]
    pci 0000:05:05.0: reg 20: [io 0xbb00-0xbb0f]
    pci 0000:00:1e.0: PCI bridge to [bus 05-05] (subtractive decode)
    pci 0000:00:1e.0: bridge window [io 0xb000-0xbfff]
    pci_bus 0000:00: on NUMA node 0
    ACPI: PCI Interrupt Routing Table [\_SB_.PCI0._PRT]
    ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.PEX0._PRT]
    ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.PEX1._PRT]
    ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.PEX2._PRT]
    ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.HUB0._PRT]
    ACPI: PCI Interrupt Link [LNKA] (IRQs 3 4 5 6 7 9 *10 11 12 14 15)
    ACPI: PCI Interrupt Link [LNKB] (IRQs 3 4 5 6 7 9 10 11 *12 14 15)
    ACPI: PCI Interrupt Link [LNKC] (IRQs *3 4 5 6 7 9 10 11 12 14 15)
    ACPI: PCI Interrupt Link [LNKD] (IRQs 3 4 5 6 7 9 10 *11 12 14 15)
    ACPI: PCI Interrupt Link [LNKE] (IRQs 3 4 5 6 7 9 10 11 12 14 15) *0, disabled.
    ACPI: PCI Interrupt Link [LNKF] (IRQs 3 4 5 6 7 9 10 *11 12 14 15)
    ACPI: PCI Interrupt Link [LNK0] (IRQs 3 4 *5 6 7 9 10 11 12 14 15)
    ACPI: PCI Interrupt Link [LNK1] (IRQs 3 4 5 6 7 *9 10 11 12 14 15)
    vgaarb: device added: PCI:0000:01:00.0,decodes=io+mem,owns=io+mem,locks=none
    vgaarb: loaded
    PCI: Using ACPI for IRQ routing
    PCI: pci_cache_line_size set to 64 bytes
    NetLabel: Initializing
    NetLabel: domain hash size = 128
    NetLabel: protocols = UNLABELED CIPSOv4
    NetLabel: unlabeled traffic allowed by default
    HPET: 8 timers in total, 5 timers will be used for per-cpu timer
    hpet0: at MMIO 0xfed00000, IRQs 2, 8, 24, 25, 26, 27, 28, 0
    hpet0: 8 comparators, 64-bit 14.318180 MHz counter
    hpet: hpet2 irq 24 for MSI
    hpet: hpet3 irq 25 for MSI
    hpet: hpet4 irq 26 for MSI
    hpet: hpet5 irq 27 for MSI
    hpet: hpet6 irq 28 for MSI
    Switching to clocksource tsc
    pnp: PnP ACPI init
    ACPI: bus type pnp registered
    pnp: PnP ACPI: found 14 devices
    ACPI: ACPI bus type pnp unregistered
    system 00:01: [io 0x04d0-0x04d1] has been reserved
    system 00:01: [io 0x0290-0x029f] has been reserved
    system 00:01: [io 0x0800-0x087f] has been reserved
    system 00:01: [io 0x0290-0x0294] has been reserved
    system 00:01: [io 0x0880-0x088f] has been reserved
    system 00:0a: [io 0x0400-0x04cf] has been reserved
    system 00:0a: [io 0x04d2-0x04ff] has been reserved
    system 00:0b: [mem 0xf4000000-0xf7ffffff] has been reserved
    system 00:0c: [mem 0x000cd600-0x000cffff] has been reserved
    system 00:0c: [mem 0x000f0000-0x000f7fff] could not be reserved
    system 00:0c: [mem 0x000f8000-0x000fbfff] could not be reserved
    system 00:0c: [mem 0x000fc000-0x000fffff] could not be reserved
    system 00:0c: [mem 0xcfbe0000-0xcfbeffff] could not be reserved
    system 00:0c: [mem 0x00000000-0x0009ffff] could not be reserved
    system 00:0c: [mem 0x00100000-0xcfbdffff] could not be reserved
    system 00:0c: [mem 0xcfbf0000-0xcfbfffff] has been reserved
    system 00:0c: [mem 0xfec00000-0xfec00fff] could not be reserved
    system 00:0c: [mem 0xfed10000-0xfed1dfff] has been reserved
    system 00:0c: [mem 0xfed20000-0xfed8ffff] has been reserved
    system 00:0c: [mem 0xfee00000-0xfee00fff] has been reserved
    system 00:0c: [mem 0xffb00000-0xffb7ffff] has been reserved
    system 00:0c: [mem 0xfff00000-0xffffffff] has been reserved
    system 00:0c: [mem 0x000e0000-0x000effff] has been reserved
    pci 0000:00:1c.0: BAR 15: assigned [mem 0xf0000000-0xf01fffff pref]
    pci 0000:00:1c.2: BAR 15: assigned [mem 0xf0200000-0xf03fffff 64bit pref]
    pci 0000:00:1c.2: BAR 13: assigned [io 0x1000-0x1fff]
    pci 0000:01:00.0: BAR 6: assigned [mem 0xe0000000-0xe007ffff pref]
    pci 0000:00:03.0: PCI bridge to [bus 01-01]
    pci 0000:00:03.0: bridge window [io 0xc000-0xcfff]
    pci 0000:00:03.0: bridge window [mem 0xf9000000-0xfaffffff]
    pci 0000:00:03.0: bridge window [mem 0xd0000000-0xefffffff 64bit pref]
    pci 0000:02:00.0: BAR 6: assigned [mem 0xf0000000-0xf000ffff pref]
    pci 0000:00:1c.0: PCI bridge to [bus 02-02]
    pci 0000:00:1c.0: bridge window [io 0xa000-0xafff]
    pci 0000:00:1c.0: bridge window [mem 0xfbc00000-0xfbcfffff]
    pci 0000:00:1c.0: bridge window [mem 0xf0000000-0xf01fffff pref]
    pci 0000:03:00.0: BAR 6: assigned [mem 0xfbe00000-0xfbe1ffff pref]
    pci 0000:00:1c.1: PCI bridge to [bus 03-03]
    pci 0000:00:1c.1: bridge window [io 0xd000-0xdfff]
    pci 0000:00:1c.1: bridge window [mem 0xfbb00000-0xfbbfffff]
    pci 0000:00:1c.1: bridge window [mem 0xfbe00000-0xfbefffff 64bit pref]
    pci 0000:00:1c.2: PCI bridge to [bus 04-04]
    pci 0000:00:1c.2: bridge window [io 0x1000-0x1fff]
    pci 0000:00:1c.2: bridge window [mem 0xfbd00000-0xfbdfffff]
    pci 0000:00:1c.2: bridge window [mem 0xf0200000-0xf03fffff 64bit pref]
    pci 0000:00:1e.0: PCI bridge to [bus 05-05]
    pci 0000:00:1e.0: bridge window [io 0xb000-0xbfff]
    pci 0000:00:1e.0: bridge window [mem disabled]
    pci 0000:00:1e.0: bridge window [mem pref disabled]
    pci 0000:00:03.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16
    pci 0000:00:03.0: setting latency timer to 64
    pci 0000:00:1c.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16
    pci 0000:00:1c.0: setting latency timer to 64
    pci 0000:00:1c.1: PCI INT B -> GSI 17 (level, low) -> IRQ 17
    pci 0000:00:1c.1: setting latency timer to 64
    pci 0000:00:1c.2: enabling device (0006 -> 0007)
    pci 0000:00:1c.2: PCI INT C -> GSI 18 (level, low) -> IRQ 18
    pci 0000:00:1c.2: setting latency timer to 64
    pci 0000:00:1e.0: setting latency timer to 64
    pci_bus 0000:00: resource 0 [io 0x0000-0xffff]
    pci_bus 0000:00: resource 1 [mem 0x00000000-0xffffffffffffffff]
    pci_bus 0000:01: resource 0 [io 0xc000-0xcfff]
    pci_bus 0000:01: resource 1 [mem 0xf9000000-0xfaffffff]
    pci_bus 0000:01: resource 2 [mem 0xd0000000-0xefffffff 64bit pref]
    pci_bus 0000:02: resource 0 [io 0xa000-0xafff]
    pci_bus 0000:02: resource 1 [mem 0xfbc00000-0xfbcfffff]
    pci_bus 0000:02: resource 2 [mem 0xf0000000-0xf01fffff pref]
    pci_bus 0000:03: resource 0 [io 0xd000-0xdfff]
    pci_bus 0000:03: resource 1 [mem 0xfbb00000-0xfbbfffff]
    pci_bus 0000:03: resource 2 [mem 0xfbe00000-0xfbefffff 64bit pref]
    pci_bus 0000:04: resource 0 [io 0x1000-0x1fff]
    pci_bus 0000:04: resource 1 [mem 0xfbd00000-0xfbdfffff]
    pci_bus 0000:04: resource 2 [mem 0xf0200000-0xf03fffff 64bit pref]
    pci_bus 0000:05: resource 0 [io 0xb000-0xbfff]
    pci_bus 0000:05: resource 3 [io 0x0000-0xffff]
    pci_bus 0000:05: resource 4 [mem 0x00000000-0xffffffffffffffff]
    NET: Registered protocol family 2
    IP route cache hash table entries: 131072 (order: 8, 1048576 bytes)
    TCP established hash table entries: 262144 (order: 10, 4194304 bytes)
    TCP bind hash table entries: 65536 (order: 8, 1048576 bytes)
    TCP: Hash tables configured (established 262144 bind 65536)
    TCP reno registered
    UDP hash table entries: 2048 (order: 4, 65536 bytes)
    UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes)
    NET: Registered protocol family 1
    pci 0000:01:00.0: Boot video device
    PCI: CLS 64 bytes, default 64
    Unpacking initramfs...
    Freeing initrd memory: 2699k freed
    PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
    Placing 64MB software IO TLB between ffff880005bdd000 - ffff880009bdd000
    software IO TLB at phys 0x5bdd000 - 0x9bdd000
    Scanning for low memory corruption every 60 seconds
    audit: initializing netlink socket (disabled)
    type=2000 audit(1272807521.273:1): initialized
    VFS: Disk quotas dquot_6.5.2
    Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
    msgmni has been set to 7909
    alg: No test for stdrng (krng)
    Block layer SCSI generic (bsg) driver version 0.4 loaded (major 254)
    io scheduler noop registered
    io scheduler deadline registered
    io scheduler cfq registered (default)
    pcieport 0000:00:03.0: setting latency timer to 64
    pcieport 0000:00:03.0: irq 29 for MSI/MSI-X
    pcieport 0000:00:1c.0: setting latency timer to 64
    pcieport 0000:00:1c.0: irq 30 for MSI/MSI-X
    pcieport 0000:00:1c.1: setting latency timer to 64
    pcieport 0000:00:1c.1: irq 31 for MSI/MSI-X
    pcieport 0000:00:1c.2: setting latency timer to 64
    pcieport 0000:00:1c.2: irq 32 for MSI/MSI-X
    aer 0000:00:03.0:pcie02: AER service couldn't init device: no _OSC support
    Linux agpgart interface v0.103
    Serial: 8250/16550 driver, 4 ports, IRQ sharing disabled
    serial8250: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A
    00:08: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A
    input: Macintosh mouse button emulation as /devices/virtual/input/input0
    PNP: No PS/2 controller found. Probing ports directly.
    Failed to disable AUX port, but continuing anyway... Is this a SiS?
    If AUX port is really absent please use the 'i8042.noaux' option.
    serio: i8042 KBD port at 0x60,0x64 irq 1
    mice: PS/2 mouse device common for all mice
    cpuidle: using governor ladder
    cpuidle: using governor menu
    TCP cubic registered
    NET: Registered protocol family 17
    PM: Resume from disk failed.
    registered taskstats version 1
    Initalizing network drop monitor service
    Freeing unused kernel memory: 480k freed
    device-mapper: uevent: version 1.0.3
    device-mapper: ioctl: 4.17.0-ioctl (2010-03-05) initialised: [email protected]
    SCSI subsystem initialized
    libata version 3.00 loaded.
    pata_it8213 0000:05:05.0: version 0.0.3
    pata_it8213 0000:05:05.0: PCI INT A -> GSI 19 (level, low) -> IRQ 19
    pata_it8213 0000:05:05.0: setting latency timer to 64
    scsi0 : pata_it8213
    scsi1 : pata_it8213
    ata1: PATA max UDMA/66 cmd 0xbf00 ctl 0xbe00 bmdma 0xbb00 irq 19
    ata2: DUMMY
    ata_piix 0000:00:1f.2: version 2.13
    ata_piix 0000:00:1f.2: PCI INT B -> GSI 19 (level, low) -> IRQ 19
    ata_piix 0000:00:1f.2: MAP [ P0 P2 P1 P3 ]
    ata_piix 0000:00:1f.2: setting latency timer to 64
    scsi2 : ata_piix
    scsi3 : ata_piix
    ata3: SATA max UDMA/133 cmd 0xf800 ctl 0xf700 bmdma 0xf400 irq 19
    ata4: SATA max UDMA/133 cmd 0xf600 ctl 0xf500 bmdma 0xf408 irq 19
    ata_piix 0000:00:1f.5: PCI INT B -> GSI 19 (level, low) -> IRQ 19
    ata_piix 0000:00:1f.5: MAP [ P0 -- P1 -- ]
    ata_piix 0000:00:1f.5: setting latency timer to 64
    scsi4 : ata_piix
    scsi5 : ata_piix
    ata5: SATA max UDMA/133 cmd 0xf100 ctl 0xf000 bmdma 0xed00 irq 19
    ata6: SATA max UDMA/133 cmd 0xef00 ctl 0xee00 bmdma 0xed08 irq 19
    ata6: SATA link down (SStatus 0 SControl 300)
    ata5: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
    ata5.00: ATAPI: HL-DT-ST BDDVDRW CH08LS10, 1.00, max UDMA/133
    ata5.00: configured for UDMA/133
    ata3.00: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
    ata3.01: SATA link down (SStatus 0 SControl 300)
    ata4.00: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
    ata4.01: SATA link down (SStatus 0 SControl 300)
    ata3.00: ATA-8: WDC WD15EARS-00Z5B1, 80.00A80, max UDMA/133
    ata3.00: 2930277168 sectors, multi 16: LBA48 NCQ (depth 0/32)
    ata3.00: configured for UDMA/133
    scsi 2:0:0:0: Direct-Access ATA WDC WD15EARS-00Z 80.0 PQ: 0 ANSI: 5
    ata4.00: ATA-8: WDC WD15EARS-00Z5B1, 80.00A80, max UDMA/133
    ata4.00: 2930277168 sectors, multi 16: LBA48 NCQ (depth 0/32)
    ata4.00: configured for UDMA/133
    scsi 3:0:0:0: Direct-Access ATA WDC WD15EARS-00Z 80.0 PQ: 0 ANSI: 5
    scsi 4:0:0:0: CD-ROM HL-DT-ST BDDVDRW CH08LS10 1.00 PQ: 0 ANSI: 5
    sd 2:0:0:0: [sda] 2930277168 512-byte logical blocks: (1.50 TB/1.36 TiB)
    sd 3:0:0:0: [sdb] 2930277168 512-byte logical blocks: (1.50 TB/1.36 TiB)
    sd 3:0:0:0: [sdb] Write Protect is off
    sd 3:0:0:0: [sdb] Mode Sense: 00 3a 00 00
    sd 3:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
    sdb:
    sd 2:0:0:0: [sda] Write Protect is off
    sd 2:0:0:0: [sda] Mode Sense: 00 3a 00 00
    sd 2:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
    sda: sda1 sda2 sda3
    sd 2:0:0:0: [sda] Attached SCSI disk
    sdb1 sdb2 sdb3
    sd 3:0:0:0: [sdb] Attached SCSI disk
    sr0: scsi3-mmc drive: 48x/48x writer dvd-ram cd/rw xa/form2 cdda tray
    Uniform CD-ROM driver Revision: 3.20
    sr 4:0:0:0: Attached scsi CD-ROM sr0
    md: md0 stopped.
    md: bind<sdb1>
    md: bind<sda1>
    md: raid1 personality registered for level 1
    raid1: raid set md0 active with 2 out of 2 mirrors
    md0: detected capacity change from 0 to 295960576
    md0: unknown partition table
    md: md1 stopped.
    md: bind<sdb2>
    md: bind<sda2>
    raid1: raid set md1 active with 2 out of 2 mirrors
    md1: detected capacity change from 0 to 1497954975744
    md1: unknown partition table
    md: md2 stopped.
    md: bind<sdb3>
    md: bind<sda3>
    raid1: raid set md2 active with 2 out of 2 mirrors
    md2: detected capacity change from 0 to 2048000000
    md2: unknown partition table
    EXT4-fs (dm-0): mounted filesystem with ordered data mode
    rtc_cmos 00:04: RTC can wake from S4
    rtc_cmos 00:04: rtc core: registered rtc_cmos as rtc0
    rtc0: alarms up to one month, 242 bytes nvram, hpet irqs
    udev: starting version 151
    sd 2:0:0:0: Attached scsi generic sg0 type 0
    sd 3:0:0:0: Attached scsi generic sg1 type 0
    sr 4:0:0:0: Attached scsi generic sg2 type 5
    iTCO_vendor_support: vendor-support=0
    iTCO_wdt: Intel TCO WatchDog Timer Driver v1.05
    iTCO_wdt: Found a P55 TCO device (Version=2, TCOBASE=0x0460)
    iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0)
    Monitor-Mwait will be used to enter C-1 state
    Monitor-Mwait will be used to enter C-2 state
    Monitor-Mwait will be used to enter C-3 state
    ahci 0000:02:00.0: version 3.0
    ahci 0000:02:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16
    ahci 0000:02:00.0: irq 33 for MSI/MSI-X
    ahci 0000:02:00.0: AHCI 0001.0200 32 slots 8 ports 6 Gbps 0xff impl SATA mode
    ahci 0000:02:00.0: flags: 64bit ncq pio
    ahci 0000:02:00.0: setting latency timer to 64
    scsi6 : ahci
    scsi7 : ahci
    scsi8 : ahci
    scsi9 : ahci
    scsi10 : ahci
    scsi11 : ahci
    scsi12 : ahci
    scsi13 : ahci
    ata7: SATA max UDMA/133 abar m2048@0xfbcff000 port 0xfbcff100 irq 33
    ata8: SATA max UDMA/133 abar m2048@0xfbcff000 port 0xfbcff180 irq 33
    ata9: SATA max UDMA/133 abar m2048@0xfbcff000 port 0xfbcff200 irq 33
    ata10: SATA max UDMA/133 abar m2048@0xfbcff000 port 0xfbcff280 irq 33
    ata11: SATA max UDMA/133 abar m2048@0xfbcff000 port 0xfbcff300 irq 33
    ata12: SATA max UDMA/133 abar m2048@0xfbcff000 port 0xfbcff380 irq 33
    ata13: SATA max UDMA/133 abar m2048@0xfbcff000 port 0xfbcff400 irq 33
    ata14: SATA max UDMA/133 abar m2048@0xfbcff000 port 0xfbcff480 irq 33
    input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input1
    ACPI: Power Button [PWRB]
    input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
    ACPI: Power Button [PWRF]
    usbcore: registered new interface driver usbfs
    usbcore: registered new interface driver hub
    usbcore: registered new device driver usb
    ata13: SATA link down (SStatus 0 SControl 300)
    ata8: SATA link down (SStatus 0 SControl 300)
    ata10: SATA link down (SStatus 0 SControl 300)
    ata14: SATA link up <unknown> (SStatus FFFFFF93 SControl 300)
    ata9: SATA link down (SStatus 0 SControl 300)
    ata12: SATA link down (SStatus 0 SControl 300)
    ata11: SATA link down (SStatus 0 SControl 300)
    ata7: SATA link down (SStatus 0 SControl 300)
    ata14.00: ATAPI: MARVELL VIRTUALL, 1.09, max UDMA/66
    ata14.00: configured for UDMA/66
    scsi 13:0:0:0: Processor Marvell 91xx Config 1.01 PQ: 0 ANSI: 5
    scsi 13:0:0:0: Attached scsi generic sg3 type 3
    nvidia: module license 'NVIDIA' taints kernel.
    Disabling lock debugging due to kernel taint
    parport_pc 00:09: reported by Plug and Play ACPI
    parport0: PC-style at 0x378, irq 7 [PCSPP,TRISTATE]
    Floppy drive(s): fd0 is 1.44M
    ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
    ehci_hcd 0000:00:1a.7: PCI INT C -> GSI 18 (level, low) -> IRQ 18
    ehci_hcd 0000:00:1a.7: setting latency timer to 64
    ehci_hcd 0000:00:1a.7: EHCI Host Controller
    ehci_hcd 0000:00:1a.7: new USB bus registered, assigned bus number 1
    ehci_hcd 0000:00:1a.7: debug port 2
    ehci_hcd 0000:00:1a.7: cache line size of 64 is not supported
    ehci_hcd 0000:00:1a.7: irq 18, io mem 0xfbfff000
    xhci_hcd 0000:04:00.0: PCI INT A -> GSI 18 (level, low) -> IRQ 18
    xhci_hcd 0000:04:00.0: setting latency timer to 64
    xhci_hcd 0000:04:00.0: xHCI Host Controller
    xhci_hcd 0000:04:00.0: new USB bus registered, assigned bus number 2
    xhci_hcd 0000:04:00.0: irq 18, io mem 0xfbdfe000
    usb usb2: config 1 interface 0 altsetting 0 endpoint 0x81 has no SuperSpeed companion descriptor
    xHCI xhci_add_endpoint called for root hub
    xHCI xhci_check_bandwidth called for root hub
    hub 2-0:1.0: USB hub found
    hub 2-0:1.0: 4 ports detected
    r8169 Gigabit Ethernet driver 2.3LK-NAPI loaded
    r8169 0000:03:00.0: PCI INT A -> GSI 17 (level, low) -> IRQ 17
    r8169 0000:03:00.0: setting latency timer to 64
    r8169 0000:03:00.0: irq 34 for MSI/MSI-X
    eth0: RTL8168d/8111d at 0xffffc90000068000, 6c:f0:49:02:e2:c1, XID 083000c0 IRQ 34
    FDC 0 is a post-1991 82077
    lp: driver loaded but no devices found
    ehci_hcd 0000:00:1a.7: USB 2.0 started, EHCI 1.00
    hub 1-0:1.0: USB hub found
    hub 1-0:1.0: 6 ports detected
    ehci_hcd 0000:00:1d.7: PCI INT A -> GSI 23 (level, low) -> IRQ 23
    ehci_hcd 0000:00:1d.7: setting latency timer to 64
    ehci_hcd 0000:00:1d.7: EHCI Host Controller
    ehci_hcd 0000:00:1d.7: new USB bus registered, assigned bus number 3
    ehci_hcd 0000:00:1d.7: debug port 2
    ehci_hcd 0000:00:1d.7: cache line size of 64 is not supported
    ehci_hcd 0000:00:1d.7: irq 23, io mem 0xfbffe000
    uhci_hcd: USB Universal Host Controller Interface driver
    ehci_hcd 0000:00:1d.7: USB 2.0 started, EHCI 1.00
    hub 3-0:1.0: USB hub found
    hub 3-0:1.0: 8 ports detected
    i801_smbus 0000:00:1f.3: PCI INT C -> GSI 18 (level, low) -> IRQ 18
    uhci_hcd 0000:00:1a.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16
    uhci_hcd 0000:00:1a.0: setting latency timer to 64
    uhci_hcd 0000:00:1a.0: UHCI Host Controller
    uhci_hcd 0000:00:1a.0: new USB bus registered, assigned bus number 4
    uhci_hcd 0000:00:1a.0: irq 16, io base 0x0000ff00
    hub 4-0:1.0: USB hub found
    hub 4-0:1.0: 2 ports detected
    uhci_hcd 0000:00:1a.1: PCI INT B -> GSI 21 (level, low) -> IRQ 21
    uhci_hcd 0000:00:1a.1: setting latency timer to 64
    uhci_hcd 0000:00:1a.1: UHCI Host Controller
    uhci_hcd 0000:00:1a.1: new USB bus registered, assigned bus number 5
    uhci_hcd 0000:00:1a.1: irq 21, io base 0x0000fe00
    hub 5-0:1.0: USB hub found
    hub 5-0:1.0: 2 ports detected
    uhci_hcd 0000:00:1a.2: PCI INT C -> GSI 18 (level, low) -> IRQ 18
    uhci_hcd 0000:00:1a.2: setting latency timer to 64
    uhci_hcd 0000:00:1a.2: UHCI Host Controller
    uhci_hcd 0000:00:1a.2: new USB bus registered, assigned bus number 6
    uhci_hcd 0000:00:1a.2: irq 18, io base 0x0000fd00
    hub 6-0:1.0: USB hub found
    hub 6-0:1.0: 2 ports detected
    uhci_hcd 0000:00:1d.0: PCI INT A -> GSI 23 (level, low) -> IRQ 23
    uhci_hcd 0000:00:1d.0: setting latency timer to 64
    uhci_hcd 0000:00:1d.0: UHCI Host Controller
    uhci_hcd 0000:00:1d.0: new USB bus registered, assigned bus number 7
    uhci_hcd 0000:00:1d.0: irq 23, io base 0x0000fc00
    hub 7-0:1.0: USB hub found
    hub 7-0:1.0: 2 ports detected
    uhci_hcd 0000:00:1d.1: PCI INT B -> GSI 19 (level, low) -> IRQ 19
    uhci_hcd 0000:00:1d.1: setting latency timer to 64
    uhci_hcd 0000:00:1d.1: UHCI Host Controller
    uhci_hcd 0000:00:1d.1: new USB bus registered, assigned bus number 8
    uhci_hcd 0000:00:1d.1: irq 19, io base 0x0000fb00
    hub 8-0:1.0: USB hub found
    hub 8-0:1.0: 2 ports detected
    uhci_hcd 0000:00:1d.2: PCI INT C -> GSI 18 (level, low) -> IRQ 18
    uhci_hcd 0000:00:1d.2: setting latency timer to 64
    uhci_hcd 0000:00:1d.2: UHCI Host Controller
    uhci_hcd 0000:00:1d.2: new USB bus registered, assigned bus number 9
    uhci_hcd 0000:00:1d.2: irq 18, io base 0x0000fa00
    hub 9-0:1.0: USB hub found
    hub 9-0:1.0: 2 ports detected
    uhci_hcd 0000:00:1d.3: PCI INT D -> GSI 16 (level, low) -> IRQ 16
    uhci_hcd 0000:00:1d.3: setting latency timer to 64
    uhci_hcd 0000:00:1d.3: UHCI Host Controller
    uhci_hcd 0000:00:1d.3: new USB bus registered, assigned bus number 10
    uhci_hcd 0000:00:1d.3: irq 16, io base 0x0000f900
    hub 10-0:1.0: USB hub found
    hub 10-0:1.0: 2 ports detected
    ppdev: user-space parallel port driver
    lp0: using parport0 (interrupt-driven).
    HDA Intel 0000:00:1b.0: PCI INT A -> GSI 22 (level, low) -> IRQ 22
    HDA Intel 0000:00:1b.0: irq 35 for MSI/MSI-X
    HDA Intel 0000:00:1b.0: setting latency timer to 64
    input: HDA Digital PCBeep as /devices/pci0000:00/0000:00:1b.0/input/input3
    HDA Intel 0000:01:00.1: PCI INT B -> GSI 17 (level, low) -> IRQ 17
    hda_intel: Disable MSI for Nvidia chipset
    HDA Intel 0000:01:00.1: setting latency timer to 64
    usb 9-2: new low speed USB device using uhci_hcd and address 2
    usbcore: registered new interface driver hiddev
    usbcore: registered new interface driver usbhid
    usbhid: USB HID core driver
    nvidia 0000:01:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16
    nvidia 0000:01:00.0: setting latency timer to 64
    vgaarb: device changed decodes: PCI:0000:01:00.0,olddecodes=io+mem,decodes=none:owns=io+mem
    NVRM: loading NVIDIA UNIX x86_64 Kernel Module 195.36.15 Fri Mar 12 00:29:13 PST 2010
    input: Logitech USB Receiver as /devices/pci0000:00/0000:00:1d.2/usb9/9-2/9-2:1.0/input/input4
    logitech 0003:046D:C517.0001: input,hidraw0: USB HID v1.10 Keyboard [Logitech USB Receiver] on usb-0000:00:1d.2-2/input0
    logitech 0003:046D:C517.0002: fixing up Logitech keyboard report descriptor
    input: Logitech USB Receiver as /devices/pci0000:00/0000:00:1d.2/usb9/9-2/9-2:1.1/input/input5
    logitech 0003:046D:C517.0002: input,hiddev0,hidraw1: USB HID v1.10 Mouse [Logitech USB Receiver] on usb-0000:00:1d.2-2/input1
    kjournald starting. Commit interval 5 seconds
    EXT3-fs (md0): using internal journal
    EXT3-fs (md0): mounted filesystem with writeback data mode
    Adding 1999992k swap on /dev/md2. Priority:-1 extents:1 across:1999992k
    r8169: eth0: link up
    r8169: eth0: link up
    it87: Found IT8720F chip at 0x290, revision 8
    it87: VID is disabled (pins used for GPIO)
    it87: in3 is VCC (+5V)
    fuse init (API version 7.13)
    end_request: I/O error, dev fd0, sector 0
    end_request: I/O error, dev fd0, sector 0
    NET: Registered protocol family 10
    lo: Disabled Privacy Extensions
    /etc/mdadm.conf
    ARRAY /dev/md0 metadata=0.90 UUID=d3dade34:d68c8182:644d3026:471d4918
    ARRAY /dev/md1 metadata=0.90 UUID=c8e3ea5d:7be896ac:a45d145b:781c14fa
    ARRAY /dev/md2 metadata=0.90 UUID=5c868358:21748aa6:5b20e486:491c0ca6
    # Emails sent on faults
    MAILADDR [myemailaddress]
    /proc/mdstat
    Personalities : [raid1]
    md2 : active raid1 sda3[0] sdb3[1]
    2000000 blocks [2/2] [UU]
    md1 : active raid1 sda2[0] sdb2[1]
    1462846656 blocks [2/2] [UU]
    md0 : active raid1 sda1[0] sdb1[1]
    289024 blocks [2/2] [UU]
    unused devices: <none>
    I am also not sure about the "unknown partition table" messages in the dmesg output, whether they are something I should be concerned about or a contributing factor.
    If you need more information about my problem, just ask and I will post it.

    I have been doing some searching around and have found some new information.
    https://raid.wiki.kernel.org/index.php/Performance
    That page lists some fixes for the "apparent pauses in the stream of IO" that I believe I was experiencing as well. In the meantime I had changed my RAID array to RAID 5 to increase read performance, which helped with the lag issue, but it was still apparent. So I applied two of the optimizations listed on the page, but before I did, I ran a benchmark on my current RAID 5 array.
    Unoptimized Raid 5 array
    Version 1.03e ------Sequential Output------ --Sequential Input- --Random-
    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
    Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
    media 8G 43161 66 34106 4 19897 1 48765 65 156367 7 412.4 1
    ------Sequential Create------ --------Random Create--------
    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
    files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
    16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
    media,8G,43161,66,34106,4,19897,1,48765,65,156367,7,412.4,1,16,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++
    Set read-ahead from 512 to 65536 on all md devices; this seems to be a minor improvement.
    Version 1.03e ------Sequential Output------ --Sequential Input- --Random-
    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
    Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
    media 8G 38289 58 36708 4 22971 2 70054 93 185402 5 407.9 1
    ------Sequential Create------ --------Random Create--------
    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
    files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
    16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
    media,8G,38289,58,36708,4,22971,2,70054,93,185402,5,407.9,1,16,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++
    Set stripe_cache_size for the RAID 5 array from 512 to 16384; massive improvement.
    Version 1.03e ------Sequential Output------ --Sequential Input- --Random-
    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
    Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
    media 8G 65835 98 126321 11 48050 4 70850 95 184347 5 370.4 1
    ------Sequential Create------ --------Random Create--------
    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
    files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
    16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
    media,8G,65835,98,126321,11,48050,4,70850,95,184347,5,370.4,1,16,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++
    After changing the stripe cache size there has been a massive improvement in output speeds. I have not yet used the desktop environment, watched movies, downloaded, or done anything I/O heavy since the change, but I think this may have been the root of my problem. Once I have used the computer for a few more weeks I will report back on whether the lag/pause problem has been solved.
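    For reference, both tunables live in sysfs and can be set with a plain echo from a root shell, or programmatically. The following is a minimal Java sketch, assuming the RAID 5 array is /dev/md0 and that the 512/65536 read-ahead figures are blockdev-style 512-byte sector counts (65536 sectors corresponds to 32768 in read_ahead_kb):
    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    // Minimal sketch: apply the two md tunables by writing to sysfs (must run as root).
    // Adjust the md0 paths for other arrays (md1, md2) as needed.
    public class MdTune {
        static void set(String path, String value) throws IOException {
            Files.write(Paths.get(path), value.getBytes());
            System.out.println(path + " = " + value);
        }

        public static void main(String[] args) throws IOException {
            // read_ahead_kb is in KB; 65536 sectors * 512 bytes = 32768 KB
            set("/sys/block/md0/queue/read_ahead_kb", "32768");
            // stripe_cache_size is counted in 4 KB pages per member disk
            set("/sys/block/md0/md/stripe_cache_size", "16384");
        }
    }
    Note that these values do not survive a reboot, so they are usually reapplied from a boot script.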

  • Dmtreedemo.java error

    Hi,
    I got an error in the following Java program, dmtreedemo.java; the code and the error are shown below.
    I have installed Oracle 10g R2 and I am using JDK 1.4.2. I set the classpath to jdm.jar and ojdm_api.jar from the Oracle 10g R2 software, and the program compiled successfully, but at execution I got this error:
    F:\Mallari\DATA MINING demos\java\samples>java dmtreedemo localhost:1521:orcl scott tiger
    --- Build Model - using cost matrix ---
    javax.datamining.JDMException: Generic Error.
    at oracle.dmt.jdm.resource.OraExceptionHandler.createException(OraExceptionHandler.java:142)
    at oracle.dmt.jdm.resource.OraExceptionHandler.createException(OraExceptionHandler.java:91)
    at oracle.dmt.jdm.OraDMObject.createException(OraDMObject.java:111)
    at oracle.dmt.jdm.base.OraTask.saveObjectInDatabase(OraTask.java:204)
    at oracle.dmt.jdm.OraMiningObject.saveObjectInDatabase(OraMiningObject.java:164)
    at oracle.dmt.jdm.resource.OraPersistanceManagerImpl.saveObject(OraPersistanceManagerImpl.java:245)
    at oracle.dmt.jdm.resource.OraConnection.saveObject(OraConnection.java:383)
    at dmtreedemo.executeTask(dmtreedemo.java:622)
    at dmtreedemo.buildModel(dmtreedemo.java:304)
    at dmtreedemo.main(dmtreedemo.java:199)
    Caused by: java.sql.SQLException: Unsupported feature
    at oracle.jdbc.dbaccess.DBError.throwSqlException(DBError.java:134)
    at oracle.jdbc.dbaccess.DBError.throwSqlException(DBError.java:179)
    at oracle.jdbc.dbaccess.DBError.throwSqlException(DBError.java:269)
    at oracle.jdbc.dbaccess.DBError.throwUnsupportedFeatureSqlException(DBError.java:690)
    at oracle.jdbc.driver.OracleCallableStatement.setString(OracleCallableStatement.java:1337)
    at oracle.dmt.jdm.utils.OraSQLUtils.createCallableStatement(OraSQLUtils.java:126)
    at oracle.dmt.jdm.utils.OraSQLUtils.executeCallableStatement(OraSQLUtils.java:532)
    at oracle.dmt.jdm.scheduler.OraProgramJob.createJob(OraProgramJob.java:77)
    at oracle.dmt.jdm.scheduler.OraJob.saveJob(OraJob.java:107)
    at oracle.dmt.jdm.scheduler.OraProgramJob.saveJob(OraProgramJob.java:85)
    at oracle.dmt.jdm.scheduler.OraProgramJob.saveJob(OraProgramJob.java:290)
    at oracle.dmt.jdm.base.OraTask.saveObjectInDatabase(OraTask.java:199)
    ... 6 more
    Please help me out with this; I will be very thankful.
    ===========================================================
    The sample code is:
    // Copyright (c) 2004, 2005, Oracle. All rights reserved.
    // File: dmtreedemo.java
    * This demo program describes how to use the Oracle Data Mining (ODM) Java API
    * to solve a classification problem using Decision Tree (DT) algorithm.
    * PROBLEM DEFINITION
    * How to predict whether a customer responds or not to the new affinity card
    * program using a classifier based on DT algorithm?
    * DATA DESCRIPTION
    * Data for this demo is composed from base tables in the Sales History (SH)
    * schema. The SH schema is an Oracle Database Sample Schema that has the customer
    * demographics, purchasing, and response details for the previous affinity card
    * programs. Data exploration and preparing the data is a common step before
    * doing data mining. Here in this demo, the following views are created in the user
    * schema using CUSTOMERS, COUNTRIES, and SUPPLIMENTARY_DEMOGRAPHICS tables.
    * MINING_DATA_BUILD_V:
    * This view collects the previous customers' demographics, purchasing, and affinity
    * card response details for building the model.
    * MINING_DATA_TEST_V:
    * This view collects the previous customers' demographics, purchasing, and affinity
    * card response details for testing the model.
    * MINING_DATA_APPLY_V:
    * This view collects the prospective customers' demographics and purchasing
    * details for predicting response for the new affinity card program.
    * DATA MINING PROCESS
    * Prepare Data:
    * 1. Missing Value treatment for predictors
    * See dmsvcdemo.java for a definition of missing values, and the steps to be
    * taken for missing value imputation. SVM interprets all NULL values for a
    * given attribute as "sparse". Sparse data is not suitable for decision
    * trees, but it will accept sparse data nevertheless. Decision Tree
    * implementation in ODM handles missing predictor values (by penalizing
    * predictors which have missing values) and missing target values (by simple
    * discarding records with missing target values). We skip missing values
    * treatment in this demo.
    * 2. Outlier/Clipping treatment for predictors
    * See dmsvcdemo.java for a discussion on outlier treatment. For decision
    * trees, outlier treatment is not really necessary. We skip outlier treatment
    * in this demo.
    * 3. Binning high cardinality data
    * No data preparation for the types we accept is necessary - even for high
    * cardinality predictors. Preprocessing to reduce the cardinality
    * (e.g., binning) can improve the performance of the build, but it could
    * penalize the accuracy of the resulting model.
    * The PrepareData() method in this demo program illustrates the preparation of the
    * build, test, and apply data. We skip PrepareData() since the decision tree
    * algorithm is very capable of handling data which has not been specially
    * prepared. For this demo, no data preparation will be performed.
    * Build Model:
    * Mining Model is the prime object in data mining. The buildModel() method
    * illustrates how to build a classification model using DT algorithm.
    * Test Model:
    * Classification model performance can be evaluated by computing test
    * metrics like accuracy, confusion matrix, lift and ROC. The testModel() or
    * computeTestMetrics() method illustrates how to perform a test operation to
    * compute various metrics.
    * Apply Model:
    * Predicting the target attribute values is the prime function of
    * classification models. The applyModel() method illustrates how to
    * predict the customer response for affinity card program.
    * EXECUTING DEMO PROGRAM
    * Refer to Oracle Data Mining Administrator's Guide
    * for guidelines for executing this demo program.
    // Generic Java api imports
    import java.math.BigDecimal;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.ResultSetMetaData;
    import java.sql.SQLException;
    import java.sql.Statement;
    import java.text.DecimalFormat;
    import java.text.MessageFormat;
    import java.util.Collection;
    import java.util.HashMap;
    import java.util.Iterator;
    import java.util.Stack;
    // Java Data Mining (JDM) standard api imports
    import javax.datamining.ExecutionHandle;
    import javax.datamining.ExecutionState;
    import javax.datamining.ExecutionStatus;
    import javax.datamining.JDMException;
    import javax.datamining.MiningAlgorithm;
    import javax.datamining.MiningFunction;
    import javax.datamining.NamedObject;
    import javax.datamining.SizeUnit;
    import javax.datamining.algorithm.tree.TreeHomogeneityMetric;
    import javax.datamining.algorithm.tree.TreeSettings;
    import javax.datamining.algorithm.tree.TreeSettingsFactory;
    import javax.datamining.base.AlgorithmSettings;
    import javax.datamining.base.Model;
    import javax.datamining.base.Task;
    import javax.datamining.data.AttributeDataType;
    import javax.datamining.data.CategoryProperty;
    import javax.datamining.data.CategorySet;
    import javax.datamining.data.CategorySetFactory;
    import javax.datamining.data.ModelSignature;
    import javax.datamining.data.PhysicalAttribute;
    import javax.datamining.data.PhysicalAttributeFactory;
    import javax.datamining.data.PhysicalAttributeRole;
    import javax.datamining.data.PhysicalDataSet;
    import javax.datamining.data.PhysicalDataSetFactory;
    import javax.datamining.data.SignatureAttribute;
    import javax.datamining.modeldetail.tree.TreeModelDetail;
    import javax.datamining.modeldetail.tree.TreeNode;
    import javax.datamining.resource.Connection;
    import javax.datamining.resource.ConnectionFactory;
    import javax.datamining.resource.ConnectionSpec;
    import javax.datamining.rule.Predicate;
    import javax.datamining.rule.Rule;
    import javax.datamining.supervised.classification.ClassificationApplySettings;
    import javax.datamining.supervised.classification.ClassificationApplySettingsFactory;
    import javax.datamining.supervised.classification.ClassificationModel;
    import javax.datamining.supervised.classification.ClassificationSettings;
    import javax.datamining.supervised.classification.ClassificationSettingsFactory;
    import javax.datamining.supervised.classification.ClassificationTestMetricOption;
    import javax.datamining.supervised.classification.ClassificationTestMetrics;
    import javax.datamining.supervised.classification.ClassificationTestMetricsTask;
    import javax.datamining.supervised.classification.ClassificationTestMetricsTaskFactory;
    import javax.datamining.supervised.classification.ClassificationTestTaskFactory;
    import javax.datamining.supervised.classification.ConfusionMatrix;
    import javax.datamining.supervised.classification.CostMatrix;
    import javax.datamining.supervised.classification.CostMatrixFactory;
    import javax.datamining.supervised.classification.Lift;
    import javax.datamining.supervised.classification.ReceiverOperatingCharacterics;
    import javax.datamining.task.BuildTask;
    import javax.datamining.task.BuildTaskFactory;
    import javax.datamining.task.apply.DataSetApplyTask;
    import javax.datamining.task.apply.DataSetApplyTaskFactory;
    // Oracle Java Data Mining (JDM) implemented api imports
    import oracle.dmt.jdm.algorithm.tree.OraTreeSettings;
    import oracle.dmt.jdm.resource.OraConnection;
    import oracle.dmt.jdm.resource.OraConnectionFactory;
    import oracle.dmt.jdm.modeldetail.tree.OraTreeModelDetail;
    public class dmtreedemo
    //Connection related data members
    private static Connection m_dmeConn;
    private static ConnectionFactory m_dmeConnFactory;
    //Object factories used in this demo program
    private static PhysicalDataSetFactory m_pdsFactory;
    private static PhysicalAttributeFactory m_paFactory;
    private static ClassificationSettingsFactory m_clasFactory;
    private static TreeSettingsFactory m_treeFactory;
    private static BuildTaskFactory m_buildFactory;
    private static DataSetApplyTaskFactory m_dsApplyFactory;
    private static ClassificationTestTaskFactory m_testFactory;
    private static ClassificationApplySettingsFactory m_applySettingsFactory;
    private static CostMatrixFactory m_costMatrixFactory;
    private static CategorySetFactory m_catSetFactory;
    private static ClassificationTestMetricsTaskFactory m_testMetricsTaskFactory;
    // Global constants
    private static DecimalFormat m_df = new DecimalFormat("##.####");
    private static String m_costMatrixName = null;
    public static void main( String args[] )
    try
    if ( args.length != 3 ) {
    System.out.println("Usage: java dmsvrdemo <Host name>:<Port>:<SID> <User Name> <Password>");
    return;
    String uri = args[0];
    String name = args[1];
    String password = args[2];
    // 1. Login to the Data Mining Engine
    m_dmeConnFactory = new OraConnectionFactory();
    ConnectionSpec connSpec = m_dmeConnFactory.getConnectionSpec();
    connSpec.setURI("jdbc:oracle:thin:@"+uri);
    connSpec.setName(name);
    connSpec.setPassword(password);
    m_dmeConn = m_dmeConnFactory.getConnection(connSpec);
    // 2. Clean up all previuosly created demo objects
    clean();
    // 3. Initialize factories for mining objects
    initFactories();
    m_costMatrixName = createCostMatrix();
    // 4. Build model with supplied cost matrix
    buildModel();
    // 5. Test model - To compute accuracy and confusion matrix, lift result
    // and ROC for the model from apply output data.
    // Please see dnnbdemo.java to see how to test the model
    // with a test input data and cost matrix.
    // Test the model with cost matrix
    computeTestMetrics("DT_TEST_APPLY_OUTPUT_COST_JDM",
    "dtTestMetricsWithCost_jdm", m_costMatrixName);
    // Test the model without cost matrix
    computeTestMetrics("DT_TEST_APPLY_OUTPUT_JDM",
    "dtTestMetrics_jdm", null);
    // 6. Apply the model
    applyModel();
    } catch(Exception anyExp) {
    anyExp.printStackTrace(System.out);
    } finally {
    try {
    //6. Logout from the Data Mining Engine
    m_dmeConn.close();
    } catch(Exception anyExp1) { }//Ignore
    * Initialize all object factories used in the demo program.
    * @exception JDMException if factory initalization failed
    public static void initFactories() throws JDMException
    m_pdsFactory = (PhysicalDataSetFactory)m_dmeConn.getFactory(
    "javax.datamining.data.PhysicalDataSet");
    m_paFactory = (PhysicalAttributeFactory)m_dmeConn.getFactory(
    "javax.datamining.data.PhysicalAttribute");
    m_clasFactory = (ClassificationSettingsFactory)m_dmeConn.getFactory(
    "javax.datamining.supervised.classification.ClassificationSettings");
    m_treeFactory = (TreeSettingsFactory) m_dmeConn.getFactory(
    "javax.datamining.algorithm.tree.TreeSettings");
    m_buildFactory = (BuildTaskFactory)m_dmeConn.getFactory(
    "javax.datamining.task.BuildTask");
    m_dsApplyFactory = (DataSetApplyTaskFactory)m_dmeConn.getFactory(
    "javax.datamining.task.apply.DataSetApplyTask");
    m_testFactory = (ClassificationTestTaskFactory)m_dmeConn.getFactory(
    "javax.datamining.supervised.classification.ClassificationTestTask");
    m_applySettingsFactory = (ClassificationApplySettingsFactory)m_dmeConn.getFactory(
    "javax.datamining.supervised.classification.ClassificationApplySettings");
    m_costMatrixFactory = (CostMatrixFactory)m_dmeConn.getFactory(
    "javax.datamining.supervised.classification.CostMatrix");
    m_catSetFactory = (CategorySetFactory)m_dmeConn.getFactory(
    "javax.datamining.data.CategorySet" );
    m_testMetricsTaskFactory = (ClassificationTestMetricsTaskFactory)m_dmeConn.getFactory(
    "javax.datamining.supervised.classification.ClassificationTestMetricsTask");
    * This method illustrates how to build a mining model using the
    * MINING_DATA_BUILD_V dataset and classification settings with
    * DT algorithm.
    * @exception JDMException if model build failed
    public static void buildModel() throws JDMException
    System.out.println("---------------------------------------------------");
    System.out.println("--- Build Model - using cost matrix ---");
    System.out.println("---------------------------------------------------");
    // 1. Create & save PhysicalDataSpecification
    PhysicalDataSet buildData =
    m_pdsFactory.create("MINING_DATA_BUILD_V", false);
    PhysicalAttribute pa = m_paFactory.create("CUST_ID",
    AttributeDataType.integerType, PhysicalAttributeRole.caseId );
    buildData.addAttribute(pa);
    m_dmeConn.saveObject("treeBuildData_jdm", buildData, true);
    //2. Create & save Mining Function Settings
    // Create tree algorithm settings
    TreeSettings treeAlgo = m_treeFactory.create();
    // By default, tree algorithm will have the following settings:
    // treeAlgo.setBuildHomogeneityMetric(TreeHomogeneityMetric.gini);
    // treeAlgo.setMaxDepth(7);
    // ((OraTreeSettings)treeAlgo).setMinDecreaseInImpurity(0.1, SizeUnit.percentage);
    // treeAlgo.setMinNodeSize( 0.05, SizeUnit.percentage );
    // treeAlgo.setMinNodeSize( 10, SizeUnit.count );
    // ((OraTreeSettings)treeAlgo).setMinDecreaseInImpurity(20, SizeUnit.count);
    // Set cost matrix. A cost matrix is used to influence the weighting of
    // misclassification during model creation (and scoring).
    // See Oracle Data Mining Concepts Guide for more details.
    String costMatrixName = m_costMatrixName;
    // Create ClassificationSettings
    ClassificationSettings buildSettings = m_clasFactory.create();
    buildSettings.setAlgorithmSettings(treeAlgo);
    buildSettings.setCostMatrixName(costMatrixName);
    buildSettings.setTargetAttributeName("AFFINITY_CARD");
    m_dmeConn.saveObject("treeBuildSettings_jdm", buildSettings, true);
    // 3. Create, save & execute Build Task
    BuildTask buildTask = m_buildFactory.create(
    "treeBuildData_jdm", // Build data specification
    "treeBuildSettings_jdm", // Mining function settings name
    "treeModel_jdm" // Mining model name
    buildTask.setDescription("treeBuildTask_jdm");
    executeTask(buildTask, "treeBuildTask_jdm");
    //4. Restore the model from the DME and explore the details of the model
    ClassificationModel model =
    (ClassificationModel)m_dmeConn.retrieveObject(
    "treeModel_jdm", NamedObject.model);
    // Display model build settings
    ClassificationSettings retrievedBuildSettings =
    (ClassificationSettings)model.getBuildSettings();
    if(buildSettings == null)
    System.out.println("Failure to restore build settings.");
    else
    displayBuildSettings(retrievedBuildSettings, "treeBuildSettings_jdm");
    // Display model signature
    displayModelSignature((Model)model);
    // Display model detail
    TreeModelDetail treeModelDetails = (TreeModelDetail) model.getModelDetail();
    displayTreeModelDetailsExtensions(treeModelDetails);
    * Create and save cost matrix.
    * Consider an example where it costs $10 to mail a promotion to a
    * prospective customer and if the prospect becomes a customer, the
    * typical sale including the promotion, is worth $100. Then the cost
    * of missing a customer (i.e. missing a $100 sale) is 10x that of
    * incorrectly indicating that a person is good prospect (spending
    * $10 for the promo). In this case, all prediction errors made by
    * the model are NOT equal. To act on what the model determines to
    * be the most likely (probable) outcome may be a poor choice.
    * Suppose that the probability of a BUY reponse is 10% for a given
    * prospect. Then the expected revenue from the prospect is:
    * .10 * $100 - .90 * $10 = $1.
    * The optimal action, given the cost matrix, is to simply mail the
    * promotion to the customer, because the action is profitable ($1).
    * In contrast, without the cost matrix, all that can be said is
    * that the most likely response is NO BUY, so don't send the
    * promotion. This shows that cost matrices can be very important.
    * The caveat in all this is that the model predicted probabilities
    * may NOT be accurate. For binary targets, a systematic approach to
    * this issue exists. It is ROC, illustrated below.
    * With ROC computed on a test set, the user can see how various model
    * predicted probability thresholds affect the action of mailing a promotion.
    * Suppose I promote when the probability to BUY exceeds 5, 10, 15%, etc.
    * what return can I expect? Note that the answer to this question does
    * not rely on the predicted probabilities being accurate, only that
    * they are in approximately the correct rank order.
    * Assuming that the predicted probabilities are accurate, provide the
    * cost matrix table name as input to the RANK_APPLY procedure to get
    * appropriate costed scoring results to determine the most appropriate
    * action.
    * In this demo, we will create the following cost matrix
    * ActualTarget PredictedTarget Cost
    * 0 0 0
    * 0 1 1
    * 1 0 8
    * 1 1 0
    private static String createCostMatrix() throws JDMException
    String costMatrixName = "treeCostMatrix";
    // Create categorySet
    CategorySet catSet = m_catSetFactory.create(AttributeDataType.integerType);
    // Add category values
    catSet.addCategory(new Integer(0), CategoryProperty.valid);
    catSet.addCategory(new Integer(1), CategoryProperty.valid);
    // Create cost matrix
    CostMatrix costMatrix = m_costMatrixFactory.create(catSet);
    // ActualTarget PredictedTarget Cost
    costMatrix.setValue(new Integer(0), new Integer(0), 0);
    costMatrix.setValue(new Integer(0), new Integer(1), 1);
    costMatrix.setValue(new Integer(1), new Integer(0), 8);
    costMatrix.setValue(new Integer(1), new Integer(1), 0);
    //save cost matrix
    m_dmeConn.saveObject(costMatrixName, costMatrix, true);
    return costMatrixName;
    * This method illustrates how to compute test metrics using
    * an apply output table that has actual and predicted target values. Here the
    * apply operation is done on the MINING_DATA_TEST_V dataset. It creates
    * an apply output table with actual and predicted target values. Using
    * ClassificationTestMetricsTask test metrics are computed. This produces
    * the same test metrics results as ClassificationTestTask.
    * @param applyOutputName apply output table name
    * @param testResultName test result name
    * @param costMatrixName table name of the supplied cost matrix
    * @exception JDMException if model test failed
    public static void computeTestMetrics(String applyOutputName,
    String testResultName, String costMatrixName) throws JDMException
    if (costMatrixName != null) {
    System.out.println("---------------------------------------------------");
    System.out.println("--- Test Model - using apply output table ---");
    System.out.println("--- - using cost matrix table ---");
    System.out.println("---------------------------------------------------");
    else {
    System.out.println("---------------------------------------------------");
    System.out.println("--- Test Model - using apply output table ---");
    System.out.println("--- - using no cost matrix table ---");
    System.out.println("---------------------------------------------------");
    // 1. Do the apply on test data to create an apply output table
    // Create & save PhysicalDataSpecification
    PhysicalDataSet applyData =
    m_pdsFactory.create( "MINING_DATA_TEST_V", false );
    PhysicalAttribute pa = m_paFactory.create("CUST_ID",
    AttributeDataType.integerType, PhysicalAttributeRole.caseId );
    applyData.addAttribute( pa );
    m_dmeConn.saveObject( "treeTestApplyData_jdm", applyData, true );
    // 2 Create & save ClassificationApplySettings
    ClassificationApplySettings clasAS = m_applySettingsFactory.create();
    HashMap sourceAttrMap = new HashMap();
    sourceAttrMap.put( "AFFINITY_CARD", "AFFINITY_CARD" );
    clasAS.setSourceDestinationMap( sourceAttrMap );
    m_dmeConn.saveObject( "treeTestApplySettings_jdm", clasAS, true);
    // 3 Create, store & execute apply Task
    DataSetApplyTask applyTask = m_dsApplyFactory.create(
    "treeTestApplyData_jdm",
    "treeModel_jdm",
    "treeTestApplySettings_jdm",
    applyOutputName);
    if(executeTask(applyTask, "treeTestApplyTask_jdm"))
    // Compute test metrics on new created apply output table
    // 4. Create & save PhysicalDataSpecification
    PhysicalDataSet applyOutputData = m_pdsFactory.create(
    applyOutputName, false );
    applyOutputData.addAttribute( pa );
    m_dmeConn.saveObject( "treeTestApplyOutput_jdm", applyOutputData, true );
    // 5. Create a ClassificationTestMetricsTask
    ClassificationTestMetricsTask testMetricsTask =
    m_testMetricsTaskFactory.create( "treeTestApplyOutput_jdm", // apply output data used as input
    "AFFINITY_CARD", // actual target column
    "PREDICTION", // predicted target column
    testResultName // test metrics result name
    testMetricsTask.computeMetric( // enable confusion matrix computation
    ClassificationTestMetricOption.confusionMatrix, true );
    testMetricsTask.computeMetric( // enable lift computation
    ClassificationTestMetricOption.lift, true );
    testMetricsTask.computeMetric( // enable ROC computation
    ClassificationTestMetricOption.receiverOperatingCharacteristics, true );
    testMetricsTask.setPositiveTargetValue( new Integer(1) );
    testMetricsTask.setNumberOfLiftQuantiles( 10 );
    testMetricsTask.setPredictionRankingAttrName( "PROBABILITY" );
    if (costMatrixName != null) {
    testMetricsTask.setCostMatrixName(costMatrixName);
    displayTable(costMatrixName, "", "order by ACTUAL_TARGET_VALUE, PREDICTED_TARGET_VALUE");
    // Store & execute the task
    boolean isTaskSuccess = executeTask(testMetricsTask, "treeTestMetricsTask_jdm");
    if( isTaskSuccess ) {
    // Restore & display test metrics
    ClassificationTestMetrics testMetrics = (ClassificationTestMetrics)
    m_dmeConn.retrieveObject( testResultName, NamedObject.testMetrics );
    // Display classification test metrics
    displayTestMetricDetails(testMetrics);
    * This method illustrates how to apply the mining model on the
    * MINING_DATA_APPLY_V dataset to predict customer
    * response. After completion of the task apply output table with the
    * predicted results is created at the user specified location.
    * @exception JDMException if model apply failed
    public static void applyModel() throws JDMException
    System.out.println("---------------------------------------------------");
    System.out.println("--- Apply Model ---");
    System.out.println("---------------------------------------------------");
    System.out.println("---------------------------------------------------");
    System.out.println("--- Business case 1 ---");
    System.out.println("--- Find the 10 customers who live in Italy ---");
    System.out.println("--- that are least expensive to be convinced to ---");
    System.out.println("--- use an affinity card. ---");
    System.out.println("---------------------------------------------------");
    // 1. Create & save PhysicalDataSpecification
    PhysicalDataSet applyData =
    m_pdsFactory.create( "MINING_DATA_APPLY_V", false );
    PhysicalAttribute pa = m_paFactory.create("CUST_ID",
    AttributeDataType.integerType, PhysicalAttributeRole.caseId );
    applyData.addAttribute( pa );
    m_dmeConn.saveObject( "treeApplyData_jdm", applyData, true );
    // 2. Create & save ClassificationApplySettings
    ClassificationApplySettings clasAS = m_applySettingsFactory.create();
    // Add source attributes
    HashMap sourceAttrMap = new HashMap();
    sourceAttrMap.put( "COUNTRY_NAME", "COUNTRY_NAME" );
    clasAS.setSourceDestinationMap( sourceAttrMap );
    // Add cost matrix
    clasAS.setCostMatrixName( m_costMatrixName );
    m_dmeConn.saveObject( "treeApplySettings_jdm", clasAS, true);
    // 3. Create, store & execute apply Task
    DataSetApplyTask applyTask = m_dsApplyFactory.create(
    "treeApplyData_jdm", "treeModel_jdm",
    "treeApplySettings_jdm", "TREE_APPLY_OUTPUT1_JDM");
    executeTask(applyTask, "treeApplyTask_jdm");
    // 4. Display apply result -- Note that APPLY results do not need to be
    // reverse transformed, as done in the case of model details. This is
    // because class values of a classification target were not (required to
    // be) binned or normalized.
    // Find the 10 customers who live in Italy that are least expensive to be
    // convinced to use an affinity card.
    displayTable("TREE_APPLY_OUTPUT1_JDM",
    "where COUNTRY_NAME='Italy' and ROWNUM < 11 ",
    "order by COST");
    System.out.println("---------------------------------------------------");
    System.out.println("--- Business case 2 ---");
    System.out.println("--- List ten customers (ordered by their id) ---");
    System.out.println("--- along with likelihood and cost to use or ---");
    System.out.println("--- reject the affinity card. ---");
    System.out.println("---------------------------------------------------");
    // 1. Create & save PhysicalDataSpecification
    applyData =
    m_pdsFactory.create( "MINING_DATA_APPLY_V", false );
    pa = m_paFactory.create("CUST_ID",
    AttributeDataType.integerType, PhysicalAttributeRole.caseId );
    applyData.addAttribute( pa );
    m_dmeConn.saveObject( "treeApplyData_jdm", applyData, true );
    // 2. Create & save ClassificationApplySettings
    clasAS = m_applySettingsFactory.create();
    // Add cost matrix
    clasAS.setCostMatrixName( m_costMatrixName );
    m_dmeConn.saveObject( "treeApplySettings_jdm", clasAS, true);
    // 3. Create, store & execute apply Task
    applyTask = m_dsApplyFactory.create(
    "treeApplyData_jdm", "treeModel_jdm",
    "treeApplySettings_jdm", "TREE_APPLY_OUTPUT2_JDM");
    executeTask(applyTask, "treeApplyTask_jdm");
    // 4. Display apply result -- Note that APPLY results do not need to be
    // reverse transformed, as done in the case of model details. This is
    // because class values of a classification target were not (required to
    // be) binned or normalized.
    // List ten customers (ordered by their id) along with likelihood and cost
    // to use or reject the affinity card (Note: while this example has a
    // binary target, such a query is useful in multi-class classification -
    // Low, Med, High for example).
    displayTable("TREE_APPLY_OUTPUT2_JDM",
    "where ROWNUM < 21",
    "order by CUST_ID, PREDICTION");
    System.out.println("---------------------------------------------------");
    System.out.println("--- Business case 3 ---");
    System.out.println("--- Find the customers who work in Tech support ---");
    System.out.println("--- and are under 25 who is going to response ---");
    System.out.println("--- to the new affinity card program. ---");
    System.out.println("---------------------------------------------------");
    // 1. Create & save PhysicalDataSpecification
    applyData =
    m_pdsFactory.create( "MINING_DATA_APPLY_V", false );
    pa = m_paFactory.create("CUST_ID",
    AttributeDataType.integerType, PhysicalAttributeRole.caseId );
    applyData.addAttribute( pa );
    m_dmeConn.saveObject( "treeApplyData_jdm", applyData, true );
    // 2. Create & save ClassificationApplySettings
    clasAS = m_applySettingsFactory.create();
    // Add source attributes
    sourceAttrMap = new HashMap();
    sourceAttrMap.put( "AGE", "AGE" );
    sourceAttrMap.put( "OCCUPATION", "OCCUPATION" );
    clasAS.setSourceDestinationMap( sourceAttrMap );
    m_dmeConn.saveObject( "treeApplySettings_jdm", clasAS, true);
    // 3. Create, store & execute apply Task
    applyTask = m_dsApplyFactory.create(
    "treeApplyData_jdm", "treeModel_jdm",
    "treeApplySettings_jdm", "TREE_APPLY_OUTPUT3_JDM");
    executeTask(applyTask, "treeApplyTask_jdm");
    // 4. Display apply result -- Note that APPLY results do not need to be
    // reverse transformed, as done in the case of model details. This is
    // because class values of a classification target were not (required to
    // be) binned or normalized.
    // Find the customers who work in Tech support and are under 25 who is
    // going to response to the new affinity card program.
    displayTable("TREE_APPLY_OUTPUT3_JDM",
    "where OCCUPATION = 'TechSup' " +
    "and AGE < 25 " +
    "and PREDICTION = 1 ",
    "order by CUST_ID");
    * This method stores the given task with the specified name in the DMS
    * and submits the task for asynchronous execution in the DMS. After
    * completing the task successfully it returns true. If there is a task
    * failure, then it prints error description and returns false.
    * @param taskObj task object
    * @param taskName name of the task
    * @return boolean returns true when the task is successful
    * @exception JDMException if task execution failed
    public static boolean executeTask(Task taskObj, String taskName)
    throws JDMException
    boolean isTaskSuccess = false;
    m_dmeConn.saveObject(taskName, taskObj, true);
    ExecutionHandle execHandle = m_dmeConn.execute(taskName);
    System.out.print(taskName + " is started, please wait. ");
    //Wait for completion of the task
    ExecutionStatus status = execHandle.waitForCompletion(Integer.MAX_VALUE);
    //Check the status of the task after completion
    isTaskSuccess = status.getState().equals(ExecutionState.success);
    if( isTaskSuccess ) //Task completed successfully
    System.out.println(taskName + " is successful.");
    else //Task failed
    System.out.println(taskName + " failed.\nFailure Description: " +
    status.getDescription() );
    return isTaskSuccess;
    private static void displayBuildSettings(
    ClassificationSettings clasSettings, String buildSettingsName)
    System.out.println("BuildSettings Details from the "
    + buildSettingsName + " table:");
    displayTable(buildSettingsName, "", "order by SETTING_NAME");
    System.out.println("BuildSettings Details from the "
    + buildSettingsName + " model build settings object:");
    String objName = clasSettings.getName();
    if(objName != null)
    System.out.println("Name = " + objName);
    String objDescription = clasSettings.getDescription();
    if(objDescription != null)
    System.out.println("Description = " + objDescription);
    java.util.Date creationDate = clasSettings.getCreationDate();
    String creator = clasSettings.getCreatorInfo();
    String targetAttrName = clasSettings.getTargetAttributeName();
    System.out.println("Target attribute name = " + targetAttrName);
    AlgorithmSettings algoSettings = clasSettings.getAlgorithmSettings();
    if(algoSettings == null)
    System.out.println("Failure: clasSettings.getAlgorithmSettings() returns null");
    MiningAlgorithm algo = algoSettings.getMiningAlgorithm();
    if(algo == null) System.out.println("Failure: algoSettings.getMiningAlgorithm() returns null");
    System.out.println("Algorithm Name: " + algo.name());
    MiningFunction function = clasSettings.getMiningFunction();
    if(function == null) System.out.println("Failure: clasSettings.getMiningFunction() returns null");
    System.out.println("Function Name: " + function.name());
    try {
    String costMatrixName = clasSettings.getCostMatrixName();
    if(costMatrixName != null) {
    System.out.println("Cost Matrix Details from the " + costMatrixName
    + " table:");
    displayTable(costMatrixName, "", "order by ACTUAL_TARGET_VALUE, PREDICTED_TARGET_VALUE");
    } catch(Exception jdmExp)
    System.out.println("Failure: clasSettings.getCostMatrixName()throws exception");
    jdmExp.printStackTrace();
    // List of DT algorithm settings
    // treeAlgo.setBuildHomogeneityMetric(TreeHomogeneityMetric.gini);
    // treeAlgo.setMaxDepth(7);
    // ((OraTreeSettings)treeAlgo).setMinDecreaseInImpurity(0.1, SizeUnit.percentage);
    // treeAlgo.setMinNodeSize( 0.05, SizeUnit.percentage );
    // treeAlgo.setMinNodeSize( 10, SizeUnit.count );
    // ((OraTreeSettings)treeAlgo).setMinDecreaseInImpurity(20, SizeUnit.count);
    TreeHomogeneityMetric homogeneityMetric = ((OraTreeSettings)algoSettings).getBuildHomogeneityMetric();
    System.out.println("Homogeneity Metric: " + homogeneityMetric.name());
    int intValue = ((OraTreeSettings)algoSettings).getMaxDepth();
    System.out.println("Max Depth: " + intValue);
    double doubleValue = ((OraTreeSettings)algoSettings).getMinNodeSizeForSplit(SizeUnit.percentage);
    System.out.println("MinNodeSizeForSplit (percentage): " + m_df.format(doubleValue));
    doubleValue = ((OraTreeSettings)algoSettings).getMinNodeSizeForSplit(SizeUnit.count);
    System.out.println("MinNodeSizeForSplit (count): " + m_df.format(doubleValue));
    doubleValue = ((OraTreeSettings)algoSettings).getMinNodeSize();
    SizeUnit unit = ((OraTreeSettings)algoSettings).getMinNodeSizeUnit();
    System.out.println("Min Node Size (" + unit.name() +"): " + m_df.format(doubleValue));
    doubleValue = ((OraTreeSettings)algoSettings).getMinNodeSize( SizeUnit.count );
    System.out.println("Min Node Size (" + SizeUnit.count.name() +"): " + m_df.format(doubleValue));
    doubleValue = ((OraTreeSettings)algoSettings).getMinNodeSize( SizeUnit.percentage );
    System.out.println("Min Node Size (" + SizeUnit.percentage.name() +"): " + m_df.format(doubleValue));
    * This method displayes DT model signature.
    * @param model model object
    * @exception JDMException if failed to retrieve model signature
    public static void displayModelSignature(Model model) throws JDMException
    String modelName = model.getName();
    System.out.println("Model Name: " + modelName);
    ModelSignature modelSignature = model.getSignature();
    System.out.println("ModelSignature Deatils: ( Attribute Name, Attribute Type )");
    MessageFormat mfSign = new MessageFormat(" ( {0}, {1} )");
    String[] vals = new String[3];
    Collection sortedSet = modelSignature.getAttributes();
    Iterator attrIterator = sortedSet.iterator();
    while(attrIterator.hasNext())
    SignatureAttribute attr = (SignatureAttribute)attrIterator.next();
    vals[0] = attr.getName();
    vals[1] = attr.getDataType().name();
    System.out.println( mfSign.format(vals) );
    * This method displayes DT model details.
    * @param treeModelDetails tree model details object
    * @exception JDMException if failed to retrieve model details
    public static void displayTreeModelDetailsExtensions(TreeModelDetail treeModelDetails)
    throws JDMException
    System.out.println( "\nTreeModelDetail: Model name=" + "treeModel_jdm" );
    TreeNode root = treeModelDetails.getRootNode();
    System.out.println( "\nRoot node: " + root.getIdentifier() );
    // get the info for the tree model
    int treeDepth = ((OraTreeModelDetail) treeModelDetails).getTreeDepth();
    System.out.println( "Tree depth: " + treeDepth );
    int totalNodes = ((OraTreeModelDetail) treeModelDetails).getNumberOfNodes();
    System.out.println( "Total number of nodes: " + totalNodes );
    int totalLeaves = ((OraTreeModelDetail) treeModelDetails).getNumberOfLeafNodes();
    System.out.println( "Total number of leaf nodes: " + totalLeaves );
    Stack nodeStack = new Stack();
    nodeStack.push( root);
    while( !nodeStack.empty() )
    TreeNode node = (TreeNode) nodeStack.pop();
    // display this node
    int nodeId = node.getIdentifier();
    long caseCount = node.getCaseCount();
    Object prediction = node.getPrediction();
    int level = node.getLevel();
    int children = node.getNumberOfChildren();
    TreeNode parent = node.getParent();
    System.out.println( "\nNode id=" + nodeId + " at level " + level );
    if( parent != null )
    System.out.println( "parent: " + parent.getIdentifier() +
    ", children=" + children );
    System.out.println( "Case count: " + caseCount + ", prediction: " + prediction );
    Predicate predicate = node.getPredicate();
    System.out.println( "Predicate: " + predicate.toString() );
    Predicate[] surrogates = node.getSurrogates();
    if( surrogates != null )
    for( int i=0; i<surrogates.length; i++ )
    System.out.println( "Surrogate[" + i + "]: " + surrogates[i] );
    // add child nodes in the stack
    if( children > 0 )
    TreeNode[] childNodes = node.getChildren();
    for( int i=0; i<childNodes.length; i++ )
    nodeStack.push( childNodes[i] );
    TreeNode[] allNodes = treeModelDetails.getNodes();
    System.out.print( "\nNode identifiers by getNodes():" );
    for( int i=0; i<allNodes.length; i++ )
    System.out.print( " " + allNodes.getIdentifier() );
    System.out.println();
    // display the node identifiers
    int[] nodeIds = treeModelDetails.getNodeIdentifiers();
    System.out.print( "Node identifiers by getNodeIdentifiers():" );
    for( int i=0; i<nodeIds.length; i++ )
    System.out.print( " " + nodeIds[i] );
    System.out.println();
    TreeNode node = treeModelDetails.getNode(nodeIds.length-1);
    System.out.println( "Node identifier by getNode(" + (nodeIds.length-1) +
    "): " + node.getIdentifier() );
    Rule rule2 = treeModelDetails.getRule(nodeIds.length-1);
    System.out.println( "Rule identifier by getRule(" + (nodeIds.length-1) +
    "): " + rule2.getRuleIdentifier() );
    // get the rules and display them
    Collection ruleColl = treeModelDetails.getRules();
    Iterator ruleIterator = ruleColl.iterator();
    while( ruleIterator.hasNext() )
    Rule rule = (Rule) ruleIterator.next();
    int ruleId = rule.getRuleIdentifier();
    Predicate antecedent = (Predicate) rule.getAntecedent();
    Predicate consequent = (Predicate) rule.getConsequent();
    System.out.println( "\nRULE " + ruleId + ": support=" +
    rule.getSupport() + " (abs=" + rule.getAbsoluteSupport() +
    "), confidence=" + rule.getConfidence() );
    System.out.println( antecedent );
    System.out.println( "=======>" );
    System.out.println( consequent );
    * Display classification test metrics object
    * @param testMetrics classification test metrics object
    * @exception JDMException if failed to retrieve test metric details
    public static void displayTestMetricDetails(
    ClassificationTestMetrics testMetrics) throws JDMException
    // Retrieve Oracle ABN model test metrics deatils extensions
    // Test Metrics Name
    System.out.println("Test Metrics Name = " + testMetrics.getName());
    // Model Name
    System.out.println("Model Name = " + testMetrics.getModelName());
    // Test Data Name
    System.out.println("Test Data Name = " + testMetrics.getTestDataName());
    // Accuracy
    System.out.println("Accuracy = " + m_df.format(testMetrics.getAccuracy().doubleValue()));
    // Confusion Matrix
    ConfusionMatrix confusionMatrix = testMetrics.getConfusionMatrix();
    Collection categories = confusionMatrix.getCategories();
    Iterator xIterator = categories.iterator();
    System.out.println("Confusion Matrix: Accuracy = " + m_df.format(confusionMatrix.getAccuracy()));
    System.out.println("Confusion Matrix: Error = " + m_df.format(confusionMatrix.getError()));
    System.out.println("Confusion Matrix:( Actual, Prection, Value )");
    MessageFormat mf = new MessageFormat(" ( {0}, {1}, {2} )");
    String[] vals = new String[3];
    while(xIterator.hasNext())
    Object actual = xIterator.next();
    vals[0] = actual.toString();
    Iterator yIterator = categories.iterator();
    while(yIterator.hasNext())
    Object predicted = yIterator.next();
    vals[1] = predicted.toString();
    long number = confusionMatrix.getNumberOfPredictions(actual, predicted);
    vals[2] = Long.toString(number);
    System.out.println(mf.format(vals));
    // Lift
    Lift lift = testMetrics.getLift();
    System.out.println("Lift Details:");
    System.out.println("Lift: Target Attribute Name = " + lift.getTargetAttributeName());
    System.out.println("Lift: Positive Target Value = " + lift.getPositiveTargetValue());
    System.out.println("Lift: Total Cases = " + lift.getTotalCases());
    System.out.println("Lift: Total Positive Cases = " + lift.getTotalPositiveCases());
    int numberOfQuantiles = lift.getNumberOfQuantiles();
    System.out.println("Lift: Number Of Quantiles = " + numberOfQuantiles);
    System.out.println("Lift: ( QUANTILE_NUMBER, QUANTILE_TOTAL_COUNT, QUANTILE_TARGET_COUNT, PERCENTAGE_RECORDS_CUMULATIVE,CUMULATIVE_LIFT,CUMULATIVE_TARGET_DENSITY,TARGETS_CUMULATIVE, NON_TARGETS_CUMULATIVE, LIFT_QUANTILE, TARGET_DENSITY )");
    MessageFormat mfLift = new MessageFormat(" ( {0}, {1}, {2}, {3}, {4}, {5}, {6}, {7}, {8}, {9} )");
    String[] liftVals = new String[10];
    for(int iQuantile=1; iQuantile<= numberOfQuantiles; iQuantile++)
    liftVals[0] = Integer.toString(iQuantile); //QUANTILE_NUMBER
    liftVals[1] = Long.toString(lift.getCases((iQuantile-1), iQuantile));//QUANTILE_TOTAL_COUNT
    liftVals[2] = Long.toString(lift.getNumberOfPositiveCases((iQuantile-1), iQuantile));//QUANTILE_TARGET_COUNT
    liftVals[3] = m_df.format(lift.getCumulativePercentageSize(iQuantile).doubleValue());//PERCENTAGE_RECORDS_CUMULATIVE
    liftVals[4] = m_df.format(lift.getCumulativeLift(iQuantile).doubleValue());//CUMULATIVE_LIFT
    liftVals[5] = m_df.format(lift.getCumulativeTargetDensity(iQuantile).doubleValue());//CUMULATIVE_TARGET_DENSITY
    liftVals[6] = Long.toString(lift.getCumulativePositiveCases(iQuantile));//TARGETS_CUMULATIVE
    liftVals[7] = Long.toString(lift.getCumulativeNegativeCases(iQuantile));//NON_TARGETS_CUMULATIVE
    liftVals[8] = m_df.format(lift.getLift(iQuantile, iQuantile).doubleValue());//LIFT_QUANTILE
    liftVals[9] = m_df.format(lift.getTargetDensity(iQuantile, iQuantile).doubleValue());//TARGET_DENSITY
    System.out.println(mfLift.format(liftVals));
    // ROC
    ReceiverOperatingCharacterics roc = testMetrics.getROC();
    System.out.println("ROC Details:");
    System.out.println("ROC: Area Under Curve = " + m_df.format(roc.getAreaUnderCurve()));
    int nROCThresh = roc.getNumberOfThresholdCandidates();
    System.out.println("ROC: Number Of Threshold Candidates = " + nROCThresh);
    System.out.println("ROC: ( INDEX, PROBABILITY, TRUE_POSITIVES, FALSE_NEGATIVES, FALSE_POSITIVES, TRUE_NEGATIVES, TRUE_POSITIVE_FRACTION, FALSE_POSITIVE_FRACTION )");
    MessageFormat mfROC = new MessageFormat(" ( {0}, {1}, {2}, {3}, {4}, {5}, {6}, {7} )");
    String[] rocVals = new String[8];
    for(int iROC=1; iROC <= nROCThresh; iROC++)
    rocVals[0] = Integer.toString(iROC); //INDEX
    rocVals[1] = m_df.format(roc.getProbabilityThreshold(iROC));//PROBABILITY
    rocVals[2] = Long.toString(roc.getPositives(iROC, true));//TRUE_POSITIVES
    rocVals[3] = Long.toString(roc.getNegatives(iROC, false));//FALSE_NEGATIVES
    rocVals[4] = Long.toString(roc.getPositives(iROC, false));//FALSE_POSITIVES
    rocVals[5] = Long.toString(roc.getNegatives(iROC, true));//TRUE_NEGATIVES
    rocVals[6] = m_df.format(roc.getHitRate(iROC));//TRUE_POSITIVE_FRACTION
    rocVals[7] = m_df.format(roc.getFalseAlarmRate(iROC));//FALSE_POSITIVE_FRACTION
    System.out.println(mfROC.format(rocVals));
    private static void displayTable(String tableName, String whereCause, String orderByColumn)
    StringBuffer emptyCol = new StringBuffer(" ");
    java.sql.Connection dbConn =
    ((OraConnection)m_dmeConn).getDatabaseConnection();
    PreparedStatement pStmt = null;
    ResultSet rs = null;
    try
    pStmt = dbConn.prepareStatement("SELECT * FROM " + tableName + " " + whereCause + " " + orderByColumn);
    rs = pStmt.executeQuery();
    ResultSetMetaData rsMeta = rs.getMetaData();
    int colCount = rsMeta.getColumnCount();
    StringBuffer header = new StringBuffer();
    System.out.println("Table : " + tableName);
    //Build table header
    for(int iCol=1; iCol<=colCount; iCol++)
    String colName = rsMeta.getColumnName(iCol);
    header.append(emptyCol.replace(0, colName.length(), colName));
    emptyCol = new StringBuffer(" ");
    System.out.println(header.toString());
    //Write table data
    while(rs.next())
    StringBuffer rowContent = new StringBuffer();
    for(int iCol=1; iCol<=colCount; iCol++)
    int sqlType = rsMeta.getColumnType(iCol);
    Object obj = rs.getObject(iCol);
    String colContent = null;
    if(obj instanceof java.lang.Number)
    try
    BigDecimal bd = (BigDecimal)obj;
    if(bd.scale() > 5)
    colContent = m_df.format(obj);
    } else
    colContent = bd.toString();
    } catch(Exception anyExp) {
    colContent = m_df.format(obj);
    } else
    if(obj == null)
    colContent = "NULL";
    else
    colContent = obj.toString();
    rowContent.append(" "+emptyCol.replace(0, colContent.length(), colContent));
    emptyCol = new StringBuffer(" ");
    System.out.println(rowContent.toString());
    } catch(Exception anySqlExp) {
    anySqlExp.printStackTrace();
    }//Ignore
    private static void createTableForTestMetrics(String applyOutputTableName,
    String testDataName,
    String testMetricsInputTableName)
    //0. need to execute the following in the schema
    String sqlCreate =
    "create table " + testMetricsInputTableName + " as " +
    "select a.id as id, prediction, probability, affinity_card " +
    "from " + testDataName + " a, " + applyOutputTableName + " b " +
    "where a.id = b.id";
    java.sql.Connection dbConn = ((OraConnection) m_dmeConn).getDatabaseConnection();
    Statement stmt = null;
    try
    stmt = dbConn.createStatement();
    stmt.executeUpdate( sqlCreate );
    catch( Exception anySqlExp )
    System.out.println( anySqlExp.getMessage() );
    anySqlExp.printStackTrace();
    finally
    try
    stmt.close();
    catch( SQLException sqlExp ) {}
    private static void clean()
    java.sql.Connection dbConn =
    ((OraConnection) m_dmeConn).getDatabaseConnection();
    Statement stmt = null;
    // Drop apply output table
    try
    stmt = dbConn.createStatement();
    stmt.executeUpdate("DROP TABLE TREE_APPLY_OUTPUT1_JDM");
    } catch(Exception anySqlExp) {}//Ignore
    finally
    try
    stmt.close();
    catch( SQLException sqlExp ) {}
    try
    stmt = dbConn.createStatement();
    stmt.executeUpdate("DROP TABLE TREE_APPLY_OUTPUT2_JDM");
    } catch(Exception anySqlExp) {}//Ignore
    finally
    try
    stmt.close();
    catch( SQLException sqlExp ) {}
    try
    stmt = dbConn.createStatement();
    stmt.executeUpdate("DROP TABLE TREE_APPLY_OUTPUT3_JDM");
    } catch(Exception anySqlExp) {}//Ignore
    finally
    try
    stmt.close();
    catch( SQLException sqlExp ) {}
    // Drop apply output table created for test metrics task
    try
    stmt = dbConn.createStatement();
    stmt.executeUpdate("DROP TABLE DT_TEST_APPLY_OUTPUT_COST_JDM");
    } catch(Exception anySqlExp) {}//Ignore
    finally
    try
    stmt.close();
    catch( SQLException sqlExp ) {}
    try
    stmt = dbConn.createStatement();
    stmt.executeUpdate("DROP TABLE DT_TEST_APPLY_OUTPUT_JDM");
    } catch(Exception anySqlExp) {}//Ignore
    finally
    try
    stmt.close();
    catch( SQLException sqlExp ) {}
    //Drop the model
    try {
    m_dmeConn.removeObject( "treeModel_jdm", NamedObject.model );
    } catch(Exception jdmExp) {}
    // drop test metrics result: created by TestMetricsTask
    try {
    m_dmeConn.removeObject( "dtTestMetricsWithCost_jdm", NamedObject.testMetrics );
    } catch(Exception jdmExp) {}
    try {
    m_dmeConn.removeObject( "dtTestMetrics_jdm", NamedObject.testMetrics );
    } catch(Exception jdmExp) {}

    Hi
    I am not sure whether this will help, but someone else was getting an error with java.sql.SQLException: Unsupported feature. Here is a link to the fix: http://saloon.javaranch.com/cgi-bin/ubb/ultimatebb.cgi?ubb=get_topic&f=3&t=007947 (a quick way to check which JDBC driver is actually being loaded is sketched below).
    Best wishes
    Michael
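    The oracle.jdbc.dbaccess frames in the stack trace suggest that an older (9i-era) JDBC driver is being picked up instead of the 10.2 ojdbc14.jar; that is an inference from the trace, not something the post confirms. A minimal sketch for checking which driver the JVM actually loads, using the same connect string and account as the demo:
    import java.sql.Connection;
    import java.sql.DatabaseMetaData;
    import java.sql.DriverManager;

    // Minimal sketch: print the JDBC driver name/version actually on the classpath.
    public class DriverVersionCheck {
        public static void main(String[] args) throws Exception {
            Class.forName("oracle.jdbc.driver.OracleDriver");
            Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@localhost:1521:orcl", "scott", "tiger");
            DatabaseMetaData md = conn.getMetaData();
            System.out.println("Driver name:    " + md.getDriverName());
            System.out.println("Driver version: " + md.getDriverVersion());
            System.out.println("Database:       " + md.getDatabaseProductVersion());
            conn.close();
        }
    }
    If the reported driver version is 9.x rather than 10.2.x, replacing the driver jar on the classpath with the one shipped with Oracle 10g R2 is the first thing to try.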

  • Gnome-shell error on clutter

    [Update]
    My home PC developed the same problem after an update.
    I have uploaded:
    xorg.0.log:  http://paste2.org/p/1549987
    .xsession-errors: http://paste2.org/p/1549989
    xorg.conf: http://paste2.org/p/1549991
    in xorg.0.log: "fglrx: No matching Device section for instance (BusID PCI:0@1:0:1) found" may be the problem.
    after remove /etc/X11/xorg.conf, I got into the gnome with low resolution, glxinfo has that:
    glxinfo
    name of display: :0.0
    X Error of failed request:  BadRequest (invalid request code or no such operation)
      Major opcode of failed request:  135 (GLX)
      Minor opcode of failed request:  19 (X_GLXQueryServerString)
      Serial number of failed request:  12
      Current serial number in output stream:  12
    [Original post]
    I have a new Dell OptiPlex 990 PC with an ATI 5450 and installed Arch Linux using archboot (I installed catalyst-total from the AUR). When GDM starts and I log in, I can only see the wallpaper and the mouse cursor.
    ~/.xsession-errors contains this:
    (gnome-shell:20642): Clutter-CRITICAL **: Unable to initialize Clutter: Unable to find suitable fbconfig for the GLX context
    Window manager error: Unable to initialize Clutter.
    Failure: Module initalization failed
    Failed to play sound: File or data not found
    After changing the driver to xf86-video-ati, I can use GNOME 3 in fallback mode, but with no 3D acceleration.
    After some googling, I found something that looks related (rpath in gnome-shell):
    https://bugzilla.redhat.com/show_bug.cgi?id=716572
    https://fedoraproject.org/w/index.php?t … did=242788
    Does anyone have the same problem?
    Last edited by niqingliang2003 (2011-07-29 14:35:08)

    This is useless without your Xorg.0.log and the output of glxinfo | grep -i opengl from fallback mode.

  • PROT-1: Failed to initialize ocrconfig

    Hi,
    I'm trying to configure a two-node 10g RAC under Sun Solaris 5.10 using raw devices. I'm getting the error below while running /u01/app/oracle/product/10.2.0/crs/root.sh; I would be very grateful if you could assist with this.
    We are using NFS for the shared storage. I have tried various NFS mount options, but I am not sure why my log always shows "utmountcheck: FS mounted with following options wsize 0, rsize 0, hard 0, noac 0, forcedirectio 0" (a small check of the actual mount options is sketched at the end of this post).
    /u01/app/oracle/product/10.2.0/crs/root.sh
    ===============================
    WARNING: directory '/u01/app/oracle/product/10.2.0' is not owned by root
    WARNING: directory '/u01/app/oracle/product' is not owned by root
    WARNING: directory '/u01/app/oracle' is not owned by root
    WARNING: directory '/u01/app' is not owned by root
    WARNING: directory '/u01' is not owned by root
    Checking to see if Oracle CRS stack is already configured
    Setting the permissions on OCR backup directory
    Setting up NS directories
    PROT-1: Failed to initialize ocrconfig
    Failed to upgrade Oracle Cluster Registry configuration
    NFS parameter
    ===========
    /u02 on 10.70.2.6:/data/ASM remote/read/write/setuid/nodevices/noac/nointr/rsize=32768/wsize=32768/forcedirectio/xattr/zone=ziHISJMCTest/dev=4840023
    /u02 on 10.70.2.6:/data/ASM remote/read/write/setuid/nodevices/noac/nointr/rsize=32768/wsize=32768/forcedirectio/bg/hard/timeo=300/actimeo=0/xattr/zone=ziHISJMCTest2/dev=4840026 on Fri May 30 17:44:14 2008
    rw,bg,vers=4,proto=tcp,hard,intr,rsize=524288,wsize=524288,noac
    /u01/app/oracle/product/10.2.0/crs/log/com-zihisjmctest2/client
    ==============================================
    Oracle Database 10g CRS Release 10.2.0.1.0 Production Copyright 1996, 2005 Oracle. All rights reserved.
    2008-06-01 00:40:13.192: [ OCRCONF][1]ocrconfig starts...
    2008-06-01 00:40:13.192: [ OCRCONF][1]Upgrading OCR data
    2008-06-01 00:40:13.196: [  OCROSD][1]utmountcheck: FS mounted with following options wsize 0, rsize 0, hard 0, noac 0, forcedirectio 0
    2008-06-01 00:40:13.196: [  OCROSD][1]NFS file system /u02 mounted with incorrect options
    2008-06-01 00:40:13.196: [  OCROSD][1]WARNING:Expected NFS mount options: wsize>=32768,rsize>=32768,hard,noac
    2008-06-01 00:40:13.196: [  OCROSD][1]utopen:6'': OCR location /u02/emcpower24d configured is not valid storage type. Return code [37].
    2008-06-01 00:40:13.196: [  OCRRAW][1]proprinit: Could not open raw device
    2008-06-01 00:40:13.196: [ default][1]a_init:7!: Backend init unsuccessful : [37]
    2008-06-01 00:40:13.196: [ OCRCONF][1]Exporting OCR data to [OCRUPGRADEFILE]
    2008-06-01 00:40:13.196: [  OCRAPI][1]a_init:7!: Backend init unsuccessful : [33]
    2008-06-01 00:40:13.196: [ OCRCONF][1]There was no previous version of OCR. error:[PROC-33: Oracle Cluster Registry is not configured]
    2008-06-01 00:40:13.199: [  OCROSD][1]utmountcheck: FS mounted with following options wsize 0, rsize 0, hard 0, noac 0, forcedirectio 0
    2008-06-01 00:40:13.199: [  OCROSD][1]NFS file system /u02 mounted with incorrect options
    2008-06-01 00:40:13.199: [  OCROSD][1]WARNING:Expected NFS mount options: wsize>=32768,rsize>=32768,hard,noac
    2008-06-01 00:40:13.199: [  OCROSD][1]utopen:6'': OCR location /u02/emcpower24d configured is not valid storage type. Return code [37].
    2008-06-01 00:40:13.199: [  OCRRAW][1]proprinit: Could not open raw device
    2008-06-01 00:40:13.199: [ default][1]a_init:7!: Backend init unsuccessful : [37]
    2008-06-01 00:40:13.201: [  OCROSD][1]utmountcheck: FS mounted with following options wsize 0, rsize 0, hard 0, noac 0, forcedirectio 0
    2008-06-01 00:40:13.201: [  OCROSD][1]NFS file system /u02 mounted with incorrect options
    2008-06-01 00:40:13.201: [  OCROSD][1]WARNING:Expected NFS mount options: wsize>=32768,rsize>=32768,hard,noac
    2008-06-01 00:40:13.201: [  OCROSD][1]utopen:6'': OCR location /u02/emcpower24d configured is not valid storage type. Return code [37].
    2008-06-01 00:40:13.201: [  OCRRAW][1]proprinit: Could not open raw device
    2008-06-01 00:40:13.201: [  OCRAPI][1]a_init:6b!: Backend init unsuccessful : [37]
    2008-06-01 00:40:13.201: [ OCRCONF][1]Failed to initialized OCR context. error:[PROC-37: Oracle Cluster Registry does not support the storage type configured
    2008-06-01 00:40:13.201: [ OCRCONF][1]Exiting [status=failed]...
    Raw device config
    =============
    dd if=/dev/zero of=/u02/emcpower32d bs=104857600 count=1
    chown root:oinstall /u02/emcpower32d
    chmod 660 /u02/emcpower32d
    dd if=/dev/zero of=/u02/emcpower24d bs=104857600 count=1
    chown root:oinstall /u02/emcpower24d
    chmod 660 /u02/emcpower24d
    dd if=/dev/zero of=/u02/emcpower25d bs=104857600 count=1
    chown oracle:oinstall /u02/emcpower25d
    chmod 660 /u02/emcpower25d
    dd if=/dev/zero of=/u02/emcpower31d bs=104857600 count=1
    chown oracle:oinstall /u02/emcpower31d
    chmod 660 /u02/emcpower31d
    dd if=/dev/zero of=/u02/emcpower23d bs=104857600 count=1
    chown oracle:oinstall /u02/emcpower23d
    chmod 660 /u02/emcpower23d
    Verification for the shared storage
    =========================
    COM-ziHISJMCTest2:oracle$ ./runcluvfy.sh comp ssa -n com-zihisjmctest1,com-zihisjmctest2 -s /u02 -verbose
    Verifying shared storage accessibility
    Checking shared storage accessibility...
    "/u02" is shared.
    Shared storage check was successful on nodes "com-zihisjmctest2,com-zihisjmctest1".
    Verification of shared storage accessibility was successful.
    Many Thanks,
    Viji

    Hi,
    I am getting the same error on a Solaris box.
    Please see the error below from installing CRS on the Solaris box.
    # /u01/mmapps/mmappsdb/crs/root.sh
    WARNING: directory '/u01/mmapps/mmappsdb' is not owned by root
    WARNING: directory '/u01/mmapps' is not owned by root
    WARNING: directory '/u01' is not owned by root
    Checking to see if Oracle CRS stack is already configured
    Setting the permissions on OCR backup directory
    Setting up NS directories
    PROT-1: Failed to initialize ocrconfig
    Failed to upgrade Oracle Cluster Registry configuration
    Thanks in advance
    Anil Pinto
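    For reference, here is a sketch of a Solaris NFS mount with the options the OCR check says it expects (the log above warns: wsize>=32768, rsize>=32768, hard, noac). The server path and mount point are taken from the output above; the remaining options are assumptions and would need to be validated for your environment, and since the mounts here appear to be inside a zone they may need to be changed from the global zone:
    umount /u02
    mount -F nfs -o rw,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,vers=3,noac,forcedirectio 10.70.2.6:/data/ASM /u02
    The equivalent /etc/vfstab entry (all on one line) would look like:
    10.70.2.6:/data/ASM  -  /u02  nfs  -  yes  rw,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,vers=3,noac,forcedirectio
    After remounting, re-run root.sh and check whether utmountcheck still reports the mount options as 0.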

  • Error Message: QuickTime failed to initialize error (-2096)

    I can't open iTunes. I have reinstalled QuickTime and iTunes twice, with no luck. Error message: QuickTime failed to initialize, error (-2096). Any suggestions?
    If I were to uninstall iTunes and then download it fresh, would the music on my 30 GB iPod sync back to the new iTunes?

    Try this: it just worked for me, as I had the same problem, error 2096. Go into your Program Files folder and open the iTunes folder. Right-click the iTunes.exe icon, click Compatibility, then uncheck the box that says "Run this program in compatibility mode for...". After that you should be good to go!

  • "Failed to initialize display adapter"

    I have a 15-inch MBP and I installed Boot Camp about a week ago. When I install the game Need for Speed, which takes an hour or two, and then start it, the following error occurs: "Please check graphics card and associated drivers. Failed to initialize display adapter. Please ensure your video card is compatible and the drivers for it are installed. Usually this is a result of missing vendor specific drivers."
    Thank you

    Yes, I have, and they do meet the requirements for the game. At least I think so. But if it weren't up to the requirements, the game would give a different error message. This message is about some kind of driver my video card needs to run correctly, and right now it isn't running correctly, which I can tell because Windows 7 doesn't look as crisp as it did in Parallels.

  • I keep getting an "Installer failed to initialize" error

    I am trying to install the trial version of Adobe CS 6 and I keep getting this error: "Installer failed to initialize. Please download Adobe Support to detect the problem". I did that and the Adobe Support Advisor says there are no problems. I tried re-downloading the CS 6 trial but I still get the same message.
    This is what my PDApp.log says:
    3/5/2013 15:03:54 [FATAL] PIM - Failed to load updaterInventory Library
    3/5/2013 15:03:54 [ERROR] PIM - Error in _pimCreateAndValidateUpdateInventoryLibrary.
    Please help!

    Bokin,
    Can you check whether the Windows Installer service is running properly?
    How to check:
    Click Start, type "Run", and press Enter.
    Type Services.msc and press Enter.
    Sort the list by Name and look for "Windows Installer". Double-click it, set the Startup type to Automatic, and then try the installation again.
    If this doesn't work, go to C:\Program Files (x86)\Common Files\Adobe and rename the folder "AAMUpdaterInventory" to "AAMUpdaterInventory.old". Then
    follow the link "Click Here" and download the latest Adobe Application Manager patch for your operating system. (The same steps are sketched as command-prompt commands below.)
    Cheers!
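    For convenience, the same steps as a rough command-prompt sketch (run cmd as administrator; "msiserver" is the service name behind "Windows Installer", and the folder path is the one mentioned above):
    rem check whether the Windows Installer service is present and running
    sc query msiserver
    rem set its startup type to Automatic and start it now
    sc config msiserver start= auto
    net start msiserver
    rem rename the updater inventory folder referenced above
    ren "C:\Program Files (x86)\Common Files\Adobe\AAMUpdaterInventory" AAMUpdaterInventory.old
    Then retry the CS6 installer.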

  • Application failed to initialize

    Hi,
    I am using SAP Business One 2005B patch 36 from a client system. After clicking the SAP B1 icon I get the error "application failed to initialize" and it closes; I am not able to reach the login screen.
    regards
    hariprasad

    Hi,
    Check this link:
    Failed to install client
    Jeyakanthan

  • Adobe Flash Player failed to initialize error

    Hello there! I'm really hoping someone can help me out. I am trying to download Adobe Flash Player and get an error message: "failed to initialize". I've tried deleting my old file and still no luck. I also tried to save it and then go to Windows Explorer to run the program, but I can't find it, even when I do a search. The funny thing is that when I go into "Uninstall programs", there's an icon for it. I'm not sure what I'm doing wrong. Can anyone help me out? I would really love to watch videos on YouTube. :-) I'm running Windows Vista and using Windows Explorer 9. Thanks so much! JodiJ.

    Came across this problem a few times...
    Did you try deleting the actual files from the install directory manually? I think they are located in c:\windows\system32\macromed\flash
    Also came across this... maybe helpful...
    I don't know what web browsers you use, so my instructions are for both browser types (Internet Explorer, and other browsers like Firefox).
    Go to http://forums.adobe.com/thread/889580;
    download the installer for Internet Explorer and save it to disk;
    if you use other browsers, also download the other installer and save it to disk;
    close all browser windows, then use Windows Explorer to run the installers you downloaded (install_flash_player_10_active_x.exe and possibly install_flash_player_10.exe).
    If you still get errors:
    please give the exact error message you are getting;
    list (copy & paste) the contents of the FlashInstall.log file in C:\Windows\system32\Macromed\Flash (see the command-prompt sketch below).
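    A quick way to pull up that log from a Command Prompt, offered just as a sketch (note that on 64-bit Windows the 32-bit Flash files live under C:\Windows\SysWOW64\Macromed\Flash instead):
    rem list the installed Flash files and show the install log
    dir C:\Windows\system32\Macromed\Flash
    type C:\Windows\system32\Macromed\Flash\FlashInstall.log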

  • Installing CS5 on a second computer (Installer Failed to Initialize)

    I have CS5 and want to install it on my laptop; I already have it on my desktop. I have downloaded the Adobe Support Advisor and it detects nothing wrong, but I get the error message "Installer failed to initialize".

    "Installer failed to initialize" is a generic error, and we need to figure out what is causing the trouble. Most of the time it's an issue with the download not completing successfully. In such cases, download the software again and compare the file size of the downloaded file with the file on the server. Let's check that in this case as well and make sure the download is complete.
    Navigate to the log files in one of the following folders:
    • Windows 32 bit (XP, Vista, 7): \Program Files\Common Files\Adobe\Installers\
    • Windows 64 bit (XP, Vista, 7): \Program Files(x86)\Common Files\Adobe\Installers
    • Mac OS: /Library/Logs/Adobe/Installers/
    The log filename includes the product name and install date, followed by ".log.gz." The extension .gz indicates a compressed format.
    Use a decompression utility such as WinZip or StuffIt to decompress the .gz file (or see the Terminal sketch at the end of this reply). Once uncompressed, the log file is a plain text file.
    Open the .log file using WordPad (Windows) or TextEdit (Mac OS).
    Note: By default, log files open in Console on Mac OS. Select all of the text by pressing Command+A, and then copy and paste it into a text editor before continuing.
    Scroll to the bottom of the log. Look in the --- Summary --- section for lines that start with ERROR: or FATAL and indicate a failure during the installation process.
    Copy and Paste that SUMMARY section in here so that we can check the errors. You might also analyze the logs yourself.
    http://helpx.adobe.com/creative-suite/kb/troubleshoot-install-logs-cs5-cs5.html
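    On Mac OS, a minimal Terminal sketch for pulling out those failure lines (it assumes the logs are still compressed and that the stock gzip and grep tools are available):
    cd "/Library/Logs/Adobe/Installers"
    # decompress the installer logs in place, then search for the failure markers
    gunzip *.log.gz
    grep -E "ERROR:|FATAL" *.log
    On Windows, decompress the .log.gz with WinZip as described above and search the resulting .log for ERROR: or FATAL lines.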

  • Failed to initialize Vbox in Photoshop Elements 2.0

    I am trying to put my old version of Photoshop Elements onto my new laptop computer. However, I cannot install the program, nor open it, because when I try I receive the error message "Failed To Initialize Vbox!" I have the serial number, but cannot find a way to start the program.
    Any help would be fantastic! Thank you for all of your hard work,
    -nick

    This might resolve the issue you are having:
    -www.computing.net/answers/windows-95/failed-to-initialize-vbox/101922.html
    -http://inetexplorer.mvps.org/data/vbox.htm
    -Garry

  • OS X could not be installed on your computer. Request failed to initialize

    MBP retina 2013, 15 inch.
    I just tried to install it and got the message in the title. Rebooting, of course, doesn't help.
    Help?

    The server is probably overloaded; the rush from yesterday isn't over, and everybody is updating today. You should have updated yesterday.
