Clustering across File systems

Folks!
          I have a question: does WebLogic support a clustered environment across
          file systems? We are running into this problem. We are using the NES proxy
          library to switch between two WebLogic app servers. If both are started from the
          same file system, i.e. the same WL_HOME, then session failover works
          seamlessly. However, if we move to an environment where everything else is the same
          except that the two WebLogic servers have different WL_HOMEs (since they are
          on different machines), then we cannot get the session to replicate.
          Any ideas?
          

In-memory session replication doesn't use files, so if replication is not
          working, it is misconfigured.
          1. Modify simplesession.jsp to print session.getId();
          2. Hit the servers directly one after the other, and post the responses back.
          - Prasad
          Prasad Peddada wrote:
          > Just make sure that your properties are the same on both machines. By the way
          > are you talking about in memory replication or file persistence?
          >
          > Vivek Bhaskaran wrote:
          >
          > > Folks!
          > > I have a question: does WebLogic support a clustered environment across
          > > file systems? We are running into this problem. We are using the NES proxy
          > > library to switch between two WebLogic app servers. If both are started from the
          > > same file system, i.e. the same WL_HOME, then session failover works
          > > seamlessly. However, if we move to an environment where everything else is the same
          > > except that the two WebLogic servers have different WL_HOMEs (since they are
          > > on different machines), then we cannot get the session to replicate.
          > >
          > > Any ideas?
          >
          > --
          > Cheers
          >
          > - Prasad
          

Similar Messages

  • Opinion on non-clustered file system to offload backups

    3-node 11.2 RAC using ASM on a SAN.
    All of the RMAN backups go to +FRA. Any node can initiate RMAN and create backups.
    I want to offload the backups from +FRA to a file system. Our current backup system can only read cooked file systems.
    This LUN also comes from the same SAN. It will have a file system, ext4. It will be visible to all 3 nodes but will be mounted on ONLY the first node, which will be the one to copy from ASM to this file system.
    If the first node is out of commission, I can mount the backup LUN on one of the other remaining nodes.
    Does this sound like a decent plan, or should I go with a clustered file system?
    Thanks for your opinions!
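    For what is described above, an RMAN command along these lines (a sketch; the destination path is a placeholder) would copy the existing backup sets out of +FRA onto the cooked file system:
    RMAN> BACKUP BACKUPSET ALL FORMAT '/backup/%U';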

    Have you looked at using ACFS? Create a very big disk group, then create an ACFS volume, and finally an ACFS file system. Create a path on all nodes on which the ACFS file system will be mounted.
    Example: using asmca, do the following:
    mkdir /d01/FRA on all systems
    Create DGFRA (4 disks at 500G)
    Create an ACFS volume of 1.8T
    Create an ACFS file system with a mount point of /d01/FRA
    Set db_recovery_file_dest='/d01/FRA' scope=both sid='*'
    Now any node can back up to this FRA location AND any node can copy files to tape or wherever...
    This is "supported" as of 11.2.0.3 (I have used it on 11.2.0.1 and 11.2.0.2 for testing).

  • RAID/ shared file system for clustering

    Could somebody give me the hardware config, vendor, etc. generally used to
    provide failover for the shared file system for a cluster of WebLogic
    servers?

    Many of our customers use either hardware or software RAID to protect
    the file system for the cluster. Most of this is vendor-specific --
    you should check the documentation for your specific platform.
    Another option is to not use a single file system for the cluster. This
    should be fairly self-explanatory -- you simply need to replicate all of
    the files, application code, etc. across the cluster on each individual
    server machine.
    shivu wrote:
    > Could somebody give me the hardware config, vendor, etc. generally used to
    > provide failover for the shared file system for a cluster of WebLogic
    > servers?
    --
    Thanks,
    Michael
    -- BEA WebLogic is hiring!
    Check our website: http://www.bea.com/

  • Migrating Essbase cube across versions via file system

    A large BSO cube has been taking much longer to complete a 'calc all' in Essbase 11.1.2.2 than on Essbase 9.3.1, despite all Essbase.cfg, app, and db settings being the same (https://forums.oracle.com/thread/2599658).
    As a last resort, I've tried the following-
    1. Calc the cube on the 9.3.1 server.
    2. Use EAS Migration Wizard to migrate the cube from the 9.3.1 server to the 11.1.2.2 server.
    3. File system transfer of all ess*.ind and ess*.pag from 9.3.1\app\db folder to 11.1.2.2\app\db folder (at this point a retrieval from the 11.1.2.2 server does not yet return any data).
    4. File system transfer of the dbname.esm file from 9.3.1\app\db folder to 11.1.2.2\app\db folder (at this point a retrieval from the 11.1.2.2 server returns an "unable to load database dbname" error and an "Invalid transaction status for block -- Please use the IBH Locate/Fix utilities to find/fix the problem" error).
    5. File system transfer of the dbname.tct file from 9.3.1\app\db folder to 11.1.2.2\app\db folder (and voila! Essbase returns data from the 11.1.2.2 server and the numbers match the 9.3.1 server).
    This almost seems too good to be true. Can anyone think of any dangers of migrating apps this way? Has nothing changed in the file formats between Essbase 9.x and 11.x? Won't failing to transfer the dbname.ind and dbname.db files cause issues down the road? Thankfully we are soon moving to ASO for this large BSO cube, so this isn't a long-term worry.

    Freshly install Essbase 11.1.2.2 on Windows Server 2008 R2 with the recommended hardware specification. After installation, configure 11.1.2.2 with the DB/schema.
    Take a backup of all data in the Essbase applications, using a script export or by exporting directly from the cube.
    Use the EAS Migration Wizard to migrate the Essbase applications.
    After migrating the applications successfully, reload all the data into the cube.
    For the 4th point:
    The IBH error is generally caused by a mismatch between the index file and the .pag file while executing a calculation script. Possible solutions are available.
    The recommended procedure is:
    a)Disable all logins.
    alter application sample disable connects;
    b)Forcibly log off all users.
    alter system logout session on database sample.basic;
    c)Run the MaxL statement to get invalid block header information.
    alter database sample.basic validate data to local logfile 'invalid_blocks';
    d)Repair invalid block headers
    alter database sample.basic repair invalid_block_headers;
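    Run end to end from the MaxL shell, the procedure above looks like this (a sketch; the credentials and host are placeholders):
    essmsh
    login 'admin' 'password' on 'localhost';
    alter application sample disable connects;
    alter system logout session on database sample.basic;
    alter database sample.basic validate data to local logfile 'invalid_blocks';
    alter database sample.basic repair invalid_block_headers;
    logout;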
    Thanks,
    Sreekumar Hariharan

  • Why would anyone want to use ASM Clustered File system?

    DB Version: 11gR2
    OS : Solaris, AIX, HP-UX
    I've read about the new feature ACFS.
    http://www.oracle-base.com/articles/11g/ACFS_11gR2.php
    But why would anyone want to store database binaries in a separate Filesystem created by Oracle?

    Hi Vitamind,
    "how do these binaries interact with the CPU when they want something to be done?
    ACFS should work with the local OS (Solaris) to communicate with the CPU. Isn't this kind of double work?"
    ACFS doesn't work with the CPU directly; it provides a file system to the local OS.
    There may be extra work, but that's because there are more resources than in a common file system.
    Oracle ACFS executes on operating system platforms as a native file system technology supporting native operating system file system application programming interfaces (APIs).
    ACFS is a general purpose POSIX compliant cluster file system. Being POSIX compliant, all operating system utilities we use with ext3 and other file systems can also be used with Oracle ACFS given it belongs to the same family of related standards.
    ACFS Driver Model
    An Oracle ACFS file system is installed as a dynamically loadable vendor operating system (OS) file system driver and tool set that is developed for each supported operating system platform. The driver is implemented as a Virtual File System (VFS) and processes all file and directory operations directed to a specific file system.
    It makes sense to use ACFS if you use some of the features below:
    • Oracle RAC / RAC ONE NODE
    • Oracle ACFS Snapshots
    • Oracle ASM Dynamic Volume Manager
    • Cluster Filesystem for regular files
    ACFS Use Cases
    • Shared Oracle DB home
    • Other “file system” data
    • External tables, data loads, data extracts
    • BFILES and other data customer chooses not to store in db
    • Log files (consolidates access)
    • Test environments
    • Copy back a previous snapshot after testing
    • Backups
    • Snapshot file system for point-in-time backups
    • General purpose local or cluster file system
    • Leverage ASM manageability
    Note : Oracle ACFS file systems cannot be used for an Oracle base directory or an Oracle grid infrastructure home that contains the software for Oracle Clusterware, Oracle ASM, Oracle ACFS, and Oracle ADVM components.
    Regards,
    Levi Pereira

  • Question about a file system storage option for RAC on 10g

    Hello everyone,
    I am at the beginning of connecting our storage and switches and building RAC on them, but there is a little argument between our specialists.
    We have two database servers (10g with OEL 5) to be clustered, and two disk groups visible to each of those nodes. So the question is: can we choose only one disk group as shared storage, leaving the other as a redundant copy, during the database creation window while installing the database? Some of us argue that the Oracle database has a built-in capability to decide at what level of RAID we store our data.
    Thank you for your help.

    "some of us argue that oracle database has a built-in capability to decide on what level of RAID we store our data". 
    That statement is not true.  Oracle has optional multiplexing for control files, redo logs, and archive logs but this is not enabled by default and Oracle will not automatically enable it.  If you want redundancy of tables, indexes, temp, and undo you must provide this because Oracle does not offer it standard or as an option.  You can achieve redundancy with RAID at the array level, or host based mirroring (like ASM redundancy groups or Linux mdadm).  This can also depend on your file system because, I think, OCFS2 does not support host based mirroring (so you cannot use mdadm or lvm to mirror the storage if you are using OCFS2).
    Redundancy is not required, but it is recommended if you are using hard disks because they are prone to failures.  You can configure RAID 10 across all disks in the array and present this as one big LUN to the database server.  If you have two storage arrays and you want to mirror the data across the two arrays, then present all of the devices as JBOD and use Linux mdadm to create your RAID group.
    RAC requires shared storage.  Maybe you have a NAS or SAN device, and you will present LUNs to the Oracle database servers.  That is no problem.  The problem is making those LUNs usable by Oracle RAC.  When I used Oracle 10g RAC, I used the Linux raw device facility to manage these LUNs and make them ready for Oracle RAC.  However, raw has been desupported.  Today I would use either ASM or OCFS2.  This has nothing to do with redundancy, this is just because you are using RAC.
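    As a sketch of that mdadm approach (the device names are placeholders, not from the reply above): present one LUN from each array as JBOD, mirror them host-side, then hand the resulting device to ASM or OCFS2:
    # RAID 1 across the two arrays; each device below is one array's LUN
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/mapper/arrayA_lun1 /dev/mapper/arrayB_lun1
    cat /proc/mdstat    # confirm the mirror is active/resyncing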

  • Automatic File System Replication at remote(Disaster) Site

    Hi all, I have two sites, one primary and the other DR. How do I configure file system replication to the remote site so that any change/create/delete made in the file system is automatically propagated to the DR site servers? At both sites the Solaris version is 11.2.
    Please suggest.

    Hi,
    A few recommendations:
    1. Use an actual clustering product like Oracle Solaris Cluster:
    Oracle Solaris Cluster | Oracle
    2. If you require synchronous replication, then review this product:
    Sun StorageTek Availability Suite 4.0 Software Product Library Documentation
    3. A clustered ZFS storage appliance provides continuous replication across the production and DR sites.
    See page 2 of this doc: http://www.oracle.com/us/products/servers-storage/sun-zfs-storage-family-ds-173238.pdf
    4. You can build your own Solaris 11.2 ZFS replication by taking hourly snapshots and sending them over to the DR site.
    This is not a fully clustered solution and has no automatic failover. ZFS is not a clustered file system, so you can't
    access the same data from different systems.
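    A minimal sketch of recommendation 4 (the pool, dataset, and host names are placeholders):
    # on the primary: take the hourly snapshot
    zfs snapshot tank/data@2014-07-01T10
    # send only the delta since the previous snapshot to the DR host
    zfs send -i tank/data@2014-07-01T09 tank/data@2014-07-01T10 | ssh drhost zfs receive -F tank/data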
    Thanks, Cindy

  • Ocfs2 can not mount the ocfs2 file system on RedHat AS v4 Update 1

    Hi there,
    I installed ocfs2-2.6.9-11.0.0.10.3.EL-1.0.4-1.i686.rpm onto Red Hat Linux AS v4 Update 1. The installation looks OK. I then configured OCFS2 (at this stage I only added 1 node in the cluster) and loaded and started it accordingly. Then I partitioned the disk and ran mkfs.ocfs2 on the partition. Everything seems OK.
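    For reference, a single-node /etc/ocfs2/cluster.conf of the kind described above looks roughly like this (the node name and IP address are placeholders):
    cluster:
            node_count = 1
            name = ocfs2
    node:
            ip_port = 7777
            ip_address = 192.168.1.10
            number = 0
            name = node1
            cluster = ocfs2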
    [root@node1 init.d]# ./o2cb status
    Module "configfs": Loaded
    Filesystem "configfs": Mounted
    Module "ocfs2_nodemanager": Loaded
    Module "ocfs2_dlm": Loaded
    Module "ocfs2_dlmfs": Loaded
    Filesystem "ocfs2_dlmfs": Mounted
    Checking cluster ocfs2: Online
    Checking heartbeat: Not active
    But you can see that the partition is there:
    [root@node1 init.d]# fsck.ocfs2 /dev/hda12
    Checking OCFS2 filesystem in /dev/hda12:
    label: oracle
    uuid: 27 74 a6 70 32 ad 4f 77 bf 55 8e 3a 87 78 ea cb
    number of blocks: 612464
    bytes per block: 4096
    number of clusters: 76558
    bytes per cluster: 32768
    max slots: 2
    /dev/hda12 is clean. It will be checked after 20 additional mounts.
    However, mount -t ocfs2 /dev/hda12 just does not work.
    [root@node1 oracle]# mount -t ocfs2 /dev/hda12 /oradata/m10g
    mount.ocfs2: No such device while mounting /dev/hda12 on /oradata/m10g
    [root@node1 oracle]# mount -L oracle
    mount: no such partition found
    Looks like mount just cannot see the OCFS2 partition somehow.
    I cannot find much info on Metalink or anywhere else. Has anyone here come across this issue before?
    Regards,
    Eric

    I have been having a similar problem.
    However, when I applied your fix I ended up with another problem:
    (20765,0):ocfs2_initialize_osb:1179 max_slots for this device: 4
    (20765,0):ocfs2_fill_local_node_info:851 I am node 0
    (20765,0):dlm_request_join:756 ERROR: status = -107
    (20765,0):dlm_try_to_join_domain:906 ERROR: status = -107
    (20765,0):dlm_join_domain:1151 ERROR: status = -107
    (20765,0):dlm_register_domain:1330 ERROR: status = -107
    (20765,0):ocfs2_dlm_init:1771 ERROR: status = -12
    (20765,0):ocfs2_mount_volume:912 ERROR: status = -12
    ocfs2: Unmounting device (253,7) on (node 0)
    Now the odd thing about this bit of log output (/var/log/messages)
    is the fact that this is only a 2 node cluster and only one node has
    currently mounted the file system in question. Now, I am running
    the multipath drivers with my qla2xxx drivers under SLES9-R2.
    However, at worst that should only double everything
    (2 nodes x 2 paths through the SAN).
    How can I get more low level information on what is consuming
    the node slots in ocfs2? How can I force it to "disconnect" nodes
    and recover/cleanup node slots?

  • File systems available on Windows Server 2012 R2?

    What are the supported file systems in Windows Server 2012 R2? I mean the complete list. I know you can create, read, and write on FAT32, NTFS, and ReFS. What about non-Microsoft file systems, like EXT4 or HFS+? If I create a VM with a Linux OS, will I be able to access the virtual hard disk natively from WS 2012 R2, or will I need a third-party tool, like the one from Paragon? If I have a drive formatted in EXT4 or HFS+, will I be able to access it from Windows without any third-party tool? By access I mean both read and write. I know that on the client OS, Windows 8.1, this is not possible natively, which is why I am asking here; I guess it is quite possible for the server OS to have built-in support for accessing those file systems. If Hyper-V has been optimized to run not just Windows VMs but also Linux VMs, it would make sense to me for file systems like those from Linux or OS X to be available via a built-in feature. I have tried to mount the VHD from a Linux VM I created in Hyper-V; Windows Explorer could not read the hard drive.

    I installed Paragon ExtFS free. With it loaded, I tried to mount in Windows Explorer an ext4-formatted VHD created on a Linux Hyper-V VM; it failed, and Paragon ExtFS crashed. I uninstalled Paragon ExtFS. The free version was not supported on WS 2012 R2 by Paragon; since Windows has no built-in support for ext4, I guess this free software has not messed anything up in the OS.
    Don't mess with third-party kernel-mode file systems, as it's basically begging for trouble: a crash inside them will BSOD the whole system, and third-party file systems are typically buggy... because a) FS development for Windows is VERY complex and b) there are very few external adopters, so not that many people actually test them. What you can do, however:
    1) Spawn an OS with a supported FS inside a VM and configure loopback connectivity (even over SMB) with your host. You'll read and write your volume inside the VM and copy content to/from the host.
    (I personally use this approach in the reverse direction: my primary OS is Mac OS X, but I read/write NTFS-formatted disks from inside a Windows 7 VM I run on VMware Fusion.)
    2) Use a user-mode file system explorer (see the sample links below; I'm NOT affiliated with those companies). You'll copy content from the volume as if through some sort of shell extension.
    Crashes in 1) and 2) will not touch the stability of your OS as a whole.
    HFS Explorer for Windows
    http://www.heise.de/download/hfsexplorer.html
    Ext2Read
    http://sourceforge.net/projects/ext2read/
    (both are user-land applications for HFS(+) and EXT2/3/4, respectively)
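    As a concrete sketch of option 1 (the host and share names are placeholders, not from the post): export the ext4 volume from the Linux guest over Samba, then map it on the 2012 R2 host:
    net use X: \\linux-vm\data
    dir X: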
    Hope this helped :)
    StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts.

  • Read only file system

    One of my Parabola installations has started to always mount the / partition read-only. I've booted into a different install on the same machine, and can mount the affected partition without problems. I've done a fsck, and run the SMART disk checks. No problems. I've touched /forcefsck so a file system check is done every time I boot, but still the partition is read-only when it finishes booting.
    This is the output of 'mount':
    proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
    sys on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
    dev on /dev type devtmpfs (rw,nosuid,relatime,size=1024388k,nr_inodes=256097,mode=755)
    run on /run type tmpfs (rw,nosuid,nodev,relatime,mode=755)
    /dev/sdb10 on / type ext4 (ro,relatime,data=ordered)
    tmpfs on /tmp type tmpfs (rw,nosuid,nodev,relatime)
    /dev/sdb9 on /boot type ext4 (rw,relatime,data=ordered)
    /dev/sdb1 on /media/Stuff type fuseblk (rw,nosuid,nodev,noexec,relatime,user_id=0,group_id=0,default_permissions,allow_other,blksize=4096)
    binfmt on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,relatime)
    This is the output of 'dmesg':
    [ 0.000000] Initializing cgroup subsys cpuset
    [ 0.000000] Initializing cgroup subsys cpu
    [ 0.000000] Linux version 3.6.3-1-LIBRE (nobody@root) (gcc version 4.7.2 (GCC) ) #1 SMP PREEMPT Tue Oct 23 00:29:01 UYST 2012
    [ 0.000000] Command line: BOOT_IMAGE=/vmlinuz-linux-libre root=UUID=f0206707-f1ca-4bea-9eb3-c5d3713e4a4e ro resume=UUID=5b4248ba-30c3-48e5-a090-3a9c9f49d9c4 quiet
    [ 0.000000] e820: BIOS-provided physical RAM map:
    [ 0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
    [ 0.000000] BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
    [ 0.000000] BIOS-e820: [mem 0x00000000000e4000-0x00000000000fffff] reserved
    [ 0.000000] BIOS-e820: [mem 0x0000000000100000-0x000000007ff8ffff] usable
    [ 0.000000] BIOS-e820: [mem 0x000000007ff90000-0x000000007ff9dfff] ACPI data
    [ 0.000000] BIOS-e820: [mem 0x000000007ff9e000-0x000000007ffcffff] ACPI NVS
    [ 0.000000] BIOS-e820: [mem 0x000000007ffd0000-0x000000007ffddfff] reserved
    [ 0.000000] BIOS-e820: [mem 0x000000007ffe0000-0x000000007fffffff] reserved
    [ 0.000000] BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
    [ 0.000000] BIOS-e820: [mem 0x00000000fff00000-0x00000000ffffffff] reserved
    [ 0.000000] NX (Execute Disable) protection: active
    [ 0.000000] DMI present.
    [ 0.000000] DMI: System manufacturer System Product Name/P5KPL-CM, BIOS 0702 08/27/2010
    [ 0.000000] e820: update [mem 0x00000000-0x0000ffff] usable ==> reserved
    [ 0.000000] e820: remove [mem 0x000a0000-0x000fffff] usable
    [ 0.000000] No AGP bridge found
    [ 0.000000] e820: last_pfn = 0x7ff90 max_arch_pfn = 0x400000000
    [ 0.000000] MTRR default type: uncachable
    [ 0.000000] MTRR fixed ranges enabled:
    [ 0.000000] 00000-9FFFF write-back
    [ 0.000000] A0000-BFFFF uncachable
    [ 0.000000] C0000-DFFFF write-protect
    [ 0.000000] E0000-EFFFF write-through
    [ 0.000000] F0000-FFFFF write-protect
    [ 0.000000] MTRR variable ranges enabled:
    [ 0.000000] 0 base 000000000 mask F80000000 write-back
    [ 0.000000] 1 disabled
    [ 0.000000] 2 disabled
    [ 0.000000] 3 disabled
    [ 0.000000] 4 disabled
    [ 0.000000] 5 disabled
    [ 0.000000] 6 disabled
    [ 0.000000] 7 disabled
    [ 0.000000] x86 PAT enabled: cpu 0, old 0x7040600070406, new 0x7010600070106
    [ 0.000000] found SMP MP-table at [mem 0x000ff780-0x000ff78f] mapped at [ffff8800000ff780]
    [ 0.000000] initial memory mapped: [mem 0x00000000-0x1fffffff]
    [ 0.000000] Base memory trampoline at [ffff880000099000] 99000 size 24576
    [ 0.000000] init_memory_mapping: [mem 0x00000000-0x7ff8ffff]
    [ 0.000000] [mem 0x00000000-0x7fdfffff] page 2M
    [ 0.000000] [mem 0x7fe00000-0x7ff8ffff] page 4k
    [ 0.000000] kernel direct mapping tables up to 0x7ff8ffff @ [mem 0x1fbfd000-0x1fffffff]
    [ 0.000000] RAMDISK: [mem 0x37a10000-0x37cfffff]
    [ 0.000000] ACPI: RSDP 00000000000fb6a0 00014 (v00 ACPIAM)
    [ 0.000000] ACPI: RSDT 000000007ff90000 0003C (v01 A_M_I_ OEMRSDT 08001027 MSFT 00000097)
    [ 0.000000] ACPI: FACP 000000007ff90200 00084 (v02 A_M_I_ OEMFACP 08001027 MSFT 00000097)
    [ 0.000000] ACPI: DSDT 000000007ff905c0 07BFA (v01 A0968 A0968000 00000000 INTL 20060113)
    [ 0.000000] ACPI: FACS 000000007ff9e000 00040
    [ 0.000000] ACPI: APIC 000000007ff90390 0006C (v01 A_M_I_ OEMAPIC 08001027 MSFT 00000097)
    [ 0.000000] ACPI: MCFG 000000007ff90400 0003C (v01 A_M_I_ OEMMCFG 08001027 MSFT 00000097)
    [ 0.000000] ACPI: OEMB 000000007ff9e040 00080 (v01 A_M_I_ AMI_OEM 08001027 MSFT 00000097)
    [ 0.000000] ACPI: HPET 000000007ff981c0 00038 (v01 A_M_I_ OEMHPET 08001027 MSFT 00000097)
    [ 0.000000] ACPI: GSCI 000000007ff9e0c0 02024 (v01 A_M_I_ GMCHSCI 08001027 MSFT 00000097)
    [ 0.000000] ACPI: Local APIC address 0xfee00000
    [ 0.000000] No NUMA configuration found
    [ 0.000000] Faking a node at [mem 0x0000000000000000-0x000000007ff8ffff]
    [ 0.000000] Initmem setup node 0 [mem 0x00000000-0x7ff8ffff]
    [ 0.000000] NODE_DATA [mem 0x7ff8c000-0x7ff8ffff]
    [ 0.000000] [ffffea0000000000-ffffea0001ffffff] PMD -> [ffff88007d600000-ffff88007f5fffff] on node 0
    [ 0.000000] Zone ranges:
    [ 0.000000] DMA [mem 0x00010000-0x00ffffff]
    [ 0.000000] DMA32 [mem 0x01000000-0xffffffff]
    [ 0.000000] Normal empty
    [ 0.000000] Movable zone start for each node
    [ 0.000000] Early memory node ranges
    [ 0.000000] node 0: [mem 0x00010000-0x0009efff]
    [ 0.000000] node 0: [mem 0x00100000-0x7ff8ffff]
    [ 0.000000] On node 0 totalpages: 524063
    [ 0.000000] DMA zone: 64 pages used for memmap
    [ 0.000000] DMA zone: 6 pages reserved
    [ 0.000000] DMA zone: 3913 pages, LIFO batch:0
    [ 0.000000] DMA32 zone: 8127 pages used for memmap
    [ 0.000000] DMA32 zone: 511953 pages, LIFO batch:31
    [ 0.000000] ACPI: PM-Timer IO Port: 0x808
    [ 0.000000] ACPI: Local APIC address 0xfee00000
    [ 0.000000] ACPI: LAPIC (acpi_id[0x01] lapic_id[0x00] enabled)
    [ 0.000000] ACPI: LAPIC (acpi_id[0x02] lapic_id[0x01] enabled)
    [ 0.000000] ACPI: LAPIC (acpi_id[0x03] lapic_id[0x82] disabled)
    [ 0.000000] ACPI: LAPIC (acpi_id[0x04] lapic_id[0x83] disabled)
    [ 0.000000] ACPI: IOAPIC (id[0x02] address[0xfec00000] gsi_base[0])
    [ 0.000000] IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-23
    [ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
    [ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
    [ 0.000000] ACPI: IRQ0 used by override.
    [ 0.000000] ACPI: IRQ2 used by override.
    [ 0.000000] ACPI: IRQ9 used by override.
    [ 0.000000] Using ACPI (MADT) for SMP configuration information
    [ 0.000000] ACPI: HPET id: 0x8086a201 base: 0xfed00000
    [ 0.000000] smpboot: Allowing 4 CPUs, 2 hotplug CPUs
    [ 0.000000] nr_irqs_gsi: 40
    [ 0.000000] PM: Registered nosave memory: 000000000009f000 - 00000000000a0000
    [ 0.000000] PM: Registered nosave memory: 00000000000a0000 - 00000000000e4000
    [ 0.000000] PM: Registered nosave memory: 00000000000e4000 - 0000000000100000
    [ 0.000000] e820: [mem 0x80000000-0xfedfffff] available for PCI devices
    [ 0.000000] Booting paravirtualized kernel on bare hardware
    [ 0.000000] setup_percpu: NR_CPUS:64 nr_cpumask_bits:64 nr_cpu_ids:4 nr_node_ids:1
    [ 0.000000] PERCPU: Embedded 28 pages/cpu @ffff88007fc00000 s84608 r8192 d21888 u524288
    [ 0.000000] pcpu-alloc: s84608 r8192 d21888 u524288 alloc=1*2097152
    [ 0.000000] pcpu-alloc: [0] 0 1 2 3
    [ 0.000000] Built 1 zonelists in Node order, mobility grouping on. Total pages: 515866
    [ 0.000000] Policy zone: DMA32
    [ 0.000000] Kernel command line: BOOT_IMAGE=/vmlinuz-linux-libre root=UUID=f0206707-f1ca-4bea-9eb3-c5d3713e4a4e ro resume=UUID=5b4248ba-30c3-48e5-a090-3a9c9f49d9c4 quiet
    [ 0.000000] PID hash table entries: 4096 (order: 3, 32768 bytes)
    [ 0.000000] __ex_table already sorted, skipping sort
    [ 0.000000] Checking aperture...
    [ 0.000000] No AGP bridge found
    [ 0.000000] Calgary: detecting Calgary via BIOS EBDA area
    [ 0.000000] Calgary: Unable to locate Rio Grande table in EBDA - bailing!
    [ 0.000000] Memory: 2048780k/2096704k available (4726k kernel code, 452k absent, 47472k reserved, 4144k data, 772k init)
    [ 0.000000] SLUB: Genslabs=15, HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
    [ 0.000000] Preemptible hierarchical RCU implementation.
    [ 0.000000] RCU dyntick-idle grace-period acceleration is enabled.
    [ 0.000000] Dump stacks of tasks blocking RCU-preempt GP.
    [ 0.000000] RCU restricting CPUs from NR_CPUS=64 to nr_cpu_ids=4.
    [ 0.000000] NR_IRQS:4352 nr_irqs:712 16
    [ 0.000000] Console: colour VGA+ 80x25
    [ 0.000000] console [tty0] enabled
    [ 0.000000] allocated 8388608 bytes of page_cgroup
    [ 0.000000] please try 'cgroup_disable=memory' option if you don't want memory cgroups
    [ 0.000000] hpet clockevent registered
    [ 0.000000] tsc: Fast TSC calibration using PIT
    [ 0.000000] tsc: Detected 1613.131 MHz processor
    [ 0.003338] Calibrating delay loop (skipped), value calculated using timer frequency.. 3227.68 BogoMIPS (lpj=5377103)
    [ 0.003342] pid_max: default: 32768 minimum: 301
    [ 0.003391] Security Framework initialized
    [ 0.003397] AppArmor: AppArmor disabled by boot time parameter
    [ 0.003643] Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes)
    [ 0.004961] Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes)
    [ 0.005580] Mount-cache hash table entries: 256
    [ 0.005927] Initializing cgroup subsys cpuacct
    [ 0.005933] Initializing cgroup subsys memory
    [ 0.005946] Initializing cgroup subsys devices
    [ 0.005949] Initializing cgroup subsys freezer
    [ 0.005951] Initializing cgroup subsys net_cls
    [ 0.005953] Initializing cgroup subsys blkio
    [ 0.005995] CPU: Physical Processor ID: 0
    [ 0.005998] CPU: Processor Core ID: 0
    [ 0.006000] mce: CPU supports 6 MCE banks
    [ 0.006010] CPU0: Thermal monitoring enabled (TM2)
    [ 0.006015] process: using mwait in idle threads
    [ 0.006021] Last level iTLB entries: 4KB 128, 2MB 4, 4MB 4
    Last level dTLB entries: 4KB 256, 2MB 0, 4MB 32
    tlb_flushall_shift is 0xffffffff
    [ 0.008169] ACPI: Core revision 20120711
    [ 0.013350] ftrace: allocating 18348 entries in 72 pages
    [ 0.026962] ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
    [ 0.059974] smpboot: CPU0: Intel(R) Celeron(R) CPU E1200 @ 1.60GHz stepping 0d
    [ 0.059996] Performance Events: PEBS fmt0+, 4-deep LBR, Core2 events, Intel PMU driver.
    [ 0.059996] perf_event_intel: PEBS disabled due to CPU errata
    [ 0.059996] ... version: 2
    [ 0.059996] ... bit width: 40
    [ 0.059996] ... generic registers: 2
    [ 0.059996] ... value mask: 000000ffffffffff
    [ 0.059996] ... max period: 000000007fffffff
    [ 0.059996] ... fixed-purpose events: 3
    [ 0.059996] ... event mask: 0000000700000003
    [ 0.083501] NMI watchdog: enabled on all CPUs, permanently consumes one hw-PMU counter.
    [ 0.096681] smpboot: Booting Node 0, Processors #1
    [ 0.109825] Brought up 2 CPUs
    [ 0.109825] smpboot: Total of 2 processors activated (6455.37 BogoMIPS)
    [ 0.110104] devtmpfs: initialized
    [ 0.112186] PM: Registering ACPI NVS region [mem 0x7ff9e000-0x7ffcffff] (204800 bytes)
    [ 0.112186] NET: Registered protocol family 16
    [ 0.112186] ACPI: bus type pci registered
    [ 0.112186] PCI: MMCONFIG for domain 0000 [bus 00-3f] at [mem 0xf0000000-0xf3ffffff] (base 0xf0000000)
    [ 0.112186] PCI: not using MMCONFIG
    [ 0.112186] PCI: Using configuration type 1 for base access
    [ 0.113367] bio: create slab <bio-0> at 0
    [ 0.113401] ACPI: Added _OSI(Module Device)
    [ 0.113401] ACPI: Added _OSI(Processor Device)
    [ 0.113401] ACPI: Added _OSI(3.0 _SCP Extensions)
    [ 0.113401] ACPI: Added _OSI(Processor Aggregator Device)
    [ 0.114233] ACPI: EC: Look up EC in DSDT
    [ 0.116040] ACPI: Executed 1 blocks of module-level executable AML code
    [ 0.122816] ACPI: SSDT 000000007ffa00f0 001D2 (v01 AMI CPU1PM 00000001 INTL 20060113)
    [ 0.122816] ACPI: Dynamic OEM Table Load:
    [ 0.122816] ACPI: SSDT (null) 001D2 (v01 AMI CPU1PM 00000001 INTL 20060113)
    [ 0.122816] ACPI: SSDT 000000007ffa02d0 00143 (v01 AMI CPU2PM 00000001 INTL 20060113)
    [ 0.122816] ACPI: Dynamic OEM Table Load:
    [ 0.122816] ACPI: SSDT (null) 00143 (v01 AMI CPU2PM 00000001 INTL 20060113)
    [ 0.122816] ACPI: Interpreter enabled
    [ 0.122816] ACPI: (supports S0 S1 S3 S4 S5)
    [ 0.122816] ACPI: Using IOAPIC for interrupt routing
    [ 0.122816] PCI: MMCONFIG for domain 0000 [bus 00-3f] at [mem 0xf0000000-0xf3ffffff] (base 0xf0000000)
    [ 0.122816] PCI: MMCONFIG at [mem 0xf0000000-0xf3ffffff] reserved in ACPI motherboard resources
    [ 0.136077] ACPI: No dock devices found.
    [ 0.136086] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
    [ 0.136182] ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
    [ 0.136328] pci_root PNP0A08:00: [Firmware Info]: MMCONFIG for domain 0000 [bus 00-3f] only partially covers this bridge
    [ 0.136382] PCI host bridge to bus 0000:00
    [ 0.136387] pci_bus 0000:00: busn_res: [bus 00-ff] is inserted under domain [bus 00-ff]
    [ 0.136390] pci_bus 0000:00: root bus resource [bus 00-ff]
    [ 0.136394] pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7]
    [ 0.136397] pci_bus 0000:00: root bus resource [io 0x0d00-0xffff]
    [ 0.136400] pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff]
    [ 0.136403] pci_bus 0000:00: root bus resource [mem 0x000d0000-0x000dffff]
    [ 0.136406] pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff]
    [ 0.136420] pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
    [ 0.136478] pci 0000:00:01.0: [8086:29c1] type 01 class 0x060400
    [ 0.136533] pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
    [ 0.136587] pci 0000:00:1b.0: [8086:27d8] type 00 class 0x040300
    [ 0.136606] pci 0000:00:1b.0: reg 10: [mem 0xfe9fc000-0xfe9fffff 64bit]
    [ 0.136697] pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold
    [ 0.136723] pci 0000:00:1c.0: [8086:27d0] type 01 class 0x060400
    [ 0.136803] pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
    [ 0.136831] pci 0000:00:1c.1: [8086:27d2] type 01 class 0x060400
    [ 0.136910] pci 0000:00:1c.1: PME# supported from D0 D3hot D3cold
    [ 0.136941] pci 0000:00:1d.0: [8086:27c8] type 00 class 0x0c0300
    [ 0.136986] pci 0000:00:1d.0: reg 20: [io 0xc480-0xc49f]
    [ 0.137022] pci 0000:00:1d.1: [8086:27c9] type 00 class 0x0c0300
    [ 0.137067] pci 0000:00:1d.1: reg 20: [io 0xc800-0xc81f]
    [ 0.137108] pci 0000:00:1d.2: [8086:27ca] type 00 class 0x0c0300
    [ 0.137153] pci 0000:00:1d.2: reg 20: [io 0xc880-0xc89f]
    [ 0.137190] pci 0000:00:1d.3: [8086:27cb] type 00 class 0x0c0300
    [ 0.137235] pci 0000:00:1d.3: reg 20: [io 0xcc00-0xcc1f]
    [ 0.137282] pci 0000:00:1d.7: [8086:27cc] type 00 class 0x0c0320
    [ 0.137304] pci 0000:00:1d.7: reg 10: [mem 0xfe9fbc00-0xfe9fbfff]
    [ 0.137395] pci 0000:00:1d.7: PME# supported from D0 D3hot D3cold
    [ 0.137419] pci 0000:00:1e.0: [8086:244e] type 01 class 0x060401
    [ 0.137491] pci 0000:00:1f.0: [8086:27b8] type 00 class 0x060100
    [ 0.137581] pci 0000:00:1f.0: ICH7 LPC Generic IO decode 1 PIO at 0294 (mask 0003)
    [ 0.137630] pci 0000:00:1f.1: [8086:27df] type 00 class 0x01018a
    [ 0.137645] pci 0000:00:1f.1: reg 10: [io 0x0000-0x0007]
    [ 0.137656] pci 0000:00:1f.1: reg 14: [io 0x0000-0x0003]
    [ 0.137667] pci 0000:00:1f.1: reg 18: [io 0x08f0-0x08f7]
    [ 0.137678] pci 0000:00:1f.1: reg 1c: [io 0x08f8-0x08fb]
    [ 0.137689] pci 0000:00:1f.1: reg 20: [io 0xffa0-0xffaf]
    [ 0.137730] pci 0000:00:1f.2: [8086:27c0] type 00 class 0x01018f
    [ 0.137747] pci 0000:00:1f.2: reg 10: [io 0xc400-0xc407]
    [ 0.137757] pci 0000:00:1f.2: reg 14: [io 0xc080-0xc083]
    [ 0.137767] pci 0000:00:1f.2: reg 18: [io 0xc000-0xc007]
    [ 0.137777] pci 0000:00:1f.2: reg 1c: [io 0xbc00-0xbc03]
    [ 0.137786] pci 0000:00:1f.2: reg 20: [io 0xb880-0xb88f]
    [ 0.137827] pci 0000:00:1f.2: PME# supported from D3hot
    [ 0.137845] pci 0000:00:1f.3: [8086:27da] type 00 class 0x0c0500
    [ 0.137901] pci 0000:00:1f.3: reg 20: [io 0x0400-0x041f]
    [ 0.137974] pci_bus 0000:01: busn_res: [bus 01] is inserted under [bus 00-ff]
    [ 0.137991] pci 0000:01:00.0: [1002:68f9] type 00 class 0x030000
    [ 0.138008] pci 0000:01:00.0: reg 10: [mem 0xe0000000-0xefffffff 64bit pref]
    [ 0.138021] pci 0000:01:00.0: reg 18: [mem 0xfeac0000-0xfeadffff 64bit]
    [ 0.138031] pci 0000:01:00.0: reg 20: [io 0xd000-0xd0ff]
    [ 0.138047] pci 0000:01:00.0: reg 30: [mem 0xfeaa0000-0xfeabffff pref]
    [ 0.138085] pci 0000:01:00.0: supports D1 D2
    [ 0.138109] pci 0000:01:00.1: [1002:aa68] type 00 class 0x040300
    [ 0.138125] pci 0000:01:00.1: reg 10: [mem 0xfeafc000-0xfeafffff 64bit]
    [ 0.138195] pci 0000:01:00.1: supports D1 D2
    [ 0.138231] pci 0000:00:01.0: PCI bridge to [bus 01]
    [ 0.138236] pci 0000:00:01.0: bridge window [io 0xd000-0xdfff]
    [ 0.138240] pci 0000:00:01.0: bridge window [mem 0xfea00000-0xfeafffff]
    [ 0.138246] pci 0000:00:01.0: bridge window [mem 0xe0000000-0xefffffff 64bit pref]
    [ 0.138291] pci_bus 0000:03: busn_res: [bus 03] is inserted under [bus 00-ff]
    [ 0.138295] pci 0000:00:1c.0: PCI bridge to [bus 03]
    [ 0.138347] pci_bus 0000:02: busn_res: [bus 02] is inserted under [bus 00-ff]
    [ 0.138372] pci 0000:02:00.0: [1969:1026] type 00 class 0x020000
    [ 0.138398] pci 0000:02:00.0: reg 10: [mem 0xfebc0000-0xfebfffff 64bit]
    [ 0.138412] pci 0000:02:00.0: reg 18: [io 0xec00-0xec7f]
    [ 0.138519] pci 0000:02:00.0: PME# supported from D3hot D3cold
    [ 0.138544] pci 0000:02:00.0: disabling ASPM on pre-1.1 PCIe device. You can enable it with 'pcie_aspm=force'
    [ 0.138555] pci 0000:00:1c.1: PCI bridge to [bus 02]
    [ 0.138560] pci 0000:00:1c.1: bridge window [io 0xe000-0xefff]
    [ 0.138565] pci 0000:00:1c.1: bridge window [mem 0xfeb00000-0xfebfffff]
    [ 0.138595] pci_bus 0000:04: busn_res: [bus 04] is inserted under [bus 00-ff]
    [ 0.138642] pci 0000:00:1e.0: PCI bridge to [bus 04] (subtractive decode)
    [ 0.138654] pci 0000:00:1e.0: bridge window [io 0x0000-0x0cf7] (subtractive decode)
    [ 0.138657] pci 0000:00:1e.0: bridge window [io 0x0d00-0xffff] (subtractive decode)
    [ 0.138660] pci 0000:00:1e.0: bridge window [mem 0x000a0000-0x000bffff] (subtractive decode)
    [ 0.138663] pci 0000:00:1e.0: bridge window [mem 0x000d0000-0x000dffff] (subtractive decode)
    [ 0.138666] pci 0000:00:1e.0: bridge window [mem 0x80000000-0xffffffff] (subtractive decode)
    [ 0.138689] ACPI: PCI Interrupt Routing Table [\_SB_.PCI0._PRT]
    [ 0.138794] ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.P0P1._PRT]
    [ 0.138867] ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.P0P4._PRT]
    [ 0.138903] ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.P0P5._PRT]
    [ 0.138963] pci0000:00: ACPI _OSC support notification failed, disabling PCIe ASPM
    [ 0.138967] pci0000:00: Unable to request _OSC control (_OSC support mask: 0x08)
    [ 0.145009] ACPI: PCI Interrupt Link [LNKA] (IRQs 3 4 5 6 7 *11 12 14 15)
    [ 0.145073] ACPI: PCI Interrupt Link [LNKB] (IRQs *10)
    [ 0.145128] ACPI: PCI Interrupt Link [LNKC] (IRQs *3 4 5 6 7 11 12 14 15)
    [ 0.145187] ACPI: PCI Interrupt Link [LNKD] (IRQs 3 4 5 6 7 *11 12 14 15)
    [ 0.145246] ACPI: PCI Interrupt Link [LNKE] (IRQs 3 4 5 6 7 11 12 14 15) *0, disabled.
    [ 0.145306] ACPI: PCI Interrupt Link [LNKF] (IRQs 3 4 5 6 7 11 12 14 15) *0, disabled.
    [ 0.145365] ACPI: PCI Interrupt Link [LNKG] (IRQs 3 4 5 6 7 11 12 14 15) *0, disabled.
    [ 0.145425] ACPI: PCI Interrupt Link [LNKH] (IRQs 3 4 *5 6 7 11 12 14 15)
    [ 0.146698] vgaarb: device added: PCI:0000:01:00.0,decodes=io+mem,owns=io+mem,locks=none
    [ 0.146703] vgaarb: loaded
    [ 0.146705] vgaarb: bridge control possible 0000:01:00.0
    [ 0.146775] PCI: Using ACPI for IRQ routing
    [ 0.147301] PCI: pci_cache_line_size set to 64 bytes
    [ 0.147387] e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
    [ 0.147389] e820: reserve RAM buffer [mem 0x7ff90000-0x7fffffff]
    [ 0.147545] NetLabel: Initializing
    [ 0.147548] NetLabel: domain hash size = 128
    [ 0.147550] NetLabel: protocols = UNLABELED CIPSOv4
    [ 0.147571] NetLabel: unlabeled traffic allowed by default
    [ 0.147588] HPET: 3 timers in total, 0 timers will be used for per-cpu timer
    [ 0.147593] hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
    [ 0.147599] hpet0: 3 comparators, 64-bit 14.318180 MHz counter
    [ 0.166686] Switching to clocksource hpet
    [ 0.177018] pnp: PnP ACPI init
    [ 0.177044] ACPI: bus type pnp registered
    [ 0.177174] pnp 00:00: [bus 00-ff]
    [ 0.177179] pnp 00:00: [io 0x0cf8-0x0cff]
    [ 0.177182] pnp 00:00: [io 0x0000-0x0cf7 window]
    [ 0.177186] pnp 00:00: [io 0x0d00-0xffff window]
    [ 0.177189] pnp 00:00: [mem 0x000a0000-0x000bffff window]
    [ 0.177192] pnp 00:00: [mem 0x000d0000-0x000dffff window]
    [ 0.177195] pnp 00:00: [mem 0x80000000-0xffffffff window]
    [ 0.177279] pnp 00:00: Plug and Play ACPI device, IDs PNP0a08 PNP0a03 (active)
    [ 0.177295] pnp 00:01: [mem 0xfed14000-0xfed19fff]
    [ 0.177363] system 00:01: [mem 0xfed14000-0xfed19fff] has been reserved
    [ 0.177369] system 00:01: Plug and Play ACPI device, IDs PNP0c01 (active)
    [ 0.177419] pnp 00:02: [dma 4]
    [ 0.177423] pnp 00:02: [io 0x0000-0x000f]
    [ 0.177426] pnp 00:02: [io 0x0081-0x0083]
    [ 0.177429] pnp 00:02: [io 0x0087]
    [ 0.177431] pnp 00:02: [io 0x0089-0x008b]
    [ 0.177434] pnp 00:02: [io 0x008f]
    [ 0.177437] pnp 00:02: [io 0x00c0-0x00df]
    [ 0.177478] pnp 00:02: Plug and Play ACPI device, IDs PNP0200 (active)
    [ 0.177494] pnp 00:03: [io 0x0070-0x0071]
    [ 0.177508] pnp 00:03: [irq 8]
    [ 0.177544] pnp 00:03: Plug and Play ACPI device, IDs PNP0b00 (active)
    [ 0.177557] pnp 00:04: [io 0x0061]
    [ 0.177595] pnp 00:04: Plug and Play ACPI device, IDs PNP0800 (active)
    [ 0.177609] pnp 00:05: [io 0x00f0-0x00ff]
    [ 0.177617] pnp 00:05: [irq 13]
    [ 0.177654] pnp 00:05: Plug and Play ACPI device, IDs PNP0c04 (active)
    [ 0.178000] pnp 00:06: [irq 6]
    [ 0.178004] pnp 00:06: [dma 2]
    [ 0.178007] pnp 00:06: [io 0x03f0-0x03f5]
    [ 0.178010] pnp 00:06: [io 0x03f7]
    [ 0.178096] pnp 00:06: Plug and Play ACPI device, IDs PNP0700 (active)
    [ 0.178501] pnp 00:07: [irq 7]
    [ 0.178504] pnp 00:07: [dma 3]
    [ 0.178508] pnp 00:07: [io 0x0378-0x037f]
    [ 0.178511] pnp 00:07: [io 0x0778-0x077f]
    [ 0.178686] pnp 00:07: Plug and Play ACPI device, IDs PNP0401 (active)
    [ 0.178731] pnp 00:08: [io 0x0000-0xffffffffffffffff disabled]
    [ 0.178735] pnp 00:08: [io 0x0000-0xffffffffffffffff disabled]
    [ 0.178738] pnp 00:08: [io 0x0290-0x0297]
    [ 0.178815] system 00:08: [io 0x0290-0x0297] has been reserved
    [ 0.178821] system 00:08: Plug and Play ACPI device, IDs PNP0c02 (active)
    [ 0.178918] pnp 00:09: [io 0x0010-0x001f]
    [ 0.178922] pnp 00:09: [io 0x0022-0x003f]
    [ 0.178925] pnp 00:09: [io 0x0044-0x005f]
    [ 0.178928] pnp 00:09: [io 0x0062-0x0063]
    [ 0.178934] pnp 00:09: [io 0x0065-0x006f]
    [ 0.178937] pnp 00:09: [io 0x0072-0x007f]
    [ 0.178939] pnp 00:09: [io 0x0080]
    [ 0.178942] pnp 00:09: [io 0x0084-0x0086]
    [ 0.178945] pnp 00:09: [io 0x0088]
    [ 0.178948] pnp 00:09: [io 0x008c-0x008e]
    [ 0.178951] pnp 00:09: [io 0x0090-0x009f]
    [ 0.178953] pnp 00:09: [io 0x00a2-0x00bf]
    [ 0.178956] pnp 00:09: [io 0x00e0-0x00ef]
    [ 0.178959] pnp 00:09: [io 0x04d0-0x04d1]
    [ 0.178962] pnp 00:09: [io 0x0800-0x087f]
    [ 0.178965] pnp 00:09: [io 0x0000-0xffffffffffffffff disabled]
    [ 0.178968] pnp 00:09: [io 0x0480-0x04bf]
    [ 0.178971] pnp 00:09: [mem 0xfed1c000-0xfed1ffff]
    [ 0.178974] pnp 00:09: [mem 0xfed20000-0xfed8ffff]
    [ 0.179068] system 00:09: [io 0x04d0-0x04d1] has been reserved
    [ 0.179072] system 00:09: [io 0x0800-0x087f] has been reserved
    [ 0.179076] system 00:09: [io 0x0480-0x04bf] has been reserved
    [ 0.179080] system 00:09: [mem 0xfed1c000-0xfed1ffff] has been reserved
    [ 0.179083] system 00:09: [mem 0xfed20000-0xfed8ffff] has been reserved
    [ 0.179089] system 00:09: Plug and Play ACPI device, IDs PNP0c02 (active)
    [ 0.179173] pnp 00:0a: [mem 0xfed00000-0xfed003ff]
    [ 0.179219] pnp 00:0a: Plug and Play ACPI device, IDs PNP0103 (active)
    [ 0.179280] pnp 00:0b: [mem 0xffb00000-0xffbfffff]
    [ 0.179284] pnp 00:0b: [mem 0xfff00000-0xffffffff]
    [ 0.179332] pnp 00:0b: Plug and Play ACPI device, IDs INT0800 (active)
    [ 0.179387] pnp 00:0c: [mem 0xffc00000-0xffefffff]
    [ 0.179460] system 00:0c: [mem 0xffc00000-0xffefffff] has been reserved
    [ 0.179466] system 00:0c: Plug and Play ACPI device, IDs PNP0c02 (active)
    [ 0.179524] pnp 00:0d: [mem 0xfec00000-0xfec00fff]
    [ 0.179528] pnp 00:0d: [mem 0xfee00000-0xfee00fff]
    [ 0.179608] system 00:0d: [mem 0xfec00000-0xfec00fff] could not be reserved
    [ 0.179612] system 00:0d: [mem 0xfee00000-0xfee00fff] has been reserved
    [ 0.179617] system 00:0d: Plug and Play ACPI device, IDs PNP0c02 (active)
    [ 0.179660] pnp 00:0e: [io 0x0060]
    [ 0.179663] pnp 00:0e: [io 0x0064]
    [ 0.179673] pnp 00:0e: [irq 1]
    [ 0.179734] pnp 00:0e: Plug and Play ACPI device, IDs PNP0303 PNP030b (active)
    [ 0.179796] pnp 00:0f: [irq 12]
    [ 0.179852] pnp 00:0f: Plug and Play ACPI device, IDs PNP0f03 PNP0f13 (active)
    [ 0.180166] pnp 00:10: [irq 4]
    [ 0.180170] pnp 00:10: [dma 0 disabled]
    [ 0.180173] pnp 00:10: [io 0x03f8-0x03ff]
    [ 0.180292] pnp 00:10: Plug and Play ACPI device, IDs PNP0501 (active)
    [ 0.180342] pnp 00:11: [mem 0xf0000000-0xf3ffffff]
    [ 0.180427] system 00:11: [mem 0xf0000000-0xf3ffffff] has been reserved
    [ 0.180433] system 00:11: Plug and Play ACPI device, IDs PNP0c02 (active)
    [ 0.180636] pnp 00:12: [mem 0x00000000-0x0009ffff]
    [ 0.180640] pnp 00:12: [mem 0x000c0000-0x000cffff]
    [ 0.180643] pnp 00:12: [mem 0x000e0000-0x000fffff]
    [ 0.180646] pnp 00:12: [mem 0x00100000-0x7fffffff]
    [ 0.180649] pnp 00:12: [mem 0x00000000-0xffffffffffffffff disabled]
    [ 0.180743] system 00:12: [mem 0x00000000-0x0009ffff] could not be reserved
    [ 0.180747] system 00:12: [mem 0x000c0000-0x000cffff] could not be reserved
    [ 0.180751] system 00:12: [mem 0x000e0000-0x000fffff] could not be reserved
    [ 0.180754] system 00:12: [mem 0x00100000-0x7fffffff] could not be reserved
    [ 0.180760] system 00:12: Plug and Play ACPI device, IDs PNP0c01 (active)
    [ 0.180916] pnp: PnP ACPI: found 19 devices
    [ 0.180919] ACPI: ACPI bus type pnp unregistered
    [ 0.188605] pci 0000:00:1c.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
    [ 0.188613] pci 0000:00:1c.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000
    [ 0.188617] pci 0000:00:1c.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000
    [ 0.188628] pci 0000:00:1c.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000
    [ 0.188646] pci 0000:00:1c.0: res[14]=[mem 0x00100000-0x000fffff] get_res_add_size add_size 200000
    [ 0.188650] pci 0000:00:1c.0: res[15]=[mem 0x00100000-0x000fffff 64bit pref] get_res_add_size add_size 200000
    [ 0.188653] pci 0000:00:1c.1: res[15]=[mem 0x00100000-0x000fffff 64bit pref] get_res_add_size add_size 200000
    [ 0.188657] pci 0000:00:1c.0: res[13]=[io 0x1000-0x0fff] get_res_add_size add_size 1000
    [ 0.188664] pci 0000:00:1c.0: BAR 14: assigned [mem 0x80000000-0x801fffff]
    [ 0.188668] pci 0000:00:1c.0: BAR 15: assigned [mem 0x80200000-0x803fffff 64bit pref]
    [ 0.188673] pci 0000:00:1c.1: BAR 15: assigned [mem 0x80400000-0x805fffff 64bit pref]
    [ 0.188678] pci 0000:00:1c.0: BAR 13: assigned [io 0x1000-0x1fff]
    [ 0.188683] pci 0000:00:01.0: PCI bridge to [bus 01]
    [ 0.188687] pci 0000:00:01.0: bridge window [io 0xd000-0xdfff]
    [ 0.188693] pci 0000:00:01.0: bridge window [mem 0xfea00000-0xfeafffff]
    [ 0.188697] pci 0000:00:01.0: bridge window [mem 0xe0000000-0xefffffff 64bit pref]
    [ 0.188703] pci 0000:00:1c.0: PCI bridge to [bus 03]
    [ 0.188707] pci 0000:00:1c.0: bridge window [io 0x1000-0x1fff]
    [ 0.188713] pci 0000:00:1c.0: bridge window [mem 0x80000000-0x801fffff]
    [ 0.188718] pci 0000:00:1c.0: bridge window [mem 0x80200000-0x803fffff 64bit pref]
    [ 0.188725] pci 0000:00:1c.1: PCI bridge to [bus 02]
    [ 0.188729] pci 0000:00:1c.1: bridge window [io 0xe000-0xefff]
    [ 0.188735] pci 0000:00:1c.1: bridge window [mem 0xfeb00000-0xfebfffff]
    [ 0.188740] pci 0000:00:1c.1: bridge window [mem 0x80400000-0x805fffff 64bit pref]
    [ 0.188747] pci 0000:00:1e.0: PCI bridge to [bus 04]
    [ 0.188784] pci 0000:00:1c.0: enabling device (0104 -> 0107)
    [ 0.188805] pci 0000:00:1e.0: setting latency timer to 64
    [ 0.188810] pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7]
    [ 0.188814] pci_bus 0000:00: resource 5 [io 0x0d00-0xffff]
    [ 0.188817] pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff]
    [ 0.188820] pci_bus 0000:00: resource 7 [mem 0x000d0000-0x000dffff]
    [ 0.188824] pci_bus 0000:00: resource 8 [mem 0x80000000-0xffffffff]
    [ 0.188827] pci_bus 0000:01: resource 0 [io 0xd000-0xdfff]
    [ 0.188830] pci_bus 0000:01: resource 1 [mem 0xfea00000-0xfeafffff]
    [ 0.188833] pci_bus 0000:01: resource 2 [mem 0xe0000000-0xefffffff 64bit pref]
    [ 0.188836] pci_bus 0000:03: resource 0 [io 0x1000-0x1fff]
    [ 0.188840] pci_bus 0000:03: resource 1 [mem 0x80000000-0x801fffff]
    [ 0.188843] pci_bus 0000:03: resource 2 [mem 0x80200000-0x803fffff 64bit pref]
    [ 0.188846] pci_bus 0000:02: resource 0 [io 0xe000-0xefff]
    [ 0.188849] pci_bus 0000:02: resource 1 [mem 0xfeb00000-0xfebfffff]
    [ 0.188852] pci_bus 0000:02: resource 2 [mem 0x80400000-0x805fffff 64bit pref]
    [ 0.188856] pci_bus 0000:04: resource 4 [io 0x0000-0x0cf7]
    [ 0.188859] pci_bus 0000:04: resource 5 [io 0x0d00-0xffff]
    [ 0.188862] pci_bus 0000:04: resource 6 [mem 0x000a0000-0x000bffff]
    [ 0.188865] pci_bus 0000:04: resource 7 [mem 0x000d0000-0x000dffff]
    [ 0.188868] pci_bus 0000:04: resource 8 [mem 0x80000000-0xffffffff]
    [ 0.188916] NET: Registered protocol family 2
    [ 0.189551] TCP established hash table entries: 262144 (order: 10, 4194304 bytes)
    [ 0.192082] TCP bind hash table entries: 65536 (order: 8, 1048576 bytes)
    [ 0.192686] TCP: Hash tables configured (established 262144 bind 65536)
    [ 0.192758] TCP: reno registered
    [ 0.192773] UDP hash table entries: 1024 (order: 3, 32768 bytes)
    [ 0.192798] UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes)
    [ 0.192949] NET: Registered protocol family 1
    [ 0.193147] pci 0000:01:00.0: Boot video device
    [ 0.193158] PCI: CLS 32 bytes, default 64
    [ 0.193230] Unpacking initramfs...
    [ 0.285578] Freeing initrd memory: 3008k freed
    [ 0.287514] audit: initializing netlink socket (disabled)
    [ 0.287533] type=2000 audit(1351450895.286:1): initialized
    [ 0.303689] HugeTLB registered 2 MB page size, pre-allocated 0 pages
    [ 0.306261] VFS: Disk quotas dquot_6.5.2
    [ 0.306342] Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
    [ 0.306570] msgmni has been set to 4007
    [ 0.306904] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 252)
    [ 0.306951] io scheduler noop registered
    [ 0.306955] io scheduler deadline registered
    [ 0.307043] io scheduler cfq registered (default)
    [ 0.307215] pcieport 0000:00:01.0: irq 40 for MSI/MSI-X
    [ 0.307324] pcieport 0000:00:1c.0: irq 41 for MSI/MSI-X
    [ 0.307443] pcieport 0000:00:1c.1: irq 42 for MSI/MSI-X
    [ 0.307659] intel_idle: does not run on family 6 model 15
    [ 0.307698] GHES: HEST is not enabled!
    [ 0.307788] Serial: 8250/16550 driver, 4 ports, IRQ sharing disabled
    [ 0.328340] serial8250: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A
    [ 0.349279] 00:10: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A
    [ 0.349579] Linux agpgart interface v0.103
    [ 0.349708] i8042: PNP: PS/2 Controller [PNP0303:PS2K,PNP0f03:PS2M] at 0x60,0x64 irq 1,12
    [ 0.352389] serio: i8042 KBD port at 0x60,0x64 irq 1
    [ 0.352444] serio: i8042 AUX port at 0x60,0x64 irq 12
    [ 0.352607] mousedev: PS/2 mouse device common for all mice
    [ 0.352700] rtc_cmos 00:03: RTC can wake from S4
    [ 0.352865] rtc_cmos 00:03: rtc core: registered rtc_cmos as rtc0
    [ 0.352896] rtc0: alarms up to one month, y3k, 114 bytes nvram, hpet irqs
    [ 0.352913] cpuidle: using governor ladder
    [ 0.352915] cpuidle: using governor menu
    [ 0.353088] drop_monitor: Initializing network drop monitor service
    [ 0.353195] TCP: cubic registered
    [ 0.353379] NET: Registered protocol family 10
    [ 0.353609] NET: Registered protocol family 17
    [ 0.353626] Key type dns_resolver registered
    [ 0.353965] PM: Checking hibernation image partition UUID=5b4248ba-30c3-48e5-a090-3a9c9f49d9c4
    [ 0.377050] input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
    [ 0.398600] PM: Hibernation image not present or could not be loaded.
    [ 0.398629] registered taskstats version 1
    [ 0.399166] rtc_cmos 00:03: setting system clock to 2012-10-28 19:01:35 UTC (1351450895)
    [ 0.400807] Freeing unused kernel memory: 772k freed
    [ 0.401093] Write protecting the kernel read-only data: 8192k
    [ 0.407007] Freeing unused kernel memory: 1408k freed
    [ 0.410355] Freeing unused kernel memory: 652k freed
    [ 0.424642] systemd-udevd[42]: starting version 194
    [ 0.484567] ACPI: bus type usb registered
    [ 0.484613] usbcore: registered new interface driver usbfs
    [ 0.484629] usbcore: registered new interface driver hub
    [ 0.484994] usbcore: registered new device driver usb
    [ 0.485924] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
    [ 0.485981] ehci_hcd 0000:00:1d.7: setting latency timer to 64
    [ 0.485987] ehci_hcd 0000:00:1d.7: EHCI Host Controller
    [ 0.485997] ehci_hcd 0000:00:1d.7: new USB bus registered, assigned bus number 1
    [ 0.489913] ehci_hcd 0000:00:1d.7: debug port 1
    [ 0.489922] ehci_hcd 0000:00:1d.7: cache line size of 32 is not supported
    [ 0.489951] ehci_hcd 0000:00:1d.7: irq 23, io mem 0xfe9fbc00
    [ 0.493550] SCSI subsystem initialized
    [ 0.496118] ACPI: bus type scsi registered
    [ 0.496720] ehci_hcd 0000:00:1d.7: USB 2.0 started, EHCI 1.00
    [ 0.497003] hub 1-0:1.0: USB hub found
    [ 0.497013] hub 1-0:1.0: 8 ports detected
    [ 0.497923] uhci_hcd: USB Universal Host Controller Interface driver
    [ 0.497967] uhci_hcd 0000:00:1d.0: setting latency timer to 64
    [ 0.497972] uhci_hcd 0000:00:1d.0: UHCI Host Controller
    [ 0.497982] uhci_hcd 0000:00:1d.0: new USB bus registered, assigned bus number 2
    [ 0.498020] uhci_hcd 0000:00:1d.0: irq 23, io base 0x0000c480
    [ 0.498143] libata version 3.00 loaded.
    [ 0.499305] hub 2-0:1.0: USB hub found
    [ 0.499317] hub 2-0:1.0: 2 ports detected
    [ 0.499540] uhci_hcd 0000:00:1d.1: setting latency timer to 64
    [ 0.499547] uhci_hcd 0000:00:1d.1: UHCI Host Controller
    [ 0.499557] uhci_hcd 0000:00:1d.1: new USB bus registered, assigned bus number 3
    [ 0.499610] uhci_hcd 0000:00:1d.1: irq 19, io base 0x0000c800
    [ 0.500253] hub 3-0:1.0: USB hub found
    [ 0.500264] hub 3-0:1.0: 2 ports detected
    [ 0.500442] uhci_hcd 0000:00:1d.2: setting latency timer to 64
    [ 0.500448] uhci_hcd 0000:00:1d.2: UHCI Host Controller
    [ 0.500463] uhci_hcd 0000:00:1d.2: new USB bus registered, assigned bus number 4
    [ 0.500512] uhci_hcd 0000:00:1d.2: irq 18, io base 0x0000c880
    [ 0.500811] hub 4-0:1.0: USB hub found
    [ 0.500822] hub 4-0:1.0: 2 ports detected
    [ 0.501016] uhci_hcd 0000:00:1d.3: setting latency timer to 64
    [ 0.501022] uhci_hcd 0000:00:1d.3: UHCI Host Controller
    [ 0.501039] uhci_hcd 0000:00:1d.3: new USB bus registered, assigned bus number 5
    [ 0.501090] uhci_hcd 0000:00:1d.3: irq 16, io base 0x0000cc00
    [ 0.502169] hub 5-0:1.0: USB hub found
    [ 0.502179] hub 5-0:1.0: 2 ports detected
    [ 0.502422] ata_piix 0000:00:1f.1: version 2.13
    [ 0.502499] ata_piix 0000:00:1f.1: setting latency timer to 64
    [ 0.503963] scsi0 : ata_piix
    [ 0.504744] scsi1 : ata_piix
    [ 0.505372] ata1: PATA max UDMA/100 cmd 0x1f0 ctl 0x3f6 bmdma 0xffa0 irq 14
    [ 0.505378] ata2: PATA max UDMA/100 cmd 0x170 ctl 0x376 bmdma 0xffa8 irq 15
    [ 0.505429] ata_piix 0000:00:1f.2: MAP [
    [ 0.505431] P0 P2 P1 P3 ]
    [ 0.505484] ata_piix 0000:00:1f.2: setting latency timer to 64
    [ 0.507566] scsi2 : ata_piix
    [ 0.508975] scsi3 : ata_piix
    [ 0.509624] ata3: SATA max UDMA/133 cmd 0xc400 ctl 0xc080 bmdma 0xb880 irq 19
    [ 0.509630] ata4: SATA max UDMA/133 cmd 0xc000 ctl 0xbc00 bmdma 0xb888 irq 19
    [ 0.684177] ata3.00: ATA-8: WDC WD6400AAKS-00A7B2, 01.03B01, max UDMA/133
    [ 0.684184] ata3.00: 1250263728 sectors, multi 16: LBA48 NCQ (depth 0/32)
    [ 0.691230] ata3.00: configured for UDMA/133
    [ 0.691417] scsi 2:0:0:0: Direct-Access ATA WDC WD6400AAKS-0 01.0 PQ: 0 ANSI: 5
    [ 0.699349] sd 2:0:0:0: [sda] 1250263728 512-byte logical blocks: (640 GB/596 GiB)
    [ 0.699463] sd 2:0:0:0: [sda] Write Protect is off
    [ 0.699468] sd 2:0:0:0: [sda] Mode Sense: 00 3a 00 00
    [ 0.699516] sd 2:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
    [ 0.774080] sda: sda1 sda2 sda3 sda4 < sda5 sda6 sda7 sda8 sda9 sda10 >
    [ 0.775132] sd 2:0:0:0: [sda] Attached SCSI disk
    [ 1.096765] usb 2-2: new low-speed USB device number 2 using uhci_hcd
    [ 1.275214] usbcore: registered new interface driver usbhid
    [ 1.275219] usbhid: USB HID core driver
    [ 1.276959] input: USB Optical Mouse as /devices/pci0000:00/0000:00:1d.0/usb2/2-2/2-2:1.0/input/input1
    [ 1.277358] hid-generic 0003:04B3:310C.0001: input,hidraw0: USB HID v1.11 Mouse [USB Optical Mouse] on usb-0000:00:1d.0-2/input0
    [ 1.290145] tsc: Refined TSC clocksource calibration: 1613.217 MHz
    [ 1.290153] Switching to clocksource tsc
    [ 1.426263] PM: Starting manual resume from disk
    [ 1.426269] PM: Hibernation image partition 8:8 present
    [ 1.426271] PM: Looking for hibernation image.
    [ 1.426530] PM: Image not found (code -22)
    [ 1.426535] PM: Hibernation image not present or could not be loaded.
    [ 1.648559] EXT4-fs (sda10): mounted filesystem with ordered data mode. Opts: (null)
    [ 12.388263] EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null)
    [ 12.459036] fuse init (API version 7.20)
    [ 13.150864] Adding 1951860k swap on /dev/sda8. Priority:-1 extents:1 across:1951860k
    [ 0.691798] ACPI: Invalid Power Resource to register!
    [ 14.060519] synergys[417]: segfault at 0 ip 0000000000449fa9 sp 00007fff68a19a90 error 4
    [ 14.060531] in synergys[400000+cb000]
    (I disconnected the second hard drive in the machine to verify that it wasn't causing problems. Hence the sda / sdb discrepancy)
    I've been able to mount the partitions and chroot into them, but I don't know what to do from here. Oddly, I've discovered that 'ls' doesn't output anything, although bash completion seems to work ok.
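    For reference, this is roughly what I ran to mount and chroot (a sketch from memory - which partition holds which mountpoint is my assumption, based on the sda10/sda9 mounts in the log above):
    mount /dev/sda10 /mnt            # presumably the root partition
    mount /dev/sda9 /mnt/home        # presumably /home
    mount --bind /dev /mnt/dev
    mount --bind /proc /mnt/proc
    mount --bind /sys /mnt/sys
    chroot /mnt /bin/bash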
    Any ideas on what I should do to get my system running again?
    Thanks in advance,
    Bernie

    Have you read the forum rules? Asking for support for Parabola isn't going to get you very far.
    https://wiki.archlinux.org/index.php/Fo … pport_ONLY

  • Crashes and read-only file systems

    Notice: I apologize for the long post; I've tried to be as thorough as possible. I have searched everywhere for possible solutions, but everything I've found ends up being a temporary workaround or doesn't apply to my situation. Any help, even as simple as "have you checked out XYZ log, it's hidden here", would be greatly appreciated. Thanks.
    I'm not sure exactly what caused the issues below, but they started within a day of running pacman -Syu. I hadn't run that since I first installed Arch on December 2nd of this year.
    Setup:
    Thinkpad 2436CTO
    UEFI/GPT
    SSD drive
    Partitions: UEFISYS, Boot, LVM
    The LVM is encrypted and is broken up as: /root, /var, /usr, /tmp, /home
    All LVM file systems are EXT4 (used to have /var and /tmp as ReiserFS)
    The first sign that something was wrong was GNOME freezing. GNOME would then crash and I'd get dropped back to the shell with all filesystems mounted read-only. I started having the same issues as this OP:
    https://bbs.archlinux.org/viewtopic.php?id=150704
    At the time, I had /var and /tmp as ReiserFS, and would also get reiserfs_read_locked_inode errors.
    When shutting down (even after non-crashed sessions) I would notice this:
    Failed unmounting /var
    Failed unmounting /usr
    Followed by a ton of these:
    device-mapper: remove ioctl on <my LVM group> failed: Device or resource busy
    Neither of these errors had ever appeared before.
    After hours of looking for solutions (and not finding any that worked), I was convinced (without any proof) that my Reiser file systems were corrupt, so I reformatted my entire SSD and started anew - not the Arch way, I know. I set all logical volumes to EXT4.
    After starting anew, I noticed that
    device-mapper: remove ioctl on LVM_SysGroup failed: Device or resource busy
    was still showing up, even with a stock Arch setup (maybe even when powering off via the Arch install ISO - I don't remember). After a lot of searching, I found that most people judged it a harmless error, so I ignored it and continued setting up Arch.
    I set up GNOME and a basic LAMP server, and everything seemed to work for a couple of hours. Soon after, the same old issues came back. The systemd-journald issue returned, and per the workaround on https://bbs.archlinux.org/viewtopic.php?id=150704 and a couple of other places, I rotated the journals and stopped journald from saving to storage. That at least stopped THOSE errors from overwhelming the shell, but I would still get screen freezes, crashes, and read-only file systems.
    I had to force the laptop to power off, since the poweroff/reboot/halt commands weren't working (they would return errors about the filesystems being mounted read-only).
    I used every disk-checking function available, from the tests (SMART test included) built into my laptop's BIOS to a full-blown fsck. All tests showed the drive was working fine, and fsck would report everything as either clean, or:
    Clearing orphaned inode ## (uid=89, gid=89, mode=0100600, size=###)
    Free blocks count wrong (###, counted=###)
    Which I would opt to fix.  Nothing serious, though.
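    For the record, the fsck runs were along these lines, from the install ISO (a sketch - the partition holding the LUKS container and the logical volume names are assumptions based on my partition list above; LVM_SysGroup is the volume group name from the error earlier):
    cryptsetup luksOpen /dev/sda3 cryptlvm   # unlock the encrypted container
    vgchange -ay LVM_SysGroup                # activate the logical volumes
    for lv in root var usr tmp home; do
        fsck.ext4 -f /dev/LVM_SysGroup/$lv
    done
    vgchange -an LVM_SysGroup                # deactivate again before rebooting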
    I could then safely boot back into Arch and use the system fine until it decided to freeze/crash again, and the above repeated all over again.
    The surest way of recreating this for me is to run a cron job on a local site I'm developing. After a brief screen freeze (mouse still movable but everything else unresponsive), I'll run systemctl status mysqld.service and notice that mysqld has gone down.
    It seems that it's at this point that my file systems get mounted read-only, as trying to do virtually anything results in:
    unable to open /var/db/sudo/...: Read-only file system
    After some time, X/Gnome crashes and I get sent back to shell with
    ERROR: file_stream_metrics.cc(37)
    RecordFileError() err = 30 source = 1 record = 0
    Server terminated successfully (0)
    Closing log file.
    ...or_delegate.h(30)] sqlite error 1, errno 0: SQL logic error or missing database [1157:1179
    rm: cannot remove '/tmp/serverauth.teuroEBhtl': Read-only file system
    Before all this happened, I was using Arch just fine for a few weeks.  I wiped the drives and started anew, and this still happens with just the minimal number of packages installed.
    I've searched for solutions to each individual problem, but I only come across hacks that don't solve anything (like turning off journal log storage), or solutions that don't apply to my case.
    At this point, I'm so overwhelmed that I'm not even sure where to pick up troubleshooting this issue.
    Thanks in advance for any help

    Did this occur when you booted from the live/install media?
    What is your current setup? That is, partitions, filesystems etc. I take it you have not yet reinstalled X but are in the default CLI following installation?
    If turning off log storage didn't help, reenable it so that you may at least stand a chance of finding something useful.
    What services, if any, are you running? What non-default daemons etc.?
    Does it happen if you keep the machine off line?
    Have you done pacman -Syu since installation and dealt with any *.pacnew files?
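    For the last point, something like this will list any unmerged configuration files (a quick sketch):
    find /etc -name '*.pacnew' -o -name '*.pacsave'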
    Last edited by cfr (2012-12-26 22:17:57)

  • Difference between ASM Disk Group, ADVM Volume and ACFS File system

    Q1. What is the difference between an ASM Disk Group and an ADVM Volume?
    To my mind, an ASM Disk Group is effectively a logical volume for database files (including FRA files).
    11gR2 seems to have introduced the concepts of ADVM volumes and ACFS File Systems.
    An 11gR2 ASM Disk Group can contain :
    ASM Disks
    ADVM volumes
    ACFS file systems
    Q2. ADVM volumes appear to be dynamic volumes.
    However, is this not effectively layering a logical volume (the ADVM volume) beneath an ASM Disk Group (conceptually a logical volume as well)?
    Worse still, if you have left ASM Disk Group redundancy to the hardware RAID/SAN level (as Oracle recommends), you could effectively have 3 layers of logical disk (ASM on top of ADVM on top of RAID/SAN)?
    Q3. If it is 2 layers of logical disk (i.e. ASM on top of ADVM), what makes this better than 2 layers using a 3rd-party volume manager (e.g. ASM on top of a 3rd-party LVM) - something Oracle advises against?
    Q4. ACFS file systems seem to be clustered file systems for non-database files, including ORACLE_HOMEs, application exes, etc. (but NOT GRID_HOME, OS root, OCRs or voting disks).
    Can you create/modify ACFS file systems using ASM?
    The Oracle topology diagram for ASM in the 11gR2 ASM Admin guide shows ACFS as part of ASM. I am not sure from this whether ACFS is part of ASM or ASM sits on top of ACFS.
    Q5. Connected to Q4, there seem to be a number of different ways ACFS file systems can be created. Which of the below are valid methods?
    through ASM?
    through native OS file system creation?
    through OEM?
    through acfsutil?
    My head is exploding.
    Any help and clarification greatly appreciated.
    Jim

    Q1 - An ADVM volume is a special type of file created in the ASM DG. Once created, it exposes a block device on the OS that can be used just like any other block device.  http://docs.oracle.com/cd/E16655_01/server.121/e17612/asmfilesystem.htm#OSTMG30000
    Q2 - The ASM disk group is a disk group, not really a logical volume. It combines attributes of both when used for database purposes, as the database and certain other applications know how to speak the "ASM" protocol. However, you won't find any general-purpose applications that can do so. In addition, some customers prefer to deal directly with file systems and volume devices, which is what ADVM is made for. In your way of thinking, you could have 3 layers of logical disk, but each of them provides different attributes and characteristics. This is not a bad thing, though, as each has a slightly different focus - OS file system/device, database-specific, and storage-centric.
    Q3 - ADVM was specifically developed to extend the characteristics of ASM for use by general OS applications. It understands database performance characteristics and is tuned to work well in that situation. Because it is developed in-house, it takes advantage of the ASM design model. Additionally, rather than having to contact multiple vendors for support, you only need to call Oracle - a one-stop shop.
    Q4 - You can create and modify ACFS file systems using command-line tools and ASMCA. Creating and modifying logical volumes happens through SQL (ASM), asmcmd, and ASMCA. EM can also be used for both. ACFS sits on top of ADVM, which is itself a file in an ASM disk group. ACFS is aware of the characteristics of ASM/ADVM volumes and tunes its IO to make the best use of those characteristics.
    Q5 - several ways:
    1) Connect to ASM with SQL and use 'alter diskgroup add volume', as Mihael points out. This creates an ADVM volume. Then format the volume using 'mkfs' (*nix) or 'acfsformat' (Windows).
    2) Use ASMCA - a GUI to create a volume and format a file system. Probably the easiest if your head is exploding.
    3) Use 'asmcmd' to create a volume, and 'mkfs' to format the ACFS file system.
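    For example, option 3 end to end might look like this on Linux (a sketch only - the disk group name DATA, the volume name VOL1, the size, the mount point and the reported device path are all placeholders; check 'volinfo' for the real device name):
    asmcmd volcreate -G DATA -s 10G VOL1     # create the ADVM volume inside disk group DATA
    asmcmd volinfo -G DATA VOL1              # note the 'Volume Device' it reports, e.g. /dev/asm/vol1-123
    mkfs -t acfs /dev/asm/vol1-123           # format the volume device as an ACFS file system
    mkdir -p /u01/app/acfsmounts/myacfs
    mount -t acfs /dev/asm/vol1-123 /u01/app/acfsmounts/myacfs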
    Here is information on ASMCA, with examples:
    http://docs.oracle.com/cd/E16655_01/server.121/e17612/asmca_acfs.htm#OSTMG94348
    Information on command line tools, with examples:
    Basic Steps to Manage Oracle ACFS Systems

  • Unix shell: Environment variable works for file system but not for ASM path

    We would like to switch from file system to ASM for data files of Oracle tablespaces. For the path of the data files, we have so far used environment variables, e.g.,
    CREATE TABLESPACE BMA DATAFILE '${ORACLE_DB_DATA}/bma.dbf' SIZE 2M AUTOEXTEND ON;
    This works just fine (from shell scripts, PL/SQL packages, etc.) if ORACLE_DB_DATA denotes a file system path, such as "/home/oracle", but it doesn't work if the environment variable denotes an ASM path like "+DATA/rac/datafile". I assume it has something to do with "+" being a special character in the shell. However, escaping it as "\+" didn't work. I tried with both bash and ksh.
    Oracle managed files (e.g., set DB_CREATE_FILE_DEST to +DATA/rac/datafile) would be an option. However, this would require changing quite a few scripts and programs. Therefore, I am looking for a solution with the environment variable. Any suggestions?
    The example below is on a RAC Attack system (http://en.wikibooks.org/wiki/RAC_Attack_-OracleCluster_Database_at_Home). I get the same issues on Solaris/AIX/HP-UX on 11.2.0.3 also.
    Thanks,
    Martin
    ==== WORKS JUST FINE WITH ORACLE_DB_DATA DENOTING FILE SYSTEM PATH ====
    collabn1:/home/oracle[RAC1]$ export ORACLE_DB_DATA=/home/oracle
    collabn1:/home/oracle[RAC1]$ sqlplus "/ as sysdba"
    SQL*Plus: Release 11.2.0.1.0 Production on Fri Aug 24 20:57:09 2012
    Copyright (c) 1982, 2009, Oracle. All rights reserved.
    Connected to:
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
    With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
    Data Mining and Real Application Testing options
    SQL> CREATE TABLESPACE BMA DATAFILE '${ORACLE_DB_DATA}/bma.dbf' SIZE 2M AUTOEXTEND ON;
    Tablespace created.
    SQL> !ls -l ${ORACLE_DB_DATA}/bma.dbf
    -rw-r----- 1 oracle asmadmin 2105344 Aug 24 20:57 /home/oracle/bma.dbf
    SQL> drop tablespace bma including contents and datafiles;
    ==== DOESN’T WORK WITH ORACLE_DB_DATA DENOTING ASM PATH ====
    collabn1:/home/oracle[RAC1]$ export ORACLE_DB_DATA="+DATA/rac/datafile"
    collabn1:/home/oracle[RAC1]$ sqlplus "/ as sysdba"
    SQL*Plus: Release 11.2.0.1.0 Production on Fri Aug 24 21:08:47 2012
    Copyright (c) 1982, 2009, Oracle. All rights reserved.
    Connected to:
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
    With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
    Data Mining and Real Application Testing options
    SQL> CREATE TABLESPACE BMA DATAFILE '${ORACLE_DB_DATA}/bma.dbf' SIZE 2M AUTOEXTEND ON;
    CREATE TABLESPACE BMA DATAFILE '${ORACLE_DB_DATA}/bma.dbf' SIZE 2M AUTOEXTEND ON
    ERROR at line 1:
    ORA-01119: error in creating database file '${ORACLE_DB_DATA}/bma.dbf'
    ORA-27040: file create error, unable to create file
    Linux Error: 2: No such file or directory
    SQL> -- works if I substitute manually
    SQL> CREATE TABLESPACE BMA DATAFILE '+DATA/rac/datafile/bma.dbf' SIZE 2M AUTOEXTEND ON;
    Tablespace created.
    SQL> drop tablespace bma including contents and datafiles;

    My revised understanding is that it is not a shell issue with replacing "+", but an Oracle one. It appears that Oracle first checks whether the path starts with a "+". If it does not (the file system case), it performs the normal environment variable resolution. If it does start with a "+" (the ASM case), Oracle does not perform environment variable resolution. Escaping, such as "\+" instead of "+", doesn't work either.
    To be more specific regarding my use case: I need the substitution to work from SQL*Plus scripts started with @script, PL/SQL packages with execute immediate, and optionally entered interactively in SQL*Plus.
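    One workaround that seems to cover the shell-script case is to let the shell expand the variable and hand it to SQL*Plus as a substitution variable (a sketch only - it does not cover execute immediate inside PL/SQL, and @scripts would need the define issued first):
    export ORACLE_DB_DATA="+DATA/rac/datafile"
    {
      echo "define ORACLE_DB_DATA=$ORACLE_DB_DATA"    # shell expands the variable here
      echo "CREATE TABLESPACE BMA DATAFILE '&ORACLE_DB_DATA/bma.dbf' SIZE 2M AUTOEXTEND ON;"
      echo "exit"
    } | sqlplus -s "/ as sysdba"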
    Thanks,
    Martin

  • In WL Cluster JMS Persistent Storefiles on local file system.

    Hi Gurus,
    We have a WebLogic cluster 10.3.4.1 (3 nodes) on Linux 5 which is set up for Oracle Service Bus. The cluster has 3 machines configured. Currently the JMS persistent store files are on a shared file system, but we are having some issues with it and want to move the JMS persistent store files to the local filesystem instead. Is this possible in a clustered WL environment or not?
    Thanks All.

    The data will be uploaded to the server, and a PHP program will read the data and use it. I've already implemented it using JSON. I had to install a JSON stringifier and a JSON parser in a subfolder of configuration/shared/common/scripts. It's working well.
    Thanks
    mitzy_kitty

  • Xml validation ---- file system of PI ???

    Hi all,
    I read about the XML validation concept of PI 7.1. I am unable to find any practical explanation of the following:
    "To validate the structure of a PI message payload, you should export the schemas from the ESR and save them in the file system of PI."
    "Save them in the file system of PI" - where is this supposed to be stored?
    Will it be necessary for both the Adapter Engine and the Integration Engine validation?
    Please share any other important aspects of this validation concept that you came across during your practical implementation.
    Thanks in advance

    Hi Netaji,
    Validating XML documents is a new feature introduced in SAP NetWeaver Process Integration 7.1. The validation can be performed at two different locations - the Integration Server or the Adapter Engine. Validation can be done in both synchronous and asynchronous operations.
    In a synchronous scenario, or when using an adapter that can handle synchronous messages (e.g. the HTTP and SOAP adapters), a validation error is returned to the sender. In asynchronous scenarios (e.g. the file adapter), the error message is logged to SXI_MONITOR when the validation is done on the Integration Server, or to the Runtime Workbench (RWB) when the validation is done on the Adapter Engine. In both cases, the message processing is terminated with an error.
    Both the Integration Server and the Adapter Engine can be used for validating XML from the sender. However, only the Integration Server can be used to validate the XML when sending to the receiver. The XML validation configuration is done in either the Sender Agreement or the Receiver Agreement.
    In the current release, as of PI 7.1, the XML schema (XSD) to be used for the validation has to reside in a file directory under the JEE engine. The XSD file has to be explicitly copied into a specific directory, depending on where the validation is to be performed.
    Regards,
    Leela
