Compile in /dev/shm?

Hello,
I have an SSD installed and followed the instructions on the wiki page: https://wiki.archlinux.org/index.php/SSD
Now I'm in heavy development on a project using GWT. How can I make sure that when I compile my application, the build happens in /dev/shm or some other in-memory location?

https://wiki.archlinux.org/index.php/Makepkg.conf
https://wiki.archlinux.org/index.php//dev/shm
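The two links above cover the makepkg case; a rough sketch of both approaches follows (paths, classpath and module name are illustrative, adjust to your setup). Note that tmpfs contents vanish on reboot, so only put regenerable build output there.

# /etc/makepkg.conf -- build Arch packages in tmpfs instead of the package directory
BUILDDIR=/tmp/makepkg        # /tmp is already tmpfs on Arch; /dev/shm/makepkg works too

# For the GWT compiler, point its output directory (-war) at tmpfs:
mkdir -p /dev/shm/$USER/gwt-out
java -cp <gwt classpath> com.google.gwt.dev.Compiler -war /dev/shm/$USER/gwt-out com.example.MyModule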

Similar Messages

  • /dev/shm does not release space after deleting a file

    When I remove a file from /dev/shm, the available space still does not increase.
    Is there a command for shrinking shm?

    http://www.linuxquestions.org/questions … on-806387/
    Edit: Even the wiki has a thing or two on this subject https://wiki.archlinux.org/index.php//dev/shm
    Last edited by karol (2011-05-29 10:22:27)
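    For what it's worth, tmpfs normally frees the space as soon as a file is deleted, so the usual culprit is a process that still has the deleted file open; a hedged way to check, and to resize the mount if a smaller cap is wanted:

    lsof +L1 | grep /dev/shm            # deleted-but-still-open files pin the space
    mount -o remount,size=2G /dev/shm   # size= is a ceiling, not an allocation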

  • /dev/shm on Oracle Linux 6.x to run Oracle 11g R2 - manual configuration?

    Hello
    We are building a server to run Oracle 11g R2 database (11.2.0.3 x64) on Oracle Linux 6.2 with UEK R2.
    Our preference is to use AMM and have Oracle 11g R2 manage memory. We may impose some minimum SGA and PGA memory allocations, but basically we aim to use MEMORY_TARGET to manage overall memory.
    By default Linux makes the size of /dev/shm ~50% of server physical RAM, as far as I can tell.
    Here is the /etc/fstab entry created by the installation:
    tmpfs /dev/shm tmpfs defaults 0 0
    This Linux server will only run the Oracle 11g R2 database and some monitoring software; almost no application code will run on it. The application code is Java based and will run on a separate application server.
    Can I change the */etc/fstab* entry for /dev/shm to manually increase its size to ~80-90% of the server's physical RAM? Is that a good idea?
    The server is 64-bit with 64 GB of RAM, so I am thinking of manually making /dev/shm ~55 GB, leaving ~8 GB for other purposes.
    Right now it's about 32GB (50%?) if I leave the /dev/shm 'defaults' on.
    many thanks
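    For reference, the change being discussed is just the size= option on the tmpfs line (55g here matches the figure above), plus a remount or reboot:

    # /etc/fstab
    tmpfs   /dev/shm   tmpfs   defaults,size=55g   0 0

    mount -o remount,size=55g /dev/shm    # apply immediately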

    thanks,
    I have read the doc (what little there is on this topic).
    I have asked on the database forum.
    Just FYI, below is the proof:
    SQL> show parameter mem
    NAME                    TYPE     VALUE
    hi_shared_memory_address     integer     0
    memory_max_target          big integer 4G
    memory_target          big integer 0
    shared_memory_address     integer     0
    SQL> show parameter ga
    NAME                    TYPE     VALUE
    lock_sga               boolean     FALSE
    pga_aggregate_target          big integer 1600M
    pre_page_sga          boolean     FALSE
    sga_max_size          big integer 3G
    sga_target               big integer 1600M
    still does not work.
    And I can't change memory_max_target to 0, because I get an error on startup:
    SQL> alter system set memory_max_target=0 scope=spfile;
    System altered.
    SQL> shutdown immediate;
    Database closed.
    Database dismounted.
    ORACLE instance shut down.
    SQL> startup;
    ORA-01078: failure in processing system parameters
    ORA-00843: Parameter not taking MEMORY_MAX_TARGET into account
    ORA-00849: SGA_TARGET 3221225472 cannot be set to more than MEMORY_MAX_TARGET 0.
    BUT if memory_max_target is > 0, then the alert log says hugepages cannot be used.
    It feels like a catch-22...
    thanks
    Edited by: yurib on Jun 1, 2012 4:53 PM
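    For what it's worth, the usual way out of that catch-22 (a hedged sketch, not Oracle-blessed advice) is to drop AMM entirely when you want hugepages: reset MEMORY_TARGET and MEMORY_MAX_TARGET out of the spfile rather than setting them to 0, and size the instance with SGA_TARGET/PGA_AGGREGATE_TARGET only, e.g.:

    SQL> alter system reset memory_target scope=spfile sid='*';
    SQL> alter system reset memory_max_target scope=spfile sid='*';
    SQL> alter system set sga_target=3G scope=spfile;
    SQL> alter system set pga_aggregate_target=1600M scope=spfile;
    SQL> shutdown immediate;
    SQL> startup;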

  • PRKC-1031: error checking free space for /dev/shm

    Dear,
    hope someone can help me with this error.
    Installing Oracle 11g on a 2-node OUL5 Linux cluster.
    The cluster is fine. When installing the database, I always get this error. I did increase /dev/shm to 2G but still get the error, and it would not let me continue.
    The specific prerequisite checks were fine; only the swap check failed.
    Linux rac11g1 2.6.18-92.1.13.0.1.el5PAE #1 SMP
    Filesystem 1K-blocks Used Available Use% Mounted on
    /dev/mapper/VolGroup00-LogVol00
    22282108 8542032 12589920 41% /
    /dev/hda1 101086 40264 55603 42% /boot
    tmpfs 2097152 0 2097152 0% /dev/shm
    /dev/mapper/VolGroup01-LogVol01
    25189484 769252 23140644 4% /u01
    Thanks in advance.

    Hi
    You are quite welcome. So you don't have any issues with the user equivalence part during the cluvfy checks?
    How many instances are you running on the server that hosts ASM?
    If more than one, then read this: did you change your shm values in the kernel? Please check those values in section 2.6, Configuring Kernel Parameters:
    64 bits: http://download.oracle.com/docs/cd/B19306_01/install.102/b15667/pre_install.htm#i1011296
    32 bits: http://download.oracle.com/docs/cd/B19306_01/install.102/b15660/pre_install.htm#sthref264
    Note that you have to increase these: semmni, semmns and shmmni should be multiplied by the number of instances, and SHMMAX should be set to the available memory.
    Can you please give us the values from your server?
    Your physical memory :
    grep MemTotal /proc/meminfo
    And also the result of
    ipcs -l
    Edited by: Hub on Oct 24, 2008 11:07 PM
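    For reference, those kernel parameters live in /etc/sysctl.conf and take effect with sysctl -p; the values below are only the common starting points from the install guides linked above and have to be scaled to the number of instances and the RAM actually in the box:

    # /etc/sysctl.conf (illustrative values)
    kernel.sem = 250 32000 100 128     # semmsl semmns semopm semmni
    kernel.shmmni = 4096
    kernel.shmmax = 2147483648         # largest single segment, in bytes -- size it to fit the SGA
    kernel.shmall = 2097152            # total shared memory, in pages

    sysctl -p      # reload
    ipcs -l        # confirm the limits the kernel is actually using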

  • /dev/shm not mounted /dev busy

    Recently I was trying out a boot disk I had made, and basically, I switched it off several times due to it booting the completely wrong kernel.
    Now it gets to the "Checking Filesystems" stage and then comes up [Failed]. I then get a message saying a reboot is required and that it will reboot in 15 seconds; just a few seconds before it reboots I get "/dev/shm not mounted, /dev busy" or something similar.
    I've booted up my sysresccd, ran "fsck.ext4 -fcv /dev/sda2" to force a check and scan for any bad blocks, it came up clean, then I rebooted and got the same error, so I copied the kernel and system.map over to /boot to make sure there's no corruption and reinstalled initscripts and util-linux-ng, rebooted, same error.
    Tried different kernels, I've checked fstab and menu.lst, no problems there, so I still don't get why I still get the same problem.
    Anybody know of any fix other than reinstalling arch?
    EDIT: Editing /etc/rc.sysinit and commenting out the fsck part made it work, finding out why fsck failed now.
    Last edited by compgenius999 (2010-03-03 19:21:35)

    anybody?

  • [SOLVED] Trying to allocate more ram to /dev/shm

    I want to allocate more than 50% ram to '/dev/shm'.
    In the wiki it says to edit '/etc/fstab' and add the size parameter but I know that '/etc/fstab' no longer mounts '/dev/shm', '/etc/rc.sysinit' does.
    Is it safe to add the '/dev/shm' entry to '/etc/fstab', or should I edit '/etc/rc.sysinit' now?
    Thanks for any responses
    Last edited by pluckypigeon (2012-10-01 23:28:04)

    You should add an entry to /etc/fstab.
    rc.sysinit (or systemd) will indeed mount /dev/shm with standard options in early boot, but then rc.sysinit (or systemd) will call /usr/lib/systemd/systemd-remount-fs, which will remount all the api filesystems (such as /dev/shm) that have entries in fstab, with the correct options specified there.
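    Concretely, an entry like this is all that is needed (size= accepts absolute values or a percentage of RAM); after editing, either reboot or remount by hand and check the new ceiling:

    tmpfs   /dev/shm   tmpfs   defaults,size=75%   0 0

    mount -o remount,size=75% /dev/shm
    df -h /dev/shm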

  • JOXSHM files in /dev/shm

    I have a bunch of old files with names like JOXSHM_EXT_0_DEMDB_121208837 in the /dev/shm folder. I believe these are supposed to be removed when the database is shut down. The DEMDB instance has been down for months. Do I have to reboot to remove them, or is it safe to delete them?
    I am running 11.2 on RHEL 5

    uptime
    10:36:01 up 229 days, 1:16, 3 users, load average: 0.46, 0.49, 0.45
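    A hedged way to confirm the files really are orphaned before removing them (no reboot needed): if nothing still maps the segments, deleting them simply frees tmpfs space.

    lsof /dev/shm | grep DEMDB                          # anything still holding the old instance's segments?
    fuser -v /dev/shm/JOXSHM_EXT_0_DEMDB_121208837      # same check for a single file

    # if both come back empty, the leftovers can be removed
    rm -i /dev/shm/*DEMDB*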

  • Question about 11gR2 Grid, RAC, /dev/shm and Automatic Memory Management

    Hello,
    i've recently installed grid and rdbms software 11.2.0.2 on a two node Oracle Linux cluster with 128gb ram each node.
    I'm using ASM to store data and ocr and I'm testing Automatic Memory Management.
    When I finished Grid+RDBMS installation I've seen that /dev/shm size is 64gb (half of my total RAM).
    I've created a database with dbca and when I was asked to choose if I wanted to use AMM I've noticed that I could
    allocate only about 60gb for Oracle. If I chose more than 90gb I got an error saying:
    Using Automatic Memory Management requires 60gb available in my two nodes.
    The current available space in the two nodes is only 30gb and 30gb.
    If you want to use AMM you should either free up some space in /dev/shm
    or reduce the memory allocated to Oracle
    I was wondering when (during the installation or the settings of kernel parameters) did I define the space of /dev/shm ?
    Since I have 128gb of RAM wouldn't it be better to use more than 64gb of ram for my /dev/shm tmpfs partition ?
    Is there a limit or a ratio for best practice for my RAM and the /dev/shm ?
    thanks in advance.

    user9051299 wrote:
    Is the "half of the RAM size" a kernel's default value or Oracle's ? Neither. There are a number of unique factors that determine the best memory size and fit for Oracle - including just how much memory is effectively available (i.e. how much is needed for other services and processes).
    And from what I understand i don't "break" any Oracle's best practice by increasing the /dev/shm right ?Correct. (at least none that I'm aware of, and none that I have read in Oracle's RAC Starter Kit documentation).

  • Temps on /dev/shm

    I'd like to know if there are any temp directories besides /tmp and /var/tmp I can/could/should mount to a tmpfs and how I would go on about doing this. For starters I don't even know how big it's got to be and if I have to do this via fstab or some kind of bind.
    Thanks for your input.

    Thank you for your input! Is there a reason why you propose 250M? On Gentoo I had the problem with big compiles that /var/tmp quickly ran out of space (at least with OOo), and I don't want that again. So if I realize that I might need more space for a given compile, given that Arch uses /var/tmp for the build process, could I just assign more RAM to it (on reboot and, of course, after the process ran out of space, lol)?
    Besides, why the profiles folder? Does that promote fragmentation, too? What happens to my FF settings and plugins and such?
    I'm asking all this for two reasons, one being speed, the other less fragmentation. At first I thought I'd just give each a partition, as I always did on my installs. But when I read that I could just point it all to shm, it was more appealing. I'm not saying that I'll immediately fiddle with my own compiles; it's been a while since my hdd has met the penguin, so I need to reeducate myself. Arch will be my new home, since the compile-everything-no-matter-if-it-even-needs-more-rice motto was just too much for my taste. Extra speed where it's needed makes more sense to me. End of rant, hehe.
    Last edited by p2501 (2009-07-25 15:55:05)
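    In case it helps, the fstab syntax is the same for any directory you want on tmpfs, and the cap can be raised on the fly mid-build without losing the contents (tmpfs only uses memory for what is actually stored; size= is just a ceiling):

    tmpfs   /tmp       tmpfs   nodev,nosuid,size=2G   0 0
    tmpfs   /var/tmp   tmpfs   nodev,nosuid,size=1G   0 0   # note: /var/tmp normally survives reboots, tmpfs does not

    mount -o remount,size=4G /var/tmp    # ran out of room during a compile? raise the ceiling live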

  • Error when compiling firefox...Out of memory: Kill process 6763

    I'm trying to compile Firefox, both version 15 and 16.0.1, and I always get the same error, which I think means I run out of RAM even though I have 8 GB. I tried it with 8 GB of swap but it does exactly the same. Here are all the facts about this problem; the only thing I have not tried is changing the compiler version. What do you think? This is clearly a link-stage failure where the system breaks down after running out of physical memory...
    ERROR (using yaourt -Sb or makepkg -s):
    /tmp/yaourt-tmp-enric/abs-firefox/src/mozilla-release/obj-x86_64-unknown-linux-gnu/toolkit/library/nsUnicharUtils.cpp:275:1: warning: always_inline function might not be inlinable [-Wattributes]
    /tmp/yaourt-tmp-enric/abs-firefox/src/mozilla-release/obj-x86_64-unknown-linux-gnu/toolkit/library/nsUnicharUtils.cpp:50:1: warning: always_inline function might not be inlinable [-Wattributes]
    /tmp/yaourt-tmp-enric/abs-firefox/src/mozilla-release/obj-x86_64-unknown-linux-gnu/toolkit/library/nsUnicharUtils.cpp:40:1: warning: always_inline function might not be inlinable [-Wattributes]
    rm -f libxul.so
    /tmp/yaourt-tmp-enric/abs-firefox/src/mozilla-release/obj-x86_64-unknown-linux-gnu/_virtualenv/bin/python /tmp/yaourt-tmp-enric/abs-firefox/src/mozilla-release/config/pythonpath.py -I../../config /tmp/yaourt-tmp-enric/abs-firefox/src/mozilla-release/config/expandlibs_exec.py --depend .deps/libxul.so.pp --target libxul.so --uselist -- c++ -pedantic -Wall -Wpointer-arith -Woverloaded-virtual -Werror=return-type -Wtype-limits -Wempty-body -Wno-ctor-dtor-privacy -Wno-overlength-strings -Wno-invalid-offsetof -Wno-variadic-macros -Wcast-align -Wno-long-long -march=native -O2 -pipe -fstack-protector --param=ssp-buffer-size=4 -D_FORTIFY_SOURCE=2 -fno-exceptions -fno-strict-aliasing -fno-rtti -ffunction-sections -fdata-sections -fno-exceptions -std=gnu++0x -pthread -pipe -DNDEBUG -DTRIMMED -g -fprofile-generate -O3 -fomit-frame-pointer -fPIC -shared -Wl,-z,defs -Wl,--gc-sections -Wl,-h,libxul.so -o libxul.so nsStaticXULComponents.i_o nsUnicharUtils.i_o nsBidiUtils.i_o nsSpecialCasingData.i_o nsUnicodeProperties.i_o nsRDFResource.i_o -lpthread -Wl,-O1,--sort-common,--as-needed,-z,relro -Wl,-rpath,/usr/lib/firefox -Wl,-z,noexecstack -fprofile-generate -Wl,-rpath-link,/tmp/yaourt-tmp-enric/abs-firefox/src/mozilla-release/obj-x86_64-unknown-linux-gnu/dist/bin -Wl,-rpath-link,/usr/lib ../../toolkit/xre/libxulapp_s.a ../../staticlib/components/libnecko.a ../../staticlib/components/libuconv.a ../../staticlib/components/libi18n.a ../../staticlib/components/libchardet.a ../../staticlib/components/libjar50.a ../../staticlib/components/libstartupcache.a ../../staticlib/components/libpref.a ../../staticlib/components/libhtmlpars.a ../../staticlib/components/libidentity.a ../../staticlib/components/libimglib2.a ../../staticlib/components/libgkgfx.a ../../staticlib/components/libgklayout.a ../../staticlib/components/libdocshell.a ../../staticlib/components/libembedcomponents.a ../../staticlib/components/libwebbrwsr.a ../../staticlib/components/libnsappshell.a ../../staticlib/components/libtxmgr.a ../../staticlib/components/libcommandlines.a ../../staticlib/components/libtoolkitcomps.a ../../staticlib/components/libpipboot.a ../../staticlib/components/libpipnss.a ../../staticlib/components/libappcomps.a ../../staticlib/components/libjsreflect.a ../../staticlib/components/libcomposer.a ../../staticlib/components/libtelemetry.a ../../staticlib/components/libjsinspector.a ../../staticlib/components/libjsdebugger.a ../../staticlib/components/libstoragecomps.a ../../staticlib/components/librdf.a ../../staticlib/components/libwindowds.a ../../staticlib/components/libjsctypes.a ../../staticlib/components/libjsperf.a ../../staticlib/components/libgkplugin.a ../../staticlib/components/libunixproxy.a ../../staticlib/components/libjsd.a ../../staticlib/components/libautoconfig.a ../../staticlib/components/libauth.a ../../staticlib/components/libcookie.a ../../staticlib/components/libpermissions.a ../../staticlib/components/libuniversalchardet.a ../../staticlib/components/libfileview.a ../../staticlib/components/libplaces.a ../../staticlib/components/libtkautocomplete.a ../../staticlib/components/libsatchel.a ../../staticlib/components/libpippki.a ../../staticlib/components/libwidget_gtk2.a ../../staticlib/components/libimgicon.a ../../staticlib/components/libprofiler.a ../../staticlib/components/libaccessibility.a ../../staticlib/components/libremoteservice.a ../../staticlib/components/libspellchecker.a ../../staticlib/components/libzipwriter.a ../../staticlib/components/libservices-crypto.a ../../staticlib/libjsipc_s.a 
../../staticlib/libdomipc_s.a ../../staticlib/libdomplugins_s.a ../../staticlib/libmozipc_s.a ../../staticlib/libmozipdlgen_s.a ../../staticlib/libipcshell_s.a ../../staticlib/libgfxipc_s.a ../../staticlib/libhal_s.a ../../staticlib/libdombindings_s.a ../../staticlib/libxpcom_core.a ../../staticlib/libucvutil_s.a ../../staticlib/libchromium_s.a ../../staticlib/libsnappy_s.a ../../staticlib/libgtkxtbin.a ../../staticlib/libthebes.a ../../staticlib/libgl.a ../../staticlib/libycbcr.a -L../../dist/bin -L../../dist/lib /tmp/yaourt-tmp-enric/abs-firefox/src/mozilla-release/obj-x86_64-unknown-linux-gnu/dist/lib/libjs_static.a -lffi -Wl,-rpath-link,/usr/lib -L/usr/lib -lssl3 -lsmime3 -lnss3 -lnssutil3 -lcrmf -lXrender -lfreetype -lfontconfig -lsqlite3 -ljpeg -lpng -lz -lhunspell-1.3 -L/usr/lib -levent -lpixman-1 ../../dist/lib/libgkmedias.a -lasound -lrt -L../../dist/bin -L../../dist/lib -L/usr/lib -lplds4 -lplc4 -lnspr4 -lpthread -ldl ../../dist/lib/libmozalloc.a -ldbus-glib-1 -ldbus-1 -lgobject-2.0 -lglib-2.0 -lX11 -lXext -lpangoft2-1.0 -lfreetype -lfontconfig -lpangocairo-1.0 -lpango-1.0 -lcairo -lgobject-2.0 -lglib-2.0 -lgtk-x11-2.0 -latk-1.0 -lgio-2.0 -lpangoft2-1.0 -lfreetype -lfontconfig -lgdk-x11-2.0 -lpangocairo-1.0 -lgdk_pixbuf-2.0 -lpango-1.0 -lcairo -lgobject-2.0 -lglib-2.0 -lXt -lgthread-2.0 -lfreetype -lstartup-notification-1 -lvpx -ldl -lrt -lrt
    collect2: error: ld terminated with signal 9 [Matat]
    make[6]: *** [libxul.so] Error 1
    make[6]: Leaving directory `/tmp/yaourt-tmp-enric/abs-firefox/src/mozilla-release/obj-x86_64-unknown-linux-gnu/toolkit/library'
    make[5]: *** [libs_tier_platform] Error 2
    make[5]: Leaving directory `/tmp/yaourt-tmp-enric/abs-firefox/src/mozilla-release/obj-x86_64-unknown-linux-gnu'
    make[4]: *** [tier_platform] Error 2
    make[4]: Leaving directory `/tmp/yaourt-tmp-enric/abs-firefox/src/mozilla-release/obj-x86_64-unknown-linux-gnu'
    make[3]: *** [default] Error 2
    make[3]: Leaving directory `/tmp/yaourt-tmp-enric/abs-firefox/src/mozilla-release/obj-x86_64-unknown-linux-gnu'
    make[2]: *** [realbuild] Error 2
    make[2]: Leaving directory `/tmp/yaourt-tmp-enric/abs-firefox/src/mozilla-release'
    make[1]: *** [profiledbuild] Error 2
    make[1]: Leaving directory `/tmp/yaourt-tmp-enric/abs-firefox/src/mozilla-release'
    make: *** [build] Error 2
    dmesg output on  ld :
    [ 1521.353469] ld invoked oom-killer: gfp_mask=0x280da, order=0, oom_adj=0, oom_score_adj=0
    [ 1521.353474] Pid: 6763, comm: ld Not tainted 3.6.0 #1
    [ 1521.353475] Call Trace:
    [ 1521.353482] [<ffffffff814cc68e>] ? dump_header.isra.11+0x5d/0x18e
    [ 1521.353486] [<ffffffff812ae3dc>] ? ___ratelimit+0xac/0x120
    [ 1521.353489] [<ffffffff810e2fb5>] ? oom_kill_process+0x275/0x3b0
    [ 1521.353492] [<ffffffff810e2b10>] ? find_lock_task_mm+0x20/0x70
    [ 1521.353494] [<ffffffff810e3455>] ? out_of_memory+0x1c5/0x290
    [ 1521.353497] [<ffffffff810e742a>] ? __alloc_pages_nodemask+0x85a/0x870
    [ 1521.353500] [<ffffffff811048c4>] ? handle_pte_fault+0x8c4/0xb10
    [ 1521.353505] [<ffffffff81075ddf>] ? select_task_rq_fair+0x4cf/0x790
    [ 1521.353509] [<ffffffff8100a21f>] ? native_sched_clock+0xf/0x70
    [ 1521.353512] [<ffffffff8102b880>] ? do_page_fault+0x130/0x460
    [ 1521.353515] [<ffffffff810e6301>] ? get_page_from_freelist+0x311/0x670
    [ 1521.353519] [<ffffffff814d2275>] ? page_fault+0x25/0x30
    [ 1521.353523] [<ffffffff810df577>] ? file_read_actor+0x67/0x1f0
    [ 1521.353526] [<ffffffff810f6915>] ? shmem_file_aio_read+0x155/0x3a0
    [ 1521.353530] [<ffffffff81128832>] ? do_sync_read+0x92/0xd0
    [ 1521.353532] [<ffffffff811290c0>] ? vfs_read+0xa0/0x160
    [ 1521.353535] [<ffffffff811291c7>] ? sys_read+0x47/0xa0
    [ 1521.353537] [<ffffffff814d2275>] ? page_fault+0x25/0x30
    [ 1521.353540] [<ffffffff814d27fd>] ? system_call_fastpath+0x1a/0x1f
    [ 1521.353541] Mem-Info:
    [ 1521.353543] DMA per-cpu:
    [ 1521.353544] CPU 0: hi: 0, btch: 1 usd: 0
    [ 1521.353545] CPU 1: hi: 0, btch: 1 usd: 0
    [ 1521.353546] CPU 2: hi: 0, btch: 1 usd: 0
    [ 1521.353547] CPU 3: hi: 0, btch: 1 usd: 0
    [ 1521.353548] DMA32 per-cpu:
    [ 1521.353550] CPU 0: hi: 186, btch: 31 usd: 0
    [ 1521.353551] CPU 1: hi: 186, btch: 31 usd: 26
    [ 1521.353552] CPU 2: hi: 186, btch: 31 usd: 59
    [ 1521.353553] CPU 3: hi: 186, btch: 31 usd: 0
    [ 1521.353554] Normal per-cpu:
    [ 1521.353555] CPU 0: hi: 186, btch: 31 usd: 30
    [ 1521.353556] CPU 1: hi: 186, btch: 31 usd: 156
    [ 1521.353557] CPU 2: hi: 186, btch: 31 usd: 169
    [ 1521.353558] CPU 3: hi: 186, btch: 31 usd: 0
    [ 1521.353562] active_anon:1599337 inactive_anon:345122 isolated_anon:0
    active_file:140 inactive_file:222 isolated_file:0
    unevictable:17 dirty:0 writeback:0 unstable:0
    free:11562 slab_reclaimable:14927 slab_unreclaimable:21150
    mapped:6681 shmem:932329 pagetables:10868 bounce:0
    [ 1521.353567] DMA free:15892kB min:20kB low:24kB high:28kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15644kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:0kB slab_unreclaimable:8kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? yes
    [ 1521.353568] lowmem_reserve[]: 0 3147 7925 7925
    [ 1521.353575] DMA32 free:23592kB min:4520kB low:5648kB high:6780kB active_anon:2754024kB inactive_anon:412764kB active_file:16kB inactive_file:44kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:3223060kB mlocked:0kB dirty:0kB writeback:0kB mapped:5892kB shmem:1294220kB slab_reclaimable:7144kB slab_unreclaimable:5108kB kernel_stack:216kB pagetables:8616kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:715 all_unreclaimable? yes
    [ 1521.353576] lowmem_reserve[]: 0 0 4778 4778
    [ 1521.353582] Normal free:6764kB min:6860kB low:8572kB high:10288kB active_anon:3643324kB inactive_anon:967724kB active_file:544kB inactive_file:844kB unevictable:68kB isolated(anon):0kB isolated(file):0kB present:4892832kB mlocked:68kB dirty:0kB writeback:0kB mapped:20832kB shmem:2435096kB slab_reclaimable:52564kB slab_unreclaimable:79484kB kernel_stack:2968kB pagetables:34856kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:2484 all_unreclaimable? yes
    [ 1521.353583] lowmem_reserve[]: 0 0 0 0
    [ 1521.353586] DMA: 1*4kB 0*8kB 1*16kB 0*32kB 2*64kB 1*128kB 1*256kB 0*512kB 1*1024kB 1*2048kB 3*4096kB = 15892kB
    [ 1521.353592] DMA32: 740*4kB 506*8kB 300*16kB 135*32kB 25*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 1*2048kB 1*4096kB = 23872kB
    [ 1521.353599] Normal: 637*4kB 0*8kB 0*16kB 2*32kB 1*64kB 0*128kB 1*256kB 1*512kB 1*1024kB 1*2048kB 0*4096kB = 6516kB
    [ 1521.353605] 932908 total pagecache pages
    [ 1521.353606] 0 pages in swap cache
    [ 1521.353607] Swap cache stats: add 0, delete 0, find 0/0
    [ 1521.353608] Free swap = 0kB
    [ 1521.353609] Total swap = 0kB
    [ 1521.369740] 2094576 pages RAM
    [ 1521.369743] 80362 pages reserved
    [ 1521.369744] 36619 pages shared
    [ 1521.369745] 1993098 pages non-shared
    [ 1521.369746] [ pid ] uid tgid total_vm rss nr_ptes swapents oom_score_adj name
    [ 1521.369752] [ 2286] 0 2286 6258 269 16 0 -1000 systemd-udevd
    [ 1521.369754] [ 2293] 0 2293 153174 83 229 0 0 systemd-journal
    [ 1521.369757] [ 3585] 0 3585 4695 188 13 0 0 mount.ntfs-3g
    [ 1521.369759] [ 3615] 0 3615 3547 140 12 0 0 crond
    [ 1521.369761] [ 3616] 0 3616 6708 2523 18 0 0 preload
    [ 1521.369763] [ 3617] 0 3617 17762 297 38 0 0 cupsd
    [ 1521.369765] [ 3619] 0 3619 43562 313 52 0 0 NetworkManager
    [ 1521.369767] [ 3620] 84 3620 6988 70 20 0 0 avahi-daemon
    [ 1521.369769] [ 3621] 0 3621 6515 72 18 0 0 systemd-logind
    [ 1521.369770] [ 3622] 81 3622 4595 285 14 0 -900 dbus-daemon
    [ 1521.369772] [ 3626] 0 3626 2275 32 9 0 0 agetty
    [ 1521.369774] [ 3627] 0 3627 6693 53 19 0 0 kdm
    [ 1521.369776] [ 3628] 84 3628 6957 52 18 0 0 avahi-daemon
    [ 1521.369778] [ 3631] 0 3631 45316 12716 91 0 0 X
    [ 1521.369780] [ 4287] 0 4287 15666 128 36 0 0 kdm
    [ 1521.369782] [ 5131] 102 5131 92417 912 41 0 0 polkitd
    [ 1521.369784] [ 5132] 0 5132 53222 346 41 0 0 colord
    [ 1521.369786] [ 5163] 0 5163 129753 991 153 0 0 colord-sane
    [ 1521.369788] [11395] 0 11395 523912 258 62 0 0 console-kit-dae
    [ 1521.369790] [11468] 1000 11468 3668 111 13 0 0 startkde
    [ 1521.369792] [11480] 1000 11480 3970 37 13 0 0 dbus-launch
    [ 1521.369794] [11481] 1000 11481 4839 401 15 0 0 dbus-daemon
    [ 1521.369796] [11507] 1000 11507 4068 94 12 0 0 gpg-agent
    [ 1521.369798] [11510] 1000 11510 3760 87 11 0 0 ssh-agent
    [ 1521.369799] [11524] 1000 11524 1013 21 7 0 -300 start_kdeinit
    [ 1521.369801] [11525] 1000 11525 85821 1591 149 0 -300 kdeinit4
    [ 1521.369803] [11528] 1000 11528 188590 3072 227 0 0 kded4
    [ 1521.369805] [11534] 1000 11534 108217 2084 173 0 0 kwalletd
    [ 1521.369807] [11539] 1000 11539 107913 2348 173 0 0 kglobalaccel
    [ 1521.369809] [11542] 0 11542 55597 601 43 0 0 upowerd
    [ 1521.369811] [11555] 1000 11555 1047 18 7 0 0 kwrapper4
    [ 1521.369813] [11556] 1000 11556 127307 2185 174 0 0 ksmserver
    [ 1521.369814] [11568] 0 11568 49409 269 33 0 0 udisks-daemon
    [ 1521.369816] [11573] 0 11573 12394 89 28 0 0 udisks-daemon
    [ 1521.369818] [11577] 1000 11577 214097 12177 260 0 0 kwin
    [ 1521.369820] [11590] 1000 11590 93530 1489 131 0 0 kactivitymanage
    [ 1521.369822] [11621] 1000 11621 343123 3417 215 0 0 knotify4
    [ 1521.369824] [11631] 1000 11631 175587 4370 239 0 0 krunner
    [ 1521.369826] [11633] 1000 11633 246522 16714 342 0 0 plasma-desktop
    [ 1521.369828] [11636] 1000 11636 216148 5325 253 0 0 lancelot
    [ 1521.369830] [11639] 1000 11639 50634 268 35 0 0 mission-control
    [ 1521.369831] [11643] 1000 11643 37597 405 39 0 0 akonadi_control
    [ 1521.369833] [11645] 1000 11645 357276 722 81 0 0 akonadiserver
    [ 1521.369835] [11652] 1000 11652 377776 6206 70 0 0 mysqld
    [ 1521.369837] [11738] 1000 11738 76786 1061 110 0 0 akonadi_agent_l
    [ 1521.369839] [11739] 1000 11739 76783 1056 111 0 0 akonadi_agent_l
    [ 1521.369841] [11740] 1000 11740 75143 1033 111 0 0 akonadi_agent_l
    [ 1521.369843] [11741] 1000 11741 75144 1042 110 0 0 akonadi_agent_l
    [ 1521.369845] [11742] 1000 11742 75800 1048 112 0 0 akonadi_agent_l
    [ 1521.369847] [11743] 1000 11743 75800 1054 107 0 0 akonadi_agent_l
    [ 1521.369848] [11744] 1000 11744 76786 1064 108 0 0 akonadi_agent_l
    [ 1521.369850] [11745] 1000 11745 76780 1073 111 0 0 akonadi_agent_l
    [ 1521.369852] [11746] 1000 11746 84215 1305 155 0 0 akonadi_maildis
    [ 1521.369854] [11747] 1000 11747 92098 1211 138 0 0 akonadi_nepomuk
    [ 1521.369856] [11774] 1000 11774 59745 591 76 0 0 nepomukserver
    [ 1521.369858] [11777] 1000 11777 287924 2235 149 0 0 nepomukservices
    [ 1521.369860] [11795] 1000 11795 100823 10181 51 0 0 virtuoso-t
    [ 1521.369861] [11801] 1000 11801 93041 786 99 0 0 pulseaudio
    [ 1521.369863] [11802] 133 11802 41125 46 17 0 0 rtkit-daemon
    [ 1521.369865] [11811] 1000 11811 17253 140 36 0 0 gconf-helper
    [ 1521.369867] [11813] 1000 11813 11527 127 26 0 0 gconfd-2
    [ 1521.369869] [11816] 1000 11816 68886 1148 123 0 0 kuiserver
    [ 1521.369871] [11837] 1000 11837 58146 1047 105 0 0 nepomukservices
    [ 1521.369873] [11839] 1000 11839 53655 966 98 0 0 nepomukservices
    [ 1521.369875] [11840] 1000 11840 92263 1128 107 0 0 nepomukservices
    [ 1521.369877] [11841] 1000 11841 109750 1232 107 0 0 nepomukservices
    [ 1521.369878] [12942] 1000 12942 41975 542 78 0 0 kwrited
    [ 1521.369880] [12944] 1000 12944 246808 6774 195 0 0 ktorrent
    [ 1521.369882] [12954] 1000 12954 93430 1212 139 0 0 polkit-kde-auth
    [ 1521.369884] [12957] 1000 12957 70673 1126 130 0 0 nepomukcontroll
    [ 1521.369886] [12959] 1000 12959 92309 1654 138 0 0 kgpg
    [ 1521.369888] [12966] 1000 12966 109879 2164 176 0 0 klipper
    [ 1521.369890] [13009] 1000 13009 86477 1682 131 0 0 kio_http_cache_
    [ 1521.369892] [17302] 1000 17302 270535 59165 488 0 0 firefox
    [ 1521.369894] [17322] 1000 17322 10407 86 26 0 0 gvfsd
    [ 1521.369896] [17324] 1000 17324 50365 171 31 0 0 gvfs-fuse-daemo
    [ 1521.369897] [18462] 1000 18462 138761 5014 199 0 0 konsole
    [ 1521.369899] [18464] 1000 18464 4227 156 13 0 0 bash
    [ 1521.369901] [19535] 1000 19535 3885 324 13 0 0 yaourt
    [ 1521.369903] [19680] 1000 19680 3800 253 13 0 0 makepkg
    [ 1521.369905] [22861] 1000 22861 174765 6917 209 0 0 dolphin
    [ 1521.369907] [24940] 1000 24940 12258 2121 27 0 0 Xvfb
    [ 1521.369909] [24941] 1000 24941 2944 134 11 0 0 make
    [ 1521.369911] [25116] 1000 25116 2946 137 11 0 0 make
    [ 1521.369913] [25337] 1000 25337 3013 200 11 0 0 make
    [ 1521.369915] [ 473] 1000 473 2977 169 12 0 0 make
    [ 1521.369917] [ 3719] 1000 3719 3006 176 12 0 0 make
    [ 1521.369919] [10784] 1000 10784 3006 175 13 0 0 make
    [ 1521.369921] [12252] 1000 12252 115351 9567 175 0 0 kvirc
    [ 1521.369922] [26014] 1000 26014 4256 168 13 0 0 bash
    [ 1521.369925] [30177] 1000 30177 4255 167 14 0 0 bash
    [ 1521.369926] [30976] 1000 30976 86977 3634 138 0 0 plugin-containe
    [ 1521.369928] [ 915] 1000 915 87003 1648 132 0 0 kio_file
    [ 1521.369930] [ 916] 1000 916 109277 2635 176 0 0 kio_thumbnail
    [ 1521.369932] [ 1186] 1000 1186 4227 157 13 0 0 bash
    [ 1521.369934] [ 2105] 1000 2105 86873 1689 132 0 0 klauncher
    [ 1521.369936] [ 2473] 1000 2473 87003 1648 132 0 0 kio_file
    [ 1521.369938] [ 3468] 1000 3468 141313 5962 206 0 0 kate
    [ 1521.369940] [ 6331] 1000 6331 109512 2225 174 0 0 kate
    [ 1521.369942] [ 6733] 1000 6733 2983 150 12 0 0 make
    [ 1521.369943] [ 6760] 1000 6760 17243 1559 39 0 0 python
    [ 1521.369945] [ 6761] 1000 6761 1882 30 9 0 0 c++
    [ 1521.369947] [ 6762] 1000 6762 1816 23 9 0 0 collect2
    [ 1521.369949] [ 6763] 1000 6763 812215 809141 1588 0 0 ld
    [ 1521.369951] Out of memory: Kill process 6763 (ld) score 402 or sacrifice child
    [ 1521.369953] Killed process 6763 (ld) total-vm:3248860kB, anon-rss:3236396kB, file-rss:168kB
    gcc -v
    Using built-in specs.
    COLLECT_GCC=gcc
    COLLECT_LTO_WRAPPER=/usr/lib/gcc/x86_64-unknown-linux-gnu/4.7.2/lto-wrapper
    Target: x86_64-unknown-linux-gnu
    Configured with: /build/src/gcc-4.7.2/configure --prefix=/usr --libdir=/usr/lib --libexecdir=/usr/lib --mandir=/usr/share/man --infodir=/usr/share/info --with-bugurl=https://bugs.archlinux.org/ --enable-languages=c,c++,ada,fortran,go,lto,objc,obj-c++ --enable-shared --enable-threads=posix --with-system-zlib --enable-__cxa_atexit --disable-libunwind-exceptions --enable-clocale=gnu --disable-libstdcxx-pch --enable-libstdcxx-time --enable-gnu-unique-object --enable-linker-build-id --with-ppl --enable-cloog-backend=isl --disable-ppl-version-check --disable-cloog-version-check --enable-lto --enable-gold --enable-ld=default --enable-plugin --with-plugin-ld=ld.gold --with-linker-hash-style=gnu --disable-multilib --disable-libssp --disable-build-with-cxx --disable-build-poststage1-with-cxx --enable-checking=release
    Thread model: posix
    gcc version 4.7.2 (GCC)
    cat /proc/meminfo
    MemTotal: 8056856 kB
    MemFree: 1240336 kB
    Buffers: 278356 kB
    Cached: 4591784 kB
    SwapCached: 0 kB
    Active: 2700172 kB
    Inactive: 3860720 kB
    Active(anon): 2498432 kB
    Inactive(anon): 2441488 kB
    Active(file): 201740 kB
    Inactive(file): 1419232 kB
    Unevictable: 68 kB
    Mlocked: 68 kB
    SwapTotal: 0 kB
    SwapFree: 0 kB
    Dirty: 124 kB
    Writeback: 0 kB
    AnonPages: 1690996 kB
    Mapped: 218032 kB
    Shmem: 3249176 kB
    Slab: 172972 kB
    SReclaimable: 90332 kB
    SUnreclaim: 82640 kB
    KernelStack: 3264 kB
    PageTables: 38384 kB
    NFS_Unstable: 0 kB
    Bounce: 0 kB
    WritebackTmp: 0 kB
    CommitLimit: 4028428 kB
    Committed_AS: 6713700 kB
    VmallocTotal: 34359738367 kB
    VmallocUsed: 92972 kB
    VmallocChunk: 34359642076 kB
    HugePages_Total: 0
    HugePages_Free: 0
    HugePages_Rsvd: 0
    HugePages_Surp: 0
    Hugepagesize: 2048 kB
    DirectMap4k: 10240 kB
    DirectMap2M: 7237632 kB
    /etc/makepkg.conf
    # ARCHITECTURE, COMPILE FLAGS
    CARCH="x86_64"
    CHOST="x86_64-unknown-linux-gnu"
    CFLAGS="-march=native -O2 -pipe -fstack-protector --param=ssp-buffer-size=4 -D_FORTIFY_SOURCE=2"
    CXXFLAGS="${CFLAGS}"
    LDFLAGS="-Wl,-O1,--sort-common,--as-needed,-z,relro"
    MAKEFLAGS="-j5"
    thanks a lot
    Last edited by papu (2012-10-13 17:35:38)

    Yes, it's true it takes 4 GB of the 8 GB I have, but when I was using Gentoo I never had this problem compiling any package, and back then my PC had 4 GB.
    df -hT
    Filesystem Type Size Used Avail Use% Mounted on
    rootfs rootfs 51G 7.7G 41G 17% /
    dev devtmpfs 3.9G 0 3.9G 0% /dev
    run tmpfs 3.9G 1.6M 3.9G 1% /run
    /dev/sdb2 ext4 51G 7.7G 41G 17% /
    tmpfs tmpfs 3.9G 76K 3.9G 1% /dev/shm
    tmpfs tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
    tmpfs tmpfs 3.9G 56K 3.9G 1% /tmp
    /dev/sdc1 fuseblk 299G 237G 62G 80% /mnt/share
    /dev/sdb1 ext2 183M 28M 146M 16% /boot
    /dev/sdb3 ext4 18G 6.2G 11G 37% /home
    I am using vanilla kernel 3.6.0
    cat /etc/fstab
    # /etc/fstab: static file system information
    # <file system> <dir> <type> <options> <dump> <pass>
    tmpfs /tmp tmpfs nodev,nosuid 0 0
    # /dev/sdb2
    UUID=412194cb-8953-4d02-94b4-d25d21bd7126 / ext4 rw,relatime,barrier=0 0 1
    #/dev/sdb1
    UUID=8bc28611-e3ee-4079-b311-33b9d4a0f36a /boot ext2 rw,relatime 0 2
    #/dev/sdb3
    UUID=2d6d63e0-a59f-439f-9c11-f6673055e65a /home ext4 rw,relatime,barrier=0 0 2
    #/dev/sdc1
    UUID=60F0F9D3F0F9AF80 /mnt/share ntfs-3g umask=0 0 0
    What do I have to do?
    Thanks so much, friends!
    Last edited by papu (2012-10-13 19:44:17)
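    Two hedged workarounds that follow from the output above (the build runs under /tmp, which is a 3.9G tmpfs, and SwapTotal is 0, so ld competes with its own input files for RAM): move the build off tmpfs, and/or give the system some swap. Lowering MAKEFLAGS from -j5 also reduces peak memory, at the cost of build time.

    # build on disk instead of tmpfs: e.g. in /etc/makepkg.conf (hypothetical path)
    BUILDDIR=/home/enric/build

    # or add a temporary swap file
    fallocate -l 4G /swapfile
    chmod 600 /swapfile
    mkswap /swapfile
    swapon /swapfile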

  • ORA-29516: Bulk load of method failed; insufficient shm-object space

    Hello,
    Just installed 11.2.0.1.0 on CentOS 5.5 64-bit. All dependencies satisfied, installation/linking went without a problem.
    Server has 32GB RAM, using AMM with target set at 29GB; no swapping is occurring.
    No matter what I do when loading Java code (loadjava with JARs or "create and compile java source") I keep getting the error:
    ORA-29516: Error in module Aurora: Assertion failure at joez.c:3311
    Bulk load of method java/lang/Object.<init> failed; insufficient shm-object space
    Checked shm-related kernel params, all seems to be normal:
    # Controls the maximum size of a message, in bytes
    kernel.msgmnb = 65536
    # Controls the default maxmimum size of a mesage queue
    kernel.msgmax = 65536
    # Controls the maximum shared segment size, in bytes
    kernel.shmmax = 68719476736
    # Controls the maximum number of shared memory segments, in pages
    kernel.shmall = 4294967296
    kernel.shmmni = 4096
    kernel.sem = 250 32000 100 128
    net.core.rmem_default = 262144
    net.core.rmem_max = 4194304
    net.core.wmem_default = 262144
    net.core.wmem_max = 1048576
    Please help.

    Hi there,
    I've stumbled into exactly the same issue on 11g. After I started the database up and ran loadjava on an externally
    compiled class (Hello.class in my case) I got the following error:
    Error while testing for existence of dbms_java.handleMd5
    ORA-29516: Aurora assertion failure: Assertion failure at joez.c:3311
    Bulk load of method java/lang/Object.<init> failed; insufficient shm-object space
    ORA-06512: at "SYS.DBMS_JAVA", line 679
    Error while creating class Hello
    ORA-29516: Aurora assertion failure: Assertion failure at joez.c:3311
    Bulk load of method java/lang/Object.<init> failed; insufficient shm-object space
    ORA-06512: at line 1
    The following operations failed
    class Hello: creation (createFailed)
    exiting : Failures occurred during processing
    After this, I checked the trace file and saw the following error message:
    peshmmap_Create_Memory_Map:
    Map_Length = 4096
    Map_Protection = 7
    Flags = 1
    File_Offset = 0
    mmap failed with error 1
    error message:Operation not permitted
    ORA-04035: unable to allocate 4096 bytes of shared memory in shared object cache "JOXSHM" of size "134217728"
    peshmmap_Create_Memory_Map:
    Map_Length = 4096
    Map_Protection = 7
    Flags = 1
    File_Offset = 0
    mmap failed with error 1
    error message:Operation not permitted
    ORA-04035: unable to allocate 4096 bytes of shared memory in shared object cache "JOXSHM" of size "134217728"
    Assertion failure at joez.c:3311
    Bulk load of method java/lang/Object.<init> failed; insufficient shm-object space
    It seems as though the "JOXSHM" of size "134217728" (which is 128MB) corresponds to the java_pool_size setting in my init.ora file:
    memory_target=1000M
    memory_max_target=2000M
    java_pool_size=128M
    shared_pool_size=256M
    Whenever I change that size it propagates to the trace file. I also picked up that only 592MB of shm memory gets used. My df -h dump:
    Filesystem Size Used Avail Use% Mounted on
    /dev/sda7 39G 34G 4.6G 89% /
    udev 10M 288K 9.8M 3% /dev
    /dev/sda5 63M 43M 21M 69% /boot
    /dev/sda4 59G 45G 11G 81% /mnt/data
    shm 2.0G 592M 1.5G 29% /dev/shm
    The only way in which I could get loadjava to work was to remove java from the database by calling the rmjvm.sql script.
    After this I installed java again by calling the initjvm.sql script. I noticed that after these scripts my shm-memory usage
    increased to about 624MB which is 32MB larger than before:
    Filesystem Size Used Avail Use% Mounted on
    /dev/sda7 39G 34G 4.6G 89% /
    udev 10M 288K 9.8M 3% /dev
    /dev/sda5 63M 43M 21M 69% /boot
    /dev/sda4 59G 45G 11G 81% /mnt/data
    shm 2.0G 624M 1.4G 31% /dev/shm
    However, after I stopped the database and started it again my Java was broken again and calling loadjava produced
    the same error message as before. The shm memory usage would also return to 592MB again. Is there something I
    need to do in terms of persisting the changes that initjvm and rmjvm make to the database? Or is there something else
    wrong that I'm overlooking like the memory management settings or something?
    Regards,
    Wiehann
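    One hedged thing worth ruling out, given that the trace shows mmap failing with "Operation not permitted" for Map_Protection = 7 (read/write/execute): check that /dev/shm is not mounted noexec and is comfortably larger than memory_target, since the JOXSHM cache lives there.

    mount | grep /dev/shm      # look for noexec among the options
    df -h /dev/shm             # should cover memory_target with room to spare

    # if either check fails, remount (and mirror the options in /etc/fstab):
    mount -o remount,exec,size=2G /dev/shm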

  • Is it possible to mount a physical disk (/dev/mapper/ disk) on one of my Oracle VM servers?

    I have a physical disk that I can see from multipath -ll; it shows up as follows:
    # multipath -ll
    3600c0ff00012f4878be35c5401000000 dm-115 HP,P2000G3 FC/iSCSI
    size=410G features='1 queue_if_no_path' hwhandler='0' wp=rw
    |-+- policy='round-robin 0' prio=50 status=active
    | `- 7:0:0:49  sdcs 70:0   active ready running
    `-+- policy='round-robin 0' prio=10 status=enabled
      `- 10:0:0:49 sdcr 69:240 active ready running
    That particular disk is visible in the OVMM GUI as a physical disk that I can present to one of my VMs, but currently it's not presented to any of them.
    I have about 50 physical LUNs that my Oracle VM server can see. I believe I can see all of them from fdisk -l, but "dm-115" (which is from the multipath output above) doesn't show up.
    This disk has 3 usable partitions on it, plus a swap partition.
    I want to mount the 3rd partition temporarily on the OVM server itself, and I receive:
    # mount /dev/mapper/3600c0ff00012f4878be35c5401000000p3 /mnt
    mount: you must specify the filesystem type
    If I present the disk to a VM and then try to mount the /dev/xvdx3 partition, it of course works. (x3 represents the 3rd partition at whatever letter position the disk shows up as.)
    Is this possible?

    It's more a question of the correct syntax: I cannot seem to figure out how to translate the /dev/mapper path above into what fdisk -l shows. Perhaps if I knew how fdisk and multipath output can be cross-referenced I could mount the partition.
    I had already tried what you suggested. Here is the output if I present the disk to a VM and then mount the 3rd partition:
    # fdisk -l
    Disk /dev/xvdh: 439.9 GB, 439999987712 bytes
    255 heads, 63 sectors/track, 53493 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
        Device Boot      Start         End      Blocks   Id  System
    /dev/xvdh1   *           1          13      104391   83  Linux
    /dev/xvdh2              14        2102    16779892+  82  Linux swap / Solaris
    /dev/xvdh3            2103       27783   206282632+  83  Linux
    /dev/xvdh4           27784       30394    20972857+   5  Extended
    /dev/xvdh5           27784       30394    20972826   83  Linux
    # mount /dev/xvdh3 /mnt  <-- no error
    # df -h
    Filesystem            Size  Used Avail Use% Mounted on
    /dev/xvda3            197G  112G   75G  60% /
    /dev/xvda5             20G 1011M   18G   6% /var
    /dev/xvda1             99M   32M   63M  34% /boot
    tmpfs                 2.0G     0  2.0G   0% /dev/shm
    /dev/xvdh3            191G   58G  124G  32% /mnt  <-- mounted just fine
    Its ext3 partition
    # df -T
    /dev/xvdh3
    ext3   199822096  60465024 129042944  32% /mnt
    Now if I go to my vm.cfg file, I can see the disk that is presented.
    My disk line contains
    disk = [...'phy:/dev/mapper/3600c0ff00012f4878be35c5401000000,xvdh,w', ...]
    Multipath shows that disk and says "dm-115" but that does not translate on fdisk
    # multipath -ll
    3600c0ff00012f4878be35c5401000000 dm-115 HP,P2000G3 FC/iSCSI
    size=410G features='1 queue_if_no_path' hwhandler='0' wp=rw
    |-+- policy='round-robin 0' prio=50 status=active
    | `- 7:0:0:49  sdcs 70:0   active ready running
    `-+- policy='round-robin 0' prio=10 status=enabled
      `- 10:0:0:49 sdcr 69:240 active ready running
    I have around 50 disks on this server, and fdisk -l on the server shows many of the same size.
    # fdisk -l
    Disk /dev/sdp: 439.9 GB, 439999987712 bytes
    255 heads, 63 sectors/track, 53493 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
       Device Boot      Start         End      Blocks   Id  System
    /dev/sdp1   *           1          13      104391   83  Linux
    /dev/sdp2              14        2102    16779892+  82  Linux swap / Solaris
    /dev/sdp3            2103       27783   206282632+  83  Linux
    /dev/sdp4           27784       30394    20972857+   5  Extended
    /dev/sdp5           27784       30394    20972826   83  Linux
    Disk /dev/sdab: 439.9 GB, 439956406272 bytes
    255 heads, 63 sectors/track, 53488 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
        Device Boot      Start         End      Blocks   Id  System
    /dev/sdab1   *           1          13      104391   83  Linux
    /dev/sdab2              14        1318    10482412+  82  Linux swap / Solaris
    /dev/sdab3            1319       27783   212580112+  83  Linux
    /dev/sdab4           27784       30394    20972857+   5  Extended
    /dev/sdab5           27784       30394    20972826   83  Linux
    Disk /dev/sdac: 439.9 GB, 439956406272 bytes
    255 heads, 63 sectors/track, 53488 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
        Device Boot      Start         End      Blocks   Id  System
    /dev/sdac1   *           1          13      104391   83  Linux
    /dev/sdac2              14        2102    16779892+  82  Linux swap / Solaris
    /dev/sdac3            2103       27783   206282632+  83  Linux
    /dev/sdac4           27784       30394    20972857+   5  Extended
    /dev/sdac5           27784       30394    20972826   83  Linux
    Disk /dev/sdad: 439.9 GB, 439956406272 bytes
    255 heads, 63 sectors/track, 53488 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
        Device Boot      Start         End      Blocks   Id  System
    /dev/sdad1   *           1          13      104391   83  Linux
    /dev/sdad2              14        1318    10482412+  82  Linux swap / Solaris
    /dev/sdad3            1319       27783   212580112+  83  Linux
    /dev/sdad4           27784       30394    20972857+   5  Extended
    /dev/sdad5           27784       30394    20972826   83  Linux
    Disk /dev/sdae: 439.9 GB, 439956406272 bytes
    255 heads, 63 sectors/track, 53488 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
        Device Boot      Start         End      Blocks   Id  System
    /dev/sdae1   *           1          13      104391   83  Linux
    /dev/sdae2              14        2102    16779892+  82  Linux swap / Solaris
    /dev/sdae3            2103       27783   206282632+  83  Linux
    /dev/sdae4           27784       30394    20972857+   5  Extended
    /dev/sdae5           27784       30394    20972826   83  Linux
    Disk /dev/sdaf: 439.9 GB, 439999987712 bytes
    255 heads, 63 sectors/track, 53493 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
        Device Boot      Start         End      Blocks   Id  System
    /dev/sdaf1   *           1          13      104391   83  Linux
    /dev/sdaf2              14        2102    16779892+  82  Linux swap / Solaris
    /dev/sdaf3            2103       27783   206282632+  83  Linux
    /dev/sdaf4           27784       30394    20972857+   5  Extended
    /dev/sdaf5           27784       30394    20972826   83  Linux
    Disk /dev/sdag: 439.9 GB, 439999987712 bytes
    255 heads, 63 sectors/track, 53493 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
        Device Boot      Start         End      Blocks   Id  System
    /dev/sdag1   *           1          13      104391   83  Linux
    /dev/sdag2              14        2102    16779892+  82  Linux swap / Solaris
    /dev/sdag3            2103       27783   206282632+  83  Linux
    /dev/sdag4           27784       30394    20972857+   5  Extended
    /dev/sdag5           27784       30394    20972826   83  Linux
    Disk /dev/dm-13: 439.9 GB, 439999987712 bytes
    255 heads, 63 sectors/track, 53493 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
          Device Boot      Start         End      Blocks   Id  System
    /dev/dm-13p1   *           1          13      104391   83  Linux
    /dev/dm-13p2              14        2102    16779892+  82  Linux swap / Solaris
    /dev/dm-13p3            2103       27783   206282632+  83  Linux
    /dev/dm-13p4           27784       30394    20972857+   5  Extended
    /dev/dm-13p5           27784       30394    20972826   83  Linux
    Disk /dev/dm-25: 439.9 GB, 439956406272 bytes
    255 heads, 63 sectors/track, 53488 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
          Device Boot      Start         End      Blocks   Id  System
    /dev/dm-25p1   *           1          13      104391   83  Linux
    /dev/dm-25p2              14        1318    10482412+  82  Linux swap / Solaris
    /dev/dm-25p3            1319       27783   212580112+  83  Linux
    /dev/dm-25p4           27784       30394    20972857+   5  Extended
    /dev/dm-25p5           27784       30394    20972826   83  Linux
    Disk /dev/dm-26: 439.9 GB, 439956406272 bytes
    255 heads, 63 sectors/track, 53488 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
          Device Boot      Start         End      Blocks   Id  System
    /dev/dm-26p1   *           1          13      104391   83  Linux
    /dev/dm-26p2              14        2102    16779892+  82  Linux swap / Solaris
    /dev/dm-26p3            2103       27783   206282632+  83  Linux
    /dev/dm-26p4           27784       30394    20972857+   5  Extended
    /dev/dm-26p5           27784       30394    20972826   83  Linux
    Disk /dev/dm-27: 439.9 GB, 439956406272 bytes
    255 heads, 63 sectors/track, 53488 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
          Device Boot      Start         End      Blocks   Id  System
    /dev/dm-27p1   *           1          13      104391   83  Linux
    /dev/dm-27p2              14        1318    10482412+  82  Linux swap / Solaris
    /dev/dm-27p3            1319       27783   212580112+  83  Linux
    /dev/dm-27p4           27784       30394    20972857+   5  Extended
    /dev/dm-27p5           27784       30394    20972826   83  Linux
    Disk /dev/dm-28: 439.9 GB, 439956406272 bytes
    255 heads, 63 sectors/track, 53488 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
          Device Boot      Start         End      Blocks   Id  System
    /dev/dm-28p1   *           1          13      104391   83  Linux
    /dev/dm-28p2              14        2102    16779892+  82  Linux swap / Solaris
    /dev/dm-28p3            2103       27783   206282632+  83  Linux
    /dev/dm-28p4           27784       30394    20972857+   5  Extended
    /dev/dm-28p5           27784       30394    20972826   83  Linux
    Disk /dev/dm-29: 439.9 GB, 439999987712 bytes
    255 heads, 63 sectors/track, 53493 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
          Device Boot      Start         End      Blocks   Id  System
    /dev/dm-29p1   *           1          13      104391   83  Linux
    /dev/dm-29p2              14        2102    16779892+  82  Linux swap / Solaris
    /dev/dm-29p3            2103       27783   206282632+  83  Linux
    /dev/dm-29p4           27784       30394    20972857+   5  Extended
    /dev/dm-29p5           27784       30394    20972826   83  Linux
    Disk /dev/dm-30: 439.9 GB, 439999987712 bytes
    255 heads, 63 sectors/track, 53493 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
          Device Boot      Start         End      Blocks   Id  System
    /dev/dm-30p1   *           1          13      104391   83  Linux
    /dev/dm-30p2              14        2102    16779892+  82  Linux swap / Solaris
    /dev/dm-30p3            2103       27783   206282632+  83  Linux
    /dev/dm-30p4           27784       30394    20972857+   5  Extended
    /dev/dm-30p5           27784       30394    20972826   83  Linux
    If I can translate the /dev/mapper address into the correct fdisk device, I think I can then mount it.
    If I try the same command as before with the -t option, it gives me this error:
    # mount -t ext3 /dev/mapper/3600c0ff00012f48791975b5401000000p3 /mnt
    mount: special device /dev/mapper/3600c0ff00012f48791975b5401000000p3 does not exist
    I know I am close here, and feel it should be possible, I am just missing something.
    Thanks for any help
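    A hedged suggestion: partition nodes for a device-mapper/multipath disk are not created automatically the way /dev/sdX1..5 are; kpartx (shipped with the multipath tools) creates the p1..p5 mappings under /dev/mapper, after which the mount should work. Only do this while no VM has the disk attached, to avoid mounting it in two places at once.

    kpartx -av /dev/mapper/3600c0ff00012f4878be35c5401000000    # create ...p1 through ...p5
    mount -t ext3 /dev/mapper/3600c0ff00012f4878be35c5401000000p3 /mnt

    # when finished
    umount /mnt
    kpartx -d /dev/mapper/3600c0ff00012f4878be35c5401000000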

  • "Evaluation Copy" is displayed on 3d Plot with runtime version of LV 8.5 Prof Dev Sys, how do I fix

    I have written an application and made an installer for it which also installs the LV 8.5 runtime engine.  When the app is run on other computers after the install, the 3d plot displays the message "Evaluation Copy".  Is there any way to remove this?  Why does it appear?  I am using the 2008 NI Developer Suite.

    I used the 3d surface plot. I have added controls which allow dynamic switching between color, shaded, and grayscale. I have also enabled the 3d cursor and set its color, line and point style. I have also added controls which allow dynamic switching of projections and show-projections-only, and controls for dynamically displaying the plot grids selectively. I am dumping data from a CCD into the plot at rates up to 40 Hz. The plot resides on a tabbed structure (third page). The data only dumps to the tabbed page (3d plot) when it is visible (i.e., the third page tab is selected). The other two pages contain 2d plots which receive the data when those tabbed pages are selected. Everything runs fine on all plots whether uncompiled on the dev computer, compiled on the dev computer, or installed on the 2nd computer. However, for the latter, the "Evaluation Copy" message is displayed. I have run the installer on the dev computer. Could that have corrupted the registration/licensing of cw3dgrph.ocx?

  • Problem installing 10g (10.2.0) Dev Suite on Linux 4EL

    Hi all,
    I have completed the RPM installation and the required configuration on Linux for installing the Oracle 10g (10.2.0) Dev Suite, but when I run the setup it returns an error.
    Can anybody help me? I would be highly thankful, because it is very important to me.
    [root@oracleserver pkg2]# Disk1/runInstaller
    The OUI Screen may take around 5 to 30 seconds to come up depending upon system performance. Please Wait .......
    Starting Oracle Universal Installer...
    Checking installer requirements...
    Checking operating system version: must be redhat-2.1, redhat-3, redhat-4, SuSE-8, SuSE-9 or UnitedLinux-1.0
    Passed
    All installer requirements met.
    Checking Temp space: must be greater than 400 MB. Actual 14942 MB Passed
    Checking swap space: must be greater than 1536 MB. Actual 2000MB Passed
    Checking monitor: must be configured to display at least 256 colors. Actual 16777216 Passed
    Checking if CPU speed is above 450 MHz. Actual 2794 MHz Passed
    Preparing to launch Oracle Universal Installer from /tmp/OraInstall2008-04-24_12-05-25PM. Please wait ...
    Error in writing to directory /tmp/OraInstall2008-04-24_12-05-25PM. Please ensure that this directory is writable and has atleast 60 MB of disk space. Installation cannot continue.
    Then I checked the free space:
    [root@oracleserver pkg2]# df -h
    Filesystem Size Used Avail Use% Mounted on
    /dev/sda8 22G 5.5G 15G 28% /
    none 502M 0 502M 0% /dev/shm
    /dev/sda5 15G 2.1G 13G 14% /ddrive
    /dev/sda6 20G 8.0G 12G 41% /edrive
    /dev/sda7 5.9G 559M 5.4G 10% /fdrive
    [root@oracleserver pkg2]# df -h /tmp
    Filesystem Size Used Avail Use% Mounted on
    /dev/sda8 22G 5.5G 15G 28% /
    [root@oracleserver tmp]# ll
    drwx------ 2 oracle oinstall 4096 Apr 24 10:44 gconfd-oracle
    drwx------ 3 root root 4096 Apr 24 11:30 gconfd-root
    drwx------ 2 root root 4096 Apr 24 11:30 keyring-LYfCYY
    srwxr-xr-x 1 oracle oinstall 0 Apr 24 10:28 mapping-oracle
    srwxr-xr-x 1 root root 0 Apr 24 11:31 mapping-root
    drwxrwx--- 3 oracle oinstall 4096 Apr 24 10:36 OraInstall2008-04-24_10-36-30AM
    drwxrwx--- 3 oracle oinstall 4096 Apr 24 10:37 OraInstall2008-04-24_10-37-27AM
    drwxrwx--- 3 root root 4096 Apr 24 11:31 OraInstall2008-04-24_12-05-25PM.
    drwx------ 2 root root 4096 Apr 24 11:32 orbit-root
    drwx------ 2 root root 4096 Apr 24 11:30 ssh-ONJshz4863
    -rw------- 1 oracle oinstall 0 Apr 22 11:40 t2lGX3GiqC
    -rw------- 1 root root 917 Apr 24 11:31 xses-root.GtGWyn
    regards
    farnaw

    Hi Werner,
    thanks again for your cooperation.
    I executed the command; here is the output:
    [oracle@oracleserver pkg2]$ Disk1/runInstaller -debug
    The OUI Screen may take around 5 to 30 seconds to come up depending upon system performance. Please Wait .......
    Starting Oracle Universal Installer...
    Checking installer requirements...
    Checking operating system version: must be redhat-2.1, redhat-3, redhat-4, SuSE-8, SuSE-9 or UnitedLinux-1.0
    Passed
    All installer requirements met.
    Checking Temp space: must be greater than 400 MB. Actual 14940 MB Passed
    Checking swap space: must be greater than 1536 MB. Actual 2000MB Passed
    Checking monitor: must be configured to display at least 256 colors. Actual 16777216 Passed
    Checking if CPU speed is above 450 MHz. Actual 2793 MHz Passed
    Preparing to launch Oracle Universal Installer from /tmp/OraInstall2008-04-24_02-06-02PM.
    Please wait ...unzip: cannot find ../stage/Components/oracle.swd.jre/1.4.2.0.4/1/DataFiles/*.jar, ../stage/Components/oracle.swd.jre/1.4.2.0.4/1/DataFiles/*.jar.zip or ../stage/Components/oracle.swd.jre/1.4.2.0.4/1/DataFiles/*.jar.ZIP.
    No zipfiles found.
    Error in writing to directory /tmp/OraInstall2008-04-24_02-06-02PM. Please ensure that this directory is writable and has atleast 60 MB of disk space. Installation cannot continue.
    : Success
    Please help if you can.
    Regards,
    Farnaw
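    Two hedged things to check, based on the -debug output above: the unzip failure on ../stage/Components/oracle.swd.jre/... suggests the extracted Disk1 media is incomplete or corrupt (re-extract and confirm the stage directory is populated), and OUI also honours TMP/TMPDIR if you want to steer its scratch directory away from /tmp and the root-owned OraInstall directories already there:

    su - oracle                      # run the installer as oracle, not root
    export TMP=/edrive/oratmp        # any writable directory with >60 MB free (path is illustrative)
    export TMPDIR=$TMP
    mkdir -p $TMP
    ./Disk1/runInstaller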

  • Mount: /dev/sda2 already mounted or /u01 busy

    Installed a new OEL 4.7 with disk partitions, but I'm unable to mount them.
    [oracle@localhost sbin]$ ./fdisk -l (gives no results)
    [root@localhost /]# mount /dev/sda3 /u01
    mount: /dev/sda3 already mounted or /u01 busy
    [root@localhost /]# cd /sbin
    *[root@localhost sbin]# ./fdisk -l*
    Disk /dev/sda: 500.1 GB, 500107862016 bytes
    255 heads, 63 sectors/track, 60801 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sda1 * 1 19 152586 83 Linux
    /dev/sda2 20 2630 20972857+ 8e Linux LVM
    /dev/sda3 2631 5241 20972857+ 8e Linux LVM
    /dev/sda4 5242 60801 446285700 5 Extended
    /dev/sda5 5242 7852 20972826 8e Linux LVM
    /dev/sda6 7853 10463 20972826 8e Linux LVM
    /dev/sda7 10464 13074 20972826 8e Linux LVM
    *[root@localhost sbin]# df -h*
    Filesystem Size Used Avail Use% Mounted on
    /dev/mapper/VolGroup00-LogVol01
    20G 3.5G 16G 19% /
    /dev/sda1 145M 15M 123M 11% /boot
    none 3.0G 0 3.0G 0% /dev/shm
    /dev/mapper/VolGroup00-LogVol02
    20G 78M 19G 1% /home
    [root@localhost /]# mount /dev/sda2 /u01
    mount: /dev/sda2 already mounted or /u01 busy
    [root@localhost /]#

    On a system without LVM, a filesystem is created inside a partition. fdisk is used to list partitions on disks. Because the filesystems are inside the partitions, you can use the name of the partition to mount it.
    On a system with LVM, a filesystem is created inside a logical volume, not in a partition. The partitions (fdisk -l) are used as physical volumes (pvdisplay), which are added to a volume group (vgdisplay), in which a logical volume can be created (lvdisplay). In the logical volume a filesystem is created. Because of this, only the logical volumes can be used to mount the filesystem.
    LVM adds an abstraction layer between filesystems and partitions. This is extremely handy because it's easy to add a disk (which is made a physical volume) to a volume group, which makes space available that can be added to any logical volume in the volume group. When that's done, the filesystem in the logical volume can be enlarged with resize2fs, even online. Without LVM, that's not possible, or very hard at best.
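    A short sketch of what that looks like in practice, assuming the volume group still has free extents (names are illustrative; VolGroup00 is taken from the df output above):

    pvs; vgs; lvs                              # physical volumes, volume groups, logical volumes
    lvcreate -L 20G -n lv_u01 VolGroup00       # carve a new LV out of free space in the VG
    mkfs.ext3 /dev/VolGroup00/lv_u01
    mkdir -p /u01
    mount /dev/VolGroup00/lv_u01 /u01

    # make it permanent:
    echo '/dev/VolGroup00/lv_u01  /u01  ext3  defaults  1 2' >> /etc/fstab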
