BSPWM Multi-Head Setup

I've been trying to figure out how to set up dual monitors with different resolutions on BSPWM. So far I can only get 1920x1080 on one monitor or 1680x1050 on both; obviously, running a lower resolution on a 1080p monitor is pretty ugly.
Is there a solution for this?
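For reference, with any RandR-capable driver the two outputs can usually be driven at their native modes independently; this is an editor's sketch, not from the thread, and the output names are assumptions (check xrandr -q for the real ones):
xrandr -q    # list connected outputs and the modes each one advertises
# Run each monitor at its native resolution, side by side (output names are placeholders)
xrandr --output HDMI-1 --mode 1920x1080 --pos 0x0 \
       --output DVI-1 --mode 1680x1050 --pos 1920x0
bspwm itself only cares about the monitor geometry X reports, so once the modes are set, desktops can be assigned per monitor with something like bspc monitor HDMI-1 -d I II III (again, placeholder names).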

Hi phrakture,
I actually tested this (unsuccessfully) 2 years ago on Debian Woody. I only had one PC at the time and I wanted to give my wife and myself separate access to it. And, having used a multi-terminal platform (VT100-powered) as a student, I thought it would be fun to give it a try!
Since I had no dual-head video card, I used 2 video cards:
- Terminal 1: PS/2 keyboard + PS/2 mouse + GeForce 2 MX (AGP) + 15'' screen (color)
- Terminal 2: USB keyboard + USB mouse + S3 ViRGE (PCI) + 14'' screen (mono)
Writing a correct XF86Config file was not difficult. The problems I had were with the hardware and the kernel.
First, from what I remember, I had to find a PCI card as the secondary card. I don't remember precisely why; maybe it was simply by design of the PC architecture, but you could not get 2 AGP cards working at the same time.
But the main problem was getting this to work on Linux. From what I understood, the terminal layer of the 2.4 kernel locked resources in a way that prevented XFree86 from handling more than one real terminal at the same time. In other words: one screen worked, the other did not.
The patches from http://cambuca.ldhs.cetuc.puc-rio.br/multiuser/ did not help me (the XFree86 version in Debian was quite different from upstream, with many patches already applied). From what I know, the 2.5.x kernel series saw a refactoring of the terminal layer, but that happened well after I had given up.
At first I had thought that such a setup would work naturally thanks to the design of X, but in fact I spent a lot of time only to end up on blocking, low-level, not-so-well-documented OS problems.
The good news for me is that I now have 2 PCs at home, so I have no need to try this anymore.
If you try it on your side and manage to build a working setup, I am sure many will be interested! Personally, I would be very interested to know whether it works on other OSes too (esp. the BSDs).

Similar Messages

  • Multi-head setup without xorg.conf

    I'm using the newer method of running X without an xorg.conf and want a traditional multi-head setup (the "Normal multi-head" method, giving a separate X screen on each of the monitors).
    I would use this to have a different window manager run on each of the screens (something like the following for OpenBox: DISPLAY=:0.1 openbox), perhaps with the aid of Switchscreen.
    XRandR will give me one big virtual screen (and is currently how I'm doing it, combined with xnest), but that's not what I want. Any ideas?
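    For what it's worth (an editor's sketch, not from the thread, and the details vary by driver): the traditional per-monitor X screens still need a config snippet even on an otherwise xorg.conf-less setup. Something along these lines under /etc/X11/xorg.conf.d/ is the usual starting point; the Driver, BusID, and ZaphodHeads output names are placeholders to be replaced with your own:
    # Two logical devices on the same card, one output each ("Zaphod mode")
    Section "Device"
    Identifier "Card0"
    Driver "intel"
    BusID "PCI:0:2:0"
    Option "ZaphodHeads" "HDMI1"
    Screen 0
    EndSection
    Section "Device"
    Identifier "Card1"
    Driver "intel"
    BusID "PCI:0:2:0"
    Option "ZaphodHeads" "VGA1"
    Screen 1
    EndSection
    Section "Screen"
    Identifier "Screen0"
    Device "Card0"
    EndSection
    Section "Screen"
    Identifier "Screen1"
    Device "Card1"
    EndSection
    Section "ServerLayout"
    Identifier "TwoScreens"
    Screen 0 "Screen0"
    Screen 1 "Screen1" RightOf "Screen0"
    EndSection
    With that in place, DISPLAY=:0.0 and DISPLAY=:0.1 address the two screens separately, which is exactly what the DISPLAY=:0.1 openbox example above relies on.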

    WooHoo!
    awkwood - you rock! I feel like an idiot for not paying closer attention to your first post. I finally realized what you were saying and figured out how to do the downgrade.
    awkwood wrote: If it's 2.12.0 I'd suggest trying 2.11.0 instead. The most recent update borked my dual display with a similar error message but 2.11 works fine. I don't use an xorg.conf.
    So essentially this seems to be a bug in version 2.12 of the xf86-video-intel driver. I downgraded to version 2.11 and everything works perfectly. I am running dual monitors at full resolution with no fuss. So again, awkwood, thank you!
    For other Arch newbies who hit this issue, until the bug is fixed, this is how I did the downgrade.
    Step 1:
    download the 2.11 package from the link below
    http://arm.konnichi.com/extra/os/i686/x … pkg.tar.xz
    Step 2:
    Open a terminal and cd to the directory where you saved the file
    $ cd /home/$USERNAME/Downloads
    Step 3:
    Use pacman to install the package
    sudo pacman -U xf86-video-intel-2.11.0-2-i686.pkg.tar.xz
    Finally, reboot. When you log back in, everything should work fine.
    Again, thanks to awkwood as well as karol, blind, Inxsible, and everyone else for their help. Everyone here is so helpful; I am definitely an Arch user for life. Hopefully they will fix the bug in the next version of the driver. My next step is to see about filing a bug report.
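    One follow-up the post doesn't cover (an editor's addition, not from the thread): until a fixed driver lands, the downgraded package can be held back so a routine pacman -Syu doesn't immediately pull 2.12 back in. In /etc/pacman.conf:
    # /etc/pacman.conf -- keep pacman from upgrading the pinned driver
    [options]
    IgnorePkg = xf86-video-intel
    Remember to drop the IgnorePkg entry once the bug is fixed upstream.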

  • Multi-user multi-head setup

    Purely out of curiosity, I would like to create the following setup:
    Computers: 1 (i.e. 1 motherboard)
    Physical displays: N
    Keyboards, mice: N
    Processor: >= N cores
    ...and have a separate tty allocated to each monitor/mouse/keyboard group such that N non-root users can simultaneously work on a single box, in independent desktop environments and unable to interfere with each other.  The processor criterion is to ensure that each user has at least one core "of their own" via affinity masking, so no one user can monopolise CPU resources. RAM and I/O resources are something that I'll think about at a later date.
    The idea is to allow multiple non-root users to simultaneously work on one computer in such a way that they cannot interfere with each other's sessions in any destructive way (e.g. reboot, kill processes, hijack session, etc).
    Could anyone point me in the right direction ( configurations / supported graphics vendors / kernel parameters ) for this?
    I have no real use for this (particularly with how cheap VDI is), it is purely an academic curiosity.
    Last edited by windows_me (2014-01-29 21:12:03)
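    On the affinity point above (an editor's aside, not from the thread): the quickest way to experiment with per-user core reservations is taskset from util-linux, since child processes inherit the affinity of the process that starts them. A deliberately crude sketch:
    # Pin each seat's session leader to its own pair of cores; everything it spawns inherits the mask
    taskset -c 0,1 startx -- :0 vt1 &
    taskset -c 2,3 startx -- :1 vt2 &
    # Or retrofit the mask onto an already-running session leader (replace <pid>)
    taskset -cp 2,3 <pid>
    A display manager or systemd drop-in can apply the same idea more cleanly, but the mechanism is identical.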

    @Nattgew:
    Cheap USB video adapters will be fine for testing, and I can probably bag some powerful old Radeons (48xx) on the cheap, each of which will drive at least 2 screens.  Alternatively, I could probably daisychain a bunch of higher-end cheap monitors (e.g. Dell U2312) via DisplayPort.
    @slithery:
    Cheers! I got nothing but flaming when I asked on the Ubuntu forums; you've solved the problem in a single word ("multiseat")!
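    As a pointer for anyone who lands here (an editor's sketch, not from the thread): with systemd-logind the multiseat wiring is done with loginctl, roughly like this; the sysfs paths are examples only and should be taken from loginctl seat-status output on the actual machine:
    loginctl seat-status seat0    # see which devices logind currently assigns to the default seat
    # Attach a second graphics card plus a keyboard and mouse to a new seat
    loginctl attach seat1 /sys/devices/pci0000:00/0000:00:1c.0/0000:02:00.0/drm/card1
    loginctl attach seat1 /sys/devices/pci0000:00/0000:00:14.0/usb1/1-2
    loginctl attach seat1 /sys/devices/pci0000:00/0000:00:14.0/usb1/1-3
    loginctl list-seats           # the assignments are persisted as udev rules under /etc/udev/rules.d/
    A seat-aware display manager such as GDM will then put a separate greeter on each seat.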

  • How to make Sales Person LOV org specific in case of Multi Org Setup

    Hi,
    We are working on 11.5.10.2.
    We have a multi-org setup, and I need the salesperson LOV to show only the org-specific salespersons at the sales order, quote, and service contract levels.
    Is this achievable? If yes, please let me know how.
    Reg,
    Ajay Agarwal.

    Hi
    Try playing around with the security profile in HRMS and attach it at the responsibility level.
    Regards
    Ramesh Kumar S

  • Frozen system when booting with dual-head setup (updates)

    Hi guys,
    I made a fresh install a couple of weeks ago and I configured my system for a dual-head setup following the wiki. I activated KMS and added radeon to the kernel parameters.
    My xorg.conf looks like this:
    Section "Monitor"
    Identifier "DVI-0"
    Option "PreferredMode" "1920x1080"
    Option "Position" "0 0"
    EndSection
    Section "Monitor"
    Identifier "HDMI-0"
    Option "PreferredMode" "1920x1080"
    Option "Position" "1920 0"
    EndSection
    I was having random freezes for a couple of days, but I didn't pay too much attention to them. When it froze, I hard-rebooted my computer and it would boot normally the second time.
    But since yesterday's update to the open-source drivers xf86-video-ati (1:7.5.0-2) and lib32-mesa-libgl (10.5.2-1), my system doesn't boot unless I disconnect one of the screens. With only one screen connected, everything is fine. If I plug in the second screen, the KDE service to configure it pops up, but as soon as I press 'apply the changes' my system freezes and I have to hard-reboot.
    I've tried all possible combinations using different outputs on the screens, with the same results. Two of them connected --> freezes after GRUB loads. One connected --> boots OK, but as soon as I plug in the second one and apply the changes, it freezes.
    I found one configuration that doesn't freeze the system. When I plug in the second screen, as I said, the KDE service pops up automatically. Previously I was changing the positions of the screens, one beside the other, and the system froze when I applied the changes. If I instead leave the default configuration (one screen on top of the other), it doesn't freeze. After that, I can rearrange the screens as I need them, and it doesn't freeze.
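    (Editor's note, not from the post: as a workaround while the driver issue is open, the same side-by-side layout the KDE applet applies can be set from a terminal with xrandr, skipping the applet entirely. The output names below follow the Monitor identifiers in the xorg.conf above; confirm them with xrandr -q first.)
    # Re-apply the side-by-side layout without going through the KDE display service
    xrandr --output DVI-0 --mode 1920x1080 --pos 0x0 \
           --output HDMI-0 --mode 1920x1080 --pos 1920x0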
    Here is my grub.cfg:
    # DO NOT EDIT THIS FILE
    # It is automatically generated by grub-mkconfig using templates
    # from /etc/grub.d and settings from /etc/default/grub
    ### BEGIN /etc/grub.d/00_header ###
    insmod part_gpt
    insmod part_msdos
    if [ -s $prefix/grubenv ]; then
    load_env
    fi
    if [ "${next_entry}" ] ; then
    set default="${next_entry}"
    set next_entry=
    save_env next_entry
    set boot_once=true
    else
    set default="0"
    fi
    if [ x"${feature_menuentry_id}" = xy ]; then
    menuentry_id_option="--id"
    else
    menuentry_id_option=""
    fi
    export menuentry_id_option
    if [ "${prev_saved_entry}" ]; then
    set saved_entry="${prev_saved_entry}"
    save_env saved_entry
    set prev_saved_entry=
    save_env prev_saved_entry
    set boot_once=true
    fi
    function savedefault {
    if [ -z "${boot_once}" ]; then
    saved_entry="${chosen}"
    save_env saved_entry
    fi
    function load_video {
    if [ x$feature_all_video_module = xy ]; then
    insmod all_video
    else
    insmod efi_gop
    insmod efi_uga
    insmod ieee1275_fb
    insmod vbe
    insmod vga
    insmod video_bochs
    insmod video_cirrus
    fi
    if [ x$feature_default_font_path = xy ] ; then
    font=unicode
    else
    insmod part_msdos
    insmod ext2
    set root='hd0,msdos2'
    if [ x$feature_platform_search_hint = xy ]; then
    search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos2 --hint-efi=hd0,msdos2 --hint-baremetal=ahci0,msdos2 e36984ca-21d8-4191-ba2f-a09bf8ce6f88
    else
    search --no-floppy --fs-uuid --set=root e36984ca-21d8-4191-ba2f-a09bf8ce6f88
    fi
    font="/usr/share/grub/unicode.pf2"
    fi
    if loadfont $font ; then
    set gfxmode=auto
    load_video
    insmod gfxterm
    set locale_dir=$prefix/locale
    set lang=en_US
    insmod gettext
    fi
    terminal_input console
    terminal_output gfxterm
    if [ x$feature_timeout_style = xy ] ; then
    set timeout_style=menu
    set timeout=5
    # Fallback normal timeout code in case the timeout_style feature is
    # unavailable.
    else
    set timeout=5
    fi
    ### END /etc/grub.d/00_header ###
    ### BEGIN /etc/grub.d/10_linux ###
    menuentry 'Arch Linux' --class arch --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-e36984ca-21d8-4191-ba2f-a09bf8ce6f88' {
    load_video
    set gfxpayload=keep
    insmod gzio
    insmod part_msdos
    insmod ext2
    set root='hd0,msdos1'
    if [ x$feature_platform_search_hint = xy ]; then
    search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos1 --hint-efi=hd0,msdos1 --hint-baremetal=ahci0,msdos1 128c8e95-9496-49d1-9a9e-bc886662258d
    else
    search --no-floppy --fs-uuid --set=root 128c8e95-9496-49d1-9a9e-bc886662258d
    fi
    echo 'Loading Linux linux ...'
    linux /vmlinuz-linux root=UUID=e36984ca-21d8-4191-ba2f-a09bf8ce6f88 rw quiet
    echo 'Loading initial ramdisk ...'
    initrd /initramfs-linux.img
    submenu 'Advanced options for Arch Linux' $menuentry_id_option 'gnulinux-advanced-e36984ca-21d8-4191-ba2f-a09bf8ce6f88' {
    menuentry 'Arch Linux, with Linux linux' --class arch --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-linux-advanced-e36984ca-21d8-4191-ba2f-a09bf8ce6f88' {
    load_video
    set gfxpayload=keep
    insmod gzio
    insmod part_msdos
    insmod ext2
    set root='hd0,msdos1'
    if [ x$feature_platform_search_hint = xy ]; then
    search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos1 --hint-efi=hd0,msdos1 --hint-baremetal=ahci0,msdos1 128c8e95-9496-49d1-9a9e-bc886662258d
    else
    search --no-floppy --fs-uuid --set=root 128c8e95-9496-49d1-9a9e-bc886662258d
    fi
    echo 'Loading Linux linux ...'
    linux /vmlinuz-linux root=UUID=e36984ca-21d8-4191-ba2f-a09bf8ce6f88 rw quiet
    echo 'Loading initial ramdisk ...'
    initrd /initramfs-linux.img
    menuentry 'Arch Linux, with Linux linux (fallback initramfs)' --class arch --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-linux-fallback-e36984ca-21d8-4191-ba2f-a09bf8ce6f88' {
    load_video
    set gfxpayload=keep
    insmod gzio
    insmod part_msdos
    insmod ext2
    set root='hd0,msdos1'
    if [ x$feature_platform_search_hint = xy ]; then
    search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos1 --hint-efi=hd0,msdos1 --hint-baremetal=ahci0,msdos1 128c8e95-9496-49d1-9a9e-bc886662258d
    else
    search --no-floppy --fs-uuid --set=root 128c8e95-9496-49d1-9a9e-bc886662258d
    fi
    echo 'Loading Linux linux ...'
    linux /vmlinuz-linux root=UUID=e36984ca-21d8-4191-ba2f-a09bf8ce6f88 rw quiet
    echo 'Loading initial ramdisk ...'
    initrd /initramfs-linux-fallback.img
    ### END /etc/grub.d/10_linux ###
    ### BEGIN /etc/grub.d/20_linux_xen ###
    ### END /etc/grub.d/20_linux_xen ###
    ### BEGIN /etc/grub.d/30_os-prober ###
    ### END /etc/grub.d/30_os-prober ###
    ### BEGIN /etc/grub.d/40_custom ###
    # This file provides an easy way to add custom menu entries. Simply type the
    # menu entries you want to add after this comment. Be careful not to change
    # the 'exec tail' line above.
    ### END /etc/grub.d/40_custom ###
    ### BEGIN /etc/grub.d/41_custom ###
    if [ -f ${config_directory}/custom.cfg ]; then
    source ${config_directory}/custom.cfg
    elif [ -z "${config_directory}" -a -f $prefix/custom.cfg ]; then
    source $prefix/custom.cfg;
    fi
    ### END /etc/grub.d/41_custom ###
    ### BEGIN /etc/grub.d/60_memtest86+ ###
    ### END /etc/grub.d/60_memtest86+ ###
    And my fstab
    # /etc/fstab: static file system information
    # <file system> <dir> <type> <options> <dump> <pass>
    # /dev/sda2 resto del disco
    UUID=e36984ca-21d8-4191-ba2f-a09bf8ce6f88 / ext4 defaults,noatime,discard 0 1
    # /dev/sda1 100MB
    UUID=128c8e95-9496-49d1-9a9e-bc886662258d /boot ext4 defaults,noatime,discard 0 2
    # /dev/sdc2 10GB
    UUID=3959515c-30e4-4749-bc30-5babba3311e8 /var ext4 defaults 0 2
    # /dev/sdc3 resto del disco
    UUID=8ccc2ed9-35e5-4a4f-81f1-98cc8bfc4ed1 /home ext4 defaults,noatime 0 2
    # /dev/sdc1
    UUID=35e39da9-c942-434d-bbfc-deb5e40ee8be none swap defaults 0 0
    #montaje de temp en ram
    tmpfs /tmp tmpfs nodev,nosuid,size=3G 0 0
    # External HDD
    UUID=0ae5eb6a-8c4c-480c-8efe-609ae83f2bdc /media/External\040HDD ext4 defaults 0 0
    # Seagate Expansion Drive
    UUID=16668D4B668D2C95 /media/Seagate\040Expansion\040Drive ntfs-3g defaults,nofail 0 0
    What else should I check?
    Thanks in advance.
    Edit: added info
    Edit2: more tests.
    Last edited by doblerone (2015-03-31 08:12:38)
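    (Editor's suggestion on the closing question, not from the thread: after a freeze, the previous boot's journal and the X server log are the two obvious places to look.)
    journalctl -b -1 -p 3                         # errors (priority 3 and worse) from the previous, frozen boot
    grep -E '\(EE\)|\(WW\)' /var/log/Xorg.0.log   # errors and warnings logged by the X server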


  • Writing waveforms from Ch. 0 of niSCOPE to binary file in a multi-record setup?

    Hello,
    I am not very experienced with niSCOPE and writing waveform records, so I need some expert help here.
    Here is my application:
    I am generating a pulse train using a 6602 counter/timer. Each rising edge of this pulse train triggers an niFGEN to generate a single sawtooth waveform, output to another device, and at the same time triggers acquisition of data from Ch. 0 of an niSCOPE. I am fetching one record per rising edge of the pulse train from the niSCOPE (multi-record setup).
    The attached VI is where I am thus far. All triggering and reading/fetching of the waveforms seems to be working just fine when testing with an oscilloscope. I now need to save each waveform record, along with the timestamp of its rising-edge trigger, to a binary file. The bottom of the VI is my attempt at saving the waveforms to a binary file, so that is where to focus when looking at it.
    I am running LabVIEW from a computer connected to the NI PXI-Chassis using a cross-over cable.
    Questions:
    1.) Do I need to convert the data coming in on Ch. 0 of the niSCOPE to digital? Does it come in as analog from an oscilloscope? If I need to do this, how can I accomplish this?
    2.) When I run an example VI that writes a waveform to a binary file by choosing "My Computer" in the bottom left of the VI window, it works and saves the file just fine. When I change this to run on "PXI2", a file is not even created and I get an error from File Dialog (code 7, I think) each time the VI tries to close the file. This may be a stupid question, but why can I not save data to a file on my computer when running the VI on "PXI2"?
    3.) Assuming the saving of each waveform to a binary file is working (read: (2) is successfully addressed), how can I also write the timestamp of the starting trigger for the waveform along with the waveform in the binary file? An example VI of how I can accomplish this would be fantastic, but I haven't been able to find one thus far.
    4.) When I was experimenting with this, it seemed that the writing might slow down the entire process too much. I need to record data for the extent of the sawtooth waveform generated by the niFGEN AWG for each trigger. Are there any changes I should make to my acquisition process in the niSCOPE section so that I can read each waveform, keep the timestamp for each, and write this information to a binary file?
    I need to get this working quickly, so any help on this is greatly appreciated. Thanks in advance.
    Attachments:
    5124_update.vi ‏157 KB

    Thank you so much for your reply, David. Let me try to explain my situation and setup a little better, as well as discuss the points you made in your reply. Beware, you may want to refill your coffee, as this post is long.
    I am using an embedded controller in a PXI-1044 chassis. I now have the chassis hooked up to our local network, and I am deploying my project to the chassis over the network, as I am also connected to the local network. I have an oscilloscope next to me that takes as input the pulse train for a trigger and the generated sawtooth from the niFGEN for each trigger (rising edge of the pulse train from the 6602 counter). Just to make sure synchronization is taking place, the sawtooth is also fed as input to the niSCOPE for acquisition.
    "PXI2" is what shows up when I choose to run a VI on the PXI chassis rather than "My Computer"; not sure why the 2 is there either, but that is what it says. I may have tracked down the issue I was having with writing, but more about that a little later...
    The attached VI is an update, although not much has changed. My application design is like this (keep in mind that some values for VIs are still constants in the block diagram while others are controls on the front panel): I am using the 6602 to generate a 1 kHz pulse train and routing this pulse train to PXI_Trigger0/RTSI0. I am also using the PXI_Clock (10) as a sample clock for this, and using this same clock as the reference clock for both the 5422 and the 5124 (as per the synchronization help file mentioned for synchronizing multiple devices). Both the 5422 and the 5124 are triggered by a digital rising edge (from the pulse train) on PXI_Trigger0/RTSI0 (as it was routed there). For each trigger, the niFGEN generates a sawtooth waveform using a stepped trigger mode and outputs it. For each trigger, the niSCOPE acquires data. They are both synchronous, which is tough to see since one has its trigger source on the front panel and the other has its trigger source on the block diagram. All devices use PXI_Clock, so they are synchronized.
    The expected behavior is to generate only a single sawtooth waveform per trigger with a certain number of sample points. I want to acquire the same number of samples using the niSCOPE, which is what I meant by "the extent of the waveform" in my previous post. So, should I change the 8192 to 1000 for the number of samples for the niSCOPE? What would you recommend for the sampling rate? I have been using 5 MHz for the niFGEN and 5 MHz for the niSCOPE... this is how it should be done, correct? If it is different in the VI, please let me know. For some reason, I have to adjust all of the values each time I open it, since the default values are not the ones I want.
    I want to generate and acquire one waveform per trigger (one waveform per record). However, I want to be able to record a large number of records, so I have enabled the circular-buffer-like treatment of the acquired waveforms. The 100 or 1000 records is actually just a number I am giving it for now to make sure it is working before recording many more records.
    As for saving the niSCOPE data, I would like to save all data in a single file that is NOT ASCII (to save space). I have been looking at the HWS file format and would like to use it. I think the attached VI includes this at the bottom of the while loop. For each trigger, I would like to save the time (as accurately as possible) that the trigger occurred for the record/waveform, which appears to be (absoluteInitialX - relativeInitialX) as you said in your post (thanks!). I just need to store as much information about the waveform, and the time information for it, as possible along with the waveform in the file. So it looks like I will need to use the wfm info for that, providing portions of it as waveform attributes in the HWS VIs?
    What format of data do you recommend I fetch, and will I be fetching a "Single waveform" or "Multiple waveforms"? Should I use I32, DBL, WDT, or another format? A balance between good precision in values and the time it takes to fetch/record would be best.
    Given all of the above, I am having one trouble with saving data to a file. As a reminder, I am deploying the project to the chassis over the network. When I choose a location and/or file to save the HWS data to, I only get choices that are on the PC's hard disk (such as C:\Documents and Settings\cgifford\...), NOT the chassis's hard disk. When I choose something other than "C:\" I get an error that the file could not be opened. However, when I choose "C:\" everything goes fine. The saved data is nowhere to be found on my PC, though, so I am assuming that it is being stored on the internal 60 GB hard disk in the chassis, which must be named "C" by default or something!?
    I have been told by phone support that I should be able to make a direct connection with the chassis just like another PC, and should be able to access the information on its internal hard disk in a drag-and-drop fashion. However, I cannot directly connect to the PXI chassis to get the data that has been saved on the hard disk. We are running Windows XP on the PC. We did some poking around and noticed that the chassis is not running Windows file sharing, and only has FTP and HTTP running. We tried to access it using FTP, but we didn't have a username and password to supply. So, how can we enable Windows file sharing on the chassis? How can I connect to it to do drag and drop to get saved waveform data off of it? This is the main problem I am now facing. Eventually we would like to store data to an external hard disk connected to the chassis, which assumes that I can access the internal storage to tell it to save files to the external hard disk. For now, saving to the internal hard disk is just fine until everything is proven to work, but I would like to get the data off of the internal hard drive to put on another computer.
    Any answers/suggestions to my questions above are greatly appreciated. I also want to thank you for reading this long post. I eagerly await a reply. Thanks again in advance.
    Chris
    Attachments:
    5124_update.vi ‏143 KB

  • Align for multi-header renderer in JTable

    Hi,
    I have a question about multi-header renderer in JTable. Here is the problem, please help me:
    I used the MultiLineHeaderRenderer class example from
    http://www2.gol.com/users/tame/swing/examples/JTableExamples1.html
    This example centers the header text (via setHorizontalAlignment(JLabel.CENTER)). It also assumes that all of the headers have 2 lines (I don't know how to describe it better).
    In my project only a few headers have 2 lines; most of them are one line. Using this example renderer class, the header height becomes bigger, of course, because of the 2-line headers, and the one-line headers are automatically aligned to the TOP. What I want is for the one-line headers to be aligned to the BOTTOM, just like in Excel.
    Please help me, and sorry for my poor technical explanation.
    Thanks in advance.

    If I understand correctly, just put a space and then a newline before the text of any header cell you want aligned at the bottom,
    like this: " \nBottom Text"
    You'll need the leading space because of how StringTokenizer works.

  • Multi-Org Setup

    Hi,
    I have a task to upgrade 11.5.10 to R12, and a Multi-Org setup is mandatory for the upgrade to R12.
    Can you guys point me to the document where I can find the patches to be applied for the Multi-Org setup?
    Regards,
    Raj

    I have a task to upgrade 11.5.10 to R12, and a Multi-Org setup is mandatory for the upgrade to R12. Can you guys point me to the document where I can find the patches to be applied for the Multi-Org setup?
    This question has been asked and answered many times in the forum before; please see the old threads at the links below:
    https://forums.oracle.com/forums/search.jspa?threadID=&q=Multi-Org++AND+Setup&objID=c3&dateRange=all&userID=&numResults=15&rankBy=10001
    https://forums.oracle.com/forums/search.jspa?threadID=&q=Multi-Org++AND+Setup+AND+R12&objID=c3&dateRange=all&userID=&numResults=15&rankBy=10001
    https://forums.oracle.com/forums/search.jspa?threadID=&q=Multi-Org++AND+Setup+AND+Upgrade&objID=c3&dateRange=all&userID=&numResults=15&rankBy=10001
    Thanks,
    Hussein

  • Mouse pointer is unchangeable at 100% scaling in multi monitor setup with different resolutions

    Hi there, I have been asked by one of the moderators of the Windows 8.1 forums to post this topic here. You can see the thread relating to this post here:
    http://answers.microsoft.com/en-us/windows/forum/windows8_1-tms/cant-change-the-cursor-pointer/ae8ad8a0-3ded-4810-a3be-794f6c4830a4?msgId=2bd7082f-f5d1-4b01-8c70-e0d2a754d66f&page=1
    The problem: the mouse pointer becomes unchangeable in size, type, trails, etc. when 100% scaling is used. It reverts to the standard small, white, non-inverted pointer with no trails and no shadow, no matter what options you select in the 'Mouse' control panel settings.
    This appears to be a scaling problem with the Windows 8/8.1 display, not an actual mouse problem. Judging by my own and other people's experience, it seems to happen mainly with multi-monitor setups where different screen resolutions are used. The issue typically occurs when 100% scaling for all monitors is used, or when that option is un-ticked and 'change the size of all items' is on the smallest setting (equivalent to 100%).
    If the scaling is set to 150% or greater, the issue disappears and the mouse pointer becomes fully functional. Also, bizarrely, when the scaling is set to 100% and the screen is recorded with a screen capture program, the recording shows the mouse pointer selected in the mouse options, not the default small white cursor that is present while recording. Here is a link to a video where I explain and show the problem happening on my system: http://youtu.be/xs2cxoeaq-A
    I have every update installed without exception, the latest drivers, etc. I use a 55" 4K screen with 2x 1080p monitors in portrait on either side. All are connected through one Nvidia GTX 780 6GB graphics card.
    Hope you can help us.
    Thanks,
    Jon M

    Hi Jon,
    I'm looking into this issue but it will take me some time to provide a reply. 
    In the meantime, I noticed that you have 3 monitors (1x 55" and 2x 27"); please try connecting only one 27" monitor and see if the issue persists.
    Also, please try changing the screen resolution to a lower setting and see if the issue persists.
    Neither suggestion is a solution; I just want to know whether the problem is related to multiple monitors or to screen resolution.
    If you have any feedback on our support, please send to [email protected]

  • Nouveau graphical tearing, dual-head setup

    I currently experience tearing while using the Nouveau drivers for my NVidia card. I experienced the same (or a similar) tearing issue while using the proprietary drivers, which, along with their inherent incompatibility with most of the Linux graphics stack, led me to switch to Nouveau.
    I'm using the 3D support from nouveau-dri along with xf86-video-nouveau and the most recent Gnome 3 build in the official repositories.
    I have a dual-head setup with one 1024x768 IBM monitor to the left of a 1920x1080 Dell monitor. Both report 60Hz vertical refresh rate in their configuration menus.
    So far, my attempts to solve this problem have consisted of the following steps:
    1. Use a minimal xorg.conf and configure everything using the integrated Gnome display settings utility. The minimal xorg.conf was as follows:
    Section "Device"
    Identifier "nvidia"
    Driver "nouveau"
    Option "GLXVBlank" "true"
    EndSection
    2. Enable/disable the GLXVBlank flag in xorg.conf.
    I do not experience the same issue when booted to Windows 7, using the most recent official NVidia drivers.
    I would greatly appreciate any help anyone could offer on this issue; it is quite frustrating to have to boot into Windows to watch videos without horizontal breaks in every frame. Thanks!
    Output from xrandr --current is as follows:
    [ifx@melancholy ~]$ xrandr --current
    Screen 0: minimum 320 x 200, current 2944 x 1080, maximum 8192 x 8192
    DVI-I-1 connected 1920x1080+1024+0 (normal left inverted right x axis y axis) 477mm x 268mm
    1920x1080 60.0*+
    1280x1024 75.0 60.0
    1152x864 75.0
    1024x768 75.1 60.0
    800x600 75.0 60.3
    640x480 75.0 60.0
    720x400 70.1
    DVI-I-2 connected 1024x768+0+312 (normal left inverted right x axis y axis) 304mm x 228mm
    1024x768 60.0*+ 75.1 70.1
    832x624 74.6
    800x600 72.2 75.0 60.3 56.2
    640x480 72.8 75.0 60.0 59.9
    720x400 70.1
    My current xorg.conf is as follows:
    Section "Monitor"
    Identifier "Monitor0"
    Option "PreferredMode" "1920x1080_60.00"
    EndSection
    Section "Monitor"
    Identifier "Monitor1"
    Option "PreferredMode" "1024x768_60.00"
    Option "LeftOf" "Monitor0"
    EndSection
    Section "Device"
    Identifier "nvidia"
    Driver "nouveau"
    Option "Monitor-DVI-I-1" "Monitor0"
    Option "Monitor-DVI-I-2" "Monitor1"
    Option "GLXVBlank" "true"
    EndSection
    Section "Screen"
    Identifier "Screen0"
    DefaultDepth 24
    SubSection "Display"
    Depth 24
    Virtual 2944 1080
    EndSubSection
    Device "nvidia"
    EndSection
    Section "ServerLayout"
    Identifier "Layout0"
    Screen "Screen0"
    EndSection
    Other information can be provided on request.

    Hello. I have the same problem, and not just on Arch. I've had this problem with many distros, graphics cards, and drivers across the board. Installing the proprietary nvidia driver fixes the video tearing (vsync) issue for me. However: (a) it's Linux, and I don't want to use proprietary anything (otherwise I'd be on a Mac); and (b) the proprietary driver will not flip my display to portrait anyway.
    I basically have the same setup as Meyermagic: one GeForce 8600 GT, two monitors (identical Acers) at 1680x1050 (oriented in portrait, so 1050x1680 if you like), using the open-source Nouveau driver, with a minimal xorg file created at /etc/X11/xorg.conf.d/20-nouveau.conf:
    Section "Module"
    Load "glx"
    EndSection
    Section "Device"
    Identifier "Nvidia card"
    Driver "nouveau"
    Option "GLXVBlank" "true"
    EndSection
    Both monitors have a refresh rate of 60Hz. I still get really bad tearing as I drag windows across the screens.
    Again, I've had this issue with almost every Linux box I've built (the only time it is not even remotely an issue is on my Thinkpad T400 using the Intel graphics). I'm guessing I'm doing something wrong, or I've just had bad luck, because it is nearly impossible to find any support for this on any forum anywhere. Anything I do find is either a thread like this, where someone asked the question and simply never got a response, or some out-of-date forum post that gives you 10 ways to modify xorg.conf, none of which make any sense or work. Not to mention that xorg.conf is now deprecated anyway.
    Seriously, if there were a wiki that explained concisely and in excruciating detail how to actually modify the xorg conf files, I could figure it out for myself, but I can't find that anywhere, which makes me feel really stupid because I know it has to be out there. If there isn't an xorg wiki or xorg.com site or something that explains how to use the thing, then I'm not sure I want to live on this planet anymore.
    If anyone can help in any way please do.
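    (Editor's suggestion, not from either post, so treat the flags as assumptions to be checked against your compton version: letting a standalone compositor do vsynced redraws often hides nouveau's tearing even when the driver itself won't sync, assuming you are not already under a compositing WM such as GNOME Shell's mutter.)
    # Start a GLX-backed compositor that waits for vblank before presenting
    compton --backend glx --vsync opengl &
    If it helps, it can be autostarted from ~/.xinitrc or the session's autostart mechanism.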

  • [Solved] Catalyst graphical glitches with dual head setup

    Hello all, having a weird issue with my dual monitor setup where graphical glitches make my primary monitor completely unusable. The Tear Free feature of the proprietary catalyst driver is what seems to be causing it:
    Tear Free enabled:
    Tear Free disabled:
    I can safely enable it with one monitor disabled also. The issue is that without Tear Free enabled, I'm experiencing some serious screen tearing when watching videos and playing games.
    My video card is the XFX HD 7970 Double D and I'm using the latest version of catalyst (14.4), installed from Vi0L0's repo. The monitors are the same model with the same specs (1920x1200 @60Hz). I definitely used to be able to use Tear Free with my dual head setup at most a month ago, so some update at some point must have broken it. I temporarily switched back to the open source driver before I had found Tear Free to be the culprit and the performance is still hardly comparable.
    Any ideas? Thanks in advance.
    Last edited by bheinks (2014-05-15 02:38:45)

    Rexilion wrote: Maybe try looking at /var/log/pacman.log and see if you can downgrade packages (start with the kernel). See if that works around the problem.
    Downgrading catalyst to 14.3 seems to have done the job. Thanks for the suggestion, marking as solved.
    Inxsible wrote: bheinks, you have been on the forums long enough to be aware of the forum rules. Make use of an image hosting site.
    Sorry, I don't post often and wasn't aware of the rules on images. I'll keep that in mind next time.
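    (Editor's note for anyone who needs to repeat the downgrade: a minimal sketch, assuming the 14.3 packages are still in the local pacman cache; the exact file names depend on Vi0L0's packaging.)
    ls /var/cache/pacman/pkg/ | grep catalyst    # see which catalyst versions are still cached locally
    # Reinstall the older versions together so the package set stays consistent
    sudo pacman -U /var/cache/pacman/pkg/catalyst-14.3*.pkg.tar.xz \
                   /var/cache/pacman/pkg/catalyst-utils-14.3*.pkg.tar.xz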

  • Financial Analyzer Web access after multi-node setup

    hey guys
    any ideas on the following:
    Financial Analyzer Web access after multi-node setup
    any hints will be helpful
    fadi

    Thank you Fadi for this quick idea.
    I will try to apply it in our environment.
    If we can't, I think we have two other options: the first is to reinstall OFA on the Web Server node (there is a Metalink note that explains how to move OFA from server to server); the second is to change the OFA configuration files to reference the Web Server on the new node.
    I will update you with the results.
    Regards,
    M.Muhtadi

  • How do I avoid errors on a multi-monitor setup?

    Hi.
    I want to use Photoshop CS6 on a multi-monitor setup, using iGPU multi-monitor + a GPU to drive 4 monitors.
    Previously, I used Photoshop on a multi-monitor setup without using iGPU multi-monitor,
    and Photoshop worked well.
    However, I wanted to use 1 more monitor, and my GPU supports only 3 monitors.
    So I enabled the iGPU multi-monitor function, which supports 2 more monitors.
    At first, it worked. But after I use "Free Transform", "Transform", or similar tools, Photoshop hangs!
    I couldn't figure out why these errors happen.
    I need ways to avoid these errors.
    Thanks.

    Thanks for viewing this discussion.
    Finally, I broke through this problem.
    I'll show you what to do, in case you face this problem too.
    The way to solve it is simple: buy a new GPU.
    But there is one important point: you should buy a GPU of the same brand.
    In my case, I had a SAPPHIRE Radeon HD 5670, and I bought a SAPPHIRE Radeon HD 6450.
    I think Photoshop's GPUSniffer was confused about the GPU driver,
    because two drivers -- Radeon and Intel Graphics -- were present.
    That's why Photoshop hung.
    For your reference,
    Best regards.

  • LMS 4.0 Multi-Server setup

    I have a problem with a multi-server setup. On my remote LMS/slave servers I can import the Peer Certificate from my master LMS server, and on the slave servers I can import the master server cert. However, when I go to Single Sign-On on a slave server, it states that the cert is not installed/valid. So I go back to the Peer Server Certificate setup, look at the imported cert, and it states that it is valid. All certificates are self-signed.
    Questions: Should I make new self-signed certs for each server? Or, how do I make a new cert within LMS (I believe the values were added during setup)? Is there a log in LMS where I can check this process?
    From reading the docs, I don't see anything. Any insight would be greatly appreciated.
    thanks,
    John

    There is a perl script, sslutil.pl, in your LMS installation (under $nmsroot\MDC\Apache) that will allow you to validate your server's certificate. Assuming a  default installation on Windows, you can use:
    "C:\Program Files (x86)\CSCOpx\bin\perl.exe" "C:\Program Files (x86)\CSCOpx\MDC\Apache\sslutil.pl"
    from a command window
    My experience (using 3rd-party signed certificates) is that LMS is very particular about the ownership (casuser needs to be the owner) of the file and the directory in which the certificate and key files are stored.

  • Multiple chrome windows on a multiple monitor, multi-desktop setup?

    Dear all,
    I'm having a huge headache because I have a dual-monitor setup (27" iMac + a second 24" monitor) and I have configured 6 different desktops on it (Mountain Lion here), where each desktop's dual-monitor scene has a Chrome window (with several tabs) on each monitor. I do this because I work 90% in SaaS apps, and as such I need this multi-desktop setup with Chrome spread all over it.
    Problem: every time I reboot the system, all those Chrome windows go back to desktop 1 (on both monitors) and I have to painstakingly put each Chrome window back into its desktop one by one. It's really a PITA.
    Solution: can anyone tell me a workaround for this, or whether I am missing some OS X or Chrome trick or tweak that will make the system "remember" where each Chrome window was on each desktop + monitor?
    Thanks in advance!
    //Ricardo

    Peggy,
    We have been running 3 separate Forte environments on one server (dev, test, prod) for over a year now. The environments have been at different Forte release levels at various times. Darrell is right on target with the UNIX links and separate fortedef.sh files for each environment. The links will help reduce disk space. Follow his instructions and you should be good to go. We also have each Forte environment point to its own database instance; you can set the environment variables in each fortedef.sh file to do this. If you keep multiple versions of Forte on a Windows client machine, you will have to switch the Forte.ini file before starting Forte.
    As for security and auditing, I'm not sure what you are looking for here. Our application has security and auditing, but that's it.
    Keep in mind that each Forte environment will be competing for system resources on the same server. When we have 3 environments running, we notice a performance hit. Try to use as few resources as you can. For example, our test and production environments do not require a repository server, since we test deployed, so we do not start those processes.
    Good luck.
    Robert Crisafulli
    AMISYS
    301-838-7540
    Long live Forte...
