OMQ between OpenVMS and Linux

Are there any issues with communications between OMQ running on OpenVMS and OMQ running on Linux? Is there any documentation on this product for both OSes that I can download and review?
Thanks

We have been successfully using OMQ between Alphas running OpenVMS 7.3-2 and Red Hat Linux with v5 of OMQ for several years now. You should be able to use the Unix manuals to set it up on Linux. As long as the initialization files on both systems are configured to recognize each other, there shouldn't be anything unusual compared to communicating between two OpenVMS nodes.

Similar Messages

  • Creating network between Mac and Linux

Hello guys! I need to set up a network between my iMac and Ubuntu Linux. The iMac is connected to the internet by AirPort and a Fon Wi-Fi router, which is connected to a router that is connected to an ADSL modem. My PC is connected directly to the router. Both computers have internet access. Is there any easy solution to share files between the Mac and Linux? I don't need any web servers, etc.; I just want to share files.
Another question related to this: why is my Mac's IP address like 192.168... while the Linux IP address is 84.250...?
Thanks already!

Though I'm not a newbie with computers, I'm really confused because I've never set up a network like this before.
    Not a problem, no worry... we all know something somebody else doesn't and vice versa.
My Linux says that my DHCP IP is 193.210.18.18; is that related to this in any way?
    Yes, if the Mac had an IP of 193.210.18.x, (but not 18 for x), then connection would be simple, but I think we must have two devices passing out IPs. What is the Mac's IP again?
    http://www.helpdesk.umd.edu/topics/communication/ethernet/office/winxp/2882/
Do you have any advice on where I could start looking for the right IP of my Linux box?
http://www.webmasterforums.com/networks-clusters/1804-find-ip-address-my-linux-box.html
I'm not sure if it's even configurable.
    http://tinyurl.com/2gug9s

  • IndexOf - difference between Win and Linux encoding

Hello folks, wondering if someone could put me on the right track with this little problem porting a Java app to Linux...
I have a nice little program, developed on (the latest) JDK under Windows, which reads a custom file format, locates the second occurrence of the substring 'PNG', ignores everything before the character preceding this 'PNG' (hence the -1 below), and saves the remainder, which is now a bog-standard PNG image. The first 'PNG' substring always occurs within the first 50 bytes (hence the 50 below) and the second around the 2 kB mark. Here's the line that finds the location of the second 'PNG' in the file loaded into strFileContent:
location = strFileContent.indexOf( "PNG", 50 )-1;
Compiled and run on Windows all is well: say file 'test1.xyz' produces a value for location of 2076 and saves a nice PNG called 'test1.png'.
When I haul it over to Linux (Ubuntu 9.04), lo, location comes out as 1964 for the same file, and of course the file is no longer a PNG because there are an extra 112 bytes on the front end. Running the Windows compile of the code or a fresh Linux compile makes no difference.
I suspect Windows and Linux Java count line endings or some such differently; perhaps I have to check an encoding. I'd appreciate any pointers on getting this to work on both platforms (ultimately I'm trying to appease a Mac user, but don't have a Mac to play with at the moment).
    Cheers,
    K.
    Ken

    phaethon2008 wrote:
I'm suspecting Win and Linux Java count, perhaps, line endings or some such differently, perhaps have to check an encoding. I'd appreciate any pointers on correcting this to work on both platforms (ultimately I'm trying to appease a Mac user, but don't have a Mac to play with at the moment).
The immediate cause of your problem is probably that Windows uses an 8-bit encoding as the default (probably some ISO-8859-* variant or the Windows bastardization of it), while Ubuntu uses UTF-8, which has a varying number of bytes per character.
The much more important underlying problem is that you're trying to treat binary data as if it were text. A PNG image is not text. Handling binary data in Strings (or char[]) is a sure way to invite disaster.
    You must convert your code to handle InputStream/OutputStream/byte[] instead of Reader/Writer/String/char[].
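A minimal sketch of that byte-oriented approach (hypothetical class and method names; it uses java.nio.file for brevity and assumes the goal is still "find the second 'PNG' marker and keep everything from one byte before it"):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Arrays;

// Hypothetical sketch: read the raw bytes (no charset decoding involved),
// find the second "PNG" marker, and write everything from one byte before it.
public class PngExtractor {

    // Plain byte-array search: index of 'pattern' in 'data' at or after 'from',
    // or -1 if absent (the JDK has no byte[] equivalent of String.indexOf).
    static int indexOf(byte[] data, byte[] pattern, int from) {
        outer:
        for (int i = Math.max(from, 0); i <= data.length - pattern.length; i++) {
            for (int j = 0; j < pattern.length; j++) {
                if (data[i + j] != pattern[j]) continue outer;
            }
            return i;
        }
        return -1;
    }

    public static void main(String[] args) throws IOException {
        byte[] data = Files.readAllBytes(Paths.get(args[0])); // raw bytes, encoding-independent
        byte[] png = {'P', 'N', 'G'};
        int location = indexOf(data, png, 50) - 1;            // mirrors indexOf("PNG", 50) - 1
        Files.write(Paths.get(args[1]), Arrays.copyOfRange(data, location, data.length));
    }
}
```

Because the search now runs over bytes rather than decoded characters, the offset comes out the same on Windows and Linux regardless of the platform's default charset.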

  • PCNS between AD and Linux based LDAP

    Is it possible to sync passwords between an AD domain and a Linux based LDAP?
All accounts in LDAP are present in AD. The initial flow would preferably be:
    LDAP password -> AD
    and from there on onwards, all password changes from AD should flow back to LDAP.
    Thanks

    Hi,
I would be interested to see whether the password sync can happen, or you may want to find a 3rd-party tool that can do it. As far as I know, I have never worked with a password sync service between the two. You may like to read these articles:
    http://technet.microsoft.com/en-us/magazine/2008.12.linux.aspx
    http://www.chriscowley.me.uk/blog/2013/12/16/integrating-rhel-with-active-directory/
    Best of luck, cheers
    Thanks
    Inderjit

  • Can you share an external disk between OSX and Linux?

I am using both Linux and Mac OS X computers at work. I have found a need in some cases for backing up very large files and/or moving them between systems that are not necessarily on the same network. At least not at the same time.
For this purpose I wanted a big external disk that could be mounted on a Linux system as well as a Mac. To my dismay there seems to be no common modern journalling file system between these two, well, dialects of Unix.
I am not going to use an old Windows file system such as FAT32, and the only possibility so far has been Apple's HFS+ with the journal turned off (as Linux does not support writing to a journalled HFS+ file system). Darwin does not support any of the file systems that I regularly use on Linux.
    Perhaps I am mistaken - is there really nothing good and useful out there?
    /T

I was under the impression that having a journal would be safer, not just for the boot drive. Any volume that is not properly unmounted before being removed (or the computer is turned off) might be left in an incomplete saved state, since data might have been saved to buffers only, not to the disk proper.
    To me safer would be better. Apple has added journaling to HFS+ which is good, and there are several journaling file systems for Linux. Nothing in common. Not even with Windows. I guess I will have to use HFS+ with the journal turned off, but it seems suboptimal to me.
    /T

  • Difference in data transfer rates between winXP and Linux server?

    Hello all,
I am using a WinXP laptop to act as my server (all USB external hard drives are connected to it), but the data transfer rates can be really slow. Is Linux faster in that regard? Can a Linux-based server provide faster data transfer rates?
    Thanks for any help.
    Bmora96

Linux cannot make hardware go any faster - so if WinXP and its drivers are making optimal use of those USB drives and the USB data transfer pipe, Linux will not make it faster (but installing Linux and going Tux are always excellent ideas that need no real reason either ;-) ).
The real question you should be asking is whether using a notebook in a server role is a wise thing to do.

  • Drives disappear between UEFI and Linux

    This problem has me stumped for a month, so I'm hoping someone can shed some light on this.
    Originally, I had an MSI motherboard with BIOS only, GRUB2 booting Arch and Windows 7, no problems whatsoever. I upgraded to an Asus motherboard with UEFI, and at first Arch booted fine (Windows just blue screened). Subsequent boots would randomly fail, sometimes telling me Arch couldn't find the root drive, sometimes the home drive, and other times it would boot successfully. There was no telling how many times I would need to reboot before Arch would find all the drives. I returned the Asus board and replaced it with a Gigabyte Z97-D3H, but the problem continued.
    Finally this Tuesday I reinstalled Arch in UEFI mode, and instead of a bootloader, UEFI boots Linux directly, as shown in the "Using UEFI directly" section of EFISTUB. Still, Arch randomly fails to find my drives.
    Here is my drive layout when Arch boots properly (when it doesn't, lsblk also can't see the missing drive):
    ┌── kiba ⟶ Paradise ~
    └───── lsblk -o name,fstype,size,label,mountpoint
    NAME FSTYPE SIZE LABEL MOUNTPOINT
    sda 59.6G
    ├─sda1 vfat 512M SHAMAN /boot
    ├─sda2 ext4 51.1G Hige /
    └─sda3 swap 8G swap [SWAP]
    sdb 119.2G
    └─sdb1 ext4 119.2G Kiba /home
    sdc 465.8G
    ├─sdc1 ntfs 128G Tsume
    └─sdc2 ext4 337.8G Toboe /home/kiba/Toboe
    And /etc/fstab:
    # <file system> <dir> <type> <options> <dump> <pass>
    # /dev/sda2 UUID=67de5f82-861e-4934-94ba-9fdde2225bb1
    LABEL=Hige / ext4 rw,noatime,discard,data=ordered 0 1
    # /dev/sda1 UUID=6A35-D4DE
    LABEL=SHAMAN /boot vfat rw,noatime,fmask=0022,dmask=0022,codepage=437,iocharset=iso8859-1,shortname=mixed,errors=remount-ro 0 2
    # /dev/sdb1 UUID=c89186ab-c005-4598-a5b9-2be4d0d6202c
    LABEL=Kiba /home ext4 rw,noatime,discard,data=ordered 0 2
    # /dev/sda3 UUID=0ee5665b-13e3-4592-bdf1-5701636697f3
    LABEL=swap none swap defaults 0 0
    LABEL=Toboe /home/kiba/Toboe ext4 defaults 0 0
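Since every entry above mounts by label, one quick sanity check when a drive goes missing (a hypothetical snippet, assuming util-linux's findfs is installed) is to confirm that each LABEL= referenced by fstab still resolves to a block device:

```shell
# Pull each LABEL= from /etc/fstab and ask the kernel which block device
# it currently resolves to (findfs ships with util-linux).
fstab=/etc/fstab
awk '$1 ~ /^LABEL=/ { sub(/^LABEL=/, "", $1); print $1 }' "$fstab" |
while read -r label; do
    printf '%-8s -> %s\n' "$label" "$(findfs "LABEL=$label" 2>/dev/null || echo MISSING)"
done
```

If a label prints as MISSING on a bad boot, that narrows the problem to device detection rather than fstab itself.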

Oops, I forgot to mention that. As far as I can tell, the SMART status is fine on all the drives, but I could be missing something.
    Here's the output for the root drive:
    ┌── kiba ⟶ Paradise ~ 11:49:50
    └───── s smartctl -t long /dev/sda && sleep 61 && s smartctl -a /dev/sda
    smartctl 6.3 2014-07-26 r3976 [x86_64-linux-3.18.6-1-ARCH] (local build)
    Copyright (C) 2002-14, Bruce Allen, Christian Franke, www.smartmontools.org
    === START OF OFFLINE IMMEDIATE AND SELF-TEST SECTION ===
    Sending command: "Execute SMART Extended self-test routine immediately in off-line mode".
    Drive command "Execute SMART Extended self-test routine immediately in off-line mode" successful.
    Testing has begun.
    Please wait 1 minutes for test to complete.
    Test will complete after Sat Feb 14 11:52:27 2015
    Use smartctl -X to abort test.
    smartctl 6.3 2014-07-26 r3976 [x86_64-linux-3.18.6-1-ARCH] (local build)
    Copyright (C) 2002-14, Bruce Allen, Christian Franke, www.smartmontools.org
    === START OF INFORMATION SECTION ===
    Model Family: JMicron based SSDs
    Device Model: KINGSTON SV100S264G
    Serial Number: 08AAA0003175
    Firmware Version: D100811a
    User Capacity: 64,023,257,088 bytes [64.0 GB]
    Sector Size: 512 bytes logical/physical
    Rotation Rate: Solid State Device
    Form Factor: 2.5 inches
    Device is: In smartctl database [for details use: -P show]
    ATA Version is: ATA8-ACS (minor revision not indicated)
    SATA Version is: SATA 2.6, 3.0 Gb/s
    Local Time is: Sat Feb 14 11:52:28 2015 PST
    SMART support is: Available - device has SMART capability.
    SMART support is: Enabled
    === START OF READ SMART DATA SECTION ===
    SMART overall-health self-assessment test result: PASSED
    General SMART Values:
    Offline data collection status: (0x00) Offline data collection activity
    was never started.
Auto Offline Data Collection: Disabled.
Self-test execution status: ( 0) The previous self-test routine completed
without error or no self-test has ever
been run.
    Total time to complete Offline
    data collection: ( 30) seconds.
    Offline data collection
    capabilities: (0x1b) SMART execute Offline immediate.
    Auto Offline data collection on/off support.
    Suspend Offline collection upon new
    command.
    Offline surface scan supported.
    Self-test supported.
    No Conveyance Self-test supported.
    No Selective Self-test supported.
    SMART capabilities: (0x0003) Saves SMART data before entering
    power-saving mode.
    Supports SMART auto save timer.
    Error logging capability: (0x01) Error logging supported.
    General Purpose Logging supported.
    Short self-test routine
    recommended polling time: ( 1) minutes.
    Extended self-test routine
    recommended polling time: ( 1) minutes.
    SMART Attributes Data Structure revision number: 16
    Vendor Specific SMART Attributes with Thresholds:
    ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
    1 Raw_Read_Error_Rate 0x000b 100 100 050 Pre-fail Always - 0
    2 Throughput_Performance 0x0005 100 100 050 Pre-fail Offline - 0
    3 Unknown_Attribute 0x0007 100 100 050 Pre-fail Always - 0
    5 Reallocated_Sector_Ct 0x0013 100 100 050 Pre-fail Always - 0
    7 Unknown_Attribute 0x000b 100 100 050 Pre-fail Always - 0
    8 Unknown_Attribute 0x0005 100 100 050 Pre-fail Offline - 0
    9 Power_On_Hours 0x0012 100 100 000 Old_age Always - 13005
    10 Unknown_Attribute 0x0013 100 100 050 Pre-fail Always - 0
    12 Power_Cycle_Count 0x0012 100 100 000 Old_age Always - 3565
    168 SATA_Phy_Error_Count 0x0012 100 100 000 Old_age Always - 15
    175 Bad_Cluster_Table_Count 0x0003 100 100 010 Pre-fail Always - 0
    192 Unexpect_Power_Loss_Ct 0x0012 100 100 000 Old_age Always - 0
    194 Temperature_Celsius 0x0022 031 100 020 Old_age Always - 31 (Min/Max 23/40)
    197 Current_Pending_Sector 0x0012 100 100 000 Old_age Always - 0
    240 Unknown_Attribute 0x0013 100 100 050 Pre-fail Always - 0
    170 Bad_Block_Count 0x0003 100 100 010 Pre-fail Always - 0 89 0
    173 Erase_Count 0x0012 100 100 000 Old_age Always - 5 9441 7415
    SMART Error Log Version: 1
    ATA Error Count: 15 (device log contains only the most recent five errors)
    CR = Command Register [HEX]
    FR = Features Register [HEX]
    SC = Sector Count Register [HEX]
    SN = Sector Number Register [HEX]
    CL = Cylinder Low Register [HEX]
    CH = Cylinder High Register [HEX]
    DH = Device/Head Register [HEX]
    DC = Device Command Register [HEX]
    ER = Error register [HEX]
    ST = Status register [HEX]
    Powered_Up_Time is measured from power on, and printed as
    DDd+hh:mm:SS.sss where DD=days, hh=hours, mm=minutes,
    SS=sec, and sss=millisec. It "wraps" after 49.710 days.
    Error 15 occurred at disk power-on lifetime: 12999 hours (541 days + 15 hours)
    When the command that caused the error occurred, the device was active or idle.
    After command completion occurred, registers were:
    ER ST SC SN CL CH DH
    84 51 00 00 00 00 a0
    Commands leading to the command that caused the error were:
    CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name
    ec 00 00 00 00 00 a0 08 00:11:20.100 IDENTIFY DEVICE
    b1 c1 00 00 00 00 00 ff 00:11:19.800 DEVICE CONFIGURATION FREEZE LOCK [OBS-ACS-3]
    f5 00 00 00 00 00 00 ff 00:11:19.800 SECURITY FREEZE LOCK
    ec 00 00 00 00 00 00 00 00:11:17.600 IDENTIFY DEVICE
    ec 00 00 00 00 00 00 00 00:11:17.600 IDENTIFY DEVICE
    Error 14 occurred at disk power-on lifetime: 12999 hours (541 days + 15 hours)
    When the command that caused the error occurred, the device was active or idle.
    After command completion occurred, registers were:
    ER ST SC SN CL CH DH
    84 51 00 00 00 00 a0
    Commands leading to the command that caused the error were:
    CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name
    ec 00 00 00 00 00 a0 08 00:15:36.300 IDENTIFY DEVICE
    ec 00 00 00 00 00 00 ff 00:15:35.900 IDENTIFY DEVICE
    ec 00 00 00 00 00 00 00 00:15:33.700 IDENTIFY DEVICE
    ec 00 00 00 00 00 00 00 00:15:33.700 IDENTIFY DEVICE
    ec 00 00 00 00 00 00 00 00:15:33.700 IDENTIFY DEVICE
    Error 13 occurred at disk power-on lifetime: 12986 hours (541 days + 2 hours)
    When the command that caused the error occurred, the device was active or idle.
    After command completion occurred, registers were:
    ER ST SC SN CL CH DH
    84 51 00 00 00 00 00
    Commands leading to the command that caused the error were:
    CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name
    ec 00 00 00 00 00 00 00 00:03:08.200 IDENTIFY DEVICE
    ec 00 00 00 00 00 00 ff 00:03:02.700 IDENTIFY DEVICE
    ec 00 00 00 00 00 00 ff 00:02:46.000 IDENTIFY DEVICE
    ec 00 00 00 00 00 00 ff 00:02:41.400 IDENTIFY DEVICE
    ec 00 00 00 00 00 00 ff 00:02:24.700 IDENTIFY DEVICE
    Error 12 occurred at disk power-on lifetime: 12763 hours (531 days + 19 hours)
    When the command that caused the error occurred, the device was active or idle.
    After command completion occurred, registers were:
    ER ST SC SN CL CH DH
    84 51 00 00 00 00 00
    Commands leading to the command that caused the error were:
    CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name
    ec 00 00 00 00 00 00 00 00:00:35.900 IDENTIFY DEVICE
    00 00 00 00 00 00 00 ff 00:00:29.400 NOP [Abort queued commands]
    00 00 00 00 00 00 00 ff 00:00:12.400 NOP [Abort queued commands]
    00 00 00 00 00 00 00 ff 00:00:00.000 NOP [Abort queued commands]
    Error 11 occurred at disk power-on lifetime: 12706 hours (529 days + 10 hours)
    When the command that caused the error occurred, the device was active or idle.
    After command completion occurred, registers were:
    ER ST SC SN CL CH DH
    84 51 00 00 00 00 00
    Commands leading to the command that caused the error were:
    CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name
    ec 00 00 00 00 00 00 00 00:02:30.800 IDENTIFY DEVICE
    e7 00 00 00 00 00 a0 ff 00:02:11.900 FLUSH CACHE
    e7 00 00 00 00 00 a0 08 00:01:44.500 FLUSH CACHE
    e7 00 00 00 00 00 a0 08 00:01:43.300 FLUSH CACHE
    e7 00 00 00 00 00 a0 08 00:01:40.300 FLUSH CACHE
    SMART Self-test log structure revision number 1
    Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error
    # 1 Extended offline Completed without error 00% 13005 -
    # 2 Extended offline Completed without error 00% 13005 -
    Selective Self-tests/Logging not supported
    And for the home drive:
    ┌── kiba ⟶ Paradise ~ 11:52:28
    └───── s smartctl -t long /dev/sdb && sleep 61 && s smartctl -a /dev/sdb
    smartctl 6.3 2014-07-26 r3976 [x86_64-linux-3.18.6-1-ARCH] (local build)
    Copyright (C) 2002-14, Bruce Allen, Christian Franke, www.smartmontools.org
    === START OF OFFLINE IMMEDIATE AND SELF-TEST SECTION ===
    Sending command: "Execute SMART Extended self-test routine immediately in off-line mode".
    Drive command "Execute SMART Extended self-test routine immediately in off-line mode" successful.
    Testing has begun.
    Please wait 1 minutes for test to complete.
    Test will complete after Sat Feb 14 11:54:25 2015
    Use smartctl -X to abort test.
    smartctl 6.3 2014-07-26 r3976 [x86_64-linux-3.18.6-1-ARCH] (local build)
    Copyright (C) 2002-14, Bruce Allen, Christian Franke, www.smartmontools.org
    === START OF INFORMATION SECTION ===
    Model Family: JMicron based SSDs
    Device Model: KINGSTON SV100S2128G
    Serial Number: 08BB20039237
    Firmware Version: D110225a
    User Capacity: 128,035,676,160 bytes [128 GB]
    Sector Size: 512 bytes logical/physical
    Rotation Rate: Solid State Device
    Form Factor: 2.5 inches
    Device is: In smartctl database [for details use: -P show]
    ATA Version is: ATA8-ACS (minor revision not indicated)
    SATA Version is: SATA 2.6, 3.0 Gb/s
    Local Time is: Sat Feb 14 11:54:26 2015 PST
    SMART support is: Available - device has SMART capability.
    SMART support is: Enabled
    === START OF READ SMART DATA SECTION ===
    SMART overall-health self-assessment test result: PASSED
    General SMART Values:
    Offline data collection status: (0x00) Offline data collection activity
    was never started.
    Auto Offline Data Collection: Disabled.
    Self-test execution status: ( 0) The previous self-test routine completed
    without error or no self-test has ever
    been run.
    Total time to complete Offline
    data collection: ( 30) seconds.
    Offline data collection
    capabilities: (0x1b) SMART execute Offline immediate.
    Auto Offline data collection on/off support.
    Suspend Offline collection upon new
    command.
    Offline surface scan supported.
    Self-test supported.
    No Conveyance Self-test supported.
    No Selective Self-test supported.
    SMART capabilities: (0x0003) Saves SMART data before entering
    power-saving mode.
    Supports SMART auto save timer.
    Error logging capability: (0x01) Error logging supported.
    General Purpose Logging supported.
    Short self-test routine
    recommended polling time: ( 1) minutes.
    Extended self-test routine
    recommended polling time: ( 1) minutes.
    SMART Attributes Data Structure revision number: 16
    Vendor Specific SMART Attributes with Thresholds:
    ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
    1 Raw_Read_Error_Rate 0x000b 100 100 050 Pre-fail Always - 0
    2 Throughput_Performance 0x0005 100 100 050 Pre-fail Offline - 0
    3 Unknown_Attribute 0x0007 100 100 050 Pre-fail Always - 0
    5 Reallocated_Sector_Ct 0x0013 100 100 050 Pre-fail Always - 0
    7 Unknown_Attribute 0x000b 100 100 050 Pre-fail Always - 0
    8 Unknown_Attribute 0x0005 100 100 050 Pre-fail Offline - 0
    9 Power_On_Hours 0x0012 100 100 000 Old_age Always - 9899
    10 Unknown_Attribute 0x0013 100 100 050 Pre-fail Always - 0
    12 Power_Cycle_Count 0x0012 100 100 000 Old_age Always - 3541
    168 SATA_Phy_Error_Count 0x0012 100 100 000 Old_age Always - 13
    175 Bad_Cluster_Table_Count 0x0003 100 100 010 Pre-fail Always - 0
    192 Unexpect_Power_Loss_Ct 0x0012 100 100 000 Old_age Always - 0
    194 Temperature_Celsius 0x0022 034 100 020 Old_age Always - 34 (Min/Max 23/40)
    197 Current_Pending_Sector 0x0012 100 100 000 Old_age Always - 0
    240 Unknown_Attribute 0x0013 100 100 050 Pre-fail Always - 0
    170 Bad_Block_Count 0x0003 100 100 010 Pre-fail Always - 0 135 0
    173 Erase_Count 0x0012 100 100 000 Old_age Always - 2 16503 12094
    SMART Error Log Version: 1
    ATA Error Count: 37 (device log contains only the most recent five errors)
    CR = Command Register [HEX]
    FR = Features Register [HEX]
    SC = Sector Count Register [HEX]
    SN = Sector Number Register [HEX]
    CL = Cylinder Low Register [HEX]
    CH = Cylinder High Register [HEX]
    DH = Device/Head Register [HEX]
    DC = Device Command Register [HEX]
    ER = Error register [HEX]
    ST = Status register [HEX]
    Powered_Up_Time is measured from power on, and printed as
    DDd+hh:mm:SS.sss where DD=days, hh=hours, mm=minutes,
    SS=sec, and sss=millisec. It "wraps" after 49.710 days.
    Error 37 occurred at disk power-on lifetime: 9893 hours (412 days + 5 hours)
    When the command that caused the error occurred, the device was active or idle.
    After command completion occurred, registers were:
    ER ST SC SN CL CH DH
    84 51 00 00 00 00 a0
    Commands leading to the command that caused the error were:
    CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name
    ec 00 00 00 00 00 a0 08 00:00:09.200 IDENTIFY DEVICE
    ec 00 00 00 00 00 00 ff 00:00:08.800 IDENTIFY DEVICE
    ec 00 00 00 00 00 00 ff 00:00:08.800 IDENTIFY DEVICE
    ec 00 00 00 00 00 00 00 00:00:06.600 IDENTIFY DEVICE
    ec 00 00 00 00 00 00 00 00:00:06.600 IDENTIFY DEVICE
    Error 36 occurred at disk power-on lifetime: 9880 hours (411 days + 16 hours)
    When the command that caused the error occurred, the device was active or idle.
    After command completion occurred, registers were:
    ER ST SC SN CL CH DH
    84 51 00 00 00 00 00
    Commands leading to the command that caused the error were:
    CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name
    ec 00 00 00 00 00 00 00 00:09:55.000 IDENTIFY DEVICE
    ec 00 00 00 00 00 a0 ff 00:09:48.800 IDENTIFY DEVICE
    e7 00 00 00 00 00 a0 08 00:09:47.900 FLUSH CACHE
    ec 00 01 00 00 00 00 08 00:04:00.000 IDENTIFY DEVICE
    ec 00 01 00 00 00 00 08 00:03:56.100 IDENTIFY DEVICE
    Error 35 occurred at disk power-on lifetime: 9855 hours (410 days + 15 hours)
    When the command that caused the error occurred, the device was active or idle.
    After command completion occurred, registers were:
    ER ST SC SN CL CH DH
    84 51 00 00 00 00 00
    Commands leading to the command that caused the error were:
    CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name
    ec 00 00 00 00 00 00 00 00:03:35.400 IDENTIFY DEVICE
    ec 00 00 00 00 00 00 ff 00:03:29.200 IDENTIFY DEVICE
    ec 00 00 00 00 00 00 ff 00:03:24.600 IDENTIFY DEVICE
    ec 00 00 00 00 00 00 00 00:01:00.000 IDENTIFY DEVICE
    ec 00 00 00 00 00 00 00 00:01:00.000 IDENTIFY DEVICE
    Error 34 occurred at disk power-on lifetime: 9168 hours (382 days + 0 hours)
    When the command that caused the error occurred, the device was active or idle.
    After command completion occurred, registers were:
    ER ST SC SN CL CH DH
    84 51 00 00 00 00 00
    Commands leading to the command that caused the error were:
    CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name
    ec 00 00 00 00 00 00 00 00:00:21.800 IDENTIFY DEVICE
    00 00 00 00 00 00 00 ff 00:00:21.800 NOP [Abort queued commands]
    00 00 00 00 00 00 00 ff 00:00:00.000 NOP [Abort queued commands]
    Error 33 occurred at disk power-on lifetime: 9165 hours (381 days + 21 hours)
    When the command that caused the error occurred, the device was active or idle.
    After command completion occurred, registers were:
    ER ST SC SN CL CH DH
    84 51 00 00 00 00 a0
    Commands leading to the command that caused the error were:
    CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name
    ec 00 00 00 00 00 a0 00 00:18:36.900 IDENTIFY DEVICE
    a1 00 00 00 00 00 a0 00 00:18:36.900 IDENTIFY PACKET DEVICE
    ec 00 00 00 00 00 a0 ff 00:18:36.900 IDENTIFY DEVICE
    ec 00 00 00 00 00 a0 ff 00:18:00.100 IDENTIFY DEVICE
    ec 00 00 00 00 00 a0 ff 00:17:49.100 IDENTIFY DEVICE
    SMART Self-test log structure revision number 1
    Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error
    # 1 Extended offline Completed without error 00% 9899 -
    # 2 Extended offline Completed without error 00% 9894 -
    Selective Self-tests/Logging not supported
They are both SSDs, while the extra drive is an HDD. I don't think I've ever had an issue with the HDD showing up, so maybe this is an SSD-specific problem?

  • Dataguard 11g setup between AIX and Linux

    Hi,
We are planning to move our Oracle databases from AIX 6.1 to Oracle Linux 6.2.
To reduce the downtime, we are thinking of setting up a Data Guard standby (physical or logical) and doing the switchover.
Have you performed this before?
Will you please send me the steps?
    Thanks,
    DR

    Hello again;
I'm thinking no, for the same reason.
When my shop moved from AIX to Linux we just installed Oracle on Linux and patched it to the same level as the AIX side. We created users, tablespaces, jobs, etc. in advance and then just used import/export to move the needed schemas. Data Pump makes this much easier.
Do a schema compare and switch when ready. But in any event I don't believe Data Guard can help you; you mostly have a migration issue.
    Please consider closing your question when complete.
Oracle Database 10g & Multi-Terabyte Database Migration
http://www.oracle.com/technetwork/database/features/availability/thehartfordprofile-xtts-133180.pdf
Incrementally Updating Transportable Tablespaces using RMAN
http://www.oracle.com/technetwork/database/features/availability/itts-130873.pdf
Platform Migration Using Transportable Database Oracle Database 11g and 10g Release 2
http://www.oracle.com/technetwork/database/features/availability/maa-wp-10gr2-platformmigrationtdb-131164.pdf
    Best Regards
    mseberg
    Edited by: mseberg on Nov 13, 2012 6:31 PM

  • DBLoad utility: (huge) difference between Windows and Linux

    Hello
I have a backfile of 55M entries to load. I prepared it under Linux (dual-core machine + 4 GB - Mandriva distribution - Java 1.6). When I run DbLoad, here is the (partial) log:
    java -Xmx2048m -jar /home/pd51444/jNplMatchBackfile_Main_0.2/lib/je-3.3.69.jar DbLoad -h . -s group.db -f group.db.txt -v
    loaded 2817812 records 63275 ms - % completed: 5
    loaded 5624893 records 141023 ms - % completed: 10
    loaded 8462602 records 962019 ms - % completed: 15
    loaded 11285385 records 2401566 ms - % completed: 20
    loaded 14094928 records 786746 ms - % completed: 25
    loaded 16914275 records 8965457 ms - % completed: 30
    loaded 19741557 records 15766560 ms - % completed: 35
    loaded 22567310 records 2226015 ms - % completed: 40
    loaded 25376363 records 19662455 ms - % completed: 45
    Then I copied the exact same file on my Windows laptop (dual core + 2GB - XP - java 1.6), and DbLoad goes much faster:
C:\nplmatch\db>java -Xmx1024m -jar ..\jar\je-3.3.69.jar DbLoad -h . -s group.db -f group.db.txt -v
    Load start: Thu Oct 02 10:33:23 CEST 2008
    loaded 2817812 records 59876 ms - % completed: 5
    loaded 5624893 records 69283 ms - % completed: 10
    loaded 8462602 records 77470 ms - % completed: 15
    loaded 11285385 records 69688 ms - % completed: 20
    loaded 14094928 records 62716 ms - % completed: 25
    loaded 16914275 records 59122 ms - % completed: 30
    loaded 19741557 records 63200 ms - % completed: 35
    loaded 22567310 records 58654 ms - % completed: 40
    loaded 25376363 records 61482 ms - % completed: 45
    loaded 28197663 records 58889 ms - % completed: 50
    loaded 31019453 records 55937 ms - % completed: 55
    loaded 33839878 records 62045 ms - % completed: 60
    loaded 36664839 records 65749 ms - % completed: 65
    loaded 39498035 records 100718 ms - % completed: 70
    loaded 42302599 records 99733 ms - % completed: 75
    loaded 45125268 records 96000 ms - % completed: 80
    loaded 47947180 records 92749 ms - % completed: 85
    loaded 50755655 records 85485 ms - % completed: 90
    loaded 53578015 records 96240 ms - % completed: 95
    Load end: Thu Oct 02 10:57:36 CEST 2008
    Also I use the same je.properties file on both platforms.
    Any idea where this performance problem comes from?
    Thanks in advance
    Best regards
    Philippe

Hello Philippe,
    Nothing jumps out at me, but you might try the following:
. Check the status of disk write caches. On Linux the cache may be disabled while it is enabled on Windows.
. Does the Windows machine have an SSD?
. Run with -verbose:gc to see if the Linux run is being held up by full GCs.
. Use top or some other utility to see if something else is running on the Linux box.
. Take some random ctrl-\'s to get some stack traces to see what is going on when things get slow.
. Check the JVM ergonomics. You may be getting a server or client JVM without knowing. Force it one way or the other with -server or -client.
    Here is a (future) FAQ entry regarding disk write caches (poorly formatted at the moment):
During my system testing I pulled the power cord on my server to make the ultimate test of JE's durability claims. I am using commitSync() for my transactions, but some of the data that JE said it had committed was not on disk when the system came back up. What gives?
    Quoting the Berkeley DB Reference Guide:
Many disk drives contain onboard caches. Some of these drives include battery backup or other functionality that guarantees that all cached data will be completely written if the power fails. These drives can offer substantial performance improvements over drives without caching support. However, some caching drives rely on capacitors or other mechanisms that guarantee only that the write of the current sector will complete. These drives can endanger your database and potentially cause corruption of your data.
To avoid losing your data, make sure the caching on your disk drives is properly configured so the drive will never report that data has been written unless the data is guaranteed to be written in the face of a power failure. Many times, this means that write-caching on the disk drive must be disabled.
    Some operating systems enable the disk write cache by default.
    If you need true durability in the face of a power failure, then you should
    verify that the disk write cache is disabled or that you have some alternative
    means of ensuring durability (e.g. nvram, battery backup, an Uninterruptible
    Power Supply (UPS), Solid State Disk (SSD), etc.) Some disk drives may
    actually require changing hardware jumpers to enable/disable the write cache.
    You can check the status of the write cache using the
    hdparm utility on Linux and the format utility on Solaris.
    On Windows, use the Windows Explorer. Right click on
    the disk drive that you want to check, select Properties,
    click on the Hardware tab, select the desired disk, click
    on the Properties button, click on
    Policies and verify the cache setting with the check box.
    On Windows Server 2003 you generally disable the write caching
    from within the RAID controller software (OEM specific).

  • How to set up a VPN between an RVS4000 and a Linux PC?

    The "vpnc" Cisco-compatible VPN client is available in the latest version of Ubuntu.
    Has anyone succeeded in establishing a VPN?

    The WRT54G does not support IPsec VPN. You could try the WRV210 instead.

  • Difference in read/write between Windows Vs Linux

    Hi,
    Can you please tell me whether this is a bug or my ignorance. I thought that if a Java program runs on Windows, it will run on any other platform too, without editing any code. That proved me wrong. So is it a bug?
    Ref: Fedora Linux version 5
    Kernel ver. 2.6.17
    JDK version - build 1.5.0_06-b05
    I was working on a web service. The server-side program connects to an application using a socket and sends data back to the client.
    It worked fine on Windows XP, but when I moved the service to Linux (the version above), the program stopped working. When I explored it, I found 3 things.
    1. I was using PrintWriter
    PrintWriter outputStream = new PrintWriter(new OutputStreamWriter(socket.getOutputStream()), true);
    to write into the socket. It works fine on Windows but not on Linux, i.e. it sends different bytes on Linux and thus fails. When I changed it to BufferedOutputStream it works fine.
    BufferedOutputStream outputStream = new BufferedOutputStream(socket.getOutputStream());
    2. I was using BufferedReader
    BufferedReader inputStream = new BufferedReader(new InputStreamReader(socket.getInputStream()));
    to read data from the socket. It reads, but the bytes vary from what is expected. The same program works very well on Windows.
    I now use DataInputStream; that works, but...
    DataInputStream inputStream = new DataInputStream(socket.getInputStream());
    3. While reading, it was supposed to read 2098 bytes; instead it reads 1448 bytes. It reads correctly on Windows, but not on the above-said Linux.
    Is this a bug, or is there a difference in reading and writing from a socket between Windows and Linux (even though it is the same JVM)?
    Thanks
    Sasi.

    Thanks a lot to both of you.
    > I believe \r\n vs \n could not be the reason, because I was sending binary data.
    Then you should not be using Writer at all!
    > With regard to "utf-8", I thought of that but I did not try it in the function, because I was trying to send and receive raw binary data.
    Then you should not be using Writer at all!
    > Now, you may ask why I used PrintWriter. Because the data had unsigned byte values.
    Makes no difference - you should not be using Writer at all!
    > So I converted everything into a char array and sent it.
    That will cause you problems at some point on some platform unless you use Base64 or hex encoding.
    > Especially wherever an unsigned byte was required, I used (byte & 0xff). It worked very well on Windows, so I just went ahead doing other things.
    I use byte & 0xff for dealing with unsigned values, but your use sounds dangerous.
    > Here my question is: I assumed that if a program is written in Java and compiled on JVM x, then irrespective of the underlying OS, if the JVM is the same x, the program should run without error. Is that wrong? If it is not wrong, then is it a JRE bug?
    Your program will work if you have not used any implicit or explicit platform dependencies. For example, if in your program you have hard-coded a path such as "C:\Program Files", then it will work on Windows (but only if it has a C drive) but not on Unix.
    Any program has to deal with a number of platform-specific features, and Java protects you from them as much as possible, BUT you must deal with things like Locale, EOL, and the default character encoding.
    No! Of course it is not a JRE bug. You have coded to a specific platform and are then surprised when it fails on other platforms.
    > Did you see the third point of my original problem statement? How do I solve that?
    Since I don't have a view of your code I can't make an informed comment. I think I can guess what the problem is, but I won't speculate.
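    To make the "no Writer at all" advice concrete, here is a minimal sketch of binary-safe socket I/O (the class name and the length-prefix framing are my own illustration, not the original poster's code). It also addresses the third point: a single read() may legitimately return after one TCP segment (about 1448 bytes of payload) rather than the full 2098-byte message, while readFully loops internally until every byte has arrived:

```java
import java.io.BufferedInputStream;
import java.io.BufferedOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.net.Socket;

// Sketch: binary-safe socket I/O without Writer/Reader classes.
// The sender prefixes the payload with its length; the receiver uses
// readFully, which keeps reading until all bytes are in, instead of
// assuming one read() delivers the whole message.
public class BinaryEcho {
    static void send(Socket socket, byte[] payload) throws IOException {
        DataOutputStream out = new DataOutputStream(
                new BufferedOutputStream(socket.getOutputStream()));
        out.writeInt(payload.length);   // length prefix
        out.write(payload);             // raw bytes, no charset conversion
        out.flush();
    }

    static byte[] receive(Socket socket) throws IOException {
        DataInputStream in = new DataInputStream(
                new BufferedInputStream(socket.getInputStream()));
        int length = in.readInt();
        byte[] payload = new byte[length];
        in.readFully(payload);          // blocks until all 'length' bytes read
        return payload;
    }
}
```

    With this pattern there is no character encoding, no EOL translation, and no dependence on how the OS happens to chunk the TCP stream, so the behavior is the same on Windows and Linux.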

  • Firewire between Mac and Windows

    Hi,
    I'm having some problems connecting Windows 7 and Mac OS X (10.6.5) with FireWire.
    I've installed the ubcore drivers (from Unibrain) on Windows to get FireWire network drivers.
    The network works perfectly between Windows and Boot Camp, between Mac and Linux, and in target mode between the MBP and Windows, but I get an error when I try to connect Windows and Mac.
    Windows seems to see new hardware connected and searches for drivers that it doesn't find, and the Mac gives me an error:
    ERROR: FireWire (OHCI) Lucent ID 5901 built-in: handleUnrecoverableErrorInt.
    It seems like the Mac is not mounting a network connection over FireWire, but something else.
    Is there a way to force network mode for FireWire? Or is the problem somewhere else?

    So, I've got my answer.
    You can't use FireWire networking between Mac OS X 10.6.5 and Win7. Unibrain doesn't support it, and I can't find any other way to make it work... so you just can't.
    So sad :'(

  • What are the differences between HP-UX and linux?

    Hi,
    Could you please send me the differences between HP-UX and Linux.
    Thanks.

    This is a forum about Oracle SQL and PL/SQL.
    It is not about Unix or Linux.
    Your question is inappropriate here and demonstrates inability to do any research on your own.
    Sybrand Bakker
    Senior Oracle DBA

  • Move encrypted filesystem between OS X and linux?

    I would like, if possible, to copy and use an encrypted filesystem between OS X and Linux. I haven't found a way to do this and would appreciate any assistance. That is, I want to create something like an encrypted sparseimage that can also be copied to, read, and mounted on a Linux system. If I build an ext2 filesystem in a sparseimage, is there any way I can get Linux to understand and mount it (presumably this requires Linux somehow recognising the sparseimage as a partition)? Or alternatively, is there any way to mount an encrypted Linux filesystem on OS X? Cross-mounting isn't a solution to my problem; I really need to be able to put a secure filesystem in a file on a USB drive or CD or..., and then read it on a Mac or Linux system when I plug it in. Even the simple answer "no, it can't be done" would be a great help in saving my time... I'd like to avoid solutions that involve creating a clear copy of the encrypted system in file space (e.g. tar/gzip/encrypt/delete -> copy encrypted -> decrypt/gunzip/untar) if at all possible. Mechanisms that decrypt on access are much preferable.
    Thanks and Best Wishes
    Bob

    I believe TrueCrypt can do what you want.

  • ASCII-EBCDIC conversion between z/OS and Linux

    Hi experts, we are migrating our landscape to z/OS (DB+ASCS) and Linux (PAS). We have our GLOBALHOST on z/OS, but we are experiencing some problems when we try to install our application servers because of the conversion between platforms.
    In the planning guide we can see that there is a way to mount NFS file systems exported from z/OS that handles this conversion automatically, but the commands mentioned in the guide are for UNIX and not for Linux.
    Does any of you have this kind of installation and could help us set these parameters correctly?
    Or has any of you faced these problems before?
    Regards
    gustavo

    First, yes, we have z/OS systems programmers and DBAs with specific knowledge of DB2 z/OS. One of the reasons we initially went with the Z platform when we implemented SAP was that our legacy systems ran there for many years and our company had a lot of Z knowledge and experience. zSeries was one of our "core competencies".
    I also need to give you a little more information about our Z setup. We actually had 2 z9 CECs in a sysplex, one in our primary data center and another close by in our DR site and connected by fiber. This allowed us to run SAP as HA on the Z platform. For highly used systems like production ERP we actually ran our DB2 instances active/active. This is one of the few advantages of the Z platform unavailable on other platforms (except Oracle RAC, which is also expensive but can at least be implemented on commodity hardware). Another advantage is that the SAP support personnel for DB2 z/OS are extremely knowledgeable and respond to issues very quickly.
    We also chose the Z platform because of the touted "near-continuous availability" which sounded very good. Let me assure you, however, that although SAP has been making great strides with things like the enhancement pack installer, at present you will never have zero downtime running SAP on any platform. Specifically you will still have planned downtime for SAP kernel updates and support packs or enhancement packs, period. The "near-continuous availability" in this context refers to zero unplanned downtime. In my experience this is not the case either. We had several instances of unplanned downtime, the most recent of which had to do with issues when the CECs got to 100% CPU utilization for a brief period of time and could not free some asinine small memory area, which caused the entire sysplex to pause all LPARs until it was dealt with (yes, this could be dealt with using system automation, but our Z folks would prefer to deal with these manually since each situation can be different). We worked with IBM on a PMR for several months, but our eventual "workaround" was much better. We stopped running our DB2 instances as active/active and never had the problem again. We chose this "workaround" because we knew we were abandoning the platform and any of the test fixes from IBM required a rolling update of z/OS in all LPARs (10 total at the time), which is a major hassle, especially when you do it several times applying several different fixes until the problem is finally solved.
    We also experienced some issues with DB2 z/OS itself. In one case, some data in a table in production got corrupted (yikes!!). SAP support helped us correct the data based on our QA system, and IBM delivered a PTF (or maybe it was a ++APAR) to correct the problem. We also had several instances of strangely poor performance in ERP or BI that were solved with a PTF or by using some special RUNSTATS output by some IBM DB2 tool our DBAs ran when we gave them the "bad" query. Every time we updated DB2 z/OS with an RSU, it felt like a craps shoot. Sometimes there were no issues revealed during testing; other times major issues were uncovered. This made us very hesitant when it came to patching DB2 and also made us stay well behind currently available maintenance so we could let other organizations identify problems.
    Back to the topic of downtime related to DB2 z/OS itself, we know another company which runs SAP on Z that takes several hours of downtime each week (early Sunday morning, I think) to REORG some large BLOB tables (if you're not in the monthly conference call for SAP on DB2 z/OS organizations, I suggest you join in). The need for RUNSTATS and REORGs to be dealt with explicitly (typically once a day for RUNSTATS and once a week for REORGs, at least for us) is a major negative of the platform, in my opinion. It is amazing what "proper" RUNSTATS can do to a previously poor-performing query (hours reduced to seconds!). Also, due to the way REORGs are handled in DB2 z/OS, you'll need a lot of extra disk space for the image copies which get created. In our experience you need enough temp disk to hold the shadow copy of the largest table being REORGd and the image copies of the largest tables that are REORGd in the same time period. I recall that the image copies can be migrated to tape or virtual tape to free the image-copy space back up using a periodic job, but it was a huge amount of trial and error to properly size this temp disk space, especially when the tables requiring a REORG are not the same week-to-week. We heard that with DB2 z/OS v10 RUNSTATS and REORGs will be dealt with automatically by DB2, but I do not know if it has even been certified for SAP yet (based on recent posts in this forum it would appear not). Even when it is, I would not recommend going to it immediately (we made this mistake when DB2 z/OS v9 was certified and suffered for months getting bugs with SAP and DB2 interoperability fixed). Also, due to the way that REORGs work on BLOB tables, there will be a period of table unavailability. This caused us some issues/headaches. There are some extra REORG parameters you can set, but these issues are still always a possibility, and I think that is why the company mentioned previously just took the weekly downtime to finish the REORGs on their large BLOB tables. They are very smart folks that are very experienced with zSeries, and they engaged IBM experts for assistance to try and perform the REORGs online, and yet they still take the downtime to perform the BLOB REORGs offline. In contrast, these periodic database tasks do not require our Basis team to do anything with SQLServer and do not cause our end-users grief when a table is unavailable.
    Our reasons for moving platforms (which, let me assure you was a major undertaking and was considered long and hard) were based on 3 things:
    1. Complexity
    2. Performance
    3. Cost
    When I speak of complexity, let me give you some data... There was a time when ~50% of all of the OSS messages the Basis team opened with SAP were in the BC-DB-DB2 category. In contrast, I think we've opened 1 or 2 OSS messages in the BC-DB-MSS category ever. Many of the OSS messages for DB2 z/OS resulted in a fix from either SAP or from IBM. We've had several instances of applying a PTF, ++APAR, or RSU to z/OS and/or DB2 which fixed serious "unable to perform a job function" problems with SAP. We've yet to have to apply a single update to Windows or SQLServer to fix an issue with SAP.
    To summarize... Comparing our previous and current SAP platforms, the performance was slower, the cost higher, and the complexity much higher. I have no doubt (especially with the newer z10 and zEnterprise 196) that we could certainly have built a zSeries SAP solution which performed on par with what we have now, but... I could not even fathom a guess as to the cost. I suspect this is why you don't see any data for the standard SAP SD benchmark on zSeries.
    I suspect you're already committed to the platform since deploying a Z machine, even in a lab/sandbox environment isn't as easy as going down to your local computer dealer and buying a $500 test server to install on, but... If you wanted to run SAP on DB2 I would suggest looking at DB2 LUW on either X86_64 Linux or on IBM's pSeries platform.
    Brian
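    If it turns out the ASCII-EBCDIC conversion has to be done in application code rather than by the NFS mount options, the JVM's charset support can handle it. A minimal sketch (class and method names are my own illustration), assuming the IBM1047 charset - a common z/OS code page - is available in your JDK's extended charsets:

```java
import java.nio.charset.Charset;

// Sketch: converting EBCDIC bytes (code page IBM1047, common on z/OS)
// to ASCII bytes and back, via a Java String as the intermediate form.
public class EbcdicConvert {
    private static final Charset EBCDIC = Charset.forName("IBM1047");
    private static final Charset ASCII  = Charset.forName("US-ASCII");

    static byte[] ebcdicToAscii(byte[] ebcdicBytes) {
        String text = new String(ebcdicBytes, EBCDIC); // decode EBCDIC
        return text.getBytes(ASCII);                   // encode ASCII
    }

    static byte[] asciiToEbcdic(byte[] asciiBytes) {
        String text = new String(asciiBytes, ASCII);   // decode ASCII
        return text.getBytes(EBCDIC);                  // encode EBCDIC
    }
}
```

    Note this is only appropriate for text files; binary content must never be passed through a charset conversion.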
