Drives disappear between UEFI and Linux

This problem has had me stumped for a month, so I'm hoping someone can shed some light on it.
Originally I had an MSI motherboard with a legacy BIOS, GRUB2 booting Arch and Windows 7, and no problems whatsoever. I upgraded to an Asus motherboard with UEFI, and at first Arch booted fine (Windows just blue-screened). Subsequent boots would randomly fail, sometimes telling me Arch couldn't find the root drive, sometimes the home drive, and other times it would boot successfully. There was no telling how many times I would need to reboot before Arch would find all the drives. I returned the Asus board and replaced it with a Gigabyte Z97-D3H, but the problem continued.
Finally, this Tuesday I reinstalled Arch in UEFI mode, and instead of a bootloader, UEFI now boots Linux directly, as described in the "Using UEFI directly" section of the EFISTUB wiki page. Still, Arch randomly fails to find my drives.
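For reference, the kind of direct-boot entry that wiki section describes is created with efibootmgr. This is only a sketch based on the disk layout and labels shown below; the exact kernel parameters I used may have differed slightly:

# Create a UEFI boot entry that loads the kernel directly (EFISTUB), no bootloader involved
efibootmgr --create --disk /dev/sda --part 1 --label "Arch Linux" \
    --loader /vmlinuz-linux \
    --unicode 'root=LABEL=Hige rw initrd=\initramfs-linux.img' --verbose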
Here is my drive layout when Arch boots properly (when it doesn't, lsblk also can't see the missing drive):
┌── kiba ⟶ Paradise ~
└───── lsblk -o name,fstype,size,label,mountpoint
NAME   FSTYPE   SIZE LABEL  MOUNTPOINT
sda            59.6G
├─sda1 vfat     512M SHAMAN /boot
├─sda2 ext4    51.1G Hige   /
└─sda3 swap       8G swap   [SWAP]
sdb           119.2G
└─sdb1 ext4   119.2G Kiba   /home
sdc           465.8G
├─sdc1 ntfs     128G Tsume
└─sdc2 ext4   337.8G Toboe  /home/kiba/Toboe
And /etc/fstab:
# <file system> <dir> <type> <options> <dump> <pass>
# /dev/sda2 UUID=67de5f82-861e-4934-94ba-9fdde2225bb1
LABEL=Hige / ext4 rw,noatime,discard,data=ordered 0 1
# /dev/sda1 UUID=6A35-D4DE
LABEL=SHAMAN /boot vfat rw,noatime,fmask=0022,dmask=0022,codepage=437,iocharset=iso8859-1,shortname=mixed,errors=remount-ro 0 2
# /dev/sdb1 UUID=c89186ab-c005-4598-a5b9-2be4d0d6202c
LABEL=Kiba /home ext4 rw,noatime,discard,data=ordered 0 2
# /dev/sda3 UUID=0ee5665b-13e3-4592-bdf1-5701636697f3
LABEL=swap none swap defaults 0 0
LABEL=Toboe /home/kiba/Toboe ext4 defaults 0 0
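Since everything mounts by label, a quick way to confirm on any given boot whether the kernel actually registered a drive is to look under /dev/disk/by-label (this is just the check I run from a shell; nothing here is specific to my setup):

# List the labelled filesystems the kernel currently sees
ls -l /dev/disk/by-label/
# Same information in table form
lsblk -o NAME,LABEL,UUID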

Oops, I forgot to mention the SMART status. As far as I can tell, it's fine on all the drives, but I could be missing something.
Here's the output for the root drive:
┌── kiba ⟶ Paradise ~ 11:49:50
└───── s smartctl -t long /dev/sda && sleep 61 && s smartctl -a /dev/sda
smartctl 6.3 2014-07-26 r3976 [x86_64-linux-3.18.6-1-ARCH] (local build)
Copyright (C) 2002-14, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF OFFLINE IMMEDIATE AND SELF-TEST SECTION ===
Sending command: "Execute SMART Extended self-test routine immediately in off-line mode".
Drive command "Execute SMART Extended self-test routine immediately in off-line mode" successful.
Testing has begun.
Please wait 1 minutes for test to complete.
Test will complete after Sat Feb 14 11:52:27 2015
Use smartctl -X to abort test.
smartctl 6.3 2014-07-26 r3976 [x86_64-linux-3.18.6-1-ARCH] (local build)
Copyright (C) 2002-14, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF INFORMATION SECTION ===
Model Family: JMicron based SSDs
Device Model: KINGSTON SV100S264G
Serial Number: 08AAA0003175
Firmware Version: D100811a
User Capacity: 64,023,257,088 bytes [64.0 GB]
Sector Size: 512 bytes logical/physical
Rotation Rate: Solid State Device
Form Factor: 2.5 inches
Device is: In smartctl database [for details use: -P show]
ATA Version is: ATA8-ACS (minor revision not indicated)
SATA Version is: SATA 2.6, 3.0 Gb/s
Local Time is: Sat Feb 14 11:52:28 2015 PST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
General SMART Values:
Offline data collection status: (0x00) Offline data collection activity
was never started.
Auto Offline Data Collection: Disabled.
Self-test execution status: ( 0) The previous self-test routine completed
without error or no self-test has ever
been run.
Total time to complete Offline
data collection: ( 30) seconds.
Offline data collection
capabilities: (0x1b) SMART execute Offline immediate.
Auto Offline data collection on/off support.
Suspend Offline collection upon new
command.
Offline surface scan supported.
Self-test supported.
No Conveyance Self-test supported.
No Selective Self-test supported.
SMART capabilities: (0x0003) Saves SMART data before entering
power-saving mode.
Supports SMART auto save timer.
Error logging capability: (0x01) Error logging supported.
General Purpose Logging supported.
Short self-test routine
recommended polling time: ( 1) minutes.
Extended self-test routine
recommended polling time: ( 1) minutes.
SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
1 Raw_Read_Error_Rate 0x000b 100 100 050 Pre-fail Always - 0
2 Throughput_Performance 0x0005 100 100 050 Pre-fail Offline - 0
3 Unknown_Attribute 0x0007 100 100 050 Pre-fail Always - 0
5 Reallocated_Sector_Ct 0x0013 100 100 050 Pre-fail Always - 0
7 Unknown_Attribute 0x000b 100 100 050 Pre-fail Always - 0
8 Unknown_Attribute 0x0005 100 100 050 Pre-fail Offline - 0
9 Power_On_Hours 0x0012 100 100 000 Old_age Always - 13005
10 Unknown_Attribute 0x0013 100 100 050 Pre-fail Always - 0
12 Power_Cycle_Count 0x0012 100 100 000 Old_age Always - 3565
168 SATA_Phy_Error_Count 0x0012 100 100 000 Old_age Always - 15
175 Bad_Cluster_Table_Count 0x0003 100 100 010 Pre-fail Always - 0
192 Unexpect_Power_Loss_Ct 0x0012 100 100 000 Old_age Always - 0
194 Temperature_Celsius 0x0022 031 100 020 Old_age Always - 31 (Min/Max 23/40)
197 Current_Pending_Sector 0x0012 100 100 000 Old_age Always - 0
240 Unknown_Attribute 0x0013 100 100 050 Pre-fail Always - 0
170 Bad_Block_Count 0x0003 100 100 010 Pre-fail Always - 0 89 0
173 Erase_Count 0x0012 100 100 000 Old_age Always - 5 9441 7415
SMART Error Log Version: 1
ATA Error Count: 15 (device log contains only the most recent five errors)
CR = Command Register [HEX]
FR = Features Register [HEX]
SC = Sector Count Register [HEX]
SN = Sector Number Register [HEX]
CL = Cylinder Low Register [HEX]
CH = Cylinder High Register [HEX]
DH = Device/Head Register [HEX]
DC = Device Command Register [HEX]
ER = Error register [HEX]
ST = Status register [HEX]
Powered_Up_Time is measured from power on, and printed as
DDd+hh:mm:SS.sss where DD=days, hh=hours, mm=minutes,
SS=sec, and sss=millisec. It "wraps" after 49.710 days.
Error 15 occurred at disk power-on lifetime: 12999 hours (541 days + 15 hours)
When the command that caused the error occurred, the device was active or idle.
After command completion occurred, registers were:
ER ST SC SN CL CH DH
84 51 00 00 00 00 a0
Commands leading to the command that caused the error were:
CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name
ec 00 00 00 00 00 a0 08 00:11:20.100 IDENTIFY DEVICE
b1 c1 00 00 00 00 00 ff 00:11:19.800 DEVICE CONFIGURATION FREEZE LOCK [OBS-ACS-3]
f5 00 00 00 00 00 00 ff 00:11:19.800 SECURITY FREEZE LOCK
ec 00 00 00 00 00 00 00 00:11:17.600 IDENTIFY DEVICE
ec 00 00 00 00 00 00 00 00:11:17.600 IDENTIFY DEVICE
Error 14 occurred at disk power-on lifetime: 12999 hours (541 days + 15 hours)
When the command that caused the error occurred, the device was active or idle.
After command completion occurred, registers were:
ER ST SC SN CL CH DH
84 51 00 00 00 00 a0
Commands leading to the command that caused the error were:
CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name
ec 00 00 00 00 00 a0 08 00:15:36.300 IDENTIFY DEVICE
ec 00 00 00 00 00 00 ff 00:15:35.900 IDENTIFY DEVICE
ec 00 00 00 00 00 00 00 00:15:33.700 IDENTIFY DEVICE
ec 00 00 00 00 00 00 00 00:15:33.700 IDENTIFY DEVICE
ec 00 00 00 00 00 00 00 00:15:33.700 IDENTIFY DEVICE
Error 13 occurred at disk power-on lifetime: 12986 hours (541 days + 2 hours)
When the command that caused the error occurred, the device was active or idle.
After command completion occurred, registers were:
ER ST SC SN CL CH DH
84 51 00 00 00 00 00
Commands leading to the command that caused the error were:
CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name
ec 00 00 00 00 00 00 00 00:03:08.200 IDENTIFY DEVICE
ec 00 00 00 00 00 00 ff 00:03:02.700 IDENTIFY DEVICE
ec 00 00 00 00 00 00 ff 00:02:46.000 IDENTIFY DEVICE
ec 00 00 00 00 00 00 ff 00:02:41.400 IDENTIFY DEVICE
ec 00 00 00 00 00 00 ff 00:02:24.700 IDENTIFY DEVICE
Error 12 occurred at disk power-on lifetime: 12763 hours (531 days + 19 hours)
When the command that caused the error occurred, the device was active or idle.
After command completion occurred, registers were:
ER ST SC SN CL CH DH
84 51 00 00 00 00 00
Commands leading to the command that caused the error were:
CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name
ec 00 00 00 00 00 00 00 00:00:35.900 IDENTIFY DEVICE
00 00 00 00 00 00 00 ff 00:00:29.400 NOP [Abort queued commands]
00 00 00 00 00 00 00 ff 00:00:12.400 NOP [Abort queued commands]
00 00 00 00 00 00 00 ff 00:00:00.000 NOP [Abort queued commands]
Error 11 occurred at disk power-on lifetime: 12706 hours (529 days + 10 hours)
When the command that caused the error occurred, the device was active or idle.
After command completion occurred, registers were:
ER ST SC SN CL CH DH
84 51 00 00 00 00 00
Commands leading to the command that caused the error were:
CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name
ec 00 00 00 00 00 00 00 00:02:30.800 IDENTIFY DEVICE
e7 00 00 00 00 00 a0 ff 00:02:11.900 FLUSH CACHE
e7 00 00 00 00 00 a0 08 00:01:44.500 FLUSH CACHE
e7 00 00 00 00 00 a0 08 00:01:43.300 FLUSH CACHE
e7 00 00 00 00 00 a0 08 00:01:40.300 FLUSH CACHE
SMART Self-test log structure revision number 1
Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error
# 1 Extended offline Completed without error 00% 13005 -
# 2 Extended offline Completed without error 00% 13005 -
Selective Self-tests/Logging not supported
And for the home drive:
┌── kiba ⟶ Paradise ~ 11:52:28
└───── s smartctl -t long /dev/sdb && sleep 61 && s smartctl -a /dev/sdb
smartctl 6.3 2014-07-26 r3976 [x86_64-linux-3.18.6-1-ARCH] (local build)
Copyright (C) 2002-14, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF OFFLINE IMMEDIATE AND SELF-TEST SECTION ===
Sending command: "Execute SMART Extended self-test routine immediately in off-line mode".
Drive command "Execute SMART Extended self-test routine immediately in off-line mode" successful.
Testing has begun.
Please wait 1 minutes for test to complete.
Test will complete after Sat Feb 14 11:54:25 2015
Use smartctl -X to abort test.
smartctl 6.3 2014-07-26 r3976 [x86_64-linux-3.18.6-1-ARCH] (local build)
Copyright (C) 2002-14, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF INFORMATION SECTION ===
Model Family: JMicron based SSDs
Device Model: KINGSTON SV100S2128G
Serial Number: 08BB20039237
Firmware Version: D110225a
User Capacity: 128,035,676,160 bytes [128 GB]
Sector Size: 512 bytes logical/physical
Rotation Rate: Solid State Device
Form Factor: 2.5 inches
Device is: In smartctl database [for details use: -P show]
ATA Version is: ATA8-ACS (minor revision not indicated)
SATA Version is: SATA 2.6, 3.0 Gb/s
Local Time is: Sat Feb 14 11:54:26 2015 PST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
General SMART Values:
Offline data collection status: (0x00) Offline data collection activity
was never started.
Auto Offline Data Collection: Disabled.
Self-test execution status: ( 0) The previous self-test routine completed
without error or no self-test has ever
been run.
Total time to complete Offline
data collection: ( 30) seconds.
Offline data collection
capabilities: (0x1b) SMART execute Offline immediate.
Auto Offline data collection on/off support.
Suspend Offline collection upon new
command.
Offline surface scan supported.
Self-test supported.
No Conveyance Self-test supported.
No Selective Self-test supported.
SMART capabilities: (0x0003) Saves SMART data before entering
power-saving mode.
Supports SMART auto save timer.
Error logging capability: (0x01) Error logging supported.
General Purpose Logging supported.
Short self-test routine
recommended polling time: ( 1) minutes.
Extended self-test routine
recommended polling time: ( 1) minutes.
SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
1 Raw_Read_Error_Rate 0x000b 100 100 050 Pre-fail Always - 0
2 Throughput_Performance 0x0005 100 100 050 Pre-fail Offline - 0
3 Unknown_Attribute 0x0007 100 100 050 Pre-fail Always - 0
5 Reallocated_Sector_Ct 0x0013 100 100 050 Pre-fail Always - 0
7 Unknown_Attribute 0x000b 100 100 050 Pre-fail Always - 0
8 Unknown_Attribute 0x0005 100 100 050 Pre-fail Offline - 0
9 Power_On_Hours 0x0012 100 100 000 Old_age Always - 9899
10 Unknown_Attribute 0x0013 100 100 050 Pre-fail Always - 0
12 Power_Cycle_Count 0x0012 100 100 000 Old_age Always - 3541
168 SATA_Phy_Error_Count 0x0012 100 100 000 Old_age Always - 13
175 Bad_Cluster_Table_Count 0x0003 100 100 010 Pre-fail Always - 0
192 Unexpect_Power_Loss_Ct 0x0012 100 100 000 Old_age Always - 0
194 Temperature_Celsius 0x0022 034 100 020 Old_age Always - 34 (Min/Max 23/40)
197 Current_Pending_Sector 0x0012 100 100 000 Old_age Always - 0
240 Unknown_Attribute 0x0013 100 100 050 Pre-fail Always - 0
170 Bad_Block_Count 0x0003 100 100 010 Pre-fail Always - 0 135 0
173 Erase_Count 0x0012 100 100 000 Old_age Always - 2 16503 12094
SMART Error Log Version: 1
ATA Error Count: 37 (device log contains only the most recent five errors)
CR = Command Register [HEX]
FR = Features Register [HEX]
SC = Sector Count Register [HEX]
SN = Sector Number Register [HEX]
CL = Cylinder Low Register [HEX]
CH = Cylinder High Register [HEX]
DH = Device/Head Register [HEX]
DC = Device Command Register [HEX]
ER = Error register [HEX]
ST = Status register [HEX]
Powered_Up_Time is measured from power on, and printed as
DDd+hh:mm:SS.sss where DD=days, hh=hours, mm=minutes,
SS=sec, and sss=millisec. It "wraps" after 49.710 days.
Error 37 occurred at disk power-on lifetime: 9893 hours (412 days + 5 hours)
When the command that caused the error occurred, the device was active or idle.
After command completion occurred, registers were:
ER ST SC SN CL CH DH
84 51 00 00 00 00 a0
Commands leading to the command that caused the error were:
CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name
ec 00 00 00 00 00 a0 08 00:00:09.200 IDENTIFY DEVICE
ec 00 00 00 00 00 00 ff 00:00:08.800 IDENTIFY DEVICE
ec 00 00 00 00 00 00 ff 00:00:08.800 IDENTIFY DEVICE
ec 00 00 00 00 00 00 00 00:00:06.600 IDENTIFY DEVICE
ec 00 00 00 00 00 00 00 00:00:06.600 IDENTIFY DEVICE
Error 36 occurred at disk power-on lifetime: 9880 hours (411 days + 16 hours)
When the command that caused the error occurred, the device was active or idle.
After command completion occurred, registers were:
ER ST SC SN CL CH DH
84 51 00 00 00 00 00
Commands leading to the command that caused the error were:
CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name
ec 00 00 00 00 00 00 00 00:09:55.000 IDENTIFY DEVICE
ec 00 00 00 00 00 a0 ff 00:09:48.800 IDENTIFY DEVICE
e7 00 00 00 00 00 a0 08 00:09:47.900 FLUSH CACHE
ec 00 01 00 00 00 00 08 00:04:00.000 IDENTIFY DEVICE
ec 00 01 00 00 00 00 08 00:03:56.100 IDENTIFY DEVICE
Error 35 occurred at disk power-on lifetime: 9855 hours (410 days + 15 hours)
When the command that caused the error occurred, the device was active or idle.
After command completion occurred, registers were:
ER ST SC SN CL CH DH
84 51 00 00 00 00 00
Commands leading to the command that caused the error were:
CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name
ec 00 00 00 00 00 00 00 00:03:35.400 IDENTIFY DEVICE
ec 00 00 00 00 00 00 ff 00:03:29.200 IDENTIFY DEVICE
ec 00 00 00 00 00 00 ff 00:03:24.600 IDENTIFY DEVICE
ec 00 00 00 00 00 00 00 00:01:00.000 IDENTIFY DEVICE
ec 00 00 00 00 00 00 00 00:01:00.000 IDENTIFY DEVICE
Error 34 occurred at disk power-on lifetime: 9168 hours (382 days + 0 hours)
When the command that caused the error occurred, the device was active or idle.
After command completion occurred, registers were:
ER ST SC SN CL CH DH
84 51 00 00 00 00 00
Commands leading to the command that caused the error were:
CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name
ec 00 00 00 00 00 00 00 00:00:21.800 IDENTIFY DEVICE
00 00 00 00 00 00 00 ff 00:00:21.800 NOP [Abort queued commands]
00 00 00 00 00 00 00 ff 00:00:00.000 NOP [Abort queued commands]
Error 33 occurred at disk power-on lifetime: 9165 hours (381 days + 21 hours)
When the command that caused the error occurred, the device was active or idle.
After command completion occurred, registers were:
ER ST SC SN CL CH DH
84 51 00 00 00 00 a0
Commands leading to the command that caused the error were:
CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name
ec 00 00 00 00 00 a0 00 00:18:36.900 IDENTIFY DEVICE
a1 00 00 00 00 00 a0 00 00:18:36.900 IDENTIFY PACKET DEVICE
ec 00 00 00 00 00 a0 ff 00:18:36.900 IDENTIFY DEVICE
ec 00 00 00 00 00 a0 ff 00:18:00.100 IDENTIFY DEVICE
ec 00 00 00 00 00 a0 ff 00:17:49.100 IDENTIFY DEVICE
SMART Self-test log structure revision number 1
Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error
# 1 Extended offline Completed without error 00% 9899 -
# 2 Extended offline Completed without error 00% 9894 -
Selective Self-tests/Logging not supported
They are both SSDs, while the extra drive is an HDD. I don't think I've ever had an issue with the HDD showing up, so maybe this is an SSD-specific problem?
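For the next failed boot, the obvious thing to capture seems to be the kernel's view of the SATA links while a drive is missing. Roughly (device names and the grep pattern are just examples):

# After a boot where a drive is missing, look for link resets / failed IDENTIFY attempts
journalctl -b -k | grep -iE 'ata[0-9]|sd[a-c]'
# Or, without systemd's journal:
dmesg | grep -iE 'ata[0-9]|sd[a-c]'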

Similar Messages

  • USB flash drive use between PC and Mac

    I have a USB flash drive that has files on it previously created using a PC. It contains Excel, Word, and PowerPoint files. I have Office for Mac and put some of the files on the Mac, edited them, and then tried to save them or move them back onto the drive, but cannot. I get a variety of messages, including that the disk is full (it is not) and that it is write-protected, though USB flash drives cannot be write-protected. Are USB drives portable between Macs and PCs?

    Hey man,
    Save the files on the jump drive to a folder on the Mac. Keep it in a good spot because you're going to need it.
    Open Disk Utility on the Mac.
    Click on the correct USB drive in the sidebar, and then click Erase. You will have formatting options in a drop-down tab; pick "FAT" or "FAT 32", whichever it is, name it, and hit Erase.
    This will reformat the drive to a Windows-usable file format, and the Mac will work with it no problem.
    Once the reformat is complete, close Disk Utility, open the mounted USB drive found on the desktop, and drag the folder you created onto it, and you're good to go.
    Windows should have no problem reading it at all now.
    Hope this helps
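    If you'd rather use Terminal than Disk Utility, the same erase can be done with diskutil; this is only a sketch, and the disk identifier (disk2 here) is a guess, so check diskutil list first:
    # Find the USB stick's identifier, then erase it as FAT32 (the name must be upper case)
    diskutil list
    diskutil eraseDisk FAT32 USBSTICK MBRFormat /dev/disk2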

  • Different Drive Structure Between Principal and Mirror

    We have a 2-node (active/active) cluster with four failover instances installed. We use a dedicated drive for data files, one for log files, and one for tempdb per instance. Predicting that we will be adding more instances in the future, we realized that we would run out of drive letters. We intend to switch over to mounted volumes instead, so that we only use one drive letter per instance and will be able to add many more failover instances in the future. However, since we have over 150 databases across all current instances, reconfiguring the cluster drive setup will require a lot of time and work and is not in the immediate plans for our team (me :( ). I need to mirror as many databases as possible, based on priority, to a DR server in a geographically remote location. Since this server isn't set up with SQL Server yet, I was wondering if it would be a good idea to set up the DR server with mount volumes now and establish mirroring between our production environment and the DR server that way, so that at least half the battle is won. I am aware that the best practice for mirroring is to have the same drive structure between the principal and mirror server, but it is not a requirement. The only example issue I have seen with this setup is when you add a data file to a database and the DDL statement gets sent over to the mirror and fails, since the drive setup will not be the same. This is something I am willing to deal with, since adding data files to our databases is not something we do often, or at all. As new databases are added and mirrored, the WITH MOVE clause of the RESTORE statement can be used to place the files in their correct location on the mirror server.
    Other than this, are there any other things to consider for this kind of mirroring setup?
    Leroy G. Brown

    Hi Leroy G,
    To my knowledge, you can deploy database mirroring with different drive structures. The known issue is that if you add a file to the principal, the DDL command gets applied on the mirror and the database will go into a suspended state until you duplicate the path on the mirror.
    However, as you noted in your post, you can use the WITH MOVE option to create the database on the mirror server. For more details, please follow the steps in this
    blog.
    Meanwhile, about setting up database mirroring, I recommend you review this online
    article.
    Thanks,
    Lydia Zhang

  • Creating network between Mac and Linux

    Hello guys! I need to set up a network between my iMac and my Ubuntu Linux machine. The iMac is connected to the internet by AirPort and a Fon Wi-Fi router, which is connected to a router, which is connected to an ADSL modem. My PC is connected directly to the router. Both computers have access to the internet. Are there any easy solutions to share files between Mac and Linux? I don't need any web servers etc., I just want to share files.
    Another question related to this: why is my Mac's IP address like 192.168... while the Linux IP address is 84.250...?
    Thanks already!

    Though I'm not a newbie with computers, I'm really confused because I have never set up a network like this before.
    Not a problem, no worries... we all know something somebody else doesn't, and vice versa.
    My Linux box says that my DHCP IP is 193.210.18.18; is that related to this in any way?
    Yes, if the Mac had an IP of 193.210.18.x (but not 18 for x), then the connection would be simple, but I think we must have two devices passing out IPs. What is the Mac's IP again?
    http://www.helpdesk.umd.edu/topics/communication/ethernet/office/winxp/2882/
    Do you have any advice on where I could start looking for the right IP of my Linux box?
    http://www.webmasterforums.com/networks-clusters/1804-find-ip-address-my-linux-box.html
    I'm not sure if it's even configurable.
    http://tinyurl.com/2gug9s
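    For what it's worth, the address the Linux box actually has on the LAN (as opposed to whatever some website reports) can be checked locally; either of these works on Ubuntu, and ifconfig also works in the Mac's Terminal:
    # Show the interfaces and their assigned addresses; look for the inet line on eth0/wlan0
    ip addr show
    ifconfig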

  • IDocs have disappeared between ECC and PI

    Hi All,
    A couple of days ago 3 IDocs went missing between ECC and PI.
    That day, 293 IDocs of the same message type were sent to the same partner through PI. In PI we only found 290. The 3 missing IDocs were sent in the same second in the early afternoon. All IDocs before and after were processed correctly.
    The IDocs are unknown in IDX5 and in the Runtime Workbench on PI.
    The IDocs were not in SM58.
    I found them in WE05 with the correct status. The status update from 01 to 30 to 03 took place in the same second.
    25 seconds later, new IDocs of the same message type were created and processed successfully by PI.
    At the time the IDocs disappeared there were no problems with either ECC or PI, as far as we know.
    The interface has been live for more than 2 years, and in that time no changes have taken place.
    I have read the links below
    Finding missing IDocs
    Interface Troubleshooting Basics
    It's a mystery to me. Where have those idocs gone?
    Anybody any idea?
    Kind Regards
    Edmond Paulussen

    Hi Edmond,
    On the SAP PI system, have a look in transaction IDX5. Take the IDoc numbers that you could not find in PI and search for them in IDX5.
    If you cannot find those IDocs in IDX5, then they never reached PI.
    Regards,
    Jannus Botha

  • USB Flash Drive Compatibility between Mac and PC?

    Hi,
    I own an iMac (2 GHz Intel Core 2 Duo, 1 GB 667 MHz DDR2 SDRAM) and just got a new job where I'm using a PC. At work they all use flash drives to transfer data back and forth between computers. Would a flash drive (let's say a Kensington, since that's the brand I've seen the most at work) that works in a PC at work be compatible with my iMac at home? I know Kensington makes them compatible with Mac and PC, but I'm just wondering how the data would transfer onto my Mac. Is it readable, and does it depend on what I'm transferring over?
    Message was edited by: spiralgirl

    "The guy there said they haven't yet perfected the flash drives to switch between Mac and PC and work well, so I was a little surprised at this."
    That is pretty much a standard reply you get from someone who doesn't know anything about Macs. I've used those things for years without any problems, but I have been using my iPod as a disk lately because I always have it with me. It's also one less thing I have to buy and carry around.

  • IndexOf - difference between Win and Linux encoding

    Hello folks, wondering if someone could put me on the right track over this little problem with porting a java app to Linux...
    I have a nice little program, developed on (the latest) JDK under Windows, which reads a custom file format, locates the second occurrence of the substring 'PNG', ignores everything before the character preceding this 'PNG' (hence the -1 below), and saves the remainder, which is now a bog-standard PNG image. The first 'PNG' substring always occurs within the first 50 bytes (hence the 50 below) and the second around the 2 kB mark. Here's the line that finds the location of the second 'PNG' in the file loaded into strFileContent:
    location = strFileContent.indexOf( "PNG", 50 ) - 1;
    All is well when compiled and run on Windows: say, file 'test1.xyz' produces a value for location of 2076 and saves a nice PNG called 'test1.png'.
    When I haul it over to Linux (Ubuntu 9.04), lo, location comes out as 1964 for the same file, and of course the file is no longer a PNG because there are an extra 112 bytes on the front end. Running the Windows compile of the code or a fresh Linux compile makes no difference.
    I suspect Windows and Linux Java count line endings or some such differently, or perhaps I have to check an encoding. I'd appreciate any pointers on correcting this to work on both platforms (ultimately I'm trying to appease a Mac user, but don't have a Mac to play with at the moment).
    Cheers,
    K.
    Ken

    phaethon2008 wrote:
    I suspect Windows and Linux Java count line endings or some such differently, or perhaps I have to check an encoding. I'd appreciate any pointers on correcting this to work on both platforms (ultimately I'm trying to appease a Mac user, but don't have a Mac to play with at the moment).
    The immediate cause of your problem is probably that Windows uses an 8-bit encoding as the default (probably some ISO-8859-* variant or the Windows bastardization of it), while Ubuntu uses UTF-8, which has a varying number of bytes per character.
    The much more important underlying problem is that you're trying to treat binary data as if it were text. A PNG image is not text. Handling binary data in Strings (or char[]) is a sure way to invite disaster.
    You must convert your code to handle InputStream/OutputStream/byte[] instead of Reader/Writer/String/char[].

  • Can you share an external disk between OSX and Linux?

    I am using both Linux and Mac OS X computers at work. I have found a need in some cases for backing up very large files and/or moving them between systems that are not necessarily on the same network, at least not at the same time.
    For this purpose I wanted a big external disk that could be mounted on a Linux system as well as a Mac. To my dismay, there seems to be no common modern journalling file system between these two, well, dialects of Unix.
    I am not going to use an old Windows file system such as FAT32, and the only possibility so far has been Apple's HFS+ with the journal turned off (as Linux does not support writing to a journalled HFS+ file system). Darwin does not support any of the file systems that I regularly use on Linux.
    Perhaps I am mistaken - is there really nothing good and useful out there?
    /T

    I was under the impression that having a journal would be safer, not just for the boot drive. Any volume that is not properly unmounted before being removed (or before the computer is turned off) might be left in an inconsistent state, since data might have been saved to buffers only, not to the disk proper.
    To me safer would be better. Apple has added journaling to HFS+ which is good, and there are several journaling file systems for Linux. Nothing in common. Not even with Windows. I guess I will have to use HFS+ with the journal turned off, but it seems suboptimal to me.
    /T
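    In case it helps anyone else going this route, the journal can be toggled from the Mac side and the unjournalled volume then mounts read/write on Linux with the hfsplus driver. A sketch only; the volume name and device are examples:
    # On the Mac: turn the journal off (enableJournal turns it back on later)
    diskutil disableJournal /Volumes/BigDisk
    # On Linux: mount the HFS+ volume read/write (rw only works with the journal off)
    sudo mount -t hfsplus -o rw /dev/sdb1 /mnt/bigdisk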

  • Difference in data transfer rates between winXP and Linux server?

    Hello all,
    I am using a WinXP laptop to act as my server (all USB external hard drives are connected to it), but the data transfer rates can be really slow. Is Linux faster in that regard? Can a Linux-based server provide faster data transfer rates?
    Thanks for any help.
    Bmora96

    Linux cannot make hardware go any faster - so if WinXP and its drivers are making optimal use of those USB drives and the USB data transfer pipe, Linux will not make it faster. (but installing Linux and going Tux are always excellent ideas that need no real reason either ;-) )
    The real question you should be asking is whether using a notebook in a server role is a wise thing to do.
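    One way to see whether the bottleneck is the drives and the USB bus rather than the OS is to measure raw sequential read speed on the same hardware; a rough sketch, with the device name as an example and hdparm needing root:
    # Buffered sequential read speed straight from the device
    sudo hdparm -t /dev/sdb
    # Read ~1 GB raw and check the MB/s figure dd reports at the end
    sudo dd if=/dev/sdb of=/dev/null bs=1M count=1024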

  • External Hard Drive Usage between Mac and PC

    I have a WD Passport SE 1 terabyte hard drive that I have backed up my PC (Windows XP) files on. I want to transfer files onto my Macbook Pro (OS X) but it comes up as read only. I've been reading a lot of different forums and stories about not being able to write and how to overcome that but am thoroughly confused.
    I use Mac and PC equally as much and it's important for me to be able to use the External HD between both platforms because both computers are not in the same location at all times for a home network type situation.
    I like the security of NTFS, but I don't want to do anything (or add any programs) that will make either the Mac or the PC unstable and unreliable.
    What do you recommend I use to be able to read/write from both platforms that will be stable?
    Why doesn't Apple/Microsoft get together and make a format compatible for both OS's?
    Is it better to use NTFS or HFS+? What are the advantages/disadvantages of both?
    I'm not very computer savvy on either machine, so I need things dumbed down for me to understand. My mom is also in the same situation as I am, so you would greatly be helping both of us out at the same time. Thanks in advance.

    You have 3 basic options to work with. Both Mac and Windows support FAT32 drives with full read/write capabilities. This solution is all many need. There are limitations to this, and the big one is the max size for a single file: using FAT32, a single file has a max size of 4GB. As I said, for many this is fine, and they have a simple solution supported out of the box on both OSes.
    The second option is to format your external drive in HFS+ (Mac format). Your Mac will have full read/write access, and your PC will not have any access unless you have the read-only driver from Boot Camp loaded, or you buy a commercial driver like the ones from Paragon or MacDrive. Many people will not consider this option unless they work primarily with Macs and only occasionally with a PC.
    The third option is to format the external drive as NTFS. Out of the box, Mac OS X provides read-only support for NTFS drives, and there are several choices for free or commercial drivers to give you full read/write access to the NTFS drive. This option seems to be used the most by people who often work with multiple PCs and sometimes with a Mac or two. I have not seen any of these NTFS drivers make a Mac unstable, but some of them are quite slow. In my house, we have about 10 PCs and one Mac. It is more cost effective to put drivers on the Mac vs on the 10 PCs (at $40-80 per PC).
    At this point the recommendation depends upon your environment. If you are mostly around Macs and only a small number of PCs, consider using a Mac format for the external drives and buying drivers for the few PCs that need to access the drive(s). If you are more like me, where you interact with more PCs than Macs, consider using NTFS and getting a free or commercial NTFS driver for your Mac(s). Also keep in mind any computer that may need to access the external drive(s): if you need a family member to access it, will they need drivers or not? Once again, with my world being about 90% PCs and 10% (or less) Macs, having the majority of the computers have full access to the drive out of the box is the "better" choice, especially considering the other computers (the Macs) can still read the information, just not write to the drives without a driver.
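    If you want to see the FAT32 file-size cap for yourself, try writing a file just over 4GB to a FAT32-formatted volume; the copy fails with a "file too large" error no matter how much free space there is. A quick sketch from the Mac's Terminal, with the volume name as an example:
    # Attempt to write a ~5 GB file to a FAT32 volume; it stops at the 4 GB limit
    dd if=/dev/zero of=/Volumes/PASSPORT/test.bin bs=1m count=5000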

  • Driver clash between Garageband and Pro Tools

    I recently installed Pro Tools 7.3 LE on my MacBook to assist me in my aspirations to become a rock 'n' roll superstar. I'm a DJ as well and have been using GarageBand for my programs. But after installing Pro Tools, every time I open GarageBand it gives me this warning:
    "GarageBand has detected a possible conflict between one or more third party MIDI or audio drivers."
    "Be sure to install the latest drivers for all audio and MIDI equipment connected to your computer. For instructions on removing older drivers, consult the manufacturers' documentation."
    After clicking "Ok" a couple of times I get this warning:
    "Mac OS X MIDI Services not available."
    What do I do? Is there some kind of driver I should (un)install?

    I've had this issue ever since I upgraded to Leopard. I was running Pro Tools LE .3 and GarageBand '08, but GarageBand just wouldn't cooperate and kept giving me the third-party driver conflict error. I dug through all my library folders, deleted anything connected to a third-party MIDI driver (Line 6, M-Audio, intentionally shying away from PT hoping I could salvage it), and still no luck. Eventually I punted and uninstalled Pro Tools, and bada-boom, the error is gone and GarageBand runs fine.
    I'm pretty sure now that Digidesign is in conflict with Leopard and GB 08, but they're not coming clean about it yet:
    http://www.digidesign.com/index.cfm?langid=100&navid=48&itemid=28065
    I hope this helps

  • Why does my external drive disappear on restart, so that I have to keep unplugging it and plugging it back in?

    I just purchased an iMac. I plugged in my external USB drive with all my data and images on it, and it shows up on the desktop. It opens and I can see all my files. If I restart the computer for any reason, the drive icon disappears from the desktop and the Finder. I have to unplug it and plug it back in before it can be seen.

    You may try an SMC reset; maybe do it 2-3 times.
    SMC RESET
    Shut down the computer.
    Unplug the computer's power cord and all peripherals.
    Press and hold the power button for 5 seconds.
    Release the power button.
    Attach the computer's power cable.
    Press the power button to turn on the computer.

  • PCNS between AD and Linux based LDAP

    Is it possible to sync passwords between an AD domain and a Linux-based LDAP directory?
    All accounts in LDAP are present in AD. The initial flow would preferably be:
    LDAP password -> AD
    and from there onwards, all password changes in AD should flow back to LDAP.
    Thanks

    Hi,
    I would be interested to see if the password sync can happen, or you may want to look for one of the 3rd-party tools which can do that. As far as I know, I have never worked with a password sync service between the two. You may like to read these articles:
    http://technet.microsoft.com/en-us/magazine/2008.12.linux.aspx
    http://www.chriscowley.me.uk/blog/2013/12/16/integrating-rhel-with-active-directory/
    Best of luck, cheers
    Thanks
    Inderjit

  • OMQ between OpenVMS and Linux

    Are there any issues with having communications between OMQ running on OpenVMS and OMQ running on Linux? Is there any documentation on this product for both OS's that I can download and review?
    Thanks

    We have been successfully using OMQ between Alphas running OpenVMS 7.3-2 and Redhat Linux using v5 of OMQ for several years now. You should be able to use the manuals for Unix to set it up on Linux. As long as you have the initialization files configured on both systems to recognize each other there shouldn't be anything unusual as compared to communicating between 2 OpenVMS nodes.

  • DBLoad utility: (huge) difference between Windows and Linux

    Hello
    I have a backfile of 55M entries to load. I have prepared it under Linux (Dual Core machine + 4GB - Mandriva distribution - java 1.6). When I run DbLoad, here is the (partial) log:
    java -Xmx2048m -jar /home/pd51444/jNplMatchBackfile_Main_0.2/lib/je-3.3.69.jar DbLoad -h . -s group.db -f group.db.txt -v
    loaded 2817812 records 63275 ms - % completed: 5
    loaded 5624893 records 141023 ms - % completed: 10
    loaded 8462602 records 962019 ms - % completed: 15
    loaded 11285385 records 2401566 ms - % completed: 20
    loaded 14094928 records 786746 ms - % completed: 25
    loaded 16914275 records 8965457 ms - % completed: 30
    loaded 19741557 records 15766560 ms - % completed: 35
    loaded 22567310 records 2226015 ms - % completed: 40
    loaded 25376363 records 19662455 ms - % completed: 45
    Then I copied the exact same file to my Windows laptop (dual core + 2GB - XP - java 1.6), and DbLoad goes much faster:
    C:\nplmatch\db>java -Xmx1024m -jar ..\jar\je-3.3.69.jar DbLoad -h . -s group.db -f group.db.txt -v
    Load start: Thu Oct 02 10:33:23 CEST 2008
    loaded 2817812 records 59876 ms - % completed: 5
    loaded 5624893 records 69283 ms - % completed: 10
    loaded 8462602 records 77470 ms - % completed: 15
    loaded 11285385 records 69688 ms - % completed: 20
    loaded 14094928 records 62716 ms - % completed: 25
    loaded 16914275 records 59122 ms - % completed: 30
    loaded 19741557 records 63200 ms - % completed: 35
    loaded 22567310 records 58654 ms - % completed: 40
    loaded 25376363 records 61482 ms - % completed: 45
    loaded 28197663 records 58889 ms - % completed: 50
    loaded 31019453 records 55937 ms - % completed: 55
    loaded 33839878 records 62045 ms - % completed: 60
    loaded 36664839 records 65749 ms - % completed: 65
    loaded 39498035 records 100718 ms - % completed: 70
    loaded 42302599 records 99733 ms - % completed: 75
    loaded 45125268 records 96000 ms - % completed: 80
    loaded 47947180 records 92749 ms - % completed: 85
    loaded 50755655 records 85485 ms - % completed: 90
    loaded 53578015 records 96240 ms - % completed: 95
    Load end: Thu Oct 02 10:57:36 CEST 2008
    Also I use the same je.properties file on both platforms.
    Any idea where this performance problem comes from?
    Thanks in advance
    Best regards
    Philippe

    Hello Philippe,
    Nothing jumps out at me, but you might try the following:
    - Check the status of the disk write caches. On Linux the cache may be disabled, while on Windows it may be enabled.
    - Does the Windows machine have an SSD?
    - Run with -verbose:gc to see if the Linux run is being held up by full GCs.
    - Use top or some other utility to see if something else is running on the Linux box.
    - Take some random Ctrl-\ thread dumps to get some stack traces and see what is going on when things get slow.
    - Check on the JVM ergonomics. You may be getting a server or client JVM without knowing. Force it one way or the other with -server or -client.
    Here is a (future) FAQ entry regarding disk write caches (poorly formatted at the moment):
    During my system testing I pulled the power cord on my server to make the ultimate test of JE's durability claims. I am using commitSync() for my transactions, but some of the data that JE said it had committed was not on disk when the system came back up. What gives?
    Quoting the Berkeley DB Reference Guide:
    Many disk drives contain onboard caches. Some of these
    drives include battery-backup or other functionality that guarantees that all
    cached data will be completely written if the power fails. These drives can
    offer substantial performance improvements over drives without caching support.
    However, some caching drives rely on capacitors or other mechanisms that
    guarantee only that the write of the current sector will complete. These
    drives can endanger your database and potentially cause corruption of your
    data.
    To avoid losing your data, make sure the caching on your disk
    drives is properly configured so the drive will never report that data has
    been written unless the data is guaranteed to be written in the face of a
    power failure. Many times, this means that write-caching on the disk drive
    must be disabled.
    Some operating systems enable the disk write cache by default.
    If you need true durability in the face of a power failure, then you should
    verify that the disk write cache is disabled or that you have some alternative
    means of ensuring durability (e.g. nvram, battery backup, an Uninterruptible
    Power Supply (UPS), Solid State Disk (SSD), etc.) Some disk drives may
    actually require changing hardware jumpers to enable/disable the write cache.
    You can check the status of the write cache using the
    hdparm utility on Linux and the format utility on Solaris.
    On Windows, use the Windows Explorer. Right click on
    the disk drive that you want to check, select Properties,
    click on the Hardware tab, select the desired disk, click
    on the Properties button, click on
    Policies and verify the cache setting with the check box.
    On Windows Server 2003 you generally disable the write caching
    from within the RAID controller software (OEM specific).
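    To make the hdparm check above concrete, something like the following shows (and, if you are experimenting, changes) the write-cache setting on Linux, and the GC and VM suggestions just mean adding the flags to the DbLoad invocation. The device name and jar path are examples:
    # Show the drive's current write-cache setting; -W1 enables it, -W0 disables it
    sudo hdparm -W /dev/sda
    sudo hdparm -W1 /dev/sda
    # Re-run the load with GC logging and an explicit server VM
    java -server -verbose:gc -Xmx2048m -jar je-3.3.69.jar DbLoad -h . -s group.db -f group.db.txt -v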
