Volume "Bin" Space

Can anyone tell me what I have to do to increase the "volume bin space"?
During Set-up I get the following pop-up window:
"Out of disk space. Disk space required for the setup exceeds available disk space.
Out of disk space – volume “bin”; required space: 360 kb; available space: 0 kb. Free some disk space and retry"
I started off with 13 GB free the first time I encountered this. I deleted some pictures and currently have 16 GB of disk space available.

"Out of disk space. Disk space required for the setup exceeds available disk space.
Out of disk space – volume “bin”; required space 360 kb; available space; 0 kb. Free some disk space and retry"
hi Bob!
I could swear i've seen this reported once before, but i've lost track of that thread.
hmmm ... a preliminary question. are you getting an error number with that message? if so, which one is it?
in the meantime, let's try throwing the general advice on installation problems found in the document below at your issue:
http://docs.info.apple.com/article.html?artnum=93976
... and it would be a good idea to also turn off your antispyware during the installation. (some of those packages have been producing some unusual installation problems.)
keep us posted.
love, b

Similar Messages

  • Removing disk from a volume storage space 2012 r2

    Hi, I've got an issue; here's my situation. I'm using Storage Spaces on a Windows Server 2012 R2 for my backups. First I had fourteen 3 TB disks in a mirror, so there was 19 TB usable for data. Then I needed more space, so I added fourteen more disks and extended the virtual
    disk to 38 TB. But when I went into Windows management -> Disk Management to extend the partition to use the full 38 TB, I got the following error message:
    Virtual Disk Service error:
    The volume cannot be extended because the number of clusters will
    exceed the maximum number of clusters supported by the file system
    Then I realized that the cluster size was only 8K; reading on the web, I understood that I couldn't get more than 32 TB with it.
    Now I would like to remove all fourteen disks I just added from the pool and create a new pool with a 64K cluster size, so I won't have to worry about my next size upgrade.
    I'm afraid that even though Windows shows the added space as unallocated, the storage pool is now using the disks. I can't afford to lose any data.
    How can I proceed?
    I saw procedures on this site, but they seem to be for removing one disk; since I have fourteen disks fully integrated in my pool, I'm a bit nervous about following a one-disk procedure.
    thank you

    Hi,
    As you have already added the 14 new disks into your existing pool, you will need to check whether the virtual disk has already been extended onto these new physical disks.
    If it hasn't, you should be able to remove them from the pool; if the virtual disk has already been extended onto the new disks, Windows will ask you to replace them with other hard disks first.
    If some of the hard disks can be freed up, you could migrate data onto them (no need to create a storage pool), delete the existing pool, recreate it, and migrate the files back.
    If you have any feedback on our support, please send to [email protected]

  • Logical volume free space

    I'm hardly able to work due to insufficient free space, although there's over 1 TB shown as available. How do I fix this? Here's the data:
    Available:          1.03 TB (1,031,502,872,576 bytes)
      Capacity:          1.11 TB (1,111,826,497,536 bytes)
      Mount Point:          /
      File System:          Journaled HFS+
      Writable:          Yes
      Ignore Ownership:          No
      BSD Name:          disk2
      Volume UUID:          2674000A-FC2A-3020-9E9F-864C1F981EF6
      Logical Volume:
      Revertible:          No
      Encrypted:          No
      LV UUID:          97DBC1BE-78AA-4CF2-980B-E2E73E10C769
      Logical Volume Group:
      Name:          Macintosh HD
      Size:          1.12 TB (1,120,333,979,648 bytes)
      Free Space:          115 KB (114,688 bytes)

    I got the same issue:
    Mount Point:          /
    Capacity:             120.1 GB (120,101,797,888 Bytes)
    Available:            16.71 GB (16,707,100,672 Bytes)
    Used:                 103.39 GB (103,394,697,216 Bytes)
    Format:               Logical Partition
    Owners Enabled:       Yes
    Name:                 Macintosh HD
    Type:                 Logical Volume Group
    Capacity:             120.47 GB (120,473,067,520 Bytes)
    Available:            18.9 MB (18,948,096 Bytes)
    Used:                 120.45 GB (120,454,119,424 Bytes)
    Disk Status:          Online
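    Two different free-space numbers are in play in both reports: the mounted volume's available space and the logical volume group's free space, and the LVG figure is normally tiny because the logical volume fills the group. To see how the Core Storage layers stack up on your own machine (a quick read-only check, assuming OS X 10.8 or later, where Core Storage ships):
    diskutil cs list     # every logical volume group, its size, and its free space
    diskutil list        # the plain partition-level view, for comparison
    Comparing the two views shows which layer a given free-space figure belongs to before you conclude the disk is full.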

  • Volume size and Capacity size Don't match

    Hello,
    I have a file share running Windows 2008 R2 with a dynamic disk that is 1670.78 GB, formatted NTFS. The problem is that if I look at the properties of the volume, it says the capacity is 987 GB. I'm not sure how to get the capacity up to the full size of the volume.
    The history is: we had a 2003 x64 file share that started life as a scratch place to store files, so it was RAID 0; according to the original specs for the project, that was what they wanted. As time passed, the share became permanent storage and losing it was actually a problem. At that point I wanted to make it RAID 5. However, I was also hoping to keep the Shadow Copies so users could find previous versions of files, so I ended up creating an iSCSI target and mirroring to it through Windows, which made a full backup including the copies. I then broke the mirror, rebuilt the RAID, and re-mirrored to the new volume. However, because of going from RAID 0 to 5 I lost some space, so I had to resize the volume. I used a 2008 R2 machine to shrink the volume and then raised the capacity back up to the full size. However, the space available never actually grew.
    At that point I figured the problem was the machine still running 2003 R2 after having been touched by a 2008 R2 machine. So I scheduled an upgrade, and the server is now running 2008 R2; however, I am still missing 600 GB.
    I can't expand the volume at this point since there is no free space on the disk, and if I try to shrink the volume, I get the option of how much to shrink, then after clicking OK I get the error: "The parameter is incorrect." When I look in the Application Log, I see an error saying the volume wasn't defragmented because an error was encountered: "The parameter is incorrect. (0x80070057)"
    Please let me know any suggestions or if there is any additional information that would be helpful.
    Thank you,
    Eric

    Hello,
    Here is the chkdsk result:
    CHKDSK is verifying files (stage 1 of 3)...
      406672 file records processed.
    File verification completed.
      3368 large file records processed.
      0 bad file records processed.
      0 EA records processed.
      0 reparse records processed.
    CHKDSK is verifying indexes (stage 2 of 3)...
      469986 index entries processed.
    Index verification completed.
      0 unindexed files scanned.
      0 unindexed files recovered.
    CHKDSK is verifying security descriptors (stage 3 of 3)...
      406672 file SDs/SIDs processed.
    Security descriptor verification completed.
      31658 data files processed.
    CHKDSK is verifying Usn Journal...
      314844376 USN bytes processed.
    Usn Journal verification completed.
    CHKDSK discovered free space marked as allocated in the
    master file table (MFT) bitmap.
    Windows has made corrections to the file system.
    1035144191 KB total disk space.
     912448340 KB in 341209 files.
        165660 KB in 31659 indexes.
             0 KB in bad sectors.
        809607 KB in use by the system.
         65536 KB occupied by the log file.
     121720584 KB available on disk.
          4096 bytes in each allocation unit.
     258786047 total allocation units on disk.
      30430146 allocation units available on disk.
    Even though it reports correcting the free space, when I go look at the disk I see this:
    (This is the best command I've found to copy into here, if you know a better one please let me know)
    DISKPART> detail volume
      Disk ###  Status         Size     Free     Dyn  Gpt
    * Disk 1    Online         1670 GB      0 B   *
    Read-only              : No
    Hidden                 : No
    No Default Drive Letter: No
    Shadow Copy            : No
    Offline                : No
    BitLocker Encrypted    : No
    Installable            : No
    Volume Capacity        :  987 GB
    Volume Free Space      :  116 GB
    I don't think this is just the difference between size and size on disk; I think for some reason my MFT or something is screwed up.
    Please let me know what you would like me to try next.
    Thanks,
    Eric

  • Startup disk contains replica of itself inside invisible Volume folder

    Hello. Bizarrely, my startup disk contains an image of itself, naturally diminishing the available disk space. It sits inside an invisible folder named "Volumes", at Volumes/[name of the startup disk]. Inside this folder are images (or folders) of all the volumes on the machine, and for each of them Finder claims a size that matches the size of the data on it. But the folder "Volumes" itself is claimed to be almost the size of all the data on the startup disk except the Volumes folder itself. So Users, Applications, Library and System together are almost the same size as this invisible Volumes folder. The Caches folders are missing inside the Volumes folder. And what Finder counts as used space on the startup disk is all the files plus the size of that Volumes folder.
    I don't think that's the way it should be, is it?

    Hello H-C-R. No, I'm not. It may very well be that /Volumes folder. It doesn't display a / in the folder name, but maybe that's not what you mean. Anyway, it is, like I said, an invisible folder entitled "Volumes", and it contains what looks like all connected volumes, and their respective size counts display the size of the data on each. All those folders have icons like the volumes they refer to (no indication of them being aliases), except the startup volume, which is the volume that ominous invisible Volumes folder sits on, too. That one is present twice: once as a folder bearing the name of the volume, but not its icon, and once as an alias bearing the name of the volume with " 1" added to it, and the volume's icon. And the invisible Volumes folder on the startup disk has a size count, and that size count adds to the Used count of that startup disk. And since this is a long explanation for something visible at a glance, I'll give you a picture; hope it will be legible. Nice avatar icon btw (yours).
    Hello, Mr. Nanita, too. Does it contain only that alias?
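    For anyone chasing the same thing: you can check from Terminal whether /Volumes holds leftover folders rather than real mount points. A minimal, read-only sketch:
    ls -la /Volumes      # list everything in the normally-invisible /Volumes directory
    mount                # anything in /Volumes that is absent here is a plain folder, not a mounted volume
    A stale folder there (often left behind by a backup tool that wrote to an unmounted destination) is what usually produces this kind of phantom duplicate. Verify carefully before deleting anything, since the real mounted volumes live in the same directory.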

  • Creating Volumes - Pros & Cons?

    Just got a new 320GB external HD and am trying to decide how to set it up.
    I figure I'll create 2 volumes as backups for my OS 10.4.11 and Classic via
    Carbon Copy Cloner. That will leave about 250GB of space.
    This space will be used for things like a Graphics Studio for Photoshop projects,
    a Music Studio for recording and editing with Garage Band, a workshop for various
    writing projects and a storage space for movies and iTunes music.
    Would there be any advantage to separating these into volumes? My desktop is
    already cluttered with volume icons for my 2 internal HDs (80GB and 120GB).
    I feel I made a mistake creating so many volumes since it's a chore to re-size
    or remove an existing volume.
    So what is the advantage of volumizing HD space? Does it help to prevent fragmentation of data? And is that really a big problem?
    A related question: Does that app for reorganizing volumes - I think it's called
    Drive Genius - really work? I've heard some negative comments about it.
    Advice and suggestions would be appreciated.

    I have separate partitions on my internal & external backup hard drive. One partition is for OS 9.2.2 & apps, one partition is for OS 10.4.11 & apps, and a small bootable partition with diagnostic apps like Disk Warrior. All partitions are bootable.
    You need to check if your Mac can use all of the 320GB. You may be limited to 128GB. Look at these links.
    What Macs natively support large IDE drives? (over 128GB formatted)
    http://forums.xlr8yourmac.com/action.lasso?-database=faq.fp3&layout=FaqList&-response=answer.faq.lasso&-recordID=34188&-search
    How Big a Hard Drive Can I Put in My iMac, eMac, or Power Mac?
    http://lowendmac.com/macdan/05/1024.html
    Using 128 GB or Larger ATA Hard Drives
    http://support.apple.com/kb/HT2544
    The Power Mac Storage FAQ
    http://forums.macnn.com/65/power-mac-and-mac-pro/246391/the-power-mac-storage-faq/
    Possible Alternative - SpeedTools ATA Hi-Cap Support Driver: Allows the use of extended capacity ATA drives (drives greater than 128 Gigabytes in size) on older (Pre Mirrored Door) G4 and G3 Macintoshes running MacOS X versions 10.2 and later. Cost $24.95
    http://www.speedtools2.com/ATA6.html
    Possible Alternative 2 - Larger than 128GB drives can be used by adding a PCI ATA/100 or ATA/133 controller card, one which is 48-bit LBA compliant; or by adding a PCI SATA controller card and using SATA drives.
     Cheers, Tom

  • Basics of the recycle bin

    Dear Experts,
    Can you please clarify this? As you know, in Oracle 10g there is a new feature, the recycle bin. In my database I have a table more than 50 GB in size; if I drop that table, it can be restored from the recycle bin. Can you please explain how the recycle bin manages huge table objects? I searched but couldn't find the recycle bin's functionality described: where do we have to assign the space for holding such huge tables in the recycle bin? In case the recycle bin space is exceeded, how can we flash back the dropped tables? If it sits in the flash recovery area, and db_recovery_file_dest_size=10G is low compared to the actual table size, how will it work? Please explain. Apologies if this is a basic question; thanks in advance.

    Hi,
    What DROP does is really just rename the table in the database; it does not delete the data from the tablespace.
    Given that, "undropping" a table is just giving the table back its old name. If you start adding objects into the tablespace that contains a dropped table, Oracle will sooner or later start allocating extents that belong to the dropped table. Once this happens, you're no longer able to "undrop" the table.
    More specifically:
    Oracle will leave objects in the recyclebin until the tablespace runs out of space, or until you hit your user quota on the tablespace. At that point, Oracle purges the objects one at a time, starting with the ones dropped the longest time ago, until there is enough space for the current operation. If the tablespace data files are AUTOEXTEND ON, Oracle will purge recyclebin objects before it autoextends a datafile.
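    A minimal sqlplus session illustrating this behaviour (myuser/mypassword and my_big_table are placeholders; the columns come from the 10g user_recyclebin view):
    sqlplus myuser/mypassword <<'EOF'
    -- what is sitting in my recycle bin, and how many blocks it still holds
    SELECT original_name, object_name, droptime, space FROM user_recyclebin;
    -- restore a dropped table while its extents are still intact
    FLASHBACK TABLE my_big_table TO BEFORE DROP;
    -- or hand the space back to the tablespace explicitly
    PURGE TABLE my_big_table;
    EOF
    Note that the recycle bin lives in the table's own tablespace, not in the flash recovery area, so db_recovery_file_dest_size plays no role here.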
    More info can be found here:
    http://www.orafaq.com/node/968
    HtH
    Johan
    Edited by: Johan Nilsson on Jan 20, 2012 1:20 AM

  • Accidental Volume Name Change

    Today we had an unwelcome name change to our RAID array. The name was changed when unwelcome visitors fingered the keyboard and replaced the name of the volume with spaces. This immediately affected all managed computers attempting to access the mounted share points or their profiles upon login.
    The name on the desktop has been restored; however, attempting to review the share points produces the following message: The selected share point could not be found.
    One or more items have been moved, renamed or deleted. Restore the names and locations of the items in the path.
    When examining the name of the mounted volume with Disk Utility, the mount point appears as VAULT_1 while the Finder displays VAULT.
    Any suggestions?

    Yes. A disk drive mounts under /Volumes/, and WGM creates share points there. So for example, if you have an HD called Store, it will be
    /Volumes/Store
    and the share point will be Store. But if the name is changed to Stores, the original share point folder Store will still be there in Volumes, so you will get
    /Volumes/Store
    /Volumes/Stores
    If you rename Stores back to Store, it will appear on the desktop as Store, but because there is already a /Volumes/Store path it will be renamed
    /Volumes/Store_1, i.e. the second HD with that name.
    All shares will still point to /Volumes/Store.
    Now to fix this you will need to do the following:
    remove the folder called /Volumes/Store, which is the old share point path and is now a folder, not a drive path;
    rename your HD to Store and reboot.
    Store will then be correctly mounted as
    /Volumes/Store
    and all share points should now work again.
    I've done this a few times myself.
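    In shell terms, the fix above looks roughly like this (using the hypothetical Store name from the example; confirm with ls and mount before removing anything):
    ls -la /Volumes/Store                          # the old share point path, now an ordinary folder
    sudo rm -rf /Volumes/Store                     # remove the stale folder
    sudo diskutil rename /Volumes/Store_1 Store    # give the volume its old name back, then reboot
    After the reboot the drive should mount at /Volumes/Store again and the share points resolve.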

  • Migrating Users Quotas on NSS Volumes

    Hi !
    We have a NetWare 6.5 SP7 server with around 45,000 users on it. We have an NSS volume called USERS where most of the users have quota restrictions.
    How can I migrate all the users' quota restrictions from a NetWare server to an OES 2 Linux server?
    I tried many tools like SCMT, migfiles, tcnvlnx.nlm, ... Everything migrates correctly EXCEPT the users' quotas on the NSS volumes.
    Any ideas?
    Eric

    Marcel Cox wrote:
    > Are your users lum enabled?
    Quotas don't need LUM-enabled users on OES2 Linux.
    But if Eric is referring to volume space restrictions and is migrating just folders, then it won't work. Directory restrictions should migrate; I haven't tried it, because we only use volume-based restrictions.
    6.1.3 User Space Restrictions Only Migrated in Full Volume Copies
    In consolidation projects where individual folders are dragged and dropped, or where a volume is copied to a folder on the destination server, user space restrictions on the source volume are not migrated to the destination volume.
    If your consolidation project is set up to copy an entire volume, with the contents of the source volume being copied to the root of the destination volume, user space restrictions are migrated to the destination volume. If user space restrictions already exist on the destination volume, they are overwritten with the restrictions migrated from the source volume.
    -sk

  • AppleRAID enable: The target disk must be an existing volume, not a whole disk

    I have a Samsung 750 which failed to spin up a time or two in the past year. Scrounging, I found a Seagate 750. Seeing as I have plenty of drive bays, I think the right thing to do is bind the Samsung and Seagate into a mirrored RAID. This volume is used for Time Machine.
    I did something similar with my Mac Mini Snow Leopard Server when it was new, without wiping the original filesystem. But I can't get there from here today following similar instructions.
    The Samsung is disk2, Seagate is disk1. In Terminal.app I start with:
    % diskutil appleRAID enable mirror /dev/disk2
    The target disk must be an existing volume, not a whole disk
    Blah. Perhaps this helps? It is essentially the same as the 320 GB system drive as formatted by Apple:
    % diskutil info disk2
       Device Identifier:        disk2
       Device Node:              /dev/disk2
       Part Of Whole:            disk2
       Device / Media Name:      SAMSUNG HD753LJ Media
       Volume Name:             
       Escaped with Unicode:    
       Mounted:                  No
       File System:              None
       Partition Type:           GUID_partition_scheme
       Bootable:                 Not bootable
       Media Type:               Generic
       Protocol:                 SATA
       SMART Status:             Verified
       Total Size:               750.2 GB (750156374016 Bytes) (exactly 1465149168 512-Byte-Blocks)
       Volume Free Space:        Not Applicable
       Read-Only Media:          No
       Read-Only Volume:         Not applicable (no filesystem)
       Ejectable:                No
       Whole:                    Yes
       Internal:                 Yes
       OS 9 Drivers:             No
       Low Level Format:         Not Supported
       Device Location:          "Bay 2"

    This message, and threads like yours, came up daily for over two years; now it's only a couple of times a week!!
    Is it so hard to follow through? You were supposed to have a backup already (clones are best), then erase/format and restore.
    Then partition.
    Some have been able to use Disk Utility booted from the OS X DVD or another drive, and repair the drive.
    You have to use Boot Camp Assistant (99.9% anyway) to create and achieve a proper Windows Master Boot Record partition.
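    Back on the literal error: diskutil appleRAID enable wants a mounted volume, not a raw whole disk, and the diskutil info output above shows disk2 has no filesystem at all. One way through, as a sketch only (this erases disk2, so back up first; the volume name TMMirror and the disk1 step are assumptions):
    diskutil partitionDisk disk2 GPT JHFS+ TMMirror 100%    # GPT map plus one JHFS+ volume on the Samsung
    diskutil appleRAID enable mirror disk2s2                # enable mirroring on the resulting volume
    diskutil appleRAID add member disk1 /Volumes/TMMirror   # add the Seagate to the set (destroys its contents too)
    Once both members finish rebuilding, point Time Machine at the mirrored volume.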

  • 11.1.2 essbase upgrade not recognizing application files location

    Hi all
    I am upgrading an Essbase database (Planning) from 11.1.1.3 to 11.1.2.2.
    I am trying to configure 2 Essbase servers (the original 11.1.1.3 and the new 11.1.2) running on 1 physical server, using different ports and filesystems.
    Following all the steps in the Deployment Guide under Preparing Essbase Data for Upgrading, I ran all the steps in the manual file transfer instructions to copy from the source machine to the target machine.
    I have started the steps for rehosting the Essbase applications:
    - Completed enable lookup by cluster name
    - Completed the server-to-cluster script on the HSS box (updateEssbaseServer)
    When I went to run EssbaseUpdateEssbaseServer, one of the Planning databases did not update and I got this error:
    [Mon May 21 14:03:50 2012]cphypd.sherwin.com///admin/Error(1002097)
    Unable to load database [PlanFcst]
    Error loading application = 1002097, continue...
    Looking in the logs, I found that it was trying to go to the prior location of the 11.1.1.3 instance and not the new location of the 11.1.2 instance.
    Why is it not recognizing the new datafile locations?

    I saw this in the guide.
    I followed these instructions and specified a different directory when running the staging tool, but the script was empty. I do not believe it should be; it should contain mapping info to update essbase.sec, which it did not.
    On the Configure Essbase Server page, for Full path to application location (ARBORPATH), specify the location of the existing or replicated Essbase data.
    Note:
    If you replicated data to a new machine, and if you selected Differently-named disk volumes, table spaces, or ARBORPATH on source and target or Consolidated disk volumes or tablespaces on target by exporting data during data replication, you must run a script immediately after configuring Essbase to update the Essbase security file to reflect the disk volumes on the upgraded system. The Essbase Staging Tool provides a script to update the settings in the security file (essbase.sec):
    Start the Essbase Server and EPM System services. Navigate to ARBORPATH/app on the machine that is hosting the upgraded Essbase Server and run the following script using MaxL:
    %ARBORPATH%/bin/startMaxl.bat -u userName editagtsec.msh
    where userName is the Administrator user name to connect to the upgraded Essbase server. The script prompts you to enter the password.
    Note:
    editagtsec.msh could be empty in some scenarios. For example, when you launched the Staging Tool, the Staging Tool reports on existing volumes. If no volumes are listed, editagtsec.msh is empty.

  • Build Installer LV8.0 - is this real??

    Hi all
    I recently received LV8.0. After playing around and testing some new features, I wanted to start bringing all test rigs that run under LV up to LV8.0. Therefore I chose one of them, made a LV project, modified the source code, and created an application build script and an installer build script. I ran the application build script without any problems. OK, everything's fine up to now. But when I wanted to create an installer, it obviously just created a lot of "nonsense".
    I checked some additional installers (DataSocket, LV8.0 runtime engine, MAX 4.0, FieldPoint, Scope and VISA) to make just one installer, which would be very nice to handle. When checking NI DataSocket, I saw in the distribution title on the right side (see attachment) LabVIEW 8.0 Real-Time Module. Why???? There is no need to include the RT module, as I don't use it.
    I thought "OK, maybe it's just a wrong string being displayed; let's create the installer" and ran the build script. While running, it asked me for the device driver CDs (OK, could be logical) and out of the blue it also asked me for the CD of the LV Real-Time module. Well, I inserted it and continued the build.
    In the end I got a directory structure (Volume/bin/lotsOf"pXX") with a size of around 650 MB, for a simple application of approximately 10 MB.
    When I browse some of these "pXX" directories I discover subdirectories like "p14/LV711RTE" and "p15/LVBrokerAux71", and lots of different MSI files (for instance "NIRegistrationWizard.msi" - but why?).
    Is there anyone out there who can help me quickly?
    Thomas
    Using LV8.0
    Don't be afraid to rate a good answer...
    Attachments:
    Installer.JPG 167 KB

    Thomas,
    Thanks. I too had been waiting for us to have the ability to build installers that include all the drivers and related software you need. A lot of work went into it by a team of people here at NI; their work is now used in LabVIEW, CVI and TestStand for generating such installers. I'll pass along your compliments to the people involved.
    So you unchecked DataSocket yet your DataSocket communications are still working? I think that might be due to DataSocket being included through one of the other installers. When you check an item we'll include all the dependencies that you need. Most of the time those dependencies don't show up in the list of additional installers, because they are lower level drivers and software you don't interact with directly and don't need to know about. Some of the items that are dependencies for other installers do appear in the list such as DataSocket, LabVIEW Run-Time Engine and MAX. FieldPoint for instance is dependent on MAX and will include it if you are including FieldPoint. If you check MAX too, the Installer Builder will only include one copy of MAX. I'm not sure which of the other items is dependent on DataSocket but it could be getting included that way. Another possibility is some older LabVIEW Run-Time Engines are being included in your installer because of MAX and they include an older version of DataSocket as part of their installer not as a dependency so it could get included that way, but that would be an older version of DataSocket and not what you are using on your development machine. Since you know you are using DataSocket I recommend checking it, so you know the version you are using will get included.
    When you uninstalled LabVIEW Real-Time, the core portion of LabVIEW Real-Time was removed, however any dependencies it had that are also a dependency of another NI Installer would not be removed. As I mentioned earlier LabVIEW 8.0 installed DataSocket and the Real-Time Installer upgraded it. The installers have the necessary logic to know that just because RT installed a newer DataSocket we cannot remove it with RT because LabVIEW depends on it as well. When you ran the RT installer the progress bar probably showed installing part X of Y, and when you uninstalled it there was also a progress bar that showed removing part X of Z. Where Z is a smaller number than Y because some dependencies of other products were updated and they can't be removed until all the installers that need them are removed.
    So the next question is "I want to rebuild this installer without having to put the CD in. How can I do that?" The ideal answer would be for us to have a check box on the dialog when you are prompted for the distribution that says something about copying the necessary files to your hard drive, and you could choose to uncheck it. We didn't get around to putting that in for LabVIEW 8.0, but if you happen to use CVI you'll notice it has such a check box, and as I said we use the same installer-building technology, so it wouldn't be unreasonable to see that in the future for LabVIEW. But what can you do right now?
    One option would be to always copy your CDs over to the hard drive before you install them and leave the installers there; that way the last place they were installed from is somewhere on your hard drive that can be found without user interaction. We don't do that by default, since copying the LabVIEW CD(s) plus the driver CDs would really use up your free space. This is messy, though, because you have to uninstall everything and reinstall it.
    Another option is to copy the distribution you are being prompted for (the entire disc) to your hard drive, and then in the Installer Builder select NI DataSocket and change the installer location path to the location you copied it to (the top-level folder that has a nidist.id file in it). Then press OK on the dialog; now the Installer Builder will look in that location for NI DataSocket until a newer version is installed. We have had some problems with this mechanism not getting all the dependencies from the new location, so it isn't a 100% solution, but I just tried it for DataSocket and it worked.
    The first option will catch 100% of the dependencies, whereas the second option unfortunately won't.
    Kennon

  • Xbmc-standalone won't start after update

    I updated and now the xbmc service won't start. When I run startx as my admin user, I can get XBMC to start up, but it won't start as 'xbmc' anymore. When I `su xbmc` and try to run `startx` or `xbmc-standalone` I get the same error. Not sure what's going on.
    orwell% sudo systemctl status xbmc -l
    xbmc.service - Starts instance of XBMC using xinit
    Loaded: loaded (/usr/lib/systemd/system/xbmc.service; enabled)
    Active: inactive (dead) since Mon 2014-03-03 19:00:23 PST; 3s ago
    Process: 2012 ExecStart=/usr/bin/xinit /usr/bin/dbus-launch /usr/bin/xbmc-standalone -l /run/lirc/lircd -- :0 -nolisten tcp (code=exited, status=0/SUCCESS)
    Main PID: 2012 (code=exited, status=0/SUCCESS)
    Mar 03 19:00:20 orwell xinit[2012]: Initializing built-in extension DRI2
    Mar 03 19:00:20 orwell xinit[2012]: Loading extension GLX
    Mar 03 19:00:20 orwell xinit[2012]: (II) [KMS] Kernel modesetting enabled.
    Mar 03 19:00:21 orwell xinit[2012]: ERROR: Unable to create application. Exiting
    Mar 03 19:00:21 orwell xinit[2012]: ERROR: Unable to create application. Exiting
    Mar 03 19:00:22 orwell xinit[2012]: ERROR: Unable to create application. Exiting
    Mar 03 19:00:22 orwell xinit[2012]: XBMC has exited uncleanly 3 times in the last 1 seconds.
    Mar 03 19:00:22 orwell xinit[2012]: Something is probably wrong
    Mar 03 19:00:22 orwell xinit[2012]: /usr/bin/xinit: connection to X server lost
    Mar 03 19:00:22 orwell xinit[2012]: waiting for X server to shut down (EE) Server terminated successfully (0). Closing log file.
    I've looked through https://bbs.archlinux.org/viewtopic.php?id=177770 and it seems to be a similar issue.
    Last edited by eyeemaye (2014-03-05 03:45:13)

    After today's update (version 12.3-11), I get a new error. For the record I've changed PAMName from 'login' to 'su' for debugging purposes.
    xbmc.service - Starts instance of XBMC using xinit
    Loaded: loaded (/usr/lib/systemd/system/xbmc.service; enabled)
    Active: inactive (dead) since Fri 2014-03-07 20:58:52 PST; 2min 33s ago
    Process: 3381 ExecStart=/usr/bin/xinit /usr/bin/dbus-launch /usr/bin/xbmc-standalone -l /run/lirc/lircd -- :0 -nolisten tcp vt7 (code=exited, status=0/SUCCESS)
    Main PID: 3381 (code=exited, status=0/SUCCESS)
    Mar 07 20:58:51 orwell pulseaudio[3443]: [pulseaudio] module.c: Failed to load module "module-device-restore" (argument: ""): initialization failed.
    Mar 07 20:58:51 orwell xinit[3381]: ERROR: Unable to create application. Exiting
    Mar 07 20:58:51 orwell pulseaudio[3469]: [pulseaudio] module-device-restore.c: Failed to open volume database '/var/lib/xbmc/.config/pulse/739ce619b49f4eb797280beb523b50fd-device-volumes': No space left on device
    Mar 07 20:58:51 orwell pulseaudio[3469]: [pulseaudio] module.c: Failed to load module "module-device-restore" (argument: ""): initialization failed.
    Mar 07 20:58:51 orwell xinit[3381]: ERROR: Unable to create application. Exiting
    Mar 07 20:58:51 orwell xinit[3381]: XBMC has exited uncleanly 3 times in the last 1 seconds.
    Mar 07 20:58:51 orwell xinit[3381]: Something is probably wrong
    Mar 07 20:58:51 orwell xinit[3381]: /usr/bin/xinit: connection to X server lost
    Mar 07 20:58:51 orwell xinit[3381]: waiting for X server to shut down (EE) Server terminated successfully (0). Closing log file.
    Mar 07 20:58:52 orwell systemd[3382]: pam_unix(su:session): session closed for user xbmc
    Last edited by eyeemaye (2014-03-08 05:19:34)
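    The second log has the concrete clue: pulseaudio cannot open its volume database under /var/lib/xbmc because there is "No space left on device". Before digging further into PAM or systemd, it's worth checking both free blocks and free inodes on that filesystem; a quick sketch:
    df -h /var/lib/xbmc                             # free space on the filesystem holding the xbmc home
    df -i /var/lib/xbmc                             # a full inode table gives the same error even with blocks free
    du -xsh /var/* 2>/dev/null | sort -h | tail     # biggest consumers, if /var is the full mount
    If the filesystem really is full, freeing space (old logs and package caches are common culprits) should let XBMC create its application data again.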

  • Unbricking without opening

    Hi all,
    NEWS: I have set up a GitHub repo for my experiments, so you can find all my information here: https://github.com/utessel/mycloud
    Most important: this posting is about how to unbrick a MyCloud without opening it. At least, some important steps to do this...
    Disclaimer:
    1.) It is not a complete walkthrough of how to do this.
    2.) I cannot (and will not) explain or help how to unbrick your device.
    3.) I hope someone else will clean this up, improve it and create something even more useful out of it. I just wrote down the steps I did (ok, with a little bit of cleanup already). If there is a wiki or something like that for this: feel free to use my information.
    4.) I will not provide any binaries I created for myself.
    My background: I bought a broken MyCloud (in fact to have a second one, just to be able to try this) and started to "unbrick" it, trying it the hard way: not connecting the disk to anything else. So in fact my MyCloud was open: this allowed me to use a serial console to watch what was happening. But that is finally not required at all.
    The final result works like this:
    1. You need a working DHCP server that sends a TFTP server address: I use dnsmasq for this. (don't ask me how to configure that! A minimal example appears at the end of this post.)
    2. Set up a tftp folder with two files, "startup.sh" and "uImage" (more about these files below).
    3. Execute (my) rawping command to send the magic ICMP packet (you find it in my other posting about the serial interface).
    4. Power on your MyCloud device (I expect you already have a working LAN connection).
    5. Wait a bit and you can connect via telnet to your device and start to repair it.
    So how does this work: after power-on the device starts the first and then the second barebox loader. The second barebox sees the "magic packet" and now tries to download startup.sh from the tftp server. Mine looks like this (startup.sh):
    echo Will boot via tftp
    timeout -c 2
    addpart /dev/mem 8M@0x3008000(uImage)
    tftp uImage /dev/mem.uImage
    bootargs="console=ttyS0,115200n8, "
    bootargs="$bootargs mac_addr=$eth0.ethaddr panic=3"
    bootm /dev/mem.uImage
    This will now load the kernel (=uImage) via tftp and start it. The kernel boots and will use the "initramfs" that I compiled into it. My "initramfs" contains just the network modules, a busybox installation and a few scripts: these scripts will start the network, make a DHCP request (again) and finally start telnetd. Now I can connect via telnet to my box.
    The whole procedure does not need anything on the disk at all. I expect it would work even without having a disk, but I didn't try that. I can now mount partitions, communicate via the network, whatever busybox allows. So it should be possible to download new images to the disk etc. Or simply repair the broken script you have changed.
    You now might ask "how can I create that uImage file?" For that I used two things:
    1) the kernel from the GPL package
    2) the busybox sources (I used busybox-1.22.1, which I already had from an openwrt installation).
    For busybox I used "defconfig", and changed it (via menuconfig) to build "static" (that is important). For the kernel I changed it (via menuconfig) to use initramfs and added the path to my initramfs.cpio.gz. (in fact I tried a lot of other changes, but finally was able to remove all of them for this...)
    Here is the script I used to compile busybox and the kernel and to generate the scripts in the initramfs. Disclaimer: I know I don't have all steps repeatable with this script. As I tried this several times, I know I copied the modules from the _bin folder (created during the kernel compile) to the _install folder of busybox once. Maybe I forgot other steps I did. But I hope someone else can fill this gap.
    export ARCH=arm
    export CROSS_COMPILE=arm-linux-gnueabi-
    cd busybox-1.22.1
    make -j8 install
    cd _install
    mkdir {bin,dev,sbin,etc,proc,sys,lib}
    mkdir lib/modules
    mkdir dev/pts
    mkdir -p usr/share/udhcpc
    echo "#!/bin/sh" >init
    echo "mount -t proc proc /proc" >>init
    echo "mount -t sysfs sysfs /sys" >>init
    echo "mount -t devpts none /dev/pts" >>init
    echo "echo /sbin/mdev >/proc/sys/kernel/hotplug" >>init
    echo "mdev -s" >>init
    echo "insmod /lib/modules/3.2.26/pfe.ko lro_mode=1 tx_qos=1 alloc_on_init=1" >>init
    echo "ifconfig eth0 up" >>init
    echo "udhcpc -b" >>init
    echo "telnetd -l/bin/sh" >>init
    echo "exec /sbin/init" >>init
    chmod +x init
    echo "T0:2345:respawn:/sbin/getty -L ttyS0 115200 vt100" >etc/inittab
    echo "ttyS0::askfirst:-/bin/sh" >etc/inittab
    echo "#!/bin/sh" >usr/share/udhcpc/default.script
    echo "case \"\$1\" in" >>usr/share/udhcpc/default.script
    echo " renew|bound)" >>usr/share/udhcpc/default.script
    echo " /sbin/ifconfig \$interface \$ip \$BROADCAST \$NETMASK" >>usr/share/udhcpc/default.script
    echo " ;;" >>usr/share/udhcpc/default.script
    echo "esac" >>usr/share/udhcpc/default.script
    echo "exit 0" >>usr/share/udhcpc/default.script
    chmod +x usr/share/udhcpc/default.script
    sudo mknod dev/null c 1 3
    sudo mknod dev/tty c 5 0
    sudo mknod dev/console c 5 1
    find . | cpio -H newc -o > ../../initramfs.cpio
    cd ../..
    cat initramfs.cpio | gzip > initramfs.cpio.gz
    cp initramfs.cpio.gz packages/kernel_3.2/
    cd packages/kernel_3.2
    make uImage
    cp _bld/arch/arm/boot/uImage /home/tftp/
    Oh: I used the compiler I got from the Ubuntu 14.04 installation (sudo apt-get install gcc-arm-linux-gnueabi), not the one WD uses.
    In fact I think I will now try to set up a clean Debian system on my device, as I don't need all the cloud things...
    Ciao, Baerle
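    For completeness, since the DHCP/TFTP side is left as an exercise above: a minimal dnsmasq configuration serving the two files could look like this (interface, address range and tftp-root are assumptions; adjust to your LAN):
    # /etc/dnsmasq.conf -- minimal DHCP + TFTP for the recovery boot
    interface=eth0
    dhcp-range=192.168.1.100,192.168.1.200,12h
    # probably redundant here, since the loader asks for startup.sh by name
    dhcp-boot=startup.sh
    enable-tftp
    # this directory must contain startup.sh and uImage
    tftp-root=/home/tftp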

    Hi all, thanks to baerle and fox_exe for the detailed procedure and support. I cannot mount /dev/sda4; I get 'No such file or directory'. In the dmesg boot log I've got these:
    [    9.394868] ata1.00: failed to IDENTIFY (INIT_DEV_PARAMS failed, err_mask=0x80)
    [   14.938401] ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
    [   14.944867] ata1.00: failed to IDENTIFY (INIT_DEV_PARAMS failed, err_mask=0x80)
    [   20.488788] ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
    [   20.495250] ata1.00: failed to IDENTIFY (INIT_DEV_PARAMS failed, err_mask=0x80)
    [   26.038382] ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
    [   26.044844] ata1.00: failed to IDENTIFY (INIT_DEV_PARAMS failed, err_mask=0x80)
    [   31.588296] ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
    [   31.594755] ata1.00: failed to IDENTIFY (INIT_DEV_PARAMS failed, err_mask=0x80)
    [   37.138177] ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
    [   37.144638] ata1.00: failed to IDENTIFY (INIT_DEV_PARAMS failed, err_mask=0x80)
    [   42.688068] ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
    [   42.694530] ata1.00: failed to IDENTIFY (INIT_DEV_PARAMS failed, err_mask=0x80)
    [   48.238308] ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
    [   48.244769] ata1.00: failed to IDENTIFY (INIT_DEV_PARAMS failed, err_mask=0x80)
    [   53.788145] ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
    [   53.794605] ata1.00: failed to IDENTIFY (INIT_DEV_PARAMS failed, err_mask=0x80)
    [   59.339924] ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
    [   59.346383] ata1.00: failed to IDENTIFY (INIT_DEV_PARAMS failed, err_mask=0x80)
    [   64.886828] ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
    [   64.893294] ata1.00: failed to IDENTIFY (INIT_DEV_PARAMS failed, err_mask=0x80)
    [   70.441961] ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
    [   70.448420] ata1.00: failed to IDENTIFY (INIT_DEV_PARAMS failed, err_mask=0x80)
    [   75.980575] ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
    [   75.987035] ata1.00: failed to IDENTIFY (INIT_DEV_PARAMS failed, err_mask=0x80)
    So I've tried to open the WDMC (according to WD support I can open it to recover data), but when I connect it the disk is not recognized:
    diskutil info /dev/disk1
       Device Identifier:        disk1
       Device Node:              /dev/disk1
       Part of Whole:            disk1
       Device / Media Name:      WDC WD30 EFRX-68EUZN0 Media
       Volume Name:              Not applicable (no file system)
       Mounted:                  Not applicable (no file system)
       File System:              None
       Content (IOContent):      None
       OS Can Be Installed:      No
       Media Type:               Generic
       Protocol:                 USB
       SMART Status:             Not Supported
       Total Size:               0 B (0 Bytes) (exactly 0 512-Byte-Units)
       Volume Free Space:        Not applicable (no file system)
       Device Block Size:        512 Bytes
       Read-Only Media:          No
       Read-Only Volume:         Not applicable (no file system)
       Ejectable:                Yes
       Whole:                    Yes
       Internal:                 No
       OS 9 Drivers:             No
       Low Level Format:         Not supported
    /dev/disk1
       #:                       TYPE NAME                    SIZE       IDENTIFIER
       0:                                                   *0 B        disk1
    I've also tried to use diskutil diskRepair, but I got this error:
    Error repairing map: POSIX reports: Invalid argument (22)
    And gpt recover /dev/disk1 gives:
    gpt recover: unable to open device '/dev/disk1': Operation not supported by device
    Do you know if I can:
    1) access and recover my data, or
    2) alternatively initialize the disk from scratch?
    Any support is appreciated. Thank you!!

  • Superfluous files (like lv80rte) included with installer [LV 8.2]. Why?

    I was looking through all the subdirectories created under Volume\bin by the installer in LV 8.2 and was wondering why some of them are there.
    There are files everywhere that seem to deal with many languages even though I told the installer I only wanted English.
    There's a directory that includes files such as PXIPlatformHardwareSupport64.msi and PXIPlatformServices.msi. I'm not using any PXI hardware.
    There are directories LVBrokerAux71 and LVBrokerAux800. What are they?
    There are LV711RTE and LV80RTE dirs, which deal with the LV 7.1.1 and LV 8.0 run-time engines. Why are they needed?
    George

    GS wrote:
    Why can't everything be included in the latest runtime engine?
    Probably because that would require creating a new build (i.e. a new version of MAX) and validating it, and everything that goes along with that, which is quite a lot of work. It's much easier to just take the extra X megabytes, since space is mostly cheap.
    Try to take over the world!
