I5 27" too heavy for its mount?

I notice that my 27" i5 steadily creeps down to a perfectly vertical position over the course of a day or two. I'd like to keep it turned slightly upward, like I did with my 24", but so far, it doesn't want to stay. Anyone else experiencing this?

Hi,
That shouldn't happen. Even if you didn't purchase AppleCare, contact Apple support anyway.
http://www.apple.com/support/products/
Carolyn

Similar Messages

  • Def Tech 1000 too heavy for mount

    I just got around to hanging my 2 front speakers (Definitive Technology 1000s) and it seems as though they are too heavy for the mounts I bought (Def Tech Pro Mount 80). The speakers fall forward. Has anyone else run into this problem? Any suggestions? Thanks very much.

    Hey Prince,
    I double-checked with Definitive Technology’s website, but it doesn’t appear that the ProMount 80 was designed to support ProMonitor 1000 speakers. In fact, the only accessory that was listed as being compatible was the ProStand 100/200/1000. For additional information though you may want to give Definitive Technology a call at 1-(800)-228-7148. A support rep should be able to provide you some basic troubleshooting support, as well as confirm whether or not you purchased the correct wall mount.
    Hope this helps you out.
Aaron | Social Media Specialist | Best Buy® Corporate

HT4972 I have an iPhone 4 with iOS 5.1.1 and I want to upgrade it to iOS 6, not iOS 7, because iOS 7 is too heavy for the iPhone 4. How can I do it?

I have an iPhone 4 with iOS 5.1.1 and I want to upgrade it to iOS 6, not iOS 7, because iOS 7 is too heavy for the iPhone 4. How can I do it?

    You cannot. When you upgrade, you will get the latest version of iOS 7.

ORA-00910 specified length too long for its datatype with Usage Tracking.

    Hello Everyone,
I'm getting an "ORA-00910: specified length too long for its datatype" error (a sample is provided below) when viewing the "Long-Running Queries" report from the default Usage Tracking dashboard. I've isolated the problem to the logical column "Logical SQL", corresponding to the physical column "QUERY_TEXT" in the table S_NQ_ACCT. Everything else is working correctly. The logical column "Logical SQL" is configured as a VARCHAR of length 1024, and the physical column "QUERY_TEXT" is configured as a VARCHAR2 of length 1024 bytes in an Oracle 11g database. Both are the default configurations and were not changed.
In the table S_NQ_ACCT we do have entries in the field "QUERY_TEXT" that are 1024 characters long. I've tried various configurations, such as increasing the number of bytes or removing any special characters, but without any success. Currently, my only workaround is reducing the "QUERY_TEXT" data entries to roughly 700 characters; this makes the error go away. One additional point: my character set is WE8ISO8859P15.
    - Any suggestions?
    - Has anyone else ever had this problem?
- Is this potentially an issue with the ODBC driver? If so, why would ODBC not truncate the field length?
    - What is the maximum length supported by BI, ODBC?
Thanks in advance for everyone's help.
    Regards,
    FBELL
    *******************************Error Message**************************************************
    View Display Error
    Odbc driver returned an error (SQLExecDirectW).
    Error Details
    Error Codes: OPR4ONWY:U9IM8TAC:OI2DL65P
    State: HY000. Code: 10058. [NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error has occurred. [nQSError: 17001] Oracle Error code: 910, message: ORA-00910: specified length too long for its datatype at OCI call OCIStmtExecute: select distinct T38187.QUERY_TEXT as c1 from S_NQ_ACCT T38187 order by c1. [nQSError: 17011] SQL statement execution failed. (HY000)
    SQL Issued: SELECT Topic."Logical SQL" saw_0 FROM "Usage Tracking" ORDER BY saw_0
    *******************************************************************************************
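One way to narrow this down: the failing statement appears verbatim in the error log above, so it can be run directly in SQL*Plus to rule the ODBC layer in or out, and the data dictionary can show whether QUERY_TEXT uses byte or character length semantics. A diagnostic sketch (table and column names taken from the post; nothing else assumed):
-- Run the exact statement from the nQSError log; if it fails here too,
-- the problem is in the database rather than in the BI/ODBC layer.
select distinct T38187.QUERY_TEXT as c1 from S_NQ_ACCT T38187 order by c1;
-- CHAR_USED is 'B' for byte semantics, 'C' for character semantics;
-- a 1024-byte limit can be hit even when the character count looks fine.
select column_name, data_length, char_length, char_used
from   user_tab_columns
where  table_name = 'S_NQ_ACCT'
and    column_name = 'QUERY_TEXT';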

I believe I have found the issue for at least one report.
    We have views in our production environment that call materialized views on another database via db link. They are generated nightly to reduce load for day-old reporting purposes on the Production server.
    I have found that the report in question uses a view with PRODUCT_DESCRIPTION. In the remote database, this is a VARCHAR2(1995 Bytes) column. However, when we create a view in our Production environment that simply calls this materialized view, it moves the length to VARCHAR2(4000).
    The oddest thing is that the longest string stored in the MV for that column is 71 characters long.
I may be missing something here, but the view that Discoverer created on the APPS side also has a length of VARCHAR2(4000) for the PRODUCT_DESCRIPTION column, and running the report manually returns results shorter than that - is this a possible bug?
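A hedged way to verify the suspicion above: compare the declared length and semantics of PRODUCT_DESCRIPTION on both sides of the db link, since character-set conversion between databases can widen a remote VARCHAR2(1995 BYTE) toward the 4000-byte cap when the local view is created. A sketch only; the db link name below is hypothetical:
-- Local side: the view/MV created in the Production environment.
select table_name, column_name, data_length, char_length, char_used
from   all_tab_columns
where  column_name = 'PRODUCT_DESCRIPTION';
-- Remote side, queried through the (hypothetical) db link remote_db.
select table_name, column_name, data_length, char_length, char_used
from   all_tab_columns@remote_db
where  column_name = 'PRODUCT_DESCRIPTION';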

  • OCZ Goliath 2 SE too heavy for m/b?

    Hi all,
    has anybody had any problems with this heatsink?
This is the heaviest heatsink I've ever held.
Are the stories about heatsinks ripping out CPUs true?
This is going to be coupled with a Delta 80mm fan on a K7T Pro2-A board.
    tia!
    stevensly

    TRIM Enabler support thread:
    http://forums.macrumors.com/showthread.php&t=1125400
    http://www.groths.org/?p=308
    I don't know that model but I would update the firmware.
I'd also let the SF controller do its thing and not use TE unless you feel it is safe and you are on the current version; that may not always have been true. SandForce background GC should be enough. And wipe/reinstall if not.
As for sleep etc., hibernation is a laptop feature. And even PC BIOSes have had to be updated (two PCs, Intel and Gigabyte), and until then it was an issue at times.
    OWC uses SF and does not recommend TRIM for OS X.
    So, what I would say is this:
When your computer has a hard freeze, it needs you to act/react accordingly:
Safe Boot at minimum, to clean up and attempt to repair its directory.
Not sure why or what happened, but that can happen to almost anyone and may not even be SSD related.
I'd keep the drive, check back with OCZ for firmware, and talk to others on MacRumors and Groths.
Note: I reformat drives regularly, a couple of times a year, and replace the system drive as well; it is the amount of writes and I/O your SSD has to handle that matters, not years. Even DVDs don't last two decades.

I updated to iTunes 11 on my Windows PC. Since the update, when I open iTunes its window is too large for my screen. Resizing is a pain, because I have to change my screen resolution. And when I restart iTunes, it has reverted back to being too large. Help!

    I recently updated to iTunes 11 on my Windows PC. Since the update, each time I open iTunes its window is too large for my screen. Resizing it is a pain, because I have to change my screen resolution. And then when I restart iTunes, it has reverted back to being too large. Please help!

Having the same issue here... a quick fix is to just minimize the window, and immediately maximize it. It will now fit the screen. Trouble is, after you close it, you'll have to do it again when you restart iTunes. The good news is it takes all of about 2 seconds. Apple should fix this. Should.

  • Alter mount database failing: Intel SVR4 UNIX Error: 79: Value too large for defined data type

    Hi there,
I am having a weird issue with my Oracle enterprise DB, which had been working perfectly since 2009. After some trouble with my network switch (the switch was replaced), the whole network came back and all subnet devices are functioning perfectly.
This is an NFS setup for Oracle DB backup, and Oracle is not starting at the mount/alter stage.
    Here the details of my server:
    - SunOS 5.10 Generic_141445-09 i86pc i386 i86pc
    - Oracle Database 10g Enterprise Edition Release 10.2.0.2.0
    - 38TB disk space (plenty free)
    - 4GB RAM
And when I attempt to start the DB, here are the logs:
    Starting up ORACLE RDBMS Version: 10.2.0.2.0.
    System parameters with non-default values:
      processes                = 150
      shared_pool_size         = 209715200
      control_files            = /opt/oracle/oradata/CATL/control01.ctl, /opt/oracle/oradata/CATL/control02.ctl, /opt/oracle/oradata/CATL/control03.ctl
      db_cache_size            = 104857600
      compatible               = 10.2.0
      log_archive_dest         = /opt/oracle/oradata/CATL/archive
      log_buffer               = 2867200
      db_files                 = 80
      db_file_multiblock_read_count= 32
      undo_management          = AUTO
      global_names             = TRUE
      instance_name            = CATL
      parallel_max_servers     = 5
      background_dump_dest     = /opt/oracle/admin/CATL/bdump
      user_dump_dest           = /opt/oracle/admin/CATL/udump
      max_dump_file_size       = 10240
      core_dump_dest           = /opt/oracle/admin/CATL/cdump
      db_name                  = CATL
      open_cursors             = 300
    PMON started with pid=2, OS id=10751
    PSP0 started with pid=3, OS id=10753
    MMAN started with pid=4, OS id=10755
    DBW0 started with pid=5, OS id=10757
    LGWR started with pid=6, OS id=10759
    CKPT started with pid=7, OS id=10761
    SMON started with pid=8, OS id=10763
    RECO started with pid=9, OS id=10765
    MMON started with pid=10, OS id=10767
    MMNL started with pid=11, OS id=10769
    Thu Nov 28 05:49:02 2013
    ALTER DATABASE   MOUNT
    Thu Nov 28 05:49:02 2013
    ORA-00202: control file: '/opt/oracle/oradata/CATL/control01.ctl'
    ORA-27037: unable to obtain file status
    Intel SVR4 UNIX Error: 79: Value too large for defined data type
    Additional information: 45
Trying to start the DB without mounting, it starts without issues:
    SQL> startup nomount
    ORACLE instance started.
    Total System Global Area  343932928 bytes
    Fixed Size                  1280132 bytes
    Variable Size             234882940 bytes
    Database Buffers          104857600 bytes
    Redo Buffers                2912256 bytes
    SQL>
But when I try to mount or alter the DB:
    SQL> alter database mount;
    alter database mount
    ERROR at line 1:
    ORA-00205: error in identifying control file, check alert log for more info
    SQL>
    From the logs again:
    alter database mount
    Thu Nov 28 06:00:20 2013
    ORA-00202: control file: '/opt/oracle/oradata/CATL/control01.ctl'
    ORA-27037: unable to obtain file status
    Intel SVR4 UNIX Error: 79: Value too large for defined data type
    Additional information: 45
    Thu Nov 28 06:00:20 2013
    ORA-205 signalled during: alter database mount
We have already checked everywhere in the system and engaged Oracle Support as well, without success. The control files are in place and were checked with strings; they are correct.
Can somebody give me a clue, please?
Maybe somebody has had a similar issue here...
    Thanks in advance.

I did the touch to update the date, but no joy either...
Here are further logs, in case they give a clue:
    Wed Nov 20 05:58:27 2013
    Errors in file /opt/oracle/admin/CATL/bdump/catl_j000_7304.trc:
    ORA-12012: error on auto execute of job 5324
    ORA-27468: "SYS.PURGE_LOG" is locked by another process
    Sun Nov 24 20:13:40 2013
    Starting ORACLE instance (normal)
    control_files = /opt/oracle/oradata/CATL/control01.ctl, /opt/oracle/oradata/CATL/control02.ctl, /opt/oracle/oradata/CATL/control03.ctl
    Sun Nov 24 20:15:42 2013
    alter database mount
    Sun Nov 24 20:15:42 2013
    ORA-00202: control file: '/opt/oracle/oradata/CATL/control01.ctl'
    ORA-27037: unable to obtain file status
    Intel SVR4 UNIX Error: 79: Value too large for defined data type
    Additional information: 45
    Sun Nov 24 20:15:42 2013
    ORA-205 signalled during: alter database mount

  • "This backup is too large for the backup volume" - Info

Hi there. I had a problem with my Time Machine and got an error stating "This backup is too large for the backup volume". I did notice after logging in that TM was indexing in the upper right corner (the magnifier with a flashing dot, i.e. Spotlight) for a few seconds. So then I went to "Back Up Now"; it was preparing, and then I got the error message described above. Here is what I did:
1. I uninstalled my anti-virus. (If you keep an anti-virus app, you must disable auto-protection or exclude TimeMachine.app and its .plist file, location below, from the anti-virus preferences; otherwise it will take forever to back up.) That was not my issue, though.
2. I turned off Time Machine and deleted the .plist file at Macintosh HD > Library > Preferences > com.apple.TimeMachine.plist. STOP here if this fixed your problem after restarting. By deleting com.apple.TimeMachine.plist, when you plug in your Time Machine drive it will automatically ask whether you want to use the drive as a TM backup. This did the trick.
3. Erasing the Time Machine external drive in Disk Utility. THESE STEPS WILL ERASE YOUR ENTIRE BACKUPS. "Erase" and rename, or "Partition" to make more than one partition on the external drive if you wish, and rename. Under Disk Utility > Partition tab > Options you must choose GUID for Intel Macs or Apple Partition Map for PowerPC (sorry, a lot of newbies out there).
4. I also used the app Cocktail, which has a feature to erase my computer's Spotlight index and rebuild it. In Cocktail, with your Time Machine drive plugged in, you can also erase its index and disable indexing on it altogether. I recommend you first disable Spotlight on the drive (before the first initial TM backup) in System Preferences > Spotlight > Privacy tab: click the plus (+) to add the Time Machine drive, which has to be mounted (plugged in) to be added from the window under "Devices".

    http://www.macfixit.com/article.php?story=20090403093528353

  • "Backup is too large for the backup volume" error

    I've been backing up with TM for a while now, and finally it seems as though the hard drive is full, since I'm down to 4.2GB available of 114.4GB.
    Whenever TM tries to do a backup, it gives me the error "This backup is too large for the backup volume. The backup requires 10.8 GB but only 4.2GB are available. To select a larger volume, or make the backup smaller by excluding files, open System Preferences and choose Time Machine."
I understand that I have those two options, but why can't TM just erase the oldest backup and use that free space to make the new backup? I know a 120GB drive is pretty small, but if I have to just keep accumulating backups infinitely, I'm afraid I'll end up with 10 years of backups and an 890-zettabyte drive taking up my garage. I'm hoping there's a more practical solution.

    John,
    Please review the following article as it might explain what you are encountering.
    *_“This Backup is Too Large for the Backup Volume”_*
    First, much depends on the size of your Mac’s internal hard disk, the quantity of data it contains, and the size of the hard disk designated for Time Machine backups. It is recommended that any hard disk designated for Time Machine backups be +at least+ twice as large as the hard disk it is backing up from. You see, the more space it has to grow, the greater the history it can preserve.
    *Disk Management*
    Time Machine is designed to use the space it is given as economically as possible. When backups reach the limit of expansion, Time Machine will begin to delete old backups to make way for newer data. The less space you provide for backups the sooner older data will be discarded. [http://docs.info.apple.com/article.html?path=Mac/10.5/en/15137.html]
    However, Time Machine will only delete what it considers “expired”. Within the Console Logs this process is referred to as “thinning”. It appears that many of these “expired” backups are deleted when hourly backups are consolidated into daily backups and daily backups are consolidated into weekly backups. This consolidation takes place once hourly backups reach 24 hours old and daily backups reach about 30 days old. Weekly backups will only be deleted, or ‘thinned’, once the backup drive nears full capacity.
One thing seems for sure, though: if a new incremental backup happens to be larger than what Time Machine currently considers “expired”, then you will get the message “This backup is too large for the backup volume.” In other words, Time Machine believes it would have to sacrifice too much to accommodate the latest incremental backup. This is probably why Time Machine always overestimates incremental backups by 2 to 10 times the actual size of the data currently being backed up. Within the Console logs this is referred to as “padding”. This is so that backup files never actually reach the physical limits of the backup disk itself.
    *Recovering Backup Space*
If you have discovered that large unwanted files have been backed up, you can use the Time Machine “time travel” interface to recover some of that space. Do NOT, however, delete files from a Time Machine backup disk by manually mounting the disk and dragging files to the trash. You can damage or destroy your original backups by this means.
Additionally, deleting files you no longer wish to keep on your Mac does not immediately remove such files from Time Machine backups. Once data has been removed from your Mac's hard disk it will remain in backups for some time until Time Machine determines that it has "expired". That's one of its benefits - it retains data you may have unintentionally deleted. But eventually that data is expunged. If, however, you need to remove backed-up files immediately, do this:
    Launch Time Machine from the Dock icon.
    Initially, you are presented with a window labeled “Today (Now)”. This window represents the state of your Mac as it exists now. +DO NOT+ delete or make changes to files while you see “Today (Now)” at the bottom of the screen. Otherwise, you will be deleting files that exist "today" - not yesterday or last week.
    Click on the window just behind “Today (Now)”. This represents the last successful backup and should display the date and time of this backup at the bottom of the screen.
    Now, navigate to where the unwanted file resides. If it has been some time since you deleted the file from your Mac, you may need to go farther back in time to see the unwanted file. In that case, use the time scale on the right to choose a date prior to when you actually deleted the file from your Mac.
    Highlight the file and click the Actions menu (Gear icon) from the toolbar.
    Select “Delete all backups of <this file>”.
    *Full Backup After Restore*
If you are running out of disk space sooner than expected, it may be that Time Machine is ignoring previous backups and is trying to perform another full backup of your system. This will happen if you have reinstalled the system software (Mac OS), replaced your computer with a new one, or had significant repair work done on your existing Mac. Time Machine will perform a new full backup. This is normal. [http://support.apple.com/kb/TS1338]
    You have several options if Time Machine is unable to perform the new full backup:
A. Delete the old backups, and let Time Machine begin afresh.
    B. Attach another external hard disk and begin backups there, while keeping this current hard disk. After you are satisfied with the new backup set, you can later reformat the old hard disk and use it for other storage.
    C. Ctrl-Click the Time Machine Dock icon and select "Browse Other Time Machine disks...". Then select the old backup set. Navigate to files/folders you don't really need backups of and go up to the Action menu ("Gear" icon) and select "Delete all backups of this file." If you delete enough useless stuff, you may be able to free up enough space for the new backup to take place. However, this method is not assured as it may not free up enough "contiguous space" for the new backup to take place.
    *Outgrown Your Backup Disk?*
On the other hand, your computer's drive contents may very well have outgrown the capacity of the Time Machine backup disk. It may be time to purchase a larger-capacity hard drive for Time Machine backups. Alternatively, you can begin using the Time Machine preferences exclusion list to prevent Time Machine from backing up unneeded files/folders.
Consider as well: do you really need ALL that data on your primary hard disk? It sounds like you might need to archive anything that is not of immediate importance to a different hard disk. You see, Time Machine is not designed for archiving purposes, just as a backup of your local drive(s). In the event of disaster, it can get your system back to its current state without having to reinstall everything. But if you need LONG-TERM storage, then you need another drive that is removed from your normal everyday working environment.
    This KB article discusses this scenario with some suggestions including Archiving the old backups and starting fresh [http://docs.info.apple.com/article.html?path=Mac/10.5/en/15137.html]
    Let us know if this clarifies things.
    Cheers!

  • Cannot decrypt RSA encrypted text : due to : input too large for RSA cipher

    Hi,
I am in a fix trying to decrypt this RSA encrypted String... please help.
    I have the encrypted text as a String.
This is what I do to decrypt it using the private key:
- Determine the block size of the Cipher object
- Get the array of bytes from the String
- Find out how many block-sized partitions I have in the array
- Decrypt the exact block-sized partitions using the update() method
- Ok, now it's easy to find out how many bytes remain (using the % operator)
- If the remaining bytes are 0, then simply call doFinal(),
i.e. the one which returns an array of bytes and takes no args
- If the remaining bytes are not zero, then call the
doFinal(byte[] input, int offset, int inputLen) method for the
bytes which actually remained
However, this doesn't work. This is making me go really crazy.
Can anyone point out what's wrong? Please.
Here is the (childish) code:
Cipher rsaDecipher = null;
//The initialization stuff for rsaDecipher
//The rsaDecipher Cipher is using 256 bit keys
//I haven't specified anything regarding padding
//And, I am using BouncyCastle
String encryptedString;
// read in the string from the network
// this string is encrypted using an RSA public key generated earlier
// I have to decrypt this string using the corresponding private key
byte [] input = encryptedString.getBytes();
int blockSize = rsaDecipher.getBlockSize();
int outputSize = rsaDecipher.getOutputSize(blockSize);
byte [] output = new byte[outputSize];
int numBlockSizedPartitions = input.length / blockSize;
int numRemainingBytes = input.length % blockSize;
boolean hasRemainingBytes = false;
if (numRemainingBytes > 0)
  hasRemainingBytes = true;
int offset = 0;
int inputLen = blockSize;
StringBuffer buf = new StringBuffer();
// decrypt each full block with update(), accumulating the plaintext
for (int i = 0; i < numBlockSizedPartitions; i++) {
  output = rsaDecipher.update(input, offset, blockSize);
  offset += blockSize;
  buf.append(new String(output));
}
if (hasRemainingBytes) {
  //This is exactly where I get the "input too large for RSA cipher",
  //suffixed with ArrayIndexOutOfBounds
  output = rsaDecipher.doFinal(input, offset, numRemainingBytes);
} else {
  output = rsaDecipher.doFinal();
}
buf.append(new String(output));
//After having reached this point, would it be wrong to assume that I
//have the properly decrypted string?

    Hi,
"I am in a fix trying to decrypt this RSA encrypted String... please help."
You're already broken at this point.
    Repeat after me: ciphertext CANNOT be safely represented as a String. Strings have internal structure - if you hand ciphertext to the new String(byte[]) constructor, it will eat your ciphertext and leave you with garbage. Said garbage will fail to decrypt in a variety of puzzling fashions.
    If you want to transmit ciphertext as a String, you need to use something like Base64 to encode the raw bytes. Then, on the receiving side, you must Base64-DEcode back into bytes, and then decrypt the resulting byte[].
    Second - using RSA as a general-purpose cipher is a bad idea. Don't do that. It's slow (on the order of 100x slower than the slowest symmetric cipher). It has a HUGE block size (governed by the keysize). And it's subject to attack if used as a stream-cipher (IIRC - I can no longer find the reference for that, so take it with a grain of salt...) Standard practice is to use RSA only to encrypt a generated key for some symmetric algorithm (like, say, AES), and use that key as a session-key.
At any rate - the code you posted is broken before you get to this line:
byte [] input = encryptedString.getBytes();
Go back to the encrypting end and make it stop treating your ciphertext as a String.
    Grant

  • Project too new for final cut 6.0.2 error message

My colleague at work has edited a rough cut on her Quad Core Intel computer. I'm using a Power Mac G5 and was able to download the project onto my desktop, but I get this message every time I try to open it: "This project is unreadable or may be too new for this version of Final Cut." We're both using FCP 6.0.2, and I've done all the software updates I can, but I still receive the message. Does anyone know what I can do?

    Did the person use the same workstation you are using? If you have the other person's volume mounted on your desktop, and you are definitely running the same OS, and you have the same version of FCP, then try opening the original file, not a copy. If it opens, do a *"save as"* instead of using a duplicated copy of the project.
Also, if they are using a different workstation, they may have another 3rd-party component installed that they used (maybe a filter or codec) that would interfere with it opening. The best way to make sure FibreJet works seamlessly in a multi-station work environment is to make absolutely sure that every station is a clone of the others. If you do an update on one, you have to do it on all. If you add an application, make sure you add it to all stations.
    If you are using a different workstation, try to open it with the other station, then mount your desktop to that station through the LAN, and do a *"save as"* to your desktop, then try opening it from there.
Of course, do all this after trying to zip the project before copying it. It only takes a second and would eliminate one aspect of the possible problem.

  • SQL Error: ORA-12899: value too large for column

    Hi,
I'm trying to understand the above error. It occurs when we are migrating data from one Oracle database to another:
    Error report:
    SQL Error: ORA-12899: value too large for column "USER_XYZ"."TAB_XYZ"."COL_XYZ" (actual: 10, maximum: 8)
    12899. 00000 - "value too large for column %s (actual: %s, maximum: %s)"
    *Cause:    An attempt was made to insert or update a column with a value
    which is too wide for the width of the destination column.
    The name of the column is given, along with the actual width
    of the value, and the maximum allowed width of the column.
    Note that widths are reported in characters if character length
    semantics are in effect for the column, otherwise widths are
    reported in bytes.
    *Action:   Examine the SQL statement for correctness.  Check source
    and destination column data types.
    Either make the destination column wider, or use a subset
    of the source column (i.e. use substring).
    The source database runs - Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
    The target database runs - Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
    The source and target table are identical and the column definitions are exactly the same. The column we get the error on is of CHAR(8). To migrate the data we use either a dblink or oracle datapump, both result in the same error. The data in the column is a fixed length string of 8 characters.
    To resolve the error the column "COL_XYZ" gets widened by:
    alter table TAB_XYZ modify (COL_XYZ varchar2(10));
    -alter table TAB_XYZ succeeded.
    We now move the data from the source into the target table without problem and then run:
    select max(length(COL_XYZ)) from TAB_XYZ;
    -8
    So the maximal string length for this column is 8 characters. To reduce the column width back to its original 8, we then run:
    alter table TAB_XYZ modify (COL_XYZ varchar2(8));
    -Error report:
    SQL Error: ORA-01441: cannot decrease column length because some value is too big
    01441. 00000 - "cannot decrease column length because some value is too big"
    *Cause:   
    *Action:
So we leave the column width at 10, but the curious thing is - once we have the data in the target table, we can then truncate the same table at source (i.e. get rid of all the data) and move the data back into the original table (with COL_XYZ set at CHAR(8)) - without any issue.
My guess is the error has something to do with the storage on the target database, but I would like to understand why. If anybody has an idea or suggestion what to look for - much appreciated.
    Cheers.
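One check that may shed light before any DDL changes: compare character length with byte length on the source data, since ORA-12899 reports whichever unit the column's length semantics use. A sketch using the (placeholder) names from the error report:
-- LENGTH counts characters, LENGTHB counts bytes; rows where the two differ
-- contain multibyte characters that can overflow a byte-sized limit of 8.
select COL_XYZ, length(COL_XYZ) as char_len, lengthb(COL_XYZ) as byte_len
from   TAB_XYZ
where  lengthb(COL_XYZ) > 8;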

    843217 wrote:
"Note that widths are reported in characters if character length semantics are in effect for the column, otherwise widths are reported in bytes."
You are looking at character lengths vs byte lengths.
"The data in the column is a fixed length string of 8 characters.
select max(length(COL_XYZ)) from TAB_XYZ;
-8
So the maximal string length for this column is 8 characters. To reduce the column width back to its original 8, we then run:
alter table TAB_XYZ modify (COL_XYZ varchar2(8));"
varchar2(8 byte) or varchar2(8 char)?
    Use SQL Reference for datatype specification, length function, etc.
    For more info, reference {forum:id=50} forum on the topic. And of course, the Globalization support guide.
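If the check above does show multibyte data, one possible fix (a sketch, not a definitive answer to the thread) is to declare the column with explicit character semantics instead of widening it:
-- 8 characters regardless of how many bytes each character occupies
-- in the database character set.
alter table TAB_XYZ modify (COL_XYZ varchar2(8 char));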

I have my iTunes library on an external HD because it is too big for my iMac 500GB HD. I want to backup my external HD. Can I do this with any Apple cloud product or should I look elsewhere? Also is it worth just buying another external HD to back up

I have my iTunes library on an external HD because it is too big for my iMac's 500GB HD. However, it is not backed up, so I need a solution to back up my external HD. Can I do this with any Apple cloud product, or should I look elsewhere for cloud products? Will it be cheaper/easier just to buy another external HD to back up my existing external HD? Thanks

"I don't know if this is me adding files to iTunes when the external wasn't connected."
It is.
"Is it OK to just keep deleting that library on the Air?"
I wouldn't - at least not until I:
- mount the external
- point the iTunes media folder location back to the external via Preferences > Advanced
- consolidate my library via File > Library > Organize Library
"The NTFS hasn't seemed to be causing any problems, but I've always wanted to know."
In order for your Mac to write to NTFS drives, it needs some help, e.g. by installing the NTFS-3G driver. Apparently that or something similar is already installed on your Mac. Preferably, the drive would be formatted for Mac, but then Windows machines would need e.g. MacDrive installed to recognize the drive.

  • HT1229 My iPhoto Library (version 8.1.2) is 280GB (greater than 50% of my 500GB total storage memory on my iMac.  It was too large for me to drag it to a new hard drive so the Apple geniuses did it for me.  However they did not delete the Library from my

My iPhoto Library (version 8.1.2) is 280GB (greater than 50% of the 500GB total storage on my iMac). It was too large for me to drag it to a new hard drive, so the Apple geniuses did it for me. However, they did not delete the library from my iMac (that's my responsibility). I dragged it to the Trash, and when it started to move I clicked over to the new hard drive to confirm it had indeed been copied. I became nervous when, among the few files on this otherwise empty new hard drive, I didn't see anything that resembled a 280GB library, so I cancelled the move to the Trash.
How can I be sure that my iPhoto library has been copied and that all my "metadata" survived intact?

"the new backup drive"
I thought the new drive would be your data drive to host the iPhoto library. Do you also use it for Time Machine backups?
"I am unable to search either in email or in Finder. I AM able to search within iPhoto though, thankfully."
Spotlight may still be busy rebuilding its index.
You could try to rebuild the Spotlight index if you see no progress:
Spotlight: How to re-index folders or volumes
I hope other frequent posters will drop in. I have not used iPhoto 8.x in a long time.

  • HT204370 I recently purchased a movie at 1080p and when I went to view it, I received a message stating 1080p is too large for playback.  How can I re-download the movie at 720?

    I recently purchased a movie at 1080p and when I went to view it, I received a message stating 1080p is too large for playback.  How can I re-download the movie at 720?

Not sure what happened there. But the ATV only streams, no HD, so if you leave it too long, the whole download gets lost.
Check with iTS:
    iTunes Store Support
    http://www.apple.com/emea/support/itunes/contact.html
